Introduction: Automatic Car Parking System

I am very excited to create an IoT project with the Arduino. Today I am going to teach you how to make an awesome remote car parking system. Move on to the next step to find out more!

Step 1: Introduction

Hey guys, in this tutorial we are going to build an Automatic Car Parking System. We will be learning a lot of stuff. We will learn to:
•Code in PHP and SQL.
•Sync C code with PHP.
•Create a professional website.
•Make a real-world solution.
•Prototype with Arduino.
Now we will proceed to the next step and take a look at the parts required to build this project successfully.

Step 2: Parts

Here is the list of parts that you will need. I will also provide a link for each part; the links cover their functionality and a store where you can buy them (for example eBay, Amazon, etc.).
1. Ultrasonic sensors (x6) :......
2. Arduino Mega 2560 (x1) :......
3. Wires (+30)(any type) :-...
4. Arduino Ethernet Shield :-......
5. Breadboard (x1) :-...-...
6. Ethernet Cable (x1) :...

Step 3: Connections

In this step we will be looking at the connections for this project. The connections are quite easy; just follow the instructions given below.
◘ Connect the Vcc pin to the positive rail on your breadboard.
◘ Connect the Gnd pin to the negative rail on your breadboard.
◘ Connect the Trig pin to any digital pin on the Arduino.
◘ Connect the Echo pin to any digital pin on the Arduino.
◘ Finally, connect the positive rail of the breadboard to the 5V pin on the Arduino and the negative rail of the breadboard to the Gnd pin on the Arduino.
We have now connected all the pins. Try the sample code out to check that it works. I have also added some useful comments in the file attached below, to make the concepts easier to understand. Now, having completed the connections, let us proceed to the software part.
#define trigPin 26 //the pin you connected Trig to
#define echoPin 27 //the pin you connected Echo to

void setup() {
  Serial.begin(9600);
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
}

void loop() {
  // put your main code here, to run repeatedly:
  long duration, distance;
  digitalWrite(trigPin, LOW);        //make sure the trigger starts low
  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH);       //send the sound wave
  delayMicroseconds(10);             //a 10 microsecond trigger pulse
  digitalWrite(trigPin, LOW);        //stop sending the sound wave
  duration = pulseIn(echoPin, HIGH); //time until the echo comes back
  distance = (duration/2) / 29.1;    //a simple equation to get the distance in centimetres
  //Printing it to the console.
  if (distance > 200) {
    Serial.println("Can't read");
  } else {
    Serial.println(distance);
  }
}

Step 4: Software

For building this project successfully you must have the following software installed on your computer. A download link will be provided.
•Arduino :
•Xampp : (this is the web server application)
•Sublime Text Editor :
•You will also need an Amazon IP address if you want to access it across the globe.
The Arduino software is used to program the Arduino. Sublime Text is optional, since you can write the code in Notepad itself, but I prefer Sublime; it is a great editor to write code in. Xampp is a simple web server solution provided by Apache. More information on Xampp can be found on this website :
We will be using the MariaDB database and the PHP interpreter available in Xampp.

Step 5: A Few Explanations

Before we move on to the major meat of the whole project, I need to make sure that you understand what this project actually does. First we will capture the data from the ultrasonic sensor, which you should have tried out with the sample code provided in STEP 3. Then we will push that data to a webpage, which is quite straightforward.
And then we need to get the data with PHP and push it into the MariaDB database. After that, we need to read the data back from the database and use visual representations so that the user can easily interpret the application. We will be using five different code files to complete this project. Now, let us proceed to our first code file, which will be the Arduino code.

Step 6: Arduino Code

In this step we will be looking at the Arduino code for the Smart Car Parking project. The code gets the data from the ultrasonic sensors and posts it to the webpage. Here is the code:

#include <SPI.h>
#include <Ethernet.h>
#include <Servo.h>

Servo servo;
//declaring variables to hold the availability data
int available1;
int available2;
int available3;
int available4;
int available5;
//distances for the ultrasonic sensors
long distance1;
long distance2;
long distance3;
long distance4;
long distance5;
//durations taken for the sensors' ultrasonic waves to return
long duration1;
long duration2;
long duration3;
long duration4;
long duration5;
//opening the servo for the cars to enter
long servoduration;
long servodistance;
//the output pins for the sensors
#define trig1 26
#define trig2 28
#define trig3 30
#define trig4 32
#define trig5 34
//the input pins for the sensors
#define echo1 27
#define echo2 29
#define echo3 31
#define echo4 33
#define echo5 35
//the out and in pins for the entrance sensor
#define servotrig 38
#define servoecho 39
//ethernet data
char server[] = "192.168.137.1";
EthernetClient client;
//default mac address
byte mac[] = { 0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED };

int response;

void setup() {
  Ethernet.begin(mac);
  Serial.begin(9600);
  delay(100);

  // response = client.connect(server, 80);
  //
  // if (response) {
  //   Serial.println("connected");
  // } else {
  //   // if you didn't get a connection to the server:
  //   Serial.println("connection failed");
  // }

  //defining the modes of the pins for the sensors
  pinMode(trig1,OUTPUT);
  pinMode(trig2,OUTPUT);
  pinMode(trig3,OUTPUT);
  pinMode(trig4,OUTPUT);
  pinMode(trig5,OUTPUT);
  pinMode(echo1,INPUT);
  pinMode(echo2,INPUT);
  pinMode(echo3,INPUT);
  pinMode(echo4,INPUT);
  pinMode(echo5,INPUT);
  pinMode(servotrig,OUTPUT);
  pinMode(servoecho,INPUT);

  servo.attach(37);
}

void loop() {
  //finding the distance for each of the sensors
  digitalWrite(trig1, LOW);
  delayMicroseconds(2);
  digitalWrite(trig1, HIGH);
  delayMicroseconds(10);
  digitalWrite(trig1, LOW);
  duration1 = pulseIn(echo1, HIGH);
  distance1 = (duration1/2) / 29.1;

  digitalWrite(trig2, LOW);
  delayMicroseconds(2);
  digitalWrite(trig2, HIGH);
  delayMicroseconds(10);
  digitalWrite(trig2, LOW);
  duration2 = pulseIn(echo2, HIGH);
  distance2 = (duration2/2) / 29.1;

  digitalWrite(trig3, LOW);
  delayMicroseconds(2);
  digitalWrite(trig3, HIGH);
  delayMicroseconds(10);
  digitalWrite(trig3, LOW);
  duration3 = pulseIn(echo3, HIGH);
  distance3 = (duration3/2) / 29.1;

  digitalWrite(trig4, LOW);
  delayMicroseconds(2);
  digitalWrite(trig4, HIGH);
  delayMicroseconds(10);
  digitalWrite(trig4, LOW);
  duration4 = pulseIn(echo4, HIGH);
  distance4 = (duration4/2) / 29.1;

  digitalWrite(trig5, LOW);
  delayMicroseconds(2);
  digitalWrite(trig5, HIGH);
  delayMicroseconds(10);
  digitalWrite(trig5, LOW);
  duration5 = pulseIn(echo5, HIGH);
  distance5 = (duration5/2) / 29.1;

  Serial.print("Distance 1:");
  Serial.print(distance1);
  Serial.println("cm");

  Serial.print("Distance 2:");
  Serial.print(distance2);
  Serial.println("cm");

  Serial.print("Distance 3:");
  Serial.print(distance3);
  Serial.println("cm");

  Serial.print("Distance 4:");
  Serial.print(distance4);
  Serial.println("cm");

  Serial.print("Distance 5:");
  Serial.print(distance5);
  Serial.println("cm");

  delay(100);

  //turning each distance into an availability flag
  if (distance1<6) {
    available1 = 0;
  } else {
    available1 = 1;
  }

  if (distance2<6) {
    available2 = 0;
  } else {
    available2 = 1;
  }

  if (distance3<6) {
    available3 = 0;
  } else {
    available3 = 1;
  }

  if (distance4<6) {
    available4 = 0;
  } else {
    available4 = 1;
  }

  if (distance5<6) {
    available5 = 0;
  } else {
    available5 = 1;
  }

  digitalWrite(servotrig, LOW);
  delayMicroseconds(2);
  digitalWrite(servotrig, HIGH);
  delayMicroseconds(10);
  digitalWrite(servotrig, LOW);
  servoduration = pulseIn(servoecho, HIGH);
  servodistance = (servoduration/2) / 29.1; //convert the echo time into a distance
  if (servodistance<10) {
    servo.write(90);  //open the barrier for the arriving car
  } else {
    servo.write(0);   //swing back (servo angles must be between 0 and 180)
  }

  //passing the variables to the php page.
  if (!client.connected()) {
    int availables[5] = {available1, available2, available3, available4, available5};
    for (int pos = 1; pos <= 5; pos++) {
      Serial.print("Available");
      Serial.print(pos);
      Serial.print("=");
      Serial.println(availables[pos - 1]);
    }
    client.stop();
    delay(100);
    //one HTTP request per parking position
    for (int pos = 1; pos <= 5; pos++) {
      if (client.connect(server, 80)) {
        Serial.println("connected");
        // Make a HTTP request:
        client.print("GET /UpdateAvailability.php?Position=");
        client.print(pos);
        client.print("&Available=");
        client.print(availables[pos - 1]);
        client.println(" HTTP/1.1");
        client.println("Host: 192.168.137.1");
        client.println("Connection: close");
        client.println();
        client.stop();
      } else {
        // if you didn't get a connection to the server:
        Serial.println("connection failed");
      }
    }
  }
}

Step 7: Xampp
Interface

In this step we will explore the Xampp interface. Xampp, as I previously mentioned, has a web server which lets us use PHP, Perl and SQL in our projects. The web server we are using is called Apache. Find out more about Apache on this website :
In order to create the table, you need to first install Xampp and then start MySQL and Apache. Make sure that those ports are available. You can find an image relating to this in the image bar above. Please note that if MySQL does not function properly, it is probably because of Skype, which uses the same port as MySQL. Right-click the Skype application in the task-bar and click Quit.
Now we need to create a table and then look at how to update its values. Note the word update: we cannot update values that do not exist yet. There should be some value in each row from the beginning so that we can change it later. Please follow these steps to create the table successfully. You may also look at the pictures above for easier understanding.
◘ Click on New.
◘ Create a database name.
◘ Then create a table name. (remember this name as you will need it for the next step)
◘ Then you will be given a layout; just enter the name for each row and choose which datatype you want. Leave the rest as it is. (the row names will also be very important)
◘ Enter the default values and leave them as they are.
◘ After creating the table, you will see that it has been added to the side bar.
Now that we have successfully created the table, we will move on to the next step, where we will look at the SQL connection code.

Step 8: Adding the SQL Connection

On this page, we will be adding the code which connects to the database. The database has a username and a password, so we must connect to it with those credentials.
Here is the code:
Sorry that I am not able to post the code in this Instructable; it does not accept HTML code, so please visit this link where my code is available on GitHub. Thanks a lot. ...
The link attached above has the dbcon (database connection) code. Once again, sorry for the inconvenience. Now that we have written the SQL connection code, let us proceed to get and update the values.

Step 9: Updating the Table

In this step we will update the availability values in the table so that the table is easier to understand. We need to first get the values and then update them. Note that you have to use your own table name and row names, which is why I cautioned you about them earlier. Here is the link to this code :...
The only major step left to make the whole program function is to add the data to the website. Let us do that in the next step before heading on to make an awesome, professional-looking webpage with Bootstrap.

Step 10: Parking Lot Display

In order to show the values to the user we must select and fetch them from the database, which we can do with PHP. Simply printing them to the screen will not suffice, as the user looking at the screen should understand what the program is trying to say. To make it decipherable, we add a tiny bit of Bootstrap. We will also be adding something called a glyphicon, a Bootstrap term which you can look into in more depth at this link :
Here is the GitHub link to my code :...
Note that your data display page should look like the one in the image. Now, the only thing remaining is the fancy part of the whole program, which is the main webpage. Let us finish that off in the next step.
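To see what the server side of this flow boils down to, here is the update-and-display logic condensed into a few lines. This is an illustrative sketch in Python, not the actual PHP from the GitHub links; the table and parameter names (parking, Position, Available) simply mirror the ones used in the Arduino code.

```python
# In-memory stand-in for the availability table created in Step 7.
# 1 = free, 0 = occupied; five spots to match the five sensors.
parking = {position: 1 for position in range(1, 6)}

def update_availability(params):
    """What UpdateAvailability.php does with the Arduino's GET parameters."""
    position = int(params["Position"])
    available = int(params["Available"])
    if position not in parking or available not in (0, 1):
        return "error"
    # In the real project this is an UPDATE ... WHERE query against MariaDB.
    parking[position] = available
    return "ok"

# Sensor 1 reports a car closer than 6 cm, i.e. the spot is taken:
status = update_availability({"Position": "1", "Available": "0"})
print(status, parking[1])  # ok 0
```

The display page then just reads the same table back and renders one Bootstrap card or glyphicon per spot.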
Step 11: Fancy Styling

We will be using a lot of Bootstrap here. Bootstrap is a framework by Twitter which is immensely helpful in creating websites. Here are the links to my webpage :......
Note: You have to provide a different IP address in the code. Mine will not work for you; use localhost/(name of your webpage) or try using your own IP address. There are two links attached, and you can also take a look at all of my files associated with this project in this link :
Finally, I would like to congratulate you on completing a pretty difficult and complicated project on the Smart Car Parking System. If you have any questions, please contact me through my YouTube channel or in the Instructables comments. And please don't forget to vote for my project in the SparkFun Home Automation contest.

Participated in the Automation Contest 2016
Participated in the Beyond the Comfort Zone Contest
1 Person Made This Project! - roniloapin04 made it!

Comments

4 years ago: you are using ethernet shield for this?
Reply, 2 years ago: Yes. They are shown in the pictures.
Reply, 3 years ago: Yes, he is.
Question, 2 years ago: Hi, may I know is it possible to make a wireless parking sensor based on arduino, which can send the parking availability data to the html page?
Question, 3 years ago: Hi sir. Do you know how to add date and time for the car entry for this?
3 years ago: i have started to do the same as the above but I didn't get to know how to connect the servotrig and servoecho, i.e., 38 and 39. can you pls elaborate
Reply, 3 years ago: Follow the same steps as shown in the image.
Reply, 3 years ago: but there is no photo on connecting the pins to 38 and 39 and it is not detailed in the video too. please explain
Question, 3 years ago: Can you send images for fixing the sensors and mega board via an 830-tie-point solderless board?
Answer, 3 years ago: Please follow the pictures posted above.
And repeat them.
4 years ago: I pretty much used this code and everything was fine. Did you copy paste it?
Reply, 4 years ago: it has a problem with the $_GET function only. you may use the isset function....
Reply, 3 years ago: $_GET works for me. Try $_POST otherwise.
Reply, 3 years ago: GET works better as Arduino is focusing on the URL itself.
Reply, 4 years ago: I downloaded ur code n changed it to only 2 ultrasonics n without the servo in it.
Reply, 3 years ago: Then it should have worked. If I were you, I would debug carefully.
Tip, 3 years ago: You can visit my website to view other projects that I have built as well!
4 years ago: Dude, can I ask something? I got nothing sent from arduino to sql database (nothing changes in updateavailability.php). And ur updateavailability code errors on my pc, do u have any suggestion? Thank you :)
Reply, 3 years ago: Did you turn SQL on? Is Skype running in parallel? Try copying and pasting my code in.
https://www.instructables.com/Automatic-Car-Parking-System/
Improved Trending Insight

In addition to exposing users' data, like who they are and what content they work with (files, messages, conversations, tasks, etc.), the Microsoft Graph also exposes calculated insights based on the users' activity. These insights enable applications to get to relevant data about users. For example, using the Trending API to get documents that are trending, or using the People API to get the people they work with most closely.

Today we are announcing an improvement to our Trending API based on feedback we received from our developer community. This API returns documents that are relevant to users in their organization. The Trending API was initially exposed under the /trendingAround navigation on any user in the beta endpoint. We are removing this navigation over the next few months, and the improved version of the Trending API is now available under a new navigation: /insights/trending. The improved Trending API returns not only the list of relevant documents but also new visualization properties that let you render the documents in your app experiences as cards, the same way we do in Office 365, for example in Delve. It also returns reference properties that enable navigation to the actual documents.

The Trending API uses intelligent background analysis to deliver the most relevant documents. To improve the API further, the analysis now happens in near real-time. This means that users see what is trending around them at this very moment.

Using the improved Trending API

The Trending API is accessible in the Microsoft Graph via /insights/trending, available in the beta namespace. You can call the API to get documents trending around you: Or around someone in your organization: You can try calling the new API right now with our Graph Explorer. Try making a call with the demo tenant, or log in with your own user and see what documents are currently trending around you.
Working with the results A call to the improved Trending API returns a set of trending documents – the top 10 documents by default. The API supports the standard ‘top’ and ‘select’ query options. Each document returned by the API has a weight property with a value that shows to what extent the item is trending around the user. The higher the value, the more relevant the item is to the user. The results are sorted by this value in descending order. Each returned document contains a resourceVisualization and a resourceReference complex value type (CVT). The resourceVisualization CVT contains properties such as ‘title’ and ‘previewImageUrl’. We use visualization properties to render the files in our experiences: The resourceReference CVT contains a ‘webUrl’ that allows you to navigate users to the location of the trending document, in either SharePoint, OneDrive or Outlook attachments. See our documentation for more information on the Trending API. Let us know what you think in the comments below! Mário Henriques and Jakub Cech on behalf of the Insights team in Microsoft Graph.
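Putting the pieces above together, a trending response can be consumed as follows. This is an illustrative Python sketch; the payload is fabricated sample data shaped like the documented properties (weight, resourceVisualization, resourceReference), not real API output.

```python
import json

# Fabricated sample response, shaped like the Trending API's results.
payload = json.loads("""
{"value": [
  {"weight": 409.2,
   "resourceVisualization": {"title": "Q3 planning.docx",
                             "previewImageUrl": "https://example.invalid/p1"},
   "resourceReference": {"webUrl": "https://example.invalid/doc1"}},
  {"weight": 183.7,
   "resourceVisualization": {"title": "Budget.xlsx",
                             "previewImageUrl": "https://example.invalid/p2"},
   "resourceReference": {"webUrl": "https://example.invalid/doc2"}}
]}
""")

docs = payload["value"]
# Results arrive sorted by the weight property, descending:
assert all(a["weight"] >= b["weight"] for a, b in zip(docs, docs[1:]))

# Build minimal "cards" from the visualization and reference CVTs,
# the way Delve uses these properties:
cards = [{"title": d["resourceVisualization"]["title"],
          "link": d["resourceReference"]["webUrl"]} for d in docs]
print(cards[0]["title"])  # Q3 planning.docx
```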
https://dev.office.com/blogs/improved-trending-insight-microsoftgraph
RationalWiki:LiquidThreads
From RationalWiki

LiquidThreads (LQT) is a MediaWiki extension that changes how talk pages work. MediaWiki talk pages were never intended to act as forums, and they are not very well suited for that. LiquidThreads was developed in order to give MediaWiki a better discussion system that combines the advantages of wikis with those of traditional forums. With LiquidThreads, every comment is a separate page in a special namespace called "Thread".

Advantages of LiquidThreads[edit]
- Indentation and signing is handled automatically.
- Replying is simpler. You don't get a wall of text consisting of everyone else's comments when you try to reply. You get an empty textbox, type in your reply, and click save.
- There are no edit conflicts when replying to a thread.
- There is no need to archive. You can make a permanent link to any comment at any time. You can post a reply to any comment and bump the thread, at any time.
- Finding new replies is easier, thanks to Special:NewMessages. Unread messages will be highlighted, making them easy to find.
- Indentation is done in a more sensible fashion - a reply is always indented one level more than the parent comment. Because comments are visually separated, there's no need to indent comments just to separate them.
- An interface that avoids unnecessary page loads for posting and editing threads, and other functions, making the experience smoother and faster.
- Customizable thread order: by default, threads are listed last modified first, but you can change this to oldest first or newest first.
- It is possible to delete a single comment (or just edit out parts of it, then oversight the first revision) in case of a privacy violation. With traditional talk pages, all revisions between the one where the comment was added up to the one where it was removed must be deleted.
- Traditional talk pages are an enormous waste of hard disk space.
With each edit, the entire talk page is saved, resulting in a lot of redundant copies. LiquidThreads eliminates this overhead.

Disadvantages of LQT[edit]
- More scrolling is required because of the extra whitespace on talk pages.
- Users familiar with traditional MediaWiki talk pages may have a hard time adjusting to the new system.
- LiquidThreads limits what you can do to a talk page; for example, it is not possible to collapse a thread using {{Collapse}}.
- LiquidThreads is used on very few wikis and has not been adopted at core Wikimedia sites such as Wikipedia (as its developers intended). Support is non-existent and compatibility with future MediaWiki upgrades is not guaranteed. RationalWiki's instance was already broken once: the toolbar doesn't show up.

FAQ[edit]

How are LQT talk pages archived?[edit]
- The short answer is, they're not. If there are more threads on a page than a set limit, the talk page will be split into multiple pages, like traditional web forums or our own forums. You can see an example here.
- The limit is 20 by default, and you currently cannot set a different default in preferences. However, you can change the limit on a per-page basis, using the lqtpagelimit parser function, e.g. to set it to 10, use {{#lqtpagelimit:10}}
- This also means that bringing an archived discussion back is as easy as adding a new comment to it, since that will bump it up to the top of the first page.

How do I get a difflink in LQT? How do I link to a specific comment?[edit]
- Because each comment is a separate page, you can just link to that comment as you would to a normal wiki page. Similarly, you can view its history and link to a diff or a specific revision if you want. The link for a comment is available by clicking "Link to" in the "More" menu in the lower right corner of the comment; the history (fossil record) is also available there. For example: Thread:User talk:Nx/Templates/reply (7)
- Note that this link will also show all the replies to that comment as well.
This is called a fragment.
- You can also get a link to an "old revision" of a thread if you click "Fossil record" in the thread header and click a date. This will show the thread as it was at that point in time, emulating the behavior of traditional talk pages.

Why am I getting "new messages"?[edit]
- This is most likely because you have a page on your watchlist that has been converted into an LQT talk page. All comments on that page will be added to your "New messages" page. If you don't want that, simply unwatch the page (don't forget to mark the messages as read).
- Any thread you create or reply to is also automatically added to your watchlist, and you will get a new messages notification for these (but not for other threads on that page). You can disable this behavior in your preferences, in the "Watchlist" tab, by unchecking "Watch threads that I create or reply to". To watch or unwatch an individual thread, click Watch or Ignore in the thread header.

What is "Show N replies"? Why isn't the entire thread shown?[edit]
- Long threads are not shown completely, to avoid unnecessarily loading them in case you do not want to read them. After a certain reply depth, or a certain number of replies, you will instead see a button titled "Show N replies", where N is a number. Just click it to load the rest of the thread. You can customize the limits in your preferences, in the "Threaded discussion" tab.
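The archiving behaviour described in the FAQ (pages split at a per-page limit, 20 by default, and a replied-to thread is bumped back to the top of the first page) can be sketched in a few lines. This is only an illustration of the behaviour, not LiquidThreads' actual code:

```python
def paginate(threads, limit=20):
    """Split a last-modified-first thread list into LQT-style talk pages."""
    return [threads[i:i + limit] for i in range(0, len(threads), limit)]

threads = ["Thread:%d" % n for n in range(45)]  # 45 threads on one talk page
pages = paginate(threads)
print(len(pages), len(pages[-1]))               # 3 pages: 20, 20 and 5

# Replying to an "archived" thread simply bumps it to the top of page 1:
bumped = threads.pop(40)
threads.insert(0, bumped)
print(paginate(threads)[0][0])                  # Thread:40
```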
https://rationalwiki.org/wiki/RationalWiki:LiquidThreads
havana to icehouse upgrade (alternative technique)

Hi,

My current installation (Havana):
- 2 controllers (passive HA)
- 1 network node (+ 1 for backup)
- 12 compute nodes
I'm using OVS + GRE with Neutron.

I need to upgrade to Icehouse and I thought of an alternative technique. I don't know if it is a stupid technique, so here are the details. Basically, I want to install a completely new controller and network node and gradually move the instances from Havana to Icehouse using snapshots (as suggested here -> (...)).

The problem is that I have floating IPs associated with VMs and I cannot modify these floating IPs. In particular, I have only one external network (say 1.2.3.0/24) and, more precisely, the floating IP pool is 1.2.3.10 -- 1.2.3.254. Studying the Havana network node, I discovered that 1.2.3.10 (the first IP in the allocation pool) is allocated as a gateway (the qg interface inside the qrouter namespace) when the "neutron router-gateway-set" command is used. The important thing here is that 1.2.3.1, the physical router, has 1.2.3.10 -> aa:bb:cc:dd:ee:ff (the "virtual" MAC address associated with qg in the qrouter namespace) in its ARP table. So when I try to ping 1.2.3.10 from the outside I will reach the Havana network node (precisely the qg interface inside the qrouter namespace).

So, if I install a new network node with Icehouse and create a new external network (from the Icehouse controller) with a slightly different pool (1.2.3.9 -- 1.2.3.254), I can use 1.2.3.9 as the gateway on this node without IP conflicts. Next, I can gradually migrate all instances (with a script...) and force Icehouse Neutron to give a particular floating IP to each instance. Furthermore, I can create private LANs in the Icehouse environment with the same IP ranges, because this kind of traffic will be SNATed by iptables on the network node.

Can I get some opinions on this idea? I think that if it works, it could be great, because I can control the downtime and move instances gradually with a script. Thanks
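The allocation logic this plan relies on can be sanity-checked with a toy allocator: the question observes that the gateway grabs the first address of the pool, so starting the Icehouse pool at 1.2.3.9 keeps the new gateway off 1.2.3.10. The sketch below (Python, a deliberate simplification, not Neutron's actual IPAM code) models that:

```python
import ipaddress

def first_free(pool_start, pool_end, in_use):
    """Lowest unallocated address in an inclusive allocation pool."""
    addr = ipaddress.IPv4Address(pool_start)
    end = ipaddress.IPv4Address(pool_end)
    taken = {ipaddress.IPv4Address(a) for a in in_use}
    while addr <= end:
        if addr not in taken:
            return str(addr)
        addr += 1
    raise RuntimeError("allocation pool exhausted")

used = set()
# "neutron router-gateway-set" takes the first pool address -> 1.2.3.9,
# while the Havana gateway keeps answering ARP for 1.2.3.10.
gateway = first_free("1.2.3.9", "1.2.3.254", used)
used.add(gateway)

# A migrated VM can then be forced onto its old floating IP:
wanted = "1.2.3.10"
conflict = wanted in used
print(gateway, wanted, conflict)  # 1.2.3.9 1.2.3.10 False
```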
https://ask.openstack.org/en/question/65726/havana-to-icehouse-upgrade-alternative-technique/
I have a custom dialog builder going for an extension I'm writing that transforms data to be used by SPSS's machine learning models. I need to read from a .pkl file that contains the feature set header names as well as which sklearn transformers to run. When I loaded the file, SPSS complained with an "ImportError: No module named" error for modules that are needed to unpickle the .pkl. Where should I put the Python folder (module) that the .pkl file is looking for at load time? I've put the files in a folder (module) in the working directory where my .pkl file is, which I know it is finding correctly, but no luck. I've also added the files (not in a folder, because it won't let me add them as a folder) to the extension's properties, but no luck there either. Any ideas would be great, thanks.

Answer by jkpeck (5348) | May 02, 2017 at 12:35 PM
Statistics normally uses a private Python installation (unless you change it via Edit > Options > Files). You can put your modules to be imported anywhere on the Python search path (print sys.path in Python code run from a syntax window to see the whole search list). The /python/lib/site-packages directory under your Statistics installation is one place that will always work.

@jkpeck In case it matters, I'm using Modeler. I've put my folder/module with the external Python files in a place listed on the sys.path (/Applications/IBM/SPSS/Modeler/18/SPSSModeler.app/Contents/as/scripts) but no luck still. I don't see a /python/lib/site-packages under /Applications/IBM/SPSS/Modeler/18/SPSSModeler.app/Contents anywhere, so I haven't put it there. I did try putting the folder/module in the Python install folder's site-packages, and no luck there either.

Answer by jkpeck (5348) | May 02, 2017 at 01:24 PM
@AaronSmith I didn't realize you were asking about Modeler. My answer was for Statistics.
For Modeler, you could try running a piece of Python code that imports a module from the standard library, such as re, and then do print re to see where it lives. I don't have Modeler 18, but the scripting in older versions uses Jython, not Python, although I believe support for real Python in the Modeler backend was added.

Answer by jkpeck (5348) | May 02, 2017 at 02:18 PM
A stab in the dark: try starting your code with this
import sys
sys.append(r"--folder where modules are--")

Answer by jkpeck (5348) | May 03, 2017 at 08:32 AM
Sorry, I lost a piece typing that. It should be sys.path.append(...

Answer by jkpeck (5348) | May 03, 2017 at 09:35 AM
@AaronSmith OK. Are you really sure that all the dependencies for these files are resolved in that folder? If an import in turn does another import that fails, the first import will also fail. And does the case of the file names match the import statements? Beyond that, I don't know what to suggest.

@jkpeck alright, after trying just about everything I could think of, I moved all my files out of the folders/modules and remade the .pkl file with all the files in the base directory of the .py file that creates the .pkl file. Then I loaded that .pkl file with all the files out of their directories, and that apparently worked. I'm guessing the Python PATH doesn't look recursively in or for folders?

Answer by jkpeck (5348) | May 03, 2017 at 05:51 PM
@AaronSmith I'm glad you have a solution, but the problem is still a mystery. Each time an import is executed, the entire Python path (sys.path) is searched. For each directory, besides its file contents, it looks in any subdirectory that contains a file named init.py for import statements. There are some slight differences between Python 2 and Python 3, but that's the basic idea.

Answer by jkpeck (5348) | May 03, 2017 at 07:51 PM
@AaronSmith That init above was mangled by this site.
What I wrote was __init__.py
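The whole exchange can be reproduced in a few lines: a pickle stores only a reference like module.ClassName, so the defining module must be importable (via sys.path) at load time. The helpers module and FeatureSet class below are made-up stand-ins for the transformer files the .pkl in the question depends on:

```python
import os
import pickle
import sys
import tempfile
import textwrap

# Write a throwaway module to disk, mimicking the helper files.
workdir = tempfile.mkdtemp()
with open(os.path.join(workdir, "helpers.py"), "w") as fh:
    fh.write(textwrap.dedent("""\
        class FeatureSet:
            def __init__(self, names):
                self.names = names
        """))

sys.path.append(workdir)
import helpers

# The pickle records only "helpers.FeatureSet", not the class body.
blob = pickle.dumps(helpers.FeatureSet(["age", "income"]))

# Hide the module again to reproduce "No module named helpers":
del sys.modules["helpers"]
sys.path.remove(workdir)
import_failed = False
try:
    pickle.loads(blob)
except ImportError:
    import_failed = True

# The fix from the answers: put the module's folder on sys.path.
sys.path.append(workdir)
restored = pickle.loads(blob)
print(import_failed, restored.names)  # True ['age', 'income']
```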
https://developer.ibm.com/answers/questions/372308/$%7BprofileUser.profileUrl%7D/
Leveraging the Flickr API from the .NET Framework

Introduction

I think any developer would agree: if it doesn't have an API, it ain't Web 2.0. And although the big guys often don't provide managed-code interfaces for the .NET developer, there's always someone out there who will take the time to wrap their API and provide easy access for the rest of us. Flickr, of course, is among the original Web 2.0 applications. Like Gmail and a few others, Flickr was Web 2.0 before anyone called it Web 2.0. By now, Flickr's ability to host your vacation shots or your fine-art photography, and its rich supporting functionality, is the standard by which all the others are measured. And, fortunately, Flickr hasn't neglected its API. The API regularly keeps pace as new features are added to the site. There's very complete support in the API, for example, for using a photo's geographical location (the GPS information on where the photo was shot). In this piece, I'd like to introduce you to the Flickr API, point out some highlights and write an application that retrieves and displays a user's photos.

Wrap and Roll

You could use the Flickr API directly, but since it's written in unmanaged code, you'd have to use COM-access techniques. It's not difficult, but it's not that pretty either, and it's a hassle you'd have to deal with throughout your application. The easier approach is to leverage a wrapper that's done all of that for you. Fortunately, just such a wrapper is available on CodePlex (Microsoft's open-source library for cool, free .NET applications and source code). Go to:
Click the big green Download button on the top right of the page and that will get you the DLL you need. (No need to download the source code unless you are curious.) Unzip the download someplace and you'll find two folders - Debug and Release. If you're not using the source code, you'll use the DLL in the Release folder.
Big kudos go to Sam Judson for his great work in creating this Flickr wrapper and making it available to the .NET community. Getting Keyed In Flickr wants to know who is calling their API. So to use it they require that you get a key and then reference that key when you make calls to the Flickr functions. There's no way to avoid this. Fortunately the process is simple and instantaneous. Go here:. Click the Request an API Key link. On the next page click the big blue box that says Apply for a Non-Commercial Key. Finally, there's a form asking you to identify and describe your application. Fill in the form and click Submit. Two giant numbers will be displayed: a key and a secret. Copy them both and paste them someplace safe where you can reference them later in your code. For this example application, you'll only need the key. Fumbling with Flickr So what's next? How do I know what functions to call? The Documentation page for the Flickr wrapper on Codeplex begins "Ha Ha Ha Ha! No but seriously..." This doesn't bode well. Fortunately, you don't really need a lot of documentation because the wrapper closely mirrors the API of Flickr itself. And documentation for that is fairly complete:. Plus I'll show you how to get started in the next few sections. Fire it Up Start Microsoft Visual Studio. You can use the Flickr wrapper with either Visual Studio 2005 or Visual Studio Pro 2008 (I haven't tried it out yet on Visual Studio 2010, but there's no reason it shouldn't work there, too.). Create a new application. You can access Flickr from a Windows Forms application, but the example for this article is ASP.NET. Now right-click your project and choose Add Reference. Navigate to the location where you unzipped the .NET wrapper you downloaded and select ...\Release\FlickrNet.dll. The DLL is copied to your project and you are ready to go. 
Also, to make your coding a bit simpler, go to the project properties window, References section, and in the bottom listbox, scroll down to FlickrNet and check it off. That will import the namespace for all the pages in your application. Avatars (No, Not the Blue People) This example application will provide a page for the user to upload a picture to use as their avatar on your site (on a discussion forum, for example). They can either choose to upload the picture from their local computer or they can provide a Flickr account name and use one of their photos from there. The local upload option leverages the standard FileUpload server control. I'll focus on the implementation of the Flickr option in this article. Figure 1. The Select an Avatar page in Microsoft Visual Studio If the user chooses to select a picture from Flickr, they must first enter their Flickr account user name and then click Get Photos. They are not required to enter a password because this application will not be logging in to their account. The username will simply be used to search for the right pictures--so only the publicly accessible ones will be available (There are a set of functions to allow your application to actually log in with the user's credentials. The first time your application tries to do so, the user is required to confirm that your application should have access. From then on your application has full access to everything on their account.). If the username is valid and there are pictures for that user, the listbox under the Get Photos button will be filled with the list of photo titles. The user can then click on a title to see the corresponding picture displayed in the image, as in Figure 2. Once they've found the one they want, they can click Use This One. Figure 2. The user selects a title in the listbox and the photo is displayed with the image control. Instantiation is the Key Got that Flickr key ready to use? 
You only have to specify it once when you instantiate the Flickr API class and the wrapper handles passing it whenever it is needed. Listing 1 is from the Click event of the Get Photos button.

If FlickrUsername.Text = "" Then
    FlickrStatus.Text = "You must enter a Flickr username"
    Return
End If
Dim myFlickr As New Flickr("AAAbbbCCC111222333DDDeeeFFF")

Listing 1. The first part of the GetPhotos_Click event.

After verifying that the user entered something for the username, the Flickr class is instantiated, passing the key you copied earlier (replace the bogus key above with your own). The instantiated Flickr object (myFlickr above) is your entry point into the API. There are a number of classes that you can access within it. Among them:

- People
- Contacts
- Blogs
- Galleries
- Activity
- Authorization
- Photos
- Geographical Information
- Licenses
- Notes
- Groups
- Group Pools
- Places
- URLs

As you can begin to see, virtually every feature available on the site is available in the API.

Chasing Down the User

The next step is to get a reference to the user we want based on the username typed in. A handy function named PeopleFindByUsername does just that. In Listing 2, I call this function and receive the reference back. The data type of the reference variable is FoundUser--a type provided by the Flickr wrapper.

Dim userRef As New FoundUser
Try
    userRef = myFlickr.PeopleFindByUsername(FlickrUsername.Text)
Catch ex As FlickrException
    FlickrStatus.Text = ex.Message
    Return
End Try

Listing 2. The second part of the GetPhotos_Click event

The function is wrapped in a Try/Catch so that if the user is not found (or some other error occurs), the user is informed via the FlickrStatus label.

Retrieving the List of Photos

To retrieve the list of photos for the user, I use the PhotosSearch function. PhotosSearch accepts one argument of type PhotoSearchOptions. An instance of this class is used to fill in all the possible options you might want to use in a photo search.
And there are a lot of possible options. For this application, however, I only need the UserId. UserId here is not the same as the username. It's a unique number that's used to identify users in functions like this one.

Dim searchOptions As New PhotoSearchOptions
Dim photoList As New Photos
Dim currentPhoto As New Photo

searchOptions.UserId = userRef.UserId
photoList = myFlickr.PhotosSearch(searchOptions)
FlickrPicsList.Visible = True
For Each currentPhoto In photoList.PhotoCollection
    FlickrPicsList.Items.Add(New ListItem(currentPhoto.Title, _
        currentPhoto.SmallUrl))
Next

Listing 3. The third and final part of the GetPhotos_Click event

Next the listbox is made visible and each of the photos in the collection returned is added to the listbox. For each item, a text and a value is set. The text (which is displayed) is the photo's title. The value (which is not displayed) is the URL needed to directly access the photo.

Showing Off

The user now has a list of their photos available. However, all they see for each is its title. And not all Flickr users are careful to set their titles individually. So to make it easy for the user to identify the photo they've selected, the listbox's SelectedIndexChanged event is used to display the photo using the image control.

Dim selectedPhotoUrl As String
selectedPhotoUrl = FlickrPicsList.SelectedItem.Value
PhotoDisplay.ImageUrl = selectedPhotoUrl

Listing 4. The FlickrPicsList_SelectedIndexChanged event

In this event handler, I retrieve the value (not the text) of the listbox's currently selected item. You'll remember that this is the URL necessary to reference the photo directly. Since this was saved in the value for each listbox item, I don't need to do another query to Flickr to get the photo. I can just set the ImageUrl directly.
Conclusion Of course this article only begins to demonstrate the possibilities available with the Flickr API, but I hope it whets your appetite for Flickr integration and gives you the knowledge you need to launch into it more deeply.
http://www.codeguru.com/print/csharp/article.php/c17033/Leveraging-the-Flickr-API-from-the-NET-Framework.htm
Hello. This has been driving me crazy for a few days now. I have a homework assignment that involves making an index-creating program. Basically, the program reads a text file and outputs another text file as an "index" of all the words and their line numbers from the input file. One class is called "IndexEntry". Every instance of this class represents one line on the output file, essentially holding a String of the word and its line numbers. Here's its code:

import java.util.*;

public class IndexEntry
{
    private String word;
    private ArrayList<Integer> numsList;

    public IndexEntry(String aWord)
    {
        word = aWord.toUpperCase();
        numsList = new ArrayList<Integer>();
    }

    public void add(int num)
    {
        /*Integer intObj = new Integer(num);
        if (numsList.contains(intObj) == false)
            numsList.add(intObj);*/
        numsList.add(num);
    }

    public String getWord()
    {
        return word;
    }

    public String toString()
    {
        String tempString = word + " ";
        for (Integer i : numsList)
            tempString += i + ", ";
        return tempString;
    }
}

I commented some stuff out in the add(int num) method because it seems that an ArrayList can in fact take ints before turning them into objects and storing them. Next is the "DocumentIndex" class. It extends an ArrayList of IndexEntry and represents all of the IndexEntry instances (I think). Here is its code:

import java.util.*;

public class DocumentIndex extends ArrayList<IndexEntry>
{
    public DocumentIndex()
    {
        super();
    }

    public DocumentIndex(int initialCapacity)
    {
        super(initialCapacity);
    }

    public void addWord(String word, int num)
    {
    }

    public void addAllWords(String str, int num)
    {
    }

    private int foundOrInserted(String word)
    {
    }
}

As you can see, there isn't much there yet. There's also a third class with the main method and some other important stuff, but I don't think it pertains to my problem and so I won't post it unless requested. Here are my problems: 1) I don't really understand how "DocumentIndex" can extend an ArrayList of "IndexEntry"s. What does this mean? What is inherited?
2) "DocumentIndex" needs to look at every instance of "IndexEntry" to see if a word already exists in an entry before inserting it into a new IndexEntry (which is what "DocumentIndex"'s add method is for). How do I do this? I thought about using a for-each loop, something we just learned in my programming class, but no luck. I don't think my problem is too complicated, but I'm a total novice and was just introduced to ArrayLists and Arrays last week. I've done some Google searching and should've gone to bed a while ago, but I can't find any answers. If you can help me, thank you.
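For reference, here is one possible completion of the stub methods from the post. IndexEntry is trimmed to what is needed here (its toString is simplified), the method names follow the assignment's stubs, and the bodies are just one reasonable interpretation, not the assignment's official solution. The key point for question 1: because DocumentIndex extends ArrayList<IndexEntry>, the object itself is the list of entries, so inherited methods like get(), size() and add() operate directly on the index.

```java
import java.util.ArrayList;

// Trimmed version of the poster's IndexEntry (toString simplified here).
class IndexEntry {
    private final String word;
    private final ArrayList<Integer> numsList = new ArrayList<>();

    IndexEntry(String aWord) { word = aWord.toUpperCase(); }

    void add(int num) {
        if (!numsList.contains(num)) numsList.add(num); // skip duplicate line numbers
    }

    String getWord() { return word; }

    @Override
    public String toString() { return word + " " + numsList; }
}

// Because DocumentIndex extends ArrayList<IndexEntry>, "this" IS the
// collection of entries: get(), size() and add() are all inherited.
class DocumentIndex extends ArrayList<IndexEntry> {
    void addWord(String word, int num) {
        // find (or create) the entry for this word, then record the line number
        get(foundOrInserted(word.toUpperCase())).add(num);
    }

    void addAllWords(String str, int num) {
        for (String w : str.split("[^a-zA-Z]+"))   // split a line into words
            if (!w.isEmpty()) addWord(w, num);
    }

    private int foundOrInserted(String word) {
        for (int i = 0; i < size(); i++)           // linear scan of existing entries
            if (get(i).getWord().equals(word)) return i;
        add(new IndexEntry(word));                 // not found: append a new entry
        return size() - 1;
    }
}

public class Main {
    public static void main(String[] args) {
        DocumentIndex index = new DocumentIndex();
        index.addAllWords("the quick fox", 1);
        index.addAllWords("the lazy dog", 2);
        for (IndexEntry e : index) System.out.println(e); // e.g. THE [1, 2]
    }
}
```

The for-each loop in main iterates the DocumentIndex itself, which is exactly what extending ArrayList buys you.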
http://www.javaprogrammingforums.com/collections-generics/13959-traversing-arraylist-superclass-help-%5B.html
in reply to More than mod_cgi less than mod_perl. A possible way to do what you want may be to have a 'cgi-server' wrapped around all cgi-scripts, and have Apache pass all requests to this other process. This sounds very similar to fast-cgi. (Caveat: I've never used it, but I've heard good things about it.) update: url.. I wouldn't say that FastCGI is dead. Stable, yes. Dead, no. It's been working beautifully on my Debian boxes for years, and is still supported in the new Sarge that was just released. I believe the burgeoning Ruby on Rails community is picking it up, too, as one of the persistent mechanisms of choice. It just works, and darn well I might add. One particularly thorny problem it helped me solve recently- I provide "boutique" hosting services for a rather large perl-based CMS that's under heavy development. I have clients that are on the stable branch and are happy where they are at, and clients that want to run the development "bleeding edge" branch. mod_perl makes this difficult because of namespace clashes, while with FastCGI I can run multiple long-running "instance scripts" and everything just works. And nicely, too. You do lose the direct integration with the Apache API (I love mod_perl), but FastCGI is a very nice alternative persistent environment that's language independent. -Any sufficiently advanced technology is indistinguishable from doubletalk. My Biz Perhaps that just means that they have both already reached perfection ;) Seriously though, FastCGI is a very mature technology, so I can understand why development may have stopped. After all, when was the last time Apache made any changes to the way CGI works? As for PersistentPerl, there is only one registered bug with the project on RT, and it is platform dependent (OSX), and includes a proposed fix. Really, there are lots of perl modules that haven't had an upgrade in ages that are still relevant. How long did Apache::Session go without an update?
HTML::Template has only had one release in the last 3 years and it is still very heavily used. I'm sure there are many others...
http://www.perlmonks.org/index.pl?node_id=464253
Mark Heily wrote:
> I would like Autoconf to generate a stub Makefile to include in the
> top-level of each tarball. The purpose of this stub Makefile is to call
> ./configure to generate the real Makefile and then re-invoke make(1)
> using the real Makefile.

I also like the ability to simply check out a pristine copy of the source and then simply type 'make'. Fortunately this is quite easy. I have always been fond of Jim's GNUmakefile in Coreutils.
;a=blob;f=GNUmakefile

Here is a derivation that is maximally condensed for discussion and modified to automatically run autoreconf and ./configure if needed. I have been using this in my own projects.

have-Makefile := $(shell test -f Makefile && echo yes)
have-configure := $(shell test -f configure && echo yes)
have-Makefile-maint := $(shell test -f Makefile.maint && echo yes)

ifeq ($(have-Makefile),yes)
include Makefile
else
ifeq ($(have-configure),yes)
all:
	@echo "There seems to be no Makefile in this directory."
	@echo "Running ./configure before running 'make'."
	sh ./configure
	@$(MAKE)
else
all:
	@echo "There seems to be no Makefile in this directory."
	@echo "There seems to be no configure script in this directory."
	@echo "Running 'autoreconf' to generate the configure script."
	autoreconf --install
	@$(MAKE)
endif
endif

ifeq ($(have-Makefile-maint),yes)
include Makefile.maint
endif

By including a GNUmakefile such as this in a project, what you are suggesting is available but without a conflict over the generated Makefile.

> My intention is to save typing for power users and make it easier for
> novice users to build programs from source. This effectively hides the
> Autoconf/Automake mechanism unless someone wants to pass specific
> options to the configure script.
>
> Any thoughts?

I am not sure there is enough drudgery in ./configure to really warrant doing crazy stuff. :-)

Bob
http://lists.gnu.org/archive/html/autoconf/2007-06/msg00032.html
Introduction to Multilevel Inheritance in Java

Inheritance is one of the important features of OOPS concepts. It helps in the reuse of code by inheriting the features of one class, known as the parent class, into another class, known as its child class. When the process of inheriting extends to more than 2 levels, it is known as multilevel inheritance. In other words, when a new class derives features from a class that has itself been derived from a base class, this is said to be multilevel inheritance. The new class is said to be a grandchild of the parent class. For example, A is the parent class, B is its child, and C is the child class of B and grandchild of A. Similarly, A is the parent class for class B and the grandparent for class C.

Syntax of Multilevel Inheritance in Java

Let us see the syntax of multilevel inheritance in java, which is given below:

class A {
    // class A is parent of class B
    // class A is grandparent of class C
    public A() {
        // A constructor
    }
    public void fun1() {
        // function in parent class
    }
}

class B extends A {
    // class B is a child class of class A
    // class B is a parent class of class C
    public B() {
        // class B constructor
    }
}

class C extends B {
    // class C is a child class of class B
    // class C is a grandchild class of class A
    public C() {
        // class C constructor
    }
}

public class Test {
    public static void main(String[] args) {
        C obj = new C();
    }
}

For the implementation of multilevel inheritance, there must be one base class, e.g. A. Then there must be a derived class B which extends class A, and a class C which extends class B.

How Multilevel Inheritance Works in Java?

Multilevel inheritance extends the features of one derived class to another new class. Since the features of the parent class are extended up to multiple levels, this type of inheritance is known as multilevel inheritance. When a child class extends a parent class, it can use all the features of the parent class.
Thus if there is a class that extends features of this derived class, then it is said to be a grandchild of the base class and has all the features of the parent and child classes. Consider a class A as the parent class, class B as a child class of class A, and class C as a child class of class B, and suppose an object is created for class C, say obj, as given above. When we declare this object, the constructor of class C is called. As we know, when the constructor of a child class runs, the constructor of its parent class is called first. Thus when we call C(), the B() constructor gets called, and further, since B is a child class of A, A() is called. Then control gets back to the child class. Thus the constructors are executed in the following series: A(), then B(), then C().

But when a method is called using this object of a child class, it is first checked whether a method with the same signature is present in the child class. If not, control is directed to its parent class to find the method. Thus in this case, when fun1() is called using object obj, control goes to C, finds there is no such method, so control goes to B and then to class A. The reason for this is that one can easily redefine the methods of the parent class in its child class, which is known as method overriding. Thus first preference is given to the overridden method.

In this way, multilevel inheritance implements the inheritance feature in classes at multiple levels. This type of inheritance is most often used while implementing data augmentation – that is, the process of increasing the diversity and amount of existing data without updating the existing code. It also helps to introduce variability to one's available training model by applying simple transformations.

Examples of Multilevel Inheritance

Let us see some of the examples of multilevel inheritance in java.
Code:

class Electronics {
    public Electronics() {
        System.out.println("Class Electronics");
    }
    public void deviceType() {
        System.out.println("Device Type: Electronics");
    }
}

class Television extends Electronics {
    public Television() {
        System.out.println("Class Television");
    }
    public void category() {
        System.out.println("Category - Television");
    }
}

class LED extends Television {
    public LED() {
        System.out.println("Class LED");
    }
    public void display_tech() {
        System.out.println("Display Technology- LED");
    }
}

public class Tester {
    public static void main(String[] arguments) {
        LED led = new LED();
        led.deviceType();
        led.category();
        led.display_tech();
    }
}

Output:

Class Electronics
Class Television
Class LED
Device Type: Electronics
Category - Television
Display Technology- LED

Explanation: In the above example, class Electronics is a general class that provides a method deviceType() for all electronic devices. Then we have class Television, which extends the Electronics class and has a method named category() to display the type of electronic device. Then class LED extends the Television class to specify the technology used for its display. It has the method display_tech() to show that the technology is LED. In the main method, when we make an object of the LED class, we use it to call methods from all the parent classes. When the constructor of a child class is called, the constructor of the parent class is called first; thus when new LED() is called, first new Television() is called. Further, in this constructor new Electronics() gets called and displays "Class Electronics". Control then returns to the Television constructor, which displays "Class Television", and then to the LED constructor, which displays "Class LED". When a method is called using an object of the LED class, control first goes to the LED class and tries to find the method deviceType(); on not finding it there it goes to the Television class, and when it is not found there either, it goes to the further superclass Electronics, finds the method, and executes it.
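That lookup order is easiest to see when a method actually is overridden at a lower level. Here is a minimal sketch reusing the A/B/C names from the syntax section above; the fun2 method and the Main class are added here purely for illustration.

```java
// Three-level hierarchy where a method IS overridden mid-chain: the
// version closest to the object's actual class wins; otherwise the
// lookup walks upward through the parents.
class A {
    public void fun1() { System.out.println("fun1 from A"); }
    public void fun2() { System.out.println("fun2 from A"); }
}

class B extends A {
    @Override
    public void fun1() { System.out.println("fun1 overridden in B"); }
}

class C extends B { }  // defines nothing new itself

public class Main {
    public static void main(String[] args) {
        C obj = new C();
        obj.fun1(); // not in C -> found in B, which overrides A's version
        obj.fun2(); // not in C or B -> falls back to A
    }
}
```

Running this prints "fun1 overridden in B" and then "fun2 from A", which is exactly the child-first search described above.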
Conclusion

Multilevel inheritance is a great technique to implement the main advantage of inheritance, i.e. code reusability and readability, through multiple levels. It helps to introduce variability and diversity to the existing code, which provides the basic training material.

This is a guide to Multilevel Inheritance in Java. Here we discuss the syntax and working of Multilevel Inheritance in Java along with examples and code implementation.
https://www.educba.com/multilevel-inheritance-in-java/?source=leftnav
So VS.NET is giving you the vague error when using Subversion to version control your ASP.NET Web projects:

"Refreshing the project failed. Unable to retrieve folder information from the server"

Here is a good working solution. It's not my solution, but I felt the need to document it and to share. First off, uninstall your version of TortoiseSVN. You installed the version which produces the standard ".svn" folders, which is the reason for this issue in the first place. Next, download and install the special version specific to VS.NET. At the time of writing this, the proper download was indicated by:

"Special version for Win2k/XP: (We provide NO support for this!) uses _svn folders instead of .svn to work around the VS.NET bug with web projects. If you don't use web projects then please use the official version. Note: working copies created by this version are incompatible with other Subversion clients!"

And was found at: So great, you now have changed over your version of TortoiseSVN to use the _svn instead of the common .svn folders, but what about all of your local code which you may or may not have committed to your repository? Or, plain and simple, you want an easier way to update your local copy to avoid having to get it all from the repository again?
Here is a quick snippet of code that I threw together just for this purpose:

using System;

namespace SVNChangeFolders
{
    class Class1
    {
        [STAThread]
        static void Main(string[] args)
        {
            string source = ".";
            string dest = "_";
            string path = "";
            foreach (string a in args)
            {
                if (a.ToLower() == "to:_" || a.ToLower() == "to:.")
                {
                    if (a.ToLower() == "to:_")
                    {
                        source = ".";
                        dest = "_";
                    }
                    else
                    {
                        source = "_";
                        dest = ".";
                    }
                }
                else
                {
                    path += " " + a.Trim();
                }
            }
            path = path.Trim();
            if (!System.IO.Directory.Exists(path))
            {
                System.Console.WriteLine("Folder does not exist:" + path);
                return;
            }
            System.IO.DirectoryInfo dir = new System.IO.DirectoryInfo(path);
            RenameFolders(dir.GetDirectories(), source, dest);
        }

        public static void RenameFolders(System.IO.DirectoryInfo[] Folders, string source, string dest)
        {
            foreach (System.IO.DirectoryInfo dir in Folders)
            {
                if (dir.Name == source + "svn")
                    dir.MoveTo(dir.FullName.Replace(source + "svn", dest + "svn"));
                else
                    RenameFolders(dir.GetDirectories(), source, dest);
            }
        }
    }
}

Usage is pretty easy. Build it first with: "csc SVNChangeFolders.cs" then run it:

SVNChangeFolders Some Directory Path
or
SVNChangeFolders "Some Directory Path"

By default it will switch all ".svn" folders to "_svn". You can change this behaviour by doing something like:

SVNChangeFolders Path to:.
or
SVNChangeFolders Path to:_

Pretty simple fix and much easier than getting the latest revision of the (sometimes large) tree.

The following worked for me. It might also work for some people:
1. search for ".svn" folders (make sure to tick the hidden files and folders checkbox - from the "more advanced options").
2. delete all .svn folders!
3. install Tortoise version 1.2 and checkout a fresh version of your web projects.
4. replace the new checked-out folder with your older one replacing the new files with your own (modified) ones.

Thanks for the info, helped me fix that vague problem
http://weblogs.asp.net/rchartier/archive/2005/08/10/422184.aspx
I am trying to make a program to go on a stock website, find the value a particular stock is at, and display it to the user, then repeat at your selected timeframe. I can get my code to read the entire source of the page or open a page, but I cannot seem to get it to find something within the source. To give you a better idea of what I would like it to do when completed: get input from the user on what stock and site to search and how often to check it. Once the run button is clicked, I would like it to take the input from the user and run in the background, going on the website, searching the stock, and displaying the numbers back to the user. I have only been able to have it display the entire source code though, not just the stock numbers. If anyone has some suggestions I would appreciate it. Thanks.

import java.io.BufferedInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.MalformedURLException;
import java.net.URL;

public class StockTracker {

    public static void main(String[] args) {
        try {
            URL url = new URL("");
            InputStream stream = url.openStream();
            BufferedInputStream buf = new BufferedInputStream(stream);
            StringBuilder sb = new StringBuilder();
            while (true) {
                int data = buf.read();
                if (data == -1) {
                    break;
                } else {
                    sb.append((char)data);
                }
            }
            System.out.println(sb);
        } catch (MalformedURLException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
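The fetch part above already works; the missing step is searching the page source instead of printing all of it. Since the thread never shows the target site's markup, the sample HTML and the regex below are made-up assumptions; a real pattern has to be written against the actual page source you fetch.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: extract one number from a fetched page string with a regex.
// The markup <span class="price">...</span> is hypothetical -- view the
// real page source to see how the site actually wraps its quote.
public class Main {
    static String findQuote(String pageSource) {
        Pattern p = Pattern.compile("class=\"price\">([0-9]+\\.[0-9]+)<");
        Matcher m = p.matcher(pageSource);
        return m.find() ? m.group(1) : null; // capture group 1 is the number
    }

    public static void main(String[] args) {
        // In the poster's program this string would be sb.toString();
        // a fixed sample is used here so the sketch runs offline.
        String sample = "<div><span class=\"price\">123.45</span></div>";
        System.out.println("Quote: " + findQuote(sample));
    }
}
```

To repeat the check at the user's chosen interval, the whole fetch-and-extract step can be scheduled as a java.util.Timer task (or a ScheduledExecutorService) instead of running once in main.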
http://www.javaprogrammingforums.com/whats-wrong-my-code/35462-how-can-i-search-specific-content-web-site.html
How to Display Something on Screen with the C Programming puts() Function

When you're programming in C, you may want your computer to display something on the screen. The puts() function sends a stream of text to the standard output device. What the heck does that mean? For now, consider that the puts() function displays text on the screen on a line by itself. Here's the format:

#include <stdio.h>

int puts(const char *s);

Because that official format looks confusing, you can also use this unofficial format:

puts("text");

The text part is a string of text — basically, anything sandwiched between the double quotes. It can also be a variable. The puts() function requires that the source code include the stdio.h header file. That header file contains the function's prototype. Header files are added to the source code by the use of the #include directive, as just shown. The C language handles text in streams, which is probably different from the way you think computers normally handle text. The standard output device is usually the computer's display. Output can be redirected at the operating system level; for example, to a file or another device, such as a printer. That's why the technical definition of the puts() function refers to standard output and not to the display.
http://www.dummies.com/how-to/content/how-to-display-something-on-screen-with-the-c-prog.html
Parse JSON from a local file in react-native: JSON or JavaScript Object Notation is a widely used format for transferring data. For example, a server can return data in JSON format and any frontend application (Android, iOS or Web application) can parse and use it. Similarly, an application can send data to a server in the same JSON format. Sometimes, we may need to test our application with local JSON files. If the server is not ready, or if you want to test the app without interacting with the main server, you can use local JSON files. In this post, I will show you how to import a local JSON file and how to parse the values in a react native application.

Create one react native app :

You can create one react-native application by using the below command :

npx react-native init MyProject

Create one JSON file :

By default, this project comes with a default template. We will place the JSON file in the root directory. Open the root directory, you will find one App.js file. Create one new JSON file example.json in that folder. Add the below content to this JSON file :

{
  "name": "Alex",
  "age": 20,
  "subject": [
    {
      "name": "SubA",
      "grade": "B"
    },
    {
      "name": "SubB",
      "grade": "C"
    }
  ]
}

It is a JSON object with three key-value pairs. The first key, name, is for a string value, the second key, age, is for a number value and the third key, subject, is for an array of JSON objects. We will parse the content of this array and show it in the application UI.

JSON parsing :

We can import a JSON file in any javascript file like import data from './example.json'.
Open your App.js file and change the content as below :

import React, { Component } from 'react';
import { Text, View } from 'react-native';
import exampleJson from './example.json';

export default class App extends Component {
  render() {
    return (
      <View>
        <Text>{exampleJson.name}</Text>
        <Text>{exampleJson.age}</Text>
        {exampleJson.subject.map(subject => {
          return (
            <View key={subject.name}>
              <Text>{subject.name}</Text>
              <Text>{subject.grade}</Text>
            </View>
          );
        })}
      </View>
    );
  }
}

Explanation : Here,

- We are importing the .json file as exampleJson.
- We can access any value in that JSON file using its key name.
- The third key, subject, is a JSON array. We are iterating through the items of this array using map().

Output : Run it on an emulator and the name, age and subject values from the JSON file are rendered on the screen.
https://www.codevscolor.com/react-native-parse-json
08 May 2012 13:47 [Source: ICIS news]

LONDON (ICIS)--However, March's increase in building and construction – up by 30.7% – was largely due to catch-up effects after harsh weather in February, the ministry said. March's increase in productive output came after a revised 0.3% decline in February from January. On a two-month sequential comparison – February-March versus December-January –

Also on Tuesday, Germany's central bank, the Bundesbank, reported that, based on preliminary data, the index for Germany's chemicals and pharmaceuticals production rose to 108.8 points in March, from 105.5 points in February. For the first quarter of 2012, the index stood at 106.6 points, compared with a revised 105.9 points in the 2011 fourth quarter and 111.8 points in the 2011 first quarter.
http://www.icis.com/Articles/2012/05/08/9557452/germanys-productive-output-rises-2.8-in-march-from.html
Here's my code:

import java.util.Scanner;

public class Main {

    public static void main(String[] args) {
        int entry;
        Scanner keyBoard = new Scanner(System.in);
        System.out.print("Enter a number which you wish to square:");
        entry = keyBoard.nextInt();
        System.out.println("The number chosen is " + entry);
        predictSquare(entry);
    }

    public static int predictSquare(int entry) {
        int newAmount;
        newAmount = entry * entry;
        return newAmount;
        System.out.println("Cubed: " + newAmount);
    }

I'm getting a 'missing return statement' error on the predictSquare(). I have a return statement in there? What is going on here? I've been banging my head against a wall for a while now.
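The problem is the println placed after the return: no statement after a return can ever execute, so the compiler rejects the method (depending on the compiler version this surfaces as an unreachable-statement or missing-return error). Move the print above the return and it compiles. In this sketch a fixed value stands in for the Scanner input so it runs non-interactively, the print is moved to the caller, and the label is "Squared" since the method squares rather than cubes.

```java
// Fixed version: nothing may follow 'return' on the same execution path,
// so the result is printed by the caller instead of after the return.
public class Main {
    public static void main(String[] args) {
        int entry = 7; // in the original post this comes from keyBoard.nextInt()
        System.out.println("The number chosen is " + entry);
        System.out.println("Squared: " + predictSquare(entry));
    }

    public static int predictSquare(int entry) {
        int newAmount = entry * entry;
        return newAmount; // must be the last statement in the method
    }
}
```

Equivalently, the println could stay inside predictSquare as long as it appears before the return statement.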
http://www.javaprogrammingforums.com/whats-wrong-my-code/1035-stumped.html
JDeveloper Extension SDK (Comments) - Apache Roller - 2015-05-29

Re: Migrating an Existing Extension to JDeveloper 11gR2 -- Part One
John 'JB' Brock (2014-06-27):
Hi Oto, I'm sorry but I don't know that much about SQL Developer. While they are roughly based on the same platform as JDeveloper, not everything is the same. I would ask this question over on the SQL Developer forums.

Oto (2014-06-27):
… Thanks beforehand for your help.

Freydie (2014-02-25):
Thank you John - will look into those forums.

John 'JB' Brock (2014-02-25):
Hi Freydie, … I would take a look over in the SQL Developer forums. This looks like a good place to start: …

Freydie (2014-02-25):
I would be happy if you could help me :-) … I assume the namespace of the sql dev context menu should not be the jcp.org etc., and the site id-ref would also be suited to the db nav?? Thanks a lot in advance!
Freydie

Re: Declarative Menus in JDeveloper Extensions – Part Four
Alexis Lopez (2014-01-27):
Excellent post, thank you!

Re: Declarative Menus in JDeveloper Extensions – Part Two
John 'JB' Brock (2014-01-15):
Arix, glad to hear you worked it out. I wasn't able to get an answer internally before you figured it out yourself. Sorry about that. While SQL Developer is built on JDeveloper, the extension APIs can vary in a lot of places. I always recommend asking for help with specific SQL Dev issues in the SQL Developer forums. If an extension works properly in JDeveloper, but fails to function the same in SQL Developer, then it's most likely something like what you just found, and that will be answered the fastest in the SQL Dev forums.

Arix O. (2014-01-14):
Hi, I finally resolved the issue. In my extension.xml file I simply used className instead of action-ref. Now when I click on the submenu I added in my DB package context menu, it does the job. Have a good day. Sincerely, Arix.

Arix O. (2014-01-10):
Hi, I am using JDeveloper 11g on Windows 7 64-bit for building a SQL Dev extension. I used SQL Dev to see if the project deployed and ran properly. It seems like the action I defined won't link to the item I am targeting (the one in my custom submenu).

John 'JB' Brock (2014-01-10):
Hi Arix, can you please provide me with the version of JDev you are working with and the operating system that you are using? It sounds like you may be using SQL Developer and not JDeveloper.

Arix O. (2014-01-10):
Good morning. First of all, thank you for taking the time to explain how to implement extensions in JDeveloper. I am trying to implement one, but I am experiencing some issues when it comes to opening a custom window when I click on a submenu element I have created. More precisely, nothing happens.

Here is how my project is defined in JDev:

  MyProject
    Application Sources
      tools
        addSubmenuToASqlDevPackageOnClick.xml
        Command.java
        Controller.java
        SubMenuListener.java
        ToolsAddin.java
        JpanelToShowOnClickSubMenuelement.java
      META-INF
        extension.xml

Here is my submenu when I click on a package in SQL Dev:

  MySubmenu
    ShowFileChooserWindow

Let me know if you want me to post sources. Thanks in advance.

Re: Adding files to an existing project
guest (2013-09-10):
Hi, … Thanks

Re: Migrating an Existing Extension to JDeveloper 11gR2 -- Part Two
guest (2013-07-25):
I have added a Tools menu item via a menu bar hook, which is used to load and initialize the custom extension I am migrating for 12.1.2. Upon initializing, the extension creates a custom workspace and adds it to the JDeveloper content pane. …

John 'JB' Brock (2013-07-24):
Hi Apurba, questions like this are probably best asked and answered in the forums instead of in comments of the blog. I would re-ask this question over there. While you're moving over there, though, I would first ask if the extension loads at all. Placing this tag in the triggers section is not going to actually trigger the extension load. What are you using as a trigger-hook?

Apurba (2013-07-24):
I am migrating an extension from 11gR1 to 12c (12.1.2). I am not able to make code templates work on JDev 12.1.2. In 11.1.1.5 we have the code template file under the META-INF folder, and in the extension.xml file we have the following entry under hooks:

  <code-template-hook>
    <templates-url>/meta-inf/code-templates.xml</templates-url>
  </code-template-hook>

For 12c I am doing the same, other than adding the code-template-hook under the triggers section. But it's not working. Is there something more I need to do?

Re: Setting up a Fusion Applications Development Environment
John 'JB' Brock (2013-07-22):
Hi David, this topic is probably best asked and answered over in the JDeveloper forums. Blog comments are not a really great way to do support stuff. ;-) Please post your question over in the forums and I'll make sure someone picks it up.

David (2013-07-22):
Hi, John. I know that Eclipse has the ability to develop plugin projects which are OSGi bundles and can be exported and imported. Do you know how to use JDeveloper to develop plugin projects which are organized by OSGi and can depend on each other? I have created two extension projects, A and B, but cannot use A as a plugin and let B import A. Could you give detailed directions on how to develop plugins which are depended on by other plugins? Thank you!

Re: Adding External Libraries to an Extension in 11gR2
John "JB" Brock (2013-07-02):
I don't believe that just migrating your older 10g project over to 11gR2 is going to work for you. I would suggest creating a new "Extension" project in 11gR2 and copying your code over to that project. It will get all of the appropriate files in place and will be much easier to maintain going forward. Once you have done that, you should be able to add your external file reference to the manifest.mf file and build your extension bundle properly.

guest (2013-07-02):
Hello JB, I am deploying my EJB jar that needs the Jasper libs included, but I am unable to include them in the deployment. I am using Oracle 11gR2; the project was migrated from 10g and I have no manifest file. What should I try? I tried trial and error with the deployment profile, but no luck. Thanks.

Charly Regensburger (2013-06-02):
Hi John, thanks for your answer. … It does work, as I mentioned, when I deploy it to the extension directory of my SQL Developer. …?

John "JB" Brock (2013-06-01):
Hi Charly, the manifest file is only used with 11gR2. In your case, you need to add a dependency in the extension.xml file to wherever you are going to place the library when your extension is deployed. You can go to the extension.xml file in the editor and add a dependency via the visual editor, or you can manually edit the extension.xml file yourself. A reminder that while SQL Developer is based on the JDeveloper framework, they do have their own extension development process, and not everything that can be done to extend JDeveloper will work, or work the same, with SQL Developer.

Charly Regensburger (2013-06-01):
Hello, actually I'm working with 11gR1 (11.1.1.4) and I'm developing an extension for SQL Developer. I need an embedded DB, but I am unable to load a DB driver (SQLite 3.7.2) because there is no class file available after deploying. I guess the error has something to do with the way I added the external sql-lite.jar. I already added the sql-lite.jar in the project properties (Libraries and Classpath) and "Deploy by Default" is set. I also tried to add it the way you described with the manifest file, but it doesn't work. Any suggestions how I can solve that problem? PS: I tried another DB library as well (H2 DB), but it didn't work either.

Re: Don't fear the Audit -- Part 1
mayank (2013-05-17):
Hi, I have to create an audit rule which will check that class names start with a capital letter. I am using JDeveloper 11.1.1.6; please provide me a solution.

Re: Setting up a Fusion Applications Development Environment
John 'JB' Brock (2013-05-13):
Thanks Orthon, I'm going to send you over to the Fusion Apps forum to get more eyes on the issue. There is an entire team set up to help Fusion Apps customization developers now. There is also a new blog. Hopefully, between these two resources, you will be able to get things smoothed out and working correctly on a regular basis.

Orthon (2013-05-13):
Thanks for your reply, John. But I really don't have a path with a space, and I'm using Python 2.7.4. Here is the cmd file used for starting JDev:

  set MW_HOME=C:\Develop\Oracle\Fusion
  set JAVA_HOME=%MW_HOME%\jdk160_24
  set PATH=.;C:\Develop\Python27
  set JDEV_USER_HOME=C:\Develop\Oracle\JDevUserHome
  set USER_MEM_ARGS=-Xms256m -Xmx1024m -XX:MaxPermSize=512m -XX:CompileThreshold=8000
  %MW_HOME%\JDeveloper\jdev\bin\jdev

John 'JB' Brock (2013-05-13):
The only thing that comes to mind is that you may have everything installed in a path with a space in it. The WLS installer doesn't like spaces in the path. I know, it's crazy in this day and age, but it's there. You may also want to make sure you are using Python version 2.7.x or higher, but NOT Python v3.x.

guest (2013-05-11):
Hi John, …?

Re: Cleaning up after yourself -- Deleting test extensions
Jan Vervecken (2013-05-10):
Hi John, see also JIRA issue ADFEMG-131, "uninstalling a JDeveloper extension".
Regards, Jan Vervecken

Re: Migrating an Existing Extension to JDeveloper 11gR2 -- Part One
John 'JB' Brock (2013-04-22):
I'm not sure what you mean by "exporting jar". Are you trying to deploy the extension to another JDeveloper instance? You will want to look at the blog post about packaging an extension for distribution and use by others.

guest (2013-04-22):
…
https://blogs.oracle.com/jdevextensions/feed/comments/atom
Stephan T. Lavavej - Core C++, 4 of n - Posted: Aug 22, 2012 at 6:00 AM - 63,088 Views - 36 Comments

In part 4, Stephan teaches us about Virtual Functions. In parts 1-3, we learned about compile-time constructs. Now, we enter the realm of runtime. STL spends some time discussing inheritance and a bit about access control. Tune in. Learn.

See part 1: Name Lookup
See part 2: Template Argument Deduction
See part 3: Overload Resolution

… for your work. Wow!! Thank you for this lecture video. Yesterday we got the first C++ && Beyond 2012 video, and today Stephan. Life is being kind to native developers... long may this state of affairs continue.

Thanks as always, Stephan, for another awesome video. I hope this makes C++ more intuitive and gives it a new life. I often hear people saying C++ is dead, but after listening to Herb Sutter's argument about the future of C++, C++11 is the way to go. Keep up the good work. I hope C++11 has more support for weak memory models.

What surprised me most about virtual functions is that a pure virtual function can have an implementation/body (but not at the same place as its declaration(?)/definition(?)).
Marek

Seems like a long time since the 3rd part. Great show!

I would still like to see that "parameter passing best practices" episode. But if it's gotta be core C++, how about "sequence points are dead, long live the sequenced-before relationship"?

Thanks Stephan, for another great lecture. Too bad time ran out, because it would have been nice to show that the NVI idiom also applies to static polymorphism using the CRTP pattern with a class template `Base<Derived>`. Here you can explicitly downcast the this-pointer in the `Base` class and access a `Derived` class's private implementation, once again showing that access control is orthogonal to visibility.
Is there a transcript and/or write-up of the lectures for reference purposes? Failing that, how about time tags on the video? These mindblowingly valuable lectures require proper indexing.

Great stuff again, thanks. I find I use NVI a lot also, although I never realised it had a common name. Usually this is because I want to make sure that a specific piece of code is always called before (or after) the part of the behaviour which is customisable by the derived class implementor. So a trivial example would be if I want to allow derived classes to customize a function, but I also want to ensure that the function's argument values are always logged to a file before calling the 'real' code. The only annoying part of this pattern is that you have to come up with a slightly different function name for the virtual function... not a big fan of prepending 'do', although the other convention I've seen is *appending* 'Ex', which is also a bit rubbish!

Thanks for watching, everyone. That's what keeps me going!

vsbrar> I hope C++11 have more support for weak memory models.

C++11's <atomic>, which we shipped in VC11 RTM, fully supports weak memory models like ARM's. The atomics wisely default to sequential consistency, but if you want relaxed/acquire/release/etc. semantics, they are available.

Marek> What surprised me most about virtual functions is that pure virtual function can have implementation/body

Correct. "Pure" means "must be overridden" (if you want to get a concrete class).

yanshuai> Seems like a long time since the 3rd part.

There'll be another gap between this and Part 5 - I'll be on vacation for a couple weeks.

NotFredSafe> I would still like to see that "parameter passing best practices" episode

That definitely counts as part of the Core Language - the only thing I am avoiding in this series is directly covering the Standard Library. I basically do episodes when I feel inspired enough about a topic to talk about it for 45+ minutes in a single take.
Now that I'm basically done with "how a function call works", I can turn my attention to other matters.

> But if it's gotta be core C++, how about "sequence points are dead, long live the sequenced-before relationship"?

That is exceedingly subtle, and has almost no effect for single-threaded code. I figure that anyone experienced enough to make sense of the topic doesn't need me to explain it to them.

rhalbersma> Too bad time ran out

I'll never have enough time to cover every area of a topic (I could probably talk about the integral types for 3 solid hours), so my goal is to cover the basics and sketch out the structure of things so people are prepared to learn more, especially as they encounter issues similar to what I've talked about.

Philhippus> Is there a transcript and/or write-up of the lectures for reference purposes? Failing that how about time tags on the video?

Every episode is an undiluted stream of consciousness (mine is filled with cats meowing), but maybe Charles has some resources here.

GlassKey> The only annoying part of this pattern is that you have to come up with a slightly different function name for the virtual function...

The Standard conventionally uses a "do_" prefix. If I saw an "Ex" suffix, I would expect that (like in the Windows API) it referred to an extended form of an ordinary function (in C++ this is properly achieved by overloading, but C doesn't have that mechanism).

The override keyword is a nice solution for the case where I think I override but I don't. If I need to change the signature of the function in the base class, I wish I could rely on the compiler to give me an error for all the derived classes that are now broken (which it will if override is used everywhere). However, I can't rely on that unless I know the override keyword is used everywhere... I wish we had a compiler warning when the override keyword is missing. Something we might get in Visual Studio someday?
Cool lecture, though vector<shared_ptr<>> made me go :/ - because of the performance of shared_ptr, especially if there are many of them. BTW Stephan, IDK if you can comment, but FB released folly (the parts that I want to ask you about are optimizations of STL - does VS do something similar to the stuff that folly does). To save your time, I extracted the cool STL features:

1. (the allocator here is malloc(), if I get it correctly) "It doesn't take a rocket surgeon to figure out that an allocator-aware std::vector would be a marriage made in heaven: the vector could directly request blocks of 'perfect' size from the allocator so there would be virtually no slack in the allocator. Also, the entire growth strategy could be adjusted to work perfectly with allocator's own block growth strategy."

2. "In order to allow fast relocation without risk, fbvector uses a trait folly::IsRelocatable defined in 'folly/Traits.h'. By default, folly::IsRelocatable::value conservatively yields false. If you know that your type Widget is in fact relocatable, go right after Widget's definition and write this:

  // at global namespace level
  namespace folly {
    struct IsRelocatable<Widget> : boost::true_type {};
  }
"

I guess you can't use 2. in the STL, but you can use it for MS libs that use the STL. Regarding the memcpy trick, I was wondering... is it possible using TMP to determine if a user class is memcpyable (by making it memcpyable if every member is memcpyable, so class X { vector<string> vs; int y; } is memcpyable)? Again, great lecture. One cool thing would be auto. I know people think it is super simple, but actually it isn't. Also, you could reshow your trick question regarding for each and auto from the GN conf. :D

Vincent> I wish we had a compiler warning when the override keyword is missing.

That would warn about the whole world, at least at first.

Ivan> Cool lecture, though vector<shared_ptr<>> made me go :/ - because of the performance of shared_ptr, esp if there are many of them.
shared_ptr isn't free, but it isn't horribly inefficient.

> "an allocator-aware std::vector would be a marriage made in heaven"

Our std::vector and std::allocator (powered by new/malloc/HeapAlloc) don't conspire together. That could improve performance, but I wouldn't expect it to lead to dramatic gains.

> is it possible using TMP to determine if a user class is memcpyable

std::is_trivially_copyable does this.

> class X { vector<string> vs; int y; } is memcpyable

It is not - vector has a nontrivial copy constructor.

> Again great lecture, one cool thing would be auto.

That's a good idea, thanks. I'll think about it.

> That would warn about the whole world, at least at first

That would be for people who enable the warning, which means it'd be opt-in... I don't see that as a bad thing... Those warnings are pretty mechanical to fix: assuming the code is otherwise correct, all you have to do is add "override" on the line. It gets you to a state where a class of mistake can't happen:

- adding a method to a derived class without realizing it already exists as virtual on the base class (so overriding it when it's not intended)
- adding a virtual method on a base class when one of the derived classes already has a function with the same signature
- changing the signature of a function on the base class and forgetting to update the derived class (this is what override is designed to fix, but how can I rely on it as the author of the base class if the keyword is optional? I could with a warning...)

@STL thanks for the answers...

1) Regarding vector and memcpyable: what I wrote was wrong; I was thinking about swap and move. For example, if I now call std::sort on a vector whose elements are class X { vector<string> vs; int y; }, is
a) the compiler smart enough to figure out that swap can be done by memcpy?
a1) if not, can the programmer tell it that without manually writing a sort function?
2) Regarding auto, if you do it, please mention:

  const vector<double>& fun();
  auto x = fun();  // x is not a const ref

Great video, thanks a lot! One question about slicing: You suggest forbidding any assignment operator and copy constructor in the polymorphic base class. However, how would you in that situation implement the derived class's copy constructor and assignment operator? I was always under the impression that slicing was precisely required to say Base(rhs) and Base::operator=(rhs); in the derived implementations.

Ivan> For example if I now call std::sort on vector whose elements are class X { vector<string> vs; int y; }

C++11's rvalue references v3 (not implemented in VC11 RTM) will automatically generate move ctors/assigns for X. Then swap() and sort() will automatically take advantage of them.

KerrekSB> However, how would you in that situation implement the derived class's copy constructor and assignment operator?

You don't; they remain disabled. The solution here is a virtual clone() function, which you can invoke on a Base * and get a properly-cloned Derived.

@STL: Thanks for the lecture! // Awesome as always

Regarding the NVI -- what are some good examples where we'd prefer "protected" access over "private" (and vice versa)? I guess it boils down to choosing between protected-virtuals [23.3] and private-virtuals [23.4] in the C++ FAQ, but unfortunately there's no direct comparison in there. Another question -- is NVI the same as or different from the Template Method pattern? One source calls the TM pattern "more generic" (without specifying what's more generic about it), while another seems to show TM as exactly the same thing.

Great lecture, thanks Stephan.

@STL: "C++11's rvalue references v3 (not implemented in VC11 RTM) will automatically generate move ctors/assigns for X. Then swap() and sort() will automatically take advantage of them."
I had a really weird experience playing around with poor man's v3 RVRs (memcpy) (VC11 RC) and hit really weird problems, so I was wondering if you could explain it (for understanding-of-VS purposes, not to debug my code; like I said, I did it for fun). So, to recap the code comments:

1. In release mode, the memcpy-specialized swap makes the code a LOT faster.
2. In debug mode with the specialized swap, I get runtime errors - if I were forced to guess: the debug runtime machinery doesn't deal OK with the memcpy swap implementation. :D
3. In debug mode, by manually implementing the ctor, the error goes away (this is really, really weird).

Ignore the lack of functionality of the code; it is just some random operations in the ctor... to make the elements different from each other and to make debugging easier. Disclaimer: I reviewed my code to double-check that it isn't some stupid mistake, but still it is possible that is the cause.

Awesome lecture, really gave me insight into the way to structure things. I would be very interested in further discussion about avoiding inheritance for the sake of it. I see a lot of C++ developers that do "Java C++", as I call it, with huge boilerplate class hierarchies when something much more elegant could be conceived. I find it hard to find C++ resources that don't advocate this "inheritance based programming", and I'd love to hear your best practices about that.

These Core C++ videos aren't being posted on the Visual C++ blog now? I see just 2 videos of this listed there. Thanks to Google for linking me to this link :)

Matt_PD> Regarding the NVI -- what are some good examples where we'd prefer "protected" access over "private" (and vice versa)?
Of course, if the base implementation is protected, pure, and implemented, a derived class can override it and just call the base implementation.) > Another question -- is NVI the same as or different from the Template Method pattern? I don't really pay attention to design patterns as I don't find them to be especially useful. I've heard that the NVI is one way to implement the "Template Method" pattern, which sounds right to me. Ivan> I had a really weird experience playing around with poor mans's v3 RVRs(memcpy) That's incorrect and dangerous. In the absence of rvalue references v3, you should write memberwise moves (with std::move). In C++98/03/11 (the terminology has changed, but the principles have remained the same), only PODs (Plain Old Data) can be bit-blasted with memcpy(). Classes with copy constructors/etc. are "non-POD" and attempting to memcpy them triggers undefined behavior. std::vector is definitely non-POD and therefore you cannot memcpy it. Chewie> I find it hard to find C++ resources that don't advocate this "inheritance based programming", and I'd love to hear your best practices about that. See Effective C++, Third Edition by Scott Meyers. It has a whole chapter on inheritance, and an item (38, looking at the table of contents - my copy is at home) dedicated to when you should use composition instead of inheritance. In general, I use inheritance when I need the one thing it does uniquely and well: runtime polymorphism. Otherwise, I'll reach for other C++ mechanisms first. For example, templates are better at compile-time polymorphism. What "compile-time polymorphism" means is when I want to write something that can work with many different types, but those types are all known at compile-time. The STL is the most obvious and famous example of this. vector<T> is a container of an arbitrary type T, but that type can be specified at compile-time. 
Making it a template allows it to handle these arbitrary types, without incurring any runtime overhead (everything is stamped out by the compiler and then inlined away, ideally).

gh0st0nride> These Core C++ videos aren't being posted on the Visual C++ blog now?

I write those link-posts when I have time (I just got back from 2 weeks of vacation). I'll try to remember to write them for videos in the future.

Here's a wish for a future lecture, which doesn't fit into the current Core C++ series, but which I would like to get your insights on anyway. How about showing us your development setup and routine? So e.g. teach us about your VC editor/compiler/linker settings, any shortcuts/plugins/external libraries you often use, perhaps some advanced debugging tricks, precompiled headers, etc. etc. In short, anything that would increase productivity/build times for the rest of us.

Hi! I'm sorry if I'm too off-topic here, but I like C++ and there is something on my mind: the MSVC C++11 compliance is far too weak, and I have to admit that I expected more for VC11. I love the things the standard committee guys did to the language, and I am trying to incorporate the new features into my way of thinking in C++ instead of viewing them as extensions. That works really well with GCC or Clang, but when it comes to Visual Studio, I have this moment way too often where I think "WTF? Why does this $41T not compile?!!". I'm sort of an open source guy and I'm all for supporting multiple platforms and compilers, and I believe it's healthy to compile your code with several compilers, but I have arrived at the point where I'm just about to drop support for MSVC in all my programs, regard it as broken, and use MinGW for Windows compilation instead.

@STL: Are your core compiler colleagues finally working on C++, or do they still have "other assignments"? Can you make a guesstimation of when the "in-between-release" of the compiler that Herb Sutter mentioned in GoingNative will arrive?
How do you find this situation? I could imagine you'd quite love to see more features to play with (variadic templates maybe?).

If overloaded operators are just syntactic sugar, please explain why there is a difference in the following code:

  #include <iostream>

  class X {
  public:
      X& operator%(const X&) { std::cout << "member fun\n"; return *this; }
      void f();
  };

  X& operator%(X& a, int) { std::cout << "global fun\n"; return a; }

  void X::f() {
      X& a = *this;
      a % 1;            // calls non-member function
      operator%(a, 1);  // compiler error
  }

Thanks in advance.

@gaya: if you recall episode 1 (name lookup), it should be clear that once the compiler finds operator% (even one with a different signature, which will be resolved during overload resolution) in the scope of class X, it stops looking for any other operator% in enclosing scopes. To call the free operator%, you can use ::operator%(a, 1);

I filmed Part 5, "Specializations", on Tuesday! :->

rhalbersma> How about showing us your development setup and routine?

You'll see some of this in Part 6. My environment is very austere - I use a plain text editor and I drive the compiler from the command line. (When building test cases, I manually invoke "cl /EHsc /nologo /W4". When building the compiler and libraries, I use our internal build system on the command line. At home, I use makefiles on the command line.)

Alexander> Are your core compiler colleagues finally working on C++, or do they still have "other assignments"?

The compiler team is working on C++11 Core Language features right now. In Part 6, I'll show off an internal build of the compiler (driven from the command line, since IDE integration is a separate chunk of work). I'd love to say which features are being implemented, but I'm not allowed to do that yet.

> Can you make a guesstimation of when the "in-between-release" of the compiler that Herb Sutter mentioned in GoingNative will arrive?

We can't ever talk about release dates before they're publicly announced, sorry.
> How do you find this situation? I could imagine you'd quite love to see more features to play with (variadic templates maybe?).

I've often mentioned that as an STL maintainer, my work is greatly complicated by the lack of variadic templates. We've been imitating them with clever-but-horrible macro schemes since the VC9 Feature Pack.

gaya> if overloaded operators are just syntactic sugar

They are syntactic sugar for ordinary function calls, but they obey various special rules, especially for overload resolution. rhalbersma is correct, but to explain further:

Given "a % 1", N3376 13.3.1.2 [over.match.oper]/3 explains that "For a unary operator @ with an operand of a type whose cv-unqualified version is T1, and for a binary operator @ with a left operand of a type whose cv-unqualified version is T1 and a right operand of a type whose cv-unqualified version is T2, three sets of candidate functions, designated member candidates, nonmember candidates and built-in candidates, are constructed as follows:". That's why both the member and non-member functions are considered by overload resolution (as a programmer would naturally expect).

Given "operator%(a, 1)", which is written as an ordinary function call, 3.4.1 [basic.lookup.unqual]/8 notices that you're in a member function body, so as soon as it finds X::operator% it stops. In general, C++ really likes calling member functions, which is why the unqualified name lookup rules are written like this.

The Argument-Dependent Lookup rules also follow this preference - both unqualified lookup and ADL are performed, and both sets of declarations found are tossed into the overload resolution arena, *except* that if unqualified lookup finds a class member, the ADL set is emptied out (3.4.2 [basic.lookup.argdep]/3 has the full story and lists a couple of other conditions). This is actually relevant in your scenario. Given the arguments (a, 1), the argument a is of type X, which activates ADL.
X lives in the global namespace, so ADL would search the global namespace and find operator%(X&, int) there - except that ADL has been disabled because unqualified lookup found a class member.

Dear @STL, thanks a lot for YOUR comprehensive explanation.

Great video again! Charles, any idea when the next video will be uploaded?

Thanks for all the very informative shows. Not sure if this is the best place, but I wanted to suggest std::function and related topics for a show. I have just started replacing a bunch of old-style functor stuff with this and have run into a lack of documentation; for instance, answering a question like "what does a std::function actually store, and how does that relate to copying/assignment of std::function objects" seems hard to find out without reading C++ library code, which is not the easiest code to read!

davidhunter22> I wanted to suggest std::function and related topics for a show.

I'll try to remember that for when I start doing videos on the STL again. If I forget, remind me. :->

> I have just started replacing a bunch of old style functor stuff with this

std::function is extremely useful, but it shouldn't be overused. For example, if you can template an algorithm on functor type, that is more efficient than having it take a std::function (which has inherent overheads).

> "what does a std::function actually store

Magic. But really, it just stores a copy of the functor you've given it.

> and how does that relate to copying/assignment of std::function objects"

The stored functor, including any state within, will be copied/assigned.

> seems hard to find out without reading C++ library code which is not the easiest code to read!

You shouldn't look at the guts of our implementation. If MSDN seems insufficient, you can look at the Working Paper for maximum accuracy.

Tominator2005: This compiles for me with VC11 RTM. Send me a self-contained repro (like this) at stl@microsoft.com and I'll take a look.
@STL: Great stuff as usual, I love this core C++ series, please keep it coming! Best Regards, Russell
http://channel9.msdn.com/Series/C9-Lectures-Stephan-T-Lavavej-Core-C-/Stephan-T-Lavavej-Core-C-4-of-n?format=progressive
An introduction to Vue.js - Chapter 2 - Components (Part I)

Moritz Schramm

Foreword

Before we start, some short information: before you read the second chapter, please read the first one so that you have a basic setup we can work with. Thank you :)

I will always upload the code to this github repository.

Some of you asked me why I do not use "Single File Components" (.vue files). I decided to write a special chapter about that whole topic, show you how to configure your project to make use of them, and tell you my opinion about this.

Today's chapter will be more theory and less writing code.

Components

Components are one of the main parts, or even the main part, of Vue.js. But what actually is a component? Let me check Wikipedia for you.

Web Components are a set of features [...] that allow for the creation of reusable widgets or components in web documents and web applications.

That is the basic definition of Web Components in the context of the W3C specs, but it basically applies to Vue components as well. They are reusable widgets that you can use in your app. A widget could be a navigation, a list, or even a simple button.

I personally prefer naming my components with small letters and putting all files which belong together into one folder. In Vue every component needs to have at least two things:

- A name (obvious)
- A template (the rendered DOM that belongs to each component)

Let's take a look at the .js file from our last chapter:

    import template from './hello.html';

    export default {
        name: 'vg-hello',
        template
    };

We imported a template from a .html file and we exported an object with two key-value pairs. The two keys are name and template (if you are not familiar with the shorthand object property notation, have a look here). I prefixed my component name with vg- since it makes the work a lot easier when using third-party components.
Later I will show you how to use those components in other components. There, the name will be equal to the tag we use in the DOM.

Now let us have a short look into our current .html file:

    <h1>Hello World</h1>

Here we see the DOM which is rendered instead of the tag; when the component is the root component (as it is for now), it replaces the mounted element in the initial DOM.

Reminder: our index.html currently looks like this:

    <!DOCTYPE html>
    <html lang="en">
    <head>
        <meta charset="utf-8" />
        <title>Vue Guide</title>
    </head>
    <body>
        <div id="app"></div>
        <script src="build.js"></script>
    </body>
    </html>

If we now start our app, open the dev tools and look at the DOM tree, we should see this:

    <!DOCTYPE html>
    <html lang="en">
    <head>
        <meta charset="utf-8" />
        <title>Vue Guide</title>
    </head>
    <body>
        <h1>Hello World</h1>
        <script src="build.js"></script>
    </body>
    </html>

The direct replacement of the element, and later of the custom tags, is something that I really like within Vue.js (Angular 4, for example, renders the custom tags into the DOM).

Button

Now let us create another component, a simple button. We will use this button in the next chapters and it will develop more and more over time. For now our button should be:

- The HTML button tag
- Has the attributes class="button" and role="button"
- Has the text Click me!

Let us start with the template (src/components/button/button.html):

    <button role="button" class="button">Click me!</button>

This should be easy to understand. We have our button tag with class, role, and the expected text. Now we need to define the .js file (src/components/button/button.js):

    import template from './button.html';

    export default {
        name: 'vg-button',
        template
    };

I imported the template, named the button component vg-button, and exported both. That is everything we need to do.

I will show you now how to use those components in other components. There are two ways and I will show you both.

Register the component on a global level
For that we need to add some lines to our main.js:

    import button from 'app/components/button/button';

    Vue.component(button.name, button);

Your main.js could now look like this:

    import button from 'app/components/button/button';
    import hello from 'app/components/hello/hello';
    import Vue from 'vue';

    Vue.component(button.name, button);

    new Vue({
        render: (h) => h(hello)
    }).$mount('#app');

To use the button component in our hello component, I adapt the hello.html:

    <div class="app">
        <h1>Hello World</h1>
        <vg-button />
    </div>

As you can see, I added an additional <div> around both elements, since Vue requires exactly one root element per component. If you now build and open your app, you should see the button. It has no functionality for now, but it should be there. You can also add more:

    <div class="app">
        <h1>Hello World</h1>
        <vg-button />
        <vg-button />
        <vg-button />
    </div>

Now you should even see three. They should all have the same DOM and the same inner text, and should all do nothing.

Register the component on a local level

That is basically the way I prefer, since it is much easier to test the rendered DOM in unit tests (we will see that in later chapters). I will use this way in the next chapters, but I will not force you to use it (as always).

For that you need to adapt your hello.js. We need to import the component and then export the used components:

    import button from 'app/components/button/button';
    import template from './hello.html';

    export default {
        name: 'vg-hello',
        template,
        components: {
            [button.name]: button
        }
    };

As you can see, I added a new property to my object containing the used components. If we now use the same HTML in our template as before, the button should still be there without registering it globally.

Done

I hope you like the guide. If you have any questions, ask them on Twitter or in the comment section. I will try to answer as much as possible. I am happy about any feedback. The next chapter will come in the next days.
https://dev.to/neradev/an-introduction-to-vuejs---chapter-2---components-part-i
Output from ComboBox

I need to derive a class from ComboBox and change its Items property. Here is my code:

    public class MyComboBox2 : ComboBox
    {
        private MyObjectCollection MyItems;

        public MyComboBox2()
        {
            MyItems = new MyObjectCollection(this);
        }

        new public MyObjectCollection Items
        {
            get { return MyItems; }
        }
    }

    public class MyObjectCollection : ComboBox.ObjectCollection
    {
        public MyObjectCollection(ComboBox Owner) : base(Owner)
        {
        }

        new public int Add(Object j)
        {
            base.Add(j);
            return 0;
        }
    }

As you can see, I am creating a new class MyComboBox2 derived from ComboBox. This class is expected to have a new Items property of type MyObjectCollection, not ComboBox.ObjectCollection. I have a ComboBox named myComboBox21 of type MyComboBox2 on my form. When I want to add a new object to my ComboBox, I execute code like this:

    myComboBox21.Items.Add("text");

In this case, my own MyObjectCollection.Add method is executed. However, the ComboBox on the form does not contain the "text" value. I am attaching a screenshot of the debugger showing the ComboBox values. myComboBox21 contains an Items property (which contains "text", as shown in the screenshot "2.png") and it contains base.Items (which does not contain "text", as shown in "1.png").

So apparently myComboBox21 contains its own Items property (into which I can insert) and its base class's Items property, which is what is displayed on the Windows form. What can I do to successfully add my own method to the ComboBox? Since my ComboBox has two Items properties, can I specify which Items property's values should be displayed in the ComboBox?

Just by looking at the code very quickly: the object's original indexer is declared as

    virtual Object this[int index] { ... }

Could the new keyword be exchanged for override in your implementation, so that the runtime dispatches to your code?
https://daily-blog.netlify.app/questions/2163802/index.html
I assume that you know basic C++ and have read and understood all previous lessons.

How Translations Work

So, how do translations work? Well, generally, we only move one thing... the origin. Yes, the origin. Alright, here we go. Every point in OpenGL has two separate coordinates. By this, I mean a point can be at (3, 5, 2) and (-7, 1, 5) at the same time. How? Well, there are two types of coordinates. These are global and actual. Global coordinates never change for a particular point (the global origin never changes in any way), but actual coordinates are constantly changing. Still with me? If not, go back and read over again.

The actual origin can be changed through functions such as glTranslatef(), glRotatef(), and glScalef(). Take a look at this diagram:

The red dot is the actual origin. The global origin is where the X and Y axes meet. Because the actual origin is in the same place as the global origin, the actual and global coordinates are the same for the point at (2, -1).

Here the actual (above) and global (below) origins are in different places, so the point over to the right has two different coordinate pairs (we are ignoring the Z axis and coordinates at the moment for simplicity's sake).

But... what if I want to make different objects move separately? For instance, what if I wanted to make something spin clockwise as another spins counterclockwise as another doesn't spin but goes around the perimeter of the window? Well, for this we use our good friends, glPushMatrix() and glPopMatrix(). glPushMatrix() saves the transformation state and glPopMatrix() restores it. Quick, look in the mirror!
Does your face look like this:

The Code

    #include <gl/glut.h>

    using namespace std;

    int angle = 0;

    void init(int argc, char** argv) {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
        glutInitWindowSize(600, 600);
        glutCreateWindow("First Look at Motion");
        glEnable(GL_DEPTH_TEST);
        glEnable(GL_COLOR_MATERIAL);
        glClearColor(0.7f, 0.0f, 1.0f, 1.0f);
    }

    void handleResize(int w, int h) {
        glViewport(0, 0, w, h);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluPerspective(45.0, (double)w / (double)h, 1.0, 200.0);
    }

    void draw() {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();

We start this code out the same as ever, except we add in a new variable, angle.

        glTranslatef(0.0f, 0.0f, -5.0f); //Move forward 5 units
        glPushMatrix();                  //Save transformation state

Here we just move forward 5 units, then save the transformation state. The translation is not really necessary, but I like (x, y, 0) much better than (x, y, -5).

        glScalef(0.5f, 0.5f, 0.5f);         //Scale by .5 in all directions
        glRotatef(angle, 0.0f, 0.0f, 1.0f); //Rotate on Z axis
        glTranslatef(0.0f, -3.0f, 0.0f);    //Move to center of square
        glRotatef(angle, 0.0f, 0.0f, 1.0f); //Rotate on Z axis

The square was a bit big in the last lesson, so I scaled it down to half its original size. Note, we aren't really doing anything to the square, just scaling the actual origin so that it has an effect on the square. We then rotate the square on its Z axis, around the global origin (0, 0, 0). Then, we move the actual origin to the center of the square, where we rotate it again.

        glBegin(GL_QUADS);
        glColor3f(0.85, 0.25, 0.25);
        /* the square's glVertex3f calls were lost from the source */
        glEnd();

        glPopMatrix();  //Restore transformation state
        glPushMatrix(); //Save transformation state

We draw the square, same as always, followed by a call to both glPopMatrix() and glPushMatrix(). This is just restoring and saving the original transformation state so that the transformations will not have an effect on the triangle.
Note: calls to glPopMatrix() and glPushMatrix() cannot be made within calls to glBegin() and glEnd().

        glRotatef(angle * 3, 0.0f, 1.0f, 0.0f); //Rotate on Y axis

This rotates the triangle on its Y axis. The "*3" is just to make it spin 3 times faster than normal.

        glBegin(GL_TRIANGLES);
        glColor3f(1.0, 0.6, 0.0);
        glVertex3f(0.0f, 0.5f, 0.0f);
        glColor3f(0.5, 0.5, 1.0);
        glVertex3f(-0.5f, -0.5f, 0.0f);
        glColor3f(1.0, 1.0, 1.0);
        glVertex3f(0.5f, -0.5f, 0.0f);
        glEnd();

        glPopMatrix(); //Restore transformation state

        glutSwapBuffers();
    }

To finish our draw() function, we draw the triangle, restore the transformation state, and send the scene to the screen.

    void update(int value) {
        //Makes motion continuous
        angle += 1;
        if (angle > 360) {
            angle = 0;
        }
        glutPostRedisplay();         //Redraw scene
        glutTimerFunc(5, update, 0); //Call update in 5 milliseconds
    }

Here is our function to cause continuous motion. We add to the angle, change it to 0 if it becomes too high, and redraw the scene. We also call update again (hooray for recursion!) in 5 milliseconds.

    int main(int argc, char** argv) {
        init(argc, argv);
        glutDisplayFunc(draw);
        glutReshapeFunc(handleResize);
        glutTimerFunc(5, update, 0); //Call update 5 milliseconds after program starts
        glutMainLoop();
        return 0;
    }

Same as always, except for a call to update 5 milliseconds after the program starts.

I can't show you a picture of the output, but I highly recommend that you paste the code into a project and run it. It's actually pretty cool.

The next lesson will include:
- 3D shapes
- Lighting
- User input
http://www.dreamincode.net/forums/topic/117319-using-openglglut-to-create-3d-scenes/
note ikegami <p>Your namespace isn't polluted:</p>
<code>
use strict;
use warnings;
use mymodule;

print "My Test\n";
Func1();            # ERROR!
Func2("Fluffy", 5); # ERROR!
</code>
<p>But check this out:</p>
<code>
use strict;
use warnings;
use mymodule qw( Func1 );  # Import Func1
#use mymodule qw( Func2 ); # This won't work since
                           # Func2 is not in @EXPORT_OK.

print "My Test\n";
Func1();
Func2("Fluffy", 5); # ERROR!
</code>
<p>You should never export methods. As you can see, there's no reason to do so. All functions in a package are tied to the object blessed into that package, whether you use Exporter or not.</p>
<blockquote><i>and how can I prevent it as it seems to be allowing everything into the calling script's namespace?</i></blockquote>
<p>It's possible to disguise a function using lexicals and/or closures so that no one outside the package can call it, but there's no need to do so. Write proper documentation instead.</p>
https://www.perlmonks.org/index.pl/?displaytype=xml;node_id=439992
Opened 8 years ago. Closed 7 years ago. Last modified 5 years ago.

#11012 closed (fixed)

Memcached cache module fails when retrieving binary strings via cache.get, while forcing them to unicode

Description

E.g.:

    from zlib import compress
    cache_val = compress("sdf sdlf sdlfj sldkfj alsdkjf gallksr glasrljit rweioj tasdj gfapsdopjof ps")
    cache.set('key', cache_val)
    res = cache.get('key')

    DjangoUnicodeDecodeError: 'utf8' codec can't decode byte ... in position ...: ...

In [5718], ticket #4845, the following is introduced:

    ...
    --- memcached.py (revisión: 5717)
    +++ memcached.py (revisión: 5718)
    @@ -1,6 +1,7 @@
     "Memcached cache backend"

     from django.core.cache.backends.base import BaseCache, InvalidCacheBackendError
    +from django.utils.encoding import smart_unicode, smart_str

     try:
         import cmemcache as memcache
    @@ -16,17 +17,22 @@
             self._cache = memcache.Client(server.split(';'))

         def get(self, key, default=None):
    -        val = self._cache.get(key)
    +        val = self._cache.get(smart_str(key))
             if val is None:
                 return default
             else:
    -            return val
    +            if isinstance(val, basestring):
    +                return smart_unicode(val)
    +            else:
    +                return val

         def set(self, key, value, timeout=0):
    -        self._cache.set(key, value, timeout or self.default_timeout)
    +        if isinstance(value, unicode):
    +            value = value.encode('utf-8')
    +        self._cache.set(smart_str(key), value, timeout or self.default_timeout)

         def delete(self, key):
    -        self._cache.delete(key)
    +        self._cache.delete(smart_str(key))

         def get_many(self, keys):
    -        return self._cache.get_multi(keys)
    +        return self._cache.get_multi(map(smart_str, keys))

The problem is in:

    -            return val
    +            if isinstance(val, basestring):
    +                return smart_unicode(val)
    +            else:
    +                return val

which makes it impossible to store binary strings. You have to encapsulate them in dummy objects otherwise. I still don't know why it is necessary to convert the value to str (encode utf-8) and back. python-memcached 1.43 pickles anything other than int, long, or str, which should work fine with unicode.
Attachments (2)

Change History (12)

- I'm working on adding tests to this.
- Updated patch and added test to confirm original failure and fix.
- I created a new diff that includes the patch and a test. The new test fails before the patch is applied, and all cache tests pass after it's applied. Tested with memcached 1.4.4 and python-memcached 1.4.5.
- Tests pass for me against dummy, file, and locmem backends as well.
- Milestone 1.2 deleted
- don't convert from/to unicode during get/set in memcached
https://code.djangoproject.com/ticket/11012
Kawa (like other Free Software projects) has no lack of tasks and projects to work on. Here are some ideas.

The Kawa compiler currently uses reflection to determine properties (such as exported function definitions) from referenced classes. It would be better to read class files. This should not be too difficult, since the gnu.bytecode library abstracts over class information read by reflection or class reading.

We'd like a command for compiling a list of Java and Scheme source files that may have mutual dependencies. A good way to do this is to hook into javac, which is quite extensible and pluggable. One could do something like:

1. Read the "header" of each Kawa source file, to determine the name of the generated main class.
2. Enter these class names into the javac tables as "uncompleted" classes.
3. Start compiling the Java files. When this requires the members of the Kawa classes, switch to the Kawa files. From javac, treat these as pre-compiled .class files. I.e. we treat the Kawa compiler as a black box that produces Symbols in the same way as reading class files. At this point we should only need to do the initial "scan" phase on Kawa.
4. If necessary, finish compiling remaining Kawa files.

This approach may not immediately provide as robust mixed-language support as ideal, but it is more amenable to incremental improvement than a standalone stub-generator. This project is good if you know or want to learn how javac works.

Java 7 supports MethodHandles, which are meant to provide better performance (ultimately) for dynamic languages. See JSR 292 and the Da Vinci Machine Project. MethodHandles will be used to compile lambdas in Java 8. Kawa can already be compiled to use MethodHandles, but only in one unimportant way. There is much more to be done. For example, we can start by optimizing arithmetic when the types are unknown at compile-time. They could also make implementing generic functions (multimethods) more efficient.
At some point we want to compile lambdas in the same way as the Java 8 preview does. This can potentially be more efficient than Kawa's current mechanism.

Kawa supports most of the functionality of R6RS and R7RS. However, various R6RS and (more importantly) R7RS features are missing, and should be added. For example, both the R6RS and R7RS library definition syntaxes are unimplemented. In a related matter, Kawa supports most of the functionality of syntax-case, but some pieces are missing, and no doubt some of it is incorrect. Adding the missing pieces and testing for correctness of corner cases is needed. Andre van Tonder's R6RS expander may be helpful.

It would be useful to extend the import form (and also the require form when no explicit filename is given) to search a "source path" for a matching source file, automatically compiling it as needed (as done in the require form when an explicit filename is given). In interactive mode, if the module is already loaded, check if it is updated - if not, recompile and re-load it.

Implement SwitchExp as a new class extending Expression, and compile it using the existing gnu.bytecode.SwitchState. Use it to optimize Scheme's case form. This might be better done without a new Expression, but instead using a special Procedure with custom "validation" and code-generation. (This is a fairly small starter project.)

Kawa has some limited support for parameterized types, but it's not used much. Improve type inferencing. Support definition of parameterized classes. Better use of parameterized types for the sequence classes. Support wildcards. (It might be better to have wild-carding be associated with declarations, as in Scala, rather than uses.)
Add support for full continuations, which is the major feature missing for Kawa to qualify as a “true Scheme”. One way to implement continuations is to add a add that converts the abstract syntax tree to continuation-pass-style, and then exhand the existing full-tail-call support to manage a stack. There are other ways to solve the problem. This may benefit from Faster tailcalls. Make --full-tailcalls run faster. This may depend on (or incorporate) TreeList-optimization. The TreeList class is a data structure for “flattened” trees. It is used for XML-style nodes, for multiple values, and for the full-tail-call API. The basic concept is fine, but it could do with some re-thinking to make make random-access indexing fast. Also, support for updating is insufficient. (This needs someone into designing and hacking on low-level data-structures, along with lots of profiling and testing.) C# recently added asynch and await keywords for asynchronous programming. Kawa’s recently improved support for lazy programming seems like a good framework for equivalent functionality: Instead of an asynch method that returns a Task<T> the Kawa programmer would write a function that returns a lazy[T]. This involves some design work, and modifying the compiler to rewrite the function body as needed. This is related to full continuations, as the re-writing is similar. Improvements to the read-eval-print console. In addition to a traditional Swing console, it would be useful to support using a web browser as a a remote terminal, possibly using web-sockets. (This allows “printing” HTML-expressions, which can be a useful way to learn and experiment with web technologies.) See here for an article on the existing Swing REPL, along with some to-do items. Being able to hide and show different parts of the output might be nice. Being able to link from error messages to source might be nice. 
Better handling of redefinitions is discussed here in the context of JavaFX Script; this is a general REPL issue, mostly independent of the GUI for it.

It would be nice to update the XQuery (Qexo) support to XQuery 1.1. It would be nice to support XQuery updates. This depends on TreeList-optimization.

Kawa supports a small subset of the Common Lisp language, but it supports a much larger subset of core Common Lisp concepts and data structures, some designed with Common Lisp functionality in mind. Examples include packages, arrays, expanded function declarations, type specifications, and format. A lot could be done to improve the Common Lisp support with modest effort. Some Common Lisp features could also be useful for Scheme: documentation strings (or markup) as Java annotations, better MOP-like introspection, and generic methods a la defmethod (i.e. with multiple definition statements, possibly in separate files, as opposed to the current make-procedure) all come to mind. Being able to run some existing Common Lisp code bases with at most modest changes should be the goal. One such package to start with might be an existing test framework, perhaps FiveAM. Full Common Lisp compatibility is nice, but let's walk before we can run.

A lot of work is needed to make JEmacs useful. One could try to import a useful package and see what works and what fails. Or one may look at basic editing primitives. Enhancements may be needed to core Emacs Lisp language primitives (enhancing Common Lisp support may help), or to the display engine. Emacs now supports lexical bindings - we should do the same.

There is some Kawa support for Eclipse (Schemeway), and possibly other IDEs (NetBeans, IntelliJ). But many improvements are desirable. REPL improvements may be a component of this.

Kawa-Scheme support for the NetBeans IDE would be useful. One could perhaps build on the Clojure plugin.

Kawa-Scheme support for the Eclipse IDE would be useful. Probably makes sense to enhance SchemeWay.
It may also make sense to build on the Dynamic Languages Toolkit, possibly making use of Schemeide, though DLTK seems more oriented towards interpreted non-JVM-based languages.

SLIME is an Emacs mode that provides IDE-like functionality. It supports Kawa. JDEE is a Java development environment, so it might have better hooks to the JVM and Java debugging architecture. CEDET is a more general framework of development tools.

It would be nice to integrate the pretty-printer with the REPL, so that window re-sizing re-breaks the output lines. It would be nice to enhance the pretty-printer to handle variable-width fonts and other "rich" text. Figuring out how to make the output formatter more flexible, more efficient, and more customizable is also desirable.

Hop is an interesting design for integrating server-side and client-side programming using a Scheme dialect. These ideas seem like they would port quite well to Kawa.

Add quaternions to the numerical tower, such that Complex extends a new abstract class Quaternion, which extends Quantity. These are useful for 3D rotations. A reasonable design is here.

Support localization by extending the SRFI-109 syntax, in the manner of (and compatible with) GNU gettext. I.e. optionally specify a localization key (to use as an index in the translation database); if there is no key specified, default to using the literal parts of the string.

Implement a "bind" mechanism similar to that of JavaFX Script. The idea is that when you initialize a variable or field, instead of initializing it to a fixed value, you bind it to an expression depending on other variables. We install "listeners" on those variables, so when those variables change, we update the bound variable. This feature is useful in many applications, but the initial focus could be GUI programming and perhaps web programming.
http://www.gnu.org/software/kawa/Ideas-and-tasks.html
The Atom 0.2 snapshot is out.

- Pre-draft of a formal 0.2 spec, based upon the informal 0.2 spec. This will be updated to reflect consensus in the Wiki and on the mailing list.
- Minimal 0.2 feed. Feed title, link, and modified are required. Entry title, link, id, issued, and modified are required.
- Maximal 0.2 feed. Feed tagline, id, generator, and copyright are optional. Entry created, summary, contributor, and content are optional. Feed generator and copyright are new in 0.2. Feed tagline used to be called "subtitle"; entry subtitle has been dropped.
- Multipart/alternative 0.2 feed. This is the biggest new feature in this snapshot: content type="multipart/alternative". It contains 1 or more content elements, of varying types and languages. These are different representations of the same content. Examples: an image, and an HTML representation of the image. Or: an HTML version, and a plain text version of the same content (perhaps run through lynx -dump, or w3m, or simply by stripping all HTML tags).

Publishers should order the alternative content representations in increasing order of fidelity. So if you have an HTML version and a plain text version, put the plain text version first and the HTML version second. If you have an image and an HTML rendition of the content of the image (a longdesc), put the HTML version first and the image second. If you have a translation of the content into another language, put the translation first, and the original content second.

Consumers should render at most one representation of a content type="multipart/alternative", according to their capabilities and end user preferences. It does not have to be the one the publisher thinks is best, although it should be, if you know how to render content of that MIME type and language and the end user hasn't specified a preference.
The consumer has the final say about what content to render, if any.

- 0.2 feed with feed author and 0.2 feed with entry author explain and demonstrate the use cases for the author element. New/changed in 0.2: author is required either on the feed level or on the entry level, and now contains a required name and optional url and email elements. If author exists on the feed but not on an entry, it is assumed to apply to the entry. If author exists on the feed and an entry, the entry author takes precedence. Every entry must have exactly one author, either explicitly included in the entry or inherited from the feed.
- dive into mark Atom 0.2 feed. Live. This uses an additional namespace, Dublin Core, to express entry categories (dc:subject). This is the only way to express categories in this snapshot. An entry may contain multiple dc:subject elements, although this example does not.
- MT template for Atom 0.2.
- dive into mark comments Atom 0.2 feed. Live. This uses content type="multipart/alternative" to include both an HTML version and a plain-text version of each comment. It also uses entry-level authors, since each entry is a comment by a different author.
- MT template for Atom 0.2 comment feed.

The Feed Validator has been updated to support the Atom 0.2 snapshot. 0.1 feeds will no longer validate. Validator 1.2 source code validates Atom 0.2, and is what is deployed right now. (It still validates RSS. There were no changes to the RSS validation in this release.)

Suggestions that did not make it into this snapshot, but that are still under active discussion:

- Using xml:base to handle relative links. But what should it apply to? All URLs (even the <link> element and so forth)? URLs in content only? What about escaped content? Discuss in RelativeLinks.
- Allowing HTML in title, tagline, and/or summary, by allowing publishers to declare type="text/html" and mode="escaped" on those elements (similar to the way they can specify MIME type and encoding on <content> elements now). Blogger did something like this in their experimental 0.1 feed, but it’s not clear they actually need it. (They may need it in the API though.) Regardless, there’s been no discussion on how or if to allow this, so it stays out until we can hash through it.

Other issues:

- Property-centric or Role-centric? -- see PropertiesVsRoles.
- DublinCore Metadata Element Set in echo's namespace? -- see EchoInDublinCore, NechoInDublinCore, NechoInDublinCoreOptionalRdf.
- What constitutes an ID? -- see PermaLinks, EntryIdentifier, PostIdSpec and WhatIsEntryId.
- Should content be escaped or inline? -- see EscapedHtmlDiscussion and content.
- Content by Reference? -- see MimeContent, EntryContentDiscussion, content, and MultipleContentDiscussion.
- Sub-Entries? -- see EntryContentDiscussion and content.
- Authors versus persons and roles? -- see AuthorElementDiscussion and PropertiesVsRoles.
- Avoiding Duplication of Author data? -- see AuthorElementDiscussion.
- Inclusion of other extensions needs to be considered -- see SyntaxExtensionMechanism.
- Subtitle pulled 7/4/2003 -- see SubTitle for a summary of the arguments.
- Versioning moved -- see VersionNo for a summary of the arguments.
- Relative Links moved -- see RelativeLinks for a summary of the arguments.
- <weblog> and <homepage> were replaced with a single <url> -- see WeblogVsUrl.
- How to include comments? -- see CommentExtension and IsaCommentAnEntry.
http://www.intertwingly.net/wiki/pie/SyntaxDiscuss?action=fullsearch&value=SyntaxDiscuss&literal=1&case=1&context=40
Red Hat Bugzilla – Bug 44619 g++ -O2 bug in cast expression (80x86)
Last modified: 2007-04-18 12:33:41 EDT

From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux 2.2.19-7.0.1 i686; en-US; rv:0.9+) Gecko/20010508

Description of problem:
g++ -O2 for 80x86 produces incorrect assembly code for the 'Cflatten' function in the following snippet (narrowed down and converted to valid C from a large C++ program):

#include <stdio.h>

typedef short int16;
typedef long int32;

typedef struct {
    int16 v, h;
} Point;

int32 Cflatten(Point *pp)
{
    Point p;
    p.v = pp->v;
    p.h = pp->h;
    return (*((int32*)));
}

int main()
{
    Point p = { 0x1234, 0x5678 };
    int32 f;
    f = Cflatten();
    printf("%08x\n", f); /* should print 56781234 on a little-endian machine */
}

The Cflatten function generates the following x86 assembly:

Cflatten__FP5Point:
    pushl %ebp
    movl %esp, %ebp
    pushl %eax
    movl 8(%ebp), %eax
    movw (%eax), %ax
    movw %ax, -4(%ebp)
    movl -4(%ebp), %eax
    leave
    ret

which leaves the upper half of %eax uninitialized.

How reproducible: Always

Steps to Reproduce:
1. Compile the above code fragment with g++ -O2
2. Run and observe the output is bfff1234 instead of 56781234
3. Re-compile without -O2 and observe the correct output.

Actual Results: Output is "bfff1234"
Expected Results: Output should be "56781234" (on a little-endian machine)

Additional info:
The bug does not occur if you turn off the strict-aliasing optimization, e.g. "g++ -O2 -fno-strict-aliasing". It also does not occur if you use gcc instead of g++ to compile. If you change the initialization of Point p to use struct initialization syntax, or make p volatile, the problem goes away.

Reading specs from /usr/lib/gcc-lib/i386-redhat-linux/2.96/specs
gcc version 2.96 20000731 (Red Hat Linux 7.1 2.96-81)

I haven't tried to see if this bug exists in a gcc 3.0 pre-release. See bug #36862. Bug 36862 referenced above is clearly related, yet this bug is closed as "not a bug". But there clearly IS a problem here.
In my case, the entire 32-bit word should be read and returned, but it's returning half-uninitialized memory.

Apart from these two typos in your example,

-return (*((int32*)));
+return (*((int32*)&p));
-f = Cflatten();
+f = Cflatten(&p);

and gcc -O2 returning 0x0000 in the upper 16 bits, the *((int32*)&p) cast is bad code and should be fixed. For an introduction to the problem, see "info gcc", node "Optimize Options", on -fstrict-aliasing, or search the web on type-punning.

Apologies for the 'typos'. I actually cut-and-pasted that into the web form from working code, but apparently the ampersands got eaten and I didn't notice.

The results of web searches on -fstrict-aliasing and type-punning consisted mostly of people complaining that code didn't work with gcc -fstrict-aliasing and being told by people from the gcc team that the code was bad and needed to be fixed. There was very little information on why it is considered valid for the compiler to make this optimization. There were references to the ANSI C Standard ISO-9899:1990 Sections 3.3 and 3.5.2.1, but the actual standard is impossible to find. Books containing it are out of print, and ansi.org appears to only have the 1999 standard available. If someone could e-mail me or append here just the relevant sections of the standard, or point me where I could find them, I'd appreciate it.

I realize that code of the sort *(int32*)&p is inherently non-portable, but on a known architecture where the sizes of the pointers are known to be the same, I find it shocking that compiler writers would allow code like this to break without so much as a warning message. While not "clean", it is sometimes necessary. For example, some of our code which does this kind of thing is reading an arbitrary block of data over a TCP connection, and interpreting the content by dynamically parsing a type descriptor at run time. If dereferencing pointer casts does not work, how else is one supposed to do this?
And even if declared technically legal by the standard, it seems like turning on an optimization by default in -O2 with the potential to break lots of code is a bad idea. And I'm sure I'm not alone. Here's a nice quote on the subject from Linus Torvalds (Linux kernel mailing list, Dec 2000):

> I hope that's another thing that the gcc people fix by the time they do a
> _real_ release. Anybody who thinks that "-fstrict-aliasing" being on by
> default is a good idea is probably a compiler person who hasn't seen real
> code.

> I realize that code of the sort *(int32*)&p is inherently
> non-portable, but on a known architecture where the sizes
> of the pointers are known to be the same,

Not the size of the pointers is the problem, but finding out when to update the object pointed to in memory, i.e. whether the value may be cached until it is accessed the next time. Checking out the C9X Rationale or the C9X Committee Draft might be an idea. IIRC, it is somewhere in section 6.5.

I've copied this from my Sent folder. I think it's an excerpt from the C9X Rationale:.

Thanks Michael for explaining instead of myself. You basically have 3 options: either decide you don't want to fix your code, but then you should stick -fno-strict-aliasing into your Makefiles. Or make sure one of the types is a char (whatever signedness) pointer, i.e. you can cast a pointer to anything to a char pointer, or a char pointer to a pointer to whatever, and access through that (note that the problem is not the casting itself, but accessing the same memory location with two incompatible types). The third option is to use unions, e.g.
above you could either:

int32 Cflatten(Point *pp)
{
    Point p;
    p.v = pp->v;
    p.h = pp->h;
    return ((union { Point p; int32 i; } *)&p)->i;
}

or:

int32 Cflatten(Point *pp)
{
    union { Point p; int32 i; } p;
    p.p.v = pp->v;
    p.p.h = pp->h;
    return p.i;
}

Well, I'm partially mollified by the exception for 'char', which would make most code which does this sort of thing continue to work correctly. I still agree with Linus that it was a bad idea to turn on this optimization by default in -O2. The problem is that there is no way to tell when the code does not compile as expected. I would prefer to fix our source code over using -fno-strict-aliasing, but I don't know how to go about finding all such code in a deterministic way. Would you consider adding an option to gcc to generate a warning when code is encountered that due to aliasing might have a different effect than the user would expect?
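The union idiom suggested above is one blessed way out; the other widely recommended idiom is memcpy, which the compiler must treat as a legal way to move bytes between incompatible types. A minimal sketch of both follows, using fixed-width types from <stdint.h> rather than the report's original typedefs (the `typedef long int32` would be 8 bytes on 64-bit Linux):

```c
#include <stdint.h>
#include <string.h>

typedef struct { int16_t v, h; } Point;

/* Union-based punning, as suggested above: both members share storage,
   so writing u.p and then reading u.i is the sanctioned idiom. */
int32_t flatten_union(const Point *pp)
{
    union { Point p; int32_t i; } u;
    u.p.v = pp->v;
    u.p.h = pp->h;
    return u.i;
}

/* memcpy-based punning: memcpy may alias anything, so this stays
   correct under -O2 -fstrict-aliasing and typically compiles to a
   single 32-bit load anyway. */
int32_t flatten_memcpy(const Point *pp)
{
    Point p = { pp->v, pp->h };
    int32_t out;
    memcpy(&out, &p, sizeof out);
    return out;
}
```

Both functions return the same value on any given machine; which byte order you observe (56781234 versus 12345678) still depends on endianness, exactly as the original report notes.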
https://bugzilla.redhat.com/show_bug.cgi?id=44619
A method is a group of statements that together perform a task. Every C# program has at least one class with a method named Main.

When you define a method, you basically declare the elements of its structure. The syntax for defining a method in C# is as follows −

<Access Specifier> <Return Type> <Method Name>(Parameter List) {
   Method Body
}

Here,

Access Specifier − This determines the visibility of a variable or a method from another class.

Return type − A method may return a value. The return type is the data type of the value the method returns. If the method is not returning any values, then the return type is void.

Method name − Method name is a unique identifier and it is case sensitive. It cannot be the same as any other identifier declared in the class.

Parameter list − Enclosed between parentheses, the parameters are used to pass and receive data from a method. The parameter list refers to the type, order, and number of the parameters of a method. Parameters are optional; that is, a method may contain no parameters.

Method body − This contains the set of instructions needed to complete the required activity.

The following is an example of methods showing how to find whether a string has all unique characters or not. Here, we have created a C# method CheckUnique() −

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

public class Demo {
   public bool CheckUnique(string str) {
      string one = "";
      string two = "";
      for (int i = 0; i < str.Length; i++) {
         one = str.Substring(i, 1);
         for (int j = 0; j < str.Length; j++) {
            two = str.Substring(j, 1);
            if ((one == two) && (i != j))
               return false;
         }
      }
      return true;
   }
   static void Main(string[] args) {
      Demo d = new Demo();
      bool b = d.CheckUnique("amit");
      Console.WriteLine(b);
      Console.ReadKey();
   }
}

Output

True
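As one more minimal sketch of the syntax elements described above (the class and method names here are hypothetical, chosen only for illustration), this method has a public access specifier, an int return type, and a two-parameter list:

```csharp
using System;

public class Calculator {
   // public = access specifier, int = return type, (int a, int b) = parameter list
   public int Add(int a, int b) {
      return a + b;   // method body
   }

   static void Main() {
      Calculator c = new Calculator();
      Console.WriteLine(c.Add(2, 3)); // 5
   }
}
```
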
https://www.tutorialspoint.com/How-to-define-methods-in-Chash
Put Your Code in the SWAMP: DHS Sponsors Online Open Source Code Testing

The SWAMP is currently just one site, but their eventual goal is that you can install and run it on your own internally, or however you see fit.

Put Your Code in the SWAMP: DHS Sponsors Online Open Source Code Testing

I worked on this project. You should glance at who is involved before donning the tinfoil hats.

It's an education grant with several PhDs who study various CS security subjects (fuzzing, dynamic, static analysis). Built by a bunch of nice nerds employed by the Morgridge Institute, which is part of University of Wisconsin-Madison.

QA/testing is the black sheep of the coding universe, and trying to get those tools running can be a pain sometimes. Anything that makes it easier (SWAMP, Travis, etc.) makes our universe a better place.

Nano-Scale Terahertz Antenna May Make Tricorders Real

"I love the idea of a tricorder, but please, invent something that is PASSIVE."
---
#include <stdio>
#include "acpi/dilithium.h"
#include "sf/medical/diseases.h"
#include "sf/shared/science/scanner.h"

int main(void) {
    scan_for("Cancer");
    printf("You have cancer!");
}

Teenager Builds $300 Open Source Eye-Tracking System

The Eye Writer guys were at the Open Hardware Summit; their work allowed the graffiti artist Tempt to continue to create after he lost use of his arms and legs to Lou Gehrig's disease. Their methods used webcams for eye tracking, while the article's method uses electrical signals from eye muscles. The more the merrier!

Vulnerabilities Discovered In Prison SCADA Systems

Shut down all the garbage smashers on the detention level!

Canadian Songwriters Propose $10/mo Internet Fee

All the record labels could sell music on-line for a reasonable price? For all of the technologies they have shut down, they certainly could have assimilated a few. Still to this day, nothing beat places like mp3downloads.ru (I think that was its name, circa 2001).
Pick your songs, download at whatever quality/format you wanted, priced accordingly. People will legitimately buy something if given a fair and reasonable option.

Debian Is the Most Important Linux

Most of my disks are 3.5" :-)

Mine is 8" when floppy =)

Last.FM To Require Subscription For Mobiles and Home Devices

Grooveshark and Last.fm have a large area of non-overlap.

* Last.fm provides a great social aspect to your favorite music, as well as the band/song bio pages.
* Last.fm's tag/artist/genre stations are great for honing in on what you want to listen to.
* Last.fm has great coverage of indie artists (especially 8-bit chiptunes).
* Last.fm lets artists' songs be aired in full during streaming, but only sampled for 30-second previews (if they want to sell them, for instance).

Grooveshark and Pandora are great for things more mainstream. Soundcloud and Last.fm are hands down the best for the rest.

'Death By GPS' Increasing In America's Wilderness

It's been stated already: GPS is just where you are. It's the road metadata and routing that gets you. I'd like to point out the wiki-style road map system OpenStreetMap. Easy to upload your own tracks and create roads out of them. The OSM wiki has lots of information on apps that utilize the data; I even used it on an old Nokia Series60 phone.
is how to use the data over the default maps on Garmin.

Sony Wants To Put Your Game Saves In the Cloud

Valve does this with their Steam client for games like Half-Life that support play on Mac and PC. It's quite useful. Ignoring the 'Cloud' buzzword, this and Firefox Sync are very useful, especially for dual-booting Win/Linux or Mac/Win machines.

Makerbot Thing-o-Matic 3D Printer Review

.. In between, my daughter has been printing doll house furniture. I think this is one of the things it (and these online 3D printers that keep popping up) excels at: custom one-off, small-scale items. I know some grownups who are addicted to their doll house furniture.
They would flip if you showed them Google SketchUp's furniture shape library and said 'print out whatever you want'!

Graphene Won't Replace Silicon In CPUs, Says IBM

"Mr. President, we must not allow... a mine shaft gap!"

World's First Full HDR Video System Unveiled

Speaking of 3D rendering, most renderers output HDR, which would be awesome to see without being tone mapped! (LuxRender has built-in Reinhard tone mapping.) And since we are on the topic of HDR.. this is what it is NOT (Reinhard discusses the blown-out tone mapping heavily prominent on Flickr).

NASA's Next-Generation Airplane Concepts

I love how they used the whole saucer-separation thing like twice in the first season.. and then never again until Generations. (Although there may have been some separated when the hundreds of thousands of Enterprises all came into the same time/space.)

NASA's Next-Generation Airplane Concepts

We could focus on rail infrastructure and reducing ticketing costs there. You could easily cope with demand and capacity efficiently by adding/removing engines and cars as needed.

ITU Softens On the Definition of 4G Mobile

This is like the US vs. Japanese Final Fantasy games numbering scheme all over again.

Yahoo! To Close Delicious

Or you can just go command-line with:

curl --user user:pass -o delicious_bookmarks.xml -O ''

This only works for accounts not linked up with Yahoo. Those require OAuth, unfortunately.
http://beta.slashdot.org/~zeroeth
Using Intel® MKL in your Python* program

Introduction

This article describes how to use the Intel® Math Kernel Library (Intel® MKL) from a Python* program. There's more than one way to write Python programs to interface with native libraries. I've simply chosen one so that I can emphasize what might be less commonly known: how to build a custom shared library from Intel MKL so that you can call it from your script. I'll run through the basic steps of accessing Intel MKL from Python 2.6 on a 64-bit Linux* OS. The example program calls the CBLAS interface to the DGEMM function, which performs a multiplication (and optional add) on general, double precision matrices. Much more about these functions can be found in the C version of the Developer Reference for Intel® MKL (available online here).

Update: With Intel MKL 10.3 or 11.0 there is a new dynamic library which removes the need to create your own custom library. So if you're using 10.3 or later you don't need to do step 1 below. To make some changes in the behavior of this library you can look up these routines in the reference manual: mkl_set_interface_layer, mkl_set_threading_layer, mkl_set_xerbla, and mkl_set_progress.

- Build a custom library (now unnecessary with Intel MKL 10.3 or later): To interface with Intel MKL from Python we recommend you use the custom library builder in the tools/builder sub-directory of the Intel MKL package. The Intel® MKL User's Guide has documentation on this tool (docs online). Here briefly are the steps I took to do this:
  - Set up your environment to use the desired version of Intel MKL:
    source /<MKLpath>/tools/environment/mklvarsem64t.sh
  - Build the DLL:
    cd /<MKLpath>/tools/builder
    make em64t name=~/libmkl4py export=cblas_list
- Add library paths to LD_LIBRARY_PATH: All the Intel MKL libraries needed must be in directories contained in the LD_LIBRARY_PATH environment variable.
The library as built above will depend on the OpenMP* threading runtime library used by Intel MKL (libiomp5.so), so you should make sure that both libraries, libmkl4py.so and libiomp5.so, are in a directory specified in the LD_LIBRARY_PATH environment variable. If you're using Intel MKL 10.3 or later you need to add the directories for both libmkl_rt.so and libiomp5.so (if you want it to run on multiple cores).

- Call Intel MKL in your Python script: The following is a simple script (also available here) that loads the shared library just created and calls the matrix function.

from ctypes import *

# Load the shared library
mkl = cdll.LoadLibrary("./libmkl_rt.so")
# For Intel MKL prior to version 10.3 use the created .so as below
# mkl = cdll.LoadLibrary("./libmkl4py.so")
cblas_dgemm = mkl.cblas_dgemm

def print_mat(mat, m, n):
    for i in xrange(0,m):
        print " ",
        for j in xrange(0,n):
            print mat[i*n+j],
        print

# Initialize scalar data
Order = 101   # 101 for row-major, 102 for column major data structures
TransA = 111  # 111 for no transpose, 112 for transpose, and 113 for conjugate transpose
TransB = 111
m = 2
n = 4
k = 3
lda = k
ldb = n
ldc = n
alpha = 1.0
beta = -1.0

# Create contiguous space for the double precision arrays
amat = c_double * 6
bmat = c_double * 12
cmat = c_double * 8

# Initialize the data arrays
a = amat(1,2,3, 4,5,6)
b = bmat(0,1,0,1, 1,0,0,1, 1,0,1,0)
c = cmat(5,1,3,3, 11,4,6,9)

print "\nMatrix A ="
print_mat(a,2,3)
print "\nMatrix B ="
print_mat(b,3,4)
print "\nMatrix C ="
print_mat(c,2,4)

print "\nCompute", alpha, "* A * B + ", beta, "* C"

# Call Intel MKL by casting scalar parameters and passing arrays by reference
cblas_dgemm( c_int(Order), c_int(TransA), c_int(TransB),
             c_int(m), c_int(n), c_int(k), c_double(alpha),
             byref(a), c_int(lda), byref(b), c_int(ldb),
             c_double(beta), byref(c), c_int(ldc))

print_mat(c,2,4)
print

- A few notes:
- Matrices in the BLAS and LAPACK parts of Intel MKL are stored in one-dimensional arrays, and integers are used to specify
their geometry.
- I've actually loaded here the CBLAS interface to the general matrix multiply function, which allows you to choose how the matrix is specified. In my script I've listed the matrix by rows (row-major ordering). If you do not use the CBLAS interface to the BLAS, or if you use LAPACK, you should keep in mind that these functions assume the Fortran method of listing matrices by columns (column-major ordering).

Here is the Python code I created that implements the steps above: matmult.py

Examples code:

We extended the list of examples to demonstrate how it is possible to call different routines (not only a widespread example like dgemm) from a Python program. See the list of 3 different examples attached:

dft.zip - shows how a Python program calls the 1D DFTI API
spblas.zip - shows how to call the matrix-matrix multiplication routine for a sparse matrix stored in the block compressed format (BSR)
vsl.zip - shows how to call the vdRngGaussian routine (generates normally distributed random numbers) from the VSL domain

Notes: Each zip file contains *.res and *_list files, which are the input file and the file for custom library building, respectively.
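The ctypes pattern shown here (load a shared library, declare the C signature, then call) is not specific to Intel MKL; any C library can be driven the same way. As a self-contained illustration that does not require MKL at all, here is a minimal sketch against the standard C math library, which ctypes.util.find_library can locate on typical Linux and macOS systems:

```python
import ctypes
import ctypes.util

# Locate and load the C math library (libm.so.6 on most Linux systems)
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# As with cblas_dgemm above, declare the C signature: without this,
# ctypes assumes int arguments and an int return value.
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))  # 1.0
```

Setting argtypes and restype up front plays the same role as the explicit c_int/c_double casts in the dgemm script, and catches mismatched calls early.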
https://software.intel.com/pt-br/articles/using-intel-mkl-in-your-python-programs
Introduction to SharePoint Development in Visual Studio 2010

- Creating a SharePoint Solution
- Conclusion

Visual Studio 2010 provides the templates and tools you need to develop complete SharePoint solutions. In this chapter you will create your first SharePoint solution and we will introduce you to some of the projects, project item templates, and tools that are in Visual Studio 2010 for SharePoint development.

Creating a SharePoint Solution

First, make sure you have followed the setup instructions in Appendix A, "Preparing for SharePoint Development." Once your machine is set up, you will need to launch Visual Studio as an administrator. The SharePoint projects in Visual Studio require administrator privileges to interact with SharePoint. To launch Visual Studio as an administrator, locate the Microsoft Visual Studio 2010 shortcut in the Start Menu under All Programs > Microsoft Visual Studio 2010. Right click on the Microsoft Visual Studio 2010 shortcut. You can choose Run as administrator from the context menu to run Visual Studio as an administrator. Alternatively, if you just want to make Visual Studio start up with administrator privileges every time you launch it, you can change the Microsoft Visual Studio 2010 shortcut properties to always run as administrator. To do this, right click on the Microsoft Visual Studio 2010 shortcut and choose Properties. Click the Compatibility tab as shown in Figure 2-1. Then check the Run this program as an administrator check box and press OK.

Figure 2-1 Setting the Microsoft Visual Studio 2010 shortcut to start as administrator

Now use the modified Microsoft Visual Studio 2010 shortcut or the Run as administrator command in the context menu to launch Visual Studio with administrator privileges. Once Visual Studio has started up, choose New > Project... from the File menu. This brings up the New Project dialog. Select Visual C# as the language from the tree view control on the left.
Then expand the Visual C# node and select SharePoint under the Visual C# node. Expand the SharePoint node and click the 2010 node. This will display all the available SharePoint project types as shown in Figure 2-2.

Figure 2-2 SharePoint projects in the New Project dialog in Visual Studio

Visual Studio has 12 basic SharePoint project types; they are listed in Table 2-1. The most basic SharePoint project type is the empty SharePoint Project. This project begins empty but lets you create and add any SharePoint project items you want to it as you go along. So in an empty SharePoint project you could add a web part, a list definition, and so on.

Table 2-1. SharePoint Project Types

A second class of SharePoint project is prepopulated with one particular SharePoint project item type. This class includes the Visual Web Part, Sequential Workflow, State Machine Workflow, Business Data Connectivity Model, Event Receiver, List Definition, Content Type, Module, and Site Definition projects. These projects are empty SharePoint projects that have one SharePoint project item preadded to them—the item type specified by the project type designation. So, for example, the Content Type SharePoint project is an Empty SharePoint project with a Content Type project item preadded to it. As with the Empty SharePoint project type, you can continue to add SharePoint project item types to this class of projects or even remove from them the initially preadded SharePoint project item type.

A third class of SharePoint projects is populated by a wizard that runs when the project is first created. This class of SharePoint projects includes the Import Reusable Workflow and Import SharePoint Solution Package projects. The Import Reusable Workflow project creates a SharePoint project by importing a workflow created in SharePoint or SharePoint Designer.
The Import SharePoint Solution Package project creates a SharePoint project by importing a .WSP file exported from SharePoint Designer.

It is worth noting at this point that you have two general options for how you structure your SharePoint solutions. You can use a single project and add as many SharePoint project items as you need to that project. Or you can divide your solution into multiple projects. A key limitation of a project is that a single project can only produce a single .NET assembly (a .DLL file). So in cases where you need to factor your solution to produce multiple .NET assemblies you need to create multiple projects. You might also choose to divide your solution into multiple projects if you are developing a solution with other developers. This can make it easier for a solution to be simultaneously worked on by multiple developers.

Later in this chapter we will discuss in more detail what each of these project items and projects do. For now, let's start by creating an Empty SharePoint Project. As shown in Figure 2-2, you can specify a name and location for your project. You can also optionally add the project to source control by checking the Add to Source Control check box. Once you've set the name and location for your project, click the OK button to create the project.

A second dialog will appear to configure the site URL and security level for the newly created project as shown in Figure 2-3. The site URL should designate the SharePoint site where you want to deploy and test your SharePoint solution. As discussed in Chapter 1, a particular machine can host multiple SharePoint sites. If you click the Validate button, Visual Studio will verify that the URL you have typed corresponds to an already created SharePoint site.

Figure 2-3 Debugging options for empty projects

In order for you to successfully deploy and debug your SharePoint solution, you must also have the proper permissions on the SharePoint site you designated with the site URL.
Your user account must be added as the Site Owner or Site Collection Administrator for the site URL. In some cases, a customization you build may be deployed at the site level and at other times a customization may only be deployable at the site collection level. For more information on setting up the SharePoint site properly so you can deploy and debug, see Appendix A, "Preparing for SharePoint Development."

In the same dialog shown in Figure 2-3, you must choose the trust level for the SharePoint solution. After the project is created, you can change the trust level later by using the Properties window for the project as shown in Figure 2-5 (see page 114). The property to change is the Sandboxed Solution property—if this is set to False the solution will have a Farm solution trust level. Sandboxed solutions run in a secure, monitored process. Sandboxed solutions can be deployed without requiring SharePoint administrative privileges. If you choose a sandboxed solution, you can only use project item types that are valid in sandboxed solutions. If you choose the option "Deploy as a farm solution," Visual Studio will deploy the solution as a fully trusted farm solution. If you choose a farm solution, you can use all available SharePoint project item types in your project, but deployment will require administrative privileges and the solution will run in full trust. For this example, choose Deploy as a sandboxed solution. Click the Finish button as shown in Figure 2-3 to complete the creation of the project.

Sandboxed Solutions versus Farm Solutions

Early in the project creation process, Visual Studio asks you to decide between using a sandboxed solution or a farm solution. It is worth considering in more detail the difference between a sandboxed solution and a farm solution and when to choose one over the other. Prior to SharePoint 2010, all solutions you could create were farm solutions.
In Chapter 1 we saw that SharePoint solutions are deployed to a farm that could consist of one to many servers. Each server in the farm can have multiple web applications running on it. A web application can in turn have one or more site collections, and a site collection has one or more sites. Farm solutions can impact the entire SharePoint system and are available to all site collections and sites in the farm. This is sometimes desirable, but sometimes can have undesired effects because a farm solution that is misbehaving can impact all sites and site collections in the system. In SharePoint 2010, you can create a new type of solution called a sandboxed solution. Sandboxed solutions are deployed at the site collection level rather than the farm level, so this lets you isolate a solution so it is only available to one site collection within the farm. Sandboxed solutions also run in a separate process from the main SharePoint IIS web application process, and the separate process is throttled and monitored with quotas to protect the SharePoint site from becoming unresponsive due to a misbehaving sandboxed solution. It is worth mentioning that sandboxed solutions solve an organizational problem as well—in many organizations it is difficult to get permission to install a farm solution because of the possible impact that could have on the SharePoint system. System administrators in charge of running a SharePoint site have been reluctant in the past to allow custom solutions to run on their sites. With the advent of SharePoint 2010, there is now a robust system in place to monitor and throttle these custom solutions so that system administrators don't have to worry about a custom solution bringing the entire SharePoint site down. In addition, with sandboxed solutions, users can upload solutions without requiring administrator approval. So if sandboxed solutions are so great, why are farm solutions still around at all in SharePoint 2010? 
Well, because of the need to restrict and throttle a sandboxed solution so that it cannot negatively impact the entire site, there are restrictions on the kinds of solutions you can build with a sandboxed solution. The most significant restrictions disallow creation of application pages, visual web parts, or code-based workflows with a sandboxed solution. You can, however, create a web part without using the visual designer and deploy it in a sandboxed solution—we will see how to work around this particular limitation in Chapter 9, "SharePoint Web Parts." So in the end, the choice between sandboxed and farm solutions should come down to whether or not you need to create an application page or a workflow with code in it. For these kinds of solutions, you should pick a farm solution. For all other solutions, pick a sandboxed solution. The only other reason to use a farm solution over a sandboxed solution is if you really have some code that needs to run at the web application or farm level, perhaps because it needs to interact with or move data between multiple site collections. In this case, you would create a farm solution as well.

Exploring an Empty SharePoint Project

Returning now to the project we just created, let's inspect the structure of an empty SharePoint project as shown in Figure 2-4. First, click on the root node, Solution 'SharePointProject1' in our example. In the Properties window, you will see the properties associated with the solution. The two most interesting properties are the Active config and the Startup project properties. Active config sets whether to build debug or release assemblies. By default this starts out as Debug|Any CPU. Typically, during development you will use debug, but when you are ready to deploy the solution you will use the Release|Any CPU setting. The Startup project will set which project's startup settings will be used when you press F5 when there are multiple projects in the solution.
Since in typical solutions all projects deploy to the same SharePoint site URL, this won't matter much in practice unless you are building one solution that creates multiple deployments.

Figure 2-4 An empty SharePoint project

Now, click on the Project node, SharePointProject1 in our example. In the Properties window are a number of SharePoint-specific properties, as shown in Figure 2-5. Table 2-2 describes the function of each of these properties. Two of the properties you configured during project creation can be changed here: Site URL, which designates the SharePoint site where you deploy and test your SharePoint solution, and Sandboxed Solution, which when set to True indicates that the solution will be a sandboxed solution and when set to False indicates that the solution will be a farm solution.

Figure 2-5 SharePoint project Properties window

Table 2-2. SharePoint Project Properties

Next, consider the Properties folder in the Solution Explorer. In this folder, you will find an AssemblyInfo.cs file that contains the attributes that will be added to the assembly that is created when the project is built. Almost all of these attributes are identical to the ones you would find when creating a simple class library project. The only new one is the AllowPartiallyTrustedCallers attribute. This attribute is used for partially trusted solutions (sandboxed solutions or farm solutions that have Assembly Deployment Target set to WebApplication, as we saw in Table 2-2). For projects that have Assembly Deployment Target set to GlobalAssemblyCache, the AllowPartiallyTrustedCallers attribute can be removed.

The References folder in the Solution Explorer contains all the referenced assemblies for the project. The set of referenced assemblies is identical to the ones you would find when creating a simple class library project, with two additions: Microsoft.SharePoint and Microsoft.SharePoint.Security.
The Microsoft.SharePoint assembly contains the server object model for SharePoint that you will use when writing code against SharePoint. The Microsoft.SharePoint.Security assembly contains code access security objects that are used for partially trusted solutions.

The Features folder is a special folder that is found only in a Visual Studio SharePoint project. This folder contains SharePoint features that have been created for the project. A SharePoint project can contain zero or more features. By default, when you add a new SharePoint item to a SharePoint project that requires a feature, Visual Studio will create a new feature automatically, or reuse an existing feature if there is already a feature in the project with the same scope (Farm, Site, Web, or Web Application). For more information on working with features, see Chapter 11, "Packaging and Deployment."

The Packages folder is another special folder that is found in SharePoint projects. This folder contains information that allows the project to create a package file, or .WSP file. Package files are used to deploy SharePoint features to a SharePoint site. By default, when you add a new SharePoint item to a SharePoint project that results in the creation of a new feature, that new feature will automatically be added to the package file associated with the project. For more information on working with packages, see Chapter 11, "Packaging and Deployment."

The next file you will find in a SharePoint project is the key.snk file. This is a strong name key that is used to sign the output assembly of the project.

Mapped Folders, Deployment, and the Hive

One item you will often find in a SharePoint project that isn't found in our example is a mapped folder. Mapped folders give you a way to take resources and other files in your project and add them to folders in the Visual Studio project that are mapped to the file system locations where those files need to be deployed on the SharePoint server.
For example, imagine you have developed an application page that needs to deploy a file to the SharePoint server's images folder. To do this, you would right click on the Project node and choose Add, then SharePoint Images Folder. This creates a mapped folder in the project called Images. Any folders you add to the Images folder will be created on disk (if they aren't already there), and the contents of those folders will be copied to the SharePoint server's images folder when the project is deployed.

It is time for another aside regarding SharePoint terminology. We've just implied that SharePoint has an images folder—what is this, and what other special folders does SharePoint have? When you build a deployment for SharePoint, you build a SharePoint package, which is basically a CAB file (like a ZIP file, if you aren't familiar with the CAB format) that contains a set of files and instructions that are used to install your SharePoint solution. The instructions are encapsulated in one or more SharePoint feature files, which consist of XML markup that is read at install time. A special program called stsadm.exe takes the SharePoint package file (which is a CAB file with a .WSP extension) and reads the SharePoint feature files in the package to determine how to install the SharePoint solution. These SharePoint feature files in turn can refer to additional files that are packaged within the SharePoint package. Stsadm.exe then does two major things—it adds information to the SharePoint content database, and it copies files to the file system. So a SharePoint solution typically modifies the SharePoint content and configuration databases and adds files to the file system of the SharePoint server machine.

There are three general locations where SharePoint copies files to the file system of the server during deployment. The first location is the global assembly cache of the server machine.
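The stsadm.exe steps just described can also be run by hand from a command prompt on the SharePoint server. The commands below are an illustrative sketch; the package name SharePointProject1.wsp is a stand-in for whatever .WSP file your project actually produces:

```shell
REM Add the package to the farm's solution store
stsadm -o addsolution -filename SharePointProject1.wsp

REM Deploy the solution immediately, allowing assemblies into the GAC
stsadm -o deploysolution -name SharePointProject1.wsp -immediate -allowgacdeployment
```

SharePoint 2010 also provides PowerShell equivalents (Add-SPSolution and Install-SPSolution), but during day-to-day development Visual Studio performs these steps for you when you press F5.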
Solutions that have assemblies that need full trust will copy to this directory when Assembly Deployment Target is set to GlobalAssemblyCache, as we saw in Table 2-2.

The second location is the set of directories specific to a web application (an IIS concept we described in Chapter 1). One of those web application-specific directories is the bin directory. This is where assemblies are deployed if you set the Assembly Deployment Target property to WebApplication. To determine where the web application directory is, launch the Internet Information Services (IIS) Manager on the SharePoint server (use the search box in the Start menu to find it). Once you've launched the IIS Manager, expand the Sites folder and find the web application you are interested in—in a default install it will be called SharePoint - 80, as shown in Figure 2-6. Right click on the SharePoint - 80 node and pick Explore from the context menu. This opens the base directory for your web application, as shown in Figure 2-7. Of interest here are several directories and files you may use. The web.config file is used to configure ASP.NET-specific settings—you have to modify this file for some SharePoint development scenarios we will see later in this book. The bin folder is the bin directory associated with the web application, where assemblies are sometimes deployed. There are other directories here that are used for web part development, such as the wpresources folder.

Figure 2-6 The Internet Information Services (IIS) Manager showing the web application SharePoint - 80

Figure 2-7 Directories and files associated with the SharePoint - 80 web application

The third location of interest for deployment is known in the SharePoint developer world as the hive, which is the location on disk where SharePoint installs feature definitions, site definitions, and other content used to provision the web site.
SharePoint builds on its own extensibility model—many of the features in the SharePoint web site correspond to actual files you can inspect and learn from in these directories. The hive can be found at Program Files\Common Files\Microsoft Shared\Web Server Extensions\14. Some of the folders found in the hive are shown in Figure 2-8.

Figure 2-8 Directories and files in the SharePoint hive

When you add a mapped folder in Visual Studio by right clicking on the Project node and choosing Add, then SharePoint Mapped Folder, you will see the dialog shown in Figure 2-9, which lets you view all the folders in the hive to which you might want to deploy items. In Figure 2-9, we have expanded the TEMPLATE folder, which is the main place to which you will deploy items. In this folder, you can see there is an IMAGES folder, where you can deploy arbitrary images you want to use from web parts or application pages. There are other directories as well—for example, the SiteTemplates folder, where you install site definition files, and the LAYOUTS folder, where you can find the master page being used for the SharePoint server. You will typically create a subdirectory within the LAYOUTS folder if you want to install your own application pages. We will learn more about the hive throughout this book.

Figure 2-9 Adding a Mapped Folder in Visual Studio to the Layouts folder

SharePoint Project Items

So now that we've seen the basic structure of an empty SharePoint project and learned a little bit more about deployment, let's consider what happens when we add a SharePoint project item to the SharePoint project. To add a SharePoint project item, right click on the Project node in Solution Explorer (titled SharePointProject1 in our example) and choose Add, then New Item... from the context menu. The Add New Item dialog shown in Figure 2-10 appears. There are a number of SharePoint project items that can be added to a SharePoint project.
Table 2-3 lists the project item types and briefly describes each one. It also lists the chapter in this book where each project item type is described in detail.

Figure 2-10 Add New Item dialog

Table 2-3. SharePoint Project Item Types

Table 2-3 shows the SharePoint project items available when you right click on the Project node and choose Add, then New Item... . In addition, there are four more SharePoint project items that are only available when you right click on an existing item in a SharePoint project and then choose Add, then New Item... . It can be confusing that these items don't appear when you have the Project node selected, only when you have an existing project item selected. But this reinforces the idea that these project items only make sense when used in conjunction with other SharePoint project items. These four additional project items are listed in Table 2-4.

Table 2-4. SharePoint Project Item Types Dependent on Other Project Item Types

For the purposes of this chapter, let's create an event receiver. An event receiver handles events that are raised by lists, items in a list, webs (SharePoint sites), and workflows. To create an event receiver, select Event Receiver from the list of project item types that can be added, as shown in the Add New Item dialog in Figure 2-10. Accept the default name (EventReceiver1) and press the Add button. The SharePoint Customization Wizard dialog appears. Here you can choose the type of event receiver you want to create, as shown in Figure 2-11.

Figure 2-11 The SharePoint Customization Wizard dialog configured to handle list item events for the Announcements list

There are five types of event receivers you can create: a list event receiver, a list item event receiver, a list e-mail event receiver, a web (SharePoint site) event receiver, and a list workflow event receiver. List item, list workflow, and list e-mail event receivers act on a specific list instance that you must also select in the dialog in Figure 2-11.
List and web event receivers pertain to a web scope and act on the current SharePoint site to which you are deploying. For this example, let's create an event receiver that handles list item events for the Announcements list associated with the SharePoint site. To do this, pick List Item Events from the first drop-down in the dialog and select Announcements as the specific list for which we will handle events. After making these selections, the list box at the bottom of the dialog shows the different list item events that can be handled by your code. Check the check boxes next to An item is being added, An item is being deleted, An item was added, and An item was deleted. The dialog should look like Figure 2-11. Then press the Finish button. Visual Studio adds the SharePoint project item representing an event receiver. The resulting project structure now looks like Figure 2-12.

Figure 2-12 A SharePoint project with a single SharePoint project item for an Event Receiver

Exploring a SharePoint Project Item

Now that we have added our first SharePoint project item to this project, we will explore some of the new things we can see in the project. First note that there is a project item icon in Solution Explorer titled EventReceiver1, and parented under that icon are two additional items: an XML file called Elements.xml and a code file called EventReceiver1.cs. As we will see throughout this book, this is a common pattern for many SharePoint project items. The root SharePoint project item node EventReceiver1 has properties that configure the event receiver. This root node is actually a folder that contains two files. The Elements.xml file contains the XML that is used to describe the customization to SharePoint and is read by stsadm.exe as part of installing and configuring the solution.
The EventReceiver1.cs file contains custom code that defines the behavior of the SharePoint customization and will be compiled into an assembly to be deployed to the SharePoint site. Let's consider each of these files in more detail.

The Elements.xml File

The first file to consider is the Elements.xml file. This file is sometimes referred to as an element manifest, and is an XML file that contains information that describes the SharePoint item being created—in this case, an event receiver. Behind the scenes, Visual Studio will refer to this Elements.xml file in a feature file it has created. The feature file in turn is contained by a package—a package can contain one or more features, as shown in Figure 2-13. When Visual Studio deploys the package, each feature file and associated Elements.xml file will be copied to the SharePoint server. SharePoint will read the feature file, which will refer to the Elements.xml file. The Elements.xml file, as we will see, in turn refers to event handlers defined in an assembly. Once SharePoint has read the feature file and associated Elements.xml and assembly files, it can make the feature available for activation in the SharePoint site. We will consider the Visual Studio project support for features and packages in more detail later in this chapter and in Chapter 11, "Packaging and Deployment." Note that in this diagram, one feature has custom code associated with it, represented by a .NET assembly. It is possible for multiple features to use code written within the same assembly.

Figure 2-13 The relationship among a package, a feature, element manifests, and an assembly

When you click on the Elements.xml file node in the Solution Explorer, you will see several properties in the Properties window that can be configured, as shown in Figure 2-14.
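To make the package/feature/manifest relationship concrete, here is a sketch of the kind of feature file Visual Studio generates behind the scenes. The Id and Title values shown are placeholders, not the values Visual Studio will generate for your project:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Feature file: points SharePoint at the element manifest(s) to read -->
<Feature xmlns="http://schemas.microsoft.com/sharepoint/"
         Id="00000000-0000-0000-0000-000000000000"
         Title="SharePointProject1 Feature1"
         Scope="Web">
  <!-- Each ElementManifest entry refers to one Elements.xml in the feature folder -->
  <ElementManifests>
    <ElementManifest Location="EventReceiver1\Elements.xml" />
  </ElementManifests>
</Feature>
```

When the feature is activated, SharePoint reads each referenced element manifest to discover the customizations (here, the event receiver registrations) the feature provides.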
Note that the properties in the Properties window change if you click in the contents of the Elements.xml file, so be sure you've clicked on the node in the Solution Explorer tree view. These properties are organized into three categories: Advanced, Misc, and SharePoint. Let's consider the properties in each of these three categories.

Figure 2-14 The properties associated with the Elements.xml file node

Under the Advanced category there are four properties: Build Action, Copy to Output Directory, Custom Tool, and Custom Tool Namespace. These properties tell Visual Studio how to process the Elements.xml file, and you should leave them set to their original values—Build Action is set to Content, Copy to Output Directory is set to Do not copy, and the other two properties have no setting.

Under the Misc category there are two properties—the File Name and the Full Path to the file, so you can locate where it is stored on disk. As with the Advanced properties, there is no good reason to change these properties; they should be left set to their original values.

Finally, the SharePoint category of properties includes the property Deployment Location, with child properties Root and Path, and the property Deployment Type. The Deployment Location property tells you where on the SharePoint server the Elements.xml file will be deployed when you build and deploy the solution. In our example, it is set to {SharePointRoot}\Template\Features\{FeatureName}\EventReceiver1\. Here, {SharePointRoot} is a substitution token that will be replaced by the actual root file location where SharePoint features are deployed on a server, typically a path such as "C:\Program Files\Common Files\Microsoft Shared\web server extensions\14\", although SharePoint could be installed to a different drive or program file location than in this example.
Another term you will hear used for this set of directories found under {SharePointRoot} is the SharePoint hive. {FeatureName} is another substitution token that will be replaced by the name of the feature that this SharePoint project item file is associated with, in our example: Feature1. The Deployment Type property is set to ElementManifest—this reflects that Elements.xml is an element manifest and must be deployed to a folder corresponding to the feature with which the SharePoint project item file is associated. Changing this property would be a bad idea for this file, because it would change the location where the file is deployed to one not appropriate for an element manifest.

Now that we've considered the properties associated with the Elements.xml file node, double click on the Elements.xml file node in the Solution Explorer window to open the contents of the file. The contents of the file are shown in Listing 2-1. Note that this is representative of the contents of an Elements.xml file for an event receiver. For other SharePoint project item types—for example, a list definition—the contents of the Elements.xml file will look quite different. This can be confusing to new SharePoint developers, who may think that all Elements.xml files have similar contents when in truth, the contents of Elements.xml are specific to the SharePoint project item type.

Listing 2-1: The Elements.xml file defining event receivers for four list item events

<?xml version="1.0" encoding="utf-8"?>
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <Receivers ListTemplateId="104">
    <Receiver>
      <Name>EventReceiver1ItemAdding</Name>
      <Type>ItemAdding</Type>
      <Assembly>$SharePoint.Project.AssemblyFullName$</Assembly>
      <Class>SharePointProject1.EventReceiver1.EventReceiver1</Class>
      <SequenceNumber>10000</SequenceNumber>
    </Receiver>
    <Receiver>
      <Name>EventReceiver1ItemDeleting</Name>
      <Type>ItemDeleting</Type>
      <Assembly>$SharePoint.Project.AssemblyFullName$</Assembly>
      <Class>SharePointProject1.EventReceiver1.EventReceiver1</Class>
      <SequenceNumber>10000</SequenceNumber>
    </Receiver>
    <Receiver>
      <Name>EventReceiver1ItemAdded</Name>
      <Type>ItemAdded</Type>
      <Assembly>$SharePoint.Project.AssemblyFullName$</Assembly>
      <Class>SharePointProject1.EventReceiver1.EventReceiver1</Class>
      <SequenceNumber>10000</SequenceNumber>
    </Receiver>
    <Receiver>
      <Name>EventReceiver1ItemDeleted</Name>
      <Type>ItemDeleted</Type>
      <Assembly>$SharePoint.Project.AssemblyFullName$</Assembly>
      <Class>SharePointProject1.EventReceiver1.EventReceiver1</Class>
      <SequenceNumber>10000</SequenceNumber>
    </Receiver>
  </Receivers>
</Elements>

The contents of the Elements.xml file have a root element called Elements. Elements has a child element called Receivers. Receivers contains a Receiver element for each of the four event receivers we defined for our Event Receiver project item. Each Receiver element contains five subelements, listed in Table 2-5. You might also notice the ListTemplateId attribute, which is set to 104.
To find out where this magic number comes from, use the Server Explorer that we saw in Chapter 1 to browse the site. Under Team Site, List Templates, Communications, select the Announcements list template. In the Properties window, you will see that its Type_Client property is 104. The number 104 tells SharePoint to associate the event receiver with the Announcements list definition.

Table 2-5. Subelements Contained within the Receiver Element in Elements.xml

As you might imagine, the Elements.xml or element manifest for other SharePoint project item types contains different content that defines that particular SharePoint project item type. Every Elements.xml file, regardless of type, has a root element called Elements, however.

The Code File (EventReceiver1.cs)

Below the SharePoint project item node EventReceiver1 you will see a code file called EventReceiver1.cs, shown in Listing 2-2. This contains a class that derives from Microsoft.SharePoint.SPItemEventReceiver. Event handlers are added with calls to the base class implementation of the event handler. Note that the names of these event handlers map to the names used in the Elements.xml file.

Listing 2-2: EventReceiver1.cs

using System;
using System.Security.Permissions;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Security;

namespace SharePointProject1.EventReceiver1
{
    /// <summary>
    /// List Item Events
    /// </summary>
    public class EventReceiver1 : SPItemEventReceiver
    {
        /// <summary>
        /// An item is being added.
        /// </summary>
        public override void ItemAdding(SPItemEventProperties properties)
        {
            base.ItemAdding(properties);
        }

        /// <summary>
        /// An item is being deleted.
        /// </summary>
        public override void ItemDeleting(SPItemEventProperties properties)
        {
            base.ItemDeleting(properties);
        }

        /// <summary>
        /// An item was added.
        /// </summary>
        public override void ItemAdded(SPItemEventProperties properties)
        {
            base.ItemAdded(properties);
        }

        /// <summary>
        /// An item was deleted.
        /// </summary>
        public override void ItemDeleted(SPItemEventProperties properties)
        {
            base.ItemDeleted(properties);
        }
    }
}

We will add some code to the ItemDeleting and ItemAdded events, as shown in Listing 2-3. In ItemDeleting, we first call the base ItemDeleting method. Then we use the properties parameter that is passed to the function and use its ListItem property. The ListItem property is a parameterized property to which we can pass a field name to read and write a field from the item being deleted from the list.
We use the ListItem property to access the Title field and append an asterisk to it. Next, we call the Update method on the ListItem to update the item being deleted with the new Title. Finally, we use the Cancel property on the properties parameter to cancel the deletion of the item. We can stop an item from being deleted in the ItemDeleting event because this event fires before the item is deleted—the ItemDeleted event happens after the item is deleted, so putting this code into that event would not work.

We also add code to the ItemAdded event. Here, we again use the ListItem property on the properties parameter passed to the event. We modify the Title to add the string " added" to it after the item is added to the list. We then call the Update method on the ListItem to update the item that was added so our change will be shown in the list.

Listing 2-3: EventReceiver1.cs with custom event handlers for ItemDeleting and ItemAdded

namespace SharePointProject1.EventReceiver1
{
    public class EventReceiver1 : SPItemEventReceiver
    {
        /// <summary>
        /// An item is being deleted.
        /// </summary>
        public override void ItemDeleting(SPItemEventProperties properties)
        {
            base.ItemDeleting(properties);
            SPWeb web = properties.OpenWeb();
            properties.ListItem["Title"] = properties.ListItem["Title"] + "*";
            properties.ListItem.Update();
            properties.Cancel = true;
        }

        /// <summary>
        /// An item was added.
        /// </summary>
        public override void ItemAdded(SPItemEventProperties properties)
        {
            base.ItemAdded(properties);
            SPWeb web = properties.OpenWeb();
            properties.ListItem["Title"] = properties.ListItem["Title"] + " added";
            properties.ListItem.Update();
        }

        /// <summary>
        /// An item was deleted.
        /// </summary>
        public override void ItemDeleted(SPItemEventProperties properties)
        {
            base.ItemDeleted(properties);
        }
    }
}

The Root Project Item Node or Folder (EventReceiver1)

When you click on the root level SharePoint project item node EventReceiver1, you will see several properties in the Properties window that can be configured, as shown in Figure 2-15. First note that the Properties window indicates that EventReceiver1 is a folder—the Properties window says Folder Properties.
The properties for EventReceiver1 are organized into three categories: Misc, SharePoint, and SharePoint Events. Let's consider the properties in each of these three categories.

Figure 2-15 The properties associated with the EventReceiver1 node

Misc Properties

The sole property under the Misc category in Figure 2-15 is Folder Name. If you change the name of the project item node—either by changing the Folder Name property or by renaming the project item node in the Solution Explorer—Visual Studio will automatically change some, but not all, of the areas in the project that refer to the project item node. For example, if you were to rename EventReceiver1 to EventReceiver2, Visual Studio automatically fixes up the feature Feature1 associated with the SharePoint project item to refer to the new project item name. But it doesn't change the names of the files contained in the (now EventReceiver2) folder or any of the classes that were created. So after changing the node to EventReceiver2, the code file is still titled EventReceiver1.cs and the class inside the code file is still titled EventReceiver1. More critically, the Elements.xml file still refers to EventReceiver1.

You could manually rename EventReceiver1.cs to EventReceiver2.cs and even refactor the class contained in the newly renamed EventReceiver2.cs file to be EventReceiver2. In this case, the Elements.xml file will be updated correctly. But in some cases the Elements.xml file will not be updated correctly after a refactor. For example, if you change the namespace that your class is defined in from SharePointProject1.EventReceiver1 to SharePointProject1.MyEventReceiver, the Elements.xml file will not be updated to have the right fully qualified class names in the Class elements. You would have to manually update the Elements.xml file to ensure it contains the new SharePointProject1.MyEventReceiver namespace, or the project won't run successfully.
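For instance, after the hypothetical namespace change just described, each Receiver registration in Elements.xml would need its Class element corrected by hand along these lines:

```xml
<!-- Before the manual fix: stale namespace left behind by the refactor -->
<Class>SharePointProject1.EventReceiver1.EventReceiver1</Class>

<!-- After the manual fix: matches the new namespace -->
<Class>SharePointProject1.MyEventReceiver.EventReceiver1</Class>
```

If the Class element and the actual fully qualified type name disagree, SharePoint cannot locate the event receiver class at runtime.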
SharePoint Properties

Continuing with our exploration of the properties associated with the root level SharePoint project item node EventReceiver1, you will see a number of properties under the SharePoint category in Figure 2-15.

Feature Properties is a collection of key-value pairs that are used when deploying the feature associated with the event receiver to SharePoint. These properties are deployed with the feature and can be accessed later in SharePoint using the SPFeaturePropertyCollection object. So, for example, you might use feature properties to associate some configuration information or other static data with your feature.

The Feature Receiver set of properties includes the subproperties Assembly and Class Name. You can use these properties to specify an assembly and class name that you want to handle events that are raised for the Feature associated with your EventReceiver1 item. You can create a class in your current solution and refer to it, or include another assembly in your SharePoint package—for more information on how to include additional assemblies in your SharePoint package, see Chapter 11, "Packaging and Deployment." Events that your feature receiver can handle include FeatureInstalled, FeatureUninstalling, FeatureActivated, and FeatureDeactivating. So you could write code that runs when your EventReceiver1 is installed and uninstalled from the SharePoint site, perhaps to add additional resources or lists required on install and remove those additional resources or lists on uninstall. We will discuss feature event receivers more in Chapter 5, "SharePoint Event Receivers."

Project Output References are used to tell Visual Studio about any dependent assemblies your project item requires to run. For example, maybe your event receiver uses a helper class library called HelperLibrary.dll.
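As a preview of Chapter 5, a feature receiver is a class deriving from SPFeatureReceiver; the sketch below shows its general shape. The class name and the "Feature1 Log" list it provisions are illustrative choices, not something the wizard generates for you:

```csharp
using System;
using Microsoft.SharePoint;

namespace SharePointProject1
{
    // Handles events raised for the feature itself (install, activate, etc.),
    // as opposed to SPItemEventReceiver, which handles list item events.
    public class Feature1EventReceiver : SPFeatureReceiver
    {
        public override void FeatureActivated(SPFeatureReceiverProperties properties)
        {
            // For a Web-scoped feature, Parent is the SPWeb the feature
            // is being activated against
            SPWeb web = properties.Feature.Parent as SPWeb;
            if (web != null)
            {
                // Illustrative: provision an extra list on activation
                web.Lists.Add("Feature1 Log", "Created on activation",
                              SPListTemplateType.GenericList);
            }
        }

        public override void FeatureDeactivating(SPFeatureReceiverProperties properties)
        {
            // Clean-up logic (e.g., removing the list above) would go here
        }
    }
}
```

You would then point the Feature Receiver Assembly and Class Name properties at this class so SharePoint knows to invoke it.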
You can use the Project Output References property to tell Visual Studio about the helper class library, and then Visual Studio will package the dependent assembly in the final solution. For more on Project Output References, see Chapter 11, "Packaging and Deployment."

Finally, Safe Control Entries is used to designate whether an ASPX control or web part is trusted and can be used by users on the site. In the context of an event receiver, this property is not applicable and doesn't do anything. We will see this property used in Chapter 9, "SharePoint Web Parts."

SharePoint Events Properties

In Figure 2-15, under the category SharePoint Events are a number of properties that correspond to the list of events we saw earlier in the wizard shown in Figure 2-11. For the events that we checked earlier, the properties Handle ItemAdded, Handle ItemAdding, Handle ItemDeleted, and Handle ItemDeleting are set to True. To add and remove events that this event receiver handles, you can change which SharePoint event "Handle" properties are set to True or False.

To see the impact of setting a property that was previously set to False to True, open the Elements.xml and EventReceiver1.cs files under the EventReceiver1 node and arrange your windows so these files are visible while you change "Handle" properties under the SharePoint Events category. You will notice that setting a property like Handle ItemCheckedIn, which was previously False, to True adds a block of XML to the Elements.xml file and adds a new event handler to the EventReceiver1.cs file. If you then set that property back to False, you will see that Visual Studio removes the block of XML that it previously added from the Elements.xml file but leaves the event handler it added to the EventReceiver1.cs file intact. It leaves the code in EventReceiver1.cs intact because you might have written some code in the handler, and Visual Studio wants to preserve any code you wrote.
Also, having an inactive event handler in EventReceiver1.cs (inactive because it isn't registered in the Elements.xml file) will have no ill effect on your remaining active event handlers (active because they are registered in the Elements.xml file).

Features and Packages in a Visual Studio Project

We've now explored the properties and files that are associated with a new SharePoint project item. We've seen the Elements.xml file, the code file associated with an event receiver, and the properties associated with each of these files and the root EventReceiver1 folder for the SharePoint project item. You may have noticed that when we added the event receiver project item to our blank solution, some new items appeared under the Features folder. Let's examine the Features and Package folders in the SharePoint project to start to get an idea of what Visual Studio does to package and deploy our SharePoint solution.

Features

Just to make things a little more interesting, let's create a second event receiver. Follow the steps we did earlier in the chapter to create a second event receiver called EventReceiver2. For the second event receiver, choose List Item Events as the type of event receiver to create, use Calendar as the event source, and handle the event when an item is being added. Now double click on the project item called Feature1 under the Features folder. The Visual Studio Feature designer appears, as shown in Figure 2-16. Note that we now have two event receivers in our solution, EventReceiver1 and EventReceiver2, and Feature1 is configured to install both event receivers.

Figure 2-16 Visual Studio's Feature designer with two SharePoint project items to deploy

It is possible to add more features to the Features folder. For example, maybe you want EventReceiver1 and EventReceiver2 to be deployed as separate features. You could create a separate feature called Feature2 and install EventReceiver1 in Feature1 and EventReceiver2 in Feature2.
Doing this would enable the event receivers to be installed and uninstalled separately. Another reason you might need separate features is when you have SharePoint project items you want to deploy that need to be installed at different scopes. If you drop down the Scope drop-down in Figure 2-16, you can see that a feature can be installed to one of four scopes: Farm, Site (the site collection level), Web (the site level), and WebApplication (all sites hosted by an IIS web application). For historical reasons, SharePoint sometimes uses the word Site to refer to a site collection and Web to refer to a SharePoint site.

Let's create a second feature by right clicking on the Features folder and choosing Add Feature. A new feature called Feature2 is created. In the Feature designer that appears for Feature2, click on the EventReceiver2 SharePoint item and click the > button to move the item from the left-hand list to the right-hand list. Then, back in the Feature1 designer, ensure that EventReceiver2 is not installed by Feature1 by clicking on EventReceiver2 and pressing the < button to move it from the right-hand list to the left-hand list. The resulting Feature1 designer is shown in Figure 2-17. This shows that Feature1 will now install only EventReceiver1, not EventReceiver2. The right-hand list contains the items that will be installed; the left-hand list contains other items in the solution that have not been added to this feature. Also in Figure 2-17, we have expanded the Files outline and the Feature Activation Dependencies area of the Feature designer. We will discuss these two areas of the designer next.

Figure 2-17 Visual Studio's Feature designer with one SharePoint project item to deploy

The Files outline shows the actual files that will be included in the feature to install the associated SharePoint project item. In this case, you can see that the Elements.xml file will be included.
The assembly built by the current project is also implicitly included in the feature, even though it doesn't show in this designer. Also, at the bottom of the dialog you can now see the Feature Activation Dependencies area of the Feature designer. Here you can add dependencies that your feature has on other features in the solution, or on other features that must already be installed on the SharePoint site where this feature will be installed. For example, you might have a situation in which you've created two features in your solution but Feature1 needs Feature2 to be installed first. Let's enforce this constraint. Click the Add... button in the Feature Activation Dependencies area for Feature1 to specify that Feature2 is a dependency. When you click the Add... button, the dialog shown in Figure 2-18 appears. If you click the feature SharePointProject1.Feature2 and then press the Add button, Feature2 will be added to the list of Feature Activation Dependencies for Feature1.

Figure 2-18 The Add Feature Activation Dependencies dialog

You also might want to add a dependency on another custom or built-in SharePoint feature. For example, you might need to ensure that the Announcement Lists feature is installed on the SharePoint site because your event receiver modifies or creates announcement lists. If announcement lists are not there, your event receiver will fail. The dialog shown in Figure 2-18 also lets you add dependencies on SharePoint features not in your solution by specifying the Feature ID of the feature on which you are dependent. As you might remember from Chapter 1, you can use the Server Explorer and the Properties window to determine the Feature ID for a particular feature, as shown in Figure 1-76's DefinitionID. This ID could be added as a custom dependency for our Feature1 using the dialog in Figure 2-18.

Package Designer

Features in a project are useless unless they are deployed in what is called a Package, or .WSP file.
Visual Studio helps you configure the package created by your solution with the Package Designer. To see the Package Designer, double-click the Package.package project item under the Package folder in your solution. The designer shown in Figure 2-19 appears.

Figure 2-19 The Package Designer

When you first open the designer it won't exactly match Figure 2-19, because Visual Studio automatically places both features we created into the items to install in the package built by the project. We used the < button to remove Feature2 from the package, because we don't really want to install EventReceiver2 since we have added no code to it yet. Each project can build only one package, but you can have a package created by other projects in your solution. Visual Studio also lets you mix and match where features come from—that is, a feature can come from Project1 in a solution but be installed by the package built by Project2. If you click the Advanced button at the bottom of the Package Designer, options are provided to add additional assemblies to the package—either assemblies created by other projects in the solution or additional external assemblies. The Advanced page of the Package Designer is shown in Figure 2-20.

Figure 2-20 The Package Designer's Advanced Page

This has given you a brief introduction to Visual Studio's support for features and packages. We will consider features and packages in more detail in Chapter 11, "Packaging and Deployment."

Building

So we now have a SharePoint solution with two event receivers (EventReceiver1 and EventReceiver2), two features (Feature1 and Feature2), and one package (Package.package). Feature1 includes EventReceiver1 and Feature2 includes EventReceiver2, but Package.package includes only Feature1, so EventReceiver2 will not be packaged or installed. If we build now, we will get a missing-dependency error, because we made Feature2 a dependency of Feature1 and Feature2 is not currently being packaged.
Use the Feature designer's Feature Activation Dependencies area for Feature1 to remove the dependency on Feature2 by clicking the Remove button. We are now ready to build our project in preparation for running and debugging it. When you build the project by choosing Build Solution from the Build menu, the Output window indicates pretty much what you would expect—it says that a DLL called SharePointProject1.dll has been built in the Debug folder of your project. When you go to the bin\Debug directory for the project in Windows Explorer, you will see the DLL and the PDB for the DLL, which contains debugging information. If you package the project by choosing Package from the Build menu, you will see something a little different, as shown in Figure 2-21. In addition to the DLL and PDB files, there is now a .WSP file. This is the SharePoint package file that the Feature and Package designers helped us to create.

Figure 2-21 What Visual Studio built after choosing Package from the Build menu

Let's look at the .WSP file in a little more detail. Click the SharePointProject1.wsp file in the bin\Debug folder of your project. Copy the SharePointProject1.wsp file, then paste to make a copy of it, and rename its extension from .WSP to .CAB. Remember we said that a .WSP file is actually a .CAB file? Now that we've renamed it to a .CAB file, you should be able to double-click it and see the contents of the .WSP file as shown in Figure 2-22.

Figure 2-22 Inside the .WSP File

As you can see, there are four files inside the .WSP file: Elements.xml, Feature.xml, manifest.xml, and the assembly created by our project (a copy of the one we saw in the debug directory). Let's look at the contents of these files briefly. Drag Elements.xml, Feature.xml, and manifest.xml out of the renamed .CAB file to your desktop. Manifest.xml is shown in Listing 2-4 and is the top-level manifest for the .WSP file.
It tells about any assemblies included in the package (in this case SharePointProject1.dll). Additional assemblies could be included if you use the Advanced page of the Package Designer to add additional project or external assemblies. Manifest.xml also lists any features contained in the package, in this case Feature1. You can see what manifest.xml will look like within Visual Studio by double-clicking the Package.package project item to show the Package Designer and then clicking the Manifest button at the bottom of the Package Designer. The same file shown in Listing 2-4 will be shown.

Listing 2-4: Manifest.xml inside the .WSP file

<?xml version="1.0" encoding="utf-8"?>
<Solution xmlns="" SolutionId="00257823-9b84-48c4-814a-fd754b21073f" SharePointProductVersion="14.0">
  <Assemblies>
    <Assembly Location="SharePointProject1.dll" DeploymentTarget="GlobalAssemblyCache" />
  </Assemblies>
  <FeatureManifests>
    <FeatureManifest Location="SharePointProject1_Feature1\Feature.xml" />
  </FeatureManifests>
</Solution>

Feature.xml is shown in Listing 2-5 and corresponds to Feature1 of our two features. In fact, you can see this XML file by double-clicking Feature1.feature to show the Feature designer and then clicking the Manifest button at the bottom of the form. This XML file describes the feature and tells about the manifests included in the feature. Because Feature1 includes EventReceiver1, there is one Elements.xml file associated with EventReceiver1, which is the same Elements.xml file that we found under the EventReceiver1 folder.

Listing 2-5: Feature.xml inside the .WSP file

<?xml version="1.0" encoding="utf-8"?>
<Feature xmlns="" Title="SharePointProject1 Feature1" Id="d4050cd0-e7d5-48b0-88a2-fb4257b461b7" Scope="Site">
  <ElementManifests>
    <ElementManifest Location="EventReceiver1\Elements.xml" />
  </ElementManifests>
</Feature>

Elements.xml is just the same file we saw under the EventReceiver1 folder, as shown in Listing 2-6.
As you can see, there is no magic here: the .WSP file packages up files we've already been able to see in the Package and Feature designers, plus the Elements.xml file we edited in Visual Studio associated with EventReceiver1.

Listing 2-6: Elements.xml inside the .WSP file

<Elements xmlns="">
  <Receivers>
    <Receiver>
      ...
      <SequenceNumber>1000</SequenceNumber>
    </Receiver>
  </Receivers>
</Elements>

Debugging

Now that we've built our project and created the .WSP file, let's debug our solution. To debug the solution, press F5 or choose Start Debugging from the Debug menu. Now we see much more activity in the Output window, as shown in Listing 2-7. The Build phase does what we saw before—compiles a DLL from any code in the project and builds a package. Then, during Deploy, several things of interest happen. First, there are some steps to retract the previous version of the solution. This is so the edit code, run, edit code, and run again cycle will work. Visual Studio automatically removes the package and features you installed in your last debug session before deploying your updated package, to ensure that you will always have the most recent version of your solution on the server and that the old one won't conflict with the new one. You can also manually retract a solution from the server using the Retract command from the Build menu—for example, if you want to ensure that the server you were testing on doesn't have your solution on it when you are done.

The next thing that Visual Studio does is deploy your .WSP file to the server—the equivalent of using stsadm.exe on the .WSP file at the command line. This installs the package, but there is also a second step after installation called activation. An installed solution is still not active for the web site. Visual Studio also activates the features in the solution to ensure they are installed and active on the web site. Visual Studio will also do an IIS application pool recycle if necessary—this ensures that the most current version of the site is running with your new solution installed on it.
Finally, Visual Studio launches the site URL in a browser window.

Listing 2-7: Output when you start the solution

------ Build started: Project: SharePointProject1, Configuration: Debug Any CPU ------
SharePointProject1 -> C:\Users\ecarter\Documents\Visual Studio 2010\Projects\SharePointProject1\SharePointProject1\bin\Debug\SharePointProject1.dll
Successfully created package at: C:\Users\ecarter\Documents\Visual Studio 2010\Projects\SharePointProject1\SharePointProject1\bin\Debug\SharePointProject1.wsp
------ Deploy started: Project: SharePointProject1,
==========

Let's see if our event receiver is working. As you may remember, we tied our event receiver to the Announcements list. First, let's set a breakpoint in EventReceiver1.cs. Click on the ItemAdded event in that file and add a breakpoint by clicking in the left margin of the code editor. Now go back to the web browser that Visual Studio opened for you and navigate to the Announcements list. To get there, click the Site Actions drop-down in the top left corner of the page and choose View All Site Content. On the page that appears, scroll down to the Lists section and click the Announcements list. Click the Add new announcement link at the bottom of the list, as shown in Figure 2-23.

Figure 2-23 The Announcements List

In the dialog that pops up, type some text for your new announcement, something like "Test" as shown in Figure 2-24. Then click the Save button.

Figure 2-24 Creating a new announcement

When you click the Save button, your breakpoint should be hit in the debugger in the ItemAdded event. You can step through the code with the F10 key and watch it modify the newly added item by appending the text "added" to it. Press F5 to continue. You will now see the announcements list in the browser, with the newly added announcement showing the text "added" appended to the "Test" text you entered.
You can also click the check box next to the newly added announcement and press the Delete Item button to try to delete it. In this case, the ItemDeleting event receiver we created runs and cancels the deletion of the announcement. SharePoint shows the dialog in Figure 2-25, notifying the user that the announcement cannot be deleted.

Figure 2-25 An Event Receiver canceled the Request dialog

To stop debugging, close the browser window or choose Stop Debugging from the Debug menu.
http://www.informit.com/articles/article.aspx?p=1626325&amp;seqNum=2
CC-MAIN-2017-04
refinedweb
9,388
62.48
I'm playing with AMQ 5.1, and HTTP/REST in the demo application works strangely :( Posting to works fine, but I have problems with GET. It works the first time, but then I always get HTTP status 204 and an empty body, with lots of pending messages in the queues. The following problem is always reproducible. After restarting AMQ:

$ wget ""
HTTP request sent, awaiting response... 200 OK
Length: 21 [text/xml]

(works fine, next message)

$ wget ""
HTTP request sent, awaiting response... 204 No Content

(and no body)

And then I get status 204 until AMQ is restarted. So, I can GET only a single message. I'm very confused. I think I'm making some very basic error :(( I also wrote a simple client in Python that handles the cookie -- same result.

#!/usr/bin/python
import httplib2
import urllib
from time import sleep

BASE = ""
client = httplib2.Http(".cache")
cookie = None

def SendMessage(queue, body):
    url = BASE + queue
    print "[%s] -> %s" % (body, url)
    body = urllib.urlencode({"type": "queue", "body": body})
    headers, body = client.request(url, "POST", body,
                                   headers={"Content-type": "application/x-www-form-urlencoded"})
    print headers

def GetMessage(queue):
    global cookie
    url = BASE + queue + "?" + urllib.urlencode({"type": "queue", "readTimeout": 1000})
    print url, '->'
    headers = {}
    if cookie:
        headers['Cookie'] = cookie
    print headers
    response, body = client.request(url, "GET", None, headers)
    print response
    try:
        cookie = response['set-cookie']
    except KeyError:
        pass
    return body

queue = "myQueue"
SendMessage(queue, "test message1")
SendMessage(queue, "test message2")
SendMessage(queue, "test message3")
sleep(1)
print GetMessage(queue)
print GetMessage(queue)
print GetMessage(queue)

--
View this message in context: Sent from the ActiveMQ - User mailing list archive at Nabble.com.
http://mail-archives.apache.org/mod_mbox/activemq-users/200805.mbox/%3C17153428.post@talk.nabble.com%3E
In this section we have a list of questions from my personal collection, which I encountered or derived from my understanding of Java; it is highly recommended for experienced Java developers.

General Collections

- What is a load factor?
- Which collection class would you use if you had to implement queue behavior?
- What are the size and capacity of a Vector?
- Which situation would lead a Vector to throw an ArrayIndexOutOfBoundsException?
- Can you iterate a Map?

No, Map does not implement the Iterable interface; only Collection (and thus List and Set) does. Collection implements Iterable<E>, which makes Set and List iterable. A Map itself cannot be iterated; only its key part can be. For example, the following code shows how you traverse a map:

import java.util.*;

Map<String, String> map = new HashMap<String, String>();
Iterator<String> i = map.keySet().iterator();
while (i.hasNext()) {
    String value = map.get(i.next());
}

- How can you make your custom class work with the enhanced for loop?

You need to make your class implement the Iterable interface, override the iterator() method, and create an Iterator class which can iterate over the items in your class.

HashCode, Equality, toString, Identity

- Suppose I am designing a class. Is it really mandatory to override the hashCode() method, even if I am sure none of these objects will ever be used in hash-based collections like HashMap, Hashtable, or HashSet?

It is not necessary to override the hashCode method if you don't use the objects in hash-based collections. But if you have overridden the equals method, which means you want your objects to be tested for equality, the contract of equals says that if two objects are equal you must also override the hashCode method.
Let's take the following example:

class Animal {
    private int i = 12;
}

public class Test {
    public static void main(String[] args) throws Exception {
        Animal a1 = new Animal();
        Animal a2 = new Animal();
        System.out.println(a1 == a2);
        System.out.println(a1.equals(a2));
    }
}

Looking at the code, you certainly know that the a1 and a2 objects are meaningfully equal, but the output (false false) says otherwise. This is not expected. The equals call above invokes the equals method on Object, which only checks whether the two references point to the same object, and that is not the right way to check. So you would have to override the equals method; and as per the contract with hashCode, whenever equals is overridden it is necessary to override hashCode as well. Who knows, you might later need to add these objects to hash-based collections, so better do it now.

- When I invoke System.out.println() on an object whose class doesn't override the toString() method, what is the output and what does it mean?

The output comes in the format <fully qualified class name>@<hashcode>. The hashcode (in hexadecimal) comes from Object's hashCode method if it is not overridden.

- What is the problem with returning a random number in your hashCode()? I think it is most efficient, since it would create unique buckets.
- How good an idea is it to return 1 (a constant value) in your hashCode()?

In that case all your objects will fall into one single bucket, and the searching algorithm will end up invoking the equals method to find the object you are looking for, which is a highly inefficient way, but legal.

Serialization, Deserialization, Externalization

- Let's say you need to introduce a marker interface in your library. How would you add a capability to a class which implements it?
- Static fields don't get serialized, but let's say I want to save the state of such a field. How would I do it?
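Returning to the Animal example above, here is one sketch of how equals and hashCode might be overridden together so that a1.equals(a2) becomes true. The field name i is kept from the example; using java.util.Objects.hash is just one convenient choice for the hash code, not the only correct one.

```java
import java.util.Objects;

class Test {
    public static void main(String[] args) {
        Animal a1 = new Animal();
        Animal a2 = new Animal();
        System.out.println(a1 == a2);      // reference comparison: still false
        System.out.println(a1.equals(a2)); // meaningful comparison: now true
    }
}

class Animal {
    private int i = 12;

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Animal)) return false;
        // Two Animals are meaningfully equal when their i fields match.
        return this.i == ((Animal) o).i;
    }

    @Override
    public int hashCode() {
        // Equal objects must produce equal hash codes, so derive the
        // hash code from the same field that equals compares.
        return Objects.hash(i);
    }
}
```

With both methods overridden, these objects also behave correctly as keys in a HashMap or as members of a HashSet.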
Generics

- If methods are overloaded on the basis of type parameters, what is the best alternative to overloading?

Java Memory Management

- How do you set the stack size of the Java runtime?

The -Xss option can be used to set the stack size.

- What is the difference between StackOverflowError and OutOfMemoryError?

A StackOverflowError happens when the stack overflows, and an OutOfMemoryError occurs when you have drained your heap.

- How do you generate a StackOverflowError and an OutOfMemoryError programmatically?
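As a sketch of one possible answer to the last question, a StackOverflowError can be produced programmatically with unbounded recursion. An OutOfMemoryError could likewise be provoked by allocating ever-growing arrays, but that depends on the configured heap size and is slow, so it is omitted here.

```java
class ErrorDemo {
    // No base case: every call pushes another stack frame until the
    // thread's stack is exhausted and the JVM throws StackOverflowError.
    static void recurse() {
        recurse();
    }

    public static void main(String[] args) {
        try {
            recurse();
        } catch (StackOverflowError e) {
            // An Error can be caught like an exception, although doing so
            // in production code is rarely a good idea.
            System.out.println("caught StackOverflowError");
        }
    }
}
```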
http://www.bullraider.com/java/core-java/advanced-interview-questions
Network viewer

The network viewer allows a user to display an SBML model as a network. If a model has no layout information, the network viewer will create a layout based on a force-spring algorithm. If the SBML does contain a layout, the viewer will use that. The viewer can be used either from the console or from the viewer graphical user interface.

To view an Antimony model use the code:

import nwed
import tellurium as te

model = '''
S1 -> S2; v;
S2 -> S3; v;
'''
nwed.setsbml(te.antimonyToSBML(model))

To view an SBML model from a file use the code:

import nwed
import tellurium as te

nwed.setsbml(te.readFromFile('mymodel.xml'))

To get the SBML from the viewer use the following code:

import nwed

sbmlStr = nwed.getsbml()

The plugin also provides a library to manipulate the SBML layout extension, i.e., to add layout to an SBML model that has none. Further details can be found in the libSBNW documentation and the hosting Github page.

Import plugins

Tellurium allows simple GUI-based plugins for importing COMBINE and SED-ML files. For more information on COMBINE and SED-ML support in Tellurium, see the SEDML Support page. To use the plugin, open the File menu and go to Import. Here, you have the option to import either a COMBINE archive or SED-ML files in Python script or PhraSED-ML script. A window to select the file to translate will pop up. Choose a file to translate, and press Open. Translated files will be opened in a new editor window. Save the file, and now you can run the code directly using the console. Any SBML files described in the manifest file of a COMBINE archive will be translated into Antimony by default.

Rate Law Template

The Rate Law Templates plugin provides a dictionary of commonly used rate equations in systems biology. The plugin allows users to insert equations into the editor window (optionally renaming the equation parameters to match the user's own variable names).
Currently, the library consists of 47 different rate laws, and it can easily be expanded by updating the rate law database. If you do not see a tab for either the network viewer or the rate law template, go to View -> Panes and make sure Network viewer and RateLaw are checked.
http://tellurium.analogmachine.org/spyder-plugins/
#include <pfmt.h>

int lfmt(FILE *stream, long flags, char *format, ... /* arg */);

The lfmt() function retrieves a format string from a locale-specific message database (unless MM_NOGET is specified) and uses it for printf(3C)-style formatting of args. The output is displayed on stream. If stream is NULL, no output is displayed.

The lfmt() function encapsulates the output in the standard error message format (unless MM_NOSTD is specified, in which case the output is like that of printf()). It forwards its output to the logging and monitoring facility, even if stream is NULL. Optionally, lfmt() displays the output on the console with a date and time stamp.

If the printf() format string is to be retrieved from a message database, the format argument must have the following structure:

<catalog>:<msgnum>:<defmsg>

If MM_NOGET is specified, only the <defmsg> field must be specified.

The <catalog> field indicates the message database that contains the localized version of the format string. This field is limited to 14 characters selected from a set of all character values, excluding the null character (\0) and the ASCII codes for slash (/) and colon (:).

The <msgnum> field is a positive number that indicates the index of the string in the message database. If the catalog does not exist in the locale (specified by the last call to setlocale(3C) using the LC_ALL or LC_MESSAGES categories), or if the message number is out of bounds, lfmt() will attempt to retrieve the message from the C locale. If this second retrieval fails, lfmt() uses the <defmsg> field of the format argument.

If <catalog> is omitted, lfmt() will attempt to retrieve the string from the default catalog specified by the last call to setcat(3C). In this case, the format argument has the following structure:

:<msgnum>:<defmsg>
The lfmt() function will output the message "Message not found!!\n" as the format string if <catalog> is not a valid catalog name, if no catalog is specified (either explicitly or with setcat()), if <msgnum> is not a valid number, or if no message could be retrieved from the message databases and <defmsg> was omitted.

The flags argument determines the type of output (whether the format should be interpreted as it is or be encapsulated in the standard message format) and the access to message catalogs to retrieve a localized version of format. The flags argument is composed of several groups, and can take the following values (one from each group):

MM_NOSTD
Do not use the standard message format but interpret format as a printf() format. Only catalog access control flags, console display control, and logging information should be specified if MM_NOSTD is used; all other flags will be ignored.

MM_STD
Output using the standard message format (default value is 0).

MM_NOGET
Do not retrieve a localized version of format. In this case, only the <defmsg> field of format is specified.

MM_GET
Retrieve a localized version of format from <catalog>, using <msgnum> as the index and <defmsg> as the default message (default value is 0).

MM_HALT
Generate a localized version of HALT, but do not halt the machine.

MM_ERROR
Generate a localized version of ERROR (default value is 0).

MM_WARNING
Generate a localized version of WARNING.

MM_INFO
Generate a localized version of INFO.

Additional severities can be defined with the addsev(3C) function, using number-string pairs with numeric values in the range [5-255]. The specified severity is formed by the bitwise OR operation of the numeric value and the other flags arguments. If the severity is not defined, lfmt() uses the string SEV=N, where N is replaced by the integer severity value passed in flags.

MM_ACTION
Specify an action message. Any severity value is superseded and replaced by a localized version of TO FIX.

MM_CONSOLE
Display the message to the console in addition to the specified stream.
MM_NOCONSOLE
Do not display the message to the console in addition to the specified stream (default value is 0).

Major classification: identifies the source of the condition. Identifiers are: MM_HARD (hardware), MM_SOFT (software), and MM_FIRM (firmware).

Message source subclassification: identifies the type of software in which the problem is spotted. Identifiers are: MM_APPL (application), MM_UTIL (utility), and MM_OPSYS (operating system).

The lfmt() function displays error messages in the following format:

label: severity: text

If no label was defined by a call to setlabel(3C), the message is displayed in the format:

severity: text

If lfmt() is called twice to display an error message and a helpful action or recovery message, the output may appear as follows:

label: severity: text
label: TO FIX: text

Upon successful completion, lfmt() returns the number of bytes transmitted. Otherwise, it returns a negative value:

-1
Write error to stream.

-2
Cannot log and/or display at console.

Since lfmt() uses gettxt(3C), it is recommended that lfmt() not be used.

Example 2: The following example

setlabel("UX:test");
lfmt(stderr, MM_INFO|MM_SOFT|MM_UTIL,
    "test:23:test facility is enabled\n");

displays the message to stderr and makes it available for logging:

UX:test: INFO: test facility enabled

See attributes(5) for descriptions of the following attributes:

addsev(3C), gettxt(3C), pfmt(3C), printf(3C), setcat(3C), setlabel(3C), setlocale(3C), attributes(5), environ(5)
http://docs.oracle.com/cd/E36784_01/html/E36874/lfmt-3c.html
NAME
explain_fputc_or_die - output of characters and report errors

SYNOPSIS
#include <libexplain/fputc.h>

void explain_fputc_or_die(int c, FILE *fp);

DESCRIPTION
The explain_fputc_or_die function is used to call the fputc(3) system call. On failure an explanation will be printed to stderr, obtained from explain_fputc(3), and then the process terminates by calling exit(EXIT_FAILURE).

This function is intended to be used in a fashion similar to the following example:

explain_fputc_or_die(c, fp);

c
The c, exactly as to be passed to the fputc(3) system call.

fp
The fp, exactly as to be passed to the fputc(3) system call.

Returns:
This function only returns on success. On failure, it prints an explanation and exits.

SEE ALSO
fputc(3) - output of characters
explain_fputc(3) - explain fputc(3) errors
exit(2) - terminate the calling process

COPYRIGHT
libexplain version 0.19
Copyright (C) 2008 Peter Miller

explain_fputc_or_die(3)
http://huge-man-linux.net/man3/explain_fputc_or_die.html
Everyone,

You were correct that the test1 module shouldn't have worked at all. I needed to change the first line to:

from test2 import TestError, TryItOut

I had test2 open in the PythonWin IDE so it was already in the namespace. Which brings me back to my original issue of why it works here and not in my larger module. Is it normal to have to import user-defined exceptions? I don't remember having to import any of the built-in exceptions. So, here is the output of the fixed test1:

test2.TestError
fudge

Am I doing this all wrong?

Thanks,
Derek Basch

Derek Basch fed this fish to the penguins on Sunday 29 September 2002 07:22 pm:

> I know that TestError has been imported because I can
> put a "print TestError" before the except clause in
> test1 and I get "test2.TestError". Can anyone tell me
> why I am having to put the test2.TestError instead of
> just TestError? It confuses me even further that my
> small test modules work like I would expect and it
> doesn't work on the larger modules.

I don't even see why "test1" is working. You explicitly import the exception by name, but you didn't bring in "TryItOut"... That alone should have raised an exception.

[wulfraed at b... wulfraed]$ python test1.py
Traceback (most recent call last):
  File "test1.py", line 14, in ?
    letitrip.run()
  File "test1.py", line 7, in run
    tryitout = TryItOut()
NameError: global name 'TryItOut' is not defined
https://mail.python.org/pipermail/python-list/2002-September/131167.html
NAME
Tcl_LinkVar, Tcl_UnlinkVar, Tcl_UpdateLinkedVar - link Tcl variable to C variable

SYNOPSIS
#include <tcl.h>

int Tcl_LinkVar(interp, varName, addr, type)
Tcl_UnlinkVar(interp, varName)
Tcl_UpdateLinkedVar(interp, varName)

ARGUMENTS
Tcl_Interp *interp (in)
Interpreter that contains varName. Also used by Tcl_LinkVar to return error messages.

char *varName (in)
Name of global variable. Must be in writable memory: Tcl may make temporary modifications to it while parsing the variable name.

char *addr (in)
Address of C variable that is to be linked to varName.

int type (in)
Type of C variable. Must be one of TCL_LINK_INT, TCL_LINK_DOUBLE, TCL_LINK_BOOLEAN, or TCL_LINK_STRING, optionally OR'ed with TCL_LINK_READ_ONLY to make the Tcl variable read-only.

DESCRIPTION

TCL_LINK_INT
The C variable is of type int. Any value written into the Tcl variable must have a proper integer form acceptable to Tcl_GetInt; attempts to write non-integer values into varName will be rejected with Tcl errors.

TCL_LINK_DOUBLE
The C variable is of type double. Any value written into the Tcl variable must have a proper real form acceptable to Tcl_GetDouble; attempts to write non-real values into varName will be rejected with Tcl errors.

TCL_LINK_BOOLEAN
The C variable is of type int. If its value is zero then it will read from Tcl as ``0''; otherwise it will read from Tcl as ``1''. Whenever varName is modified, the C variable will be set to a 0 or 1 value. Any value written into the Tcl variable must have a proper boolean form acceptable to Tcl_GetBoolean; attempts to write non-boolean values into varName will be rejected with Tcl errors.

TCL_LINK_STRING
The C variable is of type char *. If its value is not null then it must be a pointer to a string allocated with Tcl_Alloc. Whenever the Tcl variable is modified, the current C string will be freed and new memory will be allocated to hold a copy of the variable's new value.
If the C variable contains a null pointer then the Tcl variable will read as ``NULL''.

KEYWORDS
boolean, integer, link, read-only, real, string, traces, variable
http://linux.about.com/library/cmd/blcmdl3_Tcl_LinkVar.htm
Add this to your package's pubspec.yaml file:

dependencies:
  kod: "^1.0.4"

You can install packages from the command line with pub:

$ pub get

Alternatively, your editor might support pub get. Check the docs for your editor to learn more.

Now in your Dart code, you can use:

import 'package:kod/kod.dart';

We analyzed this package on May 20.

line 9, column 29 of /tmp/pana-FXGTMQ/lib/src/kod_base.dart: Expected to find ';'.

final Env replEnv = new Env()
^

Fix platform conflicts. Error(s) prevent platform classification.

Make sure dartfmt runs.

Maintain CHANGELOG.md. Changelog entries help clients to follow the progress in your code.

Fix analysis and formatting issues. Analysis or formatting checks reported 34 errors and 5 hints.

Strong-mode analysis of lib/src/kod_base.dart failed with the following error:

line: 49 col: 20 Undefined class 'MalType'.

Run dartfmt to format lib/kod.dart.

The description is too short. Add more detail about the package, what it does, and what its target use case is. Try to write at least 60 characters.
https://pub.dartlang.org/packages/kod
We already have a hint of the dynamic nature of the Spirit framework. This capability is fundamental to Spirit. Dynamic parsing is a very powerful concept. We shall take this concept further through run-time parametric parsers, which let us handle parsing tasks that are impossible to do with any EBNF syntax alone.

A little critter called boost::ref, lurking in the boost distribution, is quite a powerful beast when used with Spirit's primitive parsers. We are used to seeing the Spirit primitive parsers created with string or character literals such as:

ch_p('A')
range_p('A', 'Z')
str_p("Hello World")

str_p has a second form that accepts two iterators over the string:

char const* first = "My oh my";
char const* last = first + std::strlen(first);
str_p(first, last)

What is not obvious is that we can use boost::ref as well:

char ch = 'A';
char from = 'A';
char to = 'Z';
ch_p(boost::ref(ch))
range_p(boost::ref(from), boost::ref(to))

When boost::ref is used, the actual parameters to ch_p and range_p are held by reference. This means that we can change the values of ch, from and to at any time and the corresponding ch_p and range_p parsers will follow their dynamic values. Of course, since they are held by reference, you must make sure that the referenced object is not destructed while parsing.

What about str_p? While the first form of str_p (the single-argument form) is reserved for null-terminated string constants, the second form (the two-argument first/last iterator form) may be used:

char const* first = "My oh my";
char const* last = first + std::strlen(first);
str_p(boost::ref(first), boost::ref(last))

#include <boost/spirit/attribute/parametric.hpp>

Taking this further, Spirit includes functional versions of the primitives. Rather than taking in characters, strings or references to characters and strings (using boost::ref), the functional versions take in functions or functors.

f_chlit
The functional version of chlit. This parser takes in a function or functor (function object).
The function is expected to have an interface compatible with:

CharT func()

where CharT is the character type (e.g. char, int, wchar_t). The functor is expected to have an interface compatible with:

struct functor { CharT operator()() const; };

where CharT is the character type (e.g. char, int, wchar_t). Here's a contrived example:

struct X { char operator()() const { return 'X'; } };

Now we can use X to create our f_chlit parser:

f_ch_p(X())

f_range
The functional version of range. This parser takes in functions or functors compatible with the interfaces above. The difference is that f_range (and f_range_p) expects two functors: one for the start and one for the end of the range.

f_chseq
The functional version of chseq. This parser takes in two functions or functors: one for the begin iterator and one for the end iterator. The function is expected to have an interface compatible with:

IteratorT func()

where IteratorT is the iterator type (e.g. char const*, wchar_t const*). The functor is expected to have an interface compatible with:

struct functor { IteratorT operator()() const; };

where IteratorT is the iterator type (e.g. char const*, wchar_t const*).

f_strlit
The functional version of strlit. This parser takes in two functions or functors compatible with the interfaces that f_chseq expects.

Use, modification and distribution is subject to the Boost Software License, Version 1.0. (See accompanying file LICENSE_1_0.txt.)
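The parametric-parser idea above is not tied to C++. As a minimal illustration, here is a sketch in Python of a character parser that dereferences its expected character at parse time; the `Ref` class and `ch_p` name are stand-ins for boost::ref and Spirit's ch_p, not Spirit's actual API:

```python
class Ref:
    """A mutable cell standing in for boost::ref."""
    def __init__(self, value):
        self.value = value

def ch_p(ref):
    """Return a parser that matches whatever character `ref` holds
    at the moment parse() is called, not at construction time."""
    def parse(text, pos=0):
        if pos < len(text) and text[pos] == ref.value:
            return pos + 1          # new position on success
        return None                 # failure
    return parse

ch = Ref('A')
parser = ch_p(ch)
assert parser("ABC") == 1           # matches 'A'

ch.value = 'B'                      # change the referenced value...
assert parser("BCD") == 1           # ...and the same parser follows it
assert parser("ABC") is None
```

As in the C++ case, the referenced cell must outlive the parser for this to be safe.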
http://www.boost.org/doc/libs/1_31_0/libs/spirit/doc/parametric_parsers.html
Introduction

A large portion of the data available today is either redundant or doesn't contain much useful information. The most efficient way to get access to the most important parts of the data, without having to sift through redundant and insignificant data, is to summarize the data so that it contains only non-redundant and useful information. The data can be in any form such as audio, video, images, and text. In this article, we will see how we can use automatic text summarization techniques to summarize text data.

Text summarization is a subdomain of Natural Language Processing (NLP) that deals with extracting summaries from huge chunks of text. There are two main types of techniques used for text summarization: NLP-based techniques and deep learning-based techniques. In this article, we will see a simple NLP-based technique for text summarization. We will not use any machine learning library. Rather, we will simply use Python's NLTK library for summarizing Wikipedia articles.

Text Summarization Steps

I will explain the steps involved in text summarization using NLP techniques with the help of an example. The following is a paragraph from one of the famous speeches by Denzel Washington at the 48th NAACP Image Awards:

So, keep working. Keep striving. Never give up. Fall down seven times, get up eight. Ease is a greater threat to progress than hardship. Ease is a greater threat to progress than hardship. So, keep moving, keep growing, keep learning. See you at work.

We can see from the paragraph above that he is basically motivating others to work hard and never give up. To summarize the above paragraph using NLP-based techniques we need to follow a set of steps, which will be described in the following sections.

Convert Paragraphs to Sentences

We first need to convert the whole paragraph into sentences. The most common way of converting paragraphs to sentences is to split the paragraph whenever a period is encountered.
So if we split the paragraph under discussion into sentences, we get the following sentences:

- So, keep working
- Keep striving
- Never give up
- Fall down seven times, get up eight
- Ease is a greater threat to progress than hardship
- Ease is a greater threat to progress than hardship
- So, keep moving, keep growing, keep learning
- See you at work

Text Preprocessing

After converting the paragraph to sentences, we need to remove all the special characters, stop words and numbers from all the sentences. After preprocessing, we get the following sentences:

- keep working
- keep striving
- never give
- fall seven time get eight
- ease greater threat progress hardship
- ease greater threat progress hardship
- keep moving keep growing keep learning
- see work

Tokenizing the Sentences

We need to tokenize all the sentences to get all the words that exist in the sentences. After tokenizing the sentences, we get the following list of words:

['keep', 'working', 'keep', 'striving', 'never', 'give', 'fall', 'seven', 'time', 'get', 'eight', 'ease', 'greater', 'threat', 'progress', 'hardship', 'ease', 'greater', 'threat', 'progress', 'hardship', 'keep', 'moving', 'keep', 'growing', 'keep', 'learning', 'see', 'work']

Find Weighted Frequency of Occurrence

Next we need to find the weighted frequency of occurrence of each word. We can find the weighted frequency of each word by dividing its frequency by the frequency of the most occurring word. The following table contains the weighted frequencies for each word. Since the word "keep" has the highest frequency of 5, the weighted frequency of each word has been calculated by dividing its number of occurrences by 5.

Word       Frequency   Weighted Frequency
keep       5           1.0
ease       2           0.4
greater    2           0.4
threat     2           0.4
progress   2           0.4
hardship   2           0.4
(all remaining words)  1  0.2

Replace Words by Weighted Frequency in Original Sentences

The final step is to plug the weighted frequency in place of the corresponding words in the original sentences and find their sum.
It is important to mention that the weighted frequency for the words removed during preprocessing (stop words, punctuation, digits etc.) will be zero and therefore does not need to be added.

Sort Sentences in Descending Order of Sum

The final step is to sort the sentences in inverse order of their sum. The sentences with the highest frequencies summarize the text. For instance, look at the sentence with the highest sum of weighted frequencies:

So, keep moving, keep growing, keep learning

You can easily judge what the paragraph is all about. Similarly, you can add the sentence with the second highest sum of weighted frequencies to have a more informative summary. Take a look at the following sentences:

So, keep moving, keep growing, keep learning. Ease is a greater threat to progress than hardship.

These two sentences give a pretty good summarization of what was said in the paragraph.

Summarizing Wikipedia Articles

Now we know how the process of text summarization works using a very simple NLP technique. In this section, we will use Python's NLTK library to summarize a Wikipedia article.

Fetching Articles from Wikipedia

Before we can summarize Wikipedia articles, we need to fetch them from the web. To do so we will use a couple of libraries. The first library that we need to download is Beautiful Soup, which is a very useful Python utility for web scraping. Execute the following command at the command prompt to download the Beautiful Soup utility:

$ pip install beautifulsoup4

Another important library that we need to parse XML and HTML is the lxml library. Execute the following command at the command prompt to download lxml:

$ pip install lxml

Now let's write some Python code to scrape data from the web. The article we are going to scrape is the Wikipedia article on Artificial Intelligence.
Execute the following script:

import bs4 as bs
import urllib.request
import re

scraped_data = urllib.request.urlopen('')
article = scraped_data.read()

parsed_article = bs.BeautifulSoup(article, 'lxml')

paragraphs = parsed_article.find_all('p')

article_text = ""
for p in paragraphs:
    article_text += p.text

In the script above we first import the important libraries required for scraping the data from the web. We then use the urlopen function from the urllib.request utility to scrape the data. Next, we need to call the read function on the object returned by the urlopen function in order to read the data. To parse the data, we use a BeautifulSoup object and pass it the scraped data object, i.e. article, and the lxml parser.

In Wikipedia articles, all the text for the article is enclosed inside the <p> tags. To retrieve the text we need to call the find_all function on the object returned by BeautifulSoup. The tag name is passed as a parameter to the function. The find_all function returns all the paragraphs in the article in the form of a list. All the paragraphs are combined to recreate the article. Once the article is scraped, we need to do some preprocessing.

Preprocessing

The first preprocessing step is to remove references from the article. In Wikipedia, references are enclosed in square brackets. The following script removes the square brackets and replaces the resulting multiple spaces by a single space. Take a look at the script below:

# Removing Square Brackets and Extra Spaces
article_text = re.sub(r'\[[0-9]*\]', ' ', article_text)
article_text = re.sub(r'\s+', ' ', article_text)

The article_text object contains text without brackets. However, we do not want to remove anything else from the article since this is the original article. We will not remove other numbers, punctuation marks and special characters from this text since we will use this text to create summaries, and the weighted word frequencies will be replaced in this article.
To clean the text and calculate weighted frequencies, we will create another object. Take a look at the following script:

# Removing special characters and digits
formatted_article_text = re.sub('[^a-zA-Z]', ' ', article_text)
formatted_article_text = re.sub(r'\s+', ' ', formatted_article_text)

Now we have two objects: article_text, which contains the original article, and formatted_article_text, which contains the formatted article. We will use formatted_article_text to create weighted frequency histograms for the words and will replace these weighted frequencies with the words in the article_text object.

Converting Text To Sentences

At this point we have preprocessed the data. Next, we need to tokenize the article into sentences. We will use the article_text object for tokenizing the article into sentences since it contains full stops. The formatted_article_text does not contain any punctuation and therefore cannot be converted into sentences using the full stop as a parameter. The following script performs sentence tokenization (it relies on the nltk library, so make sure it is imported):

import nltk

sentence_list = nltk.sent_tokenize(article_text)

Find Weighted Frequency of Occurrence

To find the frequency of occurrence of each word, we use the formatted_article_text variable. We use this variable to find the frequency of occurrence since it doesn't contain punctuation, digits, or other special characters. Take a look at the following script:

stopwords = nltk.corpus.stopwords.words('english')

word_frequencies = {}
for word in nltk.word_tokenize(formatted_article_text):
    if word not in stopwords:
        if word not in word_frequencies.keys():
            word_frequencies[word] = 1
        else:
            word_frequencies[word] += 1

In the script above, we first store all the English stop words from the nltk library into a stopwords variable. Next, we loop through all the sentences and then the corresponding words to first check if they are stop words. If not, we proceed to check whether the words exist in the word frequency dictionary, i.e. word_frequencies, or not.
If the word is encountered for the first time, it is added to the dictionary as a key and its value is set to 1. Otherwise, if the word already exists in the dictionary, its value is simply incremented by 1. Finally, to find the weighted frequency, we can simply divide the number of occurrences of each word by the frequency of the most occurring word, as shown below:

maximum_frequncy = max(word_frequencies.values())

for word in word_frequencies.keys():
    word_frequencies[word] = (word_frequencies[word]/maximum_frequncy)

Calculating Sentence Scores

We have now calculated the weighted frequencies for all the words. Now is the time to calculate the scores for each sentence by adding the weighted frequencies of the words that occur in that particular sentence. The following script calculates sentence scores:

sentence_scores = {}
for sent in sentence_list:
    for word in nltk.word_tokenize(sent.lower()):
        if word in word_frequencies.keys():
            if len(sent.split(' ')) < 30:
                if sent not in sentence_scores.keys():
                    sentence_scores[sent] = word_frequencies[word]
                else:
                    sentence_scores[sent] += word_frequencies[word]

In the script above, we first create an empty sentence_scores dictionary. The keys of this dictionary will be the sentences themselves and the values will be the corresponding scores of the sentences. Next, we loop through each sentence in the sentence_list and tokenize the sentence into words. We then check if the word exists in the word_frequencies dictionary. This check is performed since we created the sentence_list list from the article_text object; on the other hand, the word frequencies were calculated using the formatted_article_text object, which doesn't contain any stop words, numbers, etc.

We do not want very long sentences in the summary, therefore we calculate the score only for sentences with fewer than 30 words (although you can tweak this parameter for your own use-case). Next, we check whether the sentence exists in the sentence_scores dictionary or not.
If the sentence doesn't exist, we add it to the sentence_scores dictionary as a key and assign it the weighted frequency of the first word in the sentence as its value. On the contrary, if the sentence exists in the dictionary, we simply add the weighted frequency of the word to the existing value.

Getting the Summary

Now we have the sentence_scores dictionary that contains sentences with their corresponding scores. To summarize the article, we can take the top N sentences with the highest scores. The following script retrieves the top 7 sentences and prints them on the screen:

import heapq

summary_sentences = heapq.nlargest(7, sentence_scores, key=sentence_scores.get)
summary = ' '.join(summary_sentences)
print(summary)

In the script above, we use the heapq library and call its nlargest function to retrieve the top 7 sentences with the highest scores. The output summary looks like this:

Artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. Many tools are used in AI, including versions of search and mathematical optimization, artificial neural networks, and methods based on statistics, probability and economics. The traditional problems (or goals) of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception and the ability to move and manipulate objects. When access to digital computers became possible in the middle 1950s, AI research began to explore the possibility that human intelligence could be reduced to symbol manipulation. One proposal to deal with this is to ensure that the first generally intelligent AI is 'Friendly AI', and will then be able to control subsequently developed AIs. Nowadays, the vast majority of current AI researchers work instead on tractable "narrow AI" applications (such as medical diagnosis or automobile navigation).
Machine learning, a fundamental concept of AI research since the field's inception, is the study of computer algorithms that improve automatically through experience.

Remember, since Wikipedia articles are updated frequently, you might get different results depending upon the time of execution of the script.

Conclusion

This article explained the process of text summarization with the help of the Python NLTK library. The process of scraping articles using the BeautifulSoup library has also been briefly covered. I recommend that you scrape any other article from Wikipedia and see whether you can get a good summary of it or not.
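For readers who want to try the whole pipeline without scraping or installing NLTK, the algorithm above can be condensed into a dependency-free sketch. A naive regex tokenizer and a tiny hand-picked stop-word list stand in for NLTK here, so the weights are only illustrative:

```python
import re
from heapq import nlargest

# Tiny illustrative stop-word list (NLTK's real list is much larger).
STOPWORDS = {"a", "an", "at", "is", "of", "so", "than", "the", "to", "you"}

def summarize(text, n_sentences=2, max_words=30):
    # 1. Paragraph -> sentences (split after ., ! or ?).
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    # 2. Tokenize, drop stop words, count raw frequencies.
    freq = {}
    for w in re.findall(r'[a-z]+', text.lower()):
        if w not in STOPWORDS:
            freq[w] = freq.get(w, 0) + 1
    # 3. Weighted frequency = count / count of most frequent word.
    peak = max(freq.values())
    weights = {w: f / peak for w, f in freq.items()}
    # 4. Score each distinct sentence (dict.fromkeys dedupes, keeps order).
    scores = {}
    for sent in dict.fromkeys(sentences):
        if len(sent.split()) < max_words:
            scores[sent] = sum(weights.get(w, 0)
                               for w in re.findall(r'[a-z]+', sent.lower()))
    # 5. Join the top-N scoring sentences.
    return ' '.join(nlargest(n_sentences, scores, key=scores.get))

speech = ("So, keep working. Keep striving. Never give up. "
          "Fall down seven times, get up eight. "
          "Ease is a greater threat to progress than hardship. "
          "Ease is a greater threat to progress than hardship. "
          "So, keep moving, keep growing, keep learning. See you at work.")

print(summarize(speech))
# → So, keep moving, keep growing, keep learning. Ease is a greater threat to progress than hardship.
```

Note that it picks the same two sentences the manual walk-through above identified.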
https://stackabuse.com/text-summarization-with-nltk-in-python/
posix_spawn(2)            BSD System Calls Manual            posix_spawn(2)

NAME
     posix_spawn, posix_spawnp -- spawn a process

SYNOPSIS
     #include <spawn.h>

     int
     posix_spawn(pid_t *restrict pid, const char *restrict path,
         const posix_spawn_file_actions_t *file_actions,
         const posix_spawnattr_t *restrict attrp,
         char *const argv[restrict], char *const envp[restrict]);

     int
     posix_spawnp(pid_t *restrict pid, const char *restrict file,
         const posix_spawn_file_actions_t *file_actions,
         const posix_spawnattr_t *restrict attrp,
         char *const argv[restrict], char *const envp[restrict]);

DESCRIPTION
     The posix_spawn() function creates a new process from the executable file, called the new process file, specified by path, which is an absolute or relative path to the file. The posix_spawnp() function is identical to the posix_spawn() function if the file specified contains a slash character; otherwise, the file parameter is used to construct a pathname, with its path prefix being obtained by a search of the path specified in the environment by the ``PATH'' variable. If this variable isn't specified, the default path is set according to the _PATH_DEFPATH definition in <paths.h>, which is set to ``/usr/bin:/bin''. This pathname either refers to an executable object file, or a file of data for an interpreter; see execve(2) for more details.

     The argument pid is a pointer to a pid_t variable to receive the pid of the spawned process; if this is NULL, then the pid of the spawned process is not returned. If this pointer is non-NULL, then on successful completion, the variable will be modified to contain the pid of the spawned process. The value is undefined in the case of a failure.

     The argument file_actions is either NULL, or it is a pointer to a file actions object that was initialized by a call to posix_spawn_file_actions_init(3) and represents zero or more file actions.
     File descriptors open in the calling process image remain open in the new process image, except for those for which the close-on-exec flag is set (see close(2) and fcntl(2)). Descriptors that remain open are unaffected by posix_spawn() unless their behaviour is modified by particular spawn flags or a file action; see posix_spawnattr_setflags(3) and posix_spawn_file_actions_init(3) for additional information.

     The argument attrp is either NULL, or it is a pointer to an attributes object that was initialized by a call to posix_spawnattr_init(3) and represents a set of spawn attributes to apply. If NULL, then the default attributes are applied; otherwise, these attributes can control various aspects of the spawned process, and are applied prior to the spawned process beginning execution; see posix_spawnattr_init(3) for more information.

     The argument argv is a pointer to a null-terminated array of character pointers to null-terminated character strings. These strings construct the argument list to be made available to the new process. At least argv[0] must be present in the array, and should contain the file name of the program being spawned, e.g. the last component of the path or file argument.

     The argument envp is a pointer to a null-terminated array of character pointers to null-terminated strings. A pointer to this array is normally stored in the global variable environ. These strings pass information to the new process that is not directly an argument to the command (see environ(7)).

     Signals set to be ignored in the calling process are set to be ignored in the new process, unless the behaviour is modified by user specified spawn attributes. Signals which are set to be caught in the calling process image are set to default action in the new process image. Blocked signals remain blocked regardless of changes to the signal action, unless the mask is overridden by user specified spawn attributes.
     The signal stack is reset to be undefined (see sigaction(2) for more information). By default, the effective user ID and group ID will be the same as those of the calling process image; however, this may be overridden to force them to be the real user ID and group ID of the parent process by user specified spawn attributes (see posix_spawnattr_init(3) for more information).

     The new process also inherits the following attributes from the calling process:

           parent process ID        see getppid(2)
           process group ID         see getpgrp(2), posix_spawnattr_init(3)

RETURN VALUES
     If the pid argument is NULL, no pid is returned to the calling process; if it is non-NULL, then the posix_spawn() and posix_spawnp() functions return the process ID of the child process into the pid_t variable pointed to by the pid argument and return 0 on success. If an error occurs, they return a non-zero error code as the function return value, and no child process is created.

ERRORS
     The posix_spawn() and posix_spawnp() functions will fail and return an error to the calling process if:

     [EINVAL]           The value specified by file_actions or attrp is invalid.

SEE ALSO
     posix_spawnattr_init(3), posix_spawn_file_actions_init(3)

STANDARDS
     Version 3 of the Single UNIX Specification (``SUSv3'') [SPN]

HISTORY
     The posix_spawn() and posix_spawnp() function calls appeared in Version 3 of the Single UNIX Specification (``SUSv3'') [SPN].

Mac OS X                       November 2, 2010                       Mac OS X
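For quick experimentation with the semantics described above, Python's standard library exposes the same call as os.posix_spawn (available on POSIX systems since Python 3.8); the path/argv/envp conventions carry over directly from the C interface:

```python
import os
import sys

# Spawn a child process running the Python interpreter itself.
# argv[0] is, by convention, the name of the program being spawned.
pid = os.posix_spawn(
    sys.executable,                              # path to the new process file
    [sys.executable, "-c", "print('spawned')"],  # argv for the child
    os.environ,                                  # envp for the child
)
assert pid > 0  # the pid out-parameter in C is a plain return value here

# Reap the child and check that it exited cleanly.
_, status = os.waitpid(pid, 0)
assert os.WIFEXITED(status) and os.WEXITSTATUS(status) == 0
```

Passing None-equivalents for file_actions and attrp (the defaults in Python) corresponds to the NULL cases described above.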
http://www.manpagez.com/man/2/posix_spawnp/
int box_prompt_multiple ( char *message, int sort_flag )

char *message;      // box_prompt message
int sort_flag;      // enables/disables item sorting

Synopsis

#include "silver.h"

box_prompt_multiple is like box_prompt, except that multiple items may be selected from the list of text items. The items are displayed in a box panel. Items are added to the list by previous calls to attach_box_item. Selected items may be retrieved by successive calls to get_generic.

Parameters

message is a null-terminated string with three components, separated by vertical bars ('|'), as described in box_prompt, except for the third component:

<text1>|<text2>|<text3>

where:
<text1> is the title of the panel.
<text2> is the text that appears beneath the title; if this text is "*", then no text will be displayed.
<text3> should be "*".

sort_flag, if non-zero, will cause the items to be sorted alphabetically; otherwise, they will retain the order in which they were received.

Return Value

If the user successfully selects at least one item then box_prompt_multiple returns 1; otherwise, 0 is returned.

See Also

attach_box_item, get_generic

Example C / C++ Code

attach_box_item ( "One" );
attach_box_item ( "Two" );
attach_box_item ( "Three" );
attach_box_item ( "Four" );
attach_box_item ( "Five" );

if ( box_prompt_multiple ( "Numbers|Select numbers|*", 0 ) )
{
    for ( i = 1 ; get_generic ( i, xxx ) ; ++i )
        ss_command ( "note xxx: [%s] i: [%d]", xxx, i );
}
http://silverscreen.com/box_prompt_multiple.htm
Informally, a closure can be recognized as a list of statements within braces, like any other code block. It optionally has a list of identifiers to name the parameters passed to it, with an -> marking the end of the list. It's easiest to understand closures through examples, so let's consider the following:

def adder = {i,j -> i + j} // 1
assert adder(40,60) == 100 // 2

- Define a closure which receives two parameters and returns their sum
- Verify that calling the adder closure with 40 and 60 as parameters returns 100

Other simple examples of closures would be:

def greeting = { "Hello $it!" } // 1
assert greeting('Patrick') == 'Hello Patrick!'

- Defines an implicit parameter, named it

def list = [1,2,3,4,5,6,7,8]
list.each{
  print it // 1
}

- Prints each element from the list

def number = 3
number.times {
  println "hi" // 1
}

- Prints hi three times

def list = [1,2,3,4]
println list.findAll{
  it%2 == 0 // 1
}

- Keeps each element of the list whose remainder after mod 2 is zero

You may want to pass a closure as a parameter:

def multiply(n, closure){
  closure(n)
}

assert 18 == multiply(3, { n -> n * 6 })

Another version of this code:

def multiply(n, closure){
  closure(n)
}

assert 18 == multiply(3) { n -> n * 6 }

Example: Find the square of odd numbers > 0 up to n.

def square(n,closure){
  for(int i=1;i<=n;i++){
    closure(i)
  }
}

square(20, {
  if((it%2)==1)
    println "The square of $it is ${it*it}"
})

Output

The square of 1 is 1
The square of 3 is 9
The square of 5 is 25
The square of 7 is 49
The square of 9 is 81
The square of 11 is 121
The square of 13 is 169
The square of 15 is 225
The square of 17 is 289
The square of 19 is 361

Example: A palindrome is a word, phrase, or number that reads the same backward or forward.
Find palindrome words using a closure:

def isPalindrome(word, closure){
  if(closure(word)){
    println ("$word is palindrome")
  } else {
    println ("$word is NOT palindrome")
  }
}

def closure = { it == it.reverse() }
isPalindrome("Hello World", closure)
isPalindrome("anitalavalatina", closure)

Output:

Hello World is NOT palindrome
anitalavalatina is palindrome

Scope

To fully understand closures it's really important to understand the meaning of this, owner and delegate. In general:

- this refers to the instance of the enclosing class where the closure is defined
- owner refers to the enclosing object of the closure, which is either the enclosing class or an outer closure
- delegate is a third-party object that the closure can forward calls to; by default it is the same as owner

Confused? Me too, so let's take a look at this file called ClosureScope.groovy:

class Owner {
  def closure = {
    assert this.class.name == 'Owner' // 1
    assert delegate.class.name == 'Owner' // 2
    def nestedClosure = {
      assert owner.class.name == 'Owner$_closure1' // 3
    }
    nestedClosure()
  }
}

def closure = new Owner().closure
closure()

- The this value always refers to the instance of the enclosing class.
- Owner is always the same as this, except for nested closures.
- Delegate is the same as owner by default, but it can be changed.

So how can we change the delegate of a closure? Let's take a look at some code:

class FirstClass {
  String myString = "I am first class"
}

class SecondClass {
  String myString = "I am second class"
}

class MainClass {
  def closure = {
    return myString
  }
}

def closure = new MainClass().closure
closure.delegate = new FirstClass()
assert closure() == 'I am first class'

Even when the delegate is set it can be changed to something else; this means we can make the behavior of the closure dynamic. Let's consider a slightly more complicated question: if n people are at a party and everyone clinks glasses with everybody else, how many clinks do you hear? To answer the question, you can use Integer's upto method, which does something for every Integer starting at the current value. You apply this method to the problem by imagining people arriving at the party one by one. As people arrive, they clink glasses with everyone who is already present.
This way, everyone clinks glasses with everyone else exactly once.

def totalClinks = 0
def partyPeople = 100
1.upto(partyPeople) { guestNumber ->
  clinksWithGuest = guestNumber - 1 // 1
  totalClinks += clinksWithGuest // 2
}
assert totalClinks == (partyPeople * (partyPeople-1)) / 2 // 3

- Counting the number of people already present
- Keep a running total of the number of clinks
- Test the result using Gauss' formula
https://josdem.io/techtalk/groovy/closures/
Using a DSL (Domain Specific Language) for RSK Java implementation tests

I have already described something of the RSK project in Java in another post. Today I would like to comment on an addition I wrote a few years ago to facilitate some tests. I could say that those tests are integration tests. I'm not an expert on test nomenclature, but see Integration testing for more information and definitions.

The project is a fork of the EthereumJ project (now deprecated). When forked, that project didn't have many integration tests, and in general, it was difficult to write them: creating the necessary objects (from a blockchain to a block storage or contract storage) was quite an arduous, convoluted process. Tests of this kind, which exercised various components, were based on data written in JSON (see the EthereumJ original jsonsuite package or the new rskj jsonsuite package): those JSON files described the transactions and accounts that were needed at the beginning of the execution, the blocks that were to be executed and the expected results. But those JSON files were generated from one of Ethereum's implementations, the one based on C++. They were then run by all the other Ethereum implementations (Python, Go, Java, etc.) to see if the results were the same. At RSK the only implementation was the same one we wanted to test. Those JSON files could have been generated from the project, and then used to test the same project's behavior: a kind of inbred relationship.

Some examples

There are text files that describe the actions to be executed; see the DSL resources. This is the accounts01.txt file content:

# create account with initial balance
account_new acc1 10000000

# check account balance
assert_balance acc1 10000000

There are verbs and arguments. The account_new verb allows to create an account with a symbolic name (acc1), and with an initial balance. That account is already created before processing the genesis block.
A verb like assert_balance allows to check that an account has an expected balance amount. This is the blocks01.txt file content:

# Create two blocks, starting from genesis block
block_chain g00 b01 b02

# Add the two blocks, in order, to current block chain
block_connect b01 b02

# Assert best block
assert_best b02

# Assert latest connect result
assert_connect best

The block_chain verb creates a chain of blocks: new block b01 has as parent the block g00; block b02 has as parent the block b01. The block g00 is the genesis block, created at the beginning of the world process, after processing the initial accounts. Creating blocks does not imply that they are already on the blockchain. You have to connect them using the verb block_connect, which receives as arguments the list of blocks to add, in order. The assert_best verb ensures that the best block, after this process, is block b02. The assert_connect best command checks that the last block added to the blockchain was accepted as the best block (it could have been rejected). The uncles01.txt file content:

# Create one block and two uncles
block_chain g00 b01
block_chain g00 u01
block_chain g00 u02

# Create second block with uncles
block_build b02
parent b01
uncles u01 u02
build

# Add the two blocks
block_connect b01 u01 u02 b02

# Assert best block
assert_best b02

# Assert latest connect result
assert_connect best

The block_build verb is a multiline one. It allows to specify the content of a new block, like its uncles, its transactions, its parent block. Then the last verb build creates the specified block with the symbolic name b01 or b02. The transfer01.txt file:

account_new acc1 10000000
account_new acc2 0

transaction_build tx01
sender acc1
receiver acc2
value 1000
build

block_build b01
parent g00
transactions tx01
build

block_connect b01

# Assert best block
assert_best b01

assert_balance acc2 1000

There is another multiline verb: transaction_build.
You can specify the sender account, the receiver account, the value to transfer, the gas limit, etc. Then the transactions can be added to a block using the block_build verb.

But what about creating and executing contracts? See the create01.txt file content:

account_new acc1 10000000

# Create empty.sol contract
transaction_build tx01
sender acc1
receiverAddress 00
value 0
data 60606040523415600e57600080fd5b603580601b6000396000f3006060604052600080fd00a165627a7a72305820b25edb28bec763685838b8044760e105b5385638276b4768c8045237b8fc6bf10029
gas 1200000
build

block_build b01
parent g00
transactions tx01
build

block_connect b01

# Assert best block
assert_best b01

# The code test checks the gas used

A contract can be created by adding the data to the transaction_build verb. It should contain the compiled EVM/RVM bytecode of the contract to be deployed. Notice that the last comment refers to additional asserts in the Java test code.

One of the latest additions is a multiline comment; see the recursive01.txt file content:

comment
// Contracts compiled using
// Truffle v5.1.14 (core: 5.1.14)
// Solidity v0.5.16 (solc-js)

// the contract to be deployed is RecursiveParent

// the contracts source code

// RecursiveInterface.sol
pragma solidity >=0.5.0 <0.6.0;

interface RecursiveInterface {
    function increment(uint level) external;
}

// RecursiveParent.sol
pragma solidity >=0.5.0 <0.6.0;

import "./RecursiveInterface.sol";
import "./RecursiveChild.sol";

contract RecursiveParent is RecursiveInterface {
// ....
// end of comment
end

Its main use is to specify the original source code of the smart contracts that are used along the test. The smart contracts should be deployed using the compiled bytecodes in the data field of a transaction. But the source code is useful as a reference.
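The DSL files above are plain verb-plus-arguments text, so the interpreter behind them is essentially a dispatch loop over lines. The following is a toy sketch of that idea in Python, not the actual rskj Java classes: the names World, process_commands and the two supported verbs are invented for illustration only.

```python
# Toy verb-dispatch interpreter for a two-verb subset of the DSL.
# Illustrative only: not the rskj implementation.

class World:
    def __init__(self):
        self.accounts = {}  # symbolic name -> balance

def process_commands(world, text):
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        verb, *args = line.split()
        if verb == "account_new":
            name, balance = args
            world.accounts[name] = int(balance)
        elif verb == "assert_balance":
            name, expected = args
            assert world.accounts[name] == int(expected), name
        else:
            raise ValueError("unknown verb: " + verb)

world = World()
process_commands(world, """
# create account with initial balance
account_new acc1 10000000

# check account balance
assert_balance acc1 10000000
""")
print(world.accounts["acc1"])  # prints: 10000000
```

A real processor adds more verbs (block_chain, block_connect, the multiline builders) and a shared symbol table, but the shape stays the same.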
Usage

This is a typical Java test that executes a DSL file:

@Test
public void runTransfers01Resource() throws FileNotFoundException, DslProcessorException {
    DslParser parser = DslParser.fromResource("dsl/transfers01.txt");
    World world = new World();
    WorldDslProcessor processor = new WorldDslProcessor(world);
    processor.processCommands(parser);

    Assert.assertNotNull(world.getAccountByName("acc1"));
    Assert.assertNotNull(world.getAccountByName("acc2"));
    Assert.assertNotNull(world.getTransactionByName("tx01"));
    Assert.assertNotNull(world.getBlockByName("b01"));
}

So you can execute the file with the DSL commands and then, using code, check the final world state. The World, DSL parser, DSL command, DSL processor and related classes are included in the test/dsl package. The WorldDslProcessor is the executor of the commands specified in the text file.

To do: improve the parser to allow more complex cases, and add new assertion verbs like assert_storage and assert_nonce, call contract view functions and check their return values, etc.

Angel "Java" Lopez
https://angeljavalopez.medium.com/using-a-dsl-domain-specific-language-for-rsk-java-implementation-tests-c57a36b8870f?source=post_internal_links---------4----------------------------
Plus Plus Operator

Abstract

This entry in Min Blogg deals with the curious behavior of the increment operator. In particular, in lines like a[c++] = b[c]; I found some inconsistencies that depended both on language (C or C#) and on compiler (cl, gcc, cs or mcs).

WARNING: It is well established that "a[c++] = b[c];" is undefined in C. In C#, on the other hand, it is well-defined.

Headlines are:
- ++ = Increment
- Pointer Arithmetics
- Use of increment on regular integers
- The Problem
- The solution
- Compilation
- Observations
- Conclusions
- See Also

1. ++ = Increment

Some time in ancient history (at the latest when the first C version was released in 1972) the increment operator was invented. In general this operator is used to add one to something. For example, if we increment an integer, its value is increased by one:

#include <stdio.h>

int main(void)
{
    int c = 5;
    c++;
    printf("c is now %d\n", c); // prints "c is now 6"
    return 0;
}

Also note that the "inverse" of increment is called decrement and is written --. All of this would be fairly unremarkable had it not been for pointers, arrays and Pointer Arithmetics.

2. Pointer Arithmetics

Pointers have some form of built-in intelligence (remember that we are talking computer science of the 1970's - I do not mean gps, mp3, self-destruction-button kind of intelligence, I mean the more crude form of intelligence). If we for example have an array of some data type and assign a pointer into it, then the increment operator (++) will make the pointer point to the next item in the array.

#include <stdio.h>

int main(void)
{
    int arr[10];

    // c points at item in position 4 (the fifth item)
    int* c = &arr[4];

    // c sets item in position 4 to 1337 and then points at the next item
    *c++ = 1337;

    return 0;
}

Also note that the increment can be done in another way.
The comments in the next example explain the difference:

#include <stdio.h>

int main(void)
{
    int arr[10];

    // d points at item in position 4 (the fifth item)
    int* d = &arr[4];

    // d points to the next item and sets that item to 1337
    *++d = 1337;

    return 0;
}

Not all languages have built-in support for pointers. C#, for example, allows you to use them only under certain conditions. Python has no increment operator but can be considered to use pointers anyway (from my perspective).

3. Use of increment on regular integers

Today I was confronted with the following situation: I was given a very long input vector and a quite short output vector. I was to add N zeros to the beginning of the output vector, then fill the remaining positions with values from the input vector. Pretty much like this:

double[] A = new double[10] { 12, 23, 24, 34, 45, 35, 56, 56, 57, 53 };
double[] B = new double[6];
int n = 3;

for (int i = 0; i < n; i++)
    B[i] = 0;
for (int i = n; i < B.Length; i++)
    B[i] = A[i];

// values of the arrays
// A: 12 23 24 34 45 35 56 56 57 53
// B:  0  0  0 34 45 35

Since I had a number of variables and wanted to use a minimal amount of silly counters I tried to keep the code minimal, pretty much like this:

int c = 0;
for (int i = 0; i < n; i++)
{
    B[c] = 0;
    c++;
}
for (int i = n; i < B.Length; i++)
{
    B[c] = A[c];
    c++;
}

The advantage of this is not obvious in this example. But imagine that the loops require a lot of computation and so on to determine the number of items to set in B. Also: perhaps we need to call other functions where we pass c as a parameter to know where in the arrays to insert values. Anyway: the last loop annoyed me and I wanted to lower the number of lines in it from two to one, to something like this:

for (int i = n; i < B.Length; i++)
{
    B[c] = A[c++];
}

Please note that the best way to do exactly what I mean in the above minimal loop is something like this:

for (int i = n; i < B.Length; i++, c++)
{
    B[c] = A[c];
}

4.
The Problem

The thing to think about here is what the line B[c] = A[c++]; is decomposed into. Questions that ran through my mind were:

- Since we set a value in B[c] we must first read A[c++]. Since we read A[c++], c will be incremented before the value of c is used in B, right?
- Since I read code from left to right, the compiler cannot execute things the other way around, right?
- Has Polish notation got anything to do with this?
- Is this compiler-specific (I remember a lecture in FORTRAN about the value of i after a loop)?

5. The solution

I created some simple code files that test many possible cases. They both first contain some declarations creating eight arrays to be filled with values from a ninth array. They also contain this horrible for-loop:

// c1-c8 are all 1 before this loop
for (i = 2; i < 5; i++, c3++, ++c7)
{
    b1[c1] = a[c1++];
    b2[c2++] = a[c2];
    b3[c3] = a[c3];
    b4[c4++] = a[c4++];
    b5[c5] = a[++c5];
    b6[++c6] = a[c6];
    b7[c7] = a[c7];
    b8[++c8] = a[++c8];
}

6. Compilation

I compiled using:
- cl to a native C (ANSI) file.
- gcc to a dito.
- the built-in cs compiler in .NET 2.x (I later tested cs from 1.1 and the results are the same). Also mcs from the Mono Platform produces the same result. (Perhaps since they all use the Common Intermediate Language (CIL)?)

Output from the CIL versions

b1: 0 2 3 4 0 0 0 0 0 0
b2: 0 3 4 5 0 0 0 0 0 0
b3: 0 2 3 4 0 0 0 0 0 0
b4: 0 3 0 5 0

Output from the cl

0 3 4 5 0 0 0 0 0
b6: 0 0 3 4 5 0 0 0 0 0
b7: 0 2 3 4 0 0 0 0 0 0
b8: 0 0 0 4 0 6 0 8 0 0

Output from the gcc

7. Observations

Lines that gave identical behavior in all versions:
- b1[c1] = a[c1++];
- b3[c3] = a[c3];
- b6[++c6] = a[c6];
- b7[c7] = a[c7];

Lines that gave different behavior:
- b2[c2++] = a[c2];
- b4[c4++] = a[c4++];
- b5[c5] = a[++c5];
- b8[++c8] = a[++c8];

8. Conclusions

My guess here is that the increment operator is not defined precisely enough.
- If an increment comes last on a line it can be considered to be performed after it, e.g.: b1[c1] = a[c1++];
- If an increment comes first on a line it can be considered to be performed before the line, e.g.: b6[++c6] = a[c6];
- If the increment is standalone (like c3 and c7) there is no problem.
- More than one increment on a line can be performed before and/or after it, and/or in the middle of it, I guess.

My interpretation is that b2[c2++] = a[c2]; in C# is converted to:

int p = a[c2];
c2++;
b2[c2] = p;

and in C (ANSI):

int p = a[c2];
b2[c2] = p;
c2++;

I used Reflector on the C# version and the loop there looked like this:

while (i < 5)
{
    /* the eight assignments */;
    i++;
    c3++;
    c7++;
}

so that does not help much. Perhaps some details might be given by reading the CLI specification - but I don't get that yet.

9. See Also

- [3] English Wikipedia on increment
- [4] C FAQ (in particular [5])
- [6] A thread I posted in comp.lang.c
- [7] A thread I posted in microsoft.public.dotnet.languages.csharp

This page belongs in Kategori Programmering.
http://pererikstrandberg.se/blog/index.cgi?page=PlusPlusOperator
A scala implementation of RFC-6901, RFC-6902, and RFC-7396. It also provides methods to compute diffs between two Json values that produce valid Json patches or merge patches.

Note: if you still want to use the 3.x.y series (without cats), please see this documentation

Table of Contents

- Getting Started
- Json Library
- Json Patch (RFC-6902)
- Json Merge Patches (RFC-7396)

Getting Started

This library is published in the Maven Central Repository. You can add it to your sbt project by putting this line into your build description:

libraryDependencies += "org.gnieh" %% f"diffson-$jsonLib" % "4.1.1"

where jsonLib is either:

- spray-json
- play-json
- circe

These versions are built for Scala 2.12, 2.13, and 3. Scala.JS is also supported for Scala 2.12, 2.13, and 3. To use it, add this dependency to your build file:

libraryDependencies += "org.gnieh" %%% f"diffson-$jsonLib" % "4.1.1"

Json Library

Diffson was first developed for spray-json; however, it is possible to use it with any json library of your liking. The only requirement is to have a Jsony instance for your json library. Jsony is a type class describing what operations are required to compute diffs and apply patches to Json-like types. At the moment, diffson provides instances for spray-json, Play! Json, and circe. To use these implementations you need to link with the correct module and import the instance:

// spray-json
import diffson.sprayJson._

// play-json
import diffson.playJson._

// circe
import diffson.circe._

If you want to add support for your favorite Json library, you may depend only on the diffson core module diffson-core, and all you need to do then is to implement the Jsony type class, which provides all the operations diffson needs to compute diffs and apply patches. Contributions of new Json libraries to this repository are more than welcome.
Note on (de)serialization

The purpose of diffson is to create and manipulate diffs and patches for Json-like structures. However, the supported patch formats can also be represented as Json objects. The core library doesn't mention any of this, as its sole purpose is the diff/patch computations. Given the variety of Json libraries out there and their various ways of (de)serializing Json values, there is no good abstraction that fits this general-purpose library, so it is up to the library user to do it in the most appropriate way for the Json library of their choosing.

The various supported Json libraries in diffson provide an idiomatic way of (de)serializing the different elements for each of them (e.g. the circe module provides Decoders and Encoders for all the patch types).

For instance, to get circe encoder and decoder instances, you need to:

import io.circe._
import diffson.circe._
import diffson.jsonpatch._

val decoder = Decoder[JsonPatch[Json]]
val encoder = Encoder[JsonPatch[Json]]

For Play! Json, you need to:

import play.api.libs.json._
import diffson.playJson._
import diffson.playJson.DiffsonProtocol._
import diffson.jsonpatch._

val format = Json.format[JsonPatch[JsValue]]

For Spray Json, you need to:

import spray.json._
import diffson.sprayJson._
import diffson.sprayJson.DiffsonProtocol._
import diffson.jsonpatch._

val format = implicitly[JsonFormat[JsonPatch[JsValue]]]

Json Patch (RFC-6902)

Basic Usage

Although the library is quite small and easy to use, here comes a summary of its basic usage. Diffson uses a type-class approach based on the cats library. All operations that may fail are wrapped in a type with a MonadError instance.
There are two entities living in the diffson.jsonpatch package and one in the diffson.jsonpointer package useful to work with Json patches:

- Pointer, which allows to parse and manipulate Json pointers as defined in RFC-6901,
- JsonPatch, which allows to parse, create and apply Json patches as defined in RFC-6902,
- JsonDiff, which allows to compute the diff between two Json values and create Json patches.

Basically, if someone wants to compute the diff between two Json objects, they can execute the following:

import diffson._
import diffson.lcs._
import diffson.circe._
import diffson.jsonpatch._
import diffson.jsonpatch.lcsdiff._

import io.circe._
import io.circe.parser._

implicit val lcs = new Patience[Json]

val json1 = parse("""{"a": 1, "b": true, "c": ["test", "plop"]}""")
val json2 = parse("""{"a": 6, "c": ["test2", "plop"], "d": false}""")

val patch =
  for {
    json1 <- json1
    json2 <- json2
  } yield diff(json1, json2)

This will return a patch that can be serialized in json as:

[{
  "op": "replace",
  "path": "/a",
  "value": 6
}, {
  "op": "remove",
  "path": "/b"
}, {
  "op": "replace",
  "path": "/c/0",
  "value": "test2"
}, {
  "op": "add",
  "path": "/d",
  "value": false
}]

This example computes a diff based on an LCS, so we must provide an implicit instance of Lcs. In this case we used the Patience instance, but others could be used. See the package diffson.lcs for the implementations available by default, or provide your own.

You can then apply an existing patch to a Json object as follows:

import scala.util.Try
import cats.implicits._

val json2 = patch[Try](json1)

which results in a json like:

{
  "d": false,
  "c": "test2",
  "a": 6
}

which we can easily verify is the same as json2 modulo reordering of fields.

A patch may fail; this is why the apply method wraps the result in an F[_] with a MonadError. In this example we used the standard Try class, but any type F with the appropriate MonadError[F, Throwable] instance in scope can be used.

Simple diffs

The example above uses an LCS-based diff, which makes it possible to have smart diffs for arrays. However, depending on your use case, this feature might not be what you want:

- LCS can be intensive to compute if you have huge arrays;
- you might want to see a modified array as a single replace operation.
To do so, instead of importing diffson.jsonpatch.lcsdiff._, import diffson.jsonpatch.simplediff._, and you do not need to provide an Lcs instance. The resulting diff will be bigger in case of different arrays, but quicker to compute. For instance, the resulting simple diff for the example above is:

[ {
  "op" : "replace",
  "path" : "/a",
  "value" : 6
}, {
  "op" : "remove",
  "path" : "/b"
}, {
  "op" : "replace",
  "path" : "/c",
  "value" : [ "test2", "plop" ]
}, {
  "op" : "add",
  "path" : "/d",
  "value" : false
} ]

Note the replace operation for the entire array, instead of the single modified element.

Remembering old values

Whether you use the LCS-based or simple diff, you can make it remember old values for remove and replace operations. To that end, you just need to import diffson.jsonpatch.lcsdiff.remembering._ or diffson.jsonpatch.simplediff.remembering._ instead. The generated diff will add an old field to remove and replace operations in the patch, containing the previous version of the field in the original object.

Taking the first example with the new import, we have similar code:

import diffson._
import diffson.lcs._
import diffson.circe._
import diffson.jsonpatch._
import diffson.jsonpatch.lcsdiff.remembering._

which produces a patch with the old value remembered:

[ {
  "op" : "replace",
  "path" : "/a",
  "value" : 6,
  "old" : 1
}, {
  "op" : "remove",
  "path" : "/b",
  "old" : true
}, {
  "op" : "replace",
  "path" : "/c/0",
  "value" : "test2",
  "old" : "test"
}, {
  "op" : "add",
  "path" : "/d",
  "value" : false
} ]

Patches produced with these methods are still valid according to the RFC, as the new field must simply be ignored by implementations that are not aware of this encoding, so interoperability is not broken.
Json Merge Patches (RFC-7396)

There are two entities living in the diffson.jsonmergepatch package useful to work with Json merge patches:

- JsonMergePatch, which allows to parse, create and apply Json merge patches as defined in RFC-7396,
- JsonMergeDiff, which allows to compute the diff between two Json values and create Json merge patches.

Basically, if someone wants to compute the diff between two Json objects, they can execute the following:

import diffson._
import diffson.circe._
import diffson.jsonmergepatch._

import io.circe.parser._
import io.circe.syntax._

val json1 = parse("""{
                    | "a": 1,
                    | "b": true,
                    | "c": "test"
                    |}""".stripMargin)

val json2 = parse("""{
                    | "a": 6,
                    | "c": "test2",
                    | "d": false
                    |}""".stripMargin)

val patch =
  for {
    json1 <- json1
    json2 <- json2
  } yield diff(json1, json2)

which will return the following Json Merge Patch:

{
  "a": 6,
  "b": null,
  "c": "test2",
  "d": false
}

You can then apply the patch to json1:

val json3 = patch(json1)

which will create the following Json:

{
  "d": false,
  "c": "test2",
  "a": 6
}

which we can easily verify is the same as json2 modulo reordering of fields.
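For readers who want the mechanics behind the merge-patch example without the Scala setup, here is an illustrative transcription of the RFC-7396 application algorithm to plain Python dicts. This is not diffson code, just the spec's recursive rule: a null member removes the key, an object member recurses, anything else replaces.

```python
# RFC-7396 merge-patch application, sketched with Python dicts.
# Illustrative only; not part of diffson.

def merge_patch(target, patch):
    if not isinstance(patch, dict):
        return patch                  # a non-object patch replaces the target
    if not isinstance(target, dict):
        target = {}                   # replacing a non-object target
    result = dict(target)
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)     # null means "remove this member"
        else:
            result[key] = merge_patch(result.get(key), value)
    return result

json1 = {"a": 1, "b": True, "c": "test"}
patch = {"a": 6, "b": None, "c": "test2", "d": False}
print(merge_patch(json1, patch))  # {'a': 6, 'c': 'test2', 'd': False}
```

The result matches the README's json3: a replaced, b removed via null, c replaced, d added.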
https://index.scala-lang.org/gnieh/diffson/diffson-circe/4.0.0-M2?target=_sjs0.6_2.12
On Sun, Dec 6, 2015 at 10:07 PM, Matthew Brett matthew.brett@gmail.com wrote:

> Hi,
>
> On Sun, Dec 6, 2015 at 12:39 PM, DAVID SAROFF (RIT Student) dps7802@rit.edu wrote:
>
>> This works. A big array of eight bit random numbers is constructed:
>>
>> import numpy as np
>> spectrumArray = np.random.randint(0, 255, (2**20, 2**12)).astype(np.uint8)
>>
>> This fails. It eats up all 64GBy of RAM:
>>
>> spectrumArray = np.random.randint(0, 255, (2**21, 2**12)).astype(np.uint8)
>>
>> The difference is a factor of two, 2**21 rather than 2**20, for the extent of the first axis.
>
> I think what's happening is that this:
>
> np.random.randint(0, 255, (2**21, 2**12))
>
> creates 2**33 random integers, which (on 64-bit) will be of dtype int64 = 8 bytes, giving total size 2 ** (21 + 12 + 6) = 2 ** 39 bytes = 512 GiB.

8 is only 2**3, so it is "just" 64 GiB, which also explains why the half sized array does work, but yes, that is most likely what's happening.

Jaime

> Cheers,
> Matthew

_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
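Jaime's correction can be checked with plain arithmetic; the sketch below keeps the numpy calls in comments so nothing large is actually allocated. It also notes the usual fix: randint grew a dtype argument (NumPy 1.11 and later), which skips the int64 temporary entirely.

```python
# Why the second call blows up: randint materializes int64 values
# (8 bytes each) before astype() produces the uint8 copy.

def gib(nbytes):
    return nbytes / 2**30

temp_small = 2**20 * 2**12 * 8  # int64 temporary for the working case
temp_big   = 2**21 * 2**12 * 8  # int64 temporary for the failing case

print(gib(temp_small))  # prints: 32.0  -- fits in 64 GiB of RAM
print(gib(temp_big))    # prints: 64.0  -- plus the uint8 copy, it no longer fits

# On NumPy >= 1.11 the temporary can be avoided by asking for uint8 directly:
# spectrumArray = np.random.randint(0, 255, (2**21, 2**12), dtype=np.uint8)
# which needs only 2**33 bytes = 8 GiB.
```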
https://mail.python.org/archives/list/numpy-discussion@python.org/message/QFINSRZOWACRC7G23DA35OJLZ6R4MNFB/
Gumgo, posted May 10, 2007

I'm having a problem with headers... Say I have header A and header B. Header A declares class A and header B declares class B. Class A has a pointer to class B. Class B has a pointer to class A. Therefore, header A includes header B and header B includes header A. And of course, in each header, there is an #ifndef... #define... #endif so that the classes aren't declared twice.

Here's the problem: The compiler starts with header A. Since A_DEFINED has not been defined, it is defined and compilation continues. The first thing it hits is #include "B.h", so the compiler moves on to B.h before continuing with A.h. At the beginning of B.h, the compiler hits the #ifndef B_DEFINED, and since B_DEFINED has not already been defined, the compiler continues. The next thing it hits is #include "A.h". The compiler moves to A.h, but hits #ifndef A_DEFINED again. Since A_DEFINED has been defined, it skips the rest of the file and resumes at B.h. Now it moves on to the class declaration in B.h. But suddenly it hits a line that says "A * newpointerA;". Since the first pass through A.h hasn't finished, and the second one was skipped because A_DEFINED had already been set, the compiler has not declared class A yet and runs into an error!

This is the problem I'm having, except slightly more extensive, with several files that rely on each other. Is there any way to sort this all out? Thanks.
https://www.gamedev.net/forums/topic/447461-headers-relying-on-each-other/
While working on a new programming metalanguage, I encountered a parsing problem. I followed some documentation and got this implementation of Regexp in my metalanguage:

RegExp <= (
    Union <= (@SimpleRE, '|', @RegExp) |
    SimpleRE <= (
        Concatenation <= (@BasicRE, @SimpleRE) |
        BasicRE <= (
            OneOrMore <= (@ElementaryRE, ('+?' | '+')) |
            ZeroOrMore <= (@ElementaryRE, ('*?' | '*')) |
            ZeroOrOne <= (@ElementaryRE, '?') |
            NumberedTimes <= (
                @ElementaryRE, '{',
                In <= (
                    Exactly <= @Integer |
                    AtLeast <= (@Integer, ',') |
                    AtLeastNotMore <= (@Integer, ',', @Integer)
                ),
                ('}?' | '}')
            ) |
            ElementaryRE <= (
                Group <= ('(', @RegExp, ')') |
                Any <= '.' |
                Eos <= '$' |
                Bos <= '^' |
                Char <= (
                    @NonMetaCharacter |
                    '\\', (
                        @MetaCharacter |
                        't' | 'n' | 'r' | 'f' | 'd' | 'D' | 's' | 'S' | 'w' | 'W' |
                        @OctDigit, @OctDigit, @OctDigit
                    )
                ) |
                Set <= (
                    PositiveSet <= ('[', @SetItems, ']') |
                    NegativeSet <= ('[^', @SetItems, ']')
                ) <~ (
                    SetItems <= (
                        SetItem <= (
                            Range <= (@Char, '-', @Char) |
                            @Char
                        ) |
                        @SetItem, @SetItems
                    )
                )
            )
        )
    )
)

It would work with some javascript back-end, but when I compared "union" to "set" in the original Regexp definition, I concluded they are about the same thing: a choice of values detected at parse time. I didn't like this redundancy, so I decided to slightly change the definition of Regexp and to develop my own version, which looks like this:

ChExp <= (
    Choice <= (@ConExp, '|', @ChExp) |
    ConExp <= (
        Concatenation <= (@WExp, @ConExp) |
        WExp <= (
            Without <= (@QExp, '!', @WExp) |
            QExp <= (
                OneOrMore <= (@GExp, ('+?' | '+')) |
                ZeroOrMore <= (@GExp, ('*?' | '*')) |
                ZeroOrOne <= (@GExp, '?') |
                NumberedTimes <= (@GExp, '{', @Integer, '}') |
                GExp <= (
                    Group <= ('(', @ChExp, ')') |
                    Exp <= (
                        Any <= '.' |
                        Range <= (@Char, '-', @Char) |
                        Char <= (
                            @NonMetaCharacter |
                            '\\', (
                                @MetaCharacter |
                                't' | 'n' | 'r' | 'f' |
                                '0x', @HEXDigit, @HEXDigit, @HEXDigit, @HEXDigit, @HEXDigit, @HEXDigit
                            )
                        )
                    )
                )
            )
        )
    )
)

So, what is the difference? I completely removed the Set notation (noted by [xyz]) because it can be replaced by the "Union" notation (x|y|z). Initially, there was a thing missing in my new Regexp: the negative set ([^xyz]). To support this I decided to introduce a new operator: "Without" (!), that parses the expression to the left of '!' excluding the expression to the right of '!'. So, '[^xyz]' now becomes '.!(x|y|z)'.

The new Regexp does not have the redundant set-union collision, while its definition looks a bit cleaner than traditional Regexp. I have to say, probably the structured way of defining the grammar in my metalanguage saved me from the pitfalls which the original authors of Regexp had in the seventies, when they probably used unstructured plain BNF for defining Regexp.

So, what do you say? Is this new Regexp appearance worth abandoning a traditional standard that is widely spread across the globe? Do you have any suggestions on my modified solution? Traditional Regexp has been around for decades, do you think it is time for modernizing it?

I was recently optimizing a parser I had written in JavaScript, and the best change I made by far was to add a regex-based tokenizer. My tokenizer probably does exactly what you're trying to do! The code looks like this, and it just uses the standard lastIndex property:

var tokenRegex = /[ \t]+|[\r\n`=;',\.\\/()\[\]]|[^ \t\r\n`=;',\.\\/()\[\]]*/g;

tokenRegex.lastIndex = currentCodeIndex;
var token = tokenRegex.exec( codeString )[ 0 ];
currentCodeIndex += token.length;

The exec method looks at the lastIndex property and uses it as the starting point for the search. Since the regex I'm using has no possibility of failure, the match always begins precisely at the specified lastIndex, rather than proceeding to search for a match later on in the stream.
Depending on your token syntax, it might be tricky to write a regex that always succeeds like that, but someday soon it could be a little easier to write these. ECMAScript 6 adds a sticky flag, /y, which specifically makes it so a regex does not proceed to try later starting positions if it fails to match at lastIndex. Apparently only Firefox supports this at the moment.

Are you aware of negative lookahead (?!...)? The JavaScript regex /^(?!keyword1|keyword2).*$/ will exactly match the strings "foo" and "not keyword1" but will not match "keyword1" or "keyword2". In a regex, /[^abc]/ and /(?!a|b|c)./ have the same effect.

I'm not saying JavaScript regexes are without limits, but it looks like they might be a little more capable than you expect. Hopefully that's good news. :)

> So, what do you say? Is this new Regexp appearance worth abandoning a traditional standard that is widely spread across the globe?

There's a variety of regex syntaxes out there for different languages, not to mention many different forms of EBNF for specifying entire language syntaxes. I think your alternate appearance will be in good company, which is perhaps a cheerier way of saying it won't stand out.

Regexes provide a suite of basic operators for combining regexes, like concatenation /AB/ and alternation /A|B/. In this way, they're a special case of the broader category of parser combinator libraries, so a Parsec clone like ReParse will probably serve as a nice regex substitute. If you still want to experiment, this could be a nice starting point for that experimentation; maybe your syntax could compile to use this underneath.

If you just want a parser that works already, a very popular option for parsers in JavaScript is PEG.js.

That /y flag would be of use; too bad it is not yet supported everywhere. I thought of negative lookahead, but I wasn't aware that javascript supports it. Good to know, thanks.
So, there is a great deal of different Regexps out there? Maybe my version doesn't even deserve a different name... Anyway, I'll stick to my version if speed performance lets me... I kind of like it...

PEG.js is cool and fast, but I can't use it because I want to mix syntax definitions and actual instances of data in the same file. I also want to support left recursion (by the seed-growing algorithm).

I have my doubts about left recursion in top-down parsers. I've read a few papers on it where the methods turned out to be impractical. Though there was an early paper I'd like to see about left recursion using continuations; I have a weak spot for continuations, and I find papers from the 70s easier to read than current ones anyway.

I've tested the seed-growing algorithm in Javascript. It works like a charm. I've even extended it to deal with CFG instead of PEG. Thank you guys from the web (Alessandro Warth, James R. Douglass, Todd Millstein) :)

[Edit.. sorry, just realised I didn't read that clearly enough, so my main point was obsolete. Now edited to make the secondary point the main point :-(]

> So, '[^xyz]' now becomes '.!(x|y|z)'.

So ! is a negative look-behind assertion.

Set and inverted set are nice efficient syntactic sugars. No one wants to write (0|1|2|3|4|5|6|7|8|9|A|B|C|D|E|F) instead of [0-9A-F]. But the real problem with regexps is that they can't be composed. Perl6 has some nice answers to this.

PEG.js looks nice, thanks Ross, I'll investigate further.
This implies that [^xyz] has a closed set of matches, while !(x|y|z) does not. For example, is "kk" in the set of matches for !(x|y|z) ? In proposed Regexp form, operator "!" would be read as "without" (or negative look behind, if you like), so "!(x|y|z)" isn't really independent expression. It depends on what's left to it. Only if we write ".*!(x|y|z)" then we can read it as ".*" without "(x|y|z)". Ok, so it is a compound modifier. How does the matcher work - match the expression on the left, then check the match a second time against the excluded set? Would something like [0-9]+!(12+) work? except I would write it like "0-9*!(12+)". I completely removed "[...]" notation from the syntax and introduced character range (like "A-Z") without brackets. I would look at the regexp above and wonder whether it matched any number that didn't start with '12' or any number that didn't contain the sequence '12', and go reading to see what the conventions were. But mostly I'd be reading to see whether the regexp language has variable binding and reference built in, because they are incredibly useful and inserting references to (or bindings of) them is the main reason IMO for square brackets. One hardly ever uses a regexp unless picking a value out of a stream. And what one does with that value, at least half the time, is to bind it to a variable. So I think that a regexp sublanguage needs a way to specify variables to bind to the values it picks out of the stream. I like being able to write something like success=regmatch( (a-z)^5[varName=]((1-9)*) ,inputstring) where "success" is a boolean, and if its value after this expression is #true then the expression matched 5 characters of lowercase alphabetic followed by an integer, and varName is now bound to the integer. Or even success=regmatch( [val=]((1-9)*)_[val], inputstring ) to see if something matched a number, underscore, the same number. 
I would write the definition in a multiline string:

def = `
AssignExp <= (
    VarName <= /A-z+/,
    op <= ':=',
    Value <= /1-9+/
)`;

and then I'd use the full blown parser:

abstractSyntaxTree = parser.parse(def, inputstring);

after which I'd get a parsed tree for analyzing.

> How does the matcher work - match the expression on the left, then check the match a second time against the excluded set?

If the discussion is limited to actual regular expressions as the terms for "!", then the operator is a set difference and the pattern "x!y" is always regular. Multiple passes might be desirable but aren't necessary.

I would actually like to write (0-9|A-F)
http://lambda-the-ultimate.org/node/5309
This is the first of many C++ programming tutorials to come. You can use any native C++ compiler in this tutorial series, but we will use a free Windows C++ compiler. In this tutorial, you learn how to download and install Microsoft Visual C++ Express 2010. It is free and easy to learn, and it offers more than enough features to learn how to program in C++.

You will learn C++ starting with Console applications. A Console application (text input from the keyboard, and text output to the console window) is all you need to get started in C++. Later we will go into more advanced topics like Object Oriented Programming. Yes, you can learn all this in a Console project as well.

You will also learn:
* What a .cpp file is
* The C++ main function
* The return 0 in C++
* The iostream in C++
* The using namespace std
* Comments in C++
* And using cout and cin

So lets get started >>

Your first program is the simple "Hello World" example:

#include <iostream>
using namespace std;

int main()
{
    cout << "Hello World";
    return 0;
}

Installing a C++ IDE/Compiler

In your web browser go to microsoft.com. In the search, type "c++ express". This will bring up a list; the top result will usually be the one you are interested in. On the Express download page, choose Microsoft Visual C++ Express. The file you download, "vc_web", is just a loader. After downloading the loader, double click "vc_web" to start installing. The loader will start downloading and installing the files needed for C++ Express. When the download is complete, you will be asked to register. Register it now, or in 30 days you will have to register to keep using C++ Express for free. Click the Start button and you will see the Microsoft Visual C++ Express title. Click it and C++ Express boots up. Now you are ready to start a console project.
Click C/C++, then under Display, check "Line numbers". Line numbers are good; you will find out why as you learn C++.

Let's start a bare bones console application. First, on the menu, go to File/New/Project. In the pop up box, choose Win32/Win32 Console Application. Name your app; in this case we will name it "C++ConsoleApp". Then click OK. The Application Wizard box opens. Before you click Finish, click Next and make sure you put a checkmark next to "Empty project". Now you can click Finish to create an empty console application.

A bare bones C++ program needs a .cpp file. So under Solution Explorer, right click Source Files/Add/New Item. Choose C++ File (.cpp) and give it a name like MainApp. Then click Add. Below, you see the empty MainApp.cpp file. It starts at line number 1, but there is no main function yet. We will add that next. Type in:

int main()
{
    return 0;
}

Next, add the <iostream> include and the using namespace std line above the main function. Now we can start coding inside the main function:

int input_pause;
cout << "Hello World";
cin >> input_pause; //press any key and Enter to end program or close console window

To start the program for the first time it has to be built by the compiler. You can go to "Build Solution" first, or if you are sure there are no syntax errors, go to "Start debugging", which will compile and start the program. The console window pops up and you see Hello World. Let's take a closer look at the three statements of interest in this program:

int input_pause;
cout << "Hello World";
cin >> input_pause;

The first statement just declares the variable input_pause. The second statement, cout, just prints the text that is in quotations to the screen.
The third is a cin input statement with the variable input_pause. All this line does is keep the console window from closing until you enter a value and press Enter, or close the console window. Take note that all statements must end in a semicolon ";":

cout << "Hello World";

The double slash marks a comment and is ignored by the compiler:

cin >> input_pause; //press any key

You will start learning the basics of C++ in the next part. You might also want to read:
Part 2 - The Basics of C++
Part 3 - Conditional "if" Statement
Part 4 - else if Statement in C++
Part 5 - "switch and loops"
Part 6 - Arrays & Strings
Part 7 - Pointers
Part 8 - Functions in C++
Yes, TypeScript does support inheritance. You can define classes which derive from other classes and may implement interfaces. Digging deeper into the inheritance topic I usually get interested in encapsulation and interfaces.

var c : Class = MyClass;
var x : MyClass = new c();

interface Class {
    new (...args: any[]): any;
}

This interface will be assignment compatible with any class, because all classes will have a construct signature with one or more arguments that returns something assignment compatible with any. You can limit this further depending on your use case. …

Yes, you can. They call it "modules". It's pretty cool, especially because modules are "open ended", meaning you can add more implementations to pre-existing modules.

Time flies. I just realized that my last entry was from May. Thanks for your feedback on my Pew Pew Manifesto! In the last few months I've had a few things I wanted to write about but just not enough time. Yesterday I read an announcement by Microsoft that seems to be too important to be left out of the discussion about JavaScript cross-compilers. On October 1st, 2012, Microsoft introduced a new programming language for large-scale web application development called TypeScript: "TypeScript is a superset of JavaScript that combines type checking and static analysis, explicit interfaces, and best practices into a single language and compiler." (TypeScript: JavaScript Development at Application Scale)

Here are a few links that you might find useful if you are interested in learning more about TypeScript: …

In my opinion it is worth stepping back for a second and looking at the motivation for TypeScript: Why did Microsoft decide to invest in TypeScript? And in extension: Why did Google create Dart? It seems TypeScript wants to overcome limitations in JavaScript that prevent developers from writing large-scale web applications (TypeScript: JavaScript Development at Application Scale). That echoes pretty much what Google says about their motivations for Dart.
In Dart – A Modern Web Language (Google IO 2012) Lars Bak and Kasper Lund describe the thought process that eventually led to developing Dart. At around 8:30 Kasper Lund explains some of the biggest shortcomings of JavaScript in a section called Fundamental JavaScript Issues. The most important issue is in my opinion JavaScript's keep-on-truckin' paradigm, where mistakes are tolerated to a degree where almost anything goes and wrong types lead to unusable results (~11:00). That segment is really worth watching. I highly recommend it. The second session I would like to quote is Google I/O 101: Introduction to Dart. At around 11:00 Seth Ladd uses this JavaScript example for illustrating that in Google's Dart you can reason about unfamiliar code more easily:

function recalculate(origin, offset, estimate) { … }

Seth explains: For example the JavaScript example at the top reads, the recalculate method takes an origin, an offset, an estimate, and I don't even know what it returns. What is an origin? What is an offset? What does this return? Given this function signature, you know almost nothing. Of course, you can pray the developer left comments that annotate all the types of the parameters and return value. If you are lucky, they did. But then your tools don't know how to parse written text. Worst case, you're reading the function body, and if you're reading the function body, you've broken encapsulation, and you're in a bad spot. You really want to know what the method is, what you can pass to it, and what it can return. What Seth describes has happened to me many, many times when working with foreign JavaScript and Lua code (which is another dynamic language that should be avoided in my opinion). I often hear arguments that reiterate the mantra of "don't blame JavaScript, it's the developer's fault". For example, incoherent JavaScript code is caused by developers who don't understand how to write code in JavaScript.
Or, ambiguous function signatures are the result of undocumented code, which developers failed to provide. Or, developers should read a good book about software design before writing any code. Or, if you use the JavaScript library xyz then you can avoid all of those development problems. If those arguments from JavaScript enthusiasts were really true, why did Microsoft invest in TypeScript, and why did Google create GWT and later Dart? In my opinion those arguments from JavaScript enthusiasts all ignore structural problems of JavaScript that are very real. To illustrate my point: if a language allows typos and keeps on truckin' with unusable results, would that be the developer's fault? Frankly, yes, but developers are humans and make mistakes. That's why we need languages and compilers that catch errors like typos as early as possible, that is, at compile time. Pointing out that a developer just has to know the language and should simply not introduce any typos doesn't help anybody. Nobody is perfect. That's why we need good tools and languages. TypeScript, Dart, and GWT seem to provide just the right tools for helping you create large-scale web applications.

I'll walk you through the list… You have several choices: … In Optimizing cross-compiled JavaScript I wrote a whole article about the importance of optimizing your JavaScript code. As far as I know, Google's Closure Compiler is still the best JavaScript optimizer out there. …

Microsoft's congratulation email included some information about the next steps that would follow. They kindly asked every finalist to do four things: …

Sabbatical! Adobe is a great workplace and offers its employees one sabbatical every five years. I have been with Adobe for over 15 years and got the maximum of six weeks. There are three rules for Adobe sabbaticals: …

My time in Mexico was rather uneventful in regards to developing Pew Pew.
I worked only for a few hours, but continuously, and over time improved Pew Pew little by little. In my case I didn't have much choice and had to use VMware in order to install the new Windows 8 version. Fortunately everything went well. No problems; Pew Pew only needed to be recompiled.

I boiled those Metro Commandments further down to these Metro features that I wanted Pew Pew to support: … I'll walk you through each of those features and tell you (roughly) how I implemented them.

This is part five. … This all had to be done in 10 days.

This is part four. … looked crappy. Ian Lobb wrote a great code review of Pew Pew, which gave me some good pointers for improving Pew Pew: … Ian's list was already depressingly long. But I had to add one very important missing feature. I could have just taken that list and started fixing those problems until the deadline arrived. But as I mentioned before, it is worth stepping back and spending a few extra cycles on what exactly needs to be done. Now that I boiled everything down to a list of six Metro features I could start thinking about pricing those features. …

This is part three. I was surprised to see a simple cross-compiled app running in Metro without any problems. Would it be possible to get even more complicated apps like Pew Pew up and running in Metro? It only took me a few minutes to find that old prototype I wrote in February 2011 to convince skeptics within Adobe that you can cross-compile a game from ActionScript to JavaScript. There it was: Pew Pew, a cute space shooter game originally written by Mike Chambers, a colleague of mine. The version that Mike open sourced at github in 2011 was pretty much the same one I had used for my prototype. But I had to do a few important adjustments back then before I could cross-compile Pew Pew to JavaScript. In Pew Pew's github depot you will notice a file called Main.fla, which is a Flash IDE project file.
If you open Main.fla with the Flash IDE you might be surprised to see that there is really nothing going on in this project. There is only one key frame, which kicks off the game by adding an instance of Main to the stage. Somewhere in Main.fla there is also one transition for animating explosions. Most of the logic is in a cluster of ActionScript files that contains code which gets attached to Symbols defined in Main.fla. (For more details about Pew Pew's project I recommend reading Ian Lobb's code review of Pew Pew.) Later I learned that many games are written that way: one frame, a few transitions, a bunch of symbols, and the rest is all ActionScript code. I can see why many developers prefer this kind of Flash Pro setup, because using Symbols in combination with ActionScript code is very convenient. You would have to do a lot more extra work in order to get the same game running in Flex. Herein lies the problem, though. From a Flex developer's point of view, ActionScript code from Flash projects often appears incoherent and messy. The part that Flash Pro is adding to SWFs may seem convenient. But to me (and I suspect many other developers coming from the Flex world would agree) all that "magic" that Flash adds is just opaque. In many cases ActionScript code recovered from a Flash project doesn't even compile. Back in January 2011 it quickly became clear to me that before I could cross-compile Pew Pew from ActionScript to JavaScript I had to convert the original Pew Pew Flash project into a coherent Flex project. In other words, first I needed to be able to create a Pew Pew SWF from a Flex project. You simply can't compile incoherent code. Your compiler will (and should) throw compile errors if it detects inconsistencies. I realized my job was adding that missing "Flash Magic", that hidden extra code that game developers are fond of not having to worry about.
You might think that you can easily convert a Flash project to a Flex project just by importing those ActionScript files and compiling everything. That's unfortunately not the case. There are a few weird things I call "Flash Magic", which Flash Pro adds to the SWF in order to make everything work. The two most important oddities every Flex developer needs to know about Flash projects are automatic instantiation of symbol members, and automatic promotion of local variables to members. Both concepts seem utterly counterintuitive and unnecessary to me. But that doesn't matter. It's part of reality and one has to deal with it. Fortunately automatic promotion of local variables to members didn't seem to be a problem in Pew Pew. So let's focus on automatic instantiation of symbol members and have a quick look at Main.as:

public class Main extends MovieClip {
    ...
    //instantiated within FLA
    private var gameArea:GameArea;

    //main game menu view
    private var gameMenu:GameMenu;
    ...
    //background
    public var background:MovieClip;
    ...
    //main entry point for applications
    public function Main() {
        //note stage quality is always set to best in Adobe AIR. Have it
        //set to HIGH here in case we are running in browser
        stage.quality = StageQuality.HIGH;
        stage.align = StageAlign.TOP_LEFT;
        stage.scaleMode = StageScaleMode.NO_SCALE;

        //listen for when we are added to the stage
        addEventListener(Event.ADDED_TO_STAGE, onStageAdded, false, 0, true);

        //cache bitmap so it can be accelerated
        background.cacheAsBitmap = true;

        //dont need mouse events on background, so disable for performance
        background.mouseEnabled = false;
        background.mouseChildren = false;

        stop();
    }
    ...
}

You will probably agree with me that the code above, as it has been checked into github, looks very suspicious. Our Main class declares a background member of type MovieClip. Then Main's constructor code uses background before it has been created. Where is the code that does "new MovieClip()"?
You won't find it, because it is part of the "Flash Magic". If you open Window/Library in the Flash IDE you will find an entry for Background. As far as I know, Flash automatically instantiates class members if they are listed in the Library. In order to convert the code in Main's constructor to proper, coherent ActionScript I had to add these two lines:

//main entry point for applications
public function Main() {
    if( !background )
        background = SymbolManager.instance().createBackground();
    ...
}

By doing so I could use the same ActionScript code in Flash and in Flex. In the Flash version the background member would be automatically instantiated, but in the Flex version we would instantiate background manually. I could have initialized background with just "new MovieClip()". But I decided early on to encapsulate all the work for emulating "Flash Magic" into one class, which I named SymbolManager. Some symbols like Ship have an initial transform that SymbolManager also has to emulate. For example, if you select GameItems/graphics/ship_graphic in the Flash IDE's Window/Library and observe the Transform panel, you'll notice that Ship is rotated by 90 degrees and also scaled on the x and y axes by about 51.8%. The Info panel tells us that the initial x/y positions are set to 0.0/0.0. Unfortunately I didn't find the values that the Flash IDE gave me very reliable. In order to get correct values I had to export my symbols to FXG (File/Export/Export Selection, format=FXG).
The exported Ship.fxg for GameItems/graphics/ship_graphic was just an XML file and looked like this:

// Ship.fxg:
<?xml version="1.0" encoding="utf-8" ?>
<Graphic version="2.0" viewHeight="800" viewWidth="480" ...>
    <Library>
        <Definition name="GameItems_graphics_ship_graphic" ...>
            <Group ...>
                <Group ...>
                    <BitmapImage rotation="90" scaleX="0.51818848" scaleY="0.51817322"
                        source="@Embed('Ship.assets/images/ship_bmp.png')" fillMode="clip"/>
                </Group>
            </Group>
        </Definition>
    </Library>
    <Group ...>
        <GameItems_graphics_ship_graphic x="-12" y="-15.35"/>
    </Group>
    <Private/>
</Graphic>

As you can see, rotation and scale values are listed as attributes of the BitmapImage tag. The interesting part is in the GameItems_graphics_ship_graphic tag, which is the container of the ship image. That container tag sets x and y to -12 and -15.35. You will never find those values in the Flash IDE. But they are very important in order to correctly set up the initial transform. With the FXG information I was able to implement SymbolManager.createShip() and onShipLoaded, which gets called when the ship bitmap has been loaded:

private function onShipLoaded(e:Event):void {
    const bitmap : DisplayObject = getBitmapFromEvent(e);
    if( bitmap ) {
        bitmap.scaleX = 0.51818848;
        bitmap.scaleY = 0.51817322;
        bitmap.rotation = 90;
        bitmap.x = -12;
        bitmap.y = -15.35;
        bitmap.width = SHIP_WIDTH;
        bitmap.height = SHIP_HEIGHT;
        ...
        _removeEventListenerFromTarget(e, onShipLoaded);
    }
}

There were a few other minor things I had to adjust in Pew Pew. But I was eventually able to create a Pew Pew SWF from Flex. Once I had a coherent Flex project it was pretty easy to cross-compile Pew Pew to JavaScript. In the end it took me about three days to reanimate the old Pew Pew prototype with the latest version of my cross-compiler. Along the bumpy way I fixed a bunch of regression bugs and a few new ones as well. It was quite a moment of victory when I hit the debug button in Visual Studio 11 and Pew Pew finally worked in Metro.
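As a sanity check of those numbers, the initial transform described above can be composed by hand: scale first, then the 90-degree rotation, then the (-12, -15.35) offset from the container tag. The sketch below is my own C++ illustration (not code from Pew Pew; the real SymbolManager simply sets the same properties on the bitmap) showing that the symbol's origin ends up exactly at the container offset:

```cpp
#include <cmath>

// A 2D point in the symbol's local coordinate space.
struct Vec2 { double x, y; };

// Apply a Flash-style display transform: scale, then rotate (in degrees),
// then translate. This mirrors the order in which the FXG attributes nest:
// the BitmapImage carries scale and rotation, the container group the offset.
Vec2 applyTransform(Vec2 p, double scaleX, double scaleY,
                    double degrees, double tx, double ty) {
    const double pi = 3.14159265358979323846;
    double r = degrees * pi / 180.0;
    double x = p.x * scaleX;
    double y = p.y * scaleY;
    return { x * std::cos(r) - y * std::sin(r) + tx,
             x * std::sin(r) + y * std::cos(r) + ty };
}
```

With the Ship.fxg values, applyTransform({0, 0}, 0.51818848, 0.51817322, 90, -12, -15.35) lands on (-12, -15.35), matching the x and y that onShipLoaded assigns.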
I thought, if I put in a little bit of work I might even be able to submit this game to the Windows 8 First App Contest I had just read about. Being able to cross-compile that old Pew Pew prototype to Metro was a big accomplishment for me, which I wanted to share with my wife, who was sitting right next to me. The conversation that followed went like this:

Me: Hey, I got Pew Pew up and running in Metro!
She: That's great, let me see.
Me: Here it is. It needs a little bit of work, though. I am thinking of submitting it to the contest.
She: You will never win the contest with something like that.
Me: What do you mean? The graphics? It's supposed to look crummy. It's charming.
She: There is a fine line between "crummy" and "crappy".

I knew I was in trouble. Cross-compiling ActionScript to JavaScript was one thing; developing a cool game that is not crappy is a completely different beast I would have to tame.

This is part two of a series of blog posts about a crazy journey that eventually led to Pew Pew becoming one of the first apps in Microsoft's App Store. On a long flight from Seattle to Omaha I learned about Metro and got hooked. I decided to dive into Metro at around 12/20/2011. After a few relaxing days of catching up with my in-laws and eating way too much good food it was time to get my hands dirty with Metro. I only brought my MacBook Pro laptop to Omaha. So how does one install Microsoft's Windows 8 Developer Preview on a Mac? This is how I did it: …

This may surprise you, but I would have done the same on my Windows machine at work. I highly recommend sandboxing new operating system versions that are under development with virtualization software like VMware or VirtualBox. This method just saves you many troubles. Take many snapshots and just roll back if things get too messy (as things tend to do during software development). Then I launched Windows 8 in my sanitized VMware sandbox and made myself familiar with my new environment.
Metro reminded me of Windows 7 for phones. It was all very “touchy.” I have to admit, at first I didn’t like the Tile metaphor. Tiles remind me of bathrooms and hospitals. They break into shards when they hit the ground and they are cold unless you have heated floors. I know, UI design is more difficult than one thinks it is, but I wondered, why didn’t they choose something organic like Leaves instead? Perhaps even Pillows? As far as I can remember Windows 8 Developer Preview from September 2011 came with Visual Studio 11 Express pre-installed. In case you run the latest Windows 8 Consumer Preview you need to install the Developer Tools and SDK with Visual Studio 11 Express Beta in a separate step. VS 11 looks pretty much like the previous versions except for a few new options like creating Metro Style JavaScript App projects. This all looked pretty encouraging. I was eager to write my first Metro Style App, Hello World. Of course, I had no idea how to write a simple Hello World Metro Style app. So I downloaded the Windows 8 Sample Pack and looked for a Hello World sample. There was one example called “DirectWrite hello world sample.” But that didn’t look simple at all. I believe I ended up picking the “App tiles and badges sample” and built it. When I hit the debug button VS linked the app, deployed it to Metro and launched it without problems. That was easy. Just for fun I decided to create a blank Metro Style JavaScript App Project and sniff around a little bit. VS 11 created a bunch of files, one of them was named default.html, which included default.js . That had to be the entry point! 
Sure enough, when I opened default.js I saw that WinJS.Application.start() brings the app to life, which would eventually trigger the call to WinJS.Application.onactivated – a function that I could, for my convenience, override:

// default.js
(function () {
    "use strict";
    var app = WinJS.Application;
    ...
    app.start();
})();

In order to test my theory that onactivated() would eventually be called towards the end of the launch sequence I added a trace statement to WinJS.Application.onactivated:

app.onactivated = function (eventObject) {
    if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) {
        WinJS.UI.processAll();
        console.info( "Hello, Metro" );
    }
};

There it was in my debug console: Hello, Metro. For an hour or so I just browsed through the samples until implementation patterns seemed to emerge, then I decided to get serious about studying the Windows Runtime API. Fortunately, Microsoft's online API reference for Windows Runtime and Windows Library for JavaScript documentation for Metro was (and still is) excellent. I noticed two namespaces: Windows.* and WinJS.*. Apparently, the Windows.* namespace corresponds to Windows Runtime, which "is designed for use with JavaScript, C#, Visual Basic, and C++". The WinJS.* namespace on the other hand refers to the Windows Library for JavaScript, "a set of JavaScript and CSS files that make it easier to create Metro style apps using JavaScript". I was feeling ready to try running a simple cross-compiled app. Why not SpriteExample? One of the first ActionScript apps (if not the first one) I ever cross-compiled to JavaScript was SpriteExample.as. I simply imported my cross-compiled SpriteExample.js into the empty "Hello, Metro" project and made a small change:

// Original SpriteExample.js
(function(__global) {
    ...
})(window);

// Modified SpriteExample.js
function loadSpriteExample(__global) {
    ...
};

This change allowed me to defer the load sequence of my SpriteExample.
I wanted to load SpriteExample at the end of app.onactivated, where I had put my trace statement:

app.onactivated = function (eventObject) {
    if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) {
        WinJS.UI.processAll();
        loadSpriteExample(window);
    }
};

The only step left was adding a script tag to default.html, which loads SpriteExample.js:

<script src="/js/SpriteExample.js"></script>

Frankly, I was shocked when I saw SpriteExample running in Metro without problems. This all was way too easy. Could it be that it was in fact super easy to develop Metro Style apps using ActionScript and my cross-compiler? I had to try a more complicated project. I was convinced that at one point reality would stop me from doing what I was doing. So I chose Pew Pew as my next project…

This is part one of a series of blog posts about a crazy journey that eventually led to Pew Pew becoming one of the first apps in Microsoft's App Store. It all began in December, 2011… If you download and install Microsoft's Windows Consumer Preview of their upcoming Windows 8 operating system and click on the Store button you will find a rapidly growing number of apps. One of them is a game called Pew Pew, which I submitted for the First App Contest. The description reads: "Attention Earthlings! This fun, retro space shooter game allows you to destroy enemy ships and UFOs from the safety and convenience of your home planet. Rack up points while protecting your planet from invasion." Pew Pew was originally written by Mike Chambers, who is currently the Director of Developer Advocacy for web platforms here at Adobe. In 2010 Mike published Pew Pew's source code under the MIT license at github. The original Pew Pew version was written in ActionScript, and in order to turn Pew Pew into a Metro app I had to solve three problems. First I downloaded the latest Pew Pew source code from github and cross-compiled Pew Pew's ActionScript code to JavaScript.
Then I improved the game and added Metro features before submitting Pew Pew to Microsoft's App Store. This all happened in 9 weeks between 12/17/2011 and 2/17/2012. Vacation! On 12/17/11 my wife and I hopped on an airplane in Seattle to visit her parents in Omaha. Those who know me better have accepted that I am one of those developers that always work – even on their vacations and honeymoons. I call it Rest Developer Syndrome, while my wife thinks I am just a workaholic. As in most cases, my wife is probably right, but still, in my opinion there is a difference between work and "work". When I am on vacation I can finally catch up with all the new stuff that has come out in the past few months. That's not work for me; that's even more fun than my regular work! There was that //build conference that I missed in September 2011, when Microsoft revealed its new Windows 8 operating system with something completely new called Metro. Fortunately, Microsoft made recordings of the keynotes and many talks publicly available. So I downloaded five or six of those videos and watched most of them during the long flight from Seattle to Omaha on my slow iPad. I started with the keynotes, which seemed a little bit hectic, as most keynotes usually are (only a few deliver keynotes as brilliantly as Steve Jobs did). I did learn from those keynotes that Windows 8 would introduce a new user interface called Metro in addition to the "traditional" desktop. Only Metro style apps would run in Metro, they said, and developers would be able to write entire apps in JavaScript, because Metro's JavaScript API would allow developers to access the operating system through a new, unified Windows Runtime layer. In Windows 8 you could only install Metro apps through Microsoft's new App Store. That's how developers would make money (and Microsoft would make a few pennies, too). I didn't mind seeing Microsoft try to catch up with Apple and Google by opening their own App Store.
I love competition, I love Freedom of Choice! What really intrigued me was that I could access the operating system through Metro's JavaScript API. I had been cross-compiling a lot of complex ActionScript projects for almost a year, and by December 2011 I had become a fearless gladiator. The biggest limitation when cross-compiling ActionScript to JavaScript was the browser and its DOM. For example, there was (and still isn't) any way to access the camera from JavaScript unless you used something like PhoneGap. If it were really true (as Microsoft promised in the keynotes) then Metro would open a gigantic candy store right in my own neighborhood: access to the OS layer. I definitely wanted to learn more about Metro. Still on the plane to Omaha I next picked Jensen Harris's talk about 8 traits of great Metro style apps. Those eight traits are: … Jensen's talk walks you through each of those traits and frankly, I was surprised. Something was really different with this guy, and Metro itself also seemed different from what I had expected. I had to watch another talk about Metro. The next talk I picked was Samuel Moreau's talk about Designing Metro style: principles and personality, and this one just blew me away. Sam, who is a director of UX Design and Research at Microsoft, explains the background and inspirations for Metro style design. He identifies three key influences: … Sam's talk really opened my eyes. I realized that Metro is not some flashy technology tacked on to a mature operating system (the lipstick on the pig, if you will). Instead I had to look at Metro as a new paradigm. The gladiator in me knew that cross-compiling ActionScript projects to Metro wouldn't be as easy as I thought it would be. Bringing some cross-compiled JavaScript up and running in a new environment alone won't do it for Metro. Metro is a different beast. Jensen Harris and Samuel Moreau really surprised me in many ways, and it took me a while to put my finger on what it was.
Their talks were interesting and I didn't feel lectured to at all. Instead of the attitude of the past, I sensed a human side that I found refreshing. Those guys seem to be normal people, and their reasoning resonated with my way of thinking and cultural values. I wondered, is there maybe a new breed of developers and managers like Jensen and Sam emerging within Microsoft? Perhaps there is a New Microsoft forming within the Old Microsoft? One can only wait and see. By the end of our flight to Omaha I had already decided to dive into Metro. But I also wanted to spend the holidays with my wife, who I love very much, and her wonderful family. So I came up with these ground rules for my new Holiday project: I should not work more than 3-4 hours a day on this, or any project that involved me being immersed in my laptop.
According to Idea Finder, the game of Cluedo was invented by Anthony Ernest Pratt and patented in England in 1947, then purchased by Waddington Games before the U.S. rights were bought by Parker Brothers in 1949. The WhoDunnit? game has been a mainstay of rainy days and cottage cupboards around the world ever since. There have been several PC versions of this murder mystery, and after passing up the last copy I'd seen on a store shelf and then failing to find another one, I went to the nearest thrift shop and bought a used copy of the board game for $2. I started on this project about a month ago, and it looks pretty cool. If all you want to do is play your old favourite and you already have Microsoft's Visual C# 2010 up and running, then all you need to do is download the source code above and let yourself into Mr. Boddy's Mansion on the day of the murder. You can play any suspect you like by making your selection when prompted. It's a one-player game, so the other players are all controlled by the computer's Artificial Intelligence, which you can set via the menu Options->Set-AI. Once you have this menu up, you can cycle through the various AI levels by clicking on a character's name. It would be nice if you could do the same thing with your own 'intelligence', but even if you're playing Mustard and you set Mustard's AI to genius, you'll still have to do all the work yourself. There's also something called Auto-Proceed that's part of the speed controls, which you can access through the menu Options->Speed. In the default setting, Auto-Proceed will advance the game in a way similar to when you're playing with people around a table, so that it is difficult to keep up and take careful notes. You may want to switch this off the first time you play, and manually click the 'OK' button to move the game along while you do your sleuthing. Which brings us to the Notebook on the right of the game board. In the image above, you'll notice a few things.
First, the names on the left are the names of the game cards sorted out into either Suspect-Cards, Weapon-Cards, or Room-Cards. Take a look at the left edge of the image, and you'll see the three cards your suspect is holding; in this particular example: Revolver, Conservatory, and Billiard Room.

To the right of the names, you can see six colored boxes. These boxes are user-controlled. You can either right-click or left-click them to cycle through the four possible values each colored box can hold: X, check, ?, or blank. There's a color for each suspect, and your own suspect's boxes are automatically set at the beginning of the game to help you along. An X means that suspect definitely does not have that card, a check means he does, a blank means you have no idea either way, and a ? means you think he might. This should be plenty to help you solve the puzzle.

Let's look at the game board. You can clearly see the 9 rooms and the marble tiles. In the image above, the two characters Miss Scarlet and Col. Mustard have played their opening turns and stepped away from their respective start locations towards the rooms. The object of the game is to travel from room to room making Suggestions and getting other players (suspects like yourself) to show you their cards. When you've figured out what cards they all have, you should know which three cards have been separated from the rest and now lie in the middle of the board, and win the game.

The control-panel is on the right, on your notebook, and consists of a few Pulse-Buttons for you to either Roll the die, Make a Suggestion, or End your turn, all of which are self-explanatory. Below these buttons, you have the die-box where the die (that's the square thing with spots on it!) is rolled. And beneath the die-box is your ultimate button, Accuse, which is all you need to make someone's life a real bother.
At the beginning of your turn, you'll either get a chance to suggest right away (if some other player summoned you into a new room to make their suggestion), or you'll have to move out of a room. In either case, whenever the Suggestion button is enabled, you're entitled to make a new suggestion, and that'll happen every time you enter a room. It's probably a good idea to make suggestions every chance you get.

Once you've clicked the Suggest button, your game board will look like the image above. You must suggest the room you're in when making a suggestion, so your only options are the Who and the How of it. To make your choice, you simply click on the face of the suspect and the image of the weapon you want to suggest; when you're done deciding, the images will rotate into position. Click the OK button that appears at the bottom of the 'Bubble' whenever you're ready.

At this point, the Suggestion needs to be Proven. So the player on your left (or at least that's where he'd be if you were sitting around the table) will have to try to disprove your suggestion by showing you any of the cards you're asking for. If that player doesn't have a card, they'll tell you so. If they do have a card, they'll show you what they have, and that'll be the end of your suggestion. But whenever another player is making a suggestion, they and the person disproving their suggestion are the only people around the table who know what card is being shown. All the other players have to try and figure out what card it is by using deduction and reason.

It will happen several times during the game that someone else makes a suggestion and it's your turn to try to disprove it. When you cannot disprove a suggestion, a banner will appear telling you "you have no card"; if you can, the same banner will say that you do have a card, and it will show you which cards you can show.
Here, if the auto-proceed setting is checked, the game will go on without your interference when you have none or only one of the suggested cards. But even if you have not shut off the auto-proceed option, the game will stop and wait for you to pick which card you want to show the player making the suggestion whenever you have more than one of the cards being suggested.

Before you roll the die, first consider whether you want to use a Secret Passage. The secret passages are located in the corner rooms, and let you travel across the board without rolling the die. So, if you're in a corner room and you want to go to the opposite corner, you'll want to use the secret passage. To do so, click on the colored square that appears in the room you're in before rolling the die. The image below shows the game-board at the beginning of Col. Mustard's turn, before he has decided whether he wants to roll the die or use the secret passage. You can still see the yellow square in the corner indicating that Col. Mustard has the option of using the secret passage to go to the Conservatory from the Lounge rather than roll the die.

To roll the die, you click the Roll button. When the roll-die animation stops, the game board will display where you can move by placing colored squares on all the marble tiles and doors you can reach. The image above shows the move options for Col. Mustard when he rolls a 6 at the start of the game. Just click on the game board, and Col. Mustard will move to wherever you told him to.

When you've suggested enough and have it all figured out, go ahead and make your accusation and close the game. But be careful... if you accuse incorrectly, you'll eliminate yourself and have to watch the AI until some other sleuth solves Mr. Boddy's mysterious murder. To accuse, you only need to click the Accuse button below the die-box on the right. Things are a little different here in that you can accuse anyone anywhere with any weapon.
So you don't actually have to be in any specific room, which means you can select whatever room you think is the crime-scene. The image above shows you the accuse options, and you can make your pick by clicking the appropriate images, like you did with your suggestions. To cancel your accusation, click the Accuse button again when it reads Cancel Accusation, and you can wuss-out and let someone else be the hero. Or, if you've got the juice, click that big red pulsing button in the middle of the screen and see if you're an Ace or a Waste!

This project was a lot of fun to write, and it went pretty well all the way through, but the first problem was the board itself. All of the images that appear in this game were downloaded off the internet (the only thing I actually drew myself was the piece of cheese!). But there weren't any pictures of the board itself that I could use for the game, since the graphics animation required each tile to be perfectly aligned (if I wanted to do things the easy way) to facilitate placing the images of the suspects in their proper places. For this reason, I had to draw the game board myself... So, OK, the game board and the piece of cheese.

This was actually harder than you'd think. For the marble tiles, I painted all the blank tiles yellow, then whited-out a few disparate tiles at a time while cutting and pasting the entire transparent map (white tiles) onto a mega image of a marble slab, each time whiting out a few new tiles and picking up marble when I pasted the latest image, until all the tiles were marbled. Whiting them all out and pasting the board onto a marble slab in one go wouldn't look as cool, because then the whole floor would have had a single marble image behind it. This is way better.

To keep track of which tiles the suspects can move on, as well as the dimensions of each room, I put together a miniature map using MS-Paint depicting the game's floor, where each pixel is equivalent to a single tile.
Here's the image: There's probably a better way to do this, but doing it this way allowed me to visualize the game's data better than if I had written it directly into code in a database format. The white pixels are marble tiles on the game board, the red tiles are rooms, black tiles are inaccessible, and the blue ones are doors. The difficult thing to do here was 'program' the door-pixels in the image to contain information like RoomID and dirDoor. To do this:

    blue is not quite so blue,
    but neither should you
    - a poem by Christ Kennedy

(I've applied to the CodeProject to become their on-line poet-laureate, but haven't yet heard back from them; a petition from all my fans would greatly improve the chances that this site finally has the proper poet it truly needs.)

The blue pixels on the mini-map do hold this information. The color blue in the blue pixels, like all pixels, is defined by four byte-sized variables: R (red), G (green), B (blue), and A (Alpha). Though the blue doors all have the same 255 for the B component, the R and G values hold the egress direction and room ID values, respectively. You can see this in the classSuspect function shown below:

    /// <summary>
    /// uses the 24x24 pixel bitmap bmpClueFloor to set up a 2d array describing the game map.
    /// each pixel represents a square on the board.
    /// the pixel color components (Red, Green, Blue) determine the type of tile.
    /// Marble Tile (corridors) : R=255, G=255, B=255 (white).
    /// Illegal square          : R=0,   G=0,   B=0   (black).
    /// Room : R=255, G = ROOM ID NUMBER (study=0, hall=1, ... billiard room=7, library=8),
    ///        B=0 (variant of Red).
    /// Door : R=egress direction (0=north, 1=east, 2=south, 3=west, 4=secret passage),
    ///        G=Room ID, B=255 (variant of Blue).
    /// e.g. the marble tile south of the door (6,4 in 2d array) to the study
    /// is white (255,255,255).
    /// the first tile into the study (roomID=0) is considered a door-tile
    /// facing south (dir=2), therefore tile (6,3 in 2d array) is colored (2,0,255).
    /// the top-left-most tile (0,0) is the study's secret passage
    /// to the kitchen (roomID=4), therefore its tile is colored (4,4,255).
    /// the right-bottom-most tile (23,23) is the kitchen's secret passage
    /// to the Study (roomID=0), therefore its tile is colored (4,0,255).
    /// </summary>
    void initFloor()
    {
        // static floor tiles
        Bitmap bmpFloorTiles = new Bitmap(Clue_CS2010.Properties.Resources.ClueFloor);
        cFloor = new classFloor[bmpFloorTiles.Width, bmpFloorTiles.Height];
        for (int intX = 0; intX < bmpFloorTiles.Width; intX++)
            for (int intY = 0; intY < bmpFloorTiles.Height; intY++)
            {
                cFloor[intX, intY] = new classFloor();
                Color clrTile = bmpFloorTiles.GetPixel(intX, intY);
                if (clrTile.R == 255 && clrTile.G == 255 && clrTile.B == 255)
                {   // white = tile
                    cFloor[intX, intY].eType = enuFloor.tile;
                }
                else if (clrTile.R == 0 && clrTile.G == 0 && clrTile.B == 0)
                {   // black = invalid
                    cFloor[intX, intY].eType = enuFloor.invalid;
                }
                else if (clrTile.B == 255)
                {   // blue = door : R = direction door exits room, G = roomID#
                    cFloor[intX, intY].eType = enuFloor.door;
                    cFloor[intX, intY].dirDoor = (enuDir)clrTile.R;
                    cFloor[intX, intY].eRoom = (enuRooms)clrTile.G;
                }
                else if (clrTile.R == 255)
                {   // red = room : G = roomID#
                    cFloor[intX, intY].eType = enuFloor.room;
                    cFloor[intX, intY].eRoom = (enuRooms)clrTile.G;
                }
                else
                    MessageBox.Show("this should not happen");
            }
    }

There's also a color-code for the rooms (red pixels), but that turned out to be unnecessary since the suspects only travel along tiles and are then inside a room, and when they're inside a room, they are drawn at their assigned locations around a central point, so the red tiles may as well be black (though that would make it harder to visualize the map).
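The same decoding scheme can be sketched outside C#. Here's a minimal Python illustration of how one pixel's (R, G, B) components map to a tile description; the color rules and the sample pixel values come from the article's comments, but the function and dictionary shapes are my own:

```python
# Decode one mini-map pixel into a tile description, following the article's
# scheme: white = marble tile, black = invalid, B==255 = door (R = egress
# direction, G = room ID), R==255 = room (G = room ID).
DIRS = ["north", "east", "south", "west", "secret passage"]

def decode_tile(r, g, b):
    if (r, g, b) == (255, 255, 255):
        return {"type": "tile"}
    if (r, g, b) == (0, 0, 0):
        return {"type": "invalid"}
    if b == 255:                      # door: R = egress direction, G = room ID
        return {"type": "door", "dir": DIRS[r], "room": g}
    if r == 255:                      # room: G = room ID
        return {"type": "room", "room": g}
    raise ValueError("this should not happen")

# The study's door tile from the article's comment is colored (2, 0, 255):
print(decode_tile(2, 0, 255))   # a door facing south into room 0 (the study)
```

Running the decoder over all 24x24 pixels would rebuild the same cFloor-style array the C# code produces.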
In any case, once this cFloor array is set and we have our floor set up in a data format which the code can easily read, the rest of it is quite simple.

Well, actually, there were a few other problems. Both the player-controlled suspect and the AI-controlled suspects use much the same code, and the same thing happens when it's an AI player's turn to move and it decides to roll the die as when the human player presses the roll-die button: the die is cast! And when that animation stops, the program has to decide what squares that particular suspect is allowed to go to. To do this, it uses an algorithm called Breadth First Search, which I describe in an earlier article: Battlefield Simulator. Essentially, it moves step by step away from the suspect's current location, keeping track of where it can go at each round of stepping, while stepping one step further each round, until it reaches the limit (the die roll result), all the while keeping track of where the suspect can go in an array called:

    public classSearchElement[,] cSEAllowableMoves;

of the class shown below:

    public class classSearchElement
    {
        public Point pt;
        public int intSteps;
        public int intTurns;
        public int intCost;
        public enuDir SrcDir;
        public classSearchElement next;

        public void set(Point PT, int Cost, enuDir sourceDirection)
        {
            set(PT, sourceDirection, Cost, 0);
        }
        public void set(Point PT, enuDir sourceDirection, int Steps, int turns)
        {
            pt = PT;
            SrcDir = sourceDirection;
            intSteps = Steps;
            intTurns = turns;
            intCost = intTurns * 3 + intSteps;
        }
    }

Since the array is initialized to contain nothing but null values (note that all the previous elements are inserted into a linked list so that they can be recycled the next time a search is made), when the player (or AI-suspect) clicks on the screen, the pictureBox click-event handler tests whether the tile corresponding to the area of the screen that was clicked is null; if it is, then that click is ignored; if it isn't, then it knows that that tile is a
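The breadth-first expansion described above can be sketched in a few lines of Python. This is not the project's C# code, just an illustration of the technique on a made-up toy grid with a made-up die roll:

```python
from collections import deque

def allowable_moves(grid, start, steps):
    """Breadth-first search: every square reachable from `start` in at most
    `steps` orthogonal moves, where grid[y][x] == 0 is a walkable tile.
    Returns a dict mapping each reachable (x, y) to its distance."""
    seen = {start: 0}
    frontier = deque([start])
    while frontier:
        x, y = frontier.popleft()
        if seen[(x, y)] == steps:
            continue                      # reached the die-roll limit
        for dx, dy in ((0, -1), (1, 0), (0, 1), (-1, 0)):
            nx, ny = x + dx, y + dy
            if (0 <= ny < len(grid) and 0 <= nx < len(grid[0])
                    and grid[ny][nx] == 0 and (nx, ny) not in seen):
                seen[(nx, ny)] = seen[(x, y)] + 1
                frontier.append((nx, ny))
    return seen

# 1 = wall, 0 = marble tile; roll a 2 from the top-left corner
grid = [[0, 0, 1],
        [0, 1, 0],
        [0, 0, 0]]
moves = allowable_moves(grid, (0, 0), 2)
```

Like the article's cSEAllowableMoves array, any square absent from the result is simply not a legal destination for that roll.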
valid place for that suspect to move to.

Then, when it knows where the player wants to go, it does another, similar breadth first search in the classSuspect function shortestPath(), using the same classSearchElement shown above but stepping from the destination back to the current location, while keeping track of the path along which it travels (via the SrcDir variable in classSearchElement). Then the animation begins.

One of the most important variables in this project is eMode of type enuMode, shown below:

    public enum enuMode
    {
        idle, dealCards, Accuse, Accuse_Animation,
        chooseCharacter, warnPlayerNextTurn, beginTurn, gameOver,
        rollDie_begin, rollDie_animate, rollDie_end,
        showAllowableMoves, animateMove,
        Suggest, AISuggestionComplete,
        animateIntroduction, animateSummonSuspect,
        animateSummonWeapon, animateSecretPassage,
        respondToSuggestion, playerChoosesCardToShow,
        playerTurnIdle, endTurn, any
    };

Whenever tmrDelay is enabled and ticks, its event handler checks to see what mode the game is in before deciding what needs to be done next. When the suspect is moving from one tile to the next, or entering or leaving a room by a door, the eMode variable is set to animateMove and the flow of the program falls into the function moveStep().

The moveStep() function moves the suspect's icon over the board along the path which the shortestPath() function calculated and stored in the global udrMoveAnimationInfo structure variable, moving the piece along in steps of 7 pixels (each tile is 28 x 28 pixels in area) in whatever direction the icon is moving from one tile to the next, until it reaches its destination. classSuspect's location is described with two variables: eRoom and ptTile.
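The destination-to-source search can also be sketched in Python. This is a hypothetical stand-in for the article's shortestPath(), not the actual implementation: it searches backwards from the destination, remembers each tile's predecessor (playing the role of SrcDir), and then walks that chain forward from the suspect's current square:

```python
from collections import deque

def shortest_path(grid, src, dst):
    """BFS from dst back toward src, remembering each tile's predecessor,
    then walk the predecessor chain to emit the path src -> dst.
    grid[y][x] == 0 is a walkable tile."""
    prev = {dst: None}
    frontier = deque([dst])
    while frontier and src not in prev:
        x, y = frontier.popleft()
        for dx, dy in ((0, -1), (1, 0), (0, 1), (-1, 0)):
            nx, ny = x + dx, y + dy
            if (0 <= ny < len(grid) and 0 <= nx < len(grid[0])
                    and grid[ny][nx] == 0 and (nx, ny) not in prev):
                prev[(nx, ny)] = (x, y)
                frontier.append((nx, ny))
    if src not in prev:
        return None                      # destination unreachable
    path, step = [], src
    while step is not None:              # chain leads from src back to dst
        path.append(step)
        step = prev[step]
    return path

grid = [[0, 0],
        [0, 0]]
path = shortest_path(grid, (0, 0), (1, 1))   # three tiles, two steps
```

Because the search starts at the destination, the recovered chain already runs in the direction the animation needs, which is exactly why the article's version steps from the destination to the current location.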
Since the eRoom variable tells the program that the suspect is either in a room (and specifies which room), at a start-position, or on a tile, the ptTile variable is only needed when the suspect's eRoom value is set to tiles.

Once I had made myself a six-player game of clue (with many details left undone), I set out to make the smartest, perfectest, geniusistic artificial intelligence I could conjure. The simplest way to simulate this would be to count the number of turns each suspect plays, and then decide that after so many turns, there's a chance that this or that suspect might 'guess' the solution, and just have it look at the cards. In other words, cheat. But what would be the fun in that!?! Nope, that's not what this does at all. The genius setting takes advantage of every trick I've given it. And though I am no genius, the AI is pretty close to it.

To geniusify my computer, I had it take careful notes of what cards each suspect declares they do not have, as well as each set of cards that was called when a suspect did show a card. This way, at every cycle of the Prove Suggestion phase of the game, it can check whether previous sets of unknown cards included a card any suspect recently declared not having, until it eliminates two of those three and then knows exactly what card was shown any number of turns ago, even though it didn't see the card itself. For example:

Mrs. White suggests: Col. Mustard with the Knife in the Kitchen.

and Mrs. Peacock says she has no card. But previously, Miss Scarlet suggested:

Col. Mustard with the Rope in the Kitchen.

and Mrs. Peacock, that turn, did show Miss Scarlet a card. A mystery card which we can now deduce was in fact the Rope: since we now know that Mrs. Peacock has neither Col. Mustard nor the Kitchen, she must have shown Miss Scarlet the Rope.
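The Peacock example boils down to simple set arithmetic, which a few lines of Python can illustrate. The card names come from the example above, but the data structures are mine, not the project's:

```python
# Each time a suspect shows an unseen card, remember the 3-card set they were
# asked about. Each time they deny holding a card, remove it from those sets.
# When a set shrinks to one card, the mystery card has been identified.
unknown_sets = {"Peacock": [{"Mustard", "Rope", "Kitchen"}]}

def deny(suspect, card):
    """Record that `suspect` has been shown not to hold `card`;
    return any mystery cards this deduction pins down."""
    deduced = []
    for cards in unknown_sets.get(suspect, []):
        cards.discard(card)
        if len(cards) == 1:
            deduced.append(next(iter(cards)))   # mystery card identified
    return deduced

deny("Peacock", "Mustard")          # Peacock couldn't disprove Mustard...
found = deny("Peacock", "Kitchen")  # ...nor the Kitchen, so she showed the Rope
print(found)                        # ['Rope']
```

This is the whole trick: two denials eliminate two of the three possibilities, and the AI learns what card was shown to a third party without ever seeing it.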
The genius AI remembers all sets of unknown cards, and keeps diligent notes of who denied having what, making it fairly easy for it to figure out relatively quickly what the solution to the puzzle is. It also knows how many cards each suspect holds in their hand (as of now, in a six-player game, each suspect holds three cards), and so once those three cards have been identified, the genius-AI knows that that suspect has no other cards. To solve the puzzle, it can eliminate each possibility off a list, or it can notice that no suspect has this or that card, making that one card the obvious choice. Whenever its list of suspects, list of weapons, and list of rooms is narrowed down to one of each, it makes its accusation. And the computer is never wrong (I didn't bother making the dumber levels erroneous, just less diligent).

But keeping good notes isn't the be-all and end-all of winning a good game of Clue. You still have to ask the right questions. classSuspect does this in the function below:

    public void makeSuggestion()
    {
        prepareSuggestionAI();
        pickSuspectForSuggestion();
        pickWeaponForSuggestion();
        MainForm.udrSuggestion.eRoom = eRoom;
        ...

... by first picking a suspect, then a weapon. To make the selection of the suspect, it goes over its notes, trying to eliminate 'unknowns': those are the sets of cards which other suspects have shown to a third party. If it knows the Who in this whodunit?, then it may either choose that card, and be certain it doesn't relearn what it already knows, or it may choose a card it holds in its own hand (if a computer actually had hands, I guess). But before it can figure out who killed Mr. Boddy, it has to eliminate the possibilities by looking over the 'unknowns'. If it has no mystery unknown for the suspect category, it just randomly picks any suspect remaining on its list.
But if it does have one or more of these unknown suspect cards left to eliminate, it will go around the table in the reverse of the order in which the suggestion will be disproved, and pick the first suspect-unknown-card it finds. It goes in reverse order because the notion is that it's best to collect as many Xs in your notebook as possible. If the first suspect you ask to disprove your suggestion shows you a card, you've narrowed one field (suspect, weapon, or room) by one card, but you haven't learned much else. By collecting all these Xs in your notebook, it becomes much easier to deduce what cards are being shown to any third party, and the puzzle is solved much more quickly.

Then the AI picks a weapon to suggest by essentially doing the same thing it did to pick a suspect, but keeping in mind that it doesn't want to ask about a mystery unknown weapon-card which is in the same unknown card-set as the mystery-unknown suspect card it has already picked. For example, if it knows Prof. Plum showed a card when he was asked for the Kitchen, Peacock, and Candlestick, and it has already decided to suggest Peacock, then it does not want to suggest Candlestick, because once it eliminates one, it has already resolved the other. It can learn more by asking about a different weapon, a weapon which it does not suspect the same suspect has (forgive the confusion). Collecting more Xs, resolving more unknowns, and reducing more possibilities, until a clear deduction of the who and the where and the how is made to end the game. See? Simple. Quick and neat.

The AI's internal note keeping makes regular use of two important functions:

    void checkNote(enuCards eCard, enuSuspects eSuspect)

and

    public void XNote(enuCards eCard, enuSuspects eSuspect)

Notes here refer to individual cells that represent internally what the user sees graphically on the notebook: the colored squares. There are 6x21 notes in a 2-dimensional array, one for each suspect (6) and each card (21).
The AI checks a note whenever it has deduced, or has been shown, that a given suspect has a card. The checkNote(eCard, eSuspect) function sets a specific note in the array to 'check', meaning that the holder of the card eCard has been identified as the suspect eSuspect. And if the holder of a card has been identified, then all the other suspects are known not to have that same card (there is only one of each card), and so all the other suspects have their note for that particular card X'ed (using XNote() below). Also, when the holder of a new card has been identified, a tally is made of the cards which that suspect is known to have, and if all of that suspect's cards have been identified, then it knows that that suspect has no other cards, so all of that suspect's other cards are X'ed.

X'ing a card using the XNote(eCard, eSuspect) function is similar to checkNote() above, except that it lets the AI record the knowledge that the suspect eSuspect does not have the card eCard. When this happens, it goes over all of its sets of mystery unknown cards, looks for any that include the card eCard which that suspect has just been discovered not to have, and eliminates those unknowns. By eliminating unknowns, it can reduce the number of cards in an unknown card-set to one, and can then check (using checkNote() above) that card in its notes. If all suspects are known not to have a given card, then the AI knows that that card is part of the solution, and adjusts its suggestion-making accordingly.

Between the X'ing and the Check'ing, it all soon cascades into a lot of figuring, and in no time the whole muddled mystery unravels nicely.

Lobotomy! Yes, you guessed it. If all suspects around the table were geniuses, you would soon get frustrated trying to play this game. It takes luck (and untoggling Auto-Proceed helps!)
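That check/X cascade can be sketched in miniature. The Python below uses a made-up 2-suspect, 3-card game rather than the real 6x21 notebook, and my own function names mirroring checkNote and XNote; it shows how checking one card X's it for everyone else, and how a card X'ed for every suspect must be in the envelope:

```python
CARDS = ["Knife", "Rope", "Wrench"]
SUSPECTS = ["Scarlet", "Mustard"]
# notes[suspect][card] is "check", "X", or None (unknown), like the 2d array
notes = {s: {c: None for c in CARDS} for s in SUSPECTS}

def x_note(card, suspect):
    """Record that `suspect` is known not to hold `card`."""
    notes[suspect][card] = "X"

def check_note(card, suspect):
    """`suspect` is known to hold `card`; everyone else therefore doesn't."""
    notes[suspect][card] = "check"
    for other in SUSPECTS:
        if other != suspect:
            x_note(card, other)

def solution_cards():
    """Cards no suspect holds must be part of the solution."""
    return [c for c in CARDS
            if all(notes[s][c] == "X" for s in SUSPECTS)]

check_note("Knife", "Scarlet")    # Scarlet showed the Knife
check_note("Wrench", "Mustard")   # Mustard showed the Wrench
x_note("Rope", "Scarlet")         # both denied the Rope...
x_note("Rope", "Mustard")
print(solution_cards())           # ['Rope']
```

In the real game the same cascade also fires when a suspect's full hand has been identified, X'ing every other card for that suspect in one pass.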
in order to win a game if you're playing against a bunch of geniuses. First of all, the geniuses remember everything, never make mistakes, and are very diligent in their note-taking about everything that happens in the game. You can still tweak your suggestion-making and get to the answer pretty quickly because there are ways to beat it, but... most people just like to play against us regular folk. So that means the AI needs to be a bit stupid sometimes. Luckily, that's not much of a bother. There are five levels of AI, ranging from Stupid to Genius. And to dummificate a geniustic computer, all it takes is a nasty virus, or a few lines of code that involve the use of something called a Random Number Generator. Here's an example where the AI decides whether it wants to count the number of identified cards for this suspect, and then X'ify the remaining cards if it has identified all of the cards which that suspect holds:

    // ai may notice that all of this suspect's cards have been identified
    if ((eAILevel == enuAILevel.stupid
            && (int)(MainForm.rnd.NextDouble() * 1000) % 400 == 0)
        || (eAILevel == enuAILevel.dull
            && (int)(MainForm.rnd.NextDouble() * 1000) % 50 == 0)
        || (eAILevel == enuAILevel.average
            && (int)(MainForm.rnd.NextDouble() * 1000) % 10 == 0)
        || (eAILevel == enuAILevel.bright
            && (int)(MainForm.rnd.NextDouble() * 1000) % 2 == 0)
        || (eAILevel == enuAILevel.genius))
    {
        int intCountNumCardsKnown = 0;
        for (enuCards eCardCounter = (enuCards)0;
             eCardCounter < enuCards._numCards; eCardCounter++)
        {
            if (udrAILogic.eNotes[(int)eSuspect, (int)eCardCounter] == enuNote.check)
                intCountNumCardsKnown++;
        }
        if (intCountNumCardsKnown == MainForm.cSuspects[(int)eSuspect].cCards.Length)
        {
            // all of this suspect's cards have been identified
            // -> X the ones that are unknown for this suspect
            for (enuCards eCardCounter = (enuCards)0;
                 eCardCounter < enuCards._numCards; eCardCounter++)
            {
                if (udrAILogic.eNotes[(int)eSuspect, (int)eCardCounter] != enuNote.check)
                    XNote(eCardCounter, eSuspect);
            }
        }
    }

You might have noticed the pretty effects which make this game kind of cool. Some of them you've seen before (if you've read or downloaded some of my previous articles), like Jessica Rabbit in the title animation. She's animated using classSprite, which I discussed in a previous article: Sprite Editor for .NET. For anyone who has made use of this class, you will be glad to know that I've made a few upgrades, the most important of which is the accelerated load time. The major drawback of the class was that it took so long to load that it became tedious to use. But this new upgrade is backward compatible, and automatically upgrades the sprite files from .spr to .sp2, reducing the load time by a factor of 10 (at least!). Just download this Clue code, copy classSprite from it, and stick it wherever.

As for the other pretty things, there isn't anything quite as pretty as Jessica Simpson. Er... uhh... forgive the freudian (I'm an Ashlee fan myself!), there isn't anything quite as pretty as Jessica Rabbit, but the cards themselves are kinda cool. Like I mentioned earlier, all the images in this project were downloaded off the internet and modified. And when I decided I wanted to animate the process of sorting the cards, shuffling the three different types, and picking the mystery cards before dealing the remaining ones, I had to generate 'sprites' for each card. This seemed unnecessary, so I created a new type called classRotatedImage. This class takes an input image, rotates it three times, and stores four base images in an .rbmp file on the hard-drive. Doing this takes time, which is why the first time you load the game it can take up to two minutes before you see the intro-animation; but once these cards are generated, the load time is much, much faster.
These four base-images are rotated evenly between zero and pi/2 radians (or thereabouts) so that .NET's native Bitmap FlipRotate function can then be used to generate the remaining 12 of the 16 total rotated images of that particular instance of the class.

The single most difficult part of this project was getting the cards to flip during the deal-cards animation. This is the part where the cards have been sorted into categories and then spread out face-up. The idea was to create an effect that would give the illusion of these two-dimensional images flipping in a cascade of cards, much like a magician might by spreading the cards out onto the table, flipping the first card, and watching the rest of them follow. Creating the flip-one-card illusion is just a matter of slimming the image to an appropriate size during the animation, then widening it until it is normal size again on the reverse side. But getting a cascade of cards to do that at the same time was a bit more difficult, and it was done in the main form's:

    AnimateDealCards_FlipCardsUtoD_DirR(enuCardType eTypeCard)

function, which is called by:

    animateDealCards()

when the tmrDelay tick event handler trips, eMode is in dealCards mode, and the variable eDealMode is set to flipWeaponcards, flipSuspectcards, or flipRoomcards.

The idea was to simulate something like what a magician might do when he spreads a pack of cards out onto the table, then flips the first one and watches the rest of them flip after it. To do this, the program spreads the cards out and retains their locations on the table. Then it moves an invisible finger across the breadth of the entire spread. Each card has one edge which remains in contact with the table; the point at which this edge makes contact with the table is called that card's pivot-point.
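The index arithmetic behind the four-base-image trick is worth spelling out: with 4 pre-rendered images spread across the first quarter-turn, any of the 16 orientations decomposes into a base image plus some number of lossless 90-degree rotations (the part Bitmap FlipRotate handles). This Python sketch of that decomposition is my own illustration, not code from classRotatedImage:

```python
import math

N_BASE = 4      # pre-rendered base images, spaced over the first quarter turn
N_TOTAL = 16    # total orientations = N_BASE * 4 quadrants

def orientation(index):
    """Map an orientation index 0..15 to (base image to load,
    number of 90-degree turns to apply, resulting angle in radians)."""
    base = index % N_BASE             # which pre-rendered base image to use
    quarter_turns = index // N_BASE   # how many lossless 90-degree rotations
    angle = index * (math.pi / 2) / N_BASE
    return base, quarter_turns, angle

base, turns, angle = orientation(9)
# orientation 9 decomposes into base image 1 plus two 90-degree turns
```

So only a quarter of the images ever hit the disk; the rest are cheap right-angle transforms done at load time.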
The left-most card that still has its pivot-point to the right of the invisible finger is designated the pivot-card at any instant during the card-flip animation. This is the furthest card to the left that is still face-up. It is drawn first, then the next one to its right, then the one after, and so on until the last card to the right has been drawn; the animation then proceeds to draw the cards to the left of the pivot-point one after another, starting from the card nearest the pivot-card until it finishes with the card furthest to the left of the spread.

But before drawing any of these cards, their relative image widths need to be calculated. To do this, the algorithm loops while looking at the cards from a side view. To calculate the width of card B, it:

Since the contact point of one card is used in the next card's calculations as the 'floating edge', this algorithm can easily cycle through all the cards one after the other. The sizes of each of the face-down cards (those to the left of the pivot-card) are calculated in much the same way, but in reverse, with negated x values to simplify things. However, the card immediately to the left of the pivot-card can either have its edge pressed against the side of the pivot-card, or its side pressed against the edge of the pivot-card, and after a week of flipping out about this (in a very cool, calm, and dignified manner), I decided that approximating this by setting both of their floating edges' X values to the invisible finger's position was going to be just good enough (this isn't the guidance system of a heat-seeking missile, after all). And that was the end of that.

The intro-animation is pretty cool. Yep, pretty cool. To do this, I used a copy of the board and notebook magnified by a factor of three, and kept it in memory.
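The per-card width calculation itself didn't survive the page extraction, but the underlying one-card illusion is plain projection: a card flipped by angle theta is drawn at |cos theta| of its full width, showing its front before the quarter-turn and its back after. The Python below is my own rough approximation of that idea, not the article's actual geometry:

```python
import math

def drawn_width(full_width, theta):
    """Apparent width of a flat card mid-flip, viewed from above.
    theta is the flip angle: 0 = face-up flat, pi/2 = on edge, pi = face-down."""
    return abs(math.cos(theta)) * full_width

def face_showing(theta):
    """Which side of the card the viewer sees at flip angle theta."""
    return "front" if theta < math.pi / 2 else "back"

# a card shrinks to nothing on edge, then widens again, mirror-reversed
widths = [drawn_width(100, t) for t in (0.0, math.pi / 3, math.pi / 2, math.pi)]
```

Driving theta per-card from the invisible finger's position relative to each pivot-point is what turns this single-card projection into the cascade effect.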
This image is much too large to run the animation smoothly, so it is cut up into an array of much smaller images, and the whole bit-of-the-map thing is stored in this array, using about twice as much memory as before; but each smaller image can be accessed and used much more easily by the algorithm that draws the magnifying glass and the magnified image behind it onto the screen. The width of these parcels of the whole is sized by calculating the smallest value which divides evenly into the width of the entire magnified image and is also bigger than twice the width of the magnifying glass's lens. The height is done in a similar fashion.

The magnifying glass lens image consists of

This magnifying glass is moved around the magnified image. When its location is determined, after it has been moved, its surface area is described (the actual position of the lens is stored in memory as the position of the center of the lens, so the area which the lens covers needs to be calculated). This surface area is then copied from one of the parcel images onto a new bitmap the size of the lens. The area inside the lens's rim is made transparent, and then the lens is superimposed onto the new bitmap of the magnified board we've just created, so as to make the magnified image appear as if it were "behind the lens". Then the red area around the rim of the lens is made transparent, and this new image, rim and magnified game-board behind the lens, is drawn onto a regular-sized copy of the board and notebook before going to the screen.

And that would be that, if there weren't any protesting rats and spider-webs. And what's this Waldo dude doing here anyway? But then, things can always be improved, and so, that is the reason why we need to throw in a bit of animation.
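The parcel-sizing rule above (the smallest even divisor of the magnified image's width that exceeds twice the lens width) is easy to sketch in Python; the image and lens dimensions in the example are made up, not taken from the project:

```python
def parcel_size(image_size, lens_size):
    """Smallest value that divides image_size evenly and is bigger than
    twice lens_size: the tiling rule described for the magnified board."""
    for candidate in range(2 * lens_size + 1, image_size + 1):
        if image_size % candidate == 0:
            return candidate
    return image_size   # fall back to one parcel spanning the whole image

# e.g. a 2016-pixel-wide magnified board with a 60-pixel lens:
w = parcel_size(2016, 60)   # smallest divisor of 2016 greater than 120
```

Making each parcel wider than twice the lens guarantees the lens never overlaps more than a 2x2 block of parcels, so the copy behind the lens only ever touches a handful of small bitmaps instead of the whole magnified board.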
The animated characters are all done using the new Sprite class, and they do their thing whether or not anyone is watching them (their respective intConfiguration and intConfigurationStep values are updated at each iteration of the intro animation). Their images are generated and put to the screen whenever the magnifying glass comes close enough for them to be needed. These sprite images are placed onto the copy of the mega-board images before the magnifying glass' lens is drawn and then placed onto a bitmap of the regular sized board. When all that is done, the magnifying glass' handle is drawn onto the screen, and the whole animation just loops until the user begins a new game. When the user (or the computer AI, geniusified, dullified, or stupidated) finally makes an accusation, we roll ourselves into the I accuse mode of things. This is a bit of neat code in that it creates a reasonably cool effect. There are two images of everything: one in color and another in black/white. The moving parts, aside from the floodlight, are in the foreground, and the background is nothing but background. The black/white background, however, does have a colored "I accuse" accusing suspect and speech bubble, but aside from that, everything else is in black and white. Using a blue-screen (I'm off the green screen; I've got angry picketers from Green-Peace outside my door, and I'm afraid of what they're going to do), I move my 'floodlight' around by drawing a transparent circle on my blue screen, and then superimpose this onto the colored image (with the animated colored foreground drawn onto the colored background), and cut out a colored circular segment of that image; then, making the blue part of the resultant image transparent, I draw this onto the black/white image, and repeat the process by moving the floodlight around until we're finally ready to flip the cards and see the Who and How! But sadly, we may never know why Mr.
Boddy was murdered so tragically. This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL).

From the comments:

Christ Kennedy wrote: but I get: sin(alpha-beta)/d = sin(pi-alpha)/l

Christ Kennedy wrote: regarding atan2(double x, double y), I've tried it in the past and the result was always from 0 to pi/2, but I'll have a look at it again!
https://www.codeproject.com/Articles/96869/The-Game-of-Clue-C-2010?fid=1580796&df=90&mpp=10&noise=1&prof=True&sort=Position&view=Expanded&spc=None&fr=11
WebStorm 2017.1 Adds Vue.js Support

JetBrains has released version 2017.1 of their popular WebStorm IDE, topping it off with new features in order to keep the tool competitive in the fast-moving JavaScript landscape. For years, JetBrains has kept WebStorm up to date with the latest and greatest in JavaScript. This is no small feat given the velocity of change. This year is no different, with Vue.js support joining in the fun. WebStorm will now recognize .vue files and includes support for code completion inside the templates. It will also perform autocompletion when referencing other view components, as well as automatically adding the import statements. Evan You, creator of Vue.js, told InfoQ that he thought this was "great news": I know there are a lot of Vue users who are using or wanted to use WebStorm as their primary editor but were waiting for better Vue support. The React support now includes the ability to autocomplete and generate component import statements, in addition to automatically generating the import for React itself. The IDE also supports the Jest testing platform, which is popular in the React community. For Angular devs, the new Angular Language Service is available to use. This service is written by the Angular team and provides error checking and type recommendations inside Angular templates. Victor Savkin highlighted this new ability and why it's an improvement over WebStorm's native ability: WebStorm does a good job out of the box ... But it can only go [so] far and gets lost when trying to autocomplete "this.". With the Angular Language Service, Savkin says, "WebStorm no longer has to do any guessing. With the help of @angular/language-service it actually knows what's available in the template." The StandardJS code style has gained a lot of steam lately and it is now natively supported, along with many other code style tweaks. Developers can try out version 2017.1 for 30 days for free.
After the trial, yearly subscriptions start at $59 for indie developers and $129 for businesses.
https://www.infoq.com/news/2017/03/webstorm-2017-vue-support
| Is this good practice?

It sounds good to me, at least.

| It seems that my parser knows something about namespace...

Nothing wrong with that, you just have to keep the different layers of the different specs separate in your mind (and parser :).

| What will happen to XML 1.0 when the namespace specification becomes
| a W3C recommendation?

Good question. I don't really know. A reasonable guess would be that the SGML DTD syntax is ditched in favour of an XML-based syntax that is namespace-aware. Or that both are retained. Of course, this means that XML will have two different schema languages, only one of them SGML-compatible. But, like I say, this is just a guess.

--Lars M.
https://mail.python.org/pipermail/xml-sig/1998-November/000518.html
GitAPI - A json styled API

CAUTION: Note that this is a work in progress, and is not yet perfected. There will be errors, and it would be appreciated if you let us know about them.

Version: 1.0.0

GitAPI: An API made for GitHub stats! Coded in Python3, uploaded to PyPI, and coded by JBYT27

About

GitAPI is an API made with python - styled with json - to make the data easier to use. It is made up of posting json requests, retrieving that data from a function, and transferring that data into an output which you can use. This API is designed to show GitHub stats for certain users, or for viewing GitHub itself, in data form. To learn how to use it, you can read the Usage header below.

Languages used

The language used to program this package was: Python3

Usage

Installation

To install and use the package, you must first:

pip install git_api

This will install the package - git_api (GitAPI) - and then you will be able to use it. To then import it, you must use the following code:

import git_api

OR

import git_api as gitapi

OR

from git_api import *

For more information, go here. Note that all of these methods work.

Usage

To use this package, first import it as shown above. Then create a python file - name it whatever you want, it doesn't matter. Once you're done with that, open the file, and add the following example code:

from git_api import *

Token() # We'll place the personal access token here later on. For now, it'll be empty.
user_info = User("Username here").User() # Insert your username in the argument shown here.
print(user_info)

You've done it! But wait - it doesn't work, it only gives an error! The reason for this is that - NUMBER 1: You need a personal access token, which we'll discuss in a moment. NUMBER 2: You need to have a GitHub username in mind and place it in the assigned space. Let's start with number 1, creating the token.

1: The Token

To first create a token, you must create or use an existing GitHub account.
If you already have a GitHub account, you can move on to the next section. However, if you are creating a new GitHub account, follow the instructions below:

#1: Go to and click
#2: Once you've clicked that, just follow the instructions shown on the page.
#3: Then after that, you can either get used to GitHub and do this later, or do this immediately: go to this document and read it thoroughly, as it holds the information for creating a personal token. Choose the categories you think will best fit your project and finish up with the token.
#4: Note that this token should be kept private and not shared. If you are positive that this token will stay private, then you can just copy and paste the token into a string (inside parentheses), and insert it into the token argument space. However, if you know that this will be shown to the public, create a .env file, and paste the token inside there. Make sure you make it a variable, for example, like this:

token=blahblahblah

Also note that you can only copy the token once, so check that you actually copied down the token. Go back into your python file and copy/paste the following code into the assigned space:

os.environ["token"] # Insert your .env variable here

#5: Then you're pretty much done! Your final code example should look something like this:

2: Finale

from git_api import *
import os

Token(os.environ["token"])
user_info = User("JBYT27").User()
print(user_info)

This will print some of the user's information.

Contributing

Contributing is covered mostly in the Code of Conduct; for more info, visit the Contributing readme.

License

This package is under the MIT License.
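As an aside, here is roughly what a stats package like this does under the hood: a hypothetical sketch, not git_api's actual implementation. The endpoint and Authorization header follow the public GitHub REST API, and the function names are mine.

```python
import json
import urllib.request

GITHUB_API = "https://api.github.com/users/"

def build_user_request(username, token):
    # Compose the authenticated request for a user's public stats.
    return urllib.request.Request(
        GITHUB_API + username,
        headers={"Authorization": "token " + token},
    )

def fetch_user(username, token):
    # Send the request and return the decoded JSON payload.
    with urllib.request.urlopen(build_user_request(username, token)) as resp:
        return json.load(resp)
```

The package presumably wraps a call like this and reshapes the JSON before handing it back to you.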
https://replit.com/talk/share/GitAPI-A-json-styled-API/146129
Flux in Depth. Store and Network Communication. Jul 18, 2015 · 13 minutes read. However, our application can't be completely stateless. We have at least business data, UI state and an application configuration. The store is the place which keeps this data. Alright, so the store is responsible for providing the data to the view. So far so good. How can we make the store deliver the data required by the view? Well, it can trigger an event! Recently observables have been getting quite popular. For good or bad, they are going to be included in the JavaScript standard. If this makes you angry because you have to learn new things and JavaScript is getting fatter, you can find the guy who is behind all of this here: Observables are a good way of building our store. If the view is able to observe the store for changes, it can update itself once the store updates. Let's take a look at the following design pattern (yeah, my friends already noticed I'm talking about design patterns constantly and did an intervention for me, but it didn't help). Chain of Responsibility This is a design pattern which is used almost exclusively in event-driven programming. Let's change the context and talk about the DOM for a while. I bet you know that you can listen for any event on the document; for example, once a user clicks on a button, the event will propagate to the root of the DOM tree (eventually, if nothing stops it) and be caught by your listener:

document.addEventListener('click', e => {
  alert(`You actually clicked on element with #${e.target.getAttribute('id')}, not on document`);
  alert(e.currentTarget === document);
}, false);

<button id="awesome-button">Click me</button>

If you click on the awesome-button, you'll see two alert boxes: "You actually clicked on element with #awesome-button, not directly on document" and "true". If you change the third parameter of addEventListener to true, the event will propagate in the opposite direction (i.e.
from top to bottom, capturing mode). Why did I say all of this? Well, this is the Chain of Responsibility pattern. Or, if you have more enterprise taste, here is the UML class diagram of this design pattern: Why do we need Chain of Responsibility? Well, our store is a tree of objects; once a property in an internal node in the tree changes, the node will propagate this change to the root. The root component in the view will listen for events triggered by the store object. Once it detects a change, it'll set its state. I'm sure it sounds weird and obfuscated, so let's take a look at a diagram which illustrates a simple chat application: From the Store to the View Let's take a look at the following diagram, which illustrates the initial setup of our application: The tree on the left-hand side is the store, which serialized into JSON looks the following way:

{
  "users": [
    { "name": "foo", "id": 1 },
    { "name": "bar", "id": 2 }
  ],
  "messages": [
    { "text": "Hey foo", "by": 2, "timestamp": 1437147880686 }
  ]
}

The tree on the right-hand side is the component tree. We already described component trees in the previous part. Here is a snippet which shows the relation between the Chat store and the ChatBox component:

import React from 'react';
import Chat from '../store/Chat';

class ChatBox extends React.Component {
  constructor(props) {
    super(props);
    Chat.on('change', c => {
      this.setState(c);
    });
  }
  render() {
    return (
      <div>
        <MessagesList messages={this.state.messages}/>
        <MessageInput/>
      </div>
    );
  }
}

Basically, the root component (ChatBox) is subscribed to the change event of the Chat. When the Chat emits a change event, the ChatBox sets its state to be the current store and passes the store's content down the component tree. That's it. What happens if something changes our store? On the diagram above, something changed the first message in the Chat store (let's say the text property was changed).
Once a property in a message changes, the message emits an event and propagates it to the parent (step 1), which means that the event reaches the messages collection. The messages collection triggers a change event and propagates the event to the chat (the root object, step 2). The chat object emits a change event, which is caught by the ChatBox component, which sets its state to the content of the store. That's it… In the next section, we're going to take a look at how the view can modify the store using Actions. From the View to the Store Now let's suppose the user enters a message and sends it. Inside a send handler defined in MessageInput, we invoke the action addMessage(text, user) on the MessageActions object (step 1) (peek at the flux overview diagram above). The MessageActions explicitly invokes the Dispatcher (step 2), which throws an event. This event is handled by the Messages component (step 3), which adds the message to the list of messages and triggers a change event. Now we're back in the previous case - "Store to View". All of this is better illustrated in the following diagram: Model vs UI state Most likely you're building an application which works on business data over a given domain. For example, in our case we had: - Messages - Users However, in most cases, this is not enough. Your application also has a UI state. For example, whether a given dialog is opened or not, what the dialog's position is, etc. The data that defines how your components will be rendered depends on both your UI state and the model. This is the reason I create a combined store, which presents both.
For example, if we add a few dialogs to our chat application, the JSON representation of the store may look like: { "users": [ { "name": "foo", "id": 1 }, { "name": "bar", "id": 2 } ], "messages": [ { "text": "Hey foo", "by": 2, "timestamp": 1437147880686} ], "dialogsOpenStatus": { "nicknameInput": true, "editMessage": false } } I always prefer to have separation between the UI state and the model because the communication with external services usually requires only handling changes in the model, so events emitted by the UI store does not concern us. Quick FAQ: Alright, I got it. So if I have a mouse move event, which changes the store by updating the mouse position, we should go through the entire described data flow? This doesn’t seem like quite a good idea. If you have a big store and huge component tree, re-rendering it on mouse move will impact the performance of your application dramatically! What will be the biggest slowdown? Well, you’ll have to serialize the store to a POJO (Plain Old JavaScript Object), later you need to pass it to the root component, which is responsible for passing it down to its children and so on. And all of this because of two changed numbers? It is completely unnecessary. In such cases I’d recommend adding event listeners in the specific component, which depends on the mouse position and creating a separate store for it. For example, if you’re using react-dnd in your application, the drag & drop component has its own state, which doesn’t have to be merged with the rest of the store. It can live independently. Let me show you what I mean on the following diagram: In this example, when the store containing the dialog position updates it does not require update of the entire component tree. How should I make my store trigger events? The same way you did that in the dispatcher - use EventEmitter. 
Since we want to implement Chain of Responsibility, you will be required to keep a reference to the object's parent in order to propagate the event. Remote Services This is a hot topic. Most single-page applications require communication with services through a transport protocol in order to fetch and store data. However, in the overview of flux there's no such thing as TransportChannel, RemoteService or whatever. If a lot of different protocols are involved (for example, I'm working on an application which communicates with other applications and services through HTTP, WebSocket and the WebRTC data channel), it can get a bit hairy if you don't create the proper abstractions. In this section I'll describe the basics of the network communication, and later we're going to discuss more interesting details. Alright, in the diagram which shows the overview of flux there is a "hanging" action. There's no incoming arrow to the leftmost action, but there's an outgoing one: From the Network to the UI We can use this "hanging" action in order to integrate the communication with the external services. Let's take a look at the following picture, in order to visualize basic communication with an external service: In it we can trace how a network event is processed. The cloud is an external service/application. Initially we get a new message; it is processed by the Service object (it is black on the diagram because it is like a black box for us right now; it may get quite complex depending on your application); later the semantics behind the received message is found and, based on it, the Service invokes a StoreAction. Now the flow goes exactly the way we described it earlier (From the Store to the View). We're good for now. But what happens if the store changes? Nothing so far. From the UI to the Network In order to send network events based on changes of the store, we can observe it and perform a specific action based on the change.
Since it extends EventEmitter, we can simply add change handlers to the store inside the Service, and this way we can easily handle all changes and process them: But why should the Service module listen for store changes? Can't it listen directly on the dispatcher? Most likely, in the store we'll have some logic for handling the action passed by the dispatcher (for example, formatting the raw data passed by the view). If the Service module listens for changes at the dispatcher, we will duplicate this logic. Everything looks good so far. But the Service module is still a huge black box. In the next section I'll suggest a sample design of this module. If you don't agree with something, or have any questions or recommendations, let me know. The Service Module I was wondering whether to use UML for showing the components building the Service module; however, I chose to make a colorful diagram like the one below: There are a lot of boxes here. The yellow ones are part of the flux architecture. The blue ones are responsible for sending updates from the application to the remote service, the red ones are responsible for handling network events, and the green ones are the intersection of the last two categories. One component could be implemented with a few classes; we're not restricting ourselves to these components. We can think of them more like categories rather than specific classes. Here we'll demonstrate a sample decomposition of the service module. Let's describe the most interesting components one by one. Channel Abstract channel. It could be extended by HTTPChannel, WebSocketChannel, WebRTCDataChannel, etc. Package The Package component represents the packages used by our protocol. There could be a hierarchy of package types. For example, if we build our protocol on top of JSON-RPC, we may want to extend the base Package class. Command Implements the Command design pattern. CommandDispatcher Responsible for dispatching commands.
There’s one more link, which is not represented on the diagram - the CommandDispatcher owns reference to the Channel component. It checks whether the channel is open before sending the command, if it isn’t the CommandDispatcher retries after a given timeout. This component is also responsible for buffering commands, invoking their undo actions in case the command’s execution fails. The CommandDispatcher may implement different dispatching strategies: - Request/response - Sliding window technique - Build a graph of the dependencies between the commands, process a topological sort and invoke the commands in the appropriate order StoreObserver The StoreObserver is responsible for handling change events of the store. Once it detects change in the store it creates a new command using the CommandBuilder and invokes it through the CommandDispatcher. NetworkObserver Waits for new messages emitted by the data Channel. This component is responsible for parsing the messages and processing them. Once the message is parsed, based on its content the NetworkObserver invokes specific Action. Third-party Services If you’re using a third-party service (Twilio for example) you can process in a similar fashion. You can create a wrapper of the service (in the case of Twilio, the Twilio global) and bring it in your flux data-flow. Quick FAQ: Isn’t NetworkObserver a “God class”? Yep, it is. You can decompose it into a few smaller classes, same for CommandDispatcher where you can use the strategy pattern for dispatching the commands in the appropriate order. What are these ProtocolDecorators? Since we can create our protocol based on another, already existing protocol (JSON-RPC, BERT-RPC, etc.), we can process the incoming messages by using a chain of decorators. For example, we receive a basic string, next the JSONRPCDecorator can parse it to a valid JSON-RPC message and pass it to the next decorator ( YourCustomProtocolDecorator), etc. 
Can’t I get cascading updates when update the model? Can’t I reach a condition in which the view updates the store, it emits a change event, which is being handled by the Service module, which emits an event…etc.? This can happen. There are two simple solutions: - emit change event only when the value of given property is actually being changed - emit ui-change event when update the model from the view and emit server-change event when change it from the server. The root component can listen for /change/ events (yeah, this is a regex) and the server can listen for ui-change. I throw an event but a few stores are handling it?! Most likely you have collision in the action types. I’d suggest to use the following schema for naming them: storename:action-name or something similar, which guarantees uniqness. Conclusion The first part of this post shown how we can wire the store with the view. The second part was about the basic idea of the network communication with external services. The last section described a sample implementation of the black box responsible for handling external messages, store changes, serialization of the store and sending it through the network. Although the first two parts were relatively canonical and predefined by flux, the last chapter was completely custom implementation. With it I wanted to imply that the micro-architecture we use (MVC, MVP, MVVM, flux, whatever) does not provide us the entire project design. It only puts some rules on how we should organize our application, defines the main building blocks and how we can wire them together, however, if we want to build a real-life application, most likely we’ll need to put some additional effort - the architecture or the framework won’t be able to solve all our concerns.
https://blog.mgechev.com/2015/07/18/flux-in-depth-store-network-communication-services/
If javascript is up to amazing animations and visualizations with d3, maybe it's up to non-linear curve fitting too, right? Something like this, perhaps: Here's how I did it:

- The hard work is done using Cobyla ("Constrained Optimization BY Linear Approximation"), which Anders Gustafsson ported to Java and Reinhard Oldenburg ported to Javascript, as per this stackoverflow post. Cobyla minimises a non-linear objective function subject to constraints.
- The demo and its components (including cobyla) use the module pattern, which has the advantage of keeping the global namespace uncluttered.
- To adapt Cobyla to this curve fitting problem, I wrote a short wrapper which is added onto the cobyla module as cobyla.nlFit(data, fitFn, start, min, max, constraints, solverParams). This function minimises the sum of squared differences, (y1-y2)^2, between the data points (x, y1) and the fitted points (x, y2).
- The Weibull cumulative distribution function (CDF), inverse CDF and mean are defined in the "distribution" module. Thus distribution.weibull([2,1,5]).inverseCdf(0.5) gives the median (50th percentile) of a Weibull distribution with shape parameter 2, scale parameter 1 and location parameter 5.
- The chart is built with d3. I am building an open-source library of a few useful chart types, d3elements, which are added onto the d3 module as d3.elts. This one is called d3.elts.xyChart.
- So the user interface doesn't get jammed, I use a javascript web worker to calculate the curve fit. I was surprised how easy it was to set this up.
- I apologise in advance that this sample code is quite complicated. If you see ways to make it simpler, please let me know.
- Finally, this may be obvious, but I like the rigour that a tool like jshint imposes. Hence the odd comment at the top of fitdemo.js: /* global fitdemo: true, jQuery, d3, _, distribution, cobyla */

Check out the source code on bitbucket here. You can see it being used for uncertain data estimation here.
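The objective that nlFit minimises can be sketched as follows. This is an assumed shape for illustration; the real wrapper lives in the linked bitbucket repo.

```javascript
// Sum of squared differences between data points (x, y1) and
// fitted points (x, fitFn(x, params)).
function sumSquaredResiduals(data, fitFn, params) {
  return data.reduce(function (sum, point) {
    var diff = point[1] - fitFn(point[0], params);
    return sum + diff * diff;
  }, 0);
}

// A perfect linear fit gives zero residual:
var line = function (x, p) { return p[0] * x + p[1]; };
sumSquaredResiduals([[0, 1], [1, 3], [2, 5]], line, [2, 1]); // -> 0
```

Cobyla then searches the parameter space for the params vector that drives this sum as low as the constraints allow.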
Please let me know what you think!
http://racingtadpole.com/blog/tag/estimation/
YASGUI, dbpedia and SPARQL

dbpedia is basically a database version of wikipedia. Or, another way to put it, a machine readable version of wikipedia that can be queried. The data is stored in RDF and queried using SPARQL. I have to say, RDF and SPARQL were easily the biggest obstacles. SPARQL is a very generic language, somewhat like SQL but not quite, and I found it extremely difficult to use. But in the end it is used for querying columnar data of sorts, and I was able to pull some queries together. An example of a wikipedia page which has been scraped, stored in a database as RDF, queried and made readable again in table format is Rock and Roll. My first attempt to pull a genealogy was based on using foaf:name to make it legible. I used the YASGUI service:

PREFIX db:
PREFIX dbo:
PREFIX dbr:
PREFIX dbp:
PREFIX foaf:
PREFIX rdfs:

SELECT ?soClean ?genreClean
WHERE {
  ?genre_type a dbo:Genre .
  ?genre_type dbo:stylisticOrigin ?stylisticOrigin .
  ?genre_type foaf:name ?genreClean .
  ?stylisticOrigin foaf:name ?soClean .
}

This proved to be a bad idea because... Wikipedia's data quality is very poor. The second time I pulled only the raw names, along with some "cultural origin" data which looked mostly like decade of origin, so I thought that might be handy:

PREFIX db:
PREFIX dbo:
PREFIX dbr:
PREFIX dbp:
PREFIX foaf:
PREFIX rdfs:

SELECT ?stylisticOrigin ?genre_type ?co
WHERE {
  ?genre_type a dbo:Genre .
  ?genre_type dbo:stylisticOrigin ?stylisticOrigin .
  ?genre_type dbp:culturalOrigins ?co .
}

This provided several thousand rows of data. Ok cool! I downloaded that data as standard CSV; it is available here.

GraphViz

Next step is to graph the data. I wrote a python script which reads in the csv and outputs a file in the graphviz language.
import csv
from graphviz import Digraph
import os
import sys

# hack needed to read the YASGUI/dbpedia exports correctly
reload(sys)
sys.setdefaultencoding('utf8')

csvfilename = "raw_names.csv"
genrefilename = "genres.gv"
chartfilename = "genres"
charttype = "png"

print("hello")
print("building graphviz tree")
g = Digraph('genres', format=charttype, filename=chartfilename,
            edge_attr={'penwidth': '1', 'weight': '0.1', 'fontsize': '12',
                       'color': 'blue', 'fontcolor': 'blue'},
            graph_attr={'fixedsize': 'false', 'bgcolor': 'white'},
            node_attr={'fontsize': '64', 'shape': 'plaintext',
                       'color': 'none', 'fontcolor': 'black'})
g.attr(layout="twopi")
g.attr(overlap="true")
g.attr(nodesep='12')
g.attr(ranksep='12')
g.attr(size='3000,3000')

print("populating from %s" % csvfilename)
with open(csvfilename, 'rb') as csvfile:
    csvline = csv.DictReader(csvfile, delimiter=",", quotechar="|")
    for row in csvline:
        # print(row)
        gtext = row[' "genre_type" '].rsplit('/', 1)[1]
        stext = row[' "stylisticOrigin" '].rsplit('/', 1)[1]
        g.node(gtext, gtext)
        g.node(stext, stext)
        g.edge(stext, gtext, constraint='false')

print("writing gv file %s" % genrefilename)
f = open(genrefilename, "w")
f.write(g.source)
f.close()

print("writing %s.%s" % (chartfilename, charttype))
g.render()

That is available, along with raw_names.csv and genres.gv. The script also performs the render. It's big. Here is a resized version: The original image is here. My ability to clean up the image and keep it legible (without dropping many of the genres) was limited by the massive image size and what seemed like limitations of the software. I am running an older version of Debian, so perhaps I will upgrade it in the future and try again. Also, the nature of music influences will always be messy, but Wikipedia isn't nearly as clean as I expected and the dataset just doesn't lend itself to a clean output.
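Since the csv is just a list of (origin, genre) edges, small analyses can be run on it directly. For example, here is a sketch (mine, not part of the linked script) that lists the "primal" genres, i.e. origins that are never themselves derived from anything:

```python
def primal_sources(edges):
    """Given (origin, genre) pairs, return the sorted origins that
    never appear as a derived genre, i.e. nodes with no parents."""
    children = {child for _, child in edges}
    parents = {parent for parent, _ in edges}
    return sorted(parents - children)

# e.g. with blues -> rock, country -> rock and rock -> punk,
# the primal sources are blues and country
```

The same idea, run against the full wikipedia edge list, is what exposes how conflated the cultural and musical origins are.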
NetworkX and Google Sheets

I also wanted to perform some analysis on the structure of the network, and graphviz is really oriented towards graphical output only, so for this I turned to networkx and google sheets. I felt like I had some insight into the major influences from the diagram and could have done more with networkx, but what I really wanted was to know the original / primal sources of music. So I built a tree using this python code and found all nodes with no parents. Unfortunately, cultural origins and musical origins are so completely conflated in wikipedia that the output is useless. Almost all top level nodes are not musically related at all. I could probably have used something more respectable than google sheets, but for basic math and plotting Sheets is so profoundly easy to use these days, particularly with pivot tables, which in google sheets are basically a GUI version of SQL on row data. I used google sheets gratuitously to clean up data later on, but I also used it here for a single query of top influencers. It really only looks at each node individually. What I could not determine, at least not without a significant amount of python code, was exactly how many children each node had. To me that was the most valuable indicator. It took about 30m to clean up the data and (literally) about 2-3m to produce this plot: Data looks about like I'd expect it to.

Gephi

The next attempt was to use a Windows GUI tool to plot the data. Everything up to this point was on a linux console or a web browser (for the queries and research). The same csv file was read in (with a slight modification: Gephi requires a specific Source and Target in the header row 1 of the file, and it doesn't allow custom names to be assigned). Actually, I plotted many different files and used many different methods to categorize the nodes and edges. Unfortunately, gephi does not appear to have a way to organize the data by year (i.e. decade).
I would like to have put the oldest node clusters towards the middle and the newest node clusters towards the edge. It would have been fascinating to see whether the age of the nodes/clusters was at all aligned with the connectivity between nodes. Here is a nice sample of that effort:

And a gephi file is here for a couple more interesting examples.

DBpedia-Live and Wikipedia Data Service

Those attempts having met with good but limited success, I was eager to find more detail. Being limited by my own skills, especially with SPARQL, my study quickly came to an end. There are two more resources which might be fruitful in the future. The first is DBpedia-Live. The dataset I used above is from 2016, and dbpedia appears to no longer update it. DBpedia-Live (and its web query page) appears to be its replacement. It is updated live and tracks wikipedia as it's being updated, nearly in real time. However, the data I was able to pull from that resource was much smaller and much dirtier. I don't know if this reflects wikipedia or dbpedia, but it was a disappointing regression.

And at the very last minute I discovered that wikipedia has its own SPARQL interface! You can see many examples here. Unfortunately, this is the code example for pulling music genres from the database. Seriously, what? How is that going to be useful?

#defaultView:Graph
SELECT ?item ?itemLabel ?_image ?_subclass_of ?_subclass_ofLabel WHERE {
  ?item wdt:P31 wd:Q188451;
        wdt:P279 ?_subclass_of.
  SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en". }
  OPTIONAL { ?item wdt:P18 ?_image. }
}

It does, however, automatically produce a network graph (with pictures!), which is convenient.

Conclusion

My success was limited by a few things. My knowledge of network data analysis and tools was quite limited when I started this investigation. It has improved significantly, and I am now aware of some of the most common tools and some of the most common analyses that are used to gain insight into networks.
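Queries like the one above can also be submitted programmatically rather than through the web page. Here is a minimal standard-library sketch of building such a request against the Wikidata endpoint; the endpoint URL, the simplified query, and the User-Agent string are my assumptions, and the actual network call is left commented out:

```python
import json
import urllib.parse
import urllib.request

ENDPOINT = "https://query.wikidata.org/sparql"  # assumed public endpoint

query = """
SELECT ?item ?itemLabel WHERE {
  ?item wdt:P31 wd:Q188451.
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 5
"""

def build_request(endpoint, query):
    # SPARQL endpoints generally accept GET with a url-encoded 'query'
    # parameter and return JSON when asked for it via the Accept header.
    url = endpoint + "?" + urllib.parse.urlencode({"query": query})
    return urllib.request.Request(
        url,
        headers={"Accept": "application/sparql-results+json",
                 "User-Agent": "genre-graph-sketch/0.1"})

req = build_request(ENDPOINT, query)
# with urllib.request.urlopen(req) as resp:   # network call, not run here
#     data = json.load(resp)
#     for b in data["results"]["bindings"]:
#         print(b["itemLabel"]["value"])
```

The JSON results come back as a `results.bindings` list, one dict per row, which is much easier to post-process than the CSV exports used above.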
I was limited at times by the abilities of the software or my hardware, but I find it hard to complain. Network analysis was nearly impossible a decade ago, and now I have tools for producing graphics and running queries for numerical analysis. I also have well-put-together python libraries for scraping the web and wikipedia. It's pretty remarkable, even if they could use improvement.

SPARQL is horrible. It is extremely generic, which is great for flexibility, but it requires learning each system's particulars. Learning each database's tables or properties is significant enough, but there is also enough variation in the query language itself, a language I find very confusing, that I don't even want to use it.

In the future it would be interesting to launch a side project to clean up wikipedia's genre genealogy. Perhaps put together a clean dataset and a graph, publish it to the community (which one? where?), and encourage the music folks to get involved. Perhaps set some rules about what is a genre (e.g. cultural vs. musical origins, proper dating of genres; instruments are not genres, people are not genres; the failure to properly integrate classical music, the failure to properly integrate "world" or traditional genres; a geographical scene in a specific city does not make a genre, etc.).
https://www.teramari.us/node/173
> 101573: > > This has now been updated to no longer break the minidom regression > test, which should mean that minidom will survive the application > of this patch. [I was going to comment on attributes not being passed to the handler, but that is apparently solved in 101632] Also, on the pulldom part: Did you have a look at the pulldom in current PyXML? I believe it does support namespaces better, which is now possible since the namespace API of SAX2 has been determined. > 101630: > > Added back the InputSource class, which is referred to many places > in doco strings and also needed by 101631. Added test cases to the > SAX test code to exercise it. I think there has been some debate on the input source class, but so be it. It just seems that you have broken the ability to pass file objects to the parser's parse() methods, right? > 101631: > > Added back EntityResolver and DTDHandler (which were already > referred to in the code in many places) and added support for these > to expatreader.py Test cases added to the SAX test code. While I can see the others applied, this is not. Why? Regards, Martin
https://mail.python.org/pipermail/xml-sig/2000-September/003413.html
The CF object allows SSAS to perform queries and HTTP calls through the ColdFusion server. ColdFusion supports the <cfhttp> tag, which allows ColdFusion applications to post to and retrieve content from remote web servers, or from your own web server. The CF.http( ) method mimics the functionality of this tag, as well as its child tag <cfhttpparam>. CF.http( ) has many possible uses:

- Post searches and retrieve search results from different search engines
- Access web services, such as stock services
- Load XML files from remote locations
- Create downloadable links that don't reveal the file location to the end user
- Dynamically create, and save to the server, HTML documents that can later be browsed as static pages

There are many more uses as well. CF.http( ) can be called like this:

var myVar = CF.http(method, url, username, password, resolveurl, params, path, file);

The CF.http( ) method accepts up to eight arguments, as listed in Table 6-1.

The params array deserves a little further explanation. This array should contain one or more objects with the following properties:

- name: The name of the field that you are posting
- type: One of the following five types: URL-encoded data; a value to be passed as a field in a form; cookie data; a CGI script to execute; or a file to be uploaded
- value: Any value that conforms to the limitations of the type of field you are passing (you shouldn't pass a 10 KB cookie field, for example)

You should build the array of parameter objects before sending it to the CF.http( ) method, as shown in the following examples. Table 6-2 shows HTML tags resolved by passing "yes" in the resolveurl argument to CF.http( ). You can call CF.http( ) using the standard technique in which the arguments must be specified in the expected order.
Here, only the required arguments are passed:

var myObj = CF.http("get", "");

Named arguments can be passed to CF.http( ), shown here using an object literal, in which case the position of the arguments is irrelevant:

var myObj = CF.http({method:"get", url:"", resolveurl:"yes"});

Here is an example using the "post" method and passing an array of parameters:

// Define the parameters to pass
var myParams = new Array( );
myParams.push({name:"username", type:"formfield", value:"tom"});
myParams.push({name:"password", type:"formfield", value:"mypassword"});

// Pass the myParams array along with the other parameters
var myObj = CF.http({method:"post", url:"", params:myParams, path:"c:\downloads", file:"myfile.xml"});

The CF.http( ) method returns an object that contains seven built-in properties. You can access the properties of this object on the client side as you would any other ActionScript object:

- The character set used in the document that is returned
- The contents of the requested file
- The response header
- The MIME type of the file that is returned, such as "text/xml"
- The response header from the server, in the form of a single header or an array of headers
- The HTTP error code and error string from the remote call
- The value "true" if the file content is textual; otherwise, "false"

The filecontent property is the most useful, allowing you to access the contents of the file you requested with the CF.http( ) method. The next section shows an example of a possible use of the CF.http( ) functionality.

One of the limitations of Flash is that it can't access content outside of the Flash movie's domain. For example, a Flash movie hosted on one domain can't load content from another. One way around this is to use a proxy, a middleman that allows communication between two different servers. The proxy can be written with a few simple lines of Server-Side ActionScript code in a remote function. The code in Example 6-1 can be saved as Proxy.asr in the webroot\com\oreilly\frdg directory.
function proxy (location) {
  // Request the data
  var theFile = CF.http(location);
  // Return the filecontent property of the object returned by CF.http( )
  return theFile.get("filecontent");
}

In this code, the CF.http( ) method grabs the file content from the specified URL (location). The proxy( ) method simply passes back the contents of the requested file to the Flash movie, which can do whatever it wants with the data.

Example 6-2 shows the client-side ActionScript necessary to display a remote XML file through the Proxy service set up in Example 6-1. In this case, the XML document is an RSS feed for my weblog. It will load through the proxy to a movie served from any domain. The text field is created dynamically, so no interface is needed. To get the dynamic scrollbar to work with the movie, you'll have to drag an instance of the ScrollBar component from the Components panel to the Stage and then delete it. This populates the Library with the symbols needed for the component.

#include "NetServices.as"

// You must set the myURL and servicePath variables to
// your own Flash Remoting path and service path
var myURL = "";
var servicePath = "com.oreilly.frdg.Proxy";

// Create a text field to show the results
createTextField("myTextfield", 1, 10, 10, 400, 200);
myTextfield.multiline = true;
myTextfield.wordWrap = true;
myTextfield.html = true;
myTextfield.border = 1;

// Add the ScrollBar component to the dynamic text field.
// This assumes you've added the FScrollBarSymbol symbol
// to the library by dragging a ScrollBar instance to the
// Stage and deleting it.
init = {_targetInstanceName:"myTextfield", horizontal:false};
_root.attachMovie("FScrollBarSymbol", "myScrollbar", 2, init);
myScrollbar._x = myTextfield._width + 10; // put it next to textfield
myScrollbar._y = myTextfield._y;          // put it next to textfield
myScrollbar.setSize(myTextfield._height);
myScrollBar.setEnabled(true);

myTextfield.htmlText = "Reading blog...";

// Perform initialization only once
if (!initialized) {
  initialized = true;
  NetServices.setDefaultGatewayUrl(myURL);
  var my_conn = NetServices.createGatewayConnection( );
  var myService = my_conn.getService(servicePath);
}

function textfieldNews (xml) {
  // Extract news items in item nodes of channel node
  var channelTag = xml.childNodes[1].nextSibling.childNodes[1];
  var temp = "";
  var newsitem, currentTag, link, newsdate;
  myTextfield.htmlText = "";
  for (var i=0; i < channelTag.childNodes.length; i++) {
    newsitem = channelTag.childNodes[i];
    newsitem.ignoreWhite = true;
    for (var j=0; j < newsitem.childNodes.length; j++) {
      currentTag = newsitem.childNodes[j];
      currentText = currentTag.firstChild.nodeValue;
      switch (currentTag.nodeName) {
        case "title":
          title = currentText;
          break;
        case "link":
          link = "<font color='#9966CC'>";
          link += "<a href='" + currentText + "'>" + title + "</a></font>";
          break;
        case "pubDate":
          newsdate = currentText;
          temp += newsdate + "<br>";
          temp += link + "<br><br>";
          break;
      }
    } // end for j
  } // end for i
  myTextfield.htmlText = temp;
}

// Responder object to display the result or an error
function MyResponder ( ) {}
MyResponder.prototype.onResult = function (myResult) {
  var fr_news = new XML(myResult);
  textfieldNews(fr_news);
};
MyResponder.prototype.onStatus = function (myStatus) {
  trace("Error: " + myStatus.description);
};

myService.proxy(new MyResponder( ), "");

ColdFusion developers have always had access to a simple tag that creates database connections, executes SQL statements, and returns resultsets to the caller. The <cfquery> tag is simple to use and simple to understand.
The ColdFusion implementation of SSAS contains a method, CF.query( ), that works in a fashion similar to the <cfquery> tag. The CF.query( ) method accepts up to six arguments, as listed in Table 6-3. Only the datasource and sql arguments are required.

A ColdFusion Server's database connections are defined in the CF Administrator. These connections are known as datasource names and are the basis of all data operations in ColdFusion. Once you have a datasource name set up, you can access the database to select, update, insert, or delete data; invoke stored procedure calls; create tables; or perform any other database operation. All of this is accessible through SSAS.

The sql argument is the SQL statement that you want to send to the database. If this is a simple SELECT statement, the CF.query( ) method returns a resultset, or Query object as it is known to ColdFusion programmers. Just like the CF.http( ) method, you can pass your arguments to the CF.query( ) method in several different ways. This is how the method is called using the basic function call:

var myVar = CF.query(datasource, sql, username, password, maxrows);

You cannot use the timeout argument when calling CF.query( ) with sequential arguments. To use timeout, you must use the named argument style:

var myVar = CF.query({datasource:datasource, sql:sql, username:username, password:password, maxrows:maxrows, timeout:timeout});

Common questions about Flash Remoting and SSAS involve the sql argument in the CF.query( ) method. The important thing to remember is that the sql argument is just a SQL statement in the form of a string; there is nothing magical about it. You can build a SQL statement manually, create a loop to add fields, or concatenate several parts together. The resulting string must be a valid SQL statement that can be sent to the database for processing.
For example, you might pass an object to your Server-Side ActionScript containing the parameters for the query:

myService.searchProducts({productname:"s%", unitprice:15});

Then, your SSAS code can build the SQL statement string using the object properties:

function searchProducts (searchobj) {
  var sql = "SELECT * FROM Products WHERE ";
  sql += "ProductName LIKE '" + searchobj.get("productname") + "'";
  sql += " AND UnitPrice > " + searchobj.get("unitprice");
  return CF.query("northwind", sql);
}

The variable sql in the previous example would contain the following SQL statement:

SELECT * FROM Products WHERE ProductName LIKE 's%' AND UnitPrice > 15

When you are creating your SQL statements, make sure to use single quotes as string or character delimiters and no quotes for numeric data. If you're using a Microsoft Access database, you should use # as the delimiter for date and time data.

The most basic form of database interaction involves the SELECT statement to retrieve results from the server. This is easily implemented in SSAS using CF.query( ), as in the following code:

function getProducts ( ) {
  var sql = "SELECT ProductID, ProductName FROM Products";
  var myResults = CF.query("northwind", sql);
  return myResults;
}

This code returns an entire resultset back to the Flash movie in the form of an ActionScript RecordSet object. This is the same as calling a <cfquery> tag in a ColdFusion Component or ColdFusion page, as shown in Chapter 3.

When you perform a database SELECT, you retrieve a resultset. When you perform other database operations, such as inserting, updating, or deleting records, nothing is returned. These types of statements can also be executed from SSAS, as can other types of SQL statements that create and drop database objects, set permissions, or perform any other valid form of database transaction. To demonstrate, I'll use the client-side code that was set up in Example 5-14. The functions in the SSAS file all work in a fashion similar to the ColdFusion example.
If you set up the ProductsAdmin.cfc file in Example 5-13, you'll have to rename it to SomethingElse.cfc in order to allow the SSAS .asr file to take precedence. The code in Example 6-3 is the full source listing for ProductsAdmin.asr. It should be saved in the webroot\com\oreilly\frdg\admin directory in order to allow it to work with the ProductsAdmin.fla file.

function getSearchResult (search) {
  // Retrieves records that match the search criteria
  var sql = "SELECT ProductID, ProductName, UnitPrice,";
  sql += " QuantityPerUnit, CategoryID, SupplierID";
  sql += " FROM Products";
  // If no argument is passed, all records are returned
  if (search) sql += " WHERE ProductName LIKE '%" + search + "%'";
  try {
    // Execute the query and capture errors
    var rsGetProducts = CF.query("northwind", sql);
  } catch (e) {
    throw "There was a database error";
  }
  return rsGetProducts;
}

function addProduct (Product) {
  var sql = "INSERT INTO Products (";
  sql += "   ProductName";
  sql += " , UnitPrice";
  sql += " , QuantityPerUnit";
  sql += " , CategoryID";
  sql += " , SupplierID ";
  sql += ") VALUES (";
  sql += "   '" + Product.get("ProductName") + "'";
  sql += " , " + Product.get("UnitPrice");
  sql += " , '" + Product.get("QuantityPerUnit") + "'";
  sql += " , " + Product.get("CategoryID");
  sql += " , " + Product.get("SupplierID");
  sql += ")";
  try {
    // Execute the query and capture errors
    CF.query("northwind", sql);
  } catch (e) {
    throw "There was a database error";
  }
}

function updateProduct (Product) {
  // (The SQL-building lines were dropped from this excerpt; this body is
  // reconstructed in the same preceding-comma style as addProduct.)
  var sql = "UPDATE Products";
  sql += " SET ProductName='" + Product.get("ProductName") + "'";
  sql += " , UnitPrice=" + Product.get("UnitPrice");
  sql += " , QuantityPerUnit='" + Product.get("QuantityPerUnit") + "'";
  sql += " , CategoryID=" + Product.get("CategoryID");
  sql += " , SupplierID=" + Product.get("SupplierID");
  sql += " WHERE ProductID=" + Product.get("ProductID");
  try {
    // Execute the query and capture errors
    CF.query("northwind", sql);
  } catch (e) {
    throw "There was a database error";
  }
}

function deleteProducts (ProductIDs) {
  // Delete one or more products. ProductIDs can be one ProductID or
  // a comma-separated list of ProductIDs.
  // The next statement is the delete statement. It is commented out
  // so that you can use the Discontinued column to delete products.
  // var sql = "DELETE FROM Products WHERE ProductID IN (" + ProductIDs + ")";
  var sql = "UPDATE Products SET Discontinued = 1 ";
  sql += "WHERE ProductID IN (" + ProductIDs + ")";
  try {
    // Execute the query and capture errors
    CF.query("northwind", sql);
  } catch (e) {
    throw "There was a database error";
  }
}

function getSuppliers ( ) {
  // Retrieve a list of suppliers for a ComboBox.
  var sql = "SELECT SupplierID, CompanyName FROM Suppliers";
  try {
    // Execute the query and capture errors
    var rsSuppliers = CF.query("northwind", sql);
  } catch (e) {
    throw "There was a database error";
  }
  return rsSuppliers;
}

function getCategories ( ) {
  // Retrieve a list of categories for a ComboBox.
  var sql = "SELECT CategoryID, CategoryName FROM Categories";
  try {
    // Execute the query and capture errors
    var rsCategories = CF.query("northwind", sql);
  } catch (e) {
    throw "There was a database error";
  }
  return rsCategories;
}

The remote methods operate exactly as the methods from the CFML in Example 5-13. Following are a few comments about the code. To access properties of objects passed to remote methods, you have to use objectName.get("propertyName"), as in this line from Example 6-3:

sql += " SET ProductName='" + Product.get("ProductName") + "'";

This is because the ActionScript objects coming from the client are actually Java objects of type ASObject by the time they are parsed by your SSAS file.

The SQL statements in Example 6-3 are built up using the preceding-comma approach of building SQL strings, as in this code from addProduct( ):

sql += "   ProductName";
sql += " , UnitPrice";
sql += " , QuantityPerUnit";

The code might look funny, but when you are debugging complex SQL statements, this style of coding makes it easy to comment out individual lines of SQL code without having to reformat the rest of the SQL statement. SQL statements in Server-Side ActionScript must be contained on one line with no line breaks. For that reason, it is wise to build your SQL statement as a string before sending it to the database.
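The quoting rules described above (single quotes around strings, none around numbers) are easy to get wrong when concatenating by hand. Here is a small, hypothetical Python sketch of a literal-quoting helper in the same preceding-comma spirit; it is an illustration only, and real applications should prefer parameterized queries, which the CF.query( ) API shown here does not provide:

```python
def sql_literal(value):
    """Render a value as a SQL literal: strings get single quotes
    (with embedded quotes doubled), numbers are left bare."""
    if isinstance(value, str):
        return "'" + value.replace("'", "''") + "'"
    if isinstance(value, (int, float)):
        return str(value)
    raise TypeError("unsupported literal type: %r" % type(value))

def build_update(table, fields, key_col, key_val):
    # Preceding-comma style, mirroring the SSAS examples above
    parts = [" , %s=%s" % (col, sql_literal(val)) for col, val in fields.items()]
    sets = "".join(parts)[3:]  # drop the leading " , "
    return "UPDATE %s SET %s WHERE %s=%s" % (
        table, sets, key_col, sql_literal(key_val))

sql = build_update("Products",
                   {"ProductName": "Sir Rodney's Scones", "UnitPrice": 10},
                   "ProductID", 21)
print(sql)
# → UPDATE Products SET ProductName='Sir Rodney''s Scones' , UnitPrice=10 WHERE ProductID=21
```

Doubling embedded single quotes keeps a value like "Sir Rodney's Scones" from breaking the statement, which the string-concatenation examples in this chapter would otherwise do.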
http://etutorials.org/Macromedia/Fash+remoting.+the+definitive+guide/Part+II+The+Server-Side+Languages/Chapter+6.+Server-Side+ActionScript/6.2+The+CF+Object/
Microsoft Visual C# is just one of the languages that uses Visual Studio .NET as its development environment. Other programming languages supplied by Microsoft that use Visual Studio .NET include Visual Basic .NET and Visual C++ .NET. In addition, companies other than Microsoft are supplying compilers for Visual Studio .NET, which will enable you to develop solutions that include Eiffel, COBOL, and other languages. All programming using these languages can take advantage of the same set of tools and features offered by Visual Studio .NET, including all of the designers and tool windows that are part of the integrated development environment (IDE) as well as the integrated help system. Although this chapter focuses on how you can use and customize Visual Studio .NET with the Visual C# programming language, most of this information will apply to all Visual Studio .NET languages.

The Visual Studio .NET Start Page, shown in Figure 1-2, provides a home base for obtaining information and services that extend beyond your machine. Many of the tabs available to you on the Start Page require an Internet connection. These tabs provide late-breaking information about Visual Studio .NET, provide links to new downloads, and enable you to host Web services with just a couple of mouse clicks. Specifically, the Visual Studio .NET Start Page contains the following tabs:

- Get Started: The Get Started tab is displayed initially and includes a list of solutions that you've worked with recently. (See the section "Creating Visual C# Solutions" later in this chapter for a description of a solution.) When you first launch Visual C# .NET, the list of solutions will be empty. Later the list will contain links to your most recent projects. This tab also includes a link for reporting Visual Studio .NET bugs. If you have a connection to the Internet, Visual Studio .NET will check for service packs and add a link to the update page when a service pack is available.

- What's New: The What's New tab is a Web page that has the latest updates and information about Visual Studio .NET. This tab includes additional up-to-date content that's downloaded from the Microsoft Web site if you have an Internet connection.

- Online Community: The Online Community tab has a selection of newsgroups and Web sites for Visual Studio .NET developers.

- Headlines: The Headlines tab contains information from the Microsoft Developer Network (MSDN) Web site, including the latest features, technical information, news, and links to relevant knowledge base articles. To use this tab, you must have a connection to the Internet.

- Search Online: The Search Online tab provides access to online searching of the MSDN database. To use this tab, you must have a connection to the Internet.

- Downloads: The Downloads tab provides links to downloads and code samples that are related to developing with Visual C# .NET and Visual Studio .NET. To use this tab, you must have a connection to the Internet.

- XML Web Services: The XML Web Services tab allows you to search for XML Web services you want to use in your current project. To use this tab, you must have a connection to the Internet.

- Web Hosting: The Web Hosting tab provides information about Web hosting options that are available to you as a .NET developer using Visual C# .NET. Visual Studio .NET includes one-click hosting, a simplified way to host your applications. (One-click hosting is described in the next section.) To use this tab, you must have a connection to the Internet.

- My Profile: The My Profile tab is used to configure the Visual Studio .NET user interface according to predefined profile elements. This tab is discussed in the section "Changing Your User Profile."

One-click hosting enables you to upload Web applications and services to a service provider with a single mouse click, placing a Web application or a Web service in a central location where it can easily be used by others.
One-click hosting is useful for demonstration or testing purposes, as well as for deploying small applications. The Web Hosting tab on the Visual Studio .NET Start Page includes a current list of companies that are participating in one-click hosting and offering free trial periods. The Visual Studio .NET user interface can be adjusted to suit your preferences by modifying settings on the My Profile tab on the Visual Studio .NET Start Page. By selecting a user profile, you can identify yourself as a developer experienced with Visual C++, Visual Basic, or other tools. Visual Studio .NET will use this information to orient toolbars and other windows in a way that’s more familiar to you. You also can set the keyboard and windows schemes separately to match earlier versions of Microsoft programming tools. For example, you can set your keyboard mapping to match Visual C++ 2 and your window layout to match Visual C++ 6. The My Profile tab also enables you to define a default help filter so that help results are returned for a specific programming language or technology that you use (or ask questions about) most often. You also can choose to have help shown in an external window instead of in the default internal window. Visual Studio .NET has a large number of windows, toolbars, and Toolbox windows that you’ll use when developing your Visual C# applications. The environment is completely customizable, and the location and appearance of most windows can be easily modified to suit your needs. The default Visual Studio .NET layout is shown in Figure 1-3. Visual Studio .NET has two modes for managing its child windows, as follows: By default, child windows are tabbed and are stacked to conserve screen real estate. (Figure 1-3 shows Visual Studio .NET in tabbed-document mode.) Open files are selected by clicking on tabs that identify each open file. 
Optionally, you can display child windows using the Multiple Document Interface (MDI), which manages child windows as overlapped windows that are contained within the main Visual Studio .NET window. In this mode, child windows are allowed to overlap, as shown in Figure 1-4. In addition to the child windows used for editing text and forms, Visual Studio .NET has a number of windows that are located around the edges of its main window. The following sections discuss those windows. Solution Explorer displays a tree of the current Visual Studio .NET solution. Using Solution Explorer, you can browse through all the projects that make up the current solution, as well as the files that belong to each project. Double-clicking a project file will open the file for editing. Opening a file will change the menu and toolbar items that are available. For example, if you open an XML file, a top-level menu item for XML operations will be added to the main menu. Right-clicking on any element in Solution Explorer will display a shortcut menu with actions that you can perform on that element. For example, the shortcut menu for the solution icon allows you to perform tasks such as adding a new project to the solution, whereas the shortcut menu for a project enables you to add new items to the project and perform other project-related activities. The Class View window displays the class and type hierarchy for the current project and is used to traverse the project’s type hierarchy. Clicking an element that’s declared in your project will open the source code editor at the point of declaration. If you click on an element that’s part of the .NET Framework, the Object Browser window will open for that element. The Resource View window is used to view resource files associated with a project. If your project contains resources such as menus, string tables, and dialog boxes, those resources can be accessed in this window. 
The Resource View window is normally displayed only for projects that have an associated resource (.rc) file.

The Properties window is used to declaratively set properties for different elements of your solution. The contents of this window vary depending on the type of item you're currently working with. If you click an icon in Solution Explorer, properties for the selected item will be displayed. If you're working with user interface controls or forms, many of the control and form properties can be set through this window. Likewise, if you're working with an HTML or XML document, object model properties for these documents can be set in this window.

The Task List window provides a running list of tasks that must be completed. This window is hidden by default. To display the Task List window, choose Other Windows from the View menu and choose Task List, or press Ctrl+Alt+K. Each item in the task list includes a category, a description, a file name, and a line number; double-clicking a task will open the source editor at the location indicated by the file name and line number. Depending on the task's category, the entry can also have a priority that can be used to sort the tasks or a check box that can be used to indicate whether the task has been completed. Initially, the task list contains only build errors; however, it can be used to display a wide range of other tasks, including the following categories:

- Comment tasks: These tasks are defined inside comment marks and are useful for tagging sections of code that require further attention. Visual Studio .NET defines the tokens TODO, HACK, and UNDONE. Comments that begin with these tokens will be displayed in the task list.

- Shortcut tasks: These tasks are created by right-clicking in the source code editor and choosing Add Task List Shortcut from the shortcut menu. Shortcut tasks are easy to create and useful when you don't need to associate a comment with the new task.
You can specify a priority for a shortcut task by clicking in the leftmost column of the task and then choosing one of the displayed priorities. After the task has been completed, select the task's check box to mark the task as finished.

- User-defined tasks: These tasks are created by defining a comment token, as described later in this chapter, in the section "The Environment Category."

To specify the categories of items that will be shown in the task list, choose Show Tasks from the View menu. Alternatively, you can right-click in the Task List window, select Show Tasks from the shortcut menu, and then select the task category you want the task list to show. In addition to specifying a task category to be displayed, you can display all tasks, or show only tasks that are checked or only tasks that are unchecked. Click on the task list header to sort the tasks based on priority, category, checked state, or any other displayed field.

Server Explorer provides access to system services that are available on your machine as well as on other machines on your network. Normally, Server Explorer is tucked away under the edge of the Visual Studio .NET window, with just a small icon visible, as shown earlier in Figure 1-3. If you position the mouse pointer over the Server Explorer icon, the window expands to display a list of servers available on your network, as shown in Figure 1-5. To lock the Server Explorer window in place, click the pin icon in the upper right; this will prevent the window from auto-hiding until the pin icon is clicked again. Server Explorer provides easy access to event logs, databases, performance counters, and other system services. Server Explorer is more than just a console for viewing information; you can drag objects from Server Explorer into your project, and Visual C# .NET will automatically generate code to make use of the new objects in your project.
For example, to add a database connection to a Windows application, you can simply drag a database icon from Server Explorer to an open form in the Visual Studio .NET Windows Forms Designer. Adding database connections will be covered in detail in Chapter 18.

The Toolbox window contains items that can be easily added to your projects. Each tool category is located on a separate tab in the Toolbox window; clicking a tab displays the items in that category. For example, in Figure 1-6, the Web Forms tab is open, showing a number of controls and elements that are useful when you're creating a Web form. Visual Studio .NET will populate the Toolbox window with tool categories that are relevant to your current project. For example, a Web application will have Toolbox items that provide data access and controls for Web Forms. You can control the behavior of the Toolbox window through a shortcut menu that appears when you right-click in the Toolbox window.

Visual Studio .NET includes a number of additional windows that you might find useful on occasion. You can display each of these windows by choosing Other Windows from the View menu and selecting the window you want Visual Studio .NET to show. Some of the windows available in the Other Windows menu are described here:

- Object Browser: The Object Browser window is used to explore the types that are available in your solution. All projects in the current solution are listed as top-level nodes in the Object Browser window, as are all explicitly referenced assemblies from the .NET Framework. By expanding the nodes in the Object Browser window, you can view the enclosed namespaces, classes, structures, and all other types, as well as type members. In addition to basic structural information, the Object Browser window provides documentation for the elements it displays.

- Stack Trace: The Stack Trace window is used during debugging to display the current call stack.
The call stack includes information such as the method names in the stack, the current line number within each method, and parameter-related data.

Output: The Output window contains information about the most recent build status of your solution and displays output information during debugging.

Command Window: The Command Window is used to issue commands to the Visual Studio .NET IDE, as well as to evaluate statements while debugging your Visual C# projects. For example, to display the Find In Files dialog box, simply type the following line in the Command Window, followed by a carriage return:

>Edit.FindInFiles

Using the Command Window while debugging is discussed in Chapter 9.

Favorites: The Visual Studio .NET Favorites window is an extension of the Internet Explorer Favorites window. Items in the Visual Studio .NET Favorites window are shared with the Internet Explorer Favorites folder. The Favorites window includes two icons that you can use to add the current item to your Favorites window and to manage items in your Favorites folder.

By default, Visual Studio .NET child windows are initially docked—that is, the windows are attached to one of the edges of the main Visual Studio .NET window. Many of the windows contain tabs and appear to be stacked on top of each other. Visual Studio .NET allows you to change the size, docking state, and tabbing for all child windows by simply dragging the windows to new locations.

To undock a window, simply drag it by its title bar away from its docked position. The window will undock and follow the mouse pointer. As you drag the window, an outline will show its new size and location. If you drag a tabbed window by its title bar, the entire set of tabbed windows will undock together. Double-clicking the title bar will restore the window to its previous location; dragging the window close to the edge of the main Visual Studio .NET window will cause it to dock in that position.
To untab a window from its current collection of windows, drag the window away from its current position by dragging its tab. The window will revert to an untabbed state and will follow the mouse pointer. As with docking operations, the outline of the dragged window is displayed to assist you in positioning the window. To force a window to be tabbed with other windows, drag the window over the window with which it will be tabbed. Position the mouse pointer over the target window's title bar, and when the dragging outline changes to include the outline of a tab, drop the window to complete the dragging operation. If the target window already has tabs, you can target an existing tab instead of the window's title bar.

Customization of the Visual Studio .NET environment is performed through the Options dialog box, shown in Figure 1-7, which is opened by choosing Options from the Tools menu. You can use the Options dialog box to customize the behavior and appearance of Visual Studio .NET. Configuration options are grouped into categories and subcategories that appear in a tree view. To modify a particular option, you first navigate to the category and subcategory of the option, and then update the settings shown on that specific dialog box page. The Visual Studio .NET Options dialog box includes a large number of settings, many of which are outside the scope of this chapter. In this section, we'll look at some of the more common Environment and Text Editor category options.

The Environment category includes configuration options that affect the Visual Studio .NET development environment. Options in this category are not related to a specific programming language. Some of the more commonly used configuration settings are described here:

General: The General page includes settings to specify whether child windows should be tabbed or use MDI, as well as menu and status bar behavior.
Documents: The Documents page is used to control how documents are handled in Visual Studio .NET. For example, you can specify how Visual Studio .NET handles open documents that are changed by external programs.

Dynamic Help: The Dynamic Help page is used to enable or disable dynamic help for specific categories or types of help. Although Dynamic Help is a useful feature, disabling it will improve the performance of Visual Studio .NET, especially on underpowered machines.

Fonts And Colors: The Fonts And Colors page is used to set the fonts used for various windows in Visual Studio .NET.

Help: The Help page allows you to configure the settings for online help, such as the help collection that's used by default, the preferred language for online help, and whether online help is displayed in an internal window or in a separate window external to Visual Studio .NET.

International Settings: The International Settings page is used to select the default language when multiple languages are installed on your computer.

Keyboard: The Keyboard page is used to control the keyboard bindings for Visual Studio .NET. You can choose from predetermined mapping schemes or specify new custom key mappings for commands.

Projects And Solutions: The Projects And Solutions page sets the default location for Visual Studio .NET projects. This page also specifies build behavior, such as the treatment of open files and whether the Output or Task List windows should be displayed.

Task List: The Task List page is used to configure how task list items are managed within Visual Studio .NET. Visual Studio .NET is initially configured to warn you before a task item is deleted or if a new task item is initially hidden. You can override that behavior on this page. You can also define new comment tokens that identify particular types of tasks. New comment tokens are case-sensitive. For example, if your new comment token is FIXME, a comment that begins with FixMe won't be added to the task list.
Web Browser: The Web Browser page enables you to set options for the Home and Search pages and configure Internet Explorer options.

The Text Editor category includes options for editing source files inside Visual Studio .NET. Configuration settings that apply to all types of source files are set on the General page. This page includes basic settings that relate to how the editor margins are displayed, as well as generic settings such as the setting that allows drag-and-drop editing within the Text Editor window. Each type of source file has its own configuration page. For example, Visual C# and XML files can have different configuration settings. Language-specific settings pages enable you to configure settings such as word wrap (off by default) and line numbers (also off by default).

The online help system in Visual Studio .NET is based on the MSDN Library and is significantly better than online help systems in earlier versions of Visual Studio. Improved filtering and searching options make this version of online help much more useful. In addition, vendors of tools and components that integrate with Visual Studio .NET can now safely integrate their product documentation with the online help system, making it much easier to get help for third-party tools and components.

Dynamic help is a great new feature in Visual Studio .NET. As you work with the various tools in Visual Studio .NET, the dynamic help system searches through the MSDN Library for relevant help topics. The list of help topics is updated as you use different tools or windows in Visual Studio .NET. As you edit a Visual C# source file, the Dynamic Help window is automatically updated to include help on the keywords or classes that you're typing.

You can display the Search window for online help by choosing Search from the Help menu. A predefined set of filters can be used to narrow your search.
For example, if you're interested in searching only through .NET Framework Software Development Kit (SDK) topics, you can easily narrow your search to include only those items. The Results window, which opens automatically when you conduct a search, displays the results of the search.

The online help keyword index is displayed by choosing Index from the Help menu. Using the index is sometimes faster than searching when you know the title of the item you're looking for. When the index search returns multiple topics, the list of results will be displayed in the Results window.
http://etutorials.org/Programming/visual-c-sharp/Part+I+Introducing+Microsoft+Visual+C+.NET/Chapter+1+A+Tour+of+Visual+Studio+.NET+and+Visual+C+.NET/Overview+of+Visual+Studio+.NET/
NAME

Lexical::Types - Extend the semantics of typed lexicals.

VERSION

Version 0.15

SYNOPSIS

DESCRIPTION

This pragma allows you to hook the execution of typed lexicals declarations (my Str $x) by calling a configurable method in a configurable package at each run. In particular, it can be used to automatically tie or bless typed lexicals whenever they are initialized.

Remember that for perl to be able to parse my Str $x, you need either the Str package to be defined, or for Str to be a constant sub returning a valid defined package; so make sure you follow one of those two strategies to define your types.

This pragma is not implemented with a source filter.

METHODS

import

use Lexical::Types;
use Lexical::Types as => $prefix;
use Lexical::Types as => sub { ... }; # = $mangler

Magically called when use Lexical::Types is encountered. All the occurrences of my Str $x in the current lexical scope will be changed to call at each run a given method in a given package. The method and package are determined by the parameter 'as':

If it's left unspecified, the TYPEDSCALAR method in the Str package will be called.

use Lexical::Types;
my Str $x; # calls Str->TYPEDSCALAR

If a plain scalar $prefix is passed as the value, the TYPEDSCALAR method in the ${prefix}::Str package will be used.

use Lexical::Types as => 'My::'; # or "as => 'My'"
my Str $x; # calls My::Str->TYPEDSCALAR

If the value given is a code reference $mangler, it will be called at compile-time with arguments 'Str' and 'TYPEDSCALAR' and is expected to return either an empty list, in which case the current typed lexical definition will be skipped (thus it won't be altered to trigger a run-time hook):

use Lexical::Types as => sub { return $_[0] =~ /Str/ ?
@_ : () };
my Str $y; # calls Str->TYPEDSCALAR
my Int $x; # nothing special

or the desired package and method name, in that order.

no Lexical::Types;

Magically called when no Lexical::Types is encountered. Turns the pragma off.

RUN-TIME INITIALIZER METHOD

INTEGRATION

CONSTANTS

CAVEATS

Using this pragma will cause a slight global slowdown of any subsequent compilation phase that happens anywhere in your code - even outside of the scope of use of use Lexical::Types - which may become noticeable if you rely heavily on numerous calls to eval STRING.

Typed lexicals declarations that appear in code eval'd during the global destruction phase of a spawned thread or pseudo-fork (the processes used internally for the fork emulation on Windows) are ignored.

BUGS

Please report any bugs or feature requests to bug-lexical-types at rt.cpan.org, or through the web interface. I will be notified, and then you'll automatically be notified of progress on your bug as I make changes.

SUPPORT

You can find documentation for this module with the perldoc command.

perldoc Lexical::Types

ACKNOWLEDGEMENTS

Inspired by Ricardo Signes. Thanks Florian Ragwitz for suggesting the use of constants for types.

This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
https://metacpan.org/pod/Lexical::Types
BPF is the next Linux tracing superpower, and its potential just keeps growing. The BCC project just merged my latest PR, which introduces USDT probe support in BPF programs. Before we look at the details, here's an example of what it means:

# trace -p $(pidof node) 'u:node:http__server__request "%s %s (from %s:%d)" arg5, arg6, arg3, arg4'
TIME      PID    COMM   FUNC                   -
04:50:44  22185  node   http__server__request  GET /foofoo (from ::1:51056)
04:50:46  22185  node   http__server__request  GET / (from ::1:51056)
^C

Yep, that's Node.js running on Linux with my BPF trace tool attaching to the http__server__request probe, which is invoked for each incoming HTTP request. The argdist tool has support for these probes as well, and you can discover USDT probes easily using the tplist tool. You can try this example yourself by building Node from source with the --with-dtrace configure switch. If you have all the prerequisites, it should be as easy as:

git clone --depth=1
cd node
./configure --with-dtrace
make

Discovering USDT Probes

USDT probes are static tracing markers placed in an executable or library. The probes are just nop instructions emitted by the compiler, whose locations are recorded in the notes section of the ELF binary. Tracing apps can instrument these locations and retrieve probe arguments. Specifically, uprobes (which BPF already supports) can be used to instrument the traced location. To discover whether an executable or library contains USDT probes, run readelf -n and look for NT_STAPSDT notes. For example:

$ readelf -n node
...
  stapsdt              0x00000067       NT_STAPSDT (SystemTap probe descriptors)
    Provider: node
    Name: http__server__response
    Location: 0x0000000000ef36c6, Base: 0x0000000001294154, Semaphore: 0x0000000001606c36
    Arguments: 8@%rax 8@-1128(%rbp) -4@-1132(%rbp) -4@-1136(%rbp)
  stapsdt              0x00000060       NT_STAPSDT (SystemTap probe descriptors)
    Provider: node
    Name: http__client__response
    Location: 0x0000000000ef3b7a, Base: 0x0000000001294154, Semaphore: 0x0000000001606c3a
    Arguments: 8@%rdx 8@-1128(%rbp) -4@%eax -4@-1136(%rbp)
  stapsdt              0x00000089       NT_STAPSDT (SystemTap probe descriptors)
    Provider: node
    Name: http__client__request
    Location: 0x0000000000ef412b, Base: 0x0000000001294154, Semaphore: 0x0000000001606c38
    Arguments: 8@%rax 8@%rdx 8@-2168(%rbp) -4@-2172(%rbp) 8@-2216(%rbp) 8@-2224(%rbp) -4@-2176(%rbp)
  stapsdt              0x00000089       NT_STAPSDT (SystemTap probe descriptors)
    Provider: node
    Name: http__server__request
    Location: 0x0000000000ef4854, Base: 0x0000000001294154, Semaphore: 0x0000000001606c34
    Arguments: 8@%r14 8@%rax 8@-4328(%rbp) -4@-4332(%rbp) 8@-4288(%rbp) 8@-4296(%rbp) -4@-4336(%rbp)

Parsing this output isn't hard, but can be a bit confusing. If you prefer something more compact, try the tplist tool from BCC.
For example, here's how to discover all the probes in the node executable, and then list the variables available for one specific probe:

$ tplist -l /home/vagrant/node/node
/home/vagrant/node/node node:gc__start
/home/vagrant/node/node node:gc__done
/home/vagrant/node/node node:net__server__connection
/home/vagrant/node/node node:net__stream__end
/home/vagrant/node/node node:http__server__response
/home/vagrant/node/node node:http__client__response
/home/vagrant/node/node node:http__client__request
/home/vagrant/node/node node:http__server__request

$ tplist -v -l /home/vagrant/node/node 'node:gc__start'
/home/vagrant/node/node node:gc__start [sema 0x1606c3c]
  location 0xef2994 raw args: 4@%esi 4@%edx 8@%rdi
    4 unsigned bytes @ register %esi
    4 unsigned bytes @ register %edx
    8 unsigned bytes @ register %rdi

This output means that the gc__start probe has three arguments — two 32-bit values and one 64-bit value. To understand their meaning, you'd need access to the source or documentation of these probes. For example:

[vagrant@fedora-22-x86-64 tools]$ grep -r gc__start ~/node/*
...
/home/vagrant/node/src/node.stp:probe node_gc_start = process("node").mark("gc__start")
/home/vagrant/node/src/node_provider.d:  probe gc__start(int t, int f, void *isolate);

Looking at the node.stp file in more detail, we can find the way these flags were meant to be interpreted. But instead let's look at another probe, which is slightly more interesting:

$ grep -A 20 http__server__request node/src/node.stp
probe node_http_server_request = process("node").mark("http__server__request")
{
  remote = user_string($arg3);
  port = $arg4;
  method = user_string($arg5);
  url = user_string($arg6);
  fd = $arg7;
  probestr = sprintf("%s(remote=%s, port=%d, method=%s, url=%s, fd=%d)",
                     $$name, remote, port, method, url, fd);
}
...

Now you know that the sixth argument is the request URL, and the fifth argument is the HTTP method for that request.
Some other libraries known to contain USDT probes include libc, libpthread, libm, libstdc++, and many others. Try them out with tplist and have fun!

Instrumenting a Program with USDT Probes

To instrument your own program with USDT probes, you need the systemtap-sdt-dev package for your system. Then, the simplest approach is to just #include <sys/sdt.h> and start using the DTRACE_PROBE macros:

if (value < 0) {
    DTRACE_PROBE1(myapp, value_was_negative, value);
}

Occasionally, you would want to emit a probe only if there is a tracing program attached. This is especially relevant if generating the traced parameters is expensive, and you'd rather do it only if you know someone cares about the probe. If you need this behavior, you'll need a bit more of the USDT infrastructure. First, you need a .d file that describes your probes:

provider myapp {
    probe value_was_negative(int val);
    probe app_exited(int exit_code, char *result);
}

Next, you use the dtrace tool (this is from the systemtap-sdt-dev package, and do not confuse it with DTrace of Solaris fame) to generate a header file to include and an object file to link with:

$ dtrace -G -s myapp.d -o myapp-trace.o
$ dtrace -h -s myapp.d -o myapp-trace.h

The generated header file has declarations for your probes, such as MYAPP_VALUE_WAS_NEGATIVE(), and also macros that you can use to test if the probe is enabled, such as MYAPP_VALUE_WAS_NEGATIVE_ENABLED(). The way this works is that each probe now has a global variable that needs to be incremented by the tracing program to indicate that the probe should be enabled (this global variable is also called the probe's semaphore). All that's left is to actually invoke the probe:

#include "myapp-trace.h"

if (value < 0 && MYAPP_VALUE_WAS_NEGATIVE_ENABLED()) {
    MYAPP_VALUE_WAS_NEGATIVE(value);
}
if (MYAPP_APP_EXITED_ENABLED()) {
    MYAPP_APP_EXITED(0, "Everything's dandy");
}

Finally, link with myapp-trace.o to obtain an instrumented executable.
To verify everything's right, run tplist or readelf -n on the end result.

BCC Support

Adding BCC support for USDT probes entailed writing a probe enumerator (USDTReader), which parses the NT_STAPSDT ELF notes and uses this information to determine which locations to probe. Probes associated with semaphores need to be enabled by incrementing the semaphore's location in the requested process — yes, this actually requires poking /proc/$PID/mem to enable the probe.

Next, we had to parse the probe's arguments and determine how to retrieve them at the probe's locations. The ELF note description contains argument data such as -4@-4(%rbp), which becomes a series of statements in the BPF program, such as:

int tmp;
bpf_probe_read(&tmp, sizeof(tmp), (void *)(ctx->bp - 4));

The end result is that you can use the trace and argdist tools to instrument USDT probes and access their arguments. Node is an interesting example, and so are libpthread, libc, and a bunch of other applications compiled with USDT support. See the man pages and examples for argdist and trace on the BCC repository for more information.

Summary

BCC now has support for USDT probes, which are used by various user-mode processes and libraries for static instrumentation and tracing. The trace and argdist tools can record and display USDT tracing data, and the tplist tool can discover USDT probes in an existing process, library, or executable.

I am also using Twitter for shorter notes, links, and comments. You can follow me: @goldshtn

Is there any way to enable USDT support on an already set up Node.js system? Or do I have to remove it and configure from source?

You need a Node.js binary built with --enable-dtrace. If it's not, you need to configure from source.
http://blogs.microsoft.co.il/sasha/2016/03/30/usdt-probe-support-in-bpfbcc/
I had hoped that an expert would flex those muscular brains of theirs and share that knowledge, but you have little 'ol guppy me, so here is part 2 of the 'Let's C how this Goes…' articles. I don't claim to be the guru or 'Yoda' expert on this, just fair warning. However, I will do my best to help you learn the C programming language a bit more in depth. This one is a bit longer, so I apologize for that in advance. By the way, I'm sure most of you know what a guppy is, but for those of you who don't, it's an itty bitty fish which can look cute and pretty but if eaten is more like that small snack bite, like eating a chip or two and there you go. So enough fish 'n chips…or else we all might drop what we are reading and doing and go eat something, jeesh.

So just to recap, in the intro we had talked about the basics, such as the code involved for your first ever C program, in which we printed the statement "Hello World!" Yeah, that was original wasn't it? Just about everyone does that, but for simplicity's sake we went along with that boring statement. We talked about the knick-knacks involved like brackets and parentheses and semicolons and such. Just look at the intro article, don't be lazy now…come on. Keep in mind that the beginning, though it can be tweaked, is pretty much the same, as well as the end. It's the middle part of the program which determines what you're doing. For example, are you writing a simple program which simply says (prints) 'Hi how are you?' or is it a small game or a calculator or what? Like saying, are you creating Superman or the bystander taking pictures of Superman as he flies away? See? Simple to complex, it's up to you, and you're only limited by your imagination and the limitations of the programming language you are using.

The good stuff:

Let's get on with it, I'll assume you were a good student and took a look or at least skimmed the intro article.
When you print a statement (you're basically telling the computer to display the dang thing) you can print it in two lines or all on one line as we did in the intro. If you want to get fancy and cute, fine, but don't ask me to change the font color or size…that's a different programming language. Don't jump ahead of yourself, let's focus on these baby steps before swimming with sharks and making your brain hate you for jumping around and confusing it.

A new line is simply '\n', that simple right? Ok, go now and conquer the world…I'm kidding. Let me show you that in one line of code, just like the first example, but don't get worried on me now. So, we had simply used this:

printf("Hello World!");

but we want more than just one line, so then we use that \n like so:

printf("Hello\nWorld!\n");

and there you have it. Go ahead and try it! Here's what the full program would look like:

#include <stdio.h>

int main()
{
    printf("Hello\nWorld!\n");
    return 0;
}

That would give the return (or the result) like this:

Hello
World!

You can try it at this online compiler:

That was too simple though, so let's add two integers (numbers) together. Don't panic though, there will be more lines, and the 'scanf' code will be introduced to you in 3…2…1 and now. The scanf code: basically, in this program you're saying "hey you! Yeah you, see these two numbers? Good, well I want you to do something with them." Let's say we use the code 'sum' which is used to perform the adding part. So we say "hey you, take these two numbers and add them for me…oh and show me what it is too." In nerdy tech talk you are saying: compute the sum of these values and print the result using printf. Oh, and before using the numbers you have to tell it what to add, in this case whichever number is inputted (or typed in) by the user (in this case us for now). I have another food example; yes, I'm a bit hungry as I'm typing this at the moment so bear with me.
Basically, if you want a cheese sandwich you have to say, use two pieces of bread and this cheese for the middle, now put them all together and present it to me…then you gobble it up. So what does that all look like? Well, let's type out another program, YAY your second program (if you don't count that one separate lines bit above with '\n'). Ok, so how did we start it off? Hey no peeking. Meh you looked, whatever, after time you won't need to look the more you type it out, remember me saying practice makes perfect? Ok I didn't say that exactly but you know what I mean. And no, don't call it a hashtag, before hashtags were popular this (#) was called a pound sign symbol, and no I'm not a granny…don't believe me, look it up little guppy. For the longest time, when I was a wee bit little grasshopper I thought it was called a pounce sign, as in the cat pounced. Yeah don't ask. I have no idea why I thought that, hehe, any who…onward with the lesson.

Before we start adding numbers and scanning and all that fancy-ness, let's also get introduced to what represents the integers (or numbers) in the C program. What is that you may ask? Well, '%d' is used for integers. But why the percent sign? The '%' is a format specifier, that's basically fancy techy talk for 'what the hell do you want? Numbers? Letters? What?' See, not that hard. Don't get too hung up on the why that symbol or why this symbol, for now just remember that '%d' is integer (or numbers). Let's keep things simple for now. Ok, now we can apply what we know to see it in action.
Here's an example of it:

#include <stdio.h>

/* this is the function main which gets this party started…I mean program */
int main()
{
    int integer1; /* The statement or line of code stating that the variable input is an integer */

    printf("Enter your magical number "); /* Letting the user know to give us that number */

    scanf("%d", &integer1); /* This function takes the data, the number inputted, and stores it in what we declared above, in this case integer1—like saying 'ok, let me hold onto this for you' or 'Gimme that' */

    printf("Your magical number is : %d\n", integer1); /* Displays output; the number taken in by scanf is being displayed—like saying 'Here, take it back' */

    return 0;
}

Before I go on, you are probably wondering what's with all the red comments, right? Ok, calm down…breathe…don't panic. Any comment in a program made between the /*….*/ will be ignored by the compiler and not included or factored in when compiling the program, like the ones just typed out above. The comments made between the /* and */ can run on multiple lines, but if you use ' // ' with a sentence after it then it can only run on one line. Like so:

// This is a one lined comment.

I only displayed it in red to make it easier for you to read. If you are working or plan to work for someone or with others that might look at the code you wrote, it can be a good practice to write comments, not only for yourself to remember but so anyone else who has to take over can follow along with what you were doing too, the 'Be kind, and rewind' type of policy. Ok, I think you understand, you seem like a smart cookie.

So the program typed out above with all the cute red lines will ask you for a number and display back to you whichever number you first entered. It's like throwing the ball and getting the same ball back. This time my example wasn't food, my tummy is happy.
In the line where we typed 'int integer1' you can call integer1 anything you like really, but for simplicity's sake I used the boring term integer1 so that it's better understood what we are doing. You could even call it magical1 if you wanted to, or bat or cat, you get the picture. Try it out here, just copy and paste it in as is to see.

Now let's take it a step further, get a little fancy and add two integers together. So what do we do? Simple, it is…don't look so worried, we add another variable and declare it, as well as the sum, for simplicity's sake; like I said, we could call it 'z' or 'integer3' or whatever strikes your fancy, but keep KISS in mind. KISS = Keep It Simple Stupid, I'm not saying you're stupid, I'm saying to keep it that simple…so please don't try to misinterpret that. Now we could scanf two numbers in the same statement or in separate statements, but I don't want to confuse you, so we will write it all out on its own, then I will show you the one-line bit. So here's your 3rd program:

#include <stdio.h>

int main()
{
    int integer1;
    int integer2; /* This is for our second input */
    int sum;      /* This is for the addition of the two integers */

    printf("Enter your first magical number\n");
    scanf("%d", &integer1);

    printf("Enter your second magical number\n");
    scanf("%d", &integer2);

    sum = integer1 + integer2;
    printf("Your magical numbers added together is: %d\n", sum);

    return 0;
}

Now if you wanted to shorten that up a bit but still add two integers together while also giving you the result, here's one way of doing so. This does the same thing but formatted differently:

#include <stdio.h>

int main()
{
    int num1;
    int num2;

    printf("Enter two integers\n");
    scanf("%d %d", &num1, &num2);

    int result = num1 + num2;
    printf("Entered values are : %d and %d and their sum is %d\n", num1, num2, result);

    return 0;
}

Instead of num1, num2, and result you could instead replace them with something even shorter like x, y, and z. So that's all, for now, folks.
I hope you're still sane even after all that. You may not realize this, but you have learned more than you know. Now go forth and practice typing them out and try tweaking them a bit. Try challenging yourself by perhaps adding a third integer to be added. Okay, now you have your homework. Go be a good student and work on it. Good luck in your coding and remember that every little space and comma counts. See you around fellow Cybrarians! *Ghost hug* you can't feel it but it's there.

Just in case you copy and paste, then compile to receive this error:

program.c:57: error: stray '342' in program

It turns out that you need to replace your double quotes in each line. If you look closely at what is copied from the website, you will see that the double quotation marks that surround the string literal are not the neutral (vertical) ones. Replace your left and right double quotes and then it works, it works! Yes I said that twice 🙂

Good post…looking fa dis

Thank you, glad you found it ^_^

nice

Nice… very informative to start with. Students like me are learning this stuff.

Thank you and good luck on your learning. ^_^

nice

Nice… very informative to start with.

Very Nice… very informative to start with.
https://www.cybrary.it/0p3n/lets-c-goes-part-2/
Stateless and Create exception
Monkey Den Feb 12, 2009 4:51 PM

Using 2.0.3.CR1, I get the following error on start up. I'm pretty sure the error isn't actually what it says. I don't define a @Create in my bean, though I'd like to. I question the presence of the @Create on Query.validate(), but that may have been designed for the components.xml configuration and not extension.

Only JavaBeans and stateful session beans support @Create methods: participantList

@Stateless - GlobalUsersQueryController is a subclass of EntityQuery. I have multiple SMPC and use it to override getPersistenceContextName().

@Stateless
@Scope(ScopeType.PAGE)
@Name("participantList")
public class ParticipantListBean extends GlobalUsersQueryController<UserView> implements ParticipantList {
    @In FacesMessages facesMessages;
}

@Local
public interface ParticipantList {
    public void search();
    public UserView getCriteria();
    public void setCriteria(UserView criteria);
    public Prefix getPrefix();
    public void setPrefix(Prefix prefix);
}

1. Re: Stateless and Create exception
Monkey Den Feb 12, 2009 5:57 PM (in response to Monkey Den)

So looking at Component.scanMethod() I see the error being thrown:

if (type != JAVA_BEAN && type != STATEFUL_SESSION_BEAN) {
    throw new IllegalArgumentException("Only JavaBeans and stateful session beans support @Create methods: " + name);
}

I assume there is good reason for this, I just can't imagine why. I would expect that ALL Seam components could have a @Create.

2. Re: Stateless and Create exception
Stuart Douglas Feb 12, 2009 10:34 PM (in response to Monkey Den)

You are trying to bind a stateless session bean to the page context. This is not possible; if it is stateless, it should be bound to the stateless context (which it will be by default). If it really has to be PAGE scoped, use a JavaBean or SFSB (incidentally, page scope is generally a bad idea, but that is a matter for another thread).
If you need to run logic on stateless bean creation use the @PostContruct ejb3 annotation. 3. Re: Stateless and Create exceptionlionel meroni Mar 4, 2009 3:39 PM (in response to Monkey Den) Hello, I have the same issue. I want to use SLSB which extends EntityHome. Like this : @Stateless @Name("userDAO") public class UserDAOImp extends EntityHome<User> implements IUserDAOLocal, IUserDAORemote { But EntityHome have a @Create annotation so my application fail to start, and i obtain this exception : java.lang.IllegalArgumentException: Only JavaBeans and stateful session beans support @Create methods: userDAO Why ? And how to use SLSB with EntityHome ? Thanks fo your future help 4. Re: Stateless and Create exceptionPedro Sena Mar 4, 2009 6:19 PM (in response to Monkey Den) @lionel meroni: Your problem is related to another thing. The EntityHome itself already has a method with @Create annotation, for this reason you cannot declare other method with this annotation(annotations are inherited). Take a look at documentation and try to override the method that is annotated with create, but do not forget to call the parent method before your code, just to avoid weird behavior. Regards, Pedro Sena 5. Re: Stateless and Create exceptionlionel meroni Mar 5, 2009 10:35 AM (in response to Monkey Den) Hello Pedro, As I wrote, the exception say : Only JavaBeans and stateful session beans support @Create methods So a stateless session bean could not have a method with @Create annotation. So a stateless session bean could not extends EntityHome !!! BUT in the documentation here : My Link There is an example of a Stateless bean which extends EntityHome. Actualy this is not possible ? Is the documentation up to date or wrong ?? 6. Re: Stateless and Create exceptionPedro Sena Mar 5, 2009 12:21 PM (in response to Monkey Den) Lionel, Use a Seam component(stateless scoped) instead of a EJB in this case. In this case they can be switched if no cost. Is what I'm doing here. Regards, PS 7. 
Re: Stateless and Create exceptionlionel meroni Mar 5, 2009 12:54 PM (in response to Monkey Den) Thanks Pedro, with @Scope(ScopeType.STATELESS) It work ! 8. Re: Stateless and Create exceptionJoshua D Oct 10, 2009 9:01 AM (in response to Monkey Den) I am a little confused here, does this mean it is not possible to expose seam components as stateless session beans (as mentioned in the reference docs). I get the same IllegalArgumentException mentioned in this thread above. What would be the suggested practice to expose the seam components as stateless session beans?
https://developer.jboss.org/thread/186282
Last time we looked at implied volatility, we found a close relationship between historical volatility and implied volatility. This time, we want to look at the relative changes of the DAX and the VDAX and their relationship.

German DAX and VDAX are highly correlated

First, we examine the time series chart of roughly 8 years of DAX and VDAX. We can see that for each spike in the red DAX curve, there is an opposite spike in the VDAX. This is surprising, since we would expect that a large rise in the DAX would also cause a large rise in implied volatility, namely the VDAX. But this is not the case.

A further investigation is possible using a scatter plot: German DAX returns (y-axis) vs. its implied volatility index returns, the VDAX returns (x-axis), from Nov. 16th 2005 until Apr. 17th 2014 (data available at). This scatter plot shows an "x" for each trading day in the past 8 years, with the VDAX return as the x-position and the DAX return as the y-position. The plot shows that there is a close relationship between the two. In fact, a little computation shows that they have a correlation of -0.72, which is very high.

What does that mean? Consider buying a call option. Now, the stock price rises, and the call option price rises because your option is deeper in-the-money: its asset price minus strike price is higher. But at the same time, the time value decays because the implied volatility goes down. So for call option buyers this might be an issue. For put option buyers, this effect works in their favour: the implied volatility rises when the asset price drops.

By the way, this is a well-known effect: the implied volatility skew. Looking at the traded options, the put options further in-the-money already trade at a higher implied volatility; the VDAX represents at-the-money options only.
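The -0.72 figure quoted above is just the Pearson correlation of the two daily return series. As a sketch, here is that computation in plain Python on a tiny made-up sample (the return values below are invented toy numbers chosen to mimic the observed pattern, not the real DAX/VDAX data; the real 8-year series is what gives -0.72):

```python
# Pearson correlation between two return series, computed by hand.
def pearson(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical daily returns: when the volatility index jumps, the stock index drops.
vdax_returns = [0.08, -0.03, 0.05, -0.06, 0.01, -0.04]
dax_returns  = [-0.012, 0.006, -0.009, 0.011, 0.001, 0.007]

print(round(pearson(vdax_returns, dax_returns), 2))  # strongly negative on this toy sample
```

On real data you would feed in the two daily log-return series; a strongly negative value like -0.72 is exactly the opposite-spike behaviour visible in the time series chart.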
https://computeraidedfinance.com/2014/04/22/vdax-revisited-implied-volatility-return-vs-index-return/
Hello, has anyone had luck with how to use this? I have been trying to work out how to set it up. Google released a new programming language called Dart. Dart is a new class-based programming language for creating structured web applications. Developed with the goals of simplicity, efficiency, and scalability, the Dart language combines powerful new language features with familiar language constructs into a clear, readable syntax.

Thanks, Chandra
https://hubpages.com/technology/forum/84682/dart---google-programming-language
A Vue.js 2.0 UI Toolkit for Web.

Element will stay with Vue 2.x. For Vue 3.0, we recommend using Element Plus from the same team.

Links

- Homepage and documentation
- awesome-element
- Vue.js 3.0 migration
- Customize theme
- Preview and generate theme online
- Element for React
- Element for Angular
- Atom helper
- Visual Studio Code helper
- Starter kit
- Design resources
- Gitter

Install

npm install wk-element-ui -S

Quick Start

import Vue from 'vue'
import Element from 'wk-element-ui'

Vue.use(Element)

// or
import {
  Select,
  Button
  // ...
} from 'wk-element-ui'

Vue.component(Select.name, Select)
Vue.component(Button.name, Button)

Join Discussion Group

Scan the QR code using the Dingtalk App to join the discussion group.
https://www.npmjs.com/package/wk-element-ui
I've gone back to doing some work on Gates of Dawn, my terse Python library for creating PureData patches. The idea here is that I want to make synthesizers in PD, but I don't particularly like drawing boxes and connectors using the clunky UI. Here's a brief overview from the Readme file, showing some simple code examples and introducing the key ideas.

Idea #1 : Gates of Dawn uses function composition as the way to wire together the units in Pure Data.

In PureData traditionally, you'd do something like wire the number coming out of a slider control to the input of an oscillator, and then take that to the dac audio output. Here's how to express that using Gates of Dawn:

    dac_ ( sin_ ( slider("pitch",0,1000) ) )

To create the slider you call the slider function (giving a label and a range). Then you pass the result of calling that as an argument to the sin_ function (which creates an osc~ object in PD). The output of that function is passed as an argument to the dac_.

(Note we are now trying to use the convention that functions that represent signal objects (ie. those that have a ~ in their name in PD) will have a _ suffix. This is not ideal, but it's the best we can do in Python.)

In Gates of Dawn, programs are structured in terms of a bunch of functions which represent either individual PD objects or sub-assemblies of PD objects. You pass as arguments to a function those things that are upstream and coming in the inlet, and the return value of the function is suitable to be passed to downstream objects. Things obviously get a bit more complicated than that, but before we go there, let's just grasp the basic outline of a Gates of Dawn program.

Idea #2 : Files are managed inside "with patch()" context blocks.

Here's the complete program:

    from god import *

    with patch("hello.pd") as f :
        dac_ ( sin_ ( slider("pitch",0,1000) ) )

We use Python's "with" construction in conjunction with our patch() function to wrap the objects that are going into a file. This is a complete program which defines the simple slider -> oscillator -> dac patch and saves it into a file called "hello.pd".

Try the example above by putting it into a file called hello.py in the examples directory and running it with Python. Then try opening the hello.pd file in PureData.

If you want to create multiple pd files from the same Python program, you just have to wrap each in a separate "with patch()" block.

Idea #3 : Variables are reusable points in the object graph.

This should be no surprise, given the way function composition works. But we can rewrite that simple patch definition like this:

    s = sin_ ( slider("pitch",0,1000) )
    dac_(s)

The variable s here is storing the output of the sin_ function (ie. the outlet from the oscillator). The next line injects it into the dac. This isn't useful here, but we'll need it when we want to send the same signal into two different objects later on.

NB: Don't try to reuse variables between one patch block and another. There's a weird implementation behind the scenes and it won't preserve connections between files. (Not that you'd expect it to.)

Idea #4 : Each call of a Gates of Dawn function makes a new object.

    with patch("hello.pd") as f :
        s = sin_ ( slider("pitch",0,1000) )
        s2 = sin_ ( slider("pitch2",0,1000) )
        dac_(s,s2)

Here, because we call the functions twice, we create two different oscillator objects, each with its own slider control object. Note that dac_ is an unusual function for Gates of Dawn, in that it takes any number of signal arguments and combines them.

Idea #5 : You can use Python functions to make cheap reusable sub-assemblies.

Here's where Gates of Dawn gets interesting. We can use Python functions for small bits of re-usable code. For example, here is a simple function that takes a signal input and puts it through a volume control with its own slider:

    def vol(sig,id=1) :
        return mult_(sig,num(slider("vol_%s" % id,0,1)))

Its first argument is any signal. The second is optional and used to make a unique label for the control. We can combine it with our current two-oscillator example like this:

    def vol(sig,id=1) :
        return mult_(sig,num(slider("vol_%s" % id,0,1)))

    with patch("hello.pd") as f :
        s = vol (sin_ ( slider("pitch",0,1000) ), "1")
        s2 = vol (sin_ ( slider("pitch2",0,1000) ), "2")
        dac_(s,s2)

Notice that we've defined the vol function once, but we've called it twice, once for each of our oscillators. So we get two copies of this equipment in our patch. Of course, we can use Python to clean up and eliminate the redundancy here:

    def vol(sig,id=1) :
        return mult_(sig,num(slider("vol_%s" % id,0,1)))

    def vol_osc(id) :
        return vol( sin_( slider("pitch_%s"%id,0,1000) ), id)

    with patch("hello.pd") as f :
        dac_(vol_osc("1"),vol_osc("2"))

Idea #6 : UI is automatically laid out (but it's work in progress).

You'll notice, when looking at the resulting pd files, that they're ugly but usable. Gates of Dawn basically thinks that there are two kinds of PD objects: those you want to interact with and those you don't. All the objects you don't want to see are piled up in a column on the left. All the controls you need to interact with are laid out automatically in the rest of the screen so you can use them.

This is still very much work in progress. The ideal for Gates of Dawn is that you should be able to generate everything you want just from the Python script, without having to tweak the PD files by hand later. But we're some way from that at this point. At the moment, if you need to make a simple and usable PD patch rapidly, Gates of Dawn will do it. But it's not going to give you a UI you'll want to use long-term.

Idea #7 : You still want to use PD's Abstractions.

Although Python provides convenient local reuse, you'll still want to use PD's own Abstraction mechanism in the large. Here's an example of using it to make four of our simple oscillators defined previously:

    with patch("hello.pd") as f :
        outlet_ ( vol_osc("$0") )
        guiCanvas()

    with patch("hello_main.pd") as f :
        dac_(
            abstraction("hello",800,50),
            abstraction("hello",800,50),
            abstraction("hello",800,50),
            abstraction("hello",800,50)
        )

In this example we have two "with patch()" blocks. The first defines a "hello.pd" file containing a single vol_osc() call. (This is the same vol_osc function we defined earlier.) Note some of the important extras here:

* outlet_() is the function to create a PD outlet object. When the hello.pd patch is imported into our main patch, this is how it will connect to everything else.

* You must add a call to guiCanvas() inside any file you intend to use as a PD Abstraction. It sets up the graph-on-parent property in the patch so that the UI controls will appear in any container that uses it.

* Note that we pass $0 to the vol_osc function. $0 is a special variable that is expanded by PD into a different random number for every instance of a patch that's being included inside a container. PD doesn't have namespaces, so any name you use in an Abstraction is repeated in every copy. This can be problematic. For example, a delay may use a named buffer as storage. If you import the same delay Abstraction twice, both instances of the delay will end up trying to use the same buffer, effectively merging the two delayed streams into one. Adding $0 to the beginning of the name of the buffer will get around this problem, as each instance of the delay will get a unique name. In our simple example we don't need to use $0, but I've added it as the label for our vol_osc to make the principle clear.

The second "with patch()" block defines a containing patch called hello_main.pd. It simply imports the hello.pd Abstraction four times and passes the four outputs into the dac_.

Note that right now, layout for abstractions is still flaky, so you'll see that the four Abstractions are overlapping. You'll want to go into edit mode and separate them before you try running this example. Once you do that, though, things should work as expected.
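To make the composition idea concrete, here is a minimal sketch of how an API like this can be implemented. To be clear, this is my own toy reconstruction, not the actual Gates of Dawn internals: each helper registers an object in the currently open patch and returns a handle that downstream helpers accept as an inlet.

```python
from contextlib import contextmanager

_current = None  # the patch currently being built

class Patch:
    def __init__(self, filename):
        self.filename = filename
        self.objects = []      # (object_type, args) tuples
        self.connections = []  # (from_id, to_id) pairs

    def add(self, obj_type, *args):
        self.objects.append((obj_type, args))
        return len(self.objects) - 1  # the object's id doubles as its outlet handle

@contextmanager
def patch(filename):
    global _current
    _current = Patch(filename)
    try:
        yield _current
    finally:
        _current = None  # a real library would serialize to a .pd file here

def slider(label, lo, hi):   # control object
    return _current.add("hslider", label, lo, hi)

def sin_(inlet):             # signal object: osc~
    oid = _current.add("osc~")
    _current.connections.append((inlet, oid))
    return oid

def dac_(*inlets):           # audio output: accepts any number of signals
    oid = _current.add("dac~")
    for i in inlets:
        _current.connections.append((i, oid))

with patch("hello.pd") as f:
    dac_(sin_(slider("pitch", 0, 1000)))

print(len(f.objects))   # slider, oscillator and dac
print(f.connections)    # slider -> osc~, osc~ -> dac~
```

The real library obviously does much more (geometry, serialization to the .pd text format, the auto-layout described in Idea #6), but the composition trick itself is just this: functions that create nodes as a side effect and return connectable handles.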
http://sdi.thoughtstorms.info/?tag=pure-data
I recently fixed a bug in some code handing off a string to a C library function. The C function takes a plain char * (no const in either position):

    void use_str(char * s);

so we need an UnsafeMutablePointer<CChar> on the Swift side. Our Swift code was basically this:

    import Foundation

    let s = "abcde"
    let sPtr = UnsafeMutablePointer(mutating: s.cString(using: .utf8))
    use_str(sPtr)

Running this, if you inspect sPtr and sPtr.pointee before the use_str call, you can see that the pointer is bad: it does not point to the contents of the string.* Until Xcode 10/Swift 4.2, it had consistently pointed to zeroed-out memory, and the original implementer "worked around" what they thought was a bug in the C library. When we upgraded, the pointed-to values started being garbage, and there was breakage.

Storing the C string in a local variable first, var sChars = s.cString(using: .utf8)!, fixes the issue. (That done, we can apparently also skip the explicit call to UnsafeMutablePointer(mutating:) and just write use_str(&sChars).)

So it looks like the return value of cString(using:) is invalid immediately unless explicitly copied. Shouldn't it live as long as the String that produced it? If I understand correctly what's happening, this Objective-C is equivalent to the original code:

    NSString * s = @"abcde";
    use_str([s cStringUsingEncoding:NSUTF8StringEncoding]);

which is perfectly valid, as far as I know (use_str may need to copy the bytes, of course, but that's a separate issue).

Alternatively, can/should there be a warning about UnsafeMutablePointer(mutating:) pointing directly to the result of a method call like this? Maybe our original Swift code is instead more equivalent to this, taking the address of a message send expression, which is illegal:

    void update_char(char * c);
    //...
    update_char(&[s characterAtIndex:0]);

Please help me correct errors in my understanding; I'm trying to better grasp the situation and how Swift pointers operate. We should have caught this bug in our code, but I'm not sure why it occurred in the first place.

* In fact, if you add another string/pointer pair, in Swift 4.2 the two pointers consistently hold the same address!
https://forums.swift.org/t/invalid-pointer-to-the-result-of-cstring-using/19285
Finance: Helen's Pottery, Hanebury Manufacturing, Maine Electric

1. Helen's Pottery Co.'s stock recently paid a $1.50 dividend (D0 = $1.50). This dividend is expected to grow by 15% for the next 3 years, and then grow forever at a constant rate, g. The current stock price is $40.92. If ks (the required rate of return) = 10%, at what constant rate is the stock expected to grow following Year 3?

2. Hanebury Manufacturing Company (HMC) has preferred stock outstanding with a par value of $50. The stock pays a quarterly dividend of $1.25 and has a current price of $71.43. What is the effective rate of return on the preferred stock?

3. Assume that as investment manager of Maine Electric Company's pension plan (which is exempt from income taxes), you must choose between Exxon Mobil bonds and GM preferred stock. The bonds have a $1,000 par value; they mature in 20 years; they pay $35 each 6 months; they are callable at Exxon Mobil's option at a price of $1,150 after 5 years (ten 6-month periods); and they sell at a price of $815.98 per bond. The preferred stock is a perpetuity; it pays a dividend of $1.50 each quarter, and it sells for $75 per share. Assume interest rates do not change. What is the most likely effective annual rate of return (EAR) on the higher-yielding security?

Solution Summary

The attached solution files contain detailed explanations of the problems and the calculations of the answers.
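As an illustration of the kind of calculation involved (my own sketch, not the attached solution files), problem 2 only needs simple compounding: the quarterly yield is the dividend divided by the price, and compounding it over four quarters gives the effective annual rate.

```python
# Problem 2: effective annual rate of return on the preferred stock.
quarterly_dividend = 1.25
price = 71.43

periodic_yield = quarterly_dividend / price   # yield per quarter, about 1.75%
ear = (1 + periodic_yield) ** 4 - 1           # compound over four quarters

print(f"{ear:.2%}")  # roughly 7.2% effective annual rate
```

Problems 1 and 3 follow the same pattern of discounted-cash-flow arithmetic (a multi-stage dividend growth model and a yield-to-call versus perpetuity comparison, respectively), just with more terms.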
https://brainmass.com/business/bond-valuation/finance-helens-pottery-hanebury-manufacturing-maine-electric-53335
It would be good if globals.pl defined a sub CallProcessMail which contained the system() call, so that all the separate files could just call that single sub. In a Win32 installation it's often necessary to whip up several parameters to the system call, and it's rather cumbersome to run around the source files pasting the new code everywhere. Combining the calls into a single place would make it easy to change the system call syntax. I currently use the following:

    sub CallProcessMail {
        system("d:\\perl\\bin\\perl.exe",
               "-I\\\\t1\\\\s1\\bugzilla\\",
               "\\\\t1\\\\s1\\bugzilla\\processmail.pl",
               @_);
    }

Actually, processmail should be a package/sub, and then we wouldn't have to go through all these problems. I'll attach a patch which both moves it to a sub and causes processmail to return values rather than outputting them directly. Callers of ./processmail have been changed. Once post_bug and process_bug are templatised, the extra template code just becomes an INCLUDE, like it is for attachments. I've kept the interface to processmail as is for this patch.

mattyt, timeless: Here's the patch I was talking about. Dave: do we want this for 2.16? As well as fixing the Windows issues, it makes the templatisation a bit easier, and allows us to defer calling processmail to when we display it (and we can then FLUSH, too, after each one), so that you still see some results even if the mailserver is down. Note that after applying the patch, you need to |mv processmail Processmail.pm|; I've done the diff this way so that the differences can easily be seen.

Created attachment 73428 [details] [diff] [review] patch

I think we're too close to 2.16 for this. Moving off.

I just posted bug 140782, which has a strong relationship to this one.

This should be 2.16 imo, though I wholly understand why you wouldn't want it that way. The processmail stuff is really way too complicated to be made to work on Windows platforms.
I say this having just worked on it for several hours with the latest CVS version. ;-I

OK, moving back to 2.16 and making a blocker. We just completely broke email altogether on Win32 (not sendmail, but processmail; see bug 140782), and this is the right way to fix it. Although we still aren't officially supporting Win32, we shouldn't break things that previously did work on Win32.

If this patch gets revived:

- The current bug/process/results.html.tmpl template should be enhanced to contain the "enter bug" results, and bug/create/created.html.tmpl changed to use it. (Same with attachment/updated.html.tmpl IMO.)
- Let's not use StudlyCaps for CGI parameters.
- Should we take the opportunity to kill off data/nomail? Email prefs are the correct way to do this now.
- Can we think of a better name than "Processmail" for this? I've never liked that name.

Gerv

Processmail the module could be called 'BugChangeNotifier' (or something like that) to indicate that: 1) it is used to send mail about bug changes, not just any mail; and 2) it is essentially a notifier, not a mailer: mail is the way to do it now, but in the future other notification methods should be available and possibly incorporated into the same module.

Created attachment 81491 [details] [diff] [review] v2

OK, let's try this. Now that all the places which want to use this are templatised, it's a bit simpler. Before applying, do |cp processmail Bugmail.pm|

Gerv:

> The current bug/process/results.html.tmpl template should be enhanced to
> contain the "enter bug" results, and bug/create/created.html.tmpl changed
> to use it. (Same with attachment/updated.html.tmpl IMO.)

I don't know what this means. Take a look at what I've done, and see if this is what you wanted.

> let's not use StudlyCaps for CGI parameters

Bah. _ sucks, and InterCaps is easier to type. Fixed anyway, in most places.

> Should we take the opportunity to kill off data/nomail? Email prefs are the
> correct way to do this now.

No.
We are taking this opportunity to hoepfully fix this for win32. Nothing else. The only other thing I am fixing is that the easiest way to do this is to have it go from the template, so that if mail is slow and there are lots of dependancies, you at least see what is happening in stages, like 2.14 does. This doesn't actually really matter, since processbug uses lots of little templates, but at least you'll be able to see which bug its dying on. > - Can we think of a better name than "Processmail" for this? I've never liked that name. I used Bugmail.pm instaead of BugNotifications.pm, or something. because this is still strongly related to bugmail, and would require a rewrite to be different. Other comments: - in globals.pl, I have to dereference the hash (It took me ages to work out that thats what the problem was). Alternatives include just not passing a ref in at all (can we then get at it from within TT?) or having Bugmail::Send take a hash reference, rather than a hash. - data/nomail sucks. I read it in once per use, which should have a slight perf improivment when we go through lots of dependancies, but has the downside that if I didn't use the wrapper func in globals.pl, .we'd do that for every cgi Or will a use be pulled in always, anyway, at compile time? I'll have to test. If so I'll just read it in once per bug, keeping the current usage. - 'people' isn't such a good name. Suggestions for a better one? - Its been lightly tested - it sends mail, and appears to get most of the -force stuff right. I haven't done a detailed test, though, esp with the cc forcing code. - In the new tempalte, I don't seem to be able to take stuff out of the array into a pair of variables - I have to use .0 and .1. Is there a better way to do this with TT, or is this a TT limitation? - can someone on windows please test this, and let me know if it fixes those problems? Ok, that patch does it. 
Preliminary testing shows it works fine on Windows, although you still have to spank Bugmail.pm a bit to enable mailing on Win platforms (as Windows boxes don't tend to have a '/usr/lib/sendmail' handy). One code change I had to do to get it working: + @{$force{'CClist'}} = ($people{'cc'} && scalar($people{'cc'}) > 0) ? map(trim($_), $people{'cc'}) : (); to @{$force{'CClist'}} = ($people{'cc'} && scalar($people{'cc'}) > 0) ? map(trim($_), @{$people{'cc'}}) : (); (note the added deref near the end of the line). Wonder if this change breaks anything in your patch? It worked fine with me. Bug 140782 is the real (potential) release blocker; the fix for this bug fixes that one but is not otherwise necessary for the release. At this point in the development cycle only the simplest and least risky fix for a blocker bug should be considered. Is the fix for this bug really the simplest and least risky approach? For that matter, is it really necessary to block a release because of a Win32 issue with code that had to be customized by Win32 installers of every previous release? There is going to be a release candidate, I think. If this is true, then this fix will necessarily be tested by everyone trying it out, since hardly anybody will play without completely without email. And we have already a report that it's working (even) on windows, so I don't see the problem with this. Also note that email handling currently is a mess which is complained about (e.g. on the newsgroups) quite often. Oops, this patch is indeed more complicated than just centralizing the calls to sendmail. Sendmail has nothing at all to do with this patch - on windows, we fail before even getting there. The only way arround it would be to print all of the template up to where we;d call process mail, thencall processmail, then do the test of it. The other way is to change all that code to use work explictly, ad mentioned in the docs, but that would then need testing on perl5.005, too... 
The diffs are reasonably trivial though, except for that type in the cc bit (I mentioned that I hadn't tested that part...) We need processmail to be able to be called directly, which means making it a package. Or we could write to a temp file, I suppose, but that will have its own issues. One other issue - why does processmail-the-script set the umask to 0? I removed that, and it seemeed to work - does processmail create files anymore, or is this a relic from teh pre-bug_activity days? Myk, re comment 10: While it's true that hacking the source has always been necessary on Win32, this is a different issue. If the Bugzilla-Sendmail border breaks, that's a somewhat external issue which can be left for the Win32'ers to fix. But if, say, process_bug-processmail border breaks, that's a Bugzilla bug (albeit caused by a lacking feature in ActiveState Perl). If this fix isn't checked into 2.16, everyone running Win32 will have to manually apply something like bbaetz's patch AND change processmail to use another mailer. This patch removes the first step (which is a new issue), and the latter is way more trivial (and exactly what's been needed for the past releases as well). > We need processmail to be able to be called directly, which means making it a > package. Can't we do what I did with bug_form.pl - wrap the entire thing in a subroutine, require "processmail" at the top, and call the subroutine? That's only a few lines of change. Gerv No, because that would still print to stdout - thats the reasl problem. As I said, we could redirect to a temp file, then read in from that file afterwards. But thats really messy and ugly, and has its own security issues. I've basically done what you did, really. Making it a package involved changing the file's name, and adding |Package Bugmail;| to the top (Ok, and the AUTOLOAD, too) In fact, the 'new' bug_form.pl should have been done that way. This is not a fundamental change in how things work - just look at the changes visually. 
If you want me to walk though the changes to processmail/Bugmail.pm, then I will, but they're really quite simple. Comment on attachment 81491 [details] [diff] [review] v2 >+if (exists $::FORM{'rescanallBugmail'}) { >+ use Bugmail; >+ Status("OK, now attempting to send unsent mail"); >+ SendSQL("SELECT bug_id FROM bugs WHERE lastdiffed < delta_ts AND delta_ts < date_sub(now(), INTERVAL 30 minute) ORDER BY bug_id"); >+ my @list; >+ while (my @row = FetchSQLData()) { >+ push @list, $row[0]; >+ } Isn't while (MoreSQLData()) { push (@list, FetchOneColumn()); } the right way to do this? >+ print "<br>" . scalar(@list) . " bugs found with possibly unsent mail."; Do you need "scalar" here? >+ foreach my $id (@list) { >+ if (detaint_natural($id)) { >+ Bugmail::Send($id); >+ } Why are we detainting bug ids we get from the DB? >- print("Run <code>processmail rescanall</code> to fix this<p>\n"); >+ print qq{<a href="sanitycheck.cgi?rescanallBugmail=1">Click here to send these mails</a><p>\n}; Nit: add a full stop outside the link. >+ # SendBugmail- sends mail about a bug, using Bugmail.pm >+ 'SendBugmail' => sub ($;$) { House style seems to be not to use function prototyping ($;$). >- [% mailresults %] >+ [% PROCESS "bug/process/bugmail.html.tmpl" id = bugid %] <sigh> I'm not quite sure how it got so we have four different templates for that square box you get when you change a bug or something else. The Right Thing is for all of these to be centralised, and bugmail.html.tmpl to be part of that one template. However, I'm not sure if we'll manage this for 2.16 :-( (The template which appears to be doing the unifying is bug/process/results.html.tmpl.) >+ # Contributor(s): Bradley Baetz <bbaetz@student.usyd.edu.au> >+ #%] >+ >+[%# INTERFACE: >+ # id: bug id to send mail about >+ # people: hash for processmail >+ #%] That's a bit wet :-) Try: #. >+ # people: hash; processmail params. Optional As above. 
>+# This is run when we load the package >+if (open(FID, "<data/nomail")) { >+ while (<FID>) { >+ $nomail{trim($_)} = 1; >+ } >+ close FID; >+} Can't we just remove support for this entirely (cut out the code)? Impact: it wouldn't work any more if people are using it. Possible mitigation: - Get checksetup.pl to read the file, if it exists, and turn off email prefs for all the users in it. Then delete the file. Do we even need to do that? >+# This is a bit of a hack, basically keeping the old system() >+# cmd line interface. Should clean this up at some point Is this still true? You appear to be using a nicer interface. >+ # Make sure to clean up _all_ package vars here. Yuck... >+ $nametoexclude = $people{'changer'} ? $people{'changer'} : ""; $nametoexclude = $people{'changer'} || ""; >+ #die "FOO: $people{'changer'}\n"; Please remove debugging code :-) >+ >+ return ProcessOneBug($id); Can this sub not now be rolled into the above sub? It appears to do very little. Gerv - 'people' isn't such a good name. Suggestions for a better one? conspirators? :-) Gerv > Isn't > while (MoreSQLData()) { > push (@list, FetchOneColumn()); >} >the right way to do this? Hey, I jsut moved code ;) But yes, I'll change that. >>+ print "<br>" . scalar(@list) . " bugs found with possibly unsent mail."; >Do you need "scalar" here? I want to know the number of items in teh list - how else do I do so? >>+ foreach my $id (@list) { >>+ if (detaint_natural($id)) { >>+ Bugmail::Send($id); >>+ } >Why are we detainting bug ids we get from the DB? heh. Because processmail runs with dbi-taint mode enabled. I personally think that that is stupid, but my attempts to remove it met with opposition, so... However, that is no longer part of processmail, so I can and will fix that > House style seems to be not to use function prototyping ($;$). My intent is to change house style. Once we no longer 'require' all this stuff, we'll be able to have perl test this for us at compile time. 
> <remove data/nomail>

There's a bug on moving this into the db. This bug will be as minimally invasive as possible - I am not fixing this now. Let's just leave it here, and deal with it later, either by removing it entirely, or by having a per-user setting. We can fix the 'vacation mode' bug at the same time.

>>+# This is a bit of a hack, basically keeping the old system()
>>+# cmd line interface. Should clean this up at some point
>Is this still true? You appear to be using a nicer interface.

Yes, this is still true. What should happen is that the activity log should be read for additions and subtractions. Then we can call ProcessOneBug, passing only the bugid in. The 'people' hash is just the command lines made nicer, since I figured that I could at least get rid of @ARGLIST. (We may pass the changer in, to make things simpler. We currently have an issue where if mail isn't sent, the changer who does kick off the larger mail is considered to have changed everything. We may want to fix this, although since that case should, in theory, never happen, it may not be worth it)

>>+ return ProcessOneBug($id);
>Can this sub not now be rolled into the above sub? It appears to do very
>little.

On the contrary; this sub does all the work. It's the previous sub which has the issues, since it has to clear all the package vars. I want to keep the two logically separate.

You didn't comment on my reference-vs-hash thing - do you know the TT rules for that sort of thing?

Yes, people isn't a good name - see my comments. 'conspirators' is worse, though.

> >Do you need "scalar" here?
> I want to know the number of items in the list - how else do I do so?

print "<br>" . @list . " bugs found..." ?

> > House style seems to be not to use function prototyping ($;$).
> My intent is to change house style. Once we no longer 'require' all this
> stuff, we'll be able to have perl test this for us at compile time.

That's as may be.
But currently house style is not to do it - and I, for one, will oppose a change. Function prototypes in Perl gain nothing that well-structured code and early parameter reading into sensibly named variables doesn't get you, but just cause hassle and errors when they get out of sync with reality.

> You didn't comment on my reference-vs-hash thing - do you know the TT rules
> for that sort of thing?

Er... I didn't quite understand the situation.

> Yes, people isn't a good name - see my comments. 'conspirators' is worse,
> though.

I was only joking :-)

Gerv

I just wanted to note that making it a .pm will be a good thing. If for nothing else, to save me tearing my hair out trying to figure out why process_bug.cgi's output was repeating itself. process_bug.cgi does not handle processmail having the wrong path to perl very well, and goes nuts. The regexp given in the docs:

perl -pi -e 's@#!/usr/bonsaitools/bin/perl@#!/usr/bin/perl@' *cgi *pl Bug.pm

doesn't quite work against processmail.

don't forget the voters in that hash...

No, voters aren't covered there. Hmm. That probably means that the pref to send mail when you're added/removed from voting probably doesn't work. I'll have to test that at some point, but it's a separate bug.

Actually, changing your votes _never_ sends mail, at all, so that option doesn't make sense. It's only manually put into the hash so that there's a valid array in there when we iterate over the types of people.

Created attachment 82311 [details] [diff] [review] v3

OK, here comes v3. Again, remember to |mv processmail Bugmail.pm| first. The image_map changes in dependency-graph.html.tmpl are unrelated to this bug.

> - let's not use StudlyCaps for CGI parameters

There is a rescanallBugmail param. Maybe it should be rescanall_bugmail.

> [% mail = SendBugmail(id, people) %]

In the long run, we shouldn't rely on a template calling this function (IMO). See also the SyncAllPendingShadowChanges discussion. But for now, it's probably fine.
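Gerv's `print "<br>" . @list . ...` alternative works because of Perl context rules: the `.` concatenation operator imposes scalar context, and an array in scalar context yields its element count, so the explicit scalar() is redundant there. (Inside a double-quoted string, by contrast, `@list` would interpolate its elements.) A quick illustration:

```perl
my @list = (101, 102, 103);

# Concatenation puts the array in scalar context, so both lines
# print the element count (3), with or without scalar().
print "<br>" . @list . " bugs found with possibly unsent mail.\n";
print "<br>" . scalar(@list) . " bugs found with possibly unsent mail.\n";

# Interpolation is different: this joins the elements with spaces.
print "@list\n";    # prints "101 102 103"
```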
I'm going to apply this now to my installation, and I will try to test it a bit. But if this is going to be checked in for 2.16, then it should go in asap, so that we have more time to catch any regressions it may cause. If there are any bugs introduced with this patch, we certainly won't find them by just looking at the patch.

Template->process() failed twice.

First: assigned_to,bug_file_loc,bug_severity,bug_status,bug_type,cclist_accessible,component,everconfirmed,groupset,keywords,op_sys,priority,product,qa_contact,rep_platform,reporter,reporter_accessible,resolution,short_desc,status_whiteboard,target_milestone,version,votes, lastdiffed, now() FROM bugs WHERE bug_id = 1' to the database at globals.pl line 251.

Second: (bit & 64504) != 0 from groups where name = 'tweakparams'' to the database at globals.pl line 251.

Do I need to do some additional untainting for my "bug_type" stuff, or is this a bug in the patch? This happened when I tried to comment on one of my test bugs where I am reporter, assignee and qacontact at the same time.

Oops - yeah, ignore that template.

Why shouldn't we rely on the template to send mail? That's effectively what the old code did, and it meant that people could see what was taking all this time. (To do so here, I probably need to add some [% FLUSH %] calls, though)

Those errors look really unrelated to my patch. Do you get them without it?

No, the errors went away when I backed out your patch. But this doesn't necessarily mean that it's a problem with your patch. I have an additional column bug_type in the bugs table:

bug_type | enum('DEFECT','ENHANCEMENT','FEATURE','TASK','QUESTION','PATCH') | | | DEFECT |

So if this patch adds some additional taint stuff I may be required to make some additional changes. But maybe someone else can test the patch first? Is it already live on landfill?

Maybe you are right with the email sending stuff. But we should document somewhere all the places where we rely on templates to call some backend code.
Without looking at your patch, I don't know. Work out which value is tainted, and detaint it. There shouldn't be any extra requirements from my patch in that regard, though. In any event, this doesn't happen without your addon patch, so...

Ok, I finally found out that $id is tainted here, in ProcessOneBug:

SendSQL("SELECT " . join(',', @::log_columns) .
        ", lastdiffed, now() " .
        "FROM bugs WHERE bug_id = $id");

Note that your patch removes a call to detaint_natural($id) before calling ProcessOneBug:

- foreach my $id (@list) {
- if (detaint_natural($id)) {
- ProcessOneBug($id);
- }

Placing a detaint_natural($id); at the beginning of ProcessOneBug seems to fix it for me. But maybe it belongs in the Send(...) sub above? I wonder why this error is not showing up in your version...

But I removed this from sanity check - is that where you are having your errors? (Actually, now that I think of it, that detainting code probably does need to stay) The difference is that now the number we pass to the bug stuff needs to be untainted. If you remove the two lines in Bugmail.pm changing $db->{'taint'}, does that fix it for you? Otherwise there probably is somewhere which needs to be detainted. However, this works for me, so it could be a perl 5.005 thing. Can you get a stacktrace, so that I can see where it is failing? That second error looks really strange.

> I removed this from sanity check - is that where you are having your errors?

No, I'm getting the error when commenting on a bug, as I said in comment 27.

> remove the two lines in Bugmail.pm changing $db->{'taint'}
> Otherwise there probably is somewhere which needs to be detainted.

Well, it obviously doesn't work for me, and I don't understand the problem. If ProcessOneBug expects $id to be untainted, and now we are passing it through the template processing stuff which eventually calls Bugmail::Send(...), then I think it's quite understandable that it needs to be detainted before passing it to ProcessOneBug, isn't it?
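The detaint_natural() being discussed is, at heart, a regex capture that untaints in place: under taint mode, substrings captured by a pattern match are considered clean, so matching a value against /^(\d+)$/ and copying the capture back both validates it and launders it. A minimal sketch of the idiom (a hypothetical variant, not Bugzilla's exact implementation; the explicit guard around $1 is added here for safety):

```perl
# Validate-and-untaint a value that should be a natural number.
# Overwrites the caller's variable ($_[0] aliases it) with the captured
# digits on success; sets it to undef and returns false otherwise.
sub detaint_natural {
    if ($_[0] =~ /^(\d+)$/) {
        $_[0] = $1;       # regex captures are untainted under -T
        return 1;
    }
    $_[0] = undef;        # reject anything that isn't purely digits
    return 0;
}

my $id = "42";
print detaint_natural($id) ? "accepted: $id\n" : "rejected\n";

my $evil = "42; DROP TABLE bugs";
print detaint_natural($evil) ? "accepted: $evil\n" : "rejected\n";
# prints "accepted: 42" then "rejected"
```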
Well, I don't understand this stuff at all, but it sounds plausible to me.

> stacktrace

I sent a page with confess output in private mail.

> That second error looks really strange.

I think it's a followup from the first error. It goes away when the first error is fixed.

When I turned on sqllog, I was at first very confused where the last two sql queries came from, since they did not seem to match with the code. I think they were generated by an attempt to generate a nice error template or something, or even for the footer. The thing I got in the email was just a "you didn't select any bugs to change" thing.

Just copy and paste the stack to the bug, here.

OK, impact on this one is a little too high to do this close to a release. The Windows folks will have to wait for 2.17.1. However, 2.17.1 WILL be released within a month of 2.16, and I plan on this being one of the first things checked in after 2.16 goes out. Maybe we'll even put this patch by itself back on top of 2.16 and make it a 2.16.1. We'll RC the 2.16 branch tonight.

Created attachment 82711 [details] The requested stack trace from confess.

I think the issue is just related to my old perl 5.005_02 installation, so it shouldn't hold up any progress on this bug, as long as you require a newer perl or put in the one-line workaround. Great news about the RC1 :-)

If this works with 5.005_03, then yes, we'll just up the perl version. OTOH, are we going to require 5.6 after 2.16, in the end?

Dave, I think it's a good idea to relnote the Windows problems and do the 2.16.1 pretty soon after 2.16 is out so that even Windows people can get on with the upgrade. There are lots of reasons to switch from 2.14 to 2.16, and not everyone is willing to use a developer version (2.17.1).

If 2.16.1 includes this patch and the one for Bug 84875, this will remove 90% of the modifications needed to work under Windows...

Oops, Bug 84876, not 84875...

That's a good idea...
84876 has had a lot of work done on it, but was waived off on 2.16 because those involved (Gerv) were busy getting 2.16 RC1 out the door, and there were some details we were still trying to get nailed down on the patch for 84876. I said in 84876 (comment #72) that I'd backport the patch for 84876 if anyone wanted it, but releasing a 2.16.1 with it in there makes even more sense.

Why not make 2.16.1 the first "Windows-ready" version?

Because it would be 2.17.1, not 2.16.1 - i.e. a development release, not a stable one.

This is really a semantic issue (but then again, as one of my software engineering profs says, life is nothing but semantics), but why can't it be released as 2.16.1? We released a 2.14.1 when we had security "issues"; if we can be very controlled in what we might put into 2.16.1 (maybe fixes for this and 84876, or a subset of 84876), and relnote the fact that it's a release that makes "Bugzilla do sane things on Windows, no need to upgrade if you're on a real OS," I shouldn't think it would be a development release. It's not meant to be used in development situations, or conversely: we *should* have a release that "does the sane thing on Windows," and it *should* be a stable version... we should give the impression, via the version number, that the patches applied to 2.16 are stable enough to be used in a (Windows) production environment, assuming they indeed are.

I don't think there will be "Windows support" until we have a developer who runs Bugzilla under Windows.

Depending on what you call a "developer", I can do the job! I am running 2 systems with 2.14 and a test 2.16, and have already submitted some "win32" patches.

Jouni's another one, see , . That makes two already. Matty, you don't want to introduce first and second class developers, do you? ;-)

OK, the taint thing is in 5.00503, too - I reproduced this on landfill. The test case is really odd.
It appears that having _anything_ which is tainted in the vars hash will cause this:

use Template;

my $template = Template->new({
    INCLUDE_PATH => '.',
}) || die Template->error();

my $vars = {
    'is_tainted' => sub {
        return not eval { my $foo = join('', @_), kill 0; 1; };
    }
};

$vars->{'foo'} = $ENV{'PATH'};
$vars->{'var'} = 1;

print "Perl: " . is_tainted($vars->{'var'}) . "\n";

with a template of:

[% is_tainted(var) %]

Comment out the $vars->{'foo'} line, and everything is ok. This looks really odd. The only thing I can think of is a builtin function in perl5.005 somewhere which converts the hash to an array and then back, but that's a bit far-fetched. Anyone got some ideas? (The test is in ~bbaetz/tainttest/ on landfill, btw)

My only idea would be to look at the compiled code of the template, and then try to find out which function is responsible for this. But I have absolutely no experience in this area, so this may be a stupid idea...

I think it's doing this earlier. Actually, this is probably why TT only lets you use hash refs - stuff is put into an array. I'll have to think about this.

Just a status report: I pulled a fresh CVS copy on 9th May at about 10:00 GMT, patched it with attachment 82411 [details] [diff] [review] (bbaetz's patch v3) and have been running it in production use ever since. No problems with mailing at all. Platform W2k, IIS5, ActiveState Perl 5.6.1.

So the patch seems to be working quite nicely apart from the taint problems you've been discussing. What does it take to get it on the trunk, to get broader testing?

I've been running with this patch for some time, too, and I haven't heard of any problems, besides this one-liner to workaround the taint problem. (But that doesn't mean too much; most of our users wouldn't even complain if Bugzilla was down for a day or two ;-)

Mainly me finding time to tidy this up... I'm starting to get convinced this might make it to 2.16. Andreas, you're using Solaris, right?
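The is_tainted closure in that test script is the classic Camel-book taint probe: doing something "dangerous" (kill) with the value inside an eval dies under taint mode only if the value is tainted, so the eval's success tells you the taint status. A standalone sketch of the same idiom; note that without the -T switch nothing is ever tainted, so both checks below report untainted:

```perl
# Under 'perl -T', the eval dies ("Insecure dependency") when @_ holds
# tainted data; without -T it always completes, so is_tainted() is false.
sub is_tainted {
    return not eval { my $foo = join('', @_), kill 0; 1; };
}

my $literal = "hello";                                  # never tainted
my $path = defined $ENV{'PATH'} ? $ENV{'PATH'} : "";    # tainted only under -T

print "literal: ", (is_tainted($literal) ? "tainted" : "untainted"), "\n";
print "path:    ", (is_tainted($path)    ? "tainted" : "untainted"), "\n";
```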
Hearing from a few more folks (Linux and Windows) that they've used this successfully without problems would be convincing. :)

Yes, my installation is on a heavily patched solaris 2.6 Generic_105181-16 on a sun4u sparc SUNW,Ultra-2 with currently only one processor (though there was a time when it had two), running perl 5.005_02 and mysql Ver 9.36 Distrib 3.22.27, for sun-solaris2.6 (sparc), using sendmail. :-)

Created attachment 84192 [details] [diff] [review] v4

OK, this adds the detaint_natural in, and changes the interface so that we pass a hash ref, not a hash, into Bugmail::Send. Any other comments?

Note that the wait time is actually a real pain - I almost double-submitted a review on a bug with lots of people cc'd, because it was taking so long before I got any feedback. If this patch goes in, and I add a [% FLUSH %] call both before and after sending mail, then we get better perceived performance.

Comment on attachment 84192 [details] [diff] [review] v4

I have plans on reviewing this soon - if you're kind enough to give me a version that applies cleanly on the trunk tip. :-)

Assigned -> me, per a conversation on IRC to this effect (doing this so it shows up in my bug lists).

Created attachment 95210 [details] [diff] [review] v5

This version applies cleanly to the trunk as of right now (which isn't saying much). jth/other Windows folks: Could you please apply this new version of the patch and let me know how it goes? I'm driving this patch through to checkin for bbaetz; he's busy right now. I'd like to get this checked in before it rots too much, since the trunk is not as static as, say, the 2.16 branch, so your help in this area would be appreciated.

As I mentioned on irc, I've changed my mind - the template should not have the job of calling code to process mail messages. Also, the .pm should be in the newly-created Bugzilla/ directory.

Ok. My Win32 testing platform has almost recovered from the crash in late July, so I plan on testing this next week.
Two weeks away from computers has been good, although not for Bugzilla ;-)

Comment on attachment 95210 [details] [diff] [review] v5

Ok, I've quickly tested this. There are a couple of problems I encountered:

The patch doesn't apply cleanly (the first hunk of processmail fails).

Whenever Bugmail::Send is about to get called, the following error pops up:
undef error - Undefined subroutine &Bugmail::Send called at globals.pl line 1666.

These occur on both Windows and Linux. Unfortunately I need to run now, so I can't debug this further now. :-( If you can create another version of the patch, I'll certainly test more.

Jouni: Thanks for testing that; I'll get the patch updated to HEAD, address the issues bbaetz raised in comment 62 and test it on Linux.

This would be a massive perf gain, as we then wouldn't need to reload all the modules every time we call processmail (since they'd be in the same process).

In case anyone cares, I'm gonna try to get this fixed for 2.17.1, since it's a blocker and all. Everyone together now: cross your fingers.

*** Bug 125688 has been marked as a duplicate of this bug. ***

Created attachment 110202 [details] [diff] [review] v6

Ok, let's get this fun bus back on the road. This is an updated version of v5, which I ended up patching manually because stuff has changed so much. Some things to note:

-- I removed a call to processmail in CGI.pl, having to do with voting; I don't know if I have all the correct variables necessary.

-- In the new BugMail.pm, calling Taint on the $::db handle caused things to blow chunks (complain about tainting); since it was noted that that stuff's a work in progress, I commented it out and all works, but don't really know if that's the right thing(tm) to do.

-- The patch is basically v5, but it's been updated to head, which involved a lot of general changes; for instance, changes to Bugzilla/Template.pm, which didn't exist when v5 was written.
-- I modified Bugzilla::BugMail::Send to return a hashref instead of an array, which made the code a bit easier to understand. You can play around with a current HEAD installation + this patch at I did some simple things like file bugs and make changes, and I got the emails, but please try to break it. Any Win32ers on CC: the faster you can test this, the quicker we can knock out another [needed for Win32bz] bug. Comment on attachment 110202 [details] [diff] [review] v6 New patch coming shortly. Created attachment 110301 [details] [diff] [review] v7 This patch is v6 + the fix for bug 183388, which was checked in today. It was merged manually, since it's such a small patch. I'll be cc'ing the patch author, asking him to verify the merge. Cc'ing the patch author for bug 183388; can you please verify my manual merge of the fix for that bug on v7 of this patch? Thanks! Comment on attachment 110301 [details] [diff] [review] v7 Since reviewers@ didn't listen to me, we'll do this this spammish way... Initial note... In sanitycheck.cgi..... 1) s/use Bugmail/use Bugzilla::BugMail/ 2) (nit) If this is being USEd, should it be at the top? Could it be REQUIREd instead? re: comment 74: Ignore that 'use' statement behind the curtain. ;-) It's from bbaetz's first version of the patch, and both issues are moot, since I removed the line entirely, and it works fine. I also had to change a line further down to be: Bugzilla::BugMail::Send($id); Since I'm sure there will be another version or two of this patch, look for it in v8. re: comment 75, where's v8? :-) re comment 76: Why it's in your local grocer's freezer, of course! :-) v8 is still in my local tree; the differences between v7 and v8 are two lines that don't really affect the core patch (just the cleanup features in sanitycheck.cgi), so I didn't see a reason to go through the process of canceling all the r= requests, posting a new page, and then re-requesting everything. 
I did this, in part, because I'm sure there will be a v8 that includes more than just those two-line changes after those first r='s are done. Comment on attachment 110301 [details] [diff] [review] v7 > Index: CGI.pl > =================================================================== > + $vars->{'people'} = { 'changer' => $who }; As bbaetz commented in comment 8, people isn't such a good name. "mailaddressees"? "addressees"? "recipients"? > Index: post_bug.cgi > =================================================================== > -if (defined $::FORM{'qa_contact'}) { > - push (@ARGLIST, "-forceqacontact", DBID_to_name($::FORM{'qa_contact'})); > +if (defined $::FORM{qa_contact}) { > + $vars->{'people'}->{'qa'} = DBID_to_name($::FORM{'qa_contact'}) Nit: please retain the quotes around the hash key. > push (@{$vars->{'sentmail'}}, { type => 'created', > id => $id, > - mail => $mailresults > + #mail => $mailresults This should be removed rather than commented out - same for the two other occurrences in this file. > Index: process_bug.cgi > =================================================================== > - # Save off the removedCcString so it can be fed to processmail > - $removedCcString = join (",", @removed); > > # If any changes were found, record it in the activity log > if (scalar(@removed) || scalar(@added)) { > @@ -1396,6 +1394,7 @@ > LogActivityEntry($id,"cc",$removed,$added,$whoid,$timestamp); > $bug_changed = 1; > } > + @ccRemoved = @removed; > } This does do a copy rather than just create another reference to the same array, right? 
> @@ -1709,11 +1691,7 @@
> CheckFormFieldDefined(\%::FORM,'comment');
> SendSQL("INSERT INTO duplicates VALUES ($duplicate, $::FORM{'id'})");
>
> - $vars->{'mail'} = "";
> - open(PMAIL, "-|") or exec('./processmail', $duplicate, $::COOKIE{'Bugzilla_login'});
> - $vars->{'mail'} .= $_ while <PMAIL>;
> - close(PMAIL);
> -
> + $vars->{'people'} = { 'changer' => $::COOKIE{'Bugzilla_login'} };

The changer appears always to be the current user - do we need to specify this everywhere explicitly?

> Index: sanitycheck.cgi
> ===================================================================
> - print("Run <code>processmail rescanall</code> to fix this<p>\n");
> + print qq{<a href="sanitycheck.cgi?rescanallBugmail=1">Click here to send these mails</a>.<p>\n};

Please let's not have "Click here". Just "Send these mails" would be a fine link. Also, wouldn't this rerun all the sanity checks that you had just run _again_? On a big installation, that would be rather irritating. Perhaps we could make rescanallbugmail an exclusive action, and exit() after doing it?

> Index: Bugzilla/Template.pm
> ===================================================================
> RCS file: /cvsroot/mozilla/webtools/bugzilla/Bugzilla/Template.pm,v
> retrieving revision 1.1
> diff -u -r1.1 Template.pm
> --- Bugzilla/Template.pm 20 Dec 2002 07:21:30 -0000 1.1
> +++ Bugzilla/Template.pm 29 Dec 2002 11:55:59 -0000
> @@ -198,6 +198,18 @@
> # UserInGroup - you probably want to cache this
> 'UserInGroup' => \&::UserInGroup,
>
> + # SendBugmail- sends mail about a bug, using Bugzilla::BugMail.pm
> + 'SendBugmail' => sub {
> + my ($id, $people) = (@_);

I'd prefer to call this sub either SendMail (or SendBugMail or SendMailForBug or BugMailSend), if you are using StudlyCaps. My preference would probably be SendMailForBug...
> + use Bugzilla::BugMail;
> +
> + # perl5.005 will taint all template vars if any of them
> + # are tainted
> + detaint_natural($id) || die "id for SendBugmail isn't a number";

Well, we don't support 5.005 any more, so do we need to do this?

> Index: Bugzilla/BugMail.pm
> ===================================================================
> +# This code is really ugly. It was a commandline interface, then it was moved

Nit: full stop.

> +# This is run when we load the package
> +if (open(FID, "<data/nomail")) {
> + while (<FID>) {
> + $nomail{trim($_)} = 1;
> + }
> + close FID;
> +}

Should we continue to support data/nomail? The Right Way to do this these days is create an account for the user and disable their mail in preferences.

> +sub ProcessOneBug($) {
> + my ($id) = (@_);
> +
> + # Set Taint mode for the SQL
> + #$::db->{Taint} = 1;
> + # ^^^ Taint mode is still a work in progress...

What are the implications of doing this? It looks quite scary... how does it interact with normal taint mode?

> Index: template/en/default/bug/process/bugmail.html.tmpl
> ===================================================================
> +[%# INTERFACE:
> + #.

Where do watchers fit into this scheme?

> +<center>
> + If you wish to tweak the kinds of mail Bugzilla sends you, you can
> + <a href="userprefs.cgi?tab=email">change your preferences</a>.
> +</center>

Can we get this just once per batch of mails sent, rather than for every mail, as now?

Gerv

> Should we continue to support data/nomail? The Right Way to do this these days
> is create an account for the user and disable their mail in preferences.

I say we should still support data/nomail. 1) If somebody is using it and it suddenly is unsupported with no reasonable replacement, that's a bad thing. 2) There is no easy way for an admin to disable all of a user's bugmail prefs, so that's not a viable solution.

> What are the implications of doing this? It looks quite scary...
how does it > interact with normal taint mode? Turning on $::db->{Taint} was the eventual goal when we stated taint mode. Basically what it does is die() if you try to send a tainted string to and cause data coming from the database to be tained. We accomplished the first half of this with the is_tainted function now in Bugzilla::Util, but we still "trust" everything the database gives us. Of course this is commented out, so it doesn't really effect anything, but I am curious why it's there. the taint stuff is there because we all agreed to turn it off, since taintout as silly - we trust our db. I have patches to use the new DBI TaintIn attribute, which tests stuff gong in, similar to what SendSQL does. Theres a separate bug on moving data/nomail into the db, which should be done for all sorts of other reasons anyway. Comment on attachment 110301 [details] [diff] [review] v7 Canceling the r= requests. I've got a new version of the patch that addresses most of your issues, Gerv and fixes the stuff Joel brought up in sanitycheck.cgi. But now bbaetz and I are hashing out whether TT should send the mail or not... which is an issue we've both been waffling on. Look for a new patch within the week. Just an update: I haven't dropped this on the floor, but my workstation died this weekend, so I'm currently rushing around town for a new motherboard + CPU; I don't think I lost any data, though (hard drives are intact; mobo crapped itself, though); I will post the new patch once I have my workstation back up and running. Created attachment 112381 [details] [diff] [review] v8 v8: try this one on for size. This patch addresses the issues raised in the r=, plus has been updated to HEAD. Installation of the patch went fine, but I can't test it yet. The reason I can't test it isn't related to this bug, so I'll ask on the webtools mailing list. After that is solved, I should be able to test the patch and I'll report my findings here. 
OK, v8 gives me this when submitting a comment to a bug:

_____quote_____
Bugzilla has suffered an internal error. Please save this page and send it to jean_seb@hybride.com with details of what you were doing at the time this message appeared.

URL:

undef error - Template::Exception at D:/perl/lib/CGI/Carp.pm line 301.
_____end_quote_____

The update went through fine though. (the comment is added to the bug)

Is there a way of backing out the patch (I have the cygwin version of patch, which is supposed to be the same as the unix version), so that I can check if the error message goes away then?

Well, yee haw. My test installation is at; can you try and repro it there? Oh, and what version did you patch? HEAD? It should apply cleanly to earlier versions, but may not be valid with earlier versions.

backing out the patch: cvs up -C. This assumes you have cvs 1.11 or better (on both the client and server... don't know if cvs-mirror.m.o is 1.11 or better, though). patch -R

Thanks for the commands, I'll try to see what happens. JP, (can I call you JP? :-) I'll check your test installation. Is it win32 as well? You probably tested it already, which would mean that I have a problem on my installation...

More info: I also ran a sanity check, which told me that there were about 40 bugs with unsent mail. When clicking the link to send it, I got a page saying this:

_____quote_____
Bugzilla Sanity Check

OK, now attempting to send unsent mail

25 bugs found with possibly unsent mail.
_____end_quote_____

without a footer, which I interpreted to mean that a perl error occurred. Checking my apache error log, I found this:

[Fri Jan 24 16:00:59 2003] [error] [client 192.8.200.204] Undefined subroutine &Bugzilla::BugMail::Send called at D:/htdocs/bugzilla/sanitycheck.cgi line 164., referer:

BTW, the link is still called "Click here to send these mails", which I think Gerv objected to in comment #78...

Oh, wait, did I have to have any of the previous patches installed?
Or did I have to |mv processmail BugMail.pm| first? Damn, I feel stupid :-(

My installation isn't Win32. But that's ok, since we need help making it work on Win32, so the more testers with that platform, the better. I'll change the name of the link; look for it in v9 (note to Gerv: ignore that in v8, please). You shouldn't have had to do any mv's or install any of the other patches... although, make sure Bugzilla/BugMail.pm is there; if it's not, the patch may be horked.

Yeah, BugMail.pm wasn't there, bugmail.html.tmpl either. So now, I've created those files manually from the contents of the patch file, because even when I created empty files and did patch < [patch_file], nothing would be put into them.

After changing BugMail.pm to use Net::SMTP, it WORKS! :-)

And for sanitycheck.cgi, it still gave me the same error, so I added this:

<------- [line 161 of sanitycheck.cgi] -------->
  print "<br>" . @list . " bugs found with possibly unsent mail.<br>";
+ use Bugzilla::BugMail;
  foreach my $id (@list) {
      Bugzilla::BugMail::Send($id);
  }
<---------------------------------------------->

and that works. (I got 25 bugmails about old bugs... :-) But the page telling me that 25 bugmails have been sent still didn't have a footer, is that normal? No error in apache's log...

So I can confirm that the patch works. Now all we need is Bug #84876, and most (easily 90%) of the Windows installation headache will be gone!

Created attachment 112564 [details] [diff] [review] v9

v9: fixes the problems raised by Jean-Sebastien in comments 91 and 88.

Comment on attachment 112564 [details] [diff] [review] v9

Let's start by trying to get an r= from Gerv... :-)

Jean-Sebastien: Can you do this for me: back out v8 locally, and try applying v9, and see if the only thing you have to change is using Net::SMTP, and let me know if *that* works. Thanks!

Oh, one last spam (sorry): this patch should work correctly this time in terms of creating the new files.
Be sure you use the correct -p value, even if it's -p0. Yep, v9 works great. No errors or anything. Great work! Comment on attachment 112564 [details] [diff] [review] v9 This is generally pretty good (with a few nits below). But before I give it review+, I need to see the differences between processmail and (Bug)Mail.pm. These are a bit hard to see in diff - any chance you could quickly summarise the changes you made to make processmail into a module? > Index: CGI.pl > =================================================================== > + $vars->{'bugmailrcpts'} = { 'changer' => $who }; Ick. Can we call it "recipients", please? :-) > Index: attachment.cgi > =================================================================== > - $mailresults .= $_ while <PMAIL>; > - close(PMAIL); > - > # Define the variables and functions that will be passed to the UI template. > + $vars->{'bugmailrcpts'} = { 'changer' => $::COOKIE{'Bugzilla_login'} }; > @@ -791,21 +779,10 @@ > # Define the variables and functions that will be passed to the UI template. > + $vars->{'bugmailrcpts'} = { 'changer' => DBID_to_name($::userid) }; Why do you use $::COOKIE{'Bugzilla_login'} in one place, and DBID_to_name($::userid) in the other? > Index: post_bug.cgi > =================================================================== > -push (@ARGLIST, "-forceowner", DBID_to_name($::FORM{assigned_to})); > +# Tell the user all about it > +$vars->{'bugmailrcpts'} = { > + 'cc' => \@cc, > + 'owner' => DBID_to_name($::FORM{'assigned_to'}), > + 'reporter' => DBID_to_name($::userid), > + 'changer' => $::COOKIE{'Bugzilla_login'} Same question here. The reporter is the changer, so you should make that clear in the code. 
> foreach my $i (@all_deps) { > - my $mail = ""; > - open(PMAIL, "-|") or exec('./processmail', $i, $::COOKIE{'Bugzilla_login'}); > - $mail .= $_ while <PMAIL>; > - close(PMAIL); > +# my $mail = ""; > +# open(PMAIL, "-|") or exec('./processmail', $i, $::COOKIE{'Bugzilla_login'}); > +# $mail .= $_ while <PMAIL>; > +# close(PMAIL); Please remove this instead of commenting it out. > Index: Bugzilla/BugMail.pm > =================================================================== > +# This is run when we load the package > +if (open(FID, "<data/nomail")) { > + while (<FID>) { > + $nomail{trim($_)} = 1; > + } > + close FID; > +} While you are there, please give this filehandle a better name :-) > +# args: bug_id, and an optional hash ref which may have keys for: > +# changer, owner, qa, reporter, cc > +# Which contain values of people to be forced to their respective > +# roles. Slightly dodgy English here :-) > + # Since any email recipients must be rederived if the user has not > + # been rederived since the most recent group change, figure out when that > + # is once and determine the need to rederive users using the same DB > + # access that gets the user's email address each time a person is > + # processed. This comment is seriously cryptic. But if you didn't write it, I won't make you fix it. > + my $resid = > + > + SendSQL("SELECT bugs_activity.bug_id, bugs.short_desc, fielddefs.name, " . > + " removed, added " . Not sure about this formatting... > Index: Bugzilla/Template.pm > =================================================================== > + # SendBugmail- sends mail about a bug, using Bugzilla::BugMail.pm > + 'SendBugmail' => sub { > + my ($id, $bugmailrcpts) = (@_); > + require Bugzilla::BugMail; > + Bugzilla::BugMail::Send($id, $bugmailrcpts); (Same renaming thing.) Also, please have a consistent capitalisation - is it Bugmail (the function) or BugMail (the package)? 
Are you certain there's no chance of the BugMail package ever being used to send non-bug-mail? If there is, you should call it Bugzilla::Mail instead. > Index: template/en/default/bug/process/results.html.tmpl > =================================================================== > + # > + # bugmailtcpts: hash; BugMail recipient params. Optional. Typo. But the name's changing anyway, right? ;-) > <h2>[% title.$type %]</h2> > - [% mail %] > + [% PROCESS "bug/process/bugmail.html.tmpl" %] Does this template not need a bug_id passed to it? Gerv > Ick. Can we call it "recipients", please? :-) Recipients is just as bad as "people," IMO. Recipients of what? Granted, Bugzilla only send email now, but... I want to be more clear on what they're recipients *of*. I'd be willing to change it to bugmailrecips if you don't like rcpts... I was in an SMTP-y mood that day... > Are you certain there's no chance of the BugMail package ever being used to > send non-bug-mail? If there is, you should call it Bugzilla::Mail instead. Yes. Bug 84876 will introduce a Bugzilla::ProcessMail package, which is what that will be. >? > Not sure about this formatting... It's to keep the SELECT clause clear from the FROM clause... and pretty much lifted from processmail; I won't defend it, but I understand it, and see no reason to change it. > Why do you use $::COOKIE{'Bugzilla_login'} in one place, and > DBID_to_name($::userid) in the other? Old style, I suspect; I changed it, and will test. > Same question here. The reporter is the changer, so you should make that clear > in the code. Same answer here... :-) re: comments: I didn't write either of those, the first one ("Dodgy English") would be good to ask bbaetz what he meant, assuming he wrote it; the 2nd one, same thing, although I'm planning to rewrite a lot of processmail once things settle down, so... I'll post a summary of the diffs in a moment. 
Summary of changes between processmail and BugMail.pm:

-- Some autoload stuff/package variable initialization; also removed some global vars we don't need anymore because they're in the package now.
-- BugMail::Send() replaces the command line parsing stuff, and drives the whole process (aside: I understand that "dodgy" comment now, and will try to clean it up...) BugMail::Send() now also calculates the $::last_changed value; this used to be done whenever processmail was run, but it's done here now since it's a package.
-- Removed the generation of the HTML-formatted output of processmail, since that's done by TT now.
-- Includes the fix for bug 183388
-- One *really* oddly formatted SQL statement was fixed.
-- Finally, removed all the commandline processing stuff.

These changes are in the order of the diff output, so you should be able to go through them and see what I'm referencing.

Created attachment 112865 [details] [diff] [review] v10

Changes between v9 and v10:

-- s/bugmailrcpts/bugmailrecips/g
-- $::last_changed has been moved to Bugzilla::BugMail; no reason to pollute the global namespace.
-- The disparity between $::COOKIE{Bugzilla_login} and DBID_to_name was changed/tested
-- Changed one of the comments to (hopefully) make it clearer.

v10 tested on my side, no problem.

> Recipients is just as bad as "people," IMO. Recipients of what? Granted,
> Bugzilla only send email now, but... I want to be more clear on what they're
> recipients *of*.

A notification. Not necessarily mail, as you (or someone) pointed out in bug 84876. It could be an instant message, or whatever. This is a variable being passed to a template, which will do some sort of notification, but the CGI should neither know nor care what. So recipients is obvious as to its content, anyone knows what it means now, but it could mean something additionally in the future. But it's also aesthetically horrible, as are most word contractions. And "bugmailrecips" makes me think of recipes.
:-) So, if the above doesn't convince you or your other reviewer, how about "mailrecipients"?

> >?

Even if it's set elsewhere, you should set it explicitly, to avoid the sort of confusion which confused me. If the interface is an ID, we should make it clear that we are passing it across the interface - otherwise someone might change that template, for some reason, to call it "bug_id", and mail will break because it can't find "id" any more. (Actually, in general, calling any variable "id" in Bugzilla is bad, as we have so many different sorts of id. But that's another thing.)

Gerv

I'm not convinced, but mailrecipients is fine; I'll s/// that. I'll also fix that id issue you mentioned; you're absolutely right.

Joel: assume I fix these, and r= the rest of v10, if you have time, please. Same with you, Gerv. I'll carry the r='s forward to v11 if you don't find anything else.

i like 'victims' but failing that, bugmailees should suffice. oh it isn't mail. bugnotifiess bugalertees :-) bugcontacts <- best bet

Comment on attachment 112865 [details] [diff] [review] v10

> Index: post_bug.cgi
> ===================================================================
> -push (@ARGLIST, "-forceowner", DBID_to_name($::FORM{assigned_to}));
> +# Tell the user all about it
> +$vars->{'bugmailrecips'} = {
> +    'cc' => \@cc,
> +    'owner' => DBID_to_name($::FORM{'assigned_to'}),
> +    'reporter' => $::COOKIE{'Bugzilla_login'},
> +    'changer' => $::COOKIE{'Bugzilla_login'}
> +    }

> Index: process_bug.cgi
> ===================================================================
> +        $vars->{'bugmailrecips'} = { 'cc' => \@ccRemoved,
> +                                     'owner' => $origOwner,
> +                                     'qa' => $origQaContact,
> +                                     'changer' => $::COOKIE{'Bugzilla_login'}
> +                                   };
> +            $vars->{'bugmailrcpts'} = { 'changer' =>
> +                                        $::COOKIE{'Bugzilla_login'} };

Nit: inconsistent formatting between instances.

> +        # SendBugMail- sends mail about a bug, using Bugzilla::BugMail.pm

Tiny nit: missing space.
> - [% mailresults %] > + [% PROCESS "bug/process/bugmail.html.tmpl" id = bugid %] Just to be completely clear (I think you are doing this anyway) - I think you should a) always pass the ID across this interface explicitly, and b) rename the variable inside bugmail.html.tmpl to bugid or bug_id, because calling things "id" is bad. > Index: template/en/default/bug/process/bugmail.html.tmpl > =================================================================== > +[%# INTERFACE: > + # bugmailrec. > + #%] Shouldn't id/bugid/bug_id be in this list? Fix these nits, and r=gerv. Let's get this in :-) Gerv Created attachment 113937 [details] [diff] [review] v11 v11; fixes Gerv's nits: -- Changes the %vars variable name to mailrecipients -- always pass the bugid to the template via a variable called mailing_bugid (bugid and bug_id were *really* common). Comment on attachment 113937 [details] [diff] [review] v11 Mostly really small nits, but there's one potentially harmful case issue in BugMail.pm which I'd like to see fixed before checkin. ------------------ >+# Tell the user all about it This comment sucks. If it means nothing, remove it. If it means something, change it :-) (I know, it was there before, but the context of the comment has changed so much that it's a good idea to do something about this, too) >- push (@ARGLIST, "-forceqacontact", DBID_to_name($::FORM{'qa_contact'})); >+ $vars->{'mailrecipients'}->{'qa'} = DBID_to_name($::FORM{'qa_contact'}) Just for the sake of consistency (and for the future), add a semicolon there :-) >- >- my $removedCcString = ""; >+ >+ my @ccRemoved = (); If you're changing those empty lines, kill the spaces totally. Now you've removed one out of four. >+ $vars->{'mailrecipients'} = { 'changer' => >+ $::COOKIE{'Bugzilla_login'} }; This wrapping doesn't look good. >+if (exists $::FORM{'rescanallBugmail'}) { Über-nit: If you're using "BugMail" (with the capital M) everywhere else, you should probably use it in this form field as well. 
>+ print "<br>" . @list . " bugs found with possibly unsent mail.<br>"; Make this Status("@list bugs found with possibly unsent mail."); or something similar. Two issues: 1) avoid the .-based catenation if you're using "" anyway (or use ''), 2) use Status instead of prints with <br>s to preserve formatting. >+ @{$force{'CClist'}} = (exists $recipients->{'cc'} && >+ scalar($recipients->{'cc'}) > 0) ? map(trim($_), >+ @{$recipients->{'cc'}}) : (); >+ @{$force{'Owner'}} = $recipients->{'owner'} ? >+ (trim($recipients->{'owner'})) : (); >+ @{$force{'QAcontact'}} = $recipients->{'qacontact'} ? >+ (trim($recipients->{'qacontact'})) : (); >+ @{$force{'Reporter'}} = $recipients->{'reporter'} ? >+ (trim($recipients->{'reporter'})) : (); This is _so_ ugly. Any chance of a cuter wrapping? The scope of the ? : operator is not very clear now. But, an important issue (this is the one I'm marking review- for): In BugMail.pm, you have the $force{'QAContact'} key written as both 'QAContact' and 'QAcontact'. Please review the role names once more for case issues. How about making keys of the force hash all lowercase, just to avoid conflicts in the future? >+ [% PROCESS "bug/process/bugmail.html.tmpl" mailing_bugid= bugid %] Nit: a space before the assignment operator. >+ # mailrecip. First, indent the mailrecipients comment like # mailrecipients: hash. Yadda yadda yadda # This is the second line --------------- Empty space here :-) And then, "For bug changes:" isn't too clear. How about a description like: "The following keys have values if the respective fields have changed."? Also, a diff of tip-processmail and the BugMail.pm in your patch reveals the following: >- SendSQL("SELECT userid, (refreshed_when > " . SqlQuote($::last_changed) . ") " . >- "FROM profiles WHERE login_name = " . SqlQuote($person)); >+ SendSQL("SELECT userid, (refreshed_when > " . SqlQuote($last_changed) . >+ ") " . "FROM profiles WHERE login_name = " . 
SqlQuote($person)); Which is fine with me, but the second line now contains an unnecessary catenation; you could just write it as ") FROM profiles...". Created attachment 113944 [details] [diff] [review] v12 This should hopefully make *everyone* happy. :-) Jouni: I fixed that one instance of "QAContact," since it looks like it was missed in the fix for bug 183388, and I can't find an instance of "QAContact" anywhere else (they were all changed to "QAcontact"); admittedly, this isn't very clear at all, but fixing the ambiguity is *another* bug. I also left some of the ternary operators as is... yes, it's ugly, but... that *too* is another bug. ;-) Comment on attachment 113944 [details] [diff] [review] v12 >I also left some of the ternary operators as is... yes, it's ugly, but... that >*too* is another bug. ;-) I disagree, but don't really care at this point. This comment still sucks, because it's not a "change" we're talking about here. This is post_bug.cgi, so why not make it "Email everyone about the new bug"? r=jouni, but please fix that comment before checkin. Good job, by the way :-) > so why not make it "Email everyone about the new bug"? Done. Also, the reason it's good is because bbaetz wrote it. ;-) Justdave? hooray!! Created attachment 113966 [details] [diff] [review] v12 - finalized Same thing as v12, just updated to HEAD. r='s carried forward from gerv and jth. 
Checked in: Checking in CGI.pl; /cvsroot/mozilla/webtools/bugzilla/CGI.pl,v <-- CGI.pl new revision: 1.199; previous revision: 1.198 done Checking in attachment.cgi; /cvsroot/mozilla/webtools/bugzilla/attachment.cgi,v <-- attachment.cgi new revision: 1.37; previous revision: 1.36 done Checking in post_bug.cgi; /cvsroot/mozilla/webtools/bugzilla/post_bug.cgi,v <-- post_bug.cgi new revision: 1.78; previous revision: 1.77 done Checking in process_bug.cgi; /cvsroot/mozilla/webtools/bugzilla/process_bug.cgi,v <-- process_bug.cgi new revision: 1.175; previous revision: 1.174 done Removing processmail; /cvsroot/mozilla/webtools/bugzilla/processmail,v <-- processmail new revision: delete; previous revision: 1.94 done Checking in sanitycheck.cgi; /cvsroot/mozilla/webtools/bugzilla/sanitycheck.cgi,v <-- sanitycheck.cgi new revision: 1.63; previous revision: 1.62 done RCS file: /cvsroot/mozilla/webtools/bugzilla/Bugzilla/BugMail.pm,v done Checking in Bugzilla/BugMail.pm; /cvsroot/mozilla/webtools/bugzilla/Bugzilla/BugMail.pm,v <-- BugMail.pm initial revision: 1.1 done Checking in Bugzilla/Template.pm; /cvsroot/mozilla/webtools/bugzilla/Bugzilla/Template.pm,v <-- Template.pm new revision: 1.3; previous revision: 1.2 done Checking in template/en/default/attachment/created.html.tmpl; /cvsroot/mozilla/webtools/bugzilla/template/en/default/attachment/created.html.tmpl,v <-- created.html.tmpl new revision: 1.8; previous revision: 1.7 done Checking in template/en/default/attachment/updated.html.tmpl; /cvsroot/mozilla/webtools/bugzilla/template/en/default/attachment/updated.html.tmpl,v <-- updated.html.tmpl new revision: 1.8; previous revision: 1.7 done RCS file: /cvsroot/mozilla/webtools/bugzilla/template/en/default/bug/process/bugmail.html.tmpl,v done Checking in template/en/default/bug/process/bugmail.html.tmpl; /cvsroot/mozilla/webtools/bugzilla/template/en/default/bug/process/bugmail.html.tmpl,v <-- bugmail.html.tmpl initial revision: 1.1 done Checking in 
template/en/default/bug/process/results.html.tmpl; /cvsroot/mozilla/webtools/bugzilla/template/en/default/bug/process/results.html.tmpl,v <-- results.html.tmpl new revision: 1.4; previous revision: 1.3 done

Thanks for everyone's help on this!

I have installed the patch; I thought that all open-sendmail commands would be replaced by the one in BugMail.pm. I searched for "open sendmail" and found several open-sendmail commands in cgi.pl, globals.pl, importxml.pl, move.pl, whineatnews.pl, token.pm and so on. Am I wrong in my understanding? Sorry for my English, it's not my native language. Thanks

Re comment 114: You're thinking of bug 84876.
https://bugzilla.mozilla.org/show_bug.cgi?id=124174
30 April 2012 22:17 [Source: ICIS news]

MIAMI, Florida (ICIS)--The US paraxylene (PX) contract price for May is expected to decrease.

According to a producer, the decrease will follow the lower PX Asian Contract Price (ACP) for May.

The US PX contract price is usually settled in the first half of the month to which it applies, with the PX ACP providing a guide to price direction.

The May PX ACP was settled at $1,550/tonne CFR Asia, down by $35/tonne from the April price of $1,585/tonne CFR Asia.

The US PX April contract settled at 78.50 cents/lb, down by 3.50 cents/lb from March.
http://www.icis.com/Articles/2012/04/30/9555038/us-px-may-contract-to-decrease-following-px-acp.html
Here we are going to discuss advanced topics in Python:

1. Python Iterators
2. Python Generators
3. Python @property
4. Python Decorators

1. Python Iterators:

Iteration means stepping through the items of an object one at a time, up to a given limit. It allows the user to access or traverse all of the collected items without any prior knowledge of their structure.

Python iterators are implemented with the iterator protocol, which consists of two methods:

1. __iter__()
2. __next__()

1. __iter__(): This method returns an iterator object.
2. __next__(): This method returns the next element in the sequence of an object.

Before discussing these methods, we will get a little understanding of the standard loops. Take a for loop as an example; here we can consider a list whose elements are ints.

Output:

Now we can go through what happens internally.

When we work with loops, Python internally uses the __iter__() and __next__() methods: __iter__() returns an iterator over the collected items, and __next__() produces the next item in the collection.

Below we demonstrate the usage of iter and next.

>>> data=[1,2,3,4,5]
>>> d_obj=iter(data)
>>> next(d_obj)
1
>>> next(d_obj)
2
>>> next(d_obj)
3
>>> next(d_obj)
4
>>> next(d_obj)
5
>>> next(d_obj)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
>>>

Below is the schematic representation of the above example.

Implement a user-defined iterator

Here we are going to create our own user-defined iterator with the help of the __next__() and __iter__() methods.

Example:

Output:

Implement a user-defined infinite iterator

Output:

2. Python Generators

Before we discuss generators, we need to discuss the "yield" keyword.

Yield keyword:

- yield is a keyword used to return a value from a function to the caller, but while returning the value it pauses the function; once the caller has received the value, execution resumes where it left off.
- Using this keyword we can return more than one value. If the programmer uses the yield keyword in a function, we call that function a generator. Generators are not normal functions, and they do not destroy their local variables.

Example:

Output:

Example:

Output:

Advantages:
1. It can store the local variable state, so the overhead of memory allocation is controlled.
2. It retains the old state, so the flow does not restart from the beginning.

Disadvantages:
1. Understanding the source code is a little difficult.

Now we can move on to generators.

Generators:

- Generators look like normal functions, but the difference is that generators contain the yield keyword, while functions have a return statement.
- Syntactically, generators and functions also differ.

Function syntax:

def fun_name():
    statements
    return data

Generator syntax:

def gen_name():
    statements
    yield data

Python generator with iterator:

Output:

Python generator working with loops

Example:

Output:

Python recursive generator working with loops

Output:

Python generator expression:

Output:

3. Python @property:

In this tutorial we are going to discuss setters, getters and deleters with respect to @property: the advantage of @property over normal methods, and what problem @property solves. Before discussing the topic itself, we can go through some basic concepts.

What are attributes in Python?

- Attributes are items that hold values associated with a class or an object.
- If the attribute belongs to the class, we call it a class attribute. If the attribute belongs to an instance, we call it an instance attribute.
- Here, class attributes are shared by every object, but instance attributes belong to individual objects.

Example:

Output:

1. If we observe the above example, we have created two attributes: c_attr is a class attribute, and value is an instance attribute.
2. c_attr is assigned some data, i.e. 1.
3. value is initialized in the __init__ function.
4. A display method is created to print the data.
5. Two objects are created: obj1 is created with the value 10, and obj2 is created with the value 20.
6. If we observe the output, each object gets its own value for the instance attribute, while both objects get the same value for the class attribute.

Why do we want @property?

Let's go through the example below.

Example:

class Employee:

Output:

As seen in the example above, we have created an Employee class with three attributes (first, last and email), an __init__ method and a fullname method. Then we created an 'emp' object of the Employee class and printed the attributes and the fullname method. Up to here there is no problem; we got the expected output.

Now suppose we add the line below to the source code immediately after creating the object:

emp.first='raju'

After adding this line, if we execute the script we will get output like the following:

raju ram.rahim@gmail.com raju rahim

Looking at the output above, we got the expected result for the first name and the full name, but the email is still the old one. To overcome this, we need @property.

Example:

Output:

How to resolve the above issue without @property, using setter and getter methods

Example:

class Employee:

Output:

We made a few minor changes to the code above: we changed the email attribute into an email method, so now we get the expected output. But there is still a problem: people may already be using this class, and for them this change is a breaking one. If the codebase is small there is no issue, but if it has many lines of code, it is difficult to change that many call sites. To overcome this problem we use @property.

How to use the @property
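The tutorial breaks off at this point. To sketch where it was heading: the same Employee class, with email and fullname turned into @property methods, gives the expected output without breaking attribute-style access. This is my own minimal reconstruction, reusing the names (first, last, email, fullname, "ram", "rahim", "raju") from the examples above, not the author's original code:

```python
class Employee:
    def __init__(self, first, last):
        self.first = first
        self.last = last

    @property
    def email(self):
        # Recomputed on every access, so it always reflects
        # the current first/last values.
        return "{}.{}@gmail.com".format(self.first, self.last)

    @property
    def fullname(self):
        return "{} {}".format(self.first, self.last)


emp = Employee("ram", "rahim")
print(emp.email)     # ram.rahim@gmail.com
emp.first = "raju"
print(emp.email)     # raju.rahim@gmail.com -- stays in sync now
print(emp.fullname)  # raju rahim
```

Because @property keeps the attribute-access syntax (emp.email, not emp.email()), existing users of the class do not have to change their code, which is exactly the breaking-change problem described above.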
https://fresh2refresh.com/python-tutorial/python-advance/
PLT Import for IntelliCAD 1.0

File size: 1.3M
Platform: All Windows
License: Commercial
Price: $195.00
Downloads: 364
Date added: 2008-10-24

PLT Import for IntelliCAD 1.0 description

PLT Import for IntelliCAD reads the pen movements stored in an HPGL Plot (.plt) file; IntelliCAD converts these pen movements into corresponding entities and adds them to the active drawing. PLT Import for IntelliCAD is very easy to use, as it adds a new command called "PLTImport" to the IntelliCAD-powered application. Simply type "PLTImport" at the command prompt and select a PLT plot file to import into the active drawing.

Related Software

- CAD Import .NET is an easy-to-use API for reading AutoCAD DXF, DWG and HPGL PLT in C#, VB.NET and other .NET applications. It is programmed completely in C#. Demos: Viewer, Import, Add Entities, Viewer.
- PLT Import for SolidWorks is a useful HPGL Plot (.plt) file import add-in for SolidWorks.
- HPGL Import for IntelliCAD - HPGL PLT file import plug-in for IntelliCAD.
- NC Import for IntelliCAD - NC program file import plug-in for IntelliCAD.
- OBJ Import for IntelliCAD - OBJ file import plug-in for IntelliCAD.
- Points Import for IntelliCAD - Point text file import plug-in for IntelliCAD.
- STL Import for IntelliCAD - STL file import plug-in for IntelliCAD.
- PLT Import for AutoCAD - HPGL Plot PLT file import plug-in for AutoCAD.
http://wareseeker.com/Graphic-Apps/plt-import-for-intellicad-1.0.zip/214d77a71
For this tutorial you will need:

- pip, the Python package installer
- A list of phone numbers of the people who are participating in the Secret Santa exchange
- A Twilio account
- A Twilio phone number
- Your Account SID and Auth Token, found in your Twilio account dashboard as shown below:

Create the Python virtual environment

Create a new directory for this project in your command prompt called flask-secret-santa-sms/, then navigate to this new directory:

$ mkdir flask-secret-santa-sms
$ cd flask-secret-santa-sms

We will create a new virtual environment for this project so that the dependencies we need to install don’t interfere with the global setup on your computer. To create a new environment called “env”, run the following commands once you’ve navigated to the flask-secret-santa-sms/ directory:

$ python3 -m venv env
$ source env/bin/activate

After you source the virtual environment, you'll see that your command prompt's input line begins with the name of the environment ("env"). Python has created a new folder called env/ in the flask-secret-santa-sms/ directory, which you can see by running the `ls` command in your command prompt.

Create a file called .gitignore in the flask-secret-santa-sms/ directory as well.

(env) $ touch .gitignore

Open the .gitignore file in the text editor of your choice, then add the env/ folder to the contents of the .gitignore file:

env/

Note that the env/ folder created by Python for the virtual environment is not the same thing as the .env file that’s created to store secrets like API keys and environment variables.

Store environment variables securely

You’ll need to use the Account SID and Auth Token you located at the beginning of this tutorial in order to interact with the Twilio API. These two environment variables should be kept private, which means we should not put their values in the code. Instead, we can store them in a .env file and list the .env file in our .gitignore file so git doesn’t track it.
A .env file is used whenever there are environment variables you need to make available to your operating system.

First, create the .env file:

(env) $ touch .env

Then, add the .env file as a line item in the .gitignore file:

env/
.env # Add this

Next, open the .env file in your favorite text editor and add the following lines, replacing the random string placeholder values with your own values:

export TWILIO_ACCOUNT_SID=AzLdMHvYEn0iKSJz
export TWILIO_AUTH_TOKEN=thFGzjqudVwDJDga

Source the .env file so it becomes available to your operating system, then print the environment variable values to your console to confirm they were sourced successfully:

(env) $ source .env
(env) $ echo $TWILIO_ACCOUNT_SID
(env) $ echo $TWILIO_AUTH_TOKEN

Install the Python dependencies

The Python packages required for the project are twilio and flask. Dependencies needed for Python projects are typically listed in a file called requirements.txt. Create a requirements.txt file in the flask-secret-santa-sms/ directory:

(env) $ touch requirements.txt

Copy and paste this list of Python packages into your requirements.txt file using your preferred text editor:

twilio
flask

Install all of the dependencies with the command given below, making sure you still have your virtual environment (“env”) sourced.

(env) $ pip install -r requirements.txt

Create the Flask server

It’s time to write some code!
In the flask-secret-santa-sms/ project directory, create a new file called app.py:

(env) $ touch app.py

Open the app.py file in your preferred text editor and add the following code, which imports the dependencies, pulls in our environment variables, and creates a simple Flask server:

import os
from random import shuffle

from flask import Flask, render_template, request, url_for
from twilio.rest import Client

# Setup Twilio credentials and REST API client
TWILIO_ACCOUNT_SID = os.environ.get('TWILIO_ACCOUNT_SID')
TWILIO_AUTH_TOKEN = os.environ.get('TWILIO_AUTH_TOKEN')
client = Client(TWILIO_ACCOUNT_SID, TWILIO_AUTH_TOKEN)

# Setup Flask app
DEBUG = False
app = Flask(__name__)

Create an HTML form that accepts participants’ phone numbers

On the homepage of our Secret Santa app, we’ll show a simple HTML form where you can enter the phone numbers of your group’s participants. For this tutorial, we’ll include text fields for 4 participants. You can change the number of text fields to match the number of people in your group by deleting any extra text fields shown in the HTML code below, or by adding new text fields. If you’re adding new text fields, make sure each text field has its own unique id and name. The code we’re going to write will dynamically determine how many text fields there are, so you won’t need to change anything in the code if you want to change the number of people in the Secret Santa group.

In the flask-secret-santa-sms/ directory, create a templates/ directory to store our HTML file. Then, create an index.html file inside of the new templates/ directory.

(env) $ mkdir templates
(env) $ touch templates/index.html

Copy and paste this code into your index.html file. It's a simple HTML form that includes text fields for each participant's name and phone number.

Display the HTML form on the home page

Now that we have the HTML form to collect all the phone numbers, we need to create a Flask route to serve the form.
For simplicity, we’ll show this form on the home page of our application. Add this new route to the bottom of the app.py file, which simply renders the HTML form on the home page:

@app.route('/')
def index():
    return render_template('index.html')

Create a Flask route to handle the HTML form data

Now that we have an HTML form, it’s time to write a function that will process the data that comes through in the form. Copy the create_assignments() function found on lines 21 - 61 of the app.py file in the GitHub repo and paste it at the bottom of your own app.py file. The function does the following:

- Intakes the form data
- Separates the name and phone number information into separate lists
- Sorts each list so that each participant’s name and phone number appear at the same index of each list
- Creates the Secret Santa matches
- Calls a helper function that sends the SMS notifications

Create a helper function to send the SMS notifications

The last thing we need to do is write a function that handles sending the SMS notifications. Copy and paste this function into the bottom of your app.py file, underneath the create_assignments() function you added in the previous step. Remember to change the placeholder from_ number in the example code below to your own Twilio phone number, making sure to maintain the E.164 format.

def send_assignments(assignments):
    successful_notification_counter = 0
    for gift_sender, gift_recipient in assignments.items():
        body = "{}, you are {}'s Secret Santa this year. Remember to get them a gift!".format(
            mapped_user_info[gift_sender],
            mapped_user_info[gift_recipient]
        )
        try:
            message = client.messages.create(
                body=body,
                from_='+12345678901',  # CHANGE THIS to your own Twilio number
                to=gift_sender
            )
            print("Secret Santa assignment sent to {} via SMS".format(mapped_user_info[gift_sender]))
            successful_notification_counter += 1
        except Exception as e:
            print(e)
            print("There may have been a problem sending the notification to {}".format(mapped_user_info[gift_sender]))
            continue
    print("Notifications sent to {} people".format(successful_notification_counter))
    return

Remember, this function is called within the create_assignments() function. It sends an SMS message to each participant to notify them of who they should give a gift to.

Start the server

It’s time to start the server and try this out! While you’re in the flask-secret-santa-sms/ directory, run this command to start the Flask server:

(env) $ flask run

Navigate to localhost:5000 in your web browser. Enter the names and phone numbers for each of the 4 participants, then click the button at the bottom of the form to continue. Behind the scenes, our create_assignments() function will create the Secret Santa pairs, then call the send_assignments() helper function to send the SMS messages. Check your phone to see if the SMS message came through, and also check your CLI output for more details that were printed from the Python script.

Note: Because of how the matching logic works, you cannot enter a phone number more than once. Thus, if you want to test this out before running it on a real group, you’ll need a few people to agree to be part of your test.

Congratulations! You now have a working application to assign Secret Santa pairs for your family or group!
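The note above says the matching logic requires every phone number to be unique; a shuffle-and-rotate scheme is one simple way to build such pairings. The sketch below is my own illustrative simplification, not the actual create_assignments() function from the GitHub repo:

```python
from random import shuffle

def make_pairs(phone_numbers):
    # Shuffle the participants, then have each person gift the next
    # one in the shuffled order, with the last person gifting the
    # first. Rotating a single shuffled list guarantees that everyone
    # gives and receives exactly once and (with two or more unique
    # entries) nobody is their own Secret Santa.
    people = list(phone_numbers)
    shuffle(people)
    return {giver: people[(i + 1) % len(people)]
            for i, giver in enumerate(people)}

pairs = make_pairs(["+15550001", "+15550002", "+15550003", "+15550004"])
for giver, receiver in pairs.items():
    print("{} gives a gift to {}".format(giver, receiver))
```

Rotating one shuffled list avoids the retry loops you would need if you shuffled givers and receivers independently and then checked for self-matches.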
You just learned how to:

- Create a web server using Flask
- Receive and manipulate data from an HTML form
- Send SMS messages using Twilio’s API

Next steps

If SMS isn’t the right mode of communication for the people in your Secret Santa group, you could modify this tutorial to use one of these instead:

- Twilio WhatsApp API
- Twilio SendGrid API to send notifications via email
- Twilio Programmable Messaging to send and receive group SMS messages

Or, check out the Build A Secret Santa Bot for WhatsApp Using Python and Twilio tutorial on the Twilio blog. I can’t wait to see what you build!

Author bio: August is a Pythonista on Twilio's Developer Voices team. She loves Raspberry Pi, real pie, and walks in the woods.
https://www.twilio.com/blog/secret-santa-matches-python-flask-twilio-sms
#include <ESP8266WebServer.h>
#include <WebSocketsServer.h>
#include <DNSServer.h>
#include "MSP.h"

I have the following ISR connected to Timer0:

void inline ppmISR(void) {
  static boolean state = true;

  if (state) {
    // start pulse
    digitalWrite(sigPin, onState);
    next = next + 24000;
    state = false;
    alivecount++;
  } else {
    // end pulse and calculate when to start the next pulse
    static byte cur_chan_numb;
    digitalWrite(sigPin, !onState);
    state = true;

    if (cur_chan_numb >= CHANNEL_NUMBER) {
      cur_chan_numb = 0;
      next = next + 840000;
      digitalWrite(DEBUGPIN, !digitalRead(DEBUGPIN));
      tick++;
    } else {
      next = next + cppm[cur_chan_numb];
      cur_chan_numb++;
    }
  }
  timer0_write(next);
}

This is a cut-down version of one I found in an Instructable. I am using a websocket on a peer-to-peer network to form a HID. Now the problem is that I am seeing advice that says not to use timer0: "Other processes use the same timer and it has been reported that a 2 ms tick will crash the ESP8266 if you are using WiFi." Well, it does not crash in my case, and my shortest interrupt is 300 us (the sync pulse on my CPPM), but I am seeing noise on the pulse train as the server serves HTML. Can anyone clarify this?
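For what it's worth, the constants in that ISR are easy to sanity-check: timer0 on the ESP8266 counts CPU cycles, so assuming the default 80 MHz clock (an assumption; at 160 MHz the microsecond values halve), the scheduled tick counts convert to time as in this rough Python sketch (not Arduino code):

```python
CPU_MHZ = 80  # assumed ESP8266 CPU clock; timer0 ticks are CPU cycles

def ticks_to_us(ticks, cpu_mhz=CPU_MHZ):
    """Convert a timer0 tick count to microseconds."""
    return ticks / cpu_mhz

def us_to_ticks(us, cpu_mhz=CPU_MHZ):
    """Convert a duration in microseconds to timer0 ticks."""
    return us * cpu_mhz

# The ISR schedules 24000 ticks for the pulse and 840000 for the frame gap:
print(ticks_to_us(24000))   # 300.0 -> matches the 300 us sync pulse mentioned
print(ticks_to_us(840000))  # 10500.0 us of frame padding
```

So the 24000-tick value is exactly the 300 us pulse described, which is consistent with the clock assumption.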
https://www.esp8266.com/viewtopic.php?p=88898
I've been administering Linux systems for several years now, and I don't think a single day has gone by without a learning opportunity. One of my favorite parts about the Linux ecosystem is the willingness to share both tools and information. This year's top Linux administration articles are a testament to that culture of top minds sharing their knowledge so that you can progress on your Linux journey. Linux administration can be very rewarding, including the satisfaction of finding, resolving, and automating a fix to a complicated problem, the thrill of diving deeper into a complex topic, such as Linux namespaces or the Logical Volume Manager, and even the simple pleasure (frustration?) of arguing about your favorite terminal editor. I'm sure if you asked any of our authors about what inspires their articles, their eyes would light up as they describe a problem they solved or a new technology they learned. Whether you're a beginner or a seasoned Linux engineer, 2021's top articles have something for everyone. You can get started with Linux fundamentals, such as LVM resizing, hashing, databases, and network troubleshooting. When you're ready for a deeper dive, read more about Linux namespaces, BIOS to UEFI migration, and process performance management. Finally, top off your journey with some reflections about terminal editors and the overall future of our profession. Have a topic that you're passionate about? Join our community and continue the tradition of knowledge sharing that makes the Linux ecosystem great.
https://www.redhat.com/sysadmin/best-linux-administration-2021
The first snapshot release of Komodo, a next-generation Linux/.NET-based operating system, is now available. “A new desktop environment codenamed Dagon is now being developed with [D-Bus, Composite, Linux 2.6] to bring a new face of Linux to users. The environment is being developed around the Emotion graphics/UI toolkit to best make use of these new display and communication technologies as well as to provide a solid and useable interface for casual users.” Get the latest snapshots from their FTP servers. Read more information for developers, users and enterprise users.

First Komodo Snapshots Released
By Thom Holwerda

Right. And it's to manage/lock users' desktops… and it's to centralize user/group management… and it's to deliver software/patches easily… and it's to have a decent Directory Service… and it's to easily get reports on what's going on… and it's to facilitate OS deployment/disaster recovery/backup… and it's to do all of the above without rocket science… Please, please, STOP this distribution race! Yes, you CAN do a distribution, we all know, but start coding something else if you want to see Linux BROADLY adopted in the corporate arena.

I'd like to try it, but can't hit their torrent (…)

I understand that Komodo is a distribution(?) but is the Dagon DE going to be competing with KDE/GNOME?

Just noticed: you can download the torrent from their tracker:.

Imagine there's no Windows
It's easy if you try
No "Gates" below us
Above us only sky
Imagine all the people
Running Linux…

Maybe we need to imagine for a moment why we don't stop KDE and GNOME development, and other desktops, and rethink the new desktop for Linux. 100% freedesktop compliant. Why don't we see where we want the Linux desktop to be in 2015? Do a ten-year roadmap. Why don't we leave our personal touch only inside us? Imagine all the people, running their desktops on Linux.

More open source software infected with Mono.
Only in this case, they've taken full advantage of C#'s "portability" and created a new API so that all of your System.Windows.Forms and GTK# apps won't run. I really don't understand why these .NET fanatics just don't run Windows Vista.

1) Because Vista hasn't been released yet
2) And when it is, it won't be free (as in freedom)
3) Vista is bloated already now
4) #2 means you can't really modify the system (no choice of freedom – which is utterly unacceptable, and very anti-MS if you look at the early MS days)

You can surely come up with many other reasons not to use Vista.

dylansmrjones
kristian AT herkild DOT dk

Wrong. System.Windows.Forms and GTK# are supported just fine. Komodo includes GTK and QT (not in the download available now, because I didn't want to beef it up unnecessarily) as well as bindings for both (well, really QT is an ongoing concern). I don't run Windows Vista because I HATE Microsoft. My using .NET does not mean I in any way identify with Microsoft. They just made a good cross-platform environment, made it available via standards (ECMA) and have been very accepting about it. The least we can do is encourage Microsoft to do this stuff more often by allowing the applications written for Komodo to work on Vista (via Emotion's Windows backend, which is like 10% coded right now, but whatever).

We use Mono everywhere. DotGNU _is_ available for Komodo, but Mono is the standard. It is NOT "Xsharp", it is X11.NET, an X11 binding _based_ on Xsharp. The newest changes to it include moving it to X11.dll, changing the namespaces to X11 instead of Xsharp, etc. What else is different between the two? X11.NET supports Composite, DAMAGE, Xfixes, and ARGB windows, where Xsharp does not, as well as many other architecture improvements.
Anyone else remember the fiasco with Mozilla using already taken software project names like Firebird (rdbms)?

- thryllkill 2005-09-22 6:36 pm EST

I'm not saying that they shouldn't have looked first. I am just pointing out that both the Gentoo projects get along fine even though they share the same name.

>.

If you mean NeXT, then you mean it worked for Steve Jobs. This was his second (or third?) incarnation of his vision of computing made usable for everyone. Apple just bought a company owning a mature product, and improved it (evolution). (Ok, from a user's point of view, it was like starting over, except it wasn't because Classic let you run old apps.) 🙂

- Thom Holwerda 2005-09-22 7:05 pm EST

It looks great, nice, cool, and so on, but why don't they join the KDE or Gnome development instead of creating even more confusion?

Because contrary to popular belief, you can't just barge into GNOME or KDE and start coding with the big KDE/GNOME guys. That's not how it works (sadly on one side, thankfully on the other). Other than that, it might simply be that they don't agree with KDE's "More is more" or GNOME's "Less is more" design philosophies.

- roguelazer 2005-09-22 9:37 pm EST

>> Other than that, it might simply be that they don't
>> agree with KDE's "More is more" or GNOME's "Less is
>> more" design philosophies.

So therefore they're proponents of the "Less is less" design philosophy? Nihilism and Realism combined as a software design philosophy? Sounds cool, but I think for all of my "futuristic, Über-advanced DE" needs, I'll stick to E17. At least the enlightenment website doesn't time out. 🙂

Indeed! Why hasn't anyone thought of a "Less is more until the user hints at more" philosophy, or, indeed, a templateable UI that can produce both philosophies with one code base? I'm not saying those two ideas are concrete for Komodo, but where are the people who TRY that stuff instead of just trying to be A. More like OSX and B. More like Windows.
Those are the two fundamental ways that Gnome and KDE (respectively) are developing their software. I can't agree with either, because even though OSX is a wonderful OS, it is not without its flaws. Instead, our philosophy is more "study your predecessors" and "investigate new ways to improve upon your predecessors". Our system directory layout is partially based on OSX's, but our APIs look closer to BeOS, and we use C# and .NET from Microsoft. On top of that we think of our own ideas above this "sweet blend" of ideas working together.

um, why does it look so like Aqua OS X? So, they added some more buttons to the window title-bars. That doesn't deny the fact that the rest of the title bar scheme is identical! And I could say similar things about their dock, hell they even call it a dock (which has always seemed a slightly stupid name to me!) Well at least they have a sense of humour, one of the screen-shots shows a desktop picture which fairly clearly features a stylised iMac G4.

um, why does it look so like Aqua OS X? So, they added some more buttons to the window title-bars. That doesn't deny the fact that the rest of the title bar scheme is identical!

That's just the window borders. This theme is available for various X11 window managers. In PekWM it's called Agualemon. I'm not sure where it was originally ported from (XFCE4?) and if the same name was used there. Also it's not identical to the borders in Aqua – it's simpler (and IMO a lot slicker).

And I could say similar things about their dock, hell they even call it a dock (which has always seemed a slightly stupid name to me!)

Yes, the dock is obviously somewhat inspired by the one in OS X (the name has historical reasons – in NeXTSTEP the implementation felt more like things actually docked into the "Dock"), but probably also by Slicker for KDE ().

Haha, joke's on you! That isn't at all what the screenshots are showing.
In fact, that theme is the "agualemon" theme for XFCE (which is what I'm using on my main Komodo system)! You weren't even looking at what was important, and in fact Dagon will completely replace XFCE anyway. That theme and background show MY preferences for UI, not what will be conveyed in the final edition of Dagon.

The dock is a test, not part of Komodo. We've never actually claimed that the Dock was going to be part of the UI, and if it is, there will be fundamental changes which make it more attractive and useful for the customers. We have no intention of "copying" Apple, like Jobs says is so common.

Where indeed! (eh, we've been here for 3 years on and off)

Emotion is more than just a graphics toolkit: it's intended to unify all the disparate APIs in Linux. Also, it's not a native toolkit anymore as of recently. It now has full graphics backend support to allow us to use the same build of Komodo with many different platforms through use of backend plugins. Right now the only working one is Xlib, and Cairo must be available for the system being used to allow drawing. Backends mostly just cover things like window properties, input events, etc. The Emotion Komodo.Graphics module uses Cairo for drawing and has a backend plugin system for input, widget properties, etc.

I can give you a rough idea of performance: "Okay". I have been concentrating more on flexibility and power, as well as waiting until Mono and Cairo performance reaches the right level. If you are a programmer you may be interested in working on performance-related Emotion work; we are an open source project.
Those screenshots were originally to show the other Komodoware people (Brian Reynolds our business guy and James Champlin the system designer of GenSTEP, a Komodo-based NextSTEP-like OS which we are pursuing) Some of these comments are pretty funny…”why don’t they just join Gnome or KDE”…”we already have enough desktops”. The funny part of this is when you advocate a “standard” desktop some of these people will be the first to throw a hissy fit. So I guess when it’s Gnome and KDE, it’s all cool, but when these guys do something different (and this is very different down to the toolkit) it’s not ok. But when another random distro comes out that does nothing different, but has a new name and a new repository, then “choice” is good. A bunch of hypocrites. Let’s face some facts. Not having a “standard” which really means dominant desktop has hurt LotD, and since there isn’t going to be one at least there are people being a little bit innovative – rather than YAD > you can’t just barge in into GNOME or KDE and start > coding with the big KDE/GNOME guys you can certainly “barge in” and start coding. what you can’t do is “barge in” and redesign everything the way you want just because you showed up one day. but the general barrier to entry in KDE is pretty amazingly low: show up and start coding. we’ve had people that have come in and a few months later are working on core areas in KDE. true, these people tend to be talented and hard working. > and I said previously now they’re defacto working > for RedHat and Suse heh. at least in the case of KDE, you have it backwards. SUSE does not own nor control KDE, but they do contribute to it. just because SUSE bases the desktop side of their distro around KDE doesn’t make “KDE coders defacto SUSE employees”. with open source, you can use something and even contribute to it without owning it. or are we all defacto Linspire employees? mandriva employees? xandros employees? kanotix employees? knoppix employees? ark employees? 
kubuntu employees? we certainly help all of them, but that's highly different than "working for" them.

Because contrary to popular belief, you can't just barge into GNOME or KDE and start coding with the big KDE/GNOME guys. That's not how it works (sadly on one side, thankfully on the other).

My own experience is that the KDE people are very helpful to new programmers (or contributors in general) and it is really easy to become involved with KDE.

This is great. It's great to see some new ideas for the desktop (Finally!) I think they should have used Etoile as their DE. Etoile is a lightweight DE based on GNUstep. The advantage for Komodo would be that people already work on GNUstep and Etoile and that GNUstep (and Cocoa) have already proven that they are excellent frameworks. Also Objective-C is a good language. Maybe it is not as good as C#/Mono, but the advantage is that it does not need a VM, and I think even MS has given up on writing everything in .NET because it is just too slow and memory consuming for certain things. Furthermore, I think there would be many more people from the FOSS community who would be willing to help GNUstep/Etoile once it picked up some momentum and became the default desktop. On the other hand, a lot of people from the FOSS community are sceptical of Mono because of the unresolved patent issues. I can of course understand that the developers of Komodo want to do what they enjoy, but I still think that cooperation between Etoile and Komodo could be a really good thing.

I am not seeing the type of movement in GNUstep or any of the light or heavy DEs based on it the way I would like to. What exactly are you/they doing? GNUstep isn't becoming any more compatible with Cocoa and none of the DEs is moving faster than molasses. The WMs in Objective-C are dead (InterfaceWM and what was the name of the other). Cameleon is still not ready. I'm really rooting for you guys but you're not making it easy.

I am not involved in GNUstep/Etoile.
Actually I am using KDE and I also coded some stuff for it. But I like GNUstep, I at least know some Objective-C, and I think it is a pity GNUstep has so few developers.

The main problem of GNUstep is that all Linux distributions treat it like a stepchild. Another problem, of course, is the total lack of programs like a browser. Without the basic set of applications people will not use it. And a more modern looking theme would of course be needed, too. But the GNUstep framework is good and with some more people working on it, it could become a good alternative to KDE/GNOME.

If Komodo used GNUstep as the default DE, it would help GNUstep/Etoile because some people would start using it and because the Komodo developers could help GNUstep/Etoile. But it would also help Komodo because GNUstep/Etoile is much further along than their home-brewed DE based on Mono and Cairo. I would rather see two small projects that have sort of the same goal combine their efforts than two small projects that have no users because they are too small to compete.
I wish you good luck with this project. It seems that you have some good ideas, and look forward to toy with them. Its the applications !!!
https://www.osnews.com/story/11964/first-komodo-snapshots-released/
note pboin

<p>Umm... I feel like I'm a little stupid on this one. I'm getting the following:</p>

<p><i>Prototype mismatch: sub main::head vs ($) at pmstats line 8</i></p>

<p>So, I did a little research, and found a <a href="">cpan doc</a> on just this problem (a namespace issue w/ CGI).</p>

<p>So, am I the only one that's getting this, or did everyone else know how to fix it right away and not say anything? I'm kinda off-balance, 'cause I know L~R's stuff is good and works.</p>

<p>Wondering...</p>
http://www.perlmonks.org/?displaytype=xml;node_id=338179
- Tutorials - 2D Roguelike tutorial - Enemy Animator Controller

Enemy Animator Controller

Checked with version: 5 - Difficulty: Intermediate

This is part 11 of 14 of the 2D Roguelike tutorial in which we set up the Enemy Animator controller and add code to the GameManager to manage the enemies.

Transcripts

- 00:02 - 00:04 In this video we're going to set up the
- 00:04 - 00:08 animator controller for our enemy,
- 00:08 - 00:10 and we're also going to add some code to the
- 00:10 - 00:13 game manager to manage and move our enemies.
- 00:14 - 00:16 Let's start with the animator controller.
- 00:16 - 00:19 We're going to go to Animations - Animator Controller
- 00:19 - 00:22 and open our animator controller Enemy1.
- 00:24 - 00:26 We're going to create some transitions
- 00:26 - 00:28 from our default idle state
- 00:28 - 00:30 to our Enemy1Attack state
- 00:30 - 00:32 by right clicking on Enemy1Idle
- 00:32 - 00:34 selecting Make Transition
- 00:35 - 00:37 and clicking on Enemy1Attack.
- 00:37 - 00:40 We're also going to transition back to Enemy1Idle
- 00:40 - 00:42 once Enemy1Attack is complete.
- 00:46 - 00:48 Next we're going to set a condition
- 00:48 - 00:51 for when we want to transition from Enemy1Idle
- 00:51 - 00:52 to Enemy1Attack.
- 00:52 - 00:55 We're going to do this using a trigger parameter
- 00:55 - 00:56 as we did in the player.
- 00:56 - 00:59 In our list of parameters here let's click +
- 01:00 - 01:02 and we'll choose Trigger.
- 01:02 - 01:05 We're going to label this EnemyAttack.
- 01:06 - 01:08 And we'll add this to our list of conditions
- 01:08 - 01:11 from Enemy1Idle to Enemy1Attack.
- 01:12 - 01:15 Make sure that the transition is highlighted by clicking on it
- 01:15 - 01:17 and then we're going to click the + button
- 01:17 - 01:19 in the list of conditions.
- 01:19 - 01:22 Since there's only 1 parameter, EnemyAttack will be added.
- 01:24 - 01:27 We're also going to adjust the settings for this transition.
- 01:27 - 01:30 We're going to uncheck Has Exit Time
- 01:30 - 01:32 since we want to immediately transition
- 01:32 - 01:34 when it's time for the enemy to attack.
- 01:36 - 01:38 We'll unfold our settings,
- 01:38 - 01:40 set the transition duration to 0.
- 01:45 - 01:47 Transitioning out of Enemy1Attack
- 01:47 - 01:49 we will use Has Exit Time
- 01:49 - 01:52 and we'll set the exit time to 1
- 01:52 - 01:55 because we want to transition at the end of the animation.
- 01:59 - 02:02 We'll also set the transition duration to 0.
- 02:04 - 02:06 Next let's dock our animator
- 02:06 - 02:09 at the bottom of the screen and test the animation.
- 02:26 - 02:28 It's worth noting, because we created our animator
- 02:28 - 02:30 override controller for Enemy2
- 02:30 - 02:34 and are reusing the Enemy1 animator controller
- 02:34 - 02:36 we don't need to do this twice.
- 02:36 - 02:39 This same state machine will be used for both enemies.
- 02:41 - 02:43 Let's return to our Scripts folder
- 02:44 - 02:46 and open our Game Manager.
- 02:47 - 02:49 First, in Game Manager, we're going to declare
- 02:49 - 02:51 a public float called TurnDelay
- 02:51 - 02:54 and initialise it to 0.1 seconds.
- 02:54 - 02:57 This is how long the game is going to wait between turns.
- 02:57 - 03:00 We're also going to add to our namespace declarations
- 03:00 - 03:03 using System.Collections.Generic
- 03:03 - 03:06 We're going to do this so that we can use lists,
- 03:06 - 03:09 which we're going to use to keep track of our enemies.
- 03:09 - 03:12 Next we're going to declare a private list
- 03:12 - 03:15 of Enemy classes called enemies.
- 03:15 - 03:17 We're going to use this list enemies to keep
- 03:17 - 03:19 track of our enemies and to send them
- 03:19 - 03:20 their orders to move.
- 03:20 - 03:24 We're also going to declare a private boolean
- 03:24 - 03:26 called enemiesMoving.
- 03:27 - 03:29 With that done, in awake we're going to set
- 03:29 - 03:34 enemies to equal a new list of the type Enemy.
- 03:34 - 03:36 In InitGame we're going to clear our list of
- 03:36 - 03:40 enemies when the game starts using enemies.Clear.
- 03:41 - 03:43 We need to do this because the Game Manager will
- 03:43 - 03:45 not be reset when the level starts
- 03:45 - 03:48 and we need to clear out any enemies from the last level.
- 03:48 - 03:51 The next thing that we're going to do is declare our coroutine
- 03:51 - 03:53 called MoveEnemies.
- 03:53 - 03:55 We're going to use this to move our enemies
- 03:55 - 03:57 one at a time in sequence.
- 03:58 - 04:00 Inside our coroutine we're going to set
- 04:00 - 04:02 enemiesMoving to true.
- 04:02 - 04:04 Next we're going to yield and wait for the
- 04:04 - 04:06 amount of time set in turnDelay,
- 04:06 - 04:08 in this case 0.1 seconds.
- 04:08 - 04:11 We're also going to check if no enemies have been spawned yet,
- 04:11 - 04:14 which would be the case in the first level.
- 04:14 - 04:16 If they haven't we're going to add
- 04:16 - 04:20 an additional yield to cause our player to wait
- 04:20 - 04:22 even though there's no enemy that they're waiting for.
- 04:22 - 04:24 We're checking the length of our enemies list
- 04:24 - 04:27 using enemies.Count.
- 04:27 - 04:30 With our waiting done we're going to use a for loop
- 04:30 - 04:32 to loop through our enemies list
- 04:33 - 04:35 and issue the MoveEnemy command
- 04:35 - 04:37 to each of these enemies by calling
- 04:37 - 04:41 the MoveEnemy function in our enemy script.
- 04:41 - 04:43 As we call the MoveEnemy function
- 04:43 - 04:44 on each of the enemies in our list
- 04:44 - 04:46 we're going to wait before we call the next one
- 04:46 - 04:50 using yield, passing in the moveTime variable
- 04:50 - 04:52 of our enemies.
- 04:54 - 04:56 Once we've finished our loop we're going to set
- 04:56 - 04:58 playersTurn to equal true
- 04:58 - 05:01 and enemiesMoving to equal false.
- 05:02 - 05:05 Now that our MoveEnemies coroutine is ready to use
- 05:05 - 05:07 we're going to call that from within update.
- 05:08 - 05:10 The first thing we're going to do in update
- 05:10 - 05:12 is we're going to check if it's the playersTurn
- 05:12 - 05:15 or enemiesMoving is already true,
- 05:15 - 05:17 meaning the enemies are already moving.
- 05:18 - 05:20 If either of those are true we're going to return
- 05:20 - 05:23 and not execute the following code.
- 05:23 - 05:25 If neither are true we're going to start
- 05:25 - 05:27 our coroutine MoveEnemies.
- 05:29 - 05:31 Next we're going to declare a public function
- 05:31 - 05:34 that returns void called AddEnemyToList.
- 05:34 - 05:38 It's going to take a parameter of the type Enemy called script
- 05:38 - 05:41 We're using AddEnemyToList to have the enemies
- 05:41 - 05:44 register themselves with the Game Manager
- 05:44 - 05:46 so that the Game Manager can issue
- 05:46 - 05:47 movement orders to them.
- 05:47 - 05:49 In AddEnemyToList we're just going to use
- 05:49 - 05:52 the Add function of our enemies list
- 05:52 - 05:55 and pass in our parameter script.
- 05:55 - 05:57 And that's all we're going to need to do for now.
- 05:57 - 05:59 Let's save our Game Manager and return to the editor.
- 06:01 - 06:03 Now that we've added the needed functionality
- 06:03 - 06:06 to the Game Manager and set up our animator controllers
- 06:06 - 06:09 we're going to make 2 small additions to our Enemy script.
- 06:10 - 06:12 Let's open it up in Monodevelop.
- 06:12 - 06:15 The first thing that we're going to do, in the start function,
- 06:15 - 06:19 is have our Enemy script add itself to
- 06:19 - 06:22 our list of enemies in Game Manager.
- 06:22 - 06:25 By adding our enemy to the list this way
- 06:25 - 06:27 the Game Manager will now be able to call the
- 06:27 - 06:30 public function we declared MoveEnemy in Enemy
- 06:32 - 06:35 Lastly for the Enemy we're going to use SetTrigger
- 06:35 - 06:37 to set the EnemyAttack trigger
- 06:37 - 06:39 in the animator controller that we set up.
- 06:41 - 06:43 Let's save our Enemy script and return to the editor.
- 06:46 - 06:50 Back in the editor, let's add our Enemy script
- 06:50 - 06:51 to our Enemy prefabs.
- 06:51 - 06:53 We're going to go to Prefabs -
- 06:54 - 06:56 shift-click to highlight the two Enemy prefabs
- 06:57 - 07:01 choose Component - Scripts - Enemy.
- 07:03 - 07:06 We'll set the blocking layer for both enemies
- 07:07 - 07:09 to Blocking Layer
- 07:10 - 07:13 and we're going to define the PlayerDamage variable separately.
- 07:14 - 07:17 Enemy1 is going to do 10 damage
- 07:17 - 07:19 and Enemy2 is going to do 20.
- 07:19 - 07:22 This allows us to have a weaker and stronger enemy type
- 07:22 - 07:24 to make the game a little more interesting
- 07:24 - 07:26 Let's play our scene and give it a test.
- 07:31 - 07:33 When we move the player we can see that the
- 07:33 - 07:35 enemy moves every other turn,
- 07:35 - 07:37 we can collect our food pickups,
- 07:38 - 07:41 if we collide with the walls we can attack them
- 07:41 - 07:42 and break them.
- 07:43 - 07:47 And so, so far everything is working nicely.
- 07:47 - 07:49 We can see that the player's hit animation is
- 07:49 - 07:51 playing and that the enemy attack animation
- 07:51 - 07:52 is playing when it should.
- 07:52 - 07:54 The next step is going to be to add some
- 07:54 - 07:59 user interface elements, including title cards for the levels,
- 07:59 - 08:01 and some score text so that
- 08:01 - 08:03 the player can keep track of their
- 08:03 - 08:05 current food point total.
- 08:05 - 08:07 We'll do that in the next video.
GameManager Code snippet

using UnityEngine;
using System.Collections;
using System.Collections.Generic;	//Allows us to use Lists.

void InitGame()
{
	//Clear any Enemy objects in our List to prepare for next level.
	enemies.Clear();

	//Call the SetupScene function of the BoardManager script, pass it current level number.
	boardScript.SetupScene(level);
}

//Update is called every frame.
void Update()
{
	//Check that playersTurn or enemiesMoving are not currently true.
	if (playersTurn || enemiesMoving)
		//If so, return and do not start MoveEnemies.
		return;

	//Start moving enemies.
	StartCoroutine(MoveEnemies());
}

Related lessons:
- Animator Controller (Lesson)
- Lists and Dictionaries (Lesson)
- Inheritance (Lesson)
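The turn flow the transcript walks through (wait one turn delay, wait again if no enemies have spawned yet, move each enemy in sequence, then hand control back to the player) can also be sketched outside Unity. The following is a plain-Python illustration with hypothetical names, not the tutorial's C# or any Unity API:

```python
import time
from dataclasses import dataclass

@dataclass
class Enemy:
    move_time: float = 0.1   # seconds to wait after this enemy moves
    moves: int = 0           # how many times a move order was issued

    def move_enemy(self):
        self.moves += 1

def move_enemies(enemies, turn_delay=0.1):
    """Sketch of the MoveEnemies coroutine logic: returns the
    (players_turn, enemies_moving) flags the GameManager would set
    once the enemies' turn has finished."""
    enemies_moving = True
    time.sleep(turn_delay)           # wait between turns
    if len(enemies) == 0:
        time.sleep(turn_delay)       # extra wait when no enemies spawned yet
    for enemy in enemies:
        enemy.move_enemy()           # issue the move order
        time.sleep(enemy.move_time)  # wait before moving the next enemy
    players_turn = True              # hand control back to the player
    enemies_moving = False
    return players_turn, enemies_moving
```

The sequencing matters more than the sleeps: each enemy acts one at a time, and only after the whole list has moved does the player get their turn back.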
https://unity3d.com/pt/learn/tutorials/projects/2d-roguelike-tutorial/enemy-animator-controller
Tax Refunds ~2700 answered questions
Parent Category: Income Taxes

Tax refunds refer to the amount returned to the taxpayer after the taxing authority determines that the actual tax paid is greater than the tax liability. Taxpayers can usually avail of the refund at the end of every financial year.

Are state income tax payments deductible on federal taxes?
Yes, state income taxes are deductible against federal income; not the amount you owe the state, but the amount you actually paid through withholding, prior year credits, payments with the prior year state return, and/or estimated payments, during the calendar year for which you are filing Form 1040…
Popularity: 92

Can you return a car after being tricked into tax evasion?
Yes, but unfortunately it is tax evasion on your part too. I had the exact same thing happen to me. I bought a car for $3500 10 years ago, and the dealership wrote it up at $750.00, claiming it would save me money. What they really did was two-fold. 1. In most states, the lemon law/…
Popularity: 199

Is there a place to find rules and regulations for repossessing vehicles in the state of Indiana?
If this is a contract sale of a motor vehicle, then Indiana Code 26-1-9.1-609 "Secured Party's right to take possession after default". In Indiana - self help repossession is permitted as long as there is no breach of the peace. WHOOPS - you want to be a repo man. There are no …
Popularity: 35

How does a car repo affect your credit score and how do you fix it?
It makes it LOWER. 2 ways. Get it taken off or wait until it goes away.
Popularity: 39

Can a vehicle be taken in the middle of the night without warning for one late payment?
i…
Popularity: 30

Why has Andrew Divoff stopped doing television and will he return soon?
Andrew Divoff From Wikipedia: Andrew Divoff has played many villains in film and on television and is best known for playing Djinn in the first two films of the Wishmaster series.
His films include Another 48 Hours, The Hunt for Red October and Toy Soldiers. Andrew Divoff's TV guest appe… Popularity: 133

Can you find out if you owe any stores money from past bounced checks? You would need to see if you had been reported to SCAN. Popularity: 147

Can you be sued as an authorized user by the primary card holder after he has written off credit card charges as a business expense on his tax return?

How can you file for bankruptcy? How to file for bankruptcy online: The following paragraph sounds very intimidating to a bankruptcy consumer, but in reality, it is not. The stack of forms to file bankruptcy is about 1/4 inch thick, there are mandatory Credit Counseling and Debtor Education courses to take, and a Meeting of Credito… Popularity: 158

Will the bankruptcy court take a tax refund? It depends on the individual. Your lawyer should be able to answer this question the best because they know the situation. I had this same question. My lawyer told me that no, they will not take tax refunds. However, if you owe student loans to the government they could take your refund. I had no pro… Popularity: 107

Can NSF checks be discharged with a bankruptcy? Yes, but beware of the bank or the recipient of the check claiming fraud. Popularity: 66

If you filed chapter 13 bankruptcy, will the trustee take your tax refund? Yes. For 3 years. They do not take it all. You will get to keep your EIC and certain other credits that may be given that year. This is per my bankruptcy lawyer. Popularity:

How long after a chapter 7 discharge can a trustee take windfall money or tax returns or proceeds from property sales? The answer to this question isn't cut and dry, since the trustee can reach quite far into the future to get an asset if they think the debtor had the asset during the bankruptcy case and failed to disclose it or accurately represent its value.
But, assuming it is truly an unexpected windfall, I thi… Popularity: 34 In Nebraska can you use a judgment to garnish a person's tax refund? No, the seizure of tax refunds both federal and state can only be done with a court order pertaining to circumstances specified under state and/or federal law. An example would be the seizure of the tax refund to pay court ordered child support that is in arrears. Popularity: 141 How can you put a lien on someone's income tax return for your summary judgment? Liens are only used against real property. It might be possible to levy the person's bank account. MSJ are issued by judges and are often easily overturned. Popularity: 70 Can the entire amount of a state and federal tax refund be taken for child support that is owed? Yes, the entire state and federal refund can be seized for payment of child support arrearages. Popularity: 69 If you want to file bankruptcy on your own how do you go about getting the forms to file? You can usually pick up a set form the local bankruptcy court. You can also purchase a set and a paper goods store like Staples or office Depot. You can also purchase them online. It is really difficult to do on your own, so please consider contacting a qualified bankruptcy attorney. The fi… Popularity: 103 Can they take earned income credit from a tax return for back child support even though it is being paid now? Yes they will intercept your taxes until that balance of back pay is paid and current. Then once you are paid you may have to ask that state that intercepted your tax returns to do a tax offset review. Give all documents that show you have paid and or current, then they may dismiss the lien against … Popularity: 20 ref… Popularity: 76 If you are an illegal alien and your car is repossessed can they come after you for the balance of the note if you return to Mexico? I doubt it and serves them right for having sold something to an illegal alien in the first place. 
Anyone can go after anyone else in a court of law for anything they want. So I guess the quick answer is yes they can go after you. What you are asking is can they collect once you run and hide … Popularity: 16 If you file bankruptcy because of personal debts related to your business then sell your business after the bankruptcy can the courts take your proceeds? YES. All of your property is considered in a bankruptcy. Your creditors have every right to get at ALL of your property including your business assets. I would be very surprised if the court didn't order the sale of the business to satisfy the creditors demands. Popularity: 26 Will the IRS or state take your tax refund due to delinquent student loans? Answer Yes. (Although in fairness, it isn't the IRS taking it, it is the IRS can be instructed to give it to the government agency you owe). Popularity: 32 If a dependent filed their taxes and checked the box that they can't be claimed as a dependent but they can how can this be remedied? The child would have to have their tax return amended and repay any money that they should not have got. If the parents also filed incorrectly, theirs would have to be amended as well. Popularity: 63 Can credit card creditors take your income tax refund? No, neither federal nor state tax refunds are subject to creditor garnishment or seizure. Tax refunds can only be seized or garnished for, taxes that are due, child support, federally funded student loans and in some cases spousal maintenance (alimony). Popularity: 29 Does a borrower to receive a letter from lender after they Repo a car? Y nsible to pay the difference, even though you will have nothing to show for it. 
If you do not pay, the bank/lender will file for a judgment against you and (where allowed by law) will have the right to place a lien against your home or other property and/or garnish your wages and or attach your … Popularity: 15 If a parent gives their adult child a monetary gift of 10000 cash will the parent be allowed to write that off on their tax return in the year they actually gave the gift? Answer No. If you give your kid money, this doesn't make this money a write-off on the parent's tax return. Answer Basically, everything you give your child..food, clothes, allowance, etc is a gift...but like most all life expenses, it isn't deductible! What you may be thinking of is th… Popularity: 23 If a chapter 13 bankruptcy was dismissed in June of 2005 when could tax refund offset of a student loan be expected? Answer If you are due a refund for taxes filed for the 2005 tax year, that refund can be siezed to offset the student loan - and every refund after that too. Popularity: 39 What is the minimal amount of income required to receive anything on tax returns? There is no minimum if you've had taxes taken out and you can claim a refund, i.e., if your income is low enough and you don't owe any tax, even if you're not required to file a return, you can claim a refund for what you've paid in.If I am not mistaking , If you make under 10,000 you will get money… Popularity: 57 Will the bankruptcy court seize your refund if you had a child tax credit filed in March of 2005 in Mississippi? No, I doubt it. Taxes are different than income. You get the child tax for having kids, and refunds for overpaying on your taxes.....Taxes which you have to pay on income. It isn't like an inheritance or extra profit. Popularity: 23 How would the IRS contact someone that owes them money? Answer When a taxpayer owes back taxes or penalties the IRS always contacts them in written form. 
There will be a cover letter on official IRS stationery and additional information such as forms that can be used for disputing the claim, requesting a payment schedule and so forth. They may also c… Popularity: 17

In the state of Ohio, is a chapter 7 bankruptcy trustee entitled only to the portion of a tax refund from the date of the bankruptcy filing? Answer: Depending upon the amount of time between the filing of the BK and the filing of the tax return, a refund may be pro-rated to determine the portion that is included as a BK asset. And if anything, the portion to be taken by the trustee is the part relative to before the filing. Popularity: 12

If a final decree states that the trustee is hereby discharged and any bond required is cancelled and that the chapter 7 case of the debtor is closed, might they still ask for your tax refund? They may not be able to ask for your tax refund because they have freed you of all obligations. However, you need to read the paperwork and make sure there is no language that gives the right to go after the money in the future. Popularity: 2

Can the bankruptcy trustee include as part of your estate tax refunds which were received prior to the filing of your bankruptcy petition? Answer: The quick answer would be no. Any money received prior to filing would not be included in your bankruptcy estate and is thus not recoverable by the trustee. Any money still owed to you would be part of the estate and would be collectible. What I would want to know is if the money was complet… Popularity: 31

Can a person get into trouble if they lie to the child support office? Answer: Yes, and depending upon the type of false information given, "trouble" might be an understatement.
Another Answer: What if the mother is the one who lied to get more child support but refuses to let you see the kids and court officials don't do their job and allow her to keep up her crap?… Popularity: 21 If you are entitled to a large inheritance right after bankruptcy how much can the trustee take? Your bankruptcy trustee has the right to receive your share of the inheritance within 6 months of filing your case. The trustee has the right to receive it all. Typically what happens though is the trustee receives the full amount and then makes a determination of how much is needed to satisfy your … Popularity: 26 How can you get professional help with a very complicated tax matter if you cannot afford an accountant and the IRS and state are threatening to garnish your modest wages? Answer File paperwork for help from a Tax Advocate - a division of the government, NOT associated with the IRS aimed at helping individuals with tax issues /problems.... Beware - their help is limited but they may at least be able to get you an extension / point you in the right direction at no c… Popularity: 37 How can you find out about money owed to someone from a liability claim 15 years ago? Must have been perhaps an annuity settlement? Or was there a judgement? Call the insurance company's main office, and start there, do you have any information at all? Such as the claim number etc? If you have trouble finding the company let me know specifics and maybe I can help you find out the con… Popularity: 20 If your car gets repossessed and you choose not to pay for it can they take your tax return? If the lender has placed a judgment against you for the deficit balance (balance left after the car is auctioned/sold), then yes in most states, they can take the tax return and apply it to the balance owing. 
Popularity: 4 While married and self-employed you filed joint tax returns with your now ex-husband who received tax credits for all the 5 years but you had a business so how do you get the tax credits split up? Answer If the refund was made out to both spouses then it belongs equally to both unless or until a court rules otherwise. If the ex-husband forged the wife's signature and cashed the refund check he may have some explaining to do on several levels. Be that as it may, the injured spouse's only … Popularity: 2 How much money does a tax preparer earn at H and R Block? Answer H&R Block's starting pay is different at each office. The offices are independently owned and operated so the pay varies. I work at an office in MI and the starting pay is $11.00 for a first year tax pro. And depending on one's ability, first year pros do schedule C's, (Small business … Popularity: 43 What should you do if your husband will not allow you to file a tax return? It is a federal offence not to file your taxes. Is he trying to land you in jail? Is he willing to pay a hefty fine for you not filing. Did you know that you can file separately and it will not affect him. The only difference with a joint filing is a bigger return ( if your combined monies put you i… Popularity: 21 If your husband will not let you file a tax return should you contact the IRS or a third party for assistance? Answer Tax evasion is a very serious issue. If the IRS charges someone with tax evasion takes them to court and wins the case (they almost always do) the involved parties can literally lose everything they own and can be subject to imprisonment. DO NOT contact the IRS. The best option is to sp… Popularity: 6 Where can you find Alexis Bledel's address? she lives in brooklyn NY Popularity: 34 How do you if you have income file taxes after marrying a non-US citizen who is here on a student visa who has no income to report? 
In theory the non citizen spouse is not required to file a return, but the other spouse as a citizen has only two choices: either married filing joint or married filing separate. Most likely you will probably want to file jointly so that you can claim your spouse's exemption amount as well as being… Popularity: 17 DE… Popularity: 16 In Texas can a judgment be used to seize an income tax refund? No if it is for creditor debt.Yes if it is for child support or tax arrearages. Popularity: 10 If the custodial parent does not want child support can the minor child file for the support themselves? Popularity: 6 Who can be claimed as a dependent on a federal income tax return? Answer Officially: A person, other than the taxpayer or the taxpayer's spouse, for whom an exemption can be claimed. To be your dependent, a person must be your qualifying child or qualifying relative. For more information, see Exemptions for Dependents in Publication 501. And I'll try to p… Popularity: 17 If your wages are garnished can they also take your tax refund? Tax refunds cannot be directly seized by a creditor judgment.The IRS can seize tax refunds without the use of due process for federal taxes owed.State and federal tax refunds can be seized for child support arrearages. Popularity: 14 If you are arrested for drugs and you have to pay taxes on those drugs can you claim the amount on your tax return? The answer is: HELL NO I DO IT ALLLLLL THE TIME o to make it better have sex while doing thm ;D Popularity: 27 … Popularity: 23 Why would you pay someone to file your federal income tax return online when you can file for free on the IRS website? Filing Federal Taxes OnlineThe reason is, the majority of taxpayers cannot file their taxes online for free, you must be qualify under the IRS guidelines.Visit the IRS website for complete information. 
Filing TaxesWhether it be on line and efile, or by paper..many people find taxes, in fact any… Popularity: 1 What do you do if you filed your taxes as married filing jointly but you forgot to add your spouse's income and are there any penalties associated with this if your spouse made under 5000? Answer You will need to file an amended return, and the quicker the better. If it is for a very recent tax year, you may be able to avoid any interest charges. Do not forget you may have to do a similar process for your State. Officially: If you discover an error after your return ha… Popularity: 34 What are the tax implications after your property has been foreclosed on and how does that affect an income tax return? Answer One consideration would be that cancellation of a debt becomes income. However, as you can find in many discussions here, if your foreclosed and the sale of the property does not return enough funds to fully satisfy the debt, the deficiency is normally not forgiven. Instead you remain owin… Popularity: 27 Will a wage garnishment affect a tax refund? Answer It depends on who garnished your wages to begin with. If it was the taxing authority, your state Dept. of Tax, or the IRS, yes the garnishment will include your tax refunds, if the Agent you're dealing with knows what he's doing. Only the government or a judicial court can take a tax refun… Popularity: 14 Can a creditor take your federal or state income tax refund to pay a judgment? If the judgment is for state or federal taxes then any refund is subject to seizure by the agency holding the judgment. If it is a creditor judgment, a tax refund would only be subject to attachment if it were placed in a bank account that was being levied by the judgment creditor. I would consult w… Popularity: 34 What is covered when you file for bankruptcy? when i file a business bankruptcy are nsf checks covered? payroll and to vendors Before filing for bankruptcy, remember that it remains in record for 10 long years. 
Meet good financial lawyers to be able to handle this issue properly. I know financial experts who offer free consultation regarding… Popularity: 6. Popularity: 2 If you employ less than 3 can a contractor charge for workmans comp? Answer Contractors' Bill and Charges The key word here is "contractor". The question of the size of the employer's workforce becomes moot when (s)he must deal with a contractor; because, a contractor is not an employee. Those things which govern the actions of and expectations from an employee by… Popularity: 23 whic… Popularity: 8 Can a student loan company withhold your income tax returns? Yes, if you have defaulted on a student loan your taxes can in fact be taken by the government to repay the loan. Please note that they will also charge additional interest as well as penalties. If you have made payment arrangements with a your guarantor once you have made 4-6 consecutive monthly pa… Popularity: 37 How long does it take to get an IRS refund if you file late? Answer Obviously this can only be addressed on a generally basis, many factors - how well the return is prepared, how complex it is, etc. effect it. Normally refund checks take a 4-6 week process. Electronic deposits, are faster. When you say "file late", that is generally interpreted as in afte… Popularity: 28 What do you need to know about filing your tax return if you have filed bankruptcy during that year.? Answer No, you still owe the government. Ans Bankruptcy proceedings begin with the filing of a petition with the bankruptcy court. The filing of the petitions creates a bankruptcy estate, which generally consists of all the assets of the person filing the bankruptcy petition. A separate tax… Popularity: 20 If you file a chapter 13 bankruptcy and include taxes years 2004 2005 can the IRS take your refund in 2007 for what you owe for tax year 2004? 
Ans: I believe that the IRS maintains its right to offset (that is clearly legal/proscribed) and what is being done in something like this, even in cases of BK. Whether that right is absolutely always allowed may be another story… b… Popularity: 9

What percentage of the take do pimps usually keep? I normally take around 30-35%... it depends on the market. Currently with the NSW Police upping their patrols in the Mayfield and Islington area, business is down around 18%. As such my cut has dropped to 25% to allow fair trading. 100% as any true pimp.... 100% always Popularity: 14

What tax forms must be considered when filing a consolidated C corporation tax return when assets are less than 10 million dollars? Answer: That you are asking implies you may want to get some additional guidance. The 1120 series is used to file C corp returns. Each co in the group should have its own entire pro-forma return made, and an elimination Co made on the Consolidating schedules for tax to produce one combined entit… Popularity: 24

Can you find out if your tax refund will be taken for a past debt, i.e. student loans, before you file? Popularity: 42

Self-employed tax returns? Same as any other... the source of the income is different. Some different things may be applicable to fill out... Those that are self-employed will have to pay self-employment tax four times a year. This ensures that you will not have to pay one lump sum at the time your last return is filed. … Popularity: 3

What does pumping effects mean in the sound reinforcement system? This is a fault also known as breathing. Too much sound compression can cause the compressor/limiter circuit to engage and disengage rapidly. Popularity: 1

Money weights in grams?
co… Popularity: 23

Would hotel investment in Canadian cities like Toronto, Vancouver and Calgary yield higher returns than investment in US cities such as LA, San Francisco and Washington if investment occurs in 2007? Answer: Canada sucks with real estate investments due to a hefty tax that is imposed on non-residents. Real estate investments in Los Angeles, San Francisco and Washington seem to have less risk and more gain in equity. Once again I'd recommend not to invest in Canada due to its tax laws. Popularity: 33

Should you file a US tax return if you were working abroad and paying tax abroad and claimed as a dependent on your father's US tax returns? Answer: You still need to file if you meet the minimum income requirement for your circumstances. US citizens are taxed on their worldwide income, although you may be able to qualify for the foreign earned income exclusion. Even if you qualify for the exclusion and subsequently have no taxable inc… Popularity: 15

Who can file a joint income tax return? In order to file a joint return, the parties must be married at the end of the year, living together in a recognized common law marriage, or married and living apart but not legally separated or divorced. You can also file a joint return for the year in which your spouse died. But that is only the o… Popularity: 22

Do both spouses have to file bankruptcy? Yes. Popularity: 1

Change of mailing address? The following link is to the USPS and you can change your address online: Popularity: 3

What can you do to lower tax when you received severance pay and a retention bonus in one year that pushed you into a much higher tax bracket than normal? Answer: As it has already been done, very little. It is earnings in that year. It is unlikely it would push you to a much "higher tax bracket", as there really are only a few brackets! (And the low one drops off at about 18K).
So, while you may be paying more cash this year, you're probably paying … Popularity: 16

How do you file chapter 7 together or separately if you are separated? When you file a joint bankruptcy, you and your partner file a single set of bankruptcy papers with the court. In your bankruptcy petition, you disclose all property, debt, income, and expenses between both you and your partner. Popularity: 1
http://www.answers.com/Q/FAQ/3669
Something else along the same lines would be to set class="odd" and class="even" on each row, so you can change the background for each row...
-- William Dode

Aahz wrote:
> Felix Wiemann wrote:
>> David Goodger wrote:
>>> [CVS checkin of BUGS.txt]
>>>
>>> * Are you using the latest Docutils code? If not, the bug may have
>>> already been fixed. Please get the latest code from CVS_ or from
>>> the `development snapshot`_ and check again.
>>
>> Reporting a bug should be made as easy as possible.
>
> [...] we *DON'T* want to make submitting bugs too easy; it eats up
> valuable developer time.

A few days ago I noticed an open bug in Docutils on SF which I knew was already fixed. So I wrote "Fixed in CVS. Get the most current snapshot from <http://...>." and I marked the bug as 'closed'. It took approximately two minutes. Considering the fact that such bugs which are already fixed in CVS aren't reported very often, I wouldn't say that it really 'eats up valuable developer time'. In fact, it's (at least for software without a GUI, like Docutils) usually faster for a developer to verify that a bug has already been fixed than for a user to get a current snapshot and verify that the bug has not been fixed yet. So economically it makes little sense.

Furthermore, there are quite some disadvantages to demanding a current snapshot before reporting a bug:

1. What does the user do when he sees that he should download a snapshot before reporting the bug?

   a) He downloads the snapshot.
   b) He doesn't download the snapshot and reports the bug anyway.
   c) He doesn't report the bug at all.

   The risk of c) is much higher than the benefit of a).

2. If we released a version of Docutils which still contains undocumented bugs, it's our fault. We should be thankful that the user bothers writing a bug report at all. It's simply bad style to impose any major requirements for reporting a bug in an official release.
So if there isn't anybody who thinks that this paragraph in BUGS.txt is *absolutely* necessary, I'd very much like to remove it.
--

Hi,

About cell alignment in tables: now that we can assign a class to a table, it would be easy to change the alignment with CSS (I'm speaking about the HTML writer) if we could have a different class for each column, like this:

  <td class="col1">111</td>
  <td class="col2">222</td>
  ...

I did it - just 3 lines in the HTML writer. What do you think about that?

*** html4css1.py~	2004-05-09 18:43:08.000000000 +0200
--- html4css1.py	2004-06-04 18:34:09.000000000 +0200
***************
*** 585,596 ****
      def depart_emphasis(self, node):
          self.body.append('</em>')

!     def visit_entry(self, node):
          if isinstance(node.parent.parent, nodes.thead):
!             tagname = 'th'
          else:
!             tagname = 'td'
          atts = {}
          if node.has_key('morerows'):
              atts['rowspan'] = node['morerows'] + 1
--- 585,597 ----
      def depart_emphasis(self, node):
          self.body.append('</em>')

!     def visit_entry(self, node):
          if isinstance(node.parent.parent, nodes.thead):
!             tagname = 'th class="col%d"' % self.td_col
          else:
!             tagname = 'td class="col%d"' % self.td_col
!         self.td_col += 1
          atts = {}
          if node.has_key('morerows'):
              atts['rowspan'] = node['morerows'] + 1
***************
*** 1024,1029 ****
--- 1025,1031 ----
          self.depart_docinfo_item()

      def visit_row(self, node):
+         self.td_col = 1
          self.body.append(self.starttag(node, 'tr', ''))

      def depart_row(self, node):

-- William Dode

Use the --trim-footnote-reference-space option (trim_footnote_reference_space setting) to "Remove spaces before footnote references".
-- David Goodger <>

davidg, this is not writer business, or?

---------- Forwarded message ----------
Date: Thu, 03 Jun 2004 14:21:27 -0400
From: David Abrahams <dave@...>
To: Englebert Gruber <engelbert.gruber@...>
Subject: space before footnote

To make a footnote be recognized after a word in ReST, I need to precede it with a space:

this is the word that is foo [#bar]_.
                            ^ here

If you don't strip that trailing space, LaTeX may format the footnote number at the beginning of a new line:

this is the word that is foo
1.
^ here

which is awkward. Can you strip that trailing space?

-- Dave Abrahams
Boost Consulting
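For readers who find the context diff hard to follow, here is a minimal, self-contained Python sketch of the same per-column-class idea (illustrative only: `render_row` is not part of the Docutils API — the real patch keeps the column counter as state on the HTML translator object instead):

```python
def render_row(cells, header=False):
    """Render one HTML table row, tagging each cell with a col<N> class.

    A per-row counter numbers the columns, so CSS rules such as
    td.col1 { text-align: right; } can style individual columns.
    """
    tag = "th" if header else "td"
    parts = ['<%s class="col%d">%s</%s>' % (tag, i, cell, tag)
             for i, cell in enumerate(cells, start=1)]
    return "<tr>%s</tr>" % "".join(parts)
```

For example, render_row(["111", "222"]) yields '<tr><td class="col1">111</td><td class="col2">222</td></tr>', matching the markup shown in the message above.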
https://sourceforge.net/p/docutils/mailman/docutils-develop/?viewmonth=200406&viewday=4
[3.0] Ext.ux.form.ResizableFieldSet - ability to drag to increase height of fieldset

Just finished a successful upgrade to 3.0 and thought I'd share a few extensions. This first one is very simple - it just gives you a resizable drag bar on the south portion of a fieldset, allowing you to increase/decrease its height. I had a grid inside my fieldset, so this allows more/fewer rows in the grid to be viewed at the same time (the grid will auto-size if you're using a fit/border layout within your fieldset).

Edit: I have turned this into a plugin. Usage - plugins: [new Ext.ux.form.ResizableFieldSet()]

Code:

Ext.namespace("Ext.ux.form");

Ext.ux.form.ResizableFieldSet = Ext.extend(Ext.util.Observable, {
    init: function(fieldSet) {
        // apply resizer to south only (after render)
        fieldSet.on("afterrender", function() {
            this.resizable = new Ext.Resizable(fieldSet.id, {
                handles: "s",
                listeners: {
                    resize: {
                        fn: function(r, w, h) {
                            fieldSet.setHeight(h);
                        }
                    }
                }
            });
        }, this);
        fieldSet.on("collapse", function() {
            fieldSet.getEl().setStyle("height", null);
        }, this);
    }
});

why not hoist this up to Ext.Panel instead? with that in place, even your Grid sitting inside your Fieldset can be easily resized.

scratch that. i was thinking of a plugin. that, plus i just saw this:

Last edited by mystix; 13 Aug 2009 at 7:32 PM. Reason: edit

Good point about the plugin - I've turned it into a plugin and updated the code in the original post.

ooh. nice. might be better to rename it to Ext.ux.plugin.ResizablePanel though (and change the init() signature to init: function(panel) instead), since you can now drop this plugin into any Panel / subclass thereof.

Yea - it's just I haven't tried it with anything other than a fieldset!

Hello, I tried this plugin today and am seeing something strange. Please see the attached screenshot - the "resize" line goes in the middle of the fieldset, not across the bottom border. Is that expected? Thanks.
https://www.sencha.com/forum/showthread.php?77561-3.0-Ext.ux.form.ResizableFieldSet-ability-to-drag-to-increase-height-of-fieldset
Is the navigate method available in Selenium WebDriver with Python?

The line of code you have tried:

driver.navigate().to('')

is a typical Java-based line of code. However, as per the current Python API docs, the WebDriver implementation's navigate() method is yet to be supported/implemented.

To navigate to a webpage you just write:

driver.get('')

You can do this in your program multiple times. Hope this helps!
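To illustrate the answer: since the Python bindings expose no navigate(), repeated navigation is just repeated driver.get() calls. A small sketch follows — the revisit helper and RecordingDriver are made up for illustration; any object with a get(url) method works, which also makes the pattern easy to exercise without a real browser:

```python
def revisit(driver, urls):
    # In Java you would write driver.navigate().to(url); in Python,
    # driver.get(url) performs the same navigation and can be called
    # as many times as needed within one session.
    for url in urls:
        driver.get(url)

class RecordingDriver:
    """Stand-in for a selenium.webdriver driver; records visited URLs."""
    def __init__(self):
        self.visited = []
    def get(self, url):
        self.visited.append(url)
```

With a real driver you would instead create, say, webdriver.Firefox(), and the same revisit() call works unchanged.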
https://www.edureka.co/community/79035/navigate-method-available-selenium-webdriver-with-python?show=79081
The Exchange support team relatively frequently receives cases where mobile devices using Exchange ActiveSync (EAS) protocol send too many requests to Exchange server resulting in a situation where server runs out of resources, effectively causing a ‘denial of service’ (DOS) attack. The worst outcome of such a situation is that the server also becomes unavailable to other users who may not be using EAS protocol to connect. We have documented this issue with possible mitigations in the following KnowledgeBase article: 2469722 Unable to connect using Exchange ActiveSync due to Exchange resource consumption A recent example of this issue was Apple iOS 4.0 devices retrying a full sync every 30 seconds (see TS3398). Another example could be some devices that do not understand how to handle a ‘mailbox full’ response from the Exchange server, resulting in several tries to reconnect. This can cause such devices to attempt to connect & sync with the mailbox more than 60 times in a minute, killing battery life on the device and causing performance issues on server. Managing mobile devices & balancing available server resources among different types of clients can be a daunting challenge for IT administrators. Trying to track down which devices are causing resource depletion issues on Exchange 2010/2007 Client Access server (CAS) or Exchange 2003 Front-end (FE) server is not an easy task. As referenced in the article above, you can use Log Parser to extract useful statistics from IIS logs (see note below), but most administrators do not have the time & expertise to draft queries to extract such information from lengthy logs. The purpose of this post is to introduce everyone in Exchange community to a new PowerShell script that can be utilized to identify devices causing resource depletion issue, help in spotting performance trends and automatically generate reports for continuous monitoring. 
Using this script you can easily & quickly drill into your users' EAS activity, which can be a major task when faced with IIS logs that can get up to several gigabytes in size. The script makes it easier to identify users with multiple EAS devices. You can use it as a tool to establish a baseline during periods of normal EAS activity and then use that for comparison and reporting when things sway in other directions. It also provides an auto-monitoring feature which you can use to receive e-mail notifications.

Note: The script works with IIS logs on Exchange 2010, Exchange 2007 and Exchange 2003 servers. All communication between mobile devices using the EAS protocol and Microsoft Exchange is logged in IIS logs on CAS/FE servers in W3C format. The default W3C fields enabled for logging do vary between IIS 6.0 and 7.0/7.5 (IIS 7.0 has the same fields as 7.5). This script works against both versions. Because EAS uses HTTP, all EAS requests are logged in IIS logs, which is enabled by default. Sometimes administrators may disable IIS logging to save space on servers. You must check whether logging is enabled or not and find the location of log files by following these steps: IIS 7 IIS 6

Before we delve into the specifics of the script, let's review some important requirements for mobile devices that use EAS. The script utilizes Microsoft Log Parser 2.2 to parse IIS logs and generate results. It creates different SQL queries for Log Parser based on the switches (see table below) you use. A previous blog post, Exchange 2003 - Active Sync reporting, talks about Log Parser and touches on similar points. The information in that post still applies to Exchange 2010 & 2007. Since that blog post, additional commands were added to the EAS protocol, which are also utilized by this new script while processing the logs. Here's a list of the EAS commands that the script will report in results: For more details about each EAS command, see the ActiveSync HTTP Protocol Specification on MSDN.
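The per-command columns the script reports can be approximated by tallying the `Cmd=` query parameter per device. Again a hypothetical sketch, not the script's own code; it assumes rows have already been filtered to `/Microsoft-Server-ActiveSync` hits as in the earlier example.

```python
# Sketch: tally EAS commands (Cmd=...) per device so heavy Sync/Ping senders
# stand out, mirroring the script's per-command result columns.
from collections import Counter
from urllib.parse import parse_qs

def commands_by_device(rows):
    # rows: dicts of W3C fields, one per request, already filtered to EAS traffic
    tally = Counter()
    for row in rows:
        qs = parse_qs(row.get("cs-uri-query", ""))
        cmd = qs.get("Cmd", ["Unknown"])[0]
        tally[(qs.get("DeviceId", ["-"])[0], cmd)] += 1
    return tally
```

A device sending thousands of `Sync` commands but almost no `Ping` commands, for instance, suggests a retry loop rather than normal push behavior.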
In addition to these commands, the following parameters are also logged by the script. Note: The existence of the 503 status code does not imply that a server must use it when becoming overloaded. Some servers may wish to simply refuse the connection. (Ref: RFC 2616)

InvalidContent, ServerError, ServerErrorRetryLater, MailboxQuotaExceeded, DeviceIsBlockedForThisUser, AccessDenied, SyncStateNotFound, DeviceNotFullyProvisionable, DeviceNotProvisioned, ItemNotFound, UserDisabledForSync

You can process logs using this script to retrieve the following details: Please make sure you have the following installed on your machine before using this script: Below are some examples (with commands) on how you can use the script and why you might use them.

The following command will parse all the IIS logs in the folder W3SVC1 and only report the hits by users & devices that are at or above the MinimumHits value. [In the above command, the script 'ActiveSyncReport.ps1' is located at the root of the C drive, the -IISLog switch specifies the default location of IIS logs, the -LogparserExec switch points to the location of the Log Parser executable, the -ActiveSyncOutputFolder switch provides the location where the output or result file needs to be saved, and MinimumHits with a value of '1000' is the script parameter explained in the above table.]

Output: Usually if a device is sending over 1000 requests per day, we consider this 'high usage'. If the hits (requests) are above 1500, there could be an issue on the device or environment. In that case, the device & its user's activity should be further investigated.

As a real world example, in one case we noticed there were several users who were hitting their Exchange server via EAS a lot (~25K hits, 1K hits per hour), resulting in depletion of resources on the server. Upon further investigation we saw that all of those users' requests were resulting in a 507 error on mailbox servers on the back-end.
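The MinimumHits idea amounts to a simple threshold filter over the per-device totals. A sketch under the same assumptions as the earlier examples (the threshold of 1000/day is the post's own rule of thumb):

```python
# Sketch of the MinimumHits switch: keep only users/devices at or above a
# daily request threshold (the post treats ~1000 requests/day as "high usage").
def high_usage(hits, minimum=1000):
    # hits: mapping of (user, device) -> request count, e.g. from eas_hits()
    return {key: n for key, n in hits.items() if n >= minimum}
```

Filtering this way turns a multi-gigabyte log into a short list of suspects worth investigating.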
Talking to those EAS users we discovered that during that time period they were hitting their mailbox size limits (25 MB) & were trying to delete mail from different folders to get under the size limit. In such situations, you may also see HTTP 503 ('TooManyJobsQueued') responses in IIS logs for EAS requests, as described in KB 2469722.

The following command will parse all the IIS logs in the folder W3SVC1, look for the Device ID xxxxxx and display its hourly statistics.

.\ActiveSyncReport.ps1 -IISLog "C:\inetpub\logs\LogFiles\W3SVC1" -LogparserExec "C:\Program Files (x86)\Log Parser 2.2\LogParser.exe" -ActiveSyncOutputFolder c:\EASReports -DeviceID xxxxxx -Hourly

Output: With the above information you can pick a user/device and see the hourly trends. This can help identify if it's a user action or a programmatic one.

As a real world example, in one case we had to find out which devices were modifying calendar items. So we looked at the user/device activity and sorted that by the different commands they were sending to the server. After that we just concentrated on which users/devices were sending the 'MeetingResponse' command and its frequency, time period & further related details. That helped us narrow the issue to the related users and their calendar-specific activity, to better address the underlying calendaring issue. Another device-related command & error to look for is the 'Options' command; if it does not succeed for a device, the HTTP 409 error code is returned in the IIS log.

The following command will parse only the files that match the date 12-24-2011 in the folder W3SVC1 and will only report the hits for 12-24-2011.

Output: With the above information you can identify users sending a high number of requests. Also, within the columns, you can see what kind of commands those users are sending. This helps in coming up with more directed & efficient troubleshooting techniques.
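The -Hourly view that distinguishes a user action from a programmatic one can be sketched as bucketing one device's requests by hour of day. Hypothetical helper, using the same W3C-field row dicts as the earlier examples:

```python
# Sketch of the -Hourly switch: bucket one device's requests by hour of day.
# A flat, round-the-clock profile suggests a programmatic retry loop; spikes
# during working hours suggest a user action.
from collections import Counter
from urllib.parse import parse_qs

def hourly_profile(rows, device_id):
    # rows: W3C field dicts with 'time' as HH:MM:SS and DeviceId in the query string
    buckets = Counter()
    for row in rows:
        qs = parse_qs(row.get("cs-uri-query", ""))
        if qs.get("DeviceId", ["-"])[0] == device_id:
            buckets[row["time"][:2]] += 1   # hour component of HH:MM:SS
    return buckets
```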
When analyzing IIS logs with the help of the script, you should look for one specific command being sent over and over again. The frequency of particular commands being sent is important; any command failing frequently is also very important & one should look further into that. We should also look at & compare the wait times between the executions of certain commands. Generally, commands taking a longer time to execute or resulting in a delayed response from the server will be suspicious & should be further investigated. Keep in mind though, the Ping command is an exception, as it takes longer to execute and you will see it frequently in the log as well, which is expected.

If you notice continuous failures to connect for a device with an error code of 403, that could mean that the device is not enabled for EAS based access. Sometimes mobile device users complain of connectivity issues not realizing that they're actually not entering their credentials correctly (understandably it's easy to make such mistakes on mobile devices). When looking thru logs, you can focus on that user & may find that user's device is failing after issuing the 'Provision' command.

You may want to create a report or generate an e-mail with such reports and details of user activity. The following command will parse all the IIS logs in the folder W3SVC1 and will only report the hits greater than 1000. Additionally it will create an HTML report of the results. . -HTMLReport

The following command will parse all the files in the folders C:\Server1_Logs and D:\Server2_Logs and will also email the generated report to 'user@contoso.com'.

.\ActiveSyncReport.ps1 -IISLog "C:\Server1_Logs","D:\Server2_Logs" -LogparserExec "C:\Program Files (x86)\Log Parser 2.2\LogParser.exe" -ActiveSyncOutputFolder c:\EASReports -SendEmailReport -SMTPRecipient user@contoso.com -SMTPSender user2@contoso.com -SMTPServer mail.contoso.com

We sincerely hope our readers find this script useful.
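The -HTMLReport output can be sketched as rendering the per-device totals into a small HTML table for mailing or archiving. Hypothetical illustration only; the real script's report layout differs.

```python
# Sketch of the -HTMLReport idea: render (user, device) -> hits totals as an
# HTML table, sorted with the heaviest senders first.
def html_report(hits):
    rows = "".join(
        f"<tr><td>{user}</td><td>{dev}</td><td>{n}</td></tr>"
        for (user, dev), n in sorted(hits.items(), key=lambda kv: -kv[1])
    )
    return ("<table><tr><th>User</th><th>DeviceId</th><th>Hits</th></tr>"
            + rows + "</table>")
```

Pairing output like this with a scheduled task and an SMTP send is the same continuous-monitoring pattern the script's -SendEmailReport switch provides.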
Please do let us know how these scripts made your lives easier and what else we can do to further enhance them.

Konstantin Papadakis and Brian Drepaul

Special Thanks to: M. Amir Haque, Will Duff, Steve Swift, Angelique Conde, Kary Wall, Chris Lineback & Mike Lagase

This is fantastic. Thanks.

Great work, thanks!

Great article! Can you do the same for other protocols such as EWS and RCA? Thanks

@Paul: Well, keep checking the blog; you never know what might show up! =)

The flag is actually named Minimun? Any way to set the date range to look for the last 7 days? Would work great for a weekly scheduled job with an emailed report. Chad

+1

@C and all: thanks for catching that typo; it was not supposed to be "MinimunHits" but rather "MinimumHits". We fixed this in the blog post and also uploaded a newer version of the script with that fixed.

When I moved to Exchange 2010 I noticed that our log file generation was 2-3x what it was on 2007. After running this script, I see that 10-15% of our users are over 1000 hits per day, and some are close to 3000. Could this be the cause of a greater number of log files being generated? Nobody is complaining of performance issues and all the users are configured the same way (i.e. same cluster, URL etc.) so I cannot explain the reason for such high usage.

@Brendan: It would depend on what the 3000 hits are. If they are Pings then I don't think that could cause log growth. In this blog you can review the ActiveSync spec docs to see what each command can do.

Similar to Chad's question, can I set the Date parameter to run based on a variable? For example, if I schedule this to run each day, I would want to pull just the previous day's log files -- today's date minus one. Can that be done?

It would be very useful to build this sort of functionality into the Exchange 2010 MP for SCOM if possible.

@VRunyon: Try this -Date $(get-date (get-date).Adddays(-1) -Format "MM-dd-yyyy")

I tried to post earlier, but it didn't get through. Anyway.... good work on this one.
I appreciate this tool. I pulled numbers from my environment and we are getting an entry that shows the username and then one entry that shows no username. So every device has two entries: one with a username and one that is blank. The one that shows no username is generating the same number of 4xx errors as hits. We only use iPhones in our environment. Any idea why we would see the entry with no username associated to it, and how can we determine the actual 4xx error that is being generated? Is it normal for an iPhone to generate a lot of 4xx errors? Also, what qualifies as a "hit"? Thanks for the blog. It has helped me a great deal.

OT: When will Exchange Management Shell support transactions?

@John1975: I have gotten this question more than once. What is happening is a device is coming in with an authenticated request. This will all be in the row with a user name. The second row with the same DeviceId is an unauthenticated request. This could be an app on the device attempting to access the mailbox but not dispatching the request to the Mail app, instead trying to do the request on its own. As you stated, for each hit we see a 4xx count. This is because the request was denied with a response 401 or 403. As to the question regarding what qualifies as a hit: any HTTP request that is logged in IIS and is sent to the URI stem /Microsoft-Server-ActiveSync.

When I run the script, I'm getting the following with the error below. The CSV file remains empty. Does anyone know why? Thanks for this tool!

Building Log Parser Query... Gathering Statistical data Running Log Parser Command against the IIS Log(s): c:\inetpub\logs\logfiles\w3svc1\*.log Error: Cannot find #Fields directive LogParser Command finished CSV, File location: c:\easreports\EASyncOutputReport-Multiple_Files.csv

I'm getting an error when running the script that cs-uri-query is an unknown field. I've tried it with and without the -disablecolumndetect switch; it doesn't help.
[PS] F:\Program Files\Microsoft\Exchange Server\Scripts\Custom Scripts>.\ActiveSyncReport.ps1 -IISLog "\\<servername>\e$\logfiles\W3SVC1" -LogparserExec "C:\Program Files\Log Parser 2.2\LogParser.exe" -ActiveSyncOutputFolder F:\_SourceCAS -MinimunHits 1000 -disablecolumndetect "cs-uri-query"

Building Log Parser Query... Gathering Statistical data Running Log Parser Command against the IIS Log(s): \\<servername>\e$\logfiles\W3SVC1\*.log Error: SELECT clause: Syntax Error: unknown field 'cs-uri-query' To see valid fields for the W3C input format type: LogParser -h -i:W3C LogParser Command finished CSV, File location: C:\Temp\EASyncOutputReport-cs-uri-query_.csv

Perhaps you would like to correct the example and description for the "-Date" parameter: the script expects the format YYYY-MM-DD: "... if ($date) { Date should be in YYYY-MM-DD format..." (from inside the script) and throws an error here for "-Date 01-31-2012":

PS C:\tmp> .\ActiveSyncReport.ps1 ... -Date 01-31-2012 ....
Get-Date : Cannot bind the "Date" parameter. The value "01-31-2012" cannot be converted to type "System.DateTime". Error: "The string was not recognized as a valid DateTime." At C:\tmp\ActiveSyncReport.ps1:697 char:28 + $filterdate = (Get-Date <<<< $date -format yyMMdd) + CategoryInfo : InvalidArgument: (:) [Get-Date], ParameterBindingException + FullyQualifiedErrorId : CannotConvertArgumentNoMessage,Microsoft.PowerShell.Commands.GetDateCommand

"-Date 2012-01-31 ...." works.

Nice! ex-VMC boys getting it done ;)

@Brian: Thanks for the date parameters -- it works perfectly!

This script was a lifesaver for our Exchange server last week when I found the post. Log files were being generated like crazy, filling up the volume and threatening to crash the server. I could use ExMon to see which user was generating the logs, but without your script I didn't know why. Again, thanks for the info!
good job, Thank you

Several bugs in the script:

Line 721: replace with if ($IISLogs.Count -gt 1) {

Also I suggest declaring IISLogs with type [String[]] rather than [array] (line 48); this makes the script deal better with a passed array of paths.

In Create-HTML, around lines 593-602, the function works but this is somehow by chance :) : Replace >> $html = ConvertTo-HTML -Head $HTMLHeader -Body $body ... return ($HTMLHeader += $body) << with >> ConvertTo-HTML -Head $HTMLHeader -Body $body <<

Also line 698: Replace line: Write-Host "Trying to find IIS logs from this date: $title" With: Write-Host "Trying to find IIS logs from this date: $filterdate".

This is fantastic. I have clubbed it with my script to troubleshoot fast growing transactional logs exchangedomino.wordpress.com/.../troubleshooting-fast-growing-transactional-logs-part-2

I ran this in my organization. I have about 100 users that are well over 1k hits; some users had 40k, 50k, 100k, even 175k hits, all of these requests being 'Sync' commands. How do we block this? The 175k was an iPad on the latest iOS 5 software. Some of these devices are essentially sending a 'Sync' command every second, if not 3 times every second. I don't think throttling helps, as this basically stops 'concurrent' connections. How do we stop devices so they only send the 'Sync' command, or we only 'accept' sync commands, at particular intervals, so they do not send 100k requests in 24 hours for example? This is killing CAS.

@Frenchie: The best thing to do is to remove the partnership from the device, thus stopping the device from attempting to sync at all. Now that is not the best thing for the end user. If possible, identify what is wrong with the device (known build issue and upgrade, user workflow education, 3rd party app installed on the device that is causing the issue).
Also, some organizations cannot afford to wait for a user to disable the device, so there are ways with a reverse proxy server/device (e.g. Threat Management Gateway) to block the User or User-Agent (if it is a known bad device). This will allow the Exchange environment to remain functional and give the admin time to work out the issues with the suspect device.
https://techcommunity.microsoft.com/t5/exchange-team-blog/a-script-to-troubleshoot-issues-with-exchange-activesync/bc-p/601941/highlight/true
IRC log of ws-ra on 2009-06-10 Timestamps are in UTC. 16:02:31 [RRSAgent] RRSAgent has joined #ws-ra 16:02:31 [RRSAgent] logging to 16:02:33 [trackbot] RRSAgent, make logs public 16:02:33 [Zakim] Zakim has joined #ws-ra 16:02:35 [trackbot] Zakim, this will be WSRA 16:02:35 [Zakim] ok, trackbot, I see WS_WSRA(3 day)11:30AM already started 16:02:36 [trackbot] Meeting: Web Services Resource Access Working Group Teleconference 16:02:36 [trackbot] Date: 10 June 2009 16:02:56 [Bob] rrsagent, this meeting spans midnight 16:04:33 [Zakim] - +1.908.696.aaaa 16:04:52 [Zakim] +??P13 16:05:00 [Zakim] + +1.908.696.aabb 16:05:40 [Zakim] + +1.408.274.aacc 16:05:47 [Zakim] +Mark_Little 16:06:02 [Vikas] Vikas has joined #ws-ra 16:06:15 [gpilz] gpilz has joined #ws-ra 16:06:15 [Geoff] Geoff has joined #ws-ra 16:06:27 [Bob] scribe: Ashok Malhotra 16:06:38 [Ashok] Ashok has joined #ws-ra 16:06:44 [Bob] scribenick: Ashok 16:06:48 [Ashok] scribe: Ashok Malhotra 16:07:00 [Ashok] scribenick: Ashok 16:07:29 [Zakim] + +1.703.860.aadd 16:07:46 [Ashok] Starting on Monday June 10, 2009 16:08:09 [PrasadY] PrasadY has joined #ws-ra 16:08:09 [Ashok] s/Monday/Wednesday/ 16:08:23 [gpilz] q+ 16:08:29 [Ashok] Resuming yesterday.s meeting 16:08:44 [Ashok] Bob: I sent out a mail about 'mode' 16:09:18 [dug] dug has joined #ws-ra 16:09:27 [Bob] my mail this am 16:10:26 [Ashok] Bob: I tried to define what were called 'problems' yesterday 16:10:51 [Zakim] - +1.703.860.aadd 16:11:13 [BobN] BobN has joined #ws-ra 16:11:25 [Zakim] + +1.703.860.aaee 16:11:38 [Ashok] First, compositipn on 'mode' ... it is done as an attribute 16:12:03 [Ashok] ... extensions were suggested 16:12:52 [Ashok] ... proposal is that extensions could be named as QNames 16:13:27 [Ashok] Bob: Do we agree that this is a way forward? 16:13:35 [Ashok] No disagreement 16:14:07 [Ashok] Bob: So we agree that a list of QNames would be a way to handle composability 16:14:46 [Ashok] Second problem is scope of the extensions 16:16:53 [Ashok] ... 
extensions apply to parent and children of element 16:18:46 [Ashok] Asir: So, in general, put extension where it belongs 16:19:37 [Ashok] Bob: ... as child of the thing it extends 16:20:55 [asir] asir has joined #ws-ra 16:21:16 [Ashok] Geoff: I'm concerned about this ... re.DeliveryMode 16:21:56 [Ashok] .... you will agrue that all extensions go in NotifyTo or Subscribe and we don't need DeliveryMode 16:22:06 [Yves] trackbot, start telcon 16:22:08 [trackbot] RRSAgent, make logs public 16:22:10 [trackbot] Zakim, this will be WSRA 16:22:10 [Zakim] ok, trackbot, I see WS_WSRA(3 day)11:30AM already started 16:22:11 [trackbot] Meeting: Web Services Resource Access Working Group Teleconference 16:22:11 [trackbot] Date: 10 June 2009 16:22:59 [Ashok] Bob: In aggregate the delivery mode you get is the result of the composition 16:24:27 [dug] <GB:Frog/> 16:24:41 [Ashok] ... that's the mode 16:24:49 [Ashok] Geoff: disagrees 16:25:40 [Ashok] Geoff: I accept that "we shd put things where they belong" 16:25:58 [Ashok] ... I'm worried about the second-last problem 16:26:12 [Ashok] Bob: Let's wait till we get there 16:26:47 [Ashok] Bob: Third, mode-not-supported fault ... waht do we do abt that 16:27:15 [Ashok] s/waht/what/ 16:27:56 [gpilz] ?q 16:27:59 [Ashok] ... also the usecase 'I want to create a subcription only if all extensions are there" 16:28:02 [gpilz] q? 16:28:53 [Ashok] ... so I created boolean called 'strict' which says only create subscription if all extensions supported 16:29:35 [Ashok] Asir: I not sure all features you are trying to realize are useful 16:29:48 [Geoff] q+ 16:29:56 [Ashok] ... 
you are trying to tighten the faulting mechanism 16:30:55 [Bob] ack gpi 16:30:56 [Ashok] Bob: Sorta like mustUnderstand 16:31:23 [Ashok] Gil: Strict does not allow you to say these extensions are vital and these are optional 16:31:37 [Geoff] q- 16:32:07 [Geoff] q+ 16:32:20 [Wu] q+ 16:32:52 [Ashok] Ashok: Policy can be used to say what's required and what's optional 16:33:11 [Bob] ack geo 16:34:13 [Ashok] Geoff: There is the fault business that returns all this stuff ... we need fault to say that a delivery mode is not supported 16:34:28 [Wu] q- 16:35:53 [Ashok] Bob: We agree that we want a fault that the delivery mode cannot be supported 16:36:44 [Ashok] ... we also agree there may be a usecase where a portion of the delivery mode is supported and that's acceptable 16:37:09 [Wu] q+ 16:38:46 [Ashok] Asir: Differentiate between not understood and not accepted 16:40:14 [Ashok] Bob: You could use mustUnderstand faults in addition 16:40:50 [Ashok] Geoff: There is a concept of delivery and a fault that say I did not agree with how you want it delivered 16:41:00 [Tom_Rutt] q+ 16:41:35 [Ashok] ... if we has lot's of delivery modes we must also have lot's of faults 16:41:57 [Ashok] s/has/have/ 16:42:21 [dug] q+ 16:42:34 [Ashok] ... I did not understand how you wanted me to deliver stuff 16:44:57 [asir] q+ 16:45:01 [Ashok] Bob: We agree we need fault-tightening behaviour which also deals with composition problem 16:45:10 [Bob] ack wu 16:45:31 [Ashok] Wu: We want default of delivery mode as in the current spec 16:45:53 [Bob] ack tom 16:46:16 [Ashok] Tom: I'm trying to grasp the requirements 16:46:38 [Ashok] ... I hearing strong attachment to this concept called "delivery" 16:47:19 [Ashok] ... I want a fault that gives you the info yiu want 16:47:46 [Ashok] ... Does not matter if it not called delivery 16:47:48 [Bob] ack dug 16:48:21 [Ashok] Dug: All will agree with fault that says 'I cannot meet yiur needs" 16:48:42 [Ashok] ... 
but what is a dlivery need and what is a subscription need 16:48:53 [Ashok] c/dlivery/delivery/ 16:49:46 [Ashok] ... why do you want a fault that says 'cannot meet delivery needs' and not just cannot meet your needs 16:50:41 [Ashok] ... delivery need vs. subscription need 16:51:25 [Wu] q+ 16:51:51 [Ashok] Dug: Do not calssify faults .. we need we need a fault 16:52:09 [Ashok] s/calssify/classify/ 16:52:24 [gpilz] q? 16:53:05 [Wu] q- 16:53:34 [Ashok] Asir: Agree we ned to tighten faulting mechanism. We can define detailed faults later. 16:54:39 [Ashok] Bob: We also need to talk abt contents subscribe/response 16:55:24 [Ashok] ... need to specify waht you got 16:55:40 [Ashok] Bob: Let's talk abt Delivery Mode 16:55:52 [Ashok] ... may be affected by more than one extension 16:57:34 [Ashok] Bob draws Subscribe box NotifyTo and EndTo children 16:57:49 [Ashok] s/box/box with/ 16:59:36 [Ashok] Bob: The concept of delivering stuff is NotifyTo + extensions plus other extensions that affect delivery (Delivery Concept) 17:00:13 [Ashok] Dug: Filter can also be part of delivery mode 17:00:24 [Ashok] Bob: Is EndTo part of delivery mode 17:00:55 [Bob] q? 17:01:02 [Zakim] - +1.408.274.aacc 17:01:22 [Ashok] Dug: If you cannot tell me why subscription eneded prematurly that part of delivery mode 17:02:24 [Ashok] Bob: Delivery concept is everything subscription mgr know to fulfill it's contract with you 17:02:40 [Ashok] c/know/must know/ 17:02:54 [Ashok] c/it's/its/ 17:04:16 [gpilz] q? 17:04:20 [gpilz] q+ 17:04:24 [Ashok] Tom: Delivey is in the eyes of the beholder 17:05:03 [Ashok] Asir: 2 cycles ... 
subscribe then response and end subscription and response 17:05:17 [Wu] q+ 17:05:44 [dug] q+ 17:06:40 [Ashok] Wu: Separate delivery from subscription 17:06:53 [Bob] ack wu 17:06:55 [Bob] ack gp 17:08:07 [Ashok] Gil: Trying to callisy extension as deliry extension or subscription extension is not useful 17:08:25 [Ashok] c/callisy/classify/ 17:09:15 [Ashok] c/deliry/delivery/ 17:09:24 [Wu] We can view WS-E with three semantics components: Event subscription, Event Generator and Delivery engine 17:10:16 [Ashok] Dug: Suppose we kept the <delivery> element and decided that <frog> was a delivery extension 17:10:48 [Ashok] ... now we put <frog> outside delivery ... waht what happen ... would system fall apart 17:11:14 [Wu] q+ 17:11:45 [Ashok] Asir: Folks said WS-MAN put in extensions here that there, we need better guidelines 17:12:24 [Ashok] Dug: We have extension points and can put extensions in different places 17:13:02 [Ashok] Bob draws generic diagram of event source 17:15:52 [Ashok] ... six boxes all involved in my delivery 17:18:19 [Ashok] ... do we need wrapper around extensions to each of the six bozes? 17:20:32 [Ashok] ... Should it be possible to put an extsion at the subscribe level and affect everything? 17:20:42 [Ashok] Dug: Yes 17:21:40 [gpilz] q+ 17:22:37 [Ashok] Wu: Need to provide structure and people add etensions in a strctured manner 17:23:15 [Ashok] ... I like current spec. Each element has an extension point 17:23:24 [Geoff] q+ 17:23:32 [Ashok] Bob: Are talking abt delivery element 17:23:58 [Bob] ack dug 17:24:16 [Ashok] Dug: I diagree that elements inside map to implementation bits 17:25:13 [Ashok] ... need to tell what each extension applies to ... so put in appropriate element 17:26:46 [Ashok] Gil: We only talk abt EPS to EPR communication ... 
not abt implementation structure 17:27:01 [dug] s/EPS/EPR/ 17:27:16 [Bob] acl wu 17:27:24 [Bob] ack gp 17:27:30 [Bob] ack wu 17:27:37 [Bob] ack geo 17:28:05 [Ashok] Geoff: But we talk abt event source and a subscription manager in spec. So we separate them 17:28:36 [Ashok] ... 2 separate concepts 17:29:06 [Ashok] Bob: EndTo is not related to delivery 17:29:58 [Ashok] ... is there anything abt delivery concept not inherenetly connected to NotifyTo? 17:31:15 [Ashok] ... is there any need to have an element associated with concept of delivery? 17:31:36 [Ashok] ... NotifyTo EPR is essential 17:33:25 [Ashok] You specify 'push', 'push-with-acks' in the NotiFyTo EPR 17:34:01 [Ashok] Asir: Today PUSH is built into the spec as default 17:34:26 [Ashok] ... if something is specified then that's the delivery mode 17:34:52 [Ashok] ... today you can ignore evrything other than the address of the EPR 17:36:05 [Ashok] ... If we say delivery mode needs to be inferred we need to say whare it is inferred from 17:36:57 [Ashok] ... today if you say mode='something' then you must understand the mode 17:37:38 [Ashok] Bob: Anything in the scope affects its parent 17:38:22 [Ashok] .... somesubset of the element you want to call a Delivery Concept 17:38:33 [Ashok] Asir: Delivery and NotifyTo 17:39:36 [Ashok] Geoff: Default is the 'push' mode. If we delete mode there is no way to say that 17:40:33 [Ashok] Bob shows how to do that ... put PUSH as final child of Subscribe 17:40:57 [Zakim] + +1.408.202.aaff 17:41:26 [Ashok] ... Format and Filter are separate elements 17:42:00 [Ashok] ... I don't think Delivery element it dangerous. May not be necessary 17:42:08 [Ashok] .... Mode is dangerous 17:43:50 [Ashok] Gil questions need for delivery element 17:44:23 [Ashok] Dug: Cannot classify extensions 17:45:13 [Ashok] ... is Format a Delivery extension or a Subscription extension 17:45:29 [Ashok] Asir: It's an implementation problem 17:45:51 [Ashok] Gil: Supports Dug 17:46:20 [Ashok] ... 
cannot classify extensions ... distinction does not affect anything 17:47:07 [Ashok] Bob: Asks if the delivery wrapper elemnt is removed is that a lie-down-in-road issue? 17:47:13 [Ashok] Wu: Yes 17:47:18 [Ashok] Geoff: Yes 17:48:03 [Ashok] Bob: If delivery element is not removed is that a lie-down-in-road issue? 17:48:19 [Ashok] Dug, Gil, Tom: Yes 17:48:33 [Ashok] Mark: Yes 17:49:45 [Ashok] Asir: Maybe we shd focus on delivery element some more to try and get consensus 17:50:02 [Ashok] BREAK 17:50:11 [Zakim] - +1.703.860.aaee 17:50:13 [Zakim] -Mark_Little 17:51:47 [Zakim] - +1.408.202.aaff 17:59:09 [Bob] gathering the flock 18:00:43 [Zakim] +[Microsoft] 18:02:38 [Ashok] RESUMING 18:03:14 [Ashok] Geoff: Where shd people send slides 18:03:31 [Ashok] Bob: To be public list 18:03:39 [Ashok] s/be/the/ 18:04:26 [Zakim] + +1.949.926.aagg 18:05:38 [asir] His name is Hemal Shah 18:06:06 [Ashok] That's the speaker coming up 18:08:24 [Ashok] Topic: WS-Man issues 18:10:46 [Ashok] In 2003 customers meet with hardware vendores and asked for facilities to manage hardware independent of specific hardware 18:11:13 [Ashok] DMTF took on this challenge with SMASH and DASH 18:11:33 [Ashok] Intial feeling was Web Services stack was too heavy 18:12:51 [Ashok] We now have 3 specs with Web Services profiles and features 18:13:46 [Zakim] + +1.408.970.aahh 18:15:22 [Ashok] Hemal: I work for Broadcom 18:15:38 [Ashok] I started with WS-MAN in 2005 18:16:25 [Ashok] ... folks were sceptical because spec was heavyweight and resources limited 18:18:11 [dug] Prasad: 18:18:16 [Ashok] ... Start with WS-Eventing issue on removing 'mode' 18:18:55 [Ashok] Hemal: We are using 'mode'. We have defined several modes in WS-MAN 18:19:26 [Ashok] ... If you remove mode you lose function which is in many implementations 18:19:50 [Ashok] Bob: As a single attribute is not composable 18:20:22 [Ashok] ... proposal to replace mode with a set of QNames so they cam be composed 18:20:51 [Ashok] ... 
you look for an extension QName 18:21:09 [Ashok] .... there would be a set of elements rather than one attribute 18:21:53 [Ashok] Josh: Are there other elements also? Asks scope question 18:22:16 [Ashok] Bob: The scope of extension is the parent and its children 18:22:39 [Ashok] ... the faulting behaviour needs to be tightened up 18:22:53 [Ashok] ... it is optional so not reliable 18:23:10 [Ashok] .... has unbounded list of elements and namespaces 18:23:37 [Ashok] .... need to generate fault saying waht cannot be honored 18:24:04 [Ashok] ... Folks divided in whether we need a delivery element and what shd be in ti 18:24:12 [Ashok] s/ti/it/ 18:25:15 [Ashok] Bob: We have not decided on categories, whether we need them, what they are? 18:26:24 [Ashok] Hemal: We have implementations. If the info is in another element it will break the implementations. 18:26:46 [Ashok] Bob: Other changes we have agreed on will break wire protocols 18:27:20 [Ashok] Hemal: Keepthe mode and also provide other facilities and allow both mechanisms 18:27:38 [Ashok] s/Keep/Keep / 18:27:53 [Ashok] Bob: Mode as it is currently non-composable 18:28:32 [Ashok] .... you would add elements e.g. Push-with-acks 18:29:05 [Ashok] Jeff: You will have to change becuse namespace will change and there will be other changes 18:29:23 [Ashok] c/beuse/because/ 18:29:31 [Zakim] - +1.408.970.aahh 18:29:42 [Ashok] ... we shd shift to how to migrate 18:30:04 [Ashok] .... you could map extensions to modes 18:30:47 [asir] we should not worry about namespace name changes because wire compat is not a requirement .. feature-wise backward compat is a requirement 18:31:10 [Ashok] Gil: Extensibility model in WS-Eventing has change. Now ignore what you don't recognize not fault 18:31:47 [Ashok] Bob: We have moved <format> out and moved it higher up 18:32:19 [Ashok] Hemal: What possible extensions? 18:32:37 [Ashok] Gil: Relaible messaging or security for example 18:32:40 [asir] q+ 18:33:02 [Wu] q+ 18:33:05 [Ashok] .... 
argues composability requirements 18:33:37 [Ashok] Asir: We shd not talk abt security and reliability as extensions 18:33:49 [Ashok] Bob: Cannot anticipate extensions 18:34:12 [Wu] q- 18:34:39 [Ashok] ... also need to be flexible abt scale of implementations 18:35:27 [Wu] q+ 18:35:30 [Ashok] Hemal: If you add RM, and security that's not WS-Eventing. 18:35:44 [Ashok] Bob: We need to support composability 18:36:48 [Ashok] Hemal: People can extend values of mode attribute 18:37:28 [Ashok] Bob: E.g. push-with-acks is defined as specific URI value that can be used as value of mode. 18:37:29 [Ram] Ram has joined #ws-ra 18:37:50 [Wu] q- 18:38:20 [Ashok] ... equivalent is a push-with-acks element. This could combine with other features such as queue management 18:38:20 [Ram] q+ 18:39:02 [Ashok] Wu: Yiu several good points. We are still discussing. 18:39:21 [Ashok] s/Yiu/You/ 18:39:37 [Ashok] Hemal: My concern is removal of 'mode' 18:40:04 [Ashok] ... if you remove it I worry about existing implementations and transition path 18:40:05 [Tom_Rutt] q+ 18:41:05 [Ashok] Asir: Keep RM and Security out of mode discussion. They are different. 18:41:12 [Ashok] Bob: Disagrees 18:42:03 [Bob] ack asir 18:42:17 [Wu] q+ 18:42:26 [Ashok] Jeff: Mode does not compose and MS has been pushing 'composable specifications' 18:42:35 [Ram] q- 18:42:40 [Wu] q- 18:42:59 [Ashok] Bob: I would like to hear all of Hemal's concerns 18:43:43 [Ashok] Jeff: We are trying to ensure reasonable clean migration path but we don't have an absolute requirement to have backwards compatibility 18:44:00 [Ashok] Tom: The mode that has been defined is 'push' 18:44:18 [Ram] q+ 18:44:28 [Ashok] ... WS-Man ahs added others and they can define 'push-with-ack' 18:44:45 [Ashok] s/ahs/has/ 18:45:13 [Ashok] Hemal: Next point 6413 - T/RT merge 18:45:41 [dug] q+ 18:45:51 [Ashok] ... didn't completely understand prooosal. 
Is it trying combine Enum functionality 18:46:52 [Ashok] Bob: Current proposal is to move framnet support from RT and make that an optional paty of T or possibly a separate spec. 18:47:11 [Ashok] ... or possibly another form of fragment support 18:47:25 [Ashok] ... still open on details 18:47:38 [Ashok] ... no agreement yet 18:48:15 [dug] q- 18:48:20 [Ashok] Hemal: Fragment level transfer poses signifact challenges in resource constrained envvironment 18:48:53 [Ashok] ... more we can deal with this in headers the better 18:50:14 [Ashok] Bob: If frag level transfer is presented as a feature with a mustUnderstand type feature that would work for you? 18:50:24 [Ashok] Hemal: We can gennerate a fault 18:50:44 [Ashok] Bob: We are propsing it as an optional feature 18:51:04 [Ashok] Heaml: Is it going in body or header? 18:51:19 [asir] q+ 18:51:29 [Ashok] Dug: Currently in body but being worked on 18:52:01 [Bob] ack tom 18:52:13 [Ashok] Bob: You want to be able figure with minimal processing if you don't support it 18:52:28 [Bob] ack ram 18:52:34 [Geoff] q+ 18:52:36 [Bob] ack asir 18:52:54 [Bob] ack geo 18:53:12 [Ashok] Asir: We say it is optional but current proposal is not optional. We have raised an issue on this. 18:53:43 [asir] s/on this/against the current proposal/ 18:54:34 [Ashok] Moving to 6724 18:55:06 [Ashok] Geoff: Subscribe as a Resource 18:56:11 [Ashok] Hemal: You can get to instances once you have subscribe. 18:56:41 [Ashok] Bob: We will not remove GetStatus and Renew. Those are off the table 18:57:31 [Ashok] Dug: Eveting spec defines minimum function. Implementaions can extend 18:58:25 [Ashok] This would allow you get full properties of the subscription and even update subscription properties 18:58:51 [Ashok] Hemal: For the SIM case this will not provide any more information 18:59:29 [Ashok] Dug: Are you talking abt enumeration instances? 19:00:08 [Ashok] ... this allows you reterive subscription properties. GET may not give you back what you need. 
19:00:31 [Ashok] Bob: Has SIM extended Transfer to get this info 19:01:11 [Ashok] ... what spec defines represenation of the subscription 19:02:09 [Ashok] Josh: With any SIM class you can manipulate subscription info 19:02:35 [Ashok] Bob: There is no conflict with that. 19:02:57 [Ashok] Josh: We want to make sure it is aligned with SIM or SIM can be put in it 19:04:09 [Ashok] Dug: I think we are talking abt something different. Please send mail so we don't lose your idea of possible conflict 19:04:53 [Ashok] Josh: We can followup with emal. 19:05:02 [dug] s/SIM/CIM/ 19:05:17 [Ashok] Bob: Someone shd open an issue. Very interested in follwing up with you. 19:05:36 [Zakim] - +1.949.926.aagg 19:05:38 [Zakim] -[Microsoft] 19:05:48 [li] li has joined #ws-ra 19:05:57 [Ashok] END OF MORNING SESSION. BREAKING TILL 1PM PACIFIC 19:06:10 [Zakim] - +1.908.696.aabb 19:47:52 [li] li has joined #ws-ra 19:48:29 [Zakim] -??P9 20:00:04 [Zakim] + +1.908.696.aaii 20:00:17 [Wu] Wu has joined #ws-ra 20:00:25 [Bob] link to Josh's slides 20:00:28 [Bob] 20:00:53 [PrasadY] scribe PrasadY 20:01:02 [PrasadY] scribeNick PrasadY 20:01:12 [PrasadY] Starting the afternnon session 20:01:27 [Bob] scribenick: PrasadY 20:01:48 [Bob] scribe: Prasad Yendurli 20:02:08 [PrasadY] s/Yendurli/Yendluri/ 20:02:46 [PrasadY] Bob: Doug sent his write up on 6712 to the list 20:02:55 [Bob] 20:02:55 [Zakim] + +0207827aajj 20:03:42 [PrasadY] Bob: That was proposed resoloution to Issue 6712 20:04:41 [PrasadY] [Body]/wst:Create@ContentDescription 20:04:41 [PrasadY] 20:05:11 [gpilz] zakim who is making noise 20:05:36 [Bob] zaki, who is making noise? 20:05:43 [li] zakim, aaii is li 20:05:43 [Zakim] +li; got it 20:05:45 [Bob] zakim, who is making noise? 
20:05:57 [Zakim] Bob, listening for 10 seconds I heard sound from the following: li (5%), ??P13 (33%) 20:06:43 [Zakim] + +1.408.970.aakk 20:12:29 [PrasadY] Ashok: Why not call the "implied" value "default" value 20:12:52 [PrasadY] Dug: I have seen both used but, ok 20:12:58 [PrasadY] Geoff: What does the default/implied value mean? 20:13:18 [PrasadY] Bob: As stated there is no way not to have a value 20:14:10 [PrasadY] Agreement - Change the last sentence to say "no default value" 20:14:35 [PrasadY] Asir: "Corretly" in 1st sentence should be drpped 20:15:05 [PrasadY] Consensus: agreed 20:15:17 :15:19 [Bob] ...resource representation or an instruction, but the attribute is not present or the URI is not known, then the service MUST generate an invalidContentDescriptionURI fault. There is no default value. 20:16:33 [PrasadY] Asir: Name is contentDescription the fault also should be called the same (no URI in the end) 20:16:39 [PrasadY] agreed 20:17:40 [PrasadY] Asir: wants to name the attribute, contentDescriptionHint 20:18:20 [PrasadY] Dug: Does not think the word Hint is needed. The description conveys that 20:19:05 [PrasadY] Yves: Hint also means it is not trustable 20:19:15 [PrasadY] Bob: Hints can be wrong 20:19:52 [PrasadY] Ashok: Server can send a fault it wants. It is explained in a complex way 20:20:26 [PrasadY] .. Say, "if the server does not understand the att, it may send a fault' 20:20:46 [PrasadY] Dug: we need to call out the two cases described explicitly 20:21:08 [PrasadY] ... if you needed the att to process the message 20:21:28 [PrasadY] Ashok: does it matter to the client / user? 20:21:59 [PrasadY] Dug: The spec needs to clr on when the fault is generated 20:22:09 [PrasadY] s/to clr/to be clr/ 20:23:02 [PrasadY] Gil: if you get a fault, you need to be able to look up the spec to understand when the fault is generated 20:23:31 [PrasadY] Ashok: I am not going to make a big issue. 
Just i would have written that way 20:24:05 [asir] here is what we agreed yesterday 20:24:06 20:25:01 [PrasadY] Dug: does not think hint is well-defined 20:25:49 [PrasadY] s/drpped/dropped/ 20:26:29 [PrasadY] Bob: In the version I have I have not added the Hint language 20:26:40 [PrasadY] Asir: We agreed to it yesterday 20:27:26 [PrasadY] Bob: I am happy to leave it as is, even though we used that word yesterday 20:28:28:54 [PrasadY] Dug: I already added the fault to spec. I will change it to match above 20:28:55 [Bob] ...resource representation or an instruction, but the attribute is not present or the URI is not known, then the service MUST generate an invalidContentDescription fault. There is no default value. 20:29:29 [fmaciel] fmaciel has joined #ws-ra 20:31:26 [Bob] RESOLUTION: Issue-6712 resolved with text above along with parallel modifications to the associated fault 20:31:28 [PrasadY] rrsagent, where am I? 20:31:28 [RRSAgent] See 20:32:04 [PrasadY] Bob: Back on Issue 6692, Delivery concept 20:33:26 [PrasadY] Bob: describes where we stand 20:38:18 [Yves] Yves has changed the topic to: GB.Frog 20:46:52 [gpilz] q+ 20:47:12 [Bob] ack gpi 20:51:56 [Ram] q+ 20:52:39 [Bob] ack ram 20:53:32 [dug] q+ 20:54:05 [Bob] ack dug 20:55:32 [asir] q+ 20:58:31 [li] q+ 21:03:35 [Wu] q+ 21:03:45 [Wu] q- 21:07:10 [Bob] ack asir 21:08:45 [Wu] q+ 21:08:54 [Bob] ack li 21:09:54 [PrasadY] in depth discussion on different ways to place the delivery brack and if there is a value in having it or not 21:10:08 [PrasadY] s/brack/bracket/ 21:11:00 [Geoff] q+ 21:11:21 [Bob] ack wu 21:12:08 [PrasadY] Bob: Suggests "stamp" element that qualifies the EPR (NotifyTo) 21:12:23 [PrasadY] Wu: Stamp is equivalant to Delivery 21:12:51 [gpilz] q+ 21:12:57 [Bob] ack geoff 21:13:22 [dug] q+ 21:13:56 [PrasadY] Geoff: The Eventing spec saya there is a difference between subscription and event source. 
The arh boxes the cxoncepts 21:14:10 [PrasadY] s/cxoncepts/concepts/ 21:14:31 [li] q+ 21:15:13 [PrasadY] .. 2nd pt. Every one accpts push mode, yet we have no defined way to change it 21:15:26 [PrasadY] s/accpts/accepts/ 21:16:38 [PrasadY] Bob: We talked about Delivery.. Could you come-up with a concept of "Delivery"? 21:16:50 [dug] q- 21:17:56 [PrasadY] Dug: As an extension writer, I should be able to tell if it goes in Delivery or not 21:18:02 [li] q? 21:18:14 [li] q? 21:18:31 [PrasadY] Geoff: accept that 21:18:43 [li] hi 21:19:07 [PrasadY] 10 minutes Break... 21:19:19 [PrasadY] q- li 21:19:23 [li] q? 21:19:29 [li] q+ 21:19:44 [li] hi 21:20:39 [li] li has joined #ws-ra 21:20:50 [li] q? 21:21:02 [li] testing 21:22:33 [Geoff] the definition for delievery we should start with can be found in an email sent by Asir 21:22:39 [Geoff] the link is here: 21:33:37 [Zakim] +Mark_Little 21:35:54 [Bob] ? 21:35:57 [Bob] q? 21:36:16 [gpilz] q- 21:37:27 [PrasadY] Dug: does not think Pull does not fit the above (MEP part) 21:37:34 [PrasadY] Asir: Thinks it does 21:38:08 [PrasadY] s/does not fit/fits/ 21:40:17 [jeffm] jeffm has joined #ws-ra 21:40:41 [li] q- 21:40:57 [PrasadY] Bob: Can we simply define: Delivery is rules for transportation of Notifications from source to sink 21:41:10 [Ashok] Ashok has joined #ws-ra 21:41:13 [PrasadY] Dug: How about batching? 21:41:27 [PrasadY] Asir/Bob: That is formatting not transportation 21:44:01 [Zakim] -Mark_Little 21:44:20 [PrasadY] Delivery is rules for conveyance of Notifications from source to sink 21:45:38 [PrasadY] Ashok: Suppose we agree on this, how does it change things? 
21:46:30 [PrasadY] Tom: The from and To would be part of this 21:47:29 [PrasadY] Gil: Thinks it is hard for people outsite this room to figure out whether an extension goes with delivery or not 21:48:24 [PrasadY] Bob: With this definition Notify:to comes back into the bracket 21:48:38 [PrasadY] Dug: Does not solve the EndTo problem 21:49:32 [Ashok] Ashok has left #ws-ra 21:49:38 [PrasadY] Bob: If no more arguments, we are going to decide 21:51:00 [PrasadY] Asir: Not all directional proposals from this am have translated to concrete proposals 21:52:34 [Ashok] Ashok has joined #ws-ra 21:52:55 [PrasadY] s/ted to/ted into/ 21:54:39 [PrasadY] Tom: Rules don't go into the "Stamp", the effects of rules do 21:55:02 [PrasadY] Not all effects may go into stamp also 22:02:38 [li] q+ 22:06:47 [PrasadY] Bob: Within the subscribe Msg - a Yes vote supports the directional decision todefine an element that acts as a container for all extension QNames defined by this spec or externally, and data necessary fro conveyance of Notfications from Source to Sink 22:07:04 [PrasadY] s/todefine/to define/ 22:07:48 [PrasadY] Dug: This is an incomplete soultion does not address EndTo 22:07:55 [li] i'm on queue 22:08:23 [Bob] ack li 22:09:00 [PrasadY] Li: WS-Eventing a pt to pt protocol - establishing a channel from source to sink 22:10:12 [PrasadY] s/a pt/is a pt/ 22:10:55 [PrasadY] Subscription establishes 2 links, between source and sink and subscription manager and client 22:11:52 [PrasadY] Bob: Any other concerns before wew vote on the directional proposal? 22:12:10 [PrasadY] Asir: Want to account for EndTo? 22:12:28 [PrasadY] Dug/Gil: No need. 
May raise as a separate issue 22:12:59 [PrasadY] Bob" Vote Yes - to support the wrapper 22:13:03 [PrasadY] Avaya - Yes 22:13:11 [PrasadY] Fujitsu - Yes 22:13:19 [PrasadY] Hitachi - No 22:13:22 [PrasadY] IBM- No 22:13:25 [PrasadY] MS - Yes 22:13:30 [PrasadY] Oracle - No 22:13:40 [PrasadY] Redhat - No 22:14:07 [PrasadY] Software AG - Yes 22:14:12 [PrasadY] W3C - Yes 22:14:27 [PrasadY] Yes - 5 22:14:35 [PrasadY] No - 4 22:14:47 [li] one link = one wrapper 22:15:30 [PrasadY] Bob: yes carries => Directional proposal 22:15:42 [PrasadY] Bob: We want to rest this for a bit 22:15:56 [PrasadY] Bob: Need a concrete proposal 22:16:09 [PrasadY] Geoff: Will do in couple of weeks 22:16:28 [PrasadY] Bob: Need it before 23rd so that people can look at it 22:16:40 [PrasadY] Break .. 22:17:25 [PrasadY] Bob:I have notification that Redhat has given proxy to Oracle for the duration of F2F 22:17:53 [PrasadY] back at 20 to 4pm 22:19:28 [Zakim] - +0207827aajj 22:19:35 [Ram] Ram has joined #ws-ra 22:27:36 [li] testing 22:46:23 [PrasadY] Resuming 22:46:34 [PrasadY] Topic: Issue 6401 22:47:01 [PrasadY] Gil: Recaps where we are 22:52:08 [Wu] 22:52:47 [PrasadY] Wu: Issue, WSDL in WS-E does not confrom to WS-I BP 22:53:04 [li] q+ 22:53:46 [Wu] 22:54:02 [PrasadY] Dug sent the above in March 22:55:53 [gpilz] q+ 22:55:58 [PrasadY] Wu: Using Policy to link out bound operations with source is a clean solution 22:57:07 [Wu] 22:58:09 [Geoff] q+ 22:58:21 [Bob] ack li 22:59:23 [PrasadY] Li: Two proposals from Gil, (1) BP compliant (2) Make WSDL <....> 23:00:17 [PrasadY] You link Event Source WSDL with Notification WSDL 23:01:51 [gpilz] 23:02:06 [Bob] ack gpil 23:02:12 [Bob] ack geoff 23:02:26 [PrasadY] Geoff: Your proposal is centered around wrapped mode. 
Pls address why wrapped mode changes things 23:04:41 [PrasadY] Gil: Details his proposal at the above URL 23:10:38 [Geoff] q+ 23:10:45 [Bob] ack geo 23:10:55 [PrasadY] q+ li 23:11:18 [PrasadY] Geoff: Why do need this rather than WSDL 23:11:25 [dug] q+ 23:13:26 [PrasadY] Gil: WSDL msg types etc define notification type - other parts are for raw notifications 23:13:51 [Bob] q+ asir 23:14:16 [PrasadY] ack li 23:14:29 [asir] q+ to ask a question, what aspect of wrapped notifications did not fit into WSDL? 23:15:54 [gpilz] q? 23:16:12 [Bob] ACTION: Geoff to write a concrete proposal to capture the decisions to-date on Issue-6692 23:16:13 [trackbot] Created ACTION-70 - Write a concrete proposal to capture the decisions to-date on Issue-6692 [on Geoff Bullen - due 2009-06-17]. 23:17:22 [Wu] q+ 23:17:34 [Bob] ack dug 23:17:54 [gpilz] q+ 23:18:08 [PrasadY] Dug: Describes why he found WSDL was not good enough 23:24:47 [li] q? 23:24:57 [Bob] ack asir 23:24:57 [Zakim] asir, you wanted to ask a question, what aspect of wrapped notifications did not fit into WSDL? 23:25:09 [li] q+ 23:27:20 [PrasadY] Asir: Wants concrete examples of why WSDL alone is not enough 23:27:29 [PrasadY] Gil: Can provide 23:38:41 [Geoff] q+ 23:38:48 [Bob] ack wu 23:43:09 [asir] q+ to ask a follow-on clarification question to Gil 23:45:32 [Bob] ack asir 23:45:32 [Zakim] asir, you wanted to ask a follow-on clarification question to Gil 23:46:49 [dug] q+ 23:47:48 [PrasadY] ack gp 23:48:40 [li] q? 
23:49:17 [PrasadY] ack li 23:57:30 [Bob] Time warning 00:01:00 [PrasadY] ack Ge 00:01:03 [PrasadY] ack du 00:01:11 [dug] 00:05:27 [PrasadY] Bob: Gil is ferminting the proposal and we have another proposal from Wu 00:05:59 [PrasadY] Link to today's IRC log: 00:06:17 [RRSAgent] I have made the request to generate Yves 00:07:40 [li] good night 00:07:43 [PrasadY] Bob: Recessed until tomorrow 00:07:54 [Zakim] -li 00:08:19 [Zakim] - +1.408.970.aakk 00:09:40 [Bob] rrsagent, generate minutes 00:09:40 [RRSAgent] I have made the request to generate Bob 00:11:39 [Zakim] -??P13 00:11:41 [Zakim] WS_WSRA(3 day)11:30AM has ended 00:11:43 [Zakim] Attendees were +1.908.696.aaaa, +1.908.696.aabb, +1.408.274.aacc, Mark_Little, +1.703.860.aadd, +1.703.860.aaee, +1.408.202.aaff, [Microsoft], +1.949.926.aagg, +1.408.970.aahh, 00:11:45 [Zakim] ... +1.908.696.aaii, +0207827aajj, li, +1.408.970.aakk 00:15:19 [gpilz] gpilz has left #ws-ra 02:32:27 [Zakim] Zakim has left #ws-ra 03:57:13 [dug] dug has joined #ws-ra 04:05:45 [dug] dug has joined #ws-ra
http://www.w3.org/2009/06/10-ws-ra-irc
Thanks to those of you who take the time to read through the article. If you feel like dropping off a vote (and particularly if it's a low one), please include a comment which mentions what the problem was. I've been getting mostly high votes for this article, apart from the odd 1's or 2's, and I'd really like to know what bothered those voters the most. Feedback is what drives improvement.

This article and attached library + demo projects aim to describe an approach to cross thread synchronized function calls. Cross thread calling is in essence the process of having one thread instruct another thread to call a function. Serializing calls to certain code sections, functions and classes is very much the reality of multi-thread programming, but there are ways to go about this effort without introducing too much code or complex logic. If you're interested, take the time to read through the following paragraphs. It's an introduction to the concept, as well as a description of my way of solving it. If you're short on time -- feel free to skip ahead to the section "Using the code", which gives a quick sum-up of the library, ThreadSynch.

I assume that you, the reader, are at least vaguely familiar with threads, and all the pitfalls they introduce when common data is being processed. If you're not, feel free to read on, but you may find yourself stuck wondering what all the fuzz was about in the first place. A classical example is the worker thread which fires off a callback function in a GUI class, to render some updated output. There are a bunch of different approaches, not to mention patterns (e.g. Observer), to use in this case. I'll completely disregard the patterns, and focus on the actual data and notification. The motivation for doing cross thread calls is: 1. to simplify inter-thread notifications, and 2. to avoid cluttering classes and functions with more synchronization code than what's absolutely necessary.
Imagine the worker class Worker, and the GUI class SomeWindow. How they are associated makes little or no difference; what's important is that Worker is supposed to call a function, and/or update data in SomeWindow. The application has two threads. One "resides" in Worker, and the other in SomeWindow. Let's say that at a given point in time, the Worker object decides to make a notification to SomeWindow. How can this be done? I can sum up a few of the possible approaches, including major pros/cons.

Throughout the last few years, I've had a number of approaches to this field of problems. Usually, I've ended up using a mix of #2 and #3 as listed above. While I've made a few abstractions, and integrated this in a threading library, there was nothing major about it. It wasn't till I had a crack at the .NET framework, and more specifically the InvokeRequired / BeginInvoke techniques, that I started pondering doing the same in a native framework. The .NET framework approach really is appealing from a usage point of view, as it introduces a bare minimum of alien code to, say, the business logic. While many would argue that the ideal approach would be to avoid synchronization altogether, and rely on the operating system to deal with the complexities related to cross thread calls and simultaneous data access, that's not likely to be part of any efficiency focused application anytime soon.

I won't go into the details of my first few synchronization frameworks, but rather focus on the one I typed up specially for this read. It is, as mentioned, based on the ideas from the .NET framework, but it's not quite the same. Granted the differences between native and managed code, as well as the syntactical inequalities, the mechanics have to be a little different, and so is the use. The motivation of the framework is obviously to simplify cross thread calls, which may or may not access shared resources.
It goes to great lengths to be safe, flexible, and reliable in terms of its promises to the user. The flexibility is achieved through the introduction of templated policies for the notifications made across the threads, as well as functors and parameter bindings from Boost. I'll get back to the reliable part in a jiffy.

The base principle is quite simple. Thread A needs to update or process data logically related to Thread B. To do this, A wants to issue a call in context of B. Thread B is of a nature which allows it to sleep or wait for commands from external sources, so that'll be the window in which A can make its move. Thread B would ideally be GUI related, a network server / client, an Observer (as in the Observer Pattern) or similar.

PickupPolicy

The pickup policy, or more specifically the way Thread A delivers the notification to Thread B, can involve a number of different techniques. A couple worth mentioning are UserAPCs (user-mode asynchronous procedure calls) and Window Messages. QueueUserAPC() allows one to queue a function for calling in context of a different thread, and relies on the other thread to go into alertable wait for the call to be made. Alertable waits have their share of problems, but I'll disregard those for now. In terms of the GUI type thread, window messages are a better alternative. The pickup policies make up a fairly simple part of this play, but they are nevertheless important in terms of flexibility.

Ok, so we've covered the motivation, as well as some of the requirements. It's time to give an example of how the mechanism can be used. For the sake of utter simplicity, I will not bring classes and objects into the puzzle just yet.
Just imagine the following simple console program:

char globalBuffer[20];

DWORD WINAPI testThread(PVOID)
{
    // Keep sleeping while the event is unset
    while(WaitForSingleObjectEx(hExternalEvent, INFINITE, TRUE) != WAIT_OBJECT_0)
    {
        Sleep(10);
    }

    // Alter the global data
    for(int i = 0; i < sizeof(globalBuffer) - 1; ++i)
    {
        globalBuffer[i] = 'b';
    }
    globalBuffer[sizeof(globalBuffer) - 1] = 0; // null terminate

    // Return and terminate the thread
    return 0;
}

int main()
{
    DWORD dwThreadId;
    CreateThread(NULL, 0, testThread, NULL, 0, &dwThreadId);
    ...

There's nothing out of the ordinary so far. We've got the entry point, main, and a function, testThread. When main is executed, it will create and spawn a new thread on testThread. All testThread does in this example is to wait for an external event to be signaled, and then alter a data structure, globalBuffer. What's important is that the thread is waiting for something to happen, and while it's waiting we can instruct it to do some other stuff. Our objective is therefore to have the thread call another function, testFunction:

string testFunction(char c)
{
    for(int i = 0; i < sizeof(globalBuffer) - 1; ++i)
    {
        globalBuffer[i] = c;
    }
    globalBuffer[sizeof(globalBuffer) - 1] = 0; // null terminate
    return globalBuffer;
}
testfunction c testFunciton While this example doesn't make much sense in terms of a real world application as it is, the concept is very much realistic. Imagine, if you wish, that the global buffer represents the text in an edit box within a dialog, and that testThread is supposed to alter this text based on a timer. At certain intervals, external threads may also wish to update the same edit box with additional information, so they call into the GUI's class (which in this simplistic example is represented by testFunction). To avoid crashes, garbled text in the text box, or other freaky results, we want to synchronize the access. We don't want to add a heap of mutexes or ciritcal sections to our code, but rather just have the GUI thread call the function which updates the text. When the GUI thread alone is in charge of updating its resources, we're guaranteed that all operations go about in a tidy order. In other words: there will be no headache-causing crashes and angry customers. So, instead of adding a whole lot of interlocking code to both testThread and testFunction, which both update the global buffer, we use a cross thread call library to have the thread which owns the shared data do all the work. int main() { DWORD dwThreadId; CreateThread(NULL, 0, testThread, NULL, 0, &dwThreadId); CallScheduler<APCPickupPolicy>* scheduler = CallScheduler<APCPickupPolicy>::getInstance(); try { // Create a boost functor with a bound parameter. // The functor returns a string, and so will the // synchronized call. boost::function<string()> callback = boost::bind(testFunction, 'a'); // Make the other thread call it. The return value // is deduced from the functor. 
string dataString = scheduler->syncCall ( dwThreadId, // Target thread callback, // Functor with parameter 500 // Milliseconds to wait ); cout << "testFunction returned: " << dataString << endl; } catch(CallTimeoutException&) { // deal with the problem } catch(CallSchedulingFailedException&) { // deal with the problem } return 0; } CallScheduler makes all the difference here. Through fetching a pointer to this singleton class, with the preferred pickup policy (in this case the APCPickupPolicy), we can schedule calls to be made in context of other threads, granted that they are open for whatever mechanism the pickup policy uses. In our current example, we know that the testThread wait is alertable, and that suits the APC policy perfectly. To attempt to execute the call in the other thread, we call the syncCall function, with a few parameters. The template parameter is the return type of the function we wish to execute, in this case a string. The first parameter is the id of the thread in which we wish to perform the operation, the second parameter is a boost functor, and the third is the number of milliseconds we are willing to wait for the call to be initiated. The use of boost functors also allows us to bind the parameters in a timely fashion. As you can see in the above call, testFunction should be called with the char 'a' as its sole parameter. CallScheduler APCPickupPolicy syncCall string At this point, we wait. The call will be scheduled, and will hopefully be completed. If the pickup policy does its work, the call will be executed in the other thread, and we are soon to get a string from testFunction as returned by syncCall. Should the pickup fail or timeout, an exception will be thrown. Consider the example -- it really should make it all pretty clear. In the attached source, there are two example projects. One consists of the code shown above, and the other is a few inches closer to real world use -- in a GUI / Worker thread application. 
There are a few restrictions on the use of a framework such as the one described here. Some are merely points to be wary of, while others are showstoppers. The parameters passed to a function which will be called from another thread should not use the TLS (Thread Local Storage) specifier -- that goes without saying. A variable declared TLS (__declspec(thread)) will have one copy per thread it's accessed from. In terms of the previous example, the main thread would not necessarily see the same data as testThread, even with the value passed through the synchronized call mechanism to testFunction. In short: there's nothing stopping you from passing TLS data, but through doing so you are bound to see some odd behavior. The general guideline is to be thoughtful. Don't pass anything between threads without knowing exactly what the consequences are. Even though the mechanism, or rather principle, of cross thread synchronized calls goes to great lengths to keep the task simple, there are always ways to stumble.

A couple of guidelines and requirements regarding parameter passing and returning, in an example scenario where Thread A does a synchronized call to Function F through Thread B:

Also, though this really goes without saying, synchronized code blocks in multi-threaded environments are more or less bound to cause somewhat of a traffic jam. In some cases, a redesign may be the solution, while in other cases, it's simply unavoidable. If a multi-threaded application is to safely access the same resources, interlocking really cannot be omitted, so it's all up to how you make the best of the situation. A quick metaphor, if you will. Though this example may seem silly in real world terms, it makes perfect sense for a multi-threaded application.
If a piece of code cannot be accessed by thread A, because a block has been placed by thread B, A may be better off doing some more calculations or operations before re-attempting the lock. So, even though the aforementioned traffic jams are pretty much unavoidable, there are ways to make the best of them.

We've just about covered everything there is to say. The attached file does have two example projects, in addition to documented code and generated HTML from doxygen tags. I'll quickly sum up the basics, though. The library is strictly header-based, so in essence all you have to do is include ThreadSynch.h and the pickup policies of your choosing.

The APCPickupPolicy will notify the target thread through use of UserAPCs, granted that the target enters alertable wait before the specified call timeout runs out.

CallScheduler<APCPickupPolicy>* scheduler =
    CallScheduler<APCPickupPolicy>::getInstance();

There's an example of this in the attached ThreadSynchTest project.

The WMPickupPolicy will notify the target thread through a user defined window message.

typedef WMPickupPolicy<WM_USER + 1> WMPickup;
CallScheduler<WMPickup>* scheduler = CallScheduler<WMPickup>::getInstance();

The template parameter passed to WMPickupPolicy indicates which message to post to the thread. The receiving thread should deal with this message code in its message loop, such as:

while(GetMessage(&msg, NULL, 0, 0))
{
    if(msg.message == WMPickup::WM_PICKUP)
    {
        WMPickup::executeCallback(msg.wParam, msg.lParam);
        continue;
    }
    TranslateMessage(&msg);
    DispatchMessage(&msg);
}

There's an example of this in the attached ThreadSynchWM project.
scheduler->syncCall(dwThreadId, function, timeoutInMS);

// Class in which our target function resides
MyClass classInstance;

// String parameter
string myString = "hello world";

// Init functor
boost::function<int(const string&)> myFunctor =
    boost::bind(&MyClass::someFunction, // Function
                &classInstance,         // Instance
                boost::ref(myString));  // Parameter

// Make the call
// The return value template specification can be omitted
// in this case, as it's also deduced from the boost functor.
// I've included it here to show how it can be specified,
// and how it must be specified if mere function pointers
// are used in place of the functors.
int x = scheduler->syncCall
<
    int // Return type.
>
(
    dwThreadId,  // Target thread
    myFunctor,   // Functor to call from target thread
    timeoutInMS  // Number of milliseconds to wait
                 // for the call to begin
);

// Class in which our target function resides
MyClass classInstance;

// String parameter
string myString = "hello world";

// String for the return value
string myReturnedString;

try
{
    // Init functor
    boost::function<string(const string&)> myFunctor =
        boost::bind(&MyClass::someFunction, // Function
                    &classInstance,         // Instance
                    boost::ref(myString));  // Parameter

    // Make the call
    myReturnedString = scheduler->syncCall
    <
        string, // Return type
        ExceptionTypes<std::exception, MyException>
    >
    (
        dwThreadId,  // Target thread
        myFunctor,   // Functor to call from target thread
        timeoutInMS  // Number of milliseconds to
                     // wait for the call to begin
    );
}
catch(CallTimeoutException&)
{
    // The call timed out, do some other stuff and try again
}
catch(CallSchedulingFailedException&)
{
    // The call scheduling failed,
    // probably caused by the pickup policy not doing its job
}
catch(std::exception& e)
{
    // Deal with e
}
catch(MyException& e)
{
    // Deal with e
}
catch(UnexpectedException&)
{
    // We didn't expect this one.
    // It's time to read someFunction's docs.
}

You obviously won't have to catch all these exceptions all the time, but if you feel like it, you may. It's up to you, really. Whether or not you want an exception-safe application, that is. In most cases, you will want to have a function re-call itself in context of the function's "owner thread", rather than call specific functions such as shown above. The ThreadSynchTestWM example attached shows how to do this in the updateText function.

The ThreadSynch library heavily uses templates and preprocessor macros (through e.g. boost's MPL). If you wish to understand exactly how (and why) the library works, you should read through the source code. That being said, I will cover some of the basics here.

There are two main players in each cross thread call, the "client" Thread A and the "target" Thread B. Right before a cross thread call, Thread B is in an unknown state. It's up to the PickupPolicy to either forcefully change that state, or gracefully take care of the scheduled calls when Thread B becomes available (such as enters an alertable wait state). Thread A will call CallScheduler::syncCall with a set of template parameters, as well as a target thread, functor and timeout. To get a quick idea of what happens next, consider this activity diagram.

The CallScheduler::syncCall function will essentially allocate a CallHandler instance, which is the structure that takes care of the actual call, once the other thread has picked up on the notification made by the PickupPolicy. CallHandler includes wrapper classes for exception- and return value capturing within Thread B. syncCall's newly created CallHandler instance is enqueued to the specific target thread's call queue, which can be seen in this activity diagram.

When the call has been enqueued, and the PickupPolicy has been notified, syncCall will wait for an event or timeout to occur.
Regardless of which happens first, syncCall will follow up by locking the CallHandler. If the scheduled call had already begun executing (but not completed) when the timeout passed, this lock will wait for the call's completion. Upon getting the lock, the state of the scheduled call will be checked. In case of completion, the result will be passed back to syncCall's caller -- that is, either an exception being re-thrown, or a return value returned. If, however, the call had not yet been completed nor begun when the CallHandler lock was obtained, the call will be removed from the target thread's queue. This guarantees that return values, exceptions and parameters aren't lost. The status returned by syncCall will be the accurate status of what's gone down in Thread B.

To rewind a bit, Thread B is going about its business as usual. Then, at some arbitrary (though policy defined) point in time, the PickupPolicy steps in and makes the thread call a function within CallScheduler. That function, executeScheduledCalls, will fetch and execute each and every CallHandler callback scheduled for the current thread, in a first-in-first-out order. See this activity diagram for CallScheduler::executeScheduledCalls.

The scheduled calls will be fetched through the function getNextCall, until no more are found. See this activity diagram for CallScheduler::getNextCall. The key part to this function is the locking of the CallHandler. As opposed to all other lock types used in the library, this one will return immediately if the CallHandler is already locked. The only reason for the lock to be found in place at this point is that the call has timed out, and syncCall is about to delete it. This de-queue and delete will take place as soon as Thread A obtains a lock on both the CallHandler and the thread queue, which it will when getNextCall returns (and thus releases its scoped lock).

For each executed CallHandler, there are two layers.
One utility class takes care of exception trapping (ExceptionExpecter), and another takes care of return value capturing (FunctorRetvalBinder). The results of both these layers will be placed in the CallHandler and processed by Thread A when Thread B completes the call, and drops its lock. I won't go into the details of either of these layers, as it's documented in the attached code.

I strongly suggest that you take the time to read through the source code, if you are to use this library. It shouldn't be too hard to pick up on the flow of things, given the information in this article. If you find any specific section confusing, please do post a comment here. That also goes for this article -- any suggestions are welcome.

This article, along with any associated source code and files, is licensed under The Apache License, Version 2.0

void ThreadBProc()
{
    CallScheduler::Initialize(SCHEDULER_MSG_QUEUE);
    // Or CallScheduler::Initialize(SCHEDULER_APC_QUEUE);

    // Business logic of the thread

    CallScheduler::Uninitialize();
}

void ThreadAProc()
{
    // ...
    CallScheduler* pScheduler = CallScheduler::getInstance(dwThreadID);
    if (pScheduler != NULL)
    {
        pScheduler->syncCall(fn, ...);
    }
    else
    {
        // Thread B has not been initialized yet or it's already un-initialized
    }

    // or the two steps can be combined into a single operation
    CallScheduler::syncCall(dwThreadID, fn, ...);
    // ...
}

CallScheduler* scheduler = CallScheduler::getInstance();

// Create a boost functor with a bound parameter.
// The functor returns a string, and so will the
// synchronized call.
boost::function callback = boost::bind(testFunction, 'a');

using namespace std;

async::layered_cage<exception,
                    async::dont_slice<runtime_error>,
                    logic_error,
                    async::catch_all> cage;

async::future<double> fut = async::call(cage, &some_func_returning_a_double,
                                        arg1, arg2, arg3);

// ... time passes ...

double d = fut.value(); // yields the result or propagates an exception
http://www.codeproject.com/Articles/16726/Cross-thread-calls-in-native-C?msg=2147480
Re: How to draw a rectangle with gradient?

- From: Pontus Ekberg <herrekberg@xxxxxxxxxxxx>
- Date: Sat, 16 Sep 2006 23:01:20 +0200

Daniel Mark wrote:

> Hello all:
>
> I am using PIL to draw a rectangle filled with color blue.
> Is there anyway I could set the fill pattern so that the drawn
> rectangle could be filled with gradient blue?
>
> Thank you
> -Daniel

I don't think there is a built-in gradient function, but you could do
something like this:

import Image
import ImageDraw

im = Image.new("RGB", (100, 100))
draw = ImageDraw.Draw(im)
for l in xrange(100):
    colour = (0, 0, 255 * l / 100)
    draw.line((0, l, 100, l), fill=colour)
im.show()

You have to adjust for the size and colours you want of course, and change the loop so that the gradient is in the right direction.

// Pontus
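The gradient arithmetic itself can be checked without PIL at all. The helper below is a hypothetical variant of the loop above (it normalises by height - 1 so the final row reaches full blue, whereas 255 * l / 100 tops out at 252):

```python
def vertical_blue_gradient(height, max_value=255):
    """Return one (R, G, B) tuple per row: black at row 0, full blue at the last row."""
    if height < 2:
        raise ValueError("a gradient needs at least two rows")
    return [(0, 0, max_value * row // (height - 1)) for row in range(height)]

# Each tuple can then be fed to draw.line(...) exactly as in the message above.
```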
http://coding.derkeiler.com/Archive/Python/comp.lang.python/2006-09/msg02344.html
The following program:

tcsh> cat bug.c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

union u
{
  unsigned i;
  struct
  {
    unsigned low  : 16;
    unsigned high : 16;
  } s;
};

static void
cvt (unsigned *v)
{
  union u u;

  u.i = *v;
  u.s.low = 3;
  memcpy (v, &u.i, sizeof (unsigned));
}

int
main ()
{
  unsigned i = 0x00010002;

  cvt (&i);
  printf ("res = %08x\n", i);
  return EXIT_SUCCESS;
}

Compiled on:

tcsh> uname -a
Linux hostname 2.2.12 #3 SMP Thu Sep 2 03:34:03 MDT 1999 i586 unknown

Using cc:

tcsh> cc --version
egcs-2.91.66

With the command:

tcsh> cc -O1 -o bug bug.c

Produces the following result:

tcsh> ./bug
res = 00000003

I would prefer it to produce a "00010003" result.

At first glance this looks like the same bug as 5184 (also see), which for reference concerns the "addressof" feature within GCC.

Fixed in the egcs currently in Raw Hide.
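On a current compiler the expected conversion can be verified directly. The function below is a sketch of the same union trick from the report (the name cvt_low_to_3 is made up here, and it returns the result instead of mutating in place); note that the expected value assumes a little-endian machine, where the low bitfield maps to the low-order 16 bits:

```c
#include <assert.h>
#include <string.h>

union half_words {
    unsigned i;
    struct {
        unsigned low  : 16;
        unsigned high : 16;
    } s;
};

/* Overwrite the low 16 bits of v with 3 via the union, copying the
   result out with memcpy as the bug report does. */
unsigned cvt_low_to_3(unsigned v)
{
    union half_words u;
    unsigned out;

    u.i = v;
    u.s.low = 3;
    memcpy(&out, &u.i, sizeof(unsigned));
    return out;
}
```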
https://bugzilla.redhat.com/show_bug.cgi?id=6855
SYNOPSIS

#include <tracefs.h>

int tracefs_synth_echo_cmd(struct trace_seq *seq, struct tracefs_synth *synth);
struct tracefs_hist *tracefs_synth_get_start_hist(struct tracefs_synth *synth);
const char *tracefs_synth_get_name(struct tracefs_synth *synth);
int tracefs_synth_raw_fmt(struct trace_seq *seq, struct tracefs_synth *synth);
const char *tracefs_synth_show_event(struct tracefs_synth *synth);
const char *tracefs_synth_show_start_hist(struct tracefs_synth *synth);
const char *tracefs_synth_show_end_hist(struct tracefs_synth *synth);
struct tep_event *tracefs_synth_get_event(struct tep_handle *tep, struct tracefs_synth *synth);

DESCRIPTION

See tracefs_synth_alloc(3) for allocation of synthetic events, and tracefs_synth_create() for creating the synthetic event on the system.

tracefs_synth_echo_cmd() acts like tracefs_synth_create(), but instead of creating the synthetic event in the system, it will write the echo commands to manually create it in the seq given.

tracefs_synth_get_start_hist() returns a struct tracefs_hist descriptor describing the histogram used to create the synthetic event.

enum tracefs_synth_handler {
	TRACEFS_SYNTH_HANDLE_MATCH,
	TRACEFS_SYNTH_HANDLE_MAX,
	TRACEFS_SYNTH_HANDLE_CHANGE,
};

tracefs_synth_get_name() returns the name of the synthetic event or NULL on error. The returned string belongs to the synth event object and is freed with the event by tracefs_synth_free().

tracefs_synth_raw_fmt() writes the raw format strings (dynamic event and histograms) of the synthetic event in the seq given.

tracefs_synth_show_event() returns the format of the dynamic event used by the synthetic event or NULL on error. The returned string belongs to the synth event object and is freed with the event by tracefs_synth_free().

tracefs_synth_show_start_hist() returns the format of the start histogram used by the synthetic event or NULL on error. The returned string belongs to the synth event object and is freed with the event by tracefs_synth_free().

tracefs_synth_show_end_hist() returns the format of the end histogram used by the synthetic event or NULL on error. The returned string belongs to the synth event object and is freed with the event by tracefs_synth_free().

The tracefs_synth_get_event() function returns a tep event, describing the given synthetic event. The API detects any newly created or removed dynamic events. The returned pointer to tep event is controlled by @tep and must not be freed.

RETURN VALUE

tracefs_synth_get_name(), tracefs_synth_show_event(), tracefs_synth_show_start_hist() and tracefs_synth_show_end_hist() return a string owned by the synth event object.

The tracefs_synth_get_event() function returns a pointer to a tep event or NULL in case of an error or if the requested synthetic event is missing. The returned pointer to tep event is controlled by @tep and must not be freed.

All other functions return zero on success, or -1 in case of an error.

SEE ALSO

tracefs_synth_alloc(3), tracefs_synth_create(3), tracefs_synth_destroy(3), tracefs_synth_complete(3), tracefs_synth_trace(3), tracefs_synth_snapshot(3), tracefs_synth_save(3)
https://trace-cmd.org/Documentation/libtracefs/libtracefs-synth-info.html
Ancestral Quest Basics is a free light edition of the full Ancestral Quest product.

When Incline Software introduced Ancestral Quest back in 2007, I found their limited functionality trial edition less than commendable; it allowed you to import your data from a GEDCOM file, but did not allow you to export a GEDCOM file. I recommended against buying genealogy software until you had a chance to examine its GEDCOM output for your own data. I dared Incline Software: Show me the GEDCOM! On 2008 Nov 12, Incline Software did just that. With the introduction of Ancestral Quest 12.1, they replaced their limited functionality trial edition with a fully-functional 60-day trial. Today Incline Software introduces Ancestral Quest 12.1.27, which replaces the 60-day trial with a free edition: Ancestral Quest Basics.

Incline Software managed to pick the worst possible name for their free edition.

Ancestral Quest offers optional collaboration features. Those features are truly optional; the base product does not include the collaboration features, you need to run a separate installer to get those features. For many years, Incline Software has referred to the installer for the base product as the Basic Installer and the one for the optional collaboration features as the Collaboration Support Installer. The download page tells you that the Basic Installer will give you the basic functionality of Ancestral Quest - and that it is all you need if you do not intend to use the Collaboration feature. Moreover, the filename of the Basic Installer is AQ12-Basic.exe, so it is only natural to think of the base product as Ancestral Quest Basic.

Today, Incline Software introduces Ancestral Quest Basics. That is Basics, with a Latin Small Letter S, not Basic. Ancestral Quest Basics is a free light edition of the full Ancestral Quest product. It is the light edition of the full product, but I hesitate to call it a light product. Ancestral Quest Basics is a fairly complete edition of Ancestral Quest.
Even some of the collaboration features are available in Ancestral Quest Basics; you can opt to install either the basic Ancestral Quest Basics, or Ancestral Quest Basics with Collaboration Support. So, Incline Software is now offering Ancestral Quest Basics and Ancestral Quest Full, and both are available as the Basic product, or the Basic product including collaboration support. Incline Software managed to pick the worst possible name for their free edition. This creates confusion.

Incline manages to create even more confusion. There are two download pages; one for Ancestral Quest Full and one for Ancestral Quest Basics. However, there is no real difference here; the two different pages offer exactly the same downloads.

Ancestral Quest includes limited PDF support, but it isn't native PDF support. It is one of those programs that come with a PDF printer driver. I personally dislike setup programs that install yet another PDF printer driver without asking my permission, and I am happy to say that the Ancestral Quest installer does not do that, but you do need to pay attention. The installer offers three setup types: Custom, Minimal and Typical. The Typical Setup is the default and does include the PDF printer driver. Choose Minimal to get just Ancestral Quest or Custom to have full control over all installation choices.

The installer does not make you choose between Ancestral Quest Basics and Ancestral Quest Full. There is just one installer for both editions; which edition you'll be using depends on whether you have a registration key. When you start Ancestral Quest for the first time, it will throw up a dialog box prompting you to either enter your registration key for Ancestral Quest Full or to continue using Ancestral Quest Basics. Actually, Ancestral Quest Basics will throw up that dialog box every time you start it, asking you to either register Ancestral Quest Full or continue using Ancestral Quest Basics.
Ancestral Quest Basics isn't freeware, it is annoyware. It really is annoyware; it does not only annoy you with this dialog box every time you start Ancestral Quest Basics, but also throws it up whenever you select a restricted feature. Incline Software could have greyed out the restricted menu items, or left them off altogether; instead they chose to make you hate this dialog box.

There is another dialog, which you may or may not see, depending on whether you already had Ancestral Quest installed on your system. It is the Ancestral Quest Welcome Dialog, which asks you to choose some options, such as whether to capitalise surnames (don't!), enable LDS options (don't, unless you are mormon), and PAF Compatibility Mode; it is a nice feature, but it isn't a mode, it should have been called Import my PAF settings.

The next dialog prompts you about Ancestry.com integration. Ancestral Quest, even Ancestral Quest Basics, will do background searches through the Ancestry.com database to find matches to your tree. That can be very handy. However, actually viewing those matches generally requires a rather expensive Ancestry.com subscription.

When you start Ancestral Quest, it connects to the Internet without asking your permission or warning you that it is about to do so. It does so to check for program updates and get Ancestral Quest news. The dialog box that shows the update status and news already includes a pair of radio buttons to choose between daily and weekly update checks. Oddly, Ancestral Quest performs this update check while the option to Automatically check for product updates is unchecked by default.

Ancestral Quest Basics lacks some features, but includes all the, ahem, Basics. Its features include full editing capabilities, including the ability to merge individuals. It includes GEDCOM import and export, as well as database backup and restore.
Ancestral Quest and PAF do not just look and feel very similar because PAF is based on the code for Ancestral Quest 3; Ancestral Quest can also read and write PAF files. In fact, Ancestral Quest will let you work with PAF files. Naturally, Ancestral Quest offers the ability to convert PAF files to AQ files. Ancestral Quest Basics includes all those features. It even adds an Ancestral Quest menu item to PAF's Tool menu. What it lacks, and what Ancestral Quest Full offers, is the ability to convert AQ files to PAF files.

Ancestral Quest Basics includes almost the same support for notes and sources as Ancestral Quest Full. That includes sources designed around Evidence! (note: not the later book Evidence Explained) and the ability to share source citations between multiple events. A nice-to-have feature that's missing from Ancestral Quest Basics is the ability to spell-check notes.

Ancestral Quest Basics does not disappoint. It really offers all the basics.

There's quite a difference in support for reports and charts between Ancestral Quest Basics and Ancestral Quest Full, but Ancestral Quest Basics does not disappoint. It really offers all the basics. Ancestral Quest Basics includes a fair number of reports and charts, even wall charts. Ancestral Quest Basics includes individual summary reports, pedigree charts, family group sheets, ancestry charts and descendants charts, ancestral reports (ahnenlists) and descendant reports (modified register). Several sheets and charts can be printed as blank forms. Ancestral Quest Basics includes reports for possible problems, end of line individuals, duplicate individuals and unlinked individuals. It even includes a research log, and LDS reports.

Ancestral Quest Full additionally includes fan charts, dropline descendancy charts, line of descent reports, and the ability to display siblings on ancestry charts and reports.
It allows customising the boxes of wall charts, and the sentences used in narrative reports (ahnenlist and modified register). It has reports for citations and family reunion contacts, supports custom reports, and allows combining reports into a family history book. Ancestral Quest Full also includes the ability to print reports to a PDF file, a Word file or a WordPerfect file.

Ancestral Quest Basics includes web features; direct links to Ancestry.com, and searches on FamilySearch and WorldVitalRecords from within Ancestral Quest. It does not include the ability to search other sites, or add your own favourites. Not that it matters much, most of the web integration isn't much more than a menu that starts your browser. Ancestral Quest Basics does include integrated FamilySearch IGI search.

Ancestral Quest Basics includes LDS features. It supports LDS events and offers LDS reports. Unless you are a mormon, you probably don't care much about that. It also integrates with New FamilySearch (NFS), the decade-delayed social genealogy site. It lacks a few features that Ancestral Quest Full offers. That hardly matters, as NFS is still only accessible to mormons anyway.

0 HEAD
1 SOUR AncestQuest
2 NAME Ancestral Quest
2 VERS 12.01.27
2 CORP Incline Software, LC
3 ADDR PO Box 95543
4 CONT South Jordan, UT 84095
4 CONT USA
1 DEST Ancestral Quest
1 DATE 17 DEC 2010
2 TIME 21:45:39
1 FILE AncestralQuest.ged
1 GEDC
2 VERS 5.5
2 FORM LINEAGE-LINKED
1 CHAR UTF-8

Back in 2007, in my review of Ancestral Quest 12, I counseled against buying a genealogy application that does not allow you to evaluate its GEDCOM output as a matter of principle; you need to make sure that you can not only get your data in, but also get it out again. Fact is, Ancestral Quest's GEDCOM support is excellent; it allows you to export your data to ANSEL, UTF-8 and UTF-16 (UNICODE). It does not support export to ASCII, but that is hardly an issue.
Not only are ANSEL, UTF-8 and UTF-16 all supersets of ASCII, they are also considerably better choices, and it is a little-known fact that few genealogy applications import ASCII files correctly anyway. When I reviewed Ancestral Quest 12, I remarked that Ancestral Quest supports all the right export formats and none of the wrong ones. That is not entirely correct. Ancestral Quest is a Unicode application, and should not export to ANSEL, as export to anything but a Unicode encoding might lose information. However, users might complain if they dropped it, because there are still many non-Unicode utilities around.

The GEDCOM header that Ancestral Quest writes is close to perfect. The header contains the product name and the full address of the company that created the product. The VERS tag contains nothing but a version number, the header contains the date and time it was created and its original file name. There are two things wrong here. One, the SUBM (submitter) tag is missing. I had not entered my details yet, and Ancestral Quest did not bother to prompt me (PAF does). Secondly, this header is incorrect and technically illegal; it claims to be a GEDCOM 5.5 header but uses UTF-8, a character set that's illegal in GEDCOM 5.5. Ancestral Quest supports UTF-8 and several other GEDCOM 5.5.1 features, so Ancestral Quest's GEDCOM header should identify the GEDCOM version as 5.5.1.

Ancestral Quest was always a safe choice for PAF users looking for something better, and now Ancestral Quest Basics is free.

Ancestral Quest is a Unicode-based 32-bit Windows application. It does not require Microsoft .NET, just Windows, and it does not have to be a very recent version. The latest version of Ancestral Quest still runs on Windows NT 4.0 and Windows 2000, both more than a decade old already. The Ancestral Quest installer lacks some Microsoft and InstallShield components to reduce size, because these are probably on your system already.
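The version/character-set mismatch described above is mechanical enough to check in a few lines. This is a hypothetical Python sketch, not anything Ancestral Quest ships; it only knows the one rule discussed here (UTF-8 and UTF-16 require GEDCOM 5.5.1):

```python
def check_gedcom_header(header_lines):
    """Return a list of problems found in a GEDCOM header."""
    gedcom_version = charset = None
    in_gedc = False
    for line in header_lines:
        parts = line.strip().split(None, 2)
        if len(parts) < 2:
            continue
        level, tag = parts[0], parts[1]
        value = parts[2] if len(parts) > 2 else ""
        if level == "1":
            # VERS only means the GEDCOM version when it sits under GEDC;
            # the VERS under SOUR is the product version.
            in_gedc = (tag == "GEDC")
            if tag == "CHAR":
                charset = value
        elif level == "2" and in_gedc and tag == "VERS":
            gedcom_version = value
    problems = []
    if gedcom_version == "5.5" and charset in ("UTF-8", "UNICODE"):
        problems.append("header claims GEDCOM 5.5 but %s requires 5.5.1" % charset)
    return problems
```

Run against the header shown above, the sketch reports exactly the complaint made in this review; changing the GEDC VERS line to 5.5.1 silences it.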
Some of these weren't on my Vista 64-bit system, and were downloaded almost seamlessly. Still, Incline Software should provide a complete installer that does not require any more downloading to do its thing. Like PAF, Ancestral Quest runs fine on 64-bit Vista. That's worth mentioning, because Legacy 7.4 does not. However, the PDF-Change software that Ancestral Quest includes for printing to PDF does not work on 64-bit Windows. The Ancestral Quest web site has a page that explains the issue and provides several alternatives.

Ancestral Quest is very PAF-like. Well, actually, PAF is very Ancestral Quest-like.

I don't particularly like Ancestral Quest. Its interface is even more dated than the Legacy or TMG interfaces. I even feel that its user interface is in some ways inferior to PAF's. However, Ancestral Quest is very PAF-like. Well, actually, PAF is very Ancestral Quest-like. Ancestral Quest offers more features than PAF and its GEDCOM support is certainly as good as PAF's GEDCOM support. Incline Software continues to maintain and support Ancestral Quest, while FamilySearch has silently abandoned PAF. Ancestral Quest was always a safe choice for PAF users looking for something better, and now Ancestral Quest Basics is free.

Despite its dated looks, Ancestral Quest is also of interest to Family Tree Maker users. Ancestry.com has done little to address the many complaints about Family Tree Maker defects and shortcomings. Many Ancestry.com users only use Family Tree Maker because it integrates with Ancestry.com. The Ancestry.com support that Ancestral Quest has been offering for years may be a practical alternative.

Ancestral Quest Basics is of interest to anyone looking for free genealogy software for Windows. The four best known free genealogy applications for Windows are FamilySearch PAF, Millennia Legacy Family Tree Standard, MyHeritage Family Tree Builder Regular and RootsMagic Essentials. Of these four, only PAF and RootsMagic are Unicode-based.
The release of Ancestral Quest Basics seems very timely. FamilySearch has finally dropped all pretense of maintaining PAF; they just revealed a new home page, and PAF is no longer on it. Incline Software is positioning Ancestral Quest Basics as the ideal PAF replacement; extremely compatible, very similar look and feel, more features and free. They just want to remind you, every time you use it, that their full product offers yet more features.
http://www.tamurajones.net/AncestralQuestBasics.xhtml
Reverse int and String array in Java - Example

import java.util.Arrays;
import org.apache.commons.lang.ArrayUtils;

/**
 * Java program to reverse array using Apache commons ArrayUtils class.
 * In this example we will reverse both int array and String array to
 * show how to reverse both primitive and object array in Java.
 *
 * @author
 */
public class ReverseArrayExample {

    public static void main(String args[]) {
        int[] iArray = new int[] {101, 102, 103, 104, 105};
        String[] sArray = new String[] {"one", "two", "three", "four", "five"};

        // reverse int array using Apache commons ArrayUtils.reverse() method
        System.out.println("Original int array : " + Arrays.toString(iArray));
        ArrayUtils.reverse(iArray);
        System.out.println("reversed int array : " + Arrays.toString(iArray));

        // reverse String array using ArrayUtils class
        System.out.println("Original String array : " + Arrays.toString(sArray));
        ArrayUtils.reverse(sArray);
        System.out.println("reversed String array in Java : " + Arrays.toString(sArray));
    }
}

Output:
Original int array : [101, 102, 103, 104, 105]
reversed int array : [105, 104, 103, 102, 101]
Original String array : [one, two, three, four, five]
reversed String array in Java : [five, four, three, two, one]

That's all on how to reverse an array in Java. In this Java program, we have seen examples to reverse String and int arrays using the Apache commons ArrayUtils class. By the way, there are a couple of other ways to reverse an array as well, e.g. by converting the array to a List and then using the Collections.reverse() method, or by using brute force and reversing the array with your own algorithm. It depends upon which method you like. If you are practicing and learning Java, try to do this exercise using loops.

Related Java Programming tutorials from Javarevisited Blog

5 comments :

Hi Javin gr8 article but if we go by the way of java..then this thing can also be achieved..
You can reverse an array like this:

public void reverse(Object[] a) {
    for (int i = 0; i < a.length / 2; i++) {
        Object temp = a[i]; // swap using temporary storage
        a[i] = a[a.length - i - 1];
        a[a.length - i - 1] = temp;
    }
}

It's worthy to note that it doesn't matter if the array length is an odd number, as the median value will remain unchanged. I have to admit that I haven't tested this but it should work. For now please check the following program..

disappointed. i expected that you would provide algorithm for this. but you ended up in using open source library method.

public void swap(int[] arr, int a, int b) {
    int temp = arr[a];
    arr[a] = arr[b];
    arr[b] = temp;
}

public int[] reverseArray(int[] arr) {
    int size = arr.length - 1;
    for (int i = 0; i < size; i++) {
        swap(arr, i, size--);
    }
    return arr;
}

/* let s1 and s2 be the two strings that we have to compare.
   we'll insert every character of the first string into a hashmap.
   the character will be the key, whereas, the frequency of the
   character will be the value */
HashMap<Character, Integer> hm = new HashMap<Character, Integer>();
for (int i = 0; i < s1.length(); i++) {
    char c = s1.charAt(i);
    Integer count = hm.get(c);
    hm.put(c, count == null ? 1 : count + 1);
}
for (int i = 0; i < s2.length(); i++) {
    char j = s2.charAt(i);
    Integer val = hm.get(j);
    if (val != null) {
        if (val > 1) {
            val--;
            hm.put(j, val);
        } else {
            hm.remove(j);
        }
    } else {
        System.out.println("Not anagrams");
        return;
    }
}
if (hm.isEmpty()) {
    System.out.println("Anagrams");
} else {
    System.out.println("Not anagrams");
}

Disappointing...... Was looking for an algorithm.
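For the commenters asking for a plain-loop algorithm, here is a tested version of the in-place reversal they describe. The class name is arbitrary; the swap-from-both-ends loop is the standard O(n) approach and allocates no extra array:

```java
class InPlaceReverse {

    // Reverse arr in place by swapping mirrored elements; the middle
    // element of an odd-length array stays where it is.
    static void reverse(int[] arr) {
        for (int i = 0, j = arr.length - 1; i < j; i++, j--) {
            int temp = arr[i]; // swap using temporary storage
            arr[i] = arr[j];
            arr[j] = temp;
        }
    }
}
```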
http://javarevisited.blogspot.com/2013/03/how-to-reverse-array-in-java-int-String-array-example.html?showComment=1399931072274
# onlyTLD

Just get the TLD from a domain. No other function. No non-standard library dependencies. Because it is simple, it is fast: one million queries only require 2.4s.

## How to use

In Python 3.5+:

```python
from onlytld import get_tld, get_sld

assert get_tld("abersheeran.com") == "com"
assert get_sld("upload.abersheeran.com") == "abersheeran.com"
```

Punycode-encoded domain names are supported: if a punycode-encoded domain is passed in, a punycode-encoded domain will be returned; otherwise a UTF-8 string will be returned.

## Update the TLD list

You can run `onlytld.data.fetch_list` regularly in your code, or run `python -m onlytld.data` in crontab.

## Use your own TLD list

Maybe this is useless, but I still provide this function.

```python
from onlytld import set_datapath, get_tld

set_datapath(YOUR_FILE_PATH)
assert get_tld("chinese.cn") == "cn"
```

## Why this

There are many libraries on PyPI that can get the TLD, such as publicsuffix2, publicsuffixlist and dnspy, but they have too many functions. I just need a library that gets the TLD, with no dependencies outside the standard library.
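Internally, public-suffix libraries like this one do a longest-suffix match against a rule list. The sketch below is not onlyTLD's implementation, just a toy illustration of the idea with a tiny hard-coded rule set:

```python
# Toy rule set; real libraries load the full public suffix list.
RULES = {"com", "uk", "co.uk", "cn"}

def naive_tld(domain):
    """Return the longest suffix of `domain` found in RULES."""
    labels = domain.lower().rstrip(".").split(".")
    for start in range(len(labels)):
        candidate = ".".join(labels[start:])
        if candidate in RULES:
            return candidate
    return labels[-1]  # fall back to the last label

def naive_sld(domain):
    """Return the registrable domain: one label plus the matched TLD."""
    tld = naive_tld(domain)
    prefix = domain.lower().rstrip(".")[: -(len(tld) + 1)]
    return prefix.split(".")[-1] + "." + tld
```

Because candidates are tried from longest to shortest, `example.co.uk` matches `co.uk` before `uk` -- the same reason a real library needs the full rule list rather than just the final label.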
https://libraries.io/pypi/onlytld
Bdale Garbee <address@hidden> wrote:

> Please comment on the attached patch from Bastian Blank regarding
> SIGPIPE behavior proposed for inclusion in my Debian packaging of tar.

It is reasonable, but it is better done in main(). I installed the following patch:

diff --git a/src/tar.c b/src/tar.c
index dbffc2a..e10b804 100644
--- a/src/tar.c
+++ b/src/tar.c
@@ -2454,10 +2454,10 @@ main (int argc, char **argv)
 
   obstack_init (&argv_stk);
 
-#ifdef SIGCHLD
+  /* Ensure default behavior for some signals */
+  signal (SIGPIPE, SIG_DFL);
   /* System V fork+wait does not work if SIGCHLD is ignored. */
   signal (SIGCHLD, SIG_DFL);
-#endif
 
   /* Decode options. */

Regards,
Sergey
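The patch matters because signal dispositions are inherited across fork()/exec(): if the process that launched tar ignored SIGPIPE, tar would too. A small POSIX sketch of the reset (the helper name is ours, not tar's); signal() returns the previous handler, so the change is observable:

```c
#include <assert.h>
#include <signal.h>

/* Reset SIGPIPE to its default disposition, as the tar patch does
   at the top of main(). */
void reset_sigpipe(void)
{
    signal(SIGPIPE, SIG_DFL);
}
```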
http://lists.gnu.org/archive/html/bug-tar/2009-06/msg00009.html
I need to mock an interface that calls MSMQ. Is there a way I can use Moq to simulate a real MSMQ scenario in which there are 10 messages in the queue: I call the mocked function 10 times and get a pre-defined object, and on the 11th call I get a different return value (e.g. null)?
https://codedump.io/share/HBJzIcSo5Jqg/1/moq-to-set-up-a-function-return-based-on-called-times
CC-MAIN-2017-39
refinedweb
130
67.89
I have spent most of last night and this afternoon working out how to implement a website for my local LAN that would enable use of my DVD writer from a remote host over a web interface. I need to provide a small web application that can be used to burn ISO images onto CDs or DVDs. The application should also verify the CD or DVD once it has been burnt.

Security

To start with I needed to find out how to control the CD or DVD burner from the website. There is the small problem of security here. I could not simply give the Apache user access to /dev/sr0 ( the cd device ) because then it is conceivable that anyone or any rogue application might be able to use the Apache service to monkey with my device. I had to provide some kind of abstraction which could authenticate / authorise the request prior to performing it.

Python

Python is fast becoming my favourite scripting language for working in Linux. It has some very nice libraries that make things like network programming very easy. It also has great sys and os libraries that are useful for working with the native operating system and environments.

YAMI ( Yet Another Messaging Infrastructure )

YAMI makes the nuts and bolts of client server communication very easy. Read up on it here. It can be compiled with support for c/c++, java, tcl and python. I only bothered with support for python. I had to ensure that the yamipyc.so module was located in the default python search path for my machine so mod_python could find it.

Apache

It is a relatively simple procedure to add a python handler to a website. Look up mod_python. I will just say that you can configure mod_python in the Apache config to use a specific python file to handle all python requests. In my case I used the mod_python.publisher handler, which is a built-in handler that is geared for reading post and get vars as well as publishing responses.
I could have done all this in PHP, but seeing as my plan was to use Python for the application layer, I thought a connector to Python was the easiest. In the background the plan is to have a Python server listening for connections on a specific port. The client will send it commands and it will respond appropriately. As the service is executed under a user with permissions to the CD device, and there is authentication and sanitisation going on in front of the device, we have extra security. I also plan to implement controls on the firewall to allow only one specific machine on my LAN to connect to it.

Flow

So here is how it should all work:

- Website posts form to the Python handler (handler.py) - Apache mod_python knows how to manage this.
- handler.py authenticates the request.
- handler.py establishes a client connection to server.py.
- handler.py sends commands based on the post it has received to server.py.
- server.py sanitises the commands and executes an os.system call to the device, or it rejects the commands.
- server.py responds with status messages and results.
- handler.py receives the results or status messages and reports back to the website.

Here are the scripts (source code highlighting found here):
handler.py

```python
#!/usr/bin/env python
from mod_python import apache
from YAMI import *
import os

def eject(req):
    agent = Agent()
    agent.domainRegister('cdburner', '127.0.0.1', 12340, 2)
    agent.sendOneWay('cdburner', 'cd', 'eject', [''])
    del agent

def shutdown(req):
    agent = Agent()
    agent.domainRegister('cdburner', '127.0.0.1', 12340, 2)
    agent.sendOneWay('cdburner', 'cd', 'shutdown', [''])
    del agent
```

server.py

```python
#!/usr/bin/env python
from YAMI import *
import os

agent = Agent(12340)
agent.objectRegister('cd')
print 'server started'
while 1:
    im = agent.getIncoming('cd', 1)
    src = im.getSourceAddr()
    msgname = im.getMsgName()
    if msgname == 'eject':
        print 'Ejecting'
        os.system("eject")
    elif msgname == 'shutdown':
        print 'Shutting down'
        del im
        break
    del im
del agent
```

So, a request to the eject URL (e.g. /handler.py/eject) will call the eject function (this functionality is provided by mod_python.publisher) and the CD tray is ejected (so long as the server.py script is running). A request to the shutdown URL will stop the server.py service altogether. I will also be looking at logging and all sorts of other things.

Conclusion

I have looked at a very simple web-layer-to-application-layer messaging system, provided by mod_python with the mod_python.publisher handler and YAMI compiled with support for Python. The thing to note here is that the web server can make calls to the application server (which, incidentally, can be on a different physical server) and the application server responds to the client, which then reports back to the website. All this without changing any security permissions of the underlying operating system.

Where's the third tier? Well, that's the database. Python has excellent support for databases. This application will be no different. I intend to use the Python connector so that access to the database is managed by the application and not the web server. Unfortunately I only have one machine, so all three tiers will be on the same physical hardware.
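The URL-to-function mapping that makes the eject and shutdown requests work can be sketched in plain Python. This is a simplified illustration of the idea, not mod_python itself: the Handlers class and publish function below are hypothetical stand-ins for the module lookup that mod_python.publisher performs.

```python
# Simplified sketch of publisher-style dispatch: the last URL segment
# names a function to call. Handlers and publish() are hypothetical
# stand-ins, not part of mod_python.
class Handlers:
    @staticmethod
    def eject(req):
        return "Ejecting"

    @staticmethod
    def shutdown(req):
        return "Shutting down"

def publish(path, req=None):
    # Take the final path segment, e.g. /handler.py/eject -> eject.
    name = path.rstrip("/").rsplit("/", 1)[-1]
    func = getattr(Handlers, name, None)
    # Reject private names and unknown functions instead of calling them.
    if name.startswith("_") or not callable(func):
        return "404 Not Found"
    return func(req)
```

With this sketch, publish("/handler.py/eject") dispatches to the eject function, while an unknown or underscore-prefixed segment gets a 404-style response rather than a call.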
I accept this blatant security risk because a) I am cheap and b) this is a LAN application only. It will have no access from the world wide web. I control that little nugget with a real firewall in front of my LAN.
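The third tier mentioned in the conclusion can be prototyped the same way: the application layer, not Apache, owns the database handle, so the web server never needs database credentials. A minimal sketch, with sqlite3 standing in for whatever connector ends up being used (the burns table and log_burn function are made-up examples, not part of the actual application):

```python
# Hypothetical third tier: only the application layer touches the
# database. sqlite3 is a stand-in connector and the schema is invented
# for illustration.
import sqlite3

def log_burn(conn, image, status):
    """Record a burn attempt and return the total number logged."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS burns (image TEXT, status TEXT)"
    )
    conn.execute("INSERT INTO burns VALUES (?, ?)", (image, status))
    conn.commit()
    return conn.execute("SELECT COUNT(*) FROM burns").fetchone()[0]
```

In a real deployment the application service would create the connection once and share it across requests, keeping all database access behind the same authenticated front door as the CD device.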
http://david-latham.blogspot.com/2008/06/python-yami-3-tier.html
Suppose you have a hierarchy of components, where you pass props from a top component, and you need to pass those props unaltered to a child. It happens many times, and you don't really want to do this:

```javascript
const IntermediateComponent = (props) => {
  return (
    <ChildComponent
      prop1={props.prop1}
      prop2={props.prop2}
    />
  )
}
```

Instead, you want to pass all the props, regardless of their name. You can do so with the spread operator:

```javascript
const IntermediateComponent = (props) => {
  return (
    <ChildComponent {...props} />
  )
}
```

This syntax is much easier on the eye and much less error-prone, and it allows flexibility, since you don't need to rename props or add props in the intermediate component when they change.
https://flaviocopes.com/react-pass-props-to-children/
```c
/* Kernel Object Display facility for Cisco
   Copyright 1999 */

#ifndef KOD_H
#define KOD_H

typedef void kod_display_callback_ftype (char *);
typedef void kod_query_callback_ftype (char *, char *, int *);

/* ???/???: Functions imported from the library for all supported OSes.
   FIXME: we really should do something better, such as dynamically
   loading the KOD modules.  */

/* FIXME: cagney/1999-09-20: The kod-cisco.c et.al. kernel modules
   should register themselves with kod.c during the _initialization*()
   phase.  With that implemented the extern declarations below would be
   replaced with the KOD register function that the various kernel
   modules should call.  An example of this mechanism can be seen in
   gdbarch.c:register_gdbarch_init().  */

#if 0
/* Don't have ecos code yet.  */
extern char *ecos_kod_open (kod_display_callback_ftype *display_func,
                            kod_query_callback_ftype *query_func);
extern void ecos_kod_request (char *, int);
extern void ecos_kod_close (void);
#endif

/* Initialize and return library name and version.  The gdb side of
   KOD, kod.c, passes us two functions: one for displaying output
   (presumably to the user) and the other for querying the target.  */
extern char *cisco_kod_open (kod_display_callback_ftype *display_func,
                             kod_query_callback_ftype *query_func);

/* Print information about currently known kernel objects.  We
   currently ignore the argument.  There is only one mode of querying
   the Cisco kernel: we ask for a dump of everything, and it returns
   it.  */
extern void cisco_kod_request (char *arg, int from_tty);
extern void cisco_kod_close (void);

#endif /* KOD_H */
```
http://opensource.apple.com/source/gdb/gdb-1344/src/gdb/kod.h
Hello! I have recently been working with the Omniverse Python extension API, and I'm having trouble finding any documentation on how to access Audio2Face through Python. I'm currently trying to create an A2F pipeline, load an audio file into it, and control the player with a custom Python extension. (This will be part of a larger project, but I'm just concerned with this for now.) I've tried import omni.audio2face.core in my own extension.py file, but I'm not sure how to proceed, as I can't seem to access any methods through that module. Currently, I'm running this as an extension loaded through the Extensions window in the Audio2Face application, if that helps. Thanks :)
https://forums.developer.nvidia.com/t/accessing-audio2face-with-python/218672