(I’ll discuss parallel query execution in a future series of posts.)
Hash join shares many characteristics with merge join. Like merge join, it requires at least one equijoin predicate, supports residual predicates, and supports all outer and semi-joins. Unlike merge join, it does not require ordered input sets and, while it does support full outer join, it does require an equijoin predicate.
The algorithm
The hash join executes in two phases: build and probe. During the build phase, it reads all rows from the first input (often called the left or build input), hashes the rows on the equijoin keys, and creates an in-memory hash table. During the probe phase, it reads all rows from the second input (often called the right or probe input), hashes these rows on the same equijoin keys, and looks or probes for matching rows in the hash table. Since hash functions can lead to collisions (two different key values that hash to the same value), we typically must check each potential match to ensure that it really joins.
In pseudo-code:
for each row R1 in the build table
  begin
    calculate hash value on R1 join key(s)
    insert R1 into the appropriate hash bucket
  end
for each row R2 in the probe table
  begin
    calculate hash value on R2 join key(s)
    for each row R1 in the corresponding hash bucket
      if R1 joins with R2
        return (R1, R2)
  end
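The same two phases can be sketched in runnable form. This is an illustrative Python version, not SQL Server's implementation; the final equality check plays the role of the residual predicate that guards against hash collisions:

```python
from collections import defaultdict

def hash_join(build_rows, probe_rows, build_key, probe_key):
    """In-memory hash join: build on the (smaller) left input,
    then probe with each row of the right input."""
    # Build phase: hash every build row on its join key.
    buckets = defaultdict(list)
    for r1 in build_rows:
        buckets[build_key(r1)].append(r1)
    # Probe phase: look up candidates and re-check the join
    # predicate, since different key values can share a bucket.
    for r2 in probe_rows:
        for r1 in buckets.get(probe_key(r2), []):
            if build_key(r1) == probe_key(r2):  # residual check
                yield (r1, r2)
```

Note that no output rows can flow until the build loop has consumed its entire input, which is exactly the blocking behavior of the build phase.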
Note that unlike the nested loops and merge joins which immediately begin flowing output rows, the hash join is blocking on its build input. That is, it must completely read and process its entire build input before it can return any rows. Moreover, unlike the other join methods, the hash join requires a memory grant to store the hash table. Thus, there is a limit to the number of concurrent hash joins that SQL Server can run at any given time. While these characteristics are generally not a problem for data warehouses, they are undesirable for most OLTP applications.
Memory and spilling
Before a hash join begins execution, SQL Server tries to estimate how much memory it will need to build its hash table. We use the cardinality estimate for the size of the build input along with the expected average row size to estimate the memory requirement. To minimize the memory required by the hash join, we try to choose the smaller of the two tables as the build table. We then try to reserve this much memory to ensure that the hash join can successfully execute.
What happens if we grant the hash join less memory than it requests or if the estimate is too low? In these cases, the hash join may run out of memory during the build phase. If the hash join runs out of memory, it begins spilling a small percentage of the total hash table to disk (to a workfile in tempdb). The hash join keeps track of which “partitions” of the hash table are still in memory and which ones have been spilled to disk. As we read each new row from the build table, we check to see whether it hashes to an in-memory or an on-disk partition. If it hashes to an in-memory partition, we proceed normally. If it hashes to an on-disk partition, we write the row to disk. This process of running out of memory and spilling partitions to disk may repeat multiple times until the build phase is complete.
We perform a similar process during the probe phase. For each new row from the probe table, we check to see whether it hashes to an in-memory or an on-disk partition. If it hashes to an in-memory partition, we probe the hash table, produce any appropriate joined rows, and discard the row. If it hashes to an on-disk partition, we write the row to disk. Once we complete the first pass of the probe table, we return one by one to any partitions that we spilled, read the build rows back into memory, reconstruct the hash table for each partition, and then read the corresponding probe partitions and complete the join.
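The spill-and-revisit process is essentially a partitioned (Grace-style) hash join. In this simplified sketch, plain Python lists stand in for the tempdb workfiles and every partition is treated as "spilled":

```python
def partitioned_hash_join(build_rows, probe_rows, key, n_partitions=4):
    """Partition both inputs on a hash of the join key, then join
    each build/probe partition pair independently in memory."""
    build_parts = [[] for _ in range(n_partitions)]
    probe_parts = [[] for _ in range(n_partitions)]
    for r in build_rows:                      # "write build rows to disk"
        build_parts[hash(key(r)) % n_partitions].append(r)
    for r in probe_rows:                      # "write probe rows to disk"
        probe_parts[hash(key(r)) % n_partitions].append(r)
    # Revisit each partition pair: rebuild its hash table, then probe.
    for b_part, p_part in zip(build_parts, probe_parts):
        table = {}
        for r in b_part:
            table.setdefault(key(r), []).append(r)
        for r in p_part:
            for match in table.get(key(r), []):
                yield (match, r)
```

Rows with equal keys always hash to the same partition, so joining each partition pair independently loses no matches.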
Left deep vs. right deep vs. bushy hash join trees
These terms refer to the shape of the query plan as illustrated by this figure:
The shape of the join tree is particularly interesting for hash joins as it affects the memory consumption.
In a left deep tree, the output of one hash join is the build input to the next hash join. Because hash joins consume their entire build input before moving to the probe phase, in a left deep tree only adjacent pairs of hash joins are active at the same time. For example, in the above picture, we begin by building the hash table for HJ1. When HJ1 begins probing, we use the output of HJ1 to build the hash table for HJ2. When HJ1 is done probing, we can release the memory used by its hash table. Only then do we begin probing HJ2 and building the hash table for HJ3. Thus, HJ1 and HJ3 are never active at the same time and can share the same memory grant. The total amount of memory we need is max(HJ1 + HJ2, HJ2 + HJ3).
In a right deep tree, the output of one hash join is the probe input to the next hash join. All of the hash joins must build their complete hash tables before we can begin probing. All of the hash joins are active at once and cannot share memory. When we do begin probing, rows flow up the entire tree of hash joins without blocking. Thus, the total amount of memory we need is HJ1 + HJ2 + HJ3.
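A quick back-of-the-envelope comparison, using hypothetical per-join memory grants (the numbers are made up for illustration):

```python
# Hypothetical memory grants (in MB) for the three hash joins.
HJ1, HJ2, HJ3 = 100, 50, 200

# Left deep: only adjacent pairs of joins are active at once.
left_deep_memory = max(HJ1 + HJ2, HJ2 + HJ3)

# Right deep: all three builds are held simultaneously.
right_deep_memory = HJ1 + HJ2 + HJ3
```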
Examples
The following examples use this schema:
create table T1 (a int, b int, x char(200))
create table T2 (a int, b int, x char(200))
create table T3 (a int, b int, x char(200))
set nocount on
declare @i int
set @i = 0
while @i < 1000
begin
insert T1 values (@i * 2, @i * 5, @i)
set @i = @i + 1
end
set @i = 0
while @i < 10000
begin
insert T2 values (@i * 3, @i * 7, @i)
set @i = @i + 1
end
set @i = 0
while @i < 100000
begin
insert T3 values (@i * 5, @i * 11, @i)
set @i = @i + 1
end
Here is a simple example:
select * from T1 join T2 on T1.a = T2.a
Rows    Executes
334     1         |--Hash Match(Inner Join, HASH:([T1].[a])=([T2].[a]), RESIDUAL:([T2].[a]=[T1].[a]))
1000    1              |--Table Scan(OBJECT:([T1]))
10000   1              |--Table Scan(OBJECT:([T2]))
Notice that T2 has ten times as many rows as T1, and indeed the optimizer chooses to use T1 as the build table and T2 as the probe table.
Now consider this three table join:
select * from (T1 join T2 on T1.a = T2.a)
join T3 on T1.b = T3.a
          |--Hash Match(Inner Join, HASH:([T1].[b])=([T3].[a]), RESIDUAL:([T1].[b]=[T3].[a]))
               |--Hash Match(Inner Join, HASH:([T1].[a])=([T2].[a]), RESIDUAL:([T1].[a]=[T2].[a]))
               |    |--Table Scan(OBJECT:([T1]))
               |    |--Table Scan(OBJECT:([T2]))
100000         |--Table Scan(OBJECT:([T3]))
Note that the optimizer has selected a left deep plan. First, we join the two small tables, T1 and T2. This join yields only 334 rows, which we use to build a hash table before joining with the large table T3.
Now observe what happens if we add a predicate to restrict the size of the smaller two tables. (A single where clause suffices; the optimizer can derive “T2.a < 100” from “T1.a < 100” and “T1.a = T2.a”.)
where T1.a < 100
17        |--Hash Match(Inner Join, HASH:([T2].[a])=([T1].[a]), RESIDUAL:([T1].[a]=[T2].[a]))
34             |--Table Scan(OBJECT:([T2]), WHERE:([T2].[a]<(100)))
50             |--Hash Match(Inner Join, HASH:([T1].[b])=([T3].[a]), RESIDUAL:([T1].[b]=[T3].[a]))
                    |--Table Scan(OBJECT:([T1]), WHERE:([T1].[a]<(100)))
                    |--Table Scan(OBJECT:([T3]))
This time the optimizer selected a right deep plan. T1 and T2 are now so small (34 and 50 rows) that it is better to build hash tables on these two tables and probe using the large table T3 than it is to build a hash table on an intermediate hash join result.
What next?
Now that I’ve given an overview of how each of the three physical join operators works, in my next post (or two) I plan to summarize the different characteristics of these operators and to give more examples to show how SQL Server makes various tradeoffs when deciding how to join tables.
Can you explain more about the difference between the execution plans of the first query and the second query?
In first query execution plan shows
|--Hash Match(Inner Join, HASH:([T1].[a])=([T2].[a]), RESIDUAL:([T2].[a]=[T1].[a]))
In second query execution plan shows
|--Hash Match(Inner Join, HASH:([T1].[a])=([T2].[a]), RESIDUAL:([T1].[a]=[T2].[a]))
Difference is "RESIDUAL:([T2].[a]=[T1].[a]))"[Query1] and "RESIDUAL:([T1].[a]=[T2].[a]))"[Query2].
In both cases T2 probes T1, but why does the execution plan show a different value for "RESIDUAL"?
The two residual predicates (and, thus, the two plans) are equivalent as equality is commutative (i.e., the order of the operation does not matter). Simply reversing the order of the predicate in an ON clause (e.g., "ON T1.a = T2.a" vs. "ON T2.a = T1.a") is sufficient to cause this change in the plan due to how the query optimizer generates the residual predicate.
HTH,
Craig | http://blogs.msdn.com/b/craigfr/archive/2006/08/10/687630.aspx
At the recent Sapphire conference, one of the big pieces of news for developers was the beta release of Cloud Foundry (CF) based services for SAP HANA Cloud Platform. Rui Nogueira provided a lot of details in his blog. As explained in Rui’s blog, we get a lot of new capabilities with these services, like additional runtimes and backing services, additional ways for apps to scale, and so on. Developers are known to be magpies, i.e. interested in new and shiny things. So how does the existing rich set of HANA Cloud Platform capabilities fit with the new and shiny things provided by these CF services? In this blog I will show how to mash the new and existing services together to develop a Fiori application that is backed by an OData service running inside the Cloud Foundry services.
Prerequisites
To successfully run this scenario you would need the following
- A SAP HANA Cloud Platform (HCP) trial/factory account (for new users, register here)
- Registered for the Cloud Foundry based services on HCP (for new users, follow the instructions here OR Step 1 of this blog)
- Working Cloud Foundry CLI from the step above.
- Working JDK/Maven/GIT
- Some knowledge of WebIDE
Part 1: Olingo based OData Backend
Step 1: Developing the OData based Cars service locally
I will be a little lazy here and take a shortcut. Apache Olingo is a Java library that is used to enable OData services for your project. It comes with a Maven archetype that we will use to generate sample code, which will be sufficient for our purposes. Launch a command line: CMD on Windows or Terminal on Mac. If you have a working Maven installation, launch the following command (if you don’t have Maven, clone this git repo). Please note this is a really long command line, so make sure to copy and paste all of it:
mvn archetype:generate -DinteractiveMode=false -Dversion=1.0.0-SNAPSHOT -DgroupId=com.acme -DartifactId=cf-cars-svc -DarchetypeGroupId=org.apache.olingo -DarchetypeArtifactId=olingo-odata2-sample-cars-annotation-archetype -DarchetypeVersion=RELEASE
You should see something like this
This will create a cf-cars-svc directory in your current folder. That’s it, we have all the Java code needed for our purpose 🙂 Change directory to the newly created cf-cars-svc folder. You can check the code if you want to by firing up your favorite IDE. Proceed further to package it using Maven
mvn package
Maven will try to download the internet 😉 Once it has everything it needs, it will package everything nicely as a war file in the target folder.
Step 2: Prepping for Cloud Foundry – manifest.yml
Cloud Foundry requires a manifest file that acts as metadata for your application. This manifest.yml file instructs CF on how much memory to use, what to name the app, any backing services to use, and so on. Please create a manifest.yml file in the cf-cars-svc folder:
---
applications:
- name: cars
  memory: 512M
  host: carsxxx
  path: target/cf-cars-svc.war
As you can see, we will call our application cars; it will use 512MB of memory (you get 1GB of quota as part of the beta) and the host will be called carsxxx. Please note the host needs to be unique across all applications, even those of different users. So if you use just cars it may already be taken. If that happens during the next step, just come back and change it to something unique. Another thing to remember is that yml files are quite finicky about syntax, so if you get a syntax error you may want to use this one from github. We have everything we need locally to push these bits to SAP HCP Cloud Foundry.
Step 3: Push it push it
Remember, you will need the Cloud Foundry CLI for this step. If you haven’t downloaded it yet, now would be a good time to do it from here, as otherwise you are going to hit a wall. Open a command prompt and specify the CF API endpoint for HCP CF:
cf api
You should see something like this
Your CF CLI is now pointing to HCP CF and knows that all commands you give from now on should be in the context of HCP CF. As the good doctor CLI says in the screenshot, you have to log in. You can do that by doing cf login and giving the credentials as shown below
All that is left now is to run the cf push command to push the code. Please make sure you are in the cf-cars-svc folder and that there is a target folder containing the war file in it. Also check that the manifest file manifest.yml is in the cf-cars-svc folder. After cf push you should see this
At the end of a successful push you will see the following; notice the URL in the logs. This will be the URL to access our application. You can also get the same URL from the HCP CF cockpit.
You can also log in to the web-based HCP CF cockpit to check the application; you will need to log in with your credentials. After login you will see a list of all the apps deployed in your account. Click on your application name hyperlink and you should see the following, with the URL to your application as well
Notice the URL for your app, where xxx is the host name you chose in your manifest file. Go to this URL and you will see the following webpage with links to various OData endpoints. There is a button to generate the data as well. Click on the button only once. There is also a link for the service document. This URL ( ) will be used in creating the Fiori app.
Also click on the links with Cars or Drivers in them and you should see some data. Congrats, you are done with the first part and you have an OData-based service running on HCP CF.
Part 2: Developing Fiori Front End
Step 1: Create HCP Destination
Login to the HCP Cockpit and click on Destinations in the left navigation bar. Create a destination with the name Car-O-Data. In the URL field use the service document URL we noted down in Part 1 (). Replace xxxx with the host name you used in the manifest file. The rest of the details for the destination are shown below
Step 2: Developing the Fiori App front End
Browse to Web IDE and choose New Project from Template. Choose the Fiori Master-Detail template. Supply a good project name. The most important part happens in the Data Connection tab, as shown below. Click on Service URL in the left-hand Sources table and choose the Car-O-Data system. Specify the URL again. Make sure you see the entities from the OData service when you click on the button in the URL field
In the template customization, specify the name, namespace, etc. For the Master section, i.e. Object, choose Manufacturers as shown below
For the line item choose cars as shown below.
We are ready with the Fiori application. Click Finish and now you can run your Fiori app to see it pulling data from the OData service you deployed in part one. There you have it: an OData backend running in the Cloud Foundry services of HCP, while the Fiori app runs in Web IDE.
I would be very interested to know how you found this hands on. Looking forward to your work with HCP Cloud Foundry services.
Wow – nice blog Pankaj! Keep it up!!!
Nice tutorial – and also very nice to see that an Olingo based OData service runs without issues on CloudFoundry ;o)
Part 2: Developing Fiori Front End
Step 1: Create HCP Destination
In this step, I’m getting a Service Unavailable error for my URL, but it is working in the browser.
That is strange. If you are able to see the OData service document from the browser then there is no reason for the Service Unavailable dialog. Is it possible for you to upload a screenshot of your settings?
– Pankaj
We got the solution. The error was because we were using a deprecated version. Thanks.
Hi Pankaj, Thanks for the detailed post. This was my first attempt to create a service in CF and I did it in less than 30 minutes following your instructions 🙂 | https://blogs.sap.com/2016/06/08/a-fiori-app-backed-by-cloud-foundry-based-odata-service/ | CC-MAIN-2017-34 | refinedweb | 1,391 | 71.24 |
this and super are reserved keywords in Java. this refers to the current instance of a class, while super refers to the parent of the class in which the super keyword is used.
1. Java this keyword
The this keyword automatically holds the reference to the current instance of a class. It is very useful in scenarios where we inherit a method from a parent class into a child class and want to invoke the method from the child class specifically.
We can use the this keyword to access static fields in the class as well, but the recommended approach is to access static fields using a class reference, e.g. MyClass.STATIC_FIELD.
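A small sketch of the difference (Counter is a hypothetical class): accessing a static field through this compiles, but the class-reference form makes the intent clearer:

```java
class Counter {
    static int COUNT = 0;

    void increment() {
        this.COUNT++;    // compiles, but misleading: COUNT belongs to the class
        Counter.COUNT++; // recommended: access static fields via the class name
    }
}
```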
2. Java super keyword
Similar to the this keyword, super is also a reserved keyword in Java. It always holds the reference to the parent class of any given class.
Using the super keyword, we can access the fields and methods of the parent class in any child class.
3. Java this and super keyword example
In this example, we have two classes, ParentClass and ChildClass, where ChildClass extends ParentClass. I have created a method showMyName() in the parent class and overridden it in the child class.
Now when we try to invoke the showMyName() method inside the child class using the this and super keywords, it invokes the methods from the current class and the parent class, respectively.
public class ParentClass {
    public void showMyName() {
        System.out.println("In ParentClass");
    }
}
public class ChildClass extends ParentClass {
    public void showMyName() {
        System.out.println("In ChildClass");
    }

    public void test() {
        this.showMyName();
        super.showMyName();
    }
}
public class Main {
    public static void main(String[] args) {
        ChildClass childObj = new ChildClass();
        childObj.test();
    }
}
Program output.
In ChildClass
In ParentClass
In this Java tutorial, we learned what the this and super keywords are. We also learned to use both keywords in Java applications.
Happy Learning !!
Ask Questions & Share Feedback | https://howtodoinjava.com/java/basics/this-vs-super/ | CC-MAIN-2019-43 | refinedweb | 289 | 56.66 |
Hi. I am using a comparer class to sort a list of People details by their firstName.
The code to do this is as follows:
public class PersonDetailsComparer : IComparer<PersonDetails>
{
    public int Compare(PersonDetails x, PersonDetails y)
    {
        int returnValue = 0;
        if (x != null && y != null && x.FirstName != null && y.FirstName != null)
        {
            return x.FirstName.CompareTo(y.FirstName);
        }
        return returnValue;
    }
}
This is working fine; that is, the list is sorted in ascending order. But now I need to add some exceptions. For example, I want the following names to appear first in the list:
Lever
Johnson
Zaher
and then sort the remaining list in ascending order.
Can anyone please help me?
Thanks | https://www.daniweb.com/programming/software-development/threads/437749/sorting-using-icomparer | CC-MAIN-2017-47 | refinedweb | 113 | 53.41 |
pclose - close a pipe stream to or from a process
#include <stdio.h>
int pclose(FILE *stream);
The pclose() function shall close a stream that was opened by popen(), wait for the command to terminate, and return the termination status of the process that was running the command language interpreter. However, if a call caused the termination status to be unavailable to pclose(), then pclose() shall return -1 with errno set to [ECHILD] to report this situation. This can happen if the application calls one of the following functions: wait(), waitpid() with a pid argument less than or equal to the process ID of the command language interpreter, or any other function not defined in this volume of IEEE Std 1003.1-2001 that could do one of the above.

Upon successful return, pclose() shall return the termination status of the command language interpreter. Otherwise, pclose() shall return -1 and set errno to indicate the error.
The pclose() function shall fail if:

[ECHILD] The status of the child process could not be obtained, as described above.

The following sections are informative.
None.
None.
There is a requirement that pclose() not return before the child process terminates. This is intended to disallow implementations that return [EINTR] if a signal is received while waiting. If pclose() returned before the child terminated, there would be no way for the application to discover which child used to be associated with the stream, and it could not do the cleanup itself.
Some historical implementations of pclose() either block or ignore the signals SIGINT, SIGQUIT, and SIGHUP while waiting for the child process to terminate. Since this behavior is not described for the pclose() function in IEEE Std 1003.1-2001, such implementations are not conforming. Also, some historical implementations return [EINTR] if a signal is received, even though the child process has not terminated. Such implementations are also considered non-conforming.
Consider, for example, an application that uses:
popen("command", "r")
Following the solution to my own question here, I could create an executable of my code. Now I need to compile it on the DLX machine at my university. If I understood properly, we have the ROOT software as a Singularity container. Therefore, I have to compile my code inside the container, after loading the ROOT image.
I can get the Root image to work (Version 6.06/08), but when I try to compile with “make” I get the error:
fatal error: gsl/gsl_matrix.h: No such file or directory
#include <gsl/gsl_matrix.h>
It seems that there is a problem with the GSL libraries. If I type root-config --has-builtin_gsl, the output is “yes”, so the GSL libraries are there somewhere. I don’t know how to tell the compiler where these libraries are, since I am working with a ROOT image.
I did not change anything in my code or my “Makefile” after I compiled the code on my own machine.
How can I tell the compiler to use the GSL libraries that are in Root? | https://root-forum.cern.ch/t/create-an-executable-with-gsl-libraries-in-a-root-image/32204 | CC-MAIN-2022-27 | refinedweb | 181 | 73.88 |
1. Activating Facebook Pages Managed Source
Log in to your account.
Click the Sources tab.
Select Managed Sources to view available managed sources.
Locate Facebook Pages and click the + button to add a new instance of the Facebook Pages source.
The first time you activate a managed source you are required to complete the license agreement. Fill in the form and click Agree button.
2. Creating a New Facebook Pages Instance
To create a new instance of Facebook Pages after the license terms and conditions have been accepted, click + on the Facebook Pages managed source.
Access Tokens
Facebook Pages Managed Source uses Facebook access tokens to authenticate users to a private page. There are two types of access token, User and App. User Access Tokens enable access to localized and age-restricted content, but are only valid for two months and are not automatically renewed. App Access Tokens never expire, but they will not give you access to Facebook Pages that impose location or age restrictions. An Access Token is assigned to one Facebook account. If additional Facebook Pages instances are required, use a different Facebook account.
To obtain a User Access Token from Facebook, click Add Token button in the new managed source instance.
Login to Facebook to obtain your User Access token generated by Facebook.
Note: DataSift does not store your password or email address, but will store the User Access Token returned by Facebook.
A new User Access token has been generated by Facebook.
Now that we have a valid user access token, complete the New Facebook Pages Managed Source form. Enter a name for this instance; in the example the instance name is Jimmy Choo Pages. This instance will monitor interactions for Comments, Likes and Posts on the company pages of Jimmy Choo, which sells luxury brands of shoes and accessories.
Enter a search string for pages you want to filter.
Select the pages in the search results you want to filter on and click Select Pages button.
Selected pages are displayed in the Activated pages list. Click Save to complete the configuration of this instance.
Note: You may search and add more sites to the Activated pages list, for example, if this instance contains a list of Pages from competitive companies.
Click on Sources > My Managed Sources to view the new Jimmy Choo Pages source.
Click on the new instance to view a summary of attributes including a unique ID for this source.
Note: It is best practice to start consuming a stream from a managed source before starting it.
3. Creating a Facebook Pages Stream
A stream is all the social media interactions and extra data added by DataSift as a result of your filter.
Click on the Sources tab and select My Managed Sources to view the new Facebook Pages managed source, then click on the new instance. Scroll down and select the How to use button. To create a filter, get the CSDL code that enables a Facebook Pages instance to be used in a filter.
Scroll down to In Streams section.
Copy either the source.id or interaction.type CSDL code.
Create a new Facebook Pages filter. Click the Filters tab and click Create a Filter button.
Type in a name and description for your stream. Select the CSDL Code Editor and click the Start Editing button.
Copy the Facebook Pages instance information into the filter. In my example I have copied the source.id and included another condition that matches Facebook post or message content (the facebook_page.message target attribute) containing the string summer or yellow.
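The resulting CSDL looks roughly like this (the source ID below is a placeholder; substitute the ID shown on your own instance page):

```
source.id == "<your-managed-source-id>" and
facebook_page.message contains_any "summer, yellow"
```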
Click Save and Close.
For more information on Facebook targets, see the Facebook Pages targets documentation.
A summary of the configured stream is shown along with the cost (in Data Processing Units) and options to run the stream or edit it again. Click Live Preview.
Click the Play button at the bottom of the screen to start the live preview.
No interactions are displayed as the Jimmy Choo Pages managed source has not been started.
4. Starting a Facebook Pages Managed Source
Open My Managed Sources in a new browser tab.
Click Start button on the Jimmy Choo Pages source.
Click Start Source.
The Jimmy Choo Pages instance has been started.
5. Verifying Filter Conditions
Now the managed source is running, use Live Preview to verify filter conditions are matching interactions correctly. Go back to the browser tab that has Live Preview running. A burst of interactions appear, then interactions continue in real time.
Note: DataSift monitors posts for a window of seven days. Any new comments or likes to a post that is older than seven days will not be filtered, even if the post was created after running the Facebook Pages managed source.
Click the pause button.
6. Analyzing Post Interactions
To display a Facebook Pages Post interaction in more detail, move the mouse pointer over an interaction and a debug symbol is displayed. Click on the interaction to reveal more information.
Use the debug window to view interaction output data. In the example, the Facebook object is a photo, which is described in the output attribute facebook_page.type; facebook_page.message contains the content of a post or comment (the example is a post); and facebook_page.link contains a link to the actual post.
Expand facebook_page.from to view information about the entity who created the post, in this example the entity is a company.
Expand facebook_page.page to view information about the actual Facebook page including its link.
For more examples of the Facebook Pages namespace, see the Facebook Pages Namespace documentation.
Copy the facebook_page.link attribute value and paste it into a new browser window to view the original post.
The browser displays the Facebook object. The object name is Timeline Photo, the photo is referenced in the facebook_page.picture attribute, and the message content matches the attributes displayed in the interaction output.
7. Analyzing Comment Interactions
Monitoring interactions for Comments was enabled when the Jimmy Choo Pages instance was configured. To add a Facebook Page comment to the Jimmy Choo Facebook page, open the browser window that was running Live Preview. Click the Play button at the bottom of the screen to start Live Preview again.
Open the browser window where Live Preview is streaming interactions. The comment posted by Julie Evans has been filtered in real time.
Move the mouse pointer over the interaction and click on it to display debug mode.
Use the debug window to view interaction output data. The Facebook object is a comment, which is described in the output attribute facebook_page.type; facebook_page.message contains the content of the comment.
Expand facebook_page.from to view information about the entity who created the post, in this example the entity is Julie Evans.
8. Analyzing Like Interactions
Monitoring interactions for Likes was enabled when the Jimmy Choo Pages instance was configured. To generate a Like interaction using the Jimmy Choo Facebook Page, open the browser window that was running Live Preview. Click the Play button at the bottom of the screen to start Live Preview again.
Click the Like link in the browser window.
Open the browser window where Live Preview is streaming interactions. The Like posted by Julie Evans has been filtered in real time.
Move the mouse pointer over the interaction and click on it to display debug mode.
Use the debug window to view interaction output data; the Facebook object is a like, which is described in the output attribute facebook_page.type.
Expand facebook_page.from to view information about the entity who liked the post, in this example the entity is Julie Evans.
9. Viewing Facebook Access Token Information
To view information about your access token, select the Jimmy Choo Pages source in Sources > My Managed Sources.
Click Edit this Managed Source button and copy the token in the Add Facebook Tokens to DataSift section.
Open a new browser window and go to Graph API Explorer.
Paste the token into the Access Token: field and click the Debug button.
Your access token details are displayed. My token expires in about two months time. The App ID field contains DataSift Pages which enables this token to be used to validate DataSift Facebook Pages managed sources.
| http://dev.datasift.com/docs/products/stream/howto/sources-and-augmentations/activate-facebook-pages-source | CC-MAIN-2016-40 | refinedweb | 1,367 | 66.13 |
Views
- The View Class
- Creating a Custom View
- Summary
Of all the pieces of the Android system, views are probably the most used. Views are the core building block on which almost every piece of the UI is built. They are versatile and, as such, are used as the foundation for widgets. In this chapter, you learn how to use and how to create your own view.
The View Class
A view is a rather generic term for just about anything that is used in the UI and that has a specific task. Adding something as simple as a button is adding a view. Some widgets, including Button, TextView, and EditText widgets, are all different views.
Looking at the following line of code, it should stand out that a button is a view:
Button btnSend = (Button) findViewById(R.id.button);
You can see that the Button object is defined and then set to a view defined in the application layout XML file. The findViewById() method is used to locate the exact view that is being used as a view. This snippet is looking for a view that has been given an id of button. The following shows the element from the layout XML where the button was created:
<Button android:
Even though the element in the XML is <Button>, it is still considered a view. This is because Button is what is called an indirect subclass of View. In total, there are more than 80 indirect subclasses of View as of API level 21. There are 11 direct subclasses of View: AnalogClock, ImageView, KeyboardView, MediaRouteButton, ProgressBar, Space, SurfaceView, TextView, TextureView, ViewGroup, and ViewStub.
The AnalogClock Subclass
The AnalogClock is a complex view that shows an analog clock with a minute-hand and an hour-hand to display the current time.
Adding this view to your layout XML is done with the following element:
<AnalogClock android:
This view can be attached to a surface by using the onDraw(Canvas canvas) method, and it can be sized to scale to the screen it is being displayed on via the following method:
onMeasure(int widthMeasureSpec, int heightMeasureSpec)
It should be noted that if you decide to override the onMeasure() method, you must call setMeasuredDimension(int, int). Otherwise, an IllegalStateException error will be thrown.
The ImageView Subclass
The ImageView is a handy view that can be used to display images. It is smart enough to do some simple math to figure out dimensions of the image it is displaying, which in turn allows it to be used with any layout manager. It also allows for color adjustments and scaling the image.
Adding an ImageView to your layout XML requires the following:
<ImageView android:
To show multiple figures, you can use multiple ImageViews within a layout. Similar to other views, you can attach events such as a click event to trigger other behavior. Depending on the application you are building, this may be advantageous versus requiring the user to click a button or use another widget to complete an action.
The KeyboardView Subclass
The KeyboardView is one of the most interesting views that exist. This is one of the true double-edged components of the Android system. Using the KeyboardView allows you to create your own keyboard. Several keyboards exist in the Play store that you can download right now and use on your Android device that are based on using the KeyboardView.
The problem is that using an application with a custom keyboard means that all data entry must pass through it. Every “keystroke” is passed through the application, and that alone tends to send shivers down the spine of those who are security conscious. However, if you are an enterprise developer and need a custom keyboard to help with data entry, then this view may be exactly what you are looking for.
Creating your own keyboard is an involved process. You need to do the following:
- Create a service in your application manifest.
- Create a class for the keyboard service.
- Add an XML file for the keyboard.
- Edit your strings.xml file.
- Create the keyboard layout XML file.
- Create a preview TextView.
- Create your keyboard layout and assign values.
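As a sketch of step 5, a minimal keyboard layout file (for example, res/xml/keyboard.xml) might look like the following. The key codes and labels here are illustrative only; -5 is the standard delete key code used by the Keyboard class:

```xml
<Keyboard xmlns:android="http://schemas.android.com/apk/res/android"
    android:keyWidth="25%p"
    android:keyHeight="60dp">
    <Row>
        <Key android:codes="49" android:keyLabel="1" />
        <Key android:codes="50" android:keyLabel="2" />
        <Key android:codes="51" android:keyLabel="3" />
        <Key android:codes="-5" android:keyLabel="DEL" />
    </Row>
</Keyboard>
```

This file is then referenced from the keyboard service class when constructing the Keyboard object.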
The KeyboardView has several methods you can override to add functionality to your keyboard:
- onKey()
- onPress()
- onRelease()
- onText()
- swipeDown()
- swipeUp()
- swipeLeft()
- swipeRight()
You do not need to override all of these methods; you may find that you only need to use the onKey() method.
The MediaRouteButton Subclass
The MediaRouteButton that is part of the compatibility library is generally used when working with the Cast API. This is where you need to redirect media to a wireless display or ChromeCast device. This view is the button that is used to allow the user to select where to send the media.
Note that per Cast design guidelines, the button must be considered “top level.” This means that you can create the button as part of the menu or as part of the ActionBar. After you create the button, you must also use the .setRouteSelector() method; otherwise, an exception will be thrown.
First, you need to add an <item> to your menu XML file. The following is a sample <item> inside of the <menu> element:
<item android:
Now that you have a menu item created, you need to open your MainActivity class and use the following import:
import android.support.v7.app.MediaRouteButton;
Next, you need to declare it in your MainActivity class:
private MediaRouteButton myMediaRouteButton;
Finally, add the code for the MediaRouteButton to the menu of the onCreateOptionsMenu() method. Remember that you must also use setRouteSelector() on the MediaRouteButton. The following demonstrates how this is accomplished:
@Override
public boolean onCreateOptionsMenu(Menu menu) {
    super.onCreateOptionsMenu(menu);
    getMenuInflater().inflate(R.menu.main, menu);
    // Find the menu item and pull the MediaRouteButton out of it
    myMediaRouteItem = menu.findItem(R.id.mediaroutebutton_cast);
    myMediaRouteButton = (MediaRouteButton) myMediaRouteItem.getActionView();
    // The route selector must be set, or an exception is thrown
    myMediaRouteButton.setRouteSelector(myMediaRouteSelector);
    return true;
}
The ProgressBar Subclass
The progress bar is a familiar UI element. It is used to indicate that something is happening and how far along this process is. It is not always possible to determine how long an action will take; luckily, the ProgressBar can be used in indeterminate mode. This allows an animated circle to appear that shows movement without giving a precise measurement of the status of the load.
To add a ProgressBar, you need to add the view to your layout XML. The following shows adding a “normal” ProgressBar:
<ProgressBar android:
Other styles of ProgressBar may also be used. To change the style, you need to add a property to the <ProgressBar> element. The following styles may be used:
Widget.ProgressBar.Horizontal
Widget.ProgressBar.Small
Widget.ProgressBar.Large
Widget.ProgressBar.Inverse
Widget.ProgressBar.Small.Inverse
Widget.ProgressBar.Large.Inverse
Depending on your implementation, you may apply the style either with your styles.xml or from your attrs.xml. For the styles from styles.xml, you would use the following:
style="@android:style/Widget.ProgressBar.Small"
If you have styles inside your attrs.xml file that you want applied to the progress bar, use the following property in the <ProgressBar> element:
style="?android:attr/progressBarStyleSmall"
If you are planning on using the indeterminate mode, you need to pass a property of android:indeterminate into the <ProgressBar> element. You may also specify the loading animation by setting the android:indeterminateDrawable to a resource of your choosing.
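For example, an indeterminate ProgressBar with a custom animation might be declared like this; the drawable name is a placeholder for a resource of your own:

```xml
<ProgressBar
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:indeterminate="true"
    android:indeterminateDrawable="@drawable/my_loading_animation" />
```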
A ProgressBar that is determinate requires updates to be passed to it via the setProgress() or incrementProgressBy() method. These methods should be called from a worker thread. The following shows an example of a thread that uses a Handler and an int for keeping the progress value, and a ProgressBar has been initialized:
new Thread(new Runnable() {
    public void run() {
        while (myProgress < 100) {
            myProgress = doWork();
            myHandler.post(new Runnable() {
                public void run() {
                    myProgressBar.setProgress(myProgress);
                }
            });
        }
    }
}).start();
The Space Subclass
For those who have worked on layouts and visual interfaces, the Space view is one that is both helpful and brings on somewhat lucid nightmares. This view is reserved to add “space” between other views and layout objects.
The benefit to using a Space is that it is a lightweight view that can be easily inserted and modified to fit your needs without you having to do an absolute layout or extra work trying to figure out how relative spacing would work on complex layouts.
Adding a Space is done by adding the following to your layout XML:
<Space android:
The SurfaceView Subclass
The SurfaceView is used when rendering visuals to the screen. This may be as complex as providing a playback surface for a live camera feed, or it can be used for rendering images on a transparent surface.
The SurfaceView has two major callbacks that act as lifecycle mechanisms that you can use to your advantage: SurfaceHolder.Callback.surfaceCreated() and SurfaceHolder.Callback.surfaceDestroyed(). The time in between these methods is where any work with drawing on the surface should take place. Failing to do so may cause your application to crash and will get your animation threads out of sync.
Adding a SurfaceView requires adding the following to your layout XML:
<SurfaceView android:
Depending on how you are going to use your SurfaceView, you may want to use the following callback methods:
- surfaceChanged()
- surfaceCreated()
- surfaceDestroyed()
Each of these callback methods gives you an opportunity to initialize values, change them, and more importantly free some system resources up when it is released. If you are using a SurfaceView for rendering video from the device camera, it is essential that you release control of the camera during the surfaceDestroyed() method. Failing to release the camera will throw errors when you attempt to resume usage of the camera in either another application or when your application is resumed. This is due to a new instance attempting to open on a resource that is finite and currently marked as in use.
The TextView Subclass
The TextView is likely the first view added to your project. If you create a new project in Android Studio that follows the default options, you will be given a project that contains a TextView with a string value of “Hello World” in it.
To add a TextView, you need to add the following code to your layout XML file:
<TextView android:
Note that in the previous example, the value for the TextView is taken from @string/hello_world. This value is inside of the strings.xml file that is in your res/values folder for your project. The value is defined in strings.xml as follows:
<string name="hello_world">Hello world!</string>
The TextView also contains a large number of options that can be used to help format, adjust, and display text in your application. For a full list of properties, see the TextView reference in the Android developer documentation.
The TextureView Subclass
The TextureView is similar to the SurfaceView but carries the distinction of being tied directly to hardware acceleration. OpenGL and video can be rendered to the TextureView, but if hardware acceleration is not used for the rendering, nothing will be displayed. Another difference when compared to SurfaceView is that TextureView can be treated like a View. This allows you to set various properties including setting transparency.
As with the SurfaceView, some methods need to be used with TextureView for it to function properly. You should first create your TextureView and then use either getSurfaceTexture() or TextureView.SurfaceTextureListener before using setContentView().
Callback methods should also be used for logic handling while working with the TextureView. Paramount among these callback methods is the onSurfaceTextureAvailable() method. Due to TextureView only allowing one content provider to manipulate it at a time, the onSurfaceTextureAvailable() method can allow you to handle IO exceptions and to make sure you actually have access to write to it.
The onSurfaceTextureDestroyed() method should also be used to release the content provider to prevent application and resource crashing.
The ViewGroup Subclass
The ViewGroup is a special view that is used for combining multiple views into a layout. This is useful for creating unique and custom layouts. These views are also called “compound views” and, although they are flexible, they may degrade performance and render poorly based on the number of children included, as well as the amount of processing that needs to be done for layout parameters.
CardView
The CardView is part of the ViewGroup that was introduced in Lollipop as part of the v7 support library. This view uses the Material design interface to display views on "cards." This is a nice view for displaying compact information in a native Material style. To use the CardView, you can load the support library and wrap your view elements in it. The following demonstrates an example:

<android.support.v7.widget.CardView xmlns:
    <TextView android:
</android.support.v7.widget.CardView>
</RelativeLayout>
This example shows a card in the center of the screen. The color and corner radius can be changed via attributes in the <android.support.v7.widget.CardView> element. Using card_view:cardBackgroundColor will allow you to change the background color, and using card_view:cardCornerRadius will allow you to change the corner radius value.
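Putting those two attributes together, a CardView element might be declared as follows; the color and radius values are placeholders, and the snippet assumes it sits inside a layout that already declares the android namespace:

```xml
<android.support.v7.widget.CardView
    xmlns:card_view="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    card_view:cardBackgroundColor="#FFFFFF"
    card_view:cardCornerRadius="4dp" />
```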
RecyclerView
The RecyclerView was also added in Lollipop as part of the v7 support library. This view is a replacement for the aging ListView. It brings with it the ability to use a LinearLayoutManager, StaggeredGridLayoutManager, or GridLayoutManager, as well as animation and decoration support. The following shows how you can add this view to your layout XML:
<android.support.v7.widget.RecyclerView android:
As with a ListView, after you have added the RecyclerView to your layout, you need to instantiate it, connect it to a layout manager, and then set up an adapter to display data.
You instantiate the RecyclerView by setting it up as follows:
myRecyclerView = (RecyclerView) findViewById(R.id.my_recycler_view);
The following shows connecting to a layout manager using the LinearLayoutManager that is part of the v7 support library:
myLayoutManager = new LinearLayoutManager(this); myRecyclerView.setLayoutManager(myLayoutManager);
All that is left is to attach the data from an adapter to the RecyclerView. The following demonstrates how this is accomplished:
myAdapter = new MyAdapter(myDataset); myRecyclerView.setAdapter(myAdapter);
The ViewStub Subclass
The ViewStub is a special view that is used to create views on demand in a reserved space. The ViewStub is placed in a layout where you want to place a view or other layout elements at a later time. When the ViewStub is displayed—either by setting its visibility with setVisibility(View.VISIBLE) or by using the inflate() method—it is removed and the layout it specifies is then injected into the page.
The following shows the XML needed to include a ViewStub in your layout XML file:
<ViewStub android:
When the ViewStub is inflated, it will use the layout specified by the android:layout property. The newly inflated view will then be accessible via code by the ID specified by the android:inflatedId property.
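For example, a ViewStub that inflates a hypothetical progress-overlay layout could be declared like this (all ids and the layout name are placeholders):

```xml
<ViewStub
    android:id="@+id/stub_progress"
    android:inflatedId="@+id/panel_progress"
    android:layout="@layout/progress_overlay"
    android:layout_width="match_parent"
    android:layout_height="wrap_content" />
```

In code, either setting the stub's visibility to View.VISIBLE or calling inflate() on it swaps the stub for the inflated layout, which is then reachable by the inflated id (panel_progress here).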
Debugging is an important concept. Debugging means figuring out what is wrong with your code, or simply working out how unfamiliar code behaves. There are many times when I will come to unfamiliar code and need to step through it in a debugger to grasp how it works. Most Python IDEs have good debuggers built into them. I personally like Wing IDE, for instance; others like PyCharm or PyDev. But what if you want to debug the code in your Jupyter Notebook? How does that work?
In this chapter we will look at a couple of different methods of debugging a Notebook. The first one is by using Python’s own pdb module.
Using pdb
The pdb module is Python’s debugger module. Just as C++ has gdb, Python has pdb.
Let’s start by opening up a new Notebook and adding a cell with the following code in it:
def bad_function(var):
    return var + 0

bad_function("Mike")
If you run this code, you should end up with some output that looks like this:
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
in ()
      2     return var + 0
      3
----> 4 bad_function("Mike")

in bad_function(var)
      1 def bad_function(var):
----> 2     return var + 0
      3
      4 bad_function("Mike")

TypeError: cannot concatenate 'str' and 'int' objects
What this means is that you cannot concatenate a string and an integer. This is a pretty common problem if you don’t know what types a function accepts. You will find that this is especially true when working with complex functions and classes, unless they happen to be using type hinting. One way to figure out what is going on is by adding a breakpoint using pdb’s set_trace() function:
def bad_function(var):
    import pdb
    pdb.set_trace()
    return var + 0

bad_function("Mike")
Now when you run the cell, you will get a prompt in the output which you can use to inspect the variables and basically run code live. If you happen to have Python 3.7, then you can simplify the example above by using the new breakpoint built-in, like this:
def bad_function(var):
    breakpoint()
    return var + 0

bad_function("Mike")
This code is functionally equivalent to the previous example but uses the new breakpoint function instead. When you run this code, it should act the same way as the code in the previous section did.
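Under the hood, breakpoint() dispatches through sys.breakpointhook() (the PEP 553 mechanism), which means you can swap in your own hook. The sketch below uses a stand-in hook rather than a real debugger, and also shows one way the original bug could be fixed by converting the integer first:

```python
import sys

def my_hook(*args, **kwargs):
    # Stand-in for a debugger: report the stop instead of opening
    # an interactive (Pdb) prompt.
    print("breakpoint hit (skipping interactive prompt)")

# breakpoint() calls sys.breakpointhook, so this changes its behavior
sys.breakpointhook = my_hook

def bad_function(var):
    breakpoint()         # now invokes my_hook instead of pdb.set_trace
    return var + str(0)  # fixed: convert the int before concatenating

print(bad_function("Mike"))
```

Setting the environment variable PYTHONBREAKPOINT=0 disables every breakpoint() call, and assigning sys.breakpointhook = sys.__breakpointhook__ restores the default pdb behavior.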
You can read more about how to use pdb here.
You can use any of pdb’s command right inside of your Jupyter Notebook. Here are some examples:
- w(here) – Print the stack trace
- d(own) – Move the current frame X number of levels down. Defaults to one.
- u(p) – Move the current frame X number of levels up. Defaults to one.
- b(reak) – With a *lineno* argument, set a break point at that line number in the current file / context
- s(tep) – Execute the current line and stop at the next possible line
- c(ontinue) – Continue execution
Note that these are single-letter commands: w, d, u, b, s, and c. You can use them to interactively debug your code in your Notebook, along with the other commands listed in the documentation above.
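These commands can even be exercised non-interactively, which is handy for seeing what they print without a live prompt. The sketch below scripts a short session by handing the Pdb class in-memory streams for its stdin and stdout; the command sequence here is just an example:

```python
import io
import pdb

def bad_function(var):
    return var + 0

# Scripted commands: print the arguments, show the stack (w), continue (c)
cmds = io.StringIO("args\nw\nc\n")
out = io.StringIO()
debugger = pdb.Pdb(stdin=cmds, stdout=out)

try:
    debugger.runcall(bad_function, "Mike")
except TypeError:
    pass  # 'c' lets the TypeError propagate out of the debugger

print(out.getvalue())
```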
ipdb
IPython also has a debugger called ipdb. However it does not work with Jupyter Notebook directly. You would need to connect to the kernel using something like Jupyter console and run it from there to use it. If you would like to go that route, you can read more about using Jupyter console here.
However there is an IPython debugger that we can use called IPython.core.debugger.set_trace. Let’s create a cell with the following code:
from IPython.core.debugger import set_trace

def bad_function(var):
    set_trace()
    return var + 0

bad_function("Mike")
Now you can run this cell and get the ipdb debugger. Here is what the output looked like on my machine:
The IPython debugger uses the same commands as the Python debugger does. The main difference is that it provides syntax highlighting and was originally designed to work in the IPython console.
There is one other way to open up the ipdb debugger and that is by using the %pdb magic. Here is some sample code you can try in a Notebook cell:
%pdb

def bad_function(var):
    return var + 0

bad_function("Mike")
When you run this code, you should end up seeing the `TypeError` traceback and then the ipdb prompt will appear in the output, which you can then use as before.
What about %%debug?
There is yet another way that you can open up a debugger in your Notebook. You can use `%%debug` to debug the entire cell like this:
%%debug

def bad_function(var):
    return var + 0

bad_function("Mike")
This will start the debugging session immediately when you run the cell. What that means is that you would want to use some of the commands that pdb supports to step into the code and examine the function or variables as needed.
Note that you could also use `%debug` if you want to debug a single line.
Wrapping Up
In this chapter we learned of several different methods that you can use to debug the code in your Jupyter Notebook. I personally prefer to use Python’s pdb module, but you can use the IPython.core.debugger to get the same functionality and it could be better if you prefer to have syntax highlighting.
There is also a newer "visual debugger" package called the PixieDebugger, from the pixiedust package.
I haven’t used it myself. Some reviewers say it is amazing and others have said it is pretty buggy. I will leave that one up to you to determine if it is something you want to add to your toolset.
As far as I am concerned, I think using pdb or IPython’s debugger work quite well and should work for you too.
Related Reading
- StackOverflow: What is the right way to debug in IPython Notebook?
- The visual Python debugger for Jupyter Notebooks You’ve always wanted
- Debugging Jupyter Notebooks – David Hamann
Networking Services Library Functions ypclnt(3NSL)
NAME
ypclnt, yp_get_default_domain, yp_bind, yp_unbind, yp_match,
yp_first, yp_next, yp_all, yp_order, yp_master,
yperr_string, ypprot_err - NIS Version 2 client interface
SYNOPSIS
cc [ flag ... ] file ... -lnsl [ library ... ]
#include <rpcsvc/ypclnt.h>
#include <rpcsvc/yp_prot.h>
DESCRIPTION
This package of functions provides an interface to NIS, Net-
work Information Service Version 2, formerly referred to as
YP. In this version of SunOS, NIS version 2 is supported
only for compatibility with previous versions. The recom-
mended enterprise level information service is NIS+ or NIS
version 3, see nis+(1). Moreover, this version of SunOS
supports only the client interface to NIS version 2. It is
expected that this client interface will be served either by
an existing ypserv process running on another machine on the
network that has an earlier version of SunOS or by an NIS+
server, see rpc.nisd(1M), running in "YP-compatibility
mode". Refer to the NOTES section in ypfiles(4) for implica-
tions of being an NIS client of an NIS+ server in "YP-
compatibility mode", and to ypbind(1M), ypwhich(1),
ypmatch(1), and ypcat(1) for commands to access NIS from a
client machine.
The functions in this package return 0 if they
succeed, and a failure code (YPERR_xxxx) otherwise. Failure
codes are described in the ERRORS section.
Routines
yp_bind (char *indomain);
To use the NIS name services, the client process must
be "bound" to an NIS server that serves the appropri-
ate domain using yp_bind(). Binding need not be done
SunOS 5.8 Last change: 10 Nov 1999 1
explicitly by user code; this is done automatically
whenever an NIS lookup function is called. yp_bind()
can be called directly for processes that make use of
a backup strategy (for example, a local file) in cases
when NIS services are not available. If a process
calls yp_bind(), it should call yp_unbind() when it is
done using NIS in order to free up resources.
void yp_unbind(char *indomain);
Each binding allocates (uses up) one client process
socket descriptor; each bound domain costs one socket
descriptor. However, multiple requests to the same
domain use that same descriptor. yp_unbind() is avail-
able at the client interface for processes that
explicitly manage their socket descriptors while
accessing multiple domains. The call to yp_unbind()
makes the domain unbound, and frees all per-process
and per-node resources used to bind it.

If an RPC failure results upon use of a binding, that
domain will be unbound automatically. At that point,
the ypclnt layer will retry until the operation
succeeds, provided that rpcbind and ypbind are
running, and either:
o the client process cannot bind a server for the
proper domain, or
o RPC requests to the server fail.
If an error is not RPC-related, or if rpcbind is not
running, or if ypbind is not running, or if a bound
ypserv process returns any answer (success or
failure), the ypclnt layer will return control to the
user code, either with an error code, or a success
code and any results.
yp_get_default_domain (char **outdomain);
The NIS lookup calls require a map name and a domain
name, at minimum. It is assumed that the client pro-
cess knows the name of the map of interest. Client
processes should fetch the node's default domain by
calling yp_get_default_domain(), and use the returned
outdomain as the indomain parameter to successive NIS
name service calls. The domain thus returned is the
same as that returned using the SI_SRPC_DOMAIN command
to the sysinfo(2) system call. The value returned in
outdomain should not be freed.
yp_match(char *indomain,
char *inmap, char *inkey, int inkeylen, char **outval,
int *outvallen);
yp_match() returns the value
associated with a passed key. This key must be exact;
no pattern matching is available. yp_match() requires
a full YP map name; for example, hosts.byname instead
of the nickname hosts.
yp_first(char *indomain,
char *inmap, char **outkey, int *outkeylen, char
**outval, int *outvallen);
yp_first() returns the
first key-value pair from the named map in the named
domain.
yp_next(char *indomain, char
*inmap, char *inkey, int inkeylen, char **outkey, int
*outkeylen, char **outval, int *outvallen);
yp_next() returns the next key-value pair in the named
map. The inkey parameter must be the outkey returned
by an earlier call to yp_first() or yp_next(). The
concept of first (and, for that matter, of next)
is particular to the structure of the NIS map being
processed; there is no relation in retrieval order to
either the lexical order within any original (non-NIS
name service) data base, or to any obvious numerical
sorting order on the keys, values, or key-value pairs.
The only ordering guarantee made is that if yp_first()
is called on a particular map, and yp_next() is then
called repeatedly on the same map at the same server
until the call fails with YPERR_NOMORE, every entry in
the data base will be seen exactly once. Under heavy
server load or server failure, the domain can become
unbound and then bound again (perhaps to a different
server) while a client is running, in which case
entries may be seen twice, or not at all. The yp_all()
function avoids this and shields the client from error
messages that would otherwise be returned in the midst
of the enumeration. The next paragraph describes this
better solution to enumerating all entries in a map.
yp_all(char *indomain, char
*inmap, struct ypall_callback *incallback);
The
function yp_all() provides a way to transfer an entire
map from server to client in a single request using
TCP (rather than UDP as with other functions in this
package). The entire transaction takes place as a single
RPC request and response. yp_all() can be used
just like any other NIS name service procedure, iden-
tify the map in the normal manner, and supply the name
of a function which will be called to process each
key-value pair within the map. The call to yp_all()
returns only when the transaction is completed (suc-
cessfully or unsuccessfully), or the foreach() function
decides that it does not want to see any more key-value
pairs. The foreach() callback has this form:

int foreach(int instatus, char *inkey, int inkeylen,
            char *inval, int invallen, char *indata);

The instatus parameter holds one of the return status
values defined in <rpcsvc/yp_prot.h>: either YP_TRUE or
an error code. (See ypprot_err(), below, for a function
that converts an NIS name service protocol error code to
a ypclnt layer error code.)
The key and value parameters are somewhat dif-
ferent than defined in the synopsis section
above. First, the memory pointed to by the inkey and
inval parameters is private to the yp_all() function,
and is overwritten with the arrival of each new
key-value pair. The foreach() function should copy
anything it needs from that memory, do something
useful, or ignore it.
The foreach() function is Boolean. It should return 0
to indicate that it wants to be called again for
further received key-value pairs, or non-zero to stop
the flow of key-value pairs. If foreach() returns a
non-zero value, it is not called again; the functional
value of yp_all() is then 0.

yp_order(char *indomain,
char *inmap, unsigned long *outorder);
yp_order() returns the order number for a
map. This function is not supported if the
ypbind process on the client's system is
bound to an NIS+ server running in "YP-
compatibility mode".
yp_master(char *indomain,
char *inmap, char **outname);
yp_master()
returns the machine name of the master NIS
server for a map.
char *yperr_string(int incode);
yperr_string() returns a pointer to an
error message string that is null-
terminated but contains no period or <NEW-
LINE>.
ypprot_err (unsigned int incode);
ypprot_err() takes an NIS name service pro-
tocol error code as input, and returns a
ypclnt layer error code, which may be used
in turn as an input to yperr_string().
RETURN VALUES
All integer functions return 0 if the requested operation is
successful, or one of the following errors if the operation
fails.
YPERR_ACCESS
Access violation.
YPERR_BADARGS
The arguments to the function are bad.
YPERR_BADDB
The YP database is bad.
YPERR_BUSY
The database is busy.
YPERR_DOMAIN
Cannot bind to server on this domain.
YPERR_KEY
No such key in map.
YPERR_MAP
No such map in server's domain.
YPERR_NODOM
Local domain name not set.
YPERR_NOMORE
No more records in map database.
YPERR_PMAP
Cannot communicate with rpcbind.
YPERR_RESRC
Resource allocation failure.
YPERR_RPC
RPC failure; domain has been unbound.
YPERR_YPBIND
Cannot communicate with ypbind.
YPERR_YPERR
Internal YP server or client error.
YPERR_YPSERV
Cannot communicate with ypserv.
YPERR_VERS
YP version mismatch.
FILES
/usr/lib/libnsl.so.1
ATTRIBUTES
See attributes(5) for descriptions of the following attri-
butes:
____________________________________________________________
| ATTRIBUTE TYPE | ATTRIBUTE VALUE |
|_____________________________|_____________________________|
| MT-Level | Safe |
|_____________________________|_____________________________|
SEE ALSO
nis+(1), ypcat(1), ypmatch(1), ypwhich(1), rpc.nisd(1M),
rpcbind(1M), ypbind(1M), ypserv(1M), sysinfo(2), malloc(3C),
ypfiles(4), attributes(5)
Training a Predictive Model (Basic)
This dataset is a typical test case for many classification techniques in machine learning. For example, we can use an algorithm called random forest and train a classifier in R:
## Load the random forest package in R
library(randomForest)

## Train a random forest classifier in R
## Note: the iris dataset is available in R by default.
## To find out more, try `head(iris)` and `summary(iris)`
model <- randomForest(Species ~ ., data = iris)

## Save the model for the API Endpoint
save(model, file = "iris_rf_model.rda")
The code above is the bare minimum for training and saving a random forest model for Iris in R. Once the script has been uploaded to your Domino project, you can start the run from Domino’s web interface. Simply open your Domino project, browse to Files section, and then click on the Run button next to the R script.
After the run, the Files section will show a new iris_rf_model.rda file: this file contains the model we have trained. In the next step, we will expose this model as an API.
Deploying the Model as REST API
In order to deploy the model as an API Endpoint, we need a function that utilizes the model for future prediction. The function should:
- Take four numeric features as inputs.
- Load the random forest model iris_rf_model.rda.
- Make a prediction based on the four numeric inputs.
- Return the prediction.
For this demo, I wrote a function predict_iris (see below) and included it in an R script called use_model_as_API.R. (Note that, although this example uses R, publishing a Python function works the same way, if you prefer Python to R.)
## Load libraries
library(caret)
library(randomForest)

## Load the pre-trained model
load(file = "iris_rf_model.rda")

## Create a function to take four numeric inputs
## and then return a prediction (setosa, versicolor, or virginica)
predict_iris <- function(sepal_length, sepal_width,
                         petal_length, petal_width) {
  ## predict() for randomForest expects a data frame whose column
  ## names match those used in training
  newdata <- data.frame(Sepal.Length = sepal_length,
                        Sepal.Width  = sepal_width,
                        Petal.Length = petal_length,
                        Petal.Width  = petal_width)
  y_pred <- predict(model, newdata)

  ## Return the predicted class
  return(as.character(y_pred))
}
Once you have this R script in the Domino project, you can link the API Endpoint to the function.
- First, go to the API Endpoints section.
- Enter the name of the script (use_model_as_API.R).
- Point it to the specific function (predict_iris).
- Click Publish to activate the API service.
That’s it! That is all you need to deploy your predictive model as a web service. Within a few minutes, your Iris model is live and accessible from the internet.
Making an API Call
OK, now we have the REST API. How do we use it?
At the bottom of the API Endpoints page, you can find code templates for languages such as Bash, Python, Ruby, PHP, JavaScript, and Java (other languages can follow a similar pattern).
Let us go through a Python example here. You will need to replace the API Endpoint URL and the Domino API key with your own.
import unirest
import json
import yaml

## Four numeric inputs
X1 = 5.6; X2 = 3.2; X3 = 1.7; X4 = 0.8

response = unirest.post("YOUR_API_ENDPOINT_URL",
    headers={
        "X-Domino-Api-Key": "YOUR_API_KEY",
        "Content-Type": "application/json"
    },
    params=json.dumps({
        "parameters": [X1, X2, X3, X4]
    })
)

## Extract information from the response
response_data = yaml.load(response.raw_body)

## Print the result only
print "Predicted:"
print response_data['result']
If successful, running this Python script should give you an output like this:
Predicted: ['setosa']
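The unirest package in the template above is long unmaintained. As a purely illustrative alternative, the same request can be assembled with nothing but the Python 3 standard library; the URL below is a placeholder, not a real endpoint, and the request is only built and inspected here, not sent:

```python
import json
import urllib.request

# Placeholder values: substitute your own endpoint URL and API key
url = "https://app.dominodatalab.com/v1/your-user/your-project/endpoint"
payload = json.dumps({"parameters": [5.6, 3.2, 1.7, 0.8]}).encode("utf-8")

req = urllib.request.Request(
    url,
    data=payload,
    headers={
        "X-Domino-Api-Key": "YOUR_API_KEY",
        "Content-Type": "application/json",
    },
    method="POST",
)

# urllib.request.urlopen(req) would actually send the request
print(req.get_method(), req.full_url)
print(req.data)
```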
Ready to Try?
When you are ready to give this a try, you can do it in five steps:
- Sign up for a free account at Domino Data Lab.
- Go to my project folder for this demo.
- Click the Fork this project button on the control panel (and again when you see the pop-up window).
- Publish your own API endpoint as mentioned above (see section Deploying the Model as REST API).
- Sit back. Relax. Your API will be ready in a few minutes.
Domino provides some other nice features designed for deploying predictive models as APIs. For example, it automatically keeps a revisioned snapshot of your code and data each time you update your published models, in case you need to roll back. It also lets you schedule recurring training tasks that can re-train your models on large datasets and automatically deploy your updated models to your API.
Conclusions
For me, the key benefit of using a third-party MaaS is that I do not have to worry about technical issues like server uptime, or marshaling JSON into or out of my code. I can focus my time and energy on developing better predictive models.
I believe this functionality can enable data scientists and software engineers to better collaborate, by easily integrating sophisticated models to create richer, more intelligent software systems. | https://www.programmableweb.com/news/how-to-turn-your-predictive-models-apis-using-domino/how-to/2015/07/22?page=2 | CC-MAIN-2017-13 | refinedweb | 820 | 56.55 |
Very often we would like to control the way WCF service objects are instantiated on a WCF server. You would want to control how long the WCF instances should be residing on the server.
The WCF framework has provided three ways by which we can control WCF instance creation. In this article, we will first try to understand those three ways
of WCF service instance control with simple code samples of how to achieve them. Finally, we will compare when to use them and under what situations.
There is a small ebook for all my .NET friends which covers topics like WCF, WPF, WWF, AJAX, Core .NET, SQL, etc., which you can download from
here or you can catch me on my daily free training from here.
In normal WCF request and response communication, the following sequence of actions takes place:
Following is a pictorial representation of how WCF requests and responses work.
Following are different ways by which you can create WCF instances:
To meet the above scenarios, WCF has provided three ways by which you can control WCF service instances:
When we configure a WCF service as per call, new service instances are created for every method call you make via a WCF proxy client. The image below shows this in a pictorial format:
In other words, for every WCF client method call, a WCF service instance is created, and destroyed once the request is served.
In order to specify the instancing mode, we need to provide the InstanceContextMode value in the ServiceBehavior attribute as shown below.
This attribute needs to specified on the Service class. In the below code snippet, we have specified intCounter as a class level variable
and the class counter is incremented by one when the Increment method is called.
InstanceContextMode
ServiceBehavior
Service
intCounter
Increment
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Percall)]
public class Service : IService
{
private int intCounter;
public int Increment()
{
intCounter++
return intCounter;
}
}
At the client, we consume the WCF client and we call the Increment method twice.
ServiceReference1.ServiceClient obj = new ServiceReference1.ServiceClient();
MessageBox.Show(obj.Increment().ToString());
MessageBox.Show(obj.Increment().ToString());
Even though we have called the Increment method twice, we get the value ‘1’. In other words, the WCF service instance is created for every method call made
to the WCF service instance so the value will always be one.
Very often we need to maintain state between method calls or for a particular session. For those kinds of scenarios, we will need to configure the service
per session. In per session, only one instance of a WCF service object is created for a session interaction. The figure below explains this in pictorial format.
To configure service as per session, we need to configure the ServiceBehavior attribute with a PerSession value in the InstanceContextMode object.
PerSession
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)]
public class Service : IService
{
private int intCounter;
public int Increment()
{
intCounter++
return intCounter;
}
}
At the client side, when we run the below client code, you should see the value ‘2’ after the final client code is executed. We have called the method
twice so the value will be seen as two.
Often we would like to create one global WCF instance for all WCF clients. To create a single instance of a WCF service, we need to configure the WCF
service as Single instance mode. Below is a simple pictorial notation of how the single instance mode will operate:
Single
In order to create a single instance of a WCF service, we need to specify InstanceContextMode as Single.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
public class Service : IService
{
}
If you call the WCF service from a different client, you will see the counter incrementing. The counter becomes a global variable.
You can download the source code for this tutorial from here.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
private static int intCounter = 0;
Here we are talking about creation of Instances, thus making varible static is wrong here as Static belongs to class not to any instance. Shiv's code is working perfectly fine. Whenever you change the InstanceContextMode in webservice, make sure you click on "Update Service Refrence" on client side. This is what you are missing and thus not getting the updated result.
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/86007/ways-to-do-WCF-instance-management-Per-call-Per?msg=4392537 | CC-MAIN-2016-44 | refinedweb | 744 | 52.7 |
mbrlen(3) BSD Library Functions Manual mbrlen(3)
NAME
mbrlen, mbrlen_l -- get number of bytes in a character (restartable)
LIBRARY
Standard C Library (libc, -lc)
SYNOPSIS
#include <wchar.h> size_t mbrlen(const char *restrict s, size_t n, mbstate_t *restrict ps); #include <wchar.h> #include <xlocale.h> size_t mbrlen_l(const char *restrict s, size_t n, mbstate_t *restrict ps, locale_t loc);
DESCRIPTION
The mbrlen() function inspects at most n bytes, pointed to by s, to determine the number of bytes needed to complete the next multibyte char- acter. The mbstate_t argument, ps, is used to keep track of the shift state. If it is NULL, mbrlen() uses an internal, static mbstate_t object, which is initialized to the initial conversion state at program startup. It is equivalent to: mbrtowc(NULL, s, n, ps); Except that, when ps is a NULL pointer, mbrlen() uses its own static, internal mbstate_t object to keep track of the shift state. Although the mbrlen() function uses the current locale, the mbrlen_l() function may be passed a locale directly. See xlocale(3) for more infor- mation.
RETURN VALUES
The mbrlen() functions returns: 0 The next n or fewer bytes represent the null wide character (L'\0'). >0 The next n or fewer bytes represent a valid character, mbrlen().
EXAMPLES
A function that calculates the number of characters in a multibyte char- acter string: size_t nchars(const char *s) { size_t charlen, chars; mbstate_t mbs; chars = 0; memset(&mbs, 0, sizeof(mbs)); while ((charlen = mbrlen(s, MB_CUR_MAX, &mbs)) != 0 && charlen != (size_t)-1 && charlen != (size_t)-2) { s += charlen; chars++; } return (chars); }
ERRORS
The mbrlen() function will fail if: [EILSEQ] An invalid multibyte sequence was detected. [EINVAL] The conversion state is invalid.
SEE ALSO
mblen(3), mbrtowc(3), multibyte(3), xlocale(3)
STANDARDS
The mbrlen() function conforms to ISO/IEC 9899:1999 (``ISO C99''). BSD April 7, 2004 BSD
Mac OS X 10.5 - Generated Sun Oct 28 21:34:56 EDT 2007 | http://www.manpagez.com/man/3/mbrlen_l/ | CC-MAIN-2017-47 | refinedweb | 320 | 62.27 |
We are about to switch to a new forum software. Until then we have removed the registration on this forum.
I have a class I'd like to call createGraphics on but I get the following error: The method createGraphics(float, float, String) is undefined for the type SceneChild
I'm not sure how to implement it? I can't make the class an extension of PApplet, as this SceneChild class will be instantiated by another which will be an extension of PApplet
import processing.core.*; public class SceneChild { PApplet parent; PGraphics graphics_container; public SceneChild(PApplet _parent, float _w, float _h) { parent = _parent; graphics_container = createGraphics(_w, _h, parent.P3D); } public void display() { } }
Answers
Why don't you use your field parent, which recieves a PApplet reference btW, in order to call main sketch's functions?
parent.createGraphics(_w, _h, PConstants.P3D);
Better yet, rather than your class be responsible to call createGraphics(), demand an already instantiated PGraphics as constructor's parameter:
An even shorter version, relying on capturing sketch's reference outta the PGraphics itself: *-:)
Thanks for your suggestions. I would like the SceneChild class to be a wrapper for PGraphics so I tried your first suggestion again, as it didn't work for me. But changing the dimensions from float to int I think has fixed it. Thanks again!
Even being a wrapper, it doesn't mean it has to create the PGraphics by itself.
It's a very viable option to just ask an already existing PGraphics instance instead! ;)
Yep true. But what if the wrapper was a few levels down? PApplet class > Scene class > SceneChild class
What would be the best way to instantiate PGraphics on SceneChild?
Thanks for you comments
In case SceneChild is the wrapper now, have a PGraphics parameter in its constructor.
But in the case that Scene class is responsible to create SceneChild objects,
createGraphics() is used inside the former w/ the help of the PApplet's reference.
Ok, I think I'm implementing as you've suggested but get a bunch of errors within newChild I think. Doesn't like children.add either. Is the scoping wrong?
Main
Scene
SceneChild (wrapper for PGraphics in this example)
Hello !
I started to write to you yesterday but my computer crashed before the end of the message. I thought it was "dead" but it was "saved by the forum". In the timeline of the message, it should be after the first message of GoToLoop.
I think - but maybe I'm wrong - that what clifford is looking for is
"The method createGraphics(float, float, String) is undefined for the type SceneChild"
It means exactly what it means :) You trying to call a function called "createGraphics" inside a class called "SceneChild" without any function "createGraphics" inside it, so there is a problem.
Your issue come from the fact that when you're using the Processing IDE instead Eclipse, it's like you are inside the PApplet class, even if you are in another class. That's why you can call "createGraphics" from everywhere if you work with the Processing IDE, because your code is running "inside / at the scope of " the class PApplet.
When you work in Eclipse, it's a bit different because Eclipse is a tool for developer and Processing IDE is a tool for artist / student unfamiliar with object-oriented-programming. The Processing IDE is kind of layer over the true java code, it allows the users to use the function of the PApplet class directly to get a smaller & more readable code.
But if you use Eclipse, it's a bit different because it's not very usual to be able to call a function from a class inside another class without saying explicitly where does the functions come from.
Anyway, don't give up ! It's not so hard ! :D The single thing you have to change if you work in Eclipse is to call the function from the instance of the class PApplet. That's what I'm doing in the code.
The instance of PApplet is what you called "parent" , it contains every Processing functions so you can do every what you want from it ! :)
Sorry, I didn't read your code until the end. The problem come from the fact you didn't create your arrayList
Doh! Well spotted. Yes even though I understand the concept of how Processing works in Eclipse when it comes to practical it's still a bit weird to get my head around trying to work in oop but not oop in the way I'm used to :)
Many thanks guys for your input and quick responses. Very helpful!
Below is the code if anyone may find it useful. Thanks
Main
Scene
SceneChild | https://forum.processing.org/two/discussion/10272/using-creategraphics-in-eclipse | CC-MAIN-2020-40 | refinedweb | 789 | 68.91 |
09 March 2010 03:05 [Source: ICIS news]
By Malini Hariharan and John Richardson
SINGAPORE (ICIS news)--A final decision on a mega merger being planned among four Thai companies - affiliates of oil and gas major PTT - will be made by early Q2, said a senior executive of PTT Aromatics & Refining (PTTAR) on Tuesday.
A merger is being evaluated between PTTAR, Integrated Refining and Petrochemical Complex (IRPC), Thai Oil and PTT Chemical.
“We are doing a study; we are trying to identify synergies. A final decision is not yet taken,” Chainoi Puankosoom President & CEO of PTTAR told ICIS news.
While integration between PTTAR and IRPC’s plants would be easy as the two were located only 30km apart and already had pipeline connections it would be more difficult with Thai Oil, he added.
“The problem is that Thai Oil is 100km from us,” pointed out Puankosoom.
Thai Oil’s 275,000 bbls/day highly complex refinery and 900,000 tonnes/year aromatics plants are located at Sriracha in Chonburi province.
PTTAR operates a 280,000 bbls/day complex refinery with 228,000 bbls/day of petroleum products production capacity and a 2.2m tonnes/year aromatics facility at the Mab Ta Phut industrial estate in Rayong province.
IRPC runs a simple refinery and cracker outside the estate - 30km from PTTAR
PTT Chem, which is focused only on petrochemicals, has its crackers and derivative plants inside the Mab Ta Phut estate.
As part of the merger study, upgrading of plants and routes for value addition are also being studied.
For instance, PTTAR could upgrade residue from IRPC’s refinery at its hydrocracker or PTTAR could send hydrowax, which is currently being recycled, to IRPC to use at its cracker, added Puankosoom.
He emphasised that synergies and integration benefits arising from a merger were of greater priority than cost savings.
“When we merged Aromatics (?xml:namespace>
A merger between the four PTT affiliates was first mooted early last year but progress slowed down following the Mab Ta Phut environmental crisis and the
“The market also collapsed dramatically in the second half of 2008; margins had become negative and share prices of all the companies had collapsed.
"So we [the PTT group companies] sat down to do something together. We thought of integration to enhance the value of the firms; so that we can survive going forward.
"We thought we would have a clear picture by end-2009. But then when the Mab Ta Phut issue happened; we could not think of integration [projects] until this issue had been resolved and so we decided to do a detailed study,” explained Puankosoom.
He was optimistic that the Mab Ta Phut issue would be cleared by the time the companies sat for a final decision on the merger.
“A solution for the existing 76 projects is due to be announced soon. We have to follow the procedure set up by the government.”
He pointed out that people outside the industry might have misread the environmental issue.
“Industry players are not denying the need for a health impact assessment (HIA) study, which is required under the 2007 constitution.
"The problem was that to set up a body to carry out these studies, laws were needed.
"We wrote to the government at an early stage, asking for a procedure for the HIAs to be carried out. All we asked was that this would be done in parallel [with the construction of projects],” he said.
Puankosoom said that the Thai government’s rules on emissions were more stringent that even
“We have to reduce emissions at existing plants to build new plants; there should be a net reduction. So for projects approved since 2007, there will be lesser pollution in Mab Ta Phut [if they are implemented].”
None of PTTAR’s projects have been affected by the Mab Ta Phut crisis, but Puankosoom said that a longer-term solution was needed to attract new investments to
The Mab Ta Phut issue affected all industries, he said. And he warned that foreign investors were looking for a clear legal framework.
"A hub in the south is not easy. We have to solve the Mab Ta Phut issue first. A lot of things have to be relooked; there must be a better solution to the environmental issue otherwise it would be very difficult to build any more plants in
But he did not see the need for more refinery investments in Thailand as the country’s total refining capacity of 1.2m bbls/day is in excess of demand which is running at 900,000 bbls/day.
“We are also not as competitive as
Additionally the Thai government’s policy of promoting alternative fuels meant weak demand growth for fossil fuels in the future.
In petrochemicals, the company completed a major expansion project in 2009.
“That was because we had feedstock [condensate] from | http://www.icis.com/Articles/2010/03/09/9340863/decision-on-merger-of-ptt-affiliates-by-early-q2-pttar-ceo.html | CC-MAIN-2015-06 | refinedweb | 813 | 58.92 |
Microsoft Corporation
February 2005
Applies to:
Microsoft Visual Studio .NET 2003
Windows Mobile-based Pocket PC Phone Editions
Microsoft .NET Compact Framework version 1.0
Microsoft Visual C#
Microsoft MapPoint Location Server version 1.0
Summary: The objective of this exercise is for you to use the Microsoft .NET Compact Framework to create an application that will run on a Pocket PC Phone Edition. Through MapPoint Location Server, the application will get the real-time location of one of your contact's mobile devices and present a map of the location. This exercise should take 30 – 40 minutes to complete. (17 printed pages)
Download Developing Location-based Apps Using MapPoint Location Server.msi from the Microsoft Download Center.
Introduction Part 1: Creating a New Project in Visual Studio Part 2: Creating the UI Part 3: Implementing the Code to Fetch a Contact List from MapPoint Location Server Part 4: Implementing the Code to Get a Contact's Location Part 5: Implementing the Code to Draw a Map of the Location Part 6: Handling Zooming the Map In and Out Summary Appendix A: Configuring the Pocket PC Emulator for Internet Access Appendix B: Using .NET Compact Framework 1.0 SP2
*If your computer is using Compact Framework 1.0 SP2, please see Appendix B.
Microsoft MapPoint Location Server is a new server product from Microsoft that enables developers to easily access multiple wireless operator networks to acquire the real-time locations of mobile phones. Developers acquire this information through the simple MapPoint Location Server application programming interface (API), which is consistent across all supported networks. For instance, a taxi company can install MapPoint Location Server and give each of its drivers a phone. This technology would enable the company to enhance its dispatch system by keeping track of drivers' locations in real time.
You will need to install MapPoint Location Server to run this exercise. MapPoint Location Server is available for download from the Microsoft Download Center.
Domain: mls
Username: bjohnson
Password: Password1!
If you do not have time to complete the exercise, but you want to view the completed application (and its code), see the accompanying download. Feel free to browse the application code and to build and run it.
In this part of the exercise, you will create a new smart device project in Microsoft Visual Studio .NET 2003. You will also add a Web reference to MapPoint Location Server.
To create a new project
Your project is now set up and the MapPoint Location Server reference is in place.
In this part of the exercise, you will add all of the user interface (UI) elements to the form. You will use a tab control with two tabs to hold your controls. The first tab will have text boxes to collect logon information and a drop-down list that will contain your locatable contacts. The second tab will have a picture box to hold the map and two buttons for zooming in and out.
Some of the illustrations in this part of the exercise are thumbnails. You can click the thumbnails for larger images.
To create the UI
The Search tab of your form should now look something like the following illustration.
The Map tab of your form should now look like the following illustration.
Your UI is complete.
In this part of the exercise, you will begin adding code. More specifically, you will make a call to MapPoint Location Server to populate the list of locatable contacts.
To get a contact list from MapPoint Location Server
try
{
//Declare an array of contacts to hold the
//results from the server.
LocatableContact[] myContacts ;
//Set credentials by using the values that the user
//provides on the form.
SetCredentials();
//Make the call to MapPoint Location Server.
myContacts = locService.GetContacts();
//Step through the results and put each one
//in the combo box of locatable contacts.
cmbWho.Items.Clear();
foreach (LocatableContact who in myContacts )
{
cmbWho.Items.Add(who.DomainAlias);
}
}
catch(Exception ex)
{
MessageBox.Show("Unable to get buddies." + ex.ToString());
}
private void SetCredentials()
{
//Get the form fields and create a NetworkCredential object.
string username = txtUsername.Text ;
string password = txtPassword.Text ;
string domain = txtDomain.Text ;
System.Net.NetworkCredential cred =
new System.Net.NetworkCredential(username,password,domain);
//Apply these credentials to the two MapPoint services.
locService.Credentials = cred;
myRenderService.Credentials = cred;
//Set each service's URL manually.
locService.Url = "";
myRenderService.Url = "";
}
using MyFirstMLSApp.MLSService;
//These are the three MapPoint services you will use.
public RenderServiceSoap myRenderService = new RenderServiceSoap() ;
public LocationServiceSoap locService = new LocationServiceSoap();
// International settings. Can also be set for Europe and
// European languages.
public string myDataSourceName = "MapPoint.NA";
public string myCountryRegion = "United States";
public string myLanguage = "en";
// Size of the map in pixels. This will be used for rendering
// in addition to handling map click events (for re-centering).
public int PixelWidth;
public int PixelHeight;
// A MapView object is used for rendering the map.
// This object is set after doing a search, and is then
// used by the renderer.
public ViewByHeightWidth[] myViews = new ViewByHeightWidth[1];
// A collection of pushpins to draw on the map.
// In this application, a pushpin is created for
// each coffee shop found.
public Pushpin[] POIPins ;
// After finding a contact, this object is set with a new
// value used to render a map.
public LatLong CenterPoint =new LatLong();
The Emulator window opens, and your application starts.
Note If this is the first time you have run the emulator on this computer, you may need to configure it for Internet connectivity before the application will function properly. For information about configuring Internet connectivity for the emulator, see Appendix A at the end of this exercise.
In a few moments, the combo box is populated with a list of your locatable contacts. Your screen then looks something like the following illustration.
In this part of the exercise, you will write the code that runs when a user presses the Go! button. You will get the unique user identifier from the combo box, and then pass it to MapPoint Location Server. You will then receive information about the user, including his or her real-time location.
To get a contact's location
//Set credentials.
SetCredentials();
//Initialize center point.
CenterPoint = null;
//Try to get the selected user's location from MapPoint Location Server
try
{
PositionResults posRes =
locService.GetPositions(new string[]
{cmbWho.SelectedItem.ToString()});
//If you find a position...
if(posRes != null && posRes.PositionsFound > 0)
{
CenterPoint = posRes.Positions[0].LatLong;
//Set MyViews. You will use this later to draw a map.
myViews = new ViewByHeightWidth[1];
myViews[0] = new ViewByHeightWidth();
myViews[0].CenterPoint = new LatLong();
myViews[0].CenterPoint = CenterPoint;
myViews[0].Height = 1.0;
myViews[0].Width = 1.0;
//Create a pushpin at the found location.
Pushpin pp = new Pushpin();
pp.LatLong = this.CenterPoint;
pp.Label = cmbWho.SelectedItem.ToString();
pp.IconName = "1";
pp.IconDataSource = "MapPoint.Icons";
POIPins = new Pushpin[] {pp};
//Set the tab control to the second tab (map).
tabcontrol.SelectedIndex = 1;
// Update the map to show the found location.
refreshMap();
}
}
catch(Exception ex)
{
MessageBox.Show("Unable to find the contact" + ex.ToString() ,
"Locate Message");
return;
}
In this task, you will make a call to the Map Render Service of the MapPoint Location Server to draw a map of the user's real-time location.
To draw a map of the user's location
private void refreshMap()
{
//Here a new map is requested from the server and displayed
//to the user on the Map tab of the tab control.
try
{
//Set credentials.
SetCredentials();
//Create a MapSpecification object to be passed to
//the renderer.
//This object will hold all of the MapOptions and parameters.
MapSpecification mapSpec = new MapSpecification();
mapSpec.DataSourceName = myDataSourceName;
mapSpec.Views = myViews;
//If you have pushpins or a route, include them.
mapSpec.Pushpins = POIPins;
//Request the map the exact same size as the picture box
//it is going to be displayed in.
PixelWidth = pbMap.Width;
PixelHeight = pbMap.Height;
mapSpec.Options = new MapOptions();
mapSpec.Options.Format = new ImageFormat();
mapSpec.Options.Format.Height = PixelHeight;
mapSpec.Options.Format.Width = PixelWidth;
//Request the map from MapPoint.
MapImage[] mapImages = myRenderService.GetMap(mapSpec);
//Display the resulting map in the picture box.
System.IO.Stream streamImage;
streamImage =
new System.IO.MemoryStream(mapImages[0].MimeData.Bits);
Bitmap bitmapImage = new Bitmap(streamImage);
pbMap.Image= bitmapImage;
}
catch ( Exception ex)
{
MessageBox.Show(ex.Message);
}
}
The emulator changes to the Map tab, and it displays a map centered at the found location, as shown in the following illustration.
Congratulations! You have made a Simple Object Access Protocol (SOAP) call to an instance of MapPoint Location Server and retrieved the real-time location of a mobile phone.
In this part of the exercise, you will add a few lines of code to each of the zoom buttons on the Map tab to let the user zoom in and out. You will not implement panning in this exercise, but it takes only a few more lines of code to enable the user to re-center the map or to provide links for panning north, south, east, or west.
To enable zooming functionality
// When they zoom in, take half of the current
//map zoom and refresh the display.
myViews[0].Height = myViews[0].Height/2;
myViews[0].Width = myViews[0].Width /2;
refreshMap();
// Take the current map
// zoom and double it, and then refresh the display.
myViews[0].Height = myViews[0].Height* 2;
myViews[0].Width = myViews[0].Width * 2;
refreshMap();
In this exercise, you performed the following activities:
You built a Microsoft Visual C# application on the .NET Compact Framework that uses real-time location information from MapPoint Location Server. Although the scenario presented was simplistic, you should now see how simple it is to integrate real-time location into your own business applications, from fleet management and dispatch scenarios to mobile customer relationship management and sales force automation.
The easiest way to test Internet connectivity in the Pocket PC emulator is to run Microsoft Pocket Internet Explorer and browse to a public Web site. If you can browse to the site just as if you were using your desktop computer, you have Internet connectivity and can run the exercise application successfully.
If you receive an error when trying to browse, you need to configure the emulator.
To configure the emulator for Internet access
If your computer has .NET Compact Framework 1.0 SP2 installed and you have not updated the netcf.core.ppc3.ARM.cab and netcf.core.ppc3.x86.cab files in .NET Compact Framework 1.0, you need to update these files prior to beginning this task. The installation of SP2 does not automatically update these cabinet files for you.
To update the netcf.core.ppc3.ARM.cab and netcf.core.ppc3.x86.cab files in .NET Compact Framework 1.0 | http://msdn.microsoft.com/en-us/library/ms838242.aspx | crawl-002 | refinedweb | 1,773 | 50.73 |
in reply to
SOAP reply with correct namespace
Am I right in what I think?
I think no.
Anyone know how to make SOAP::Transport::HTTP::Server reply with the namespace provided by the client?
Specify a namespace, use one the client gives you, basic SOAP::Data.... make some xml
BTW, the namespace seems to be an issue because Visual Studio client won't recognise the result without the namespace it provided.
That sounds backwards, but it probably isn't the case. The likely issue is your WSDL specification is wrong, in addition to SOAP::Lite not handling it correctly.
PS. I hate soap.
Priority 1, Priority 2, Priority 3
Priority 1, Priority 0, Priority -1
Urgent, important, favour
Data loss, bug, enhancement
Out of scope, out of budget, out of line
Family, friends, work
Impossible, inconceivable, implemented
Other priorities
Results (252 votes),
past polls | http://www.perlmonks.org/?node_id=991364 | CC-MAIN-2015-32 | refinedweb | 145 | 55.95 |
wfw namespace elements
The
wfw namespace,
contains multiple elements. As more are added in various places I will
endeavor to keep the list here updated.
- wfw:comment
- The first element to appear in this namespace is
comment. This element appears in RSS feeds and contains the URI that comment entries are to be POSTed to. The details of this are outlined in the CommentAPI Specification.
- wfw:commentRSS
- The second element to appear in the wfw namespace is
commentRSS. This element also appears in RSS feeds and contains the URI of the RSS feed for comments on that Item. This is documented in Chris Sells Specification
2003-10-10 13:11 Comments (1)
This is great! I’ve been waiting for something to stick. Looks like people are going to use the commentRSS element.
Posted by Randy Charles Morin on 2003-10-25 09:38 | http://web.archive.org/web/20050305162845/http:/wellformedweb.org/news/wfw_namespace_elements/ | CC-MAIN-2015-22 | refinedweb | 144 | 67.45 |
[PATCH v14 3/6] namei: permit ".." resolution with LOOKUP_{IN_ROOT,BENEATH}
From:
Aleksa Sarai
Date:
Thu Oct 10 2019 - 01:43:13 EST
This patch allows for LOOKUP_BENEATH and LOOKUP_IN_ROOT to safely permit
".." resolution (in the case of LOOKUP_BENEATH the resolution will still
fail if ".." resolution would resolve a path outside of the root --
while LOOKUP_IN_ROOT will chroot(2)-style scope it). Magic-link jumps
are still disallowed entirely[*].
The need for this patch (and the original no-".." restriction) is
explained by observing there is a fairly easy-to-exploit race condition
with chroot(2) (and thus by extension LOOKUP_IN_ROOT and LOOKUP_BENEATH
if ".." is allowed), where a rename(2) of a path can be used to "skip
over" nd->root and thus escape to the filesystem above nd->root. An
attacker (thread1) repeatedly exchanges a directory on the victim's
lookup path with one closer to the real root -- renameat2(2) with
RENAME_EXCHANGE -- while the victim (thread2) repeatedly performs a
scoped lookup whose ".." components walk back through the swapped
directory:

  thread2 [victim]:
    for (;;)
      openat2(dirb, "b/c/../../etc/shadow",
              { .flags = O_PATH, .resolve = RESOLVE_IN_ROOT });
With fairly significant regularity, thread2 will resolve to
"/etc/shadow" rather than "/a/b/etc/shadow". There is also a similar
(though somewhat more privileged) attack using MS_MOVE.
With this patch, such cases will be detected *during* ".." resolution
and will return -EAGAIN for userspace to decide to either retry or abort
the lookup. It should be noted that ".." is the weak point of chroot(2)
-- walking *into* a subdirectory tautologically cannot result in you
walking *outside* nd->root (except through a bind-mount or magic-link).
There is also no other way for a directory's parent to change (which is
the primary worry with ".." resolution here) other than a rename or
MS_MOVE.
This is a first-pass implementation, where -EAGAIN will be returned if
any rename or mount occurs anywhere on the host (in any namespace). This
will result in spurious errors, but there isn't a satisfactory
alternative (other than denying ".." altogether).
A variant of the above attack is included in the selftests for
openat2(2) later in this patch series. I've run this test on several
machines for several days and no instances of a breakout were detected.
While this is not concrete proof that this is safe, when combined with
the above argument it should lend some trustworthiness to this
construction.
[*] It may be acceptable in the future to do a path_is_under() check (as
with the alternative solution for "..") for magic-links after they
are resolved. However this seems unlikely to be a feature that
people *really* need -- it can be added later if it turns out a lot
of people want it.
Signed-off-by: Aleksa Sarai <cyphar@xxxxxxxxxx>
---
fs/namei.c | 43 +++++++++++++++++++++++++++++--------------
1 file changed, 29 insertions(+), 14 deletions(-)
diff --git a/fs/namei.c b/fs/namei.c
index 9d00b138f54c..0d6857ac4e5b 100644
--- a/fs/namei.c
+++ b/fs/namei.c
@@ -491,7 +491,7 @@ struct nameidata {
struct path root;
struct inode *inode; /* path.dentry.d_inode */
unsigned int flags;
- unsigned seq, m_seq;
+ unsigned seq, m_seq, r_seq;
int last_type;
unsigned depth;
int total_link_count;
@@ -1769,22 +1769,35 @@ static inline int handle_dots(struct nameidata *nd, int type)
if (type == LAST_DOTDOT) {
int error = 0;
- /*
- * Scoped-lookup flags resolving ".." is not currently safe --
- * races can cause our parent to have moved outside of the root
- * and us to skip over it.
- */
- if (unlikely(nd->flags & LOOKUP_IS_SCOPED))
- return -EXDEV;
if (!nd->root.mnt) {
error = set_root(nd);
if (error)
return error;
}
+ if (unlikely(nd->flags & LOOKUP_IS_SCOPED)) {
+ bool m_retry = read_seqretry(&mount_lock, nd->m_seq);
+ bool r_retry = read_seqretry(&rename_lock, nd->r_seq);
+
+ /*
+ * If there was a racing rename or mount along our
+ * path, then we can't be sure that ".." hasn't jumped
+ * above nd->root (and so userspace should retry or use
+ * some fallback).
+ *
+ * In future we could do a path_is_under() check here
+ * instead, but there are O(n*m) performance
+ * considerations with such a setup.
+ */
+ if (unlikely(m_retry || r_retry))
+ return -EAGAIN;
+ }
}
return 0;
}
@@ -2254,6 +2267,10 @@ static const char *path_init(struct nameidata *nd, unsigned flags)
nd->last_type = LAST_ROOT; /* if there are only slashes... */
nd->flags = flags | LOOKUP_JUMPED | LOOKUP_PARENT;
nd->depth = 0;
+
+ nd->m_seq = read_seqbegin(&mount_lock);
+ nd->r_seq = read_seqbegin(&rename_lock);
+
if (flags & LOOKUP_ROOT) {
struct dentry *root = nd->root.dentry;
struct inode *inode = root->d_inode;
@@ -2275,8 +2292,6 @@ static const char *path_init(struct nameidata *nd, unsigned flags)
nd->path.mnt = NULL;
nd->path.dentry = NULL;
- nd->m_seq = read_seqbegin(&mount_lock);
-
/* LOOKUP_IN_ROOT treats absolute paths as being relative-to-dirfd. */
if (flags & LOOKUP_IN_ROOT)
while (*s == '/')
--
2.23.0
In certain cases you may need to use motors for different purposes. Depending on the experiment or the device you are building, the type of motor to use can be different. In this series of posts I will show you how to use those motors with Arduino.
Servo Motors
Servo motors are motors whose speed and angle of rotation you can control accurately enough. They can be used to precisely move parts in your device. For example, in a physics experiment you can use a servo motor to hold a ball in a given position and release it at a given time to study its motion.
As most motors do, servo motors can just rotate around their axis. However, with a bit of mechanics you can realise almost any kind of movement; to get an idea of what you can do with a rotating axis and some mechanics, it is worth reading up on basic linkage mechanisms.
Controlling a servo motor is incredibly easy with Arduino. Servo motors have three wires, usually coloured black, red, and white. The black one must be connected to the GND pin of the Arduino and the red one to the 5V pin; together they power the motor. The third wire is used to control it. Any servo motor works by receiving voltage pulses of a given width. Pulses are repeated at regular intervals (e.g. every 20 ms), and the duration of a single pulse determines the rotation angle of the motor around its axis, or its speed.
Given the working principle of a servo motor, you will not be surprised to know that you need to connect the control wire to one of the PWM (Pulse Width Modulation) pins of Arduino. PWM pins can be configured so that they provide voltage pulses at a given interval with the chosen duty cycle. The duty cycle of a PWM pin is the fraction of time for which the pin is at its highest voltage. An excellent tutorial on PWM pins is given here.
Setting the pin to zero makes the PWM pin stay off, while setting it to its maximum value of 255 makes the pin stay on continuously, i.e. with a duty cycle of 100%. In order to observe a square wave on the pin you must set it to 128, such that f = 128/255 ≈ 0.5 and the duty cycle is about 50%. Any other number makes the pin produce a square wave whose pulse width ranges from 0 to T, where T depends on the Arduino flavour. On the Arduino UNO, T is about 2 ms.
With Arduino you don't need to compute the values yourself to control a servo motor. The Servo library is in fact available for use and provides all the relevant configurations automatically. Consider the following sketch:
#include <Servo.h>
Servo myservo;
void setup() {
myservo.attach(9);
}
void loop() {
myservo.write(90);
delay(1500);
myservo.write(0);
delay(1500);
}
The attach() method in setup() just tells Arduino to which pin the motor is attached; the motor is represented by the object myservo, declared globally (Servo myservo). In the example the servo is attached to pin 9. You can then pass an angle, in degrees, to the write() method. Its effect depends on which type of servo motor you have.
There are, in fact, two types of servo motors: continuous rotation and standard. On a standard servo, the write() command moves the shaft to the specified angle. A call to
myservo.write(90);
with a standard servo makes it move to the 90-degree position. With the above sketch, a servo of this type rotates to 90 degrees, holds that position for 1.5 s, then returns to its initial position, where it stays for 1.5 s. Since this happens in loop(), the movements repeat continuously.
When a servo motor of this type is powered and receiving a train of pulses, it resists forces applied to it and holds its position. Of course, you cannot apply forces that are too large if you do not want to damage your servo. The maximum torque you can exert on the servo axis is one of the characteristics you can find on the servo's data sheet.
The maximum torque is usually measured in kg·cm: the product of the mass of a weight and its distance from the motor axis, measured along a beam mounted on that axis. Using a servo whose maximum torque is 10 kg·cm, you can suspend 10 kg at a distance of 1 cm from the axis, or equally 1 kg at 10 cm. The larger the distance, the lower the weight the servo can hold. Needless to say, it is not a good idea to run a servo close to its limit: if you need a given torque, choose a servo that can tolerate at least twice that torque.
To compute the torque in N·cm, multiply the torque in kg·cm by 9.8, i.e. by the gravitational acceleration. A servo whose maximum torque is 10 kg·cm can thus tolerate a force of 10·9.8 = 98 N acting 1 cm from the axis, or a force of 9.8 N acting at a distance of 10 cm.
Continuous rotation servo motors, instead, rotate clockwise or anticlockwise for as long as they are powered. The number passed to the write() method represents the rotation speed: numbers from 0 to 89 make the motor axis rotate anticlockwise (seen from above), while numbers from 91 to 180 make it rotate clockwise. The maximum speed is attained at 0 or 180: below 90, the higher the number, the slower the rotation; above 90, the speed increases with the number. Setting the value to 90 makes the servo stop. With the above sketch, then, a continuous rotation servo rotates anticlockwise for 1.5 s, stops for another 1.5 s, and so on.
Here you can find the motor we used for our tests, but there are plenty of them in the Farnell Electronics store. Below is a video showing our continuous rotation servo working with the sketch outlined above.
This post appears on my personal blog, too, and its content will be part of my freely available publication Scientific Arduino, available here.
This walkthrough uses the Visual Studio 2010 WPF and Silverlight Designer.
<UserControl.Resources>
    <my:String x:Key="applicationTitle">Resource Dictionary Demo</my:String>
    <SolidColorBrush x:Key="applicationTitleForeground" Color="Yellow" />
    <my:Double x:Key="applicationTitleFontSize">18</my:Double>
</UserControl.Resources>
The below XAML consumes the above resources.
<Grid x:Name="LayoutRoot">
<TextBlock
Text="{StaticResource applicationTitle}"
Foreground="{StaticResource applicationTitleForeground}"
FontSize="{StaticResource applicationTitleFontSize}"
VerticalAlignment="Top"/>
</Grid>
Resources are located in resource dictionaries.
What I have referred to as a resources section is actually the Resources property (available in both WPF and Silverlight), which is of type ResourceDictionary. This means that resources can be located on Windows, UserControls, Grids, Buttons, TextBoxes, ListBoxes, etc.
In addition to an object's Resources property, resources can also be located in a ResourceDictionary XAML file.
Create a new WPF or Silverlight project and add the following XAML.
<Grid x:Name="LayoutRoot">
<Grid.Background>
<LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0">
<GradientStop Color="Blue" Offset="0" />
<GradientStop Color="#460000FF" Offset="1" />
</LinearGradientBrush>
</Grid.Background>
<TextBlock
Text="Resource Dictionary Demo"
Foreground="Yellow"
FontSize="18"
VerticalAlignment="Top"/>
</Grid>
We will now use the "Extract Value to Resource…" feature of the Designer.
In the below image we can see what the Designer tool did for us.
To help us quickly locate the TextBlock properties that have been set, we will now use a very cool feature of the Designer's Properties Window, "Sort by property source."
After extracting the above properties to resources, your XAML will look like the below image.
The required root Grid and application title TextBlock properties are now set for reuse.
You have several options open to you to change these. First, if you define the alias in the XAML file before extracting to a resource, the Designer tool will reuse the alias when referencing the same namespace and assembly. You could also edit the XAML after the Designer tool creates it. After you have worked with WPF or Silverlight for a short time, you'll get an understanding of the aliases you use and can just add them to your new XAML files before you start adding other content to the XAML file.
The below image is the anatomy of a simple WPF or Silverlight application that is composed of hierarchy of objects nested within each other.
Note
Silverlight does not have system scoped resources like WPF does.
When we say resources are scoped, we mean that resources can be defined at various levels of the application hierarchy. Resources defined at a certain scope are available to all elements in or below that scope.
For example, we previously extracted property values to the UserControl scope and the child Grid and TextBlock could access those resources.
Now, what if we wanted to consume those same resources we extracted earlier in another UserControl; how could we accomplish this?
To make the resources available, to other UserControls or Windows in the application, we need to move the resources to the Application scope.
When moving resources to Application scope, you have several options available to you:
For purposes of demonstration, I've cut the UserControl resources and pasted them into the Application resources section. Notice I also cut and pasted the alias "my:" too.
You can also remove the UserControl.Resources element from the UserControl's XAML, as I have done in the Demo.xaml image below.
You'll need to rebuild the application to remove the squiggles from your UserControl's XAML. The UserControl looks exactly as it did, but is now resolving its resources from the Application scope instead of the UserControl scope. We have just enabled reuse of these resources throughout the application. You can also see how Demo.xaml is smaller in size and looks cleaner now that we have moved the resources out of the local XAML file.
<Application
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    x:Class="ResourceDictionaryDemo.App"
    xmlns:my="clr-namespace:System;assembly=mscorlib">
    <Application.Resources>
        <my:String x:Key="applicationTitle">Resource Dictionary Demo</my:String>
        <SolidColorBrush x:Key="applicationTitleForeground" Color="Yellow" />
        <my:Double x:Key="applicationTitleFontSize">18</my:Double>
    </Application.Resources>
</Application>
<UserControl
    x:Class="ResourceDictionaryDemo.MainPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    mc:Ignorable="d"
    d:DesignHeight="300" d:DesignWidth="400"
    xmlns:my="clr-namespace:System;assembly=mscorlib">
    <Grid x:Name="LayoutRoot">
<TextBlock
Text="{StaticResource applicationTitle}"
Foreground="{StaticResource applicationTitleForeground}"
FontSize="{StaticResource applicationTitleFontSize}"
VerticalAlignment="Top"/>
</Grid>
</UserControl>
If your application only requires a few resources, locating them in the Application.xaml or App.xaml would be fine.
However, in most applications the number of resources grows quickly and must be managed early in the application lifecycle.
Resource dictionaries are used to group related resources, then those resource dictionaries are then merged at the Application or other required scope.
What we will do now is create a resource dictionary file (Assets/FormDictionary.xaml) and merge it at the Application scope:
<Application
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    x:Class="ResourceDictionaryDemo.App">
<Application.Resources>
<ResourceDictionary>
<ResourceDictionary.MergedDictionaries>
<ResourceDictionary Source="Assets/FormDictionary.xaml" />
</ResourceDictionary.MergedDictionaries>
</ResourceDictionary>
</Application.Resources>
</Application>
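The contents of Assets/FormDictionary.xaml are not shown in this post; a minimal dictionary along these lines would satisfy the merge above (the keys here are illustrative, not from the original project):

```xaml
<ResourceDictionary
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
    <SolidColorBrush x:Key="formBackground" Color="#FFF0F0F0" />
    <SolidColorBrush x:Key="formLabelForeground" Color="#FF333333" />
</ResourceDictionary>
```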
Tip
In the next step we will be extracting Double and String values to resources. The Designer tool will add the "my" alias for the System namespace in mscorlib if an alias has not been added.
To make your applications more readable and easier to maintain over time, you can proactively add meaningful aliases before they are needed, as I have done in the yellow highlighted XAML below.
The next logical step from here is to learn how to combine individual resource values into a Style and then apply that single Style to a control.
This MSDN topic, WPF Styling and Templating provides a good starting point for understanding Styles.
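As a brief sketch of that next step, the application-title resources used earlier could be folded into a single Style and applied with one property setting (the Style key is my own naming):

```xaml
<!-- In App.xaml or a merged dictionary -->
<Style x:Key="applicationTitleStyle" TargetType="TextBlock">
    <Setter Property="Foreground" Value="{StaticResource applicationTitleForeground}" />
    <Setter Property="FontSize" Value="{StaticResource applicationTitleFontSize}" />
</Style>

<!-- Consuming it -->
<TextBlock Text="{StaticResource applicationTitle}"
           Style="{StaticResource applicationTitleStyle}"
           VerticalAlignment="Top" />
```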
MSDN topic WPF Resources Overview
MSDN topic Silverlight Resource Dictionaries
Microsoft values your opinion about our products and documentation. In addition to your general feedback it is very helpful to understand:
Thank you for your feedback and have a great day,
Karl Shifflett
Expression Team
Hi,
Thank you for article.
Do you have some advice (recommendation) about using resource dictionary in a complex solution ex: Silverlight Business Application + one or more WCF RIA Service Class Library.
Right now, I am not able to see - at design time - resources (added in App.xaml: <Application.Resources> <ResourceDictionary> <ResourceDictionary.MergedDictionaries> ...) in the Silverlight project of a WCF RIA Class Library.
Thank you
Radu Lodina
Radu,
I was able to see the resources located in another project in the solution by following these steps:
1. Installed the latest SL Tools for VS from here:
2. Create a new SL Business Application
3. Added a new SL Class Library to the solution
4. Added a Resource Dictionary
5. Added a SolidColorBrush resource
6. In the main SL project, added a reference to the SL Class Library project
7. In App.xaml inside the <ResourceDictionary.MergedDictionaries> section I added
<ResourceDictionary Source="/SilverlightClassLibrary1;component/Dictionary1.xaml" />
8. Open MainPage.xaml in the Designer. Select any control, using the properties window, find a brush property, then click on the property marker. The resource was visible for me.
If you can't get this to work in your project, please send me a link to the project so I can download it and have a look at it.
Have a nice day,
Karl Shifflett
Thank you for the response.
My problem is:
How can I see, in my SL Class Library project (at design time), resources defined in my main project (SL Business Application (App.xaml)) - all projects are in the same solution.
At run time, resources defined at the application level (App.xaml) are available to all controls instantiated from my SL Class Library.
In my scenario (my VS solution contain):
1. SL Business Application - it's a container (provides common framework functionality to my app: logging, user notification, etc. and, more important for this discussion, simple theme selection - dynamically adding/removing entries from Application.Current.Resources.MergedDictionaries).
2. One or more WCF RIA Service Class Library - to keep things simple, one or more SL Class Library. Each of these libraries contains a well defined task (these are my modules, e.g. Accounting, Warehouse, PR) which I load dynamically, based on user request. A functional app (SL4) can be found at demo.indecosoft.net user: demo, pass: demo
Now - in your above steps :
A. In step 4: Added a Resource Dictionary - add this resource dictionary in SL Business Application (your main SL project).
B. Change step 7 - to: <ResourceDictionary Source="Dictionary1.xaml" />
C. In SL Class Library project add a simple user control.
D. Apply SolidColorBrush resource defined on step A to user control defined in step C.
I'm sorry, but I don't understand what problem you are having. I can see and apply resources merged in App.xaml, from any project within my solution using the Designer's Apply Resource dialog.
Do you have the Visual Studio 2010 RTM?
Do you have the latest Silverlight 4 Tools for Visual Studio installed? A new version was released on 5/13/2010.
Do you have your resource dictionaries merged in App.xaml? In order for the Designer to "see" your resources, they MUST be merged in App.xaml or merged in the Page or UserControl that is consuming them.
Please let me know so I can try and help you, thank you,
This is the best post I have ever seen about how to manage resources in WPF, and more than anything, it works like a charm!
Thanks a lot
Thank you for your kind remarks on Visual Studio and this post.
Have a great day,
Karl
Hi Karl,
Great post. Going through the steps I got a better feeling for how the resource dictionaries work. I'm looking at the 3 new Themes recently released that, when opened in Blend as a project, can be used to create a custom Theme. I like the organization, though it's a bit overwhelming at first. Forgive me for asking a possibly stupid question here, but what is the process to get from creating a new theme based on one of these 3 to using it in an application? I'm confused about which files to add to the application I want to apply the theme to and how to implicitly apply it to a large application.
Might you have a suggestion to complete that process? I'm having a difficult time finding this information.
Regards,
Bob
Bob,
Thank you for your kind words.
Want to be sure I understand your requirements.
You have an existing application, and you would like to be able to use one of the new themes, is that correct? You are looking for some guidance on how to plug the new theme into your application, correct?
Cheers,
Great article. I'm looking forward to building my first WPF app and this article has helped me understand more about using resources. Thanks.
In this tutorial, I'm going to show you how to build a BeatFlower: a flower-shaped, USB-powered "wall light" that doubles as a color display for e.g. music on your computer.
I use the following components in this tutorial:
- 5x 12 bit RGB LED ring using the WS2812b controller
- 1x ATtiny85 microcontroller on a Digispark USB Development Board
- 1x FT232R cable (FTDI for USB to serial)
- 2x Acrylic plate
Additionally, I'm using the following consumables:
- Wire (thin, mine is around 0.3mm)
- Solder (the regular one)
- Screws and nuts (mine are plastic, 2mm diameter)
I used these tools for completing the project:
- Soldering iron (a normal one suffices)
- Wire cutter
During the process of making the BeatFlower, I also used an external service for cutting my acrylic plates. You might want to consider doing it yourself if you have access to a laser cutter; I will go into detail in the respective step.
Step 1: Testing the LED Rings
I got my LED rings through eBay for cheap from China (look for "ws2812b ring 12"). I got mine for around 2€ a pop (plus free shipping, yay). If you buy them from e.g. Adafruit they are much more expensive, but of course you won't have to wait four weeks for them to arrive.
The controller used in these rings is a WS2812b. This neat little device has four pins and can be daisy-chained; you can control up to 1024 LEDs with one single line.
The four pins on it are:
- 5V: Your power supply. I tested 64 LEDs on full brightness with my regular USB power, worked just fine.
- GND: Ground of whatever power supply you use.
- DI: This is the "Data In" pin; you connect your microcontroller data pin to this.
- DO: This is the "Data Out" pin; if you daisy-chain these, this is what goes to the next element's DI pin.
So for our tests for now we just need the first three pins. I soldered all of them anyway, as you can see on the picture. For testing purposes, I connected it to my ATmega2560 (see pictures). Connect 5V and GND to the respective pins on your ATmega (or equivalent) and use this code for testing it. I configured it to use pin 5 on the ATmega, so if you don't change it in the code, connect your DI to your pin 5 as well.
This code uses the (GPLv3) NeoPixel library from Adafruit. I installed it via "Sketch/Include Library/Manage Libraries..." in my Arduino IDE. See the picture for the exact library to install; I used version 1.0.4. If your ring lights up like on the picture, you're set. If not, check if you mixed up DI and DO, check all connections for loose cables, reflash the code on your microcontroller, and try again.
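The linked test code isn't reproduced here; a minimal Arduino sketch in the same spirit, assuming the Adafruit NeoPixel library and the wiring described above (data on pin 5), would look like this (NUM and the color are arbitrary choices of mine):

```cpp
#include <Adafruit_NeoPixel.h>

#define PIN 5    // data pin wired to the ring's DI
#define NUM 12   // one 12-LED ring; use 60 for the five chained rings later

Adafruit_NeoPixel ring(NUM, PIN, NEO_GRB + NEO_KHZ800);

void setup() {
  ring.begin();            // initialise the driver
  ring.setBrightness(40);  // keep current draw modest while testing
}

void loop() {
  for (int i = 0; i < NUM; i++) {
    ring.setPixelColor(i, ring.Color(0, 120, 60));  // teal-ish test color
  }
  ring.show();             // push the buffer out to the WS2812b chain
  delay(500);
}
```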
Step 2: Assembling the Lights, Adding an ATtiny85
For convenience, I used a Digispark USB Development Board (driven by an ATtiny85) rather than directly using an ATtiny85. I got the Digispark for around 1.50€ from eBay (from China again, so waiting time applies). The reason to use this is that its flashable easily using USB, and it brings its own pins.
As you can see on the pictures, I daisy-chained the five LED rings. As mentioned before, the output (DO) of one ring goes to the DI of the next. The first DI is fed directly by the ATtiny85. An important thing to note here is that the physical pins on the controller and the software pins you define in your code don't follow the same numbering. This article has a great schematic picture of it. So right now I'm using the software pin P0 (physical pin 5) as the output line. The Digispark actually has the software pins written onto its PCB to reduce confusion.
Besides DI and DO, you have to connect all 5V pins and all GND pins on the rings. Once you've done that, you're set. If you want to test this, you have to change the NUM parameter in the test code from 12 to 60 (5x12) so that the NeoPixel library knows what to send. If everything lights up as expected, you're good to go.
Step 3: Making the Frame for the Flower
The flower frame itself consists of two parts: A black acrylic back plate, and a milky white, semi-translucent acrylic front plate. I let the awesome guys at FORMULOR* make these for me. The designs I sent them are available here; they are in exactly the format they accept. You'll have to open them with Inkscape (or similar) to see the very fine hair lines that the laser cutter expects. They take about a week to ship and the results are awesome.
You can see my exact order on the picture. All in all I paid roundabout 17€ for the material, the cutting, and shipping. Not too bad.
Note that the holes on the schematics are 2mm in diameter. I chose this because the mounting holes on the LED rings are that size. This is a non-standard size (usually the smallest you can get is M3, which measures 3mm). To make this work, I ordered special plastic screws and nuts from smartshapes. Specifically, I got the screws as M2 in length 5mm and 8mm. Also, I bought the nuts there. You'll need ten each, which luckily is exactly the amount they package. Plus shipping, this sums up to about 10€.
The back plate has an extra hole where the cables go out. You don't see it through the semi-translucent front plate, but the lights are well visible through the front plate.
Step 4: Assembling the Lights Onto the Front Plate
To assemble the lights onto the front plate, simply push the longer 8mm screws through the holes, directly attach the LED rings onto them using the mounting holes, and put a nut on each.
Each LED ring has two mounting holes and all in all you'll have to attach ten nuts. The best is to arrange them in a circular, clock-wise order, starting with the top one (if you look at the front plate's shape, one sticks out; this is the top). This way, your design will be best compatible with the code I'm supplying for actually using it later on (in terms of numbering the rings when accessing them software-wise).
Also, the LED rings have four mounting holes. Use the outer two; this means, the cables go towards the center of the acrylic plate and are on the opposite side of the mounting holes you use. See the pictures for details.
It really is a bit of a squeeze with the cables I used, but they fit thanks to the distance nuts. You might consider making them shorter. Actually, after doing this, I realized that it might have been easier to first assemble the lights onto the front plate, then solder the cables. This way, the length of the cables would have been easier to estimate - plus, the LED rings are fixed and don't slip away during soldering.
Step 5: Assembling the Front and the Back Plate
After having fully assembled the front plate, we're going to fix it onto the back plate. In the first picture, you see how the nuts will be aligned. They will be fixed to the back plate using the 5mm screws.
Here is something I encountered because I haven't thought about it before: Neither the Digispark nor the connector I soldered to the wires fit through the hole in the back plate. I had to unsolder and resolder the cables in order to bring the cables through the hole.
After doing that, screw the nuts (that are currently on the front plate) to the back plate. It only fits in one way due to the one leaf of the flower sticking out. After assembling this, try the lights - it should resemble what you see in the video.
Step 6: Adding a Serial Interface to Control the Lights
To control the lights from a computer and not only let them flicker like crazy, we're adding a serial interface, connected to the computer via USB. My USB to RS232 cable (got it from China, 1.39€) essentially looks like the one sold by Adafruit (also, see the picture). The pinout is as follows:
- Red wire: Power (5V)
- Black wire: Ground
- White wire: RX into USB port
- Green wire: TX out of USB port
Remember that the RX of this guy has to go into the TX of the ATtiny85, and vice versa.
One of the larger challenges in this project was that the ATtiny85 has a very limited flash storage. This means that not all combinations of software actually fit onto it. My goal is to combine Adafruit's Neopixel library and SoftwareSerial on it such that the LEDs can be controlled from a host computer through USB-to-serial. My initial implementation hit a wall with the flash storage when compiling for the Digispark though:
Sketch uses 6,586 bytes (109%) of program storage space. Maximum is 6,012 bytes.
This is a bit too much. I played around with using alternative LED libraries (e.g. FastLED), but that didn't much reduce the size (although it has other cool features on its own). Next, I tried to replace SoftSerial (which is basically Digispark-speak for SoftwareSerial; they added PCINT support). I ended up implementing a stripped-down version of SoftSerial which you can download here (just put the folder into your Arduino IDE's libraries folder). I completely removed all TX functionality because we only want to send data to the ATtiny, not receive any from it. With a bigger controller, I would argue that return values and current state are super useful, but with the ATtiny in mind, I'll go for this solution.
Anyway, with this RecvOnlySoftSerial library, things look brighter:
Sketch uses 5,910 bytes (98%) of program storage space. Maximum is 6,012 bytes.
The sketch (which you can download here) listens for input from the serial line and acts according to what it receives. Each command must be followed by a carriage return (CR, '\r'). A simple API overview is shown here:
- r%d: Select the ring with the index R. Use the range 0-4.
- l%d %d %d %d: Set the LED with the index given by the first integer on the currently selected ring to the RGB color value defined by R, G, B (the second through fourth integer values). Use the range 0-11 for the LED index and 0-255 for the color values.
- c: ("commit") Apply the changes described by the former functions to the LEDs (before this, nothing changes on the LEDs).
For simplicity reasons, I wrote a small library that can interface the serial line and execute these commands (and even do a bit more). You can download it here.
The final result can be seen in the video. The code repository includes a file called demo.cpp that just runs through all rings, their individual LEDs, and lights them up in a random color, continuously. See the video for an impression.
I left the project at that stage. Everything seems to be working, but the interface still is a bit slow-ish. I will probably return to this at a later point to prepare a visualizer plugin for VLC or the like, but at its current state, this project is working and fine by itself.
Step 7: Bottom Line: Experiences, Extensions, Costs
Here, I will write down my thoughts on what I learned, what could have gone better, and what the overall project cost in terms of money and time.
What I learned
I definitely took away a lot about programming the ATtiny85, the role of its individual pins, and how to connect devices to it. This was a first for me. The same applies to programming the Digispark (which, frankly, just makes life easier).
Also, I gathered lots of experience in preparing assembly, designing sketches for laser cutters, and finding the right materials/consumables for putting it all together. There are also lots of details I had to re-do because I didn't think of them; just think of putting the connectors for the Digispark through the backplate hole.
Finally, I got a much better understanding of how long certain tasks take. It's easy to underestimate the effort of making things, so it's safer to assume it's gonna take a bit longer (or you have to start over in the middle because you realize you're a stupid moron and forgot something super important).
What could have gone better
Mounting holes: One thing I keep being annoyed by is that I didn't let the FORMULOR* guys make holes for mounting it somewhere. I can sure lean it against the wall, but to actually mount it somewhere I'd like some extra holes. Next time, I'll consider this. Maybe my next project will be a stand for this thing.
Cable length: The cables I used to connect the LED rings are pretty long; I should have made them shorter. I really didn't expect the mess under the front plate (although you don't really see them). This is more an aesthetic issue than anything else.
What it cost
I ordered/bought these parts:
- 5x LED rings: 9.50€
- Digispark USB Development Board: 1.39€
- USB to RS232 cable: 1.39€
- Acrylic plates + Cutting + Shipping: 16.85€
- Screws + Nuts + Shipping: 9.70€
- Wire: 1€
- Solder: 1€
All material considered, the BeatFlower cost somewhere around 40€. I'm not counting the ATmega2560 because that was really only for testing purposes and I already had it. I could have used the Digispark for that as well.
In terms of time, I roughly spent 2-3 hours on finding all the parts I needed, soldering (and resoldering) sums up to about two hours, programming (and testing) the controllers, LEDs, and the library is another 4-5 hours, and for the overall assembly I'd account 2 more hours. So all in all I'd say you're looking at the product of around 10-12 hours of work.
All in all, I'm very happy with the project's outcome. It was fun, had a cool result, and I learned a lot. I hope you enjoyed it as much as I did.
2 Discussions
11 months ago
Your post is the kind of post that makes me love Instructables. I prefer to learn as I go, and you broke it down with great references. I have some WS2812B strips on the way (from China) that I can't wait to play with.
2 years ago
Cool lighting display. This could be used to make some really nice Christmas light setups. | https://www.instructables.com/id/BeatFlower-With-DigisparkATtiny85-and-WS2812b/ | CC-MAIN-2019-39 | refinedweb | 2,527 | 80.21 |
wmemset man page
Prolog
This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux.
wmemset — set wide characters in memory
Synopsis
#include <wchar.h> wchar_t *wmemset(wchar_t *ws, wchar_t wc, size_t n);
Description
The functionality described on this reference page is aligned with the ISO C standard. Any conflict between the requirements described here and the ISO C standard is unintentional. This volume of POSIX.1-2008 defers to the ISO C standard.
The wmemset() function shall copy the value of wc into each of the first shall copy zero wide characters.
Return Value
The wmemset() functions shall return the value of ws.
Errors
No errors are defined.
The following sections are informative.
Examples
None.
Application Usage
None.
Rationale
None.
Future Directions
None.
See Also
wmemchr(), wmemcmp(), wmemcpy(), wmemmove()cpy(3p), wmemmove(3p). | https://www.mankier.com/3p/wmemset | CC-MAIN-2019-04 | refinedweb | 162 | 51.85 |
The DtoKPiPiCLEO class implements the Dalitz plot fits of the CLEO collaboration for D → K π π decays.
#include <DtoKPiPiCLEO.h>
The DtoKPiPiCLEO class implements the Dalitz plot fits of the CLEO collaboration for D → K π π decays, published in Phys. Rev. Lett. 89 (2002) 251802 and Phys. Rev. D63 (2001) 092001.
Definition at line 31 of file DtoKPiPiCLEO.h.
Calculate the amplitude for a resonance.
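In Dalitz-plot fits of this kind, the amplitude for a single resonance is typically a relativistic Breit-Wigner evaluated at the two-body invariant mass squared. The sketch below illustrates the basic form only; it is not Herwig's actual implementation (the function name and the use of a constant width are simplifying assumptions — a full fit also includes spin factors, a mass-dependent width, and form factors):

```cpp
#include <complex>
#include <cmath>

// Relativistic Breit-Wigner amplitude for a resonance of mass m0 and
// width gamma0, evaluated at the two-body invariant mass-squared s.
// Simplified sketch: a real Dalitz-plot fit multiplies in angular
// (spin) factors and Blatt-Weisskopf form factors as well.
std::complex<double> breitWigner(double s, double m0, double gamma0) {
    const std::complex<double> i(0.0, 1.0);
    return 1.0 / (m0 * m0 - s - i * m0 * gamma0);
}
```

The magnitude of this amplitude peaks at s = m0², where it equals 1/(m0 gamma0).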
Make a simple clone of this object.
Implements ThePEG::InterfacedBase.
Definition at line 122 of file DtoKPiPiCLEO.h.
Definition at line 128 of file DtoKPiPiCLEO.h.
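The clone() member follows the standard polymorphic-clone pattern required by ThePEG::InterfacedBase: each concrete class returns a fresh copy of itself. A minimal sketch of the idiom is shown below; note that the real ThePEG interface uses its own smart-pointer types (not std::unique_ptr), so the class and pointer types here are purely illustrative:

```cpp
#include <memory>

// Sketch of the polymorphic-clone pattern: the base class declares a
// virtual clone(), and each concrete class copy-constructs itself.
struct Base {
    virtual ~Base() = default;
    virtual std::unique_ptr<Base> clone() const = 0;
};

struct Concrete : Base {
    int value = 42;
    std::unique_ptr<Base> clone() const override {
        // Copy-construct a new, independent object of the most
        // derived type.
        return std::make_unique<Concrete>(*this);
    }
};
```

Calling clone() through a Base pointer yields a deep copy of the concrete object without the caller needing to know its dynamic type.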
Magnitudes and phases of the amplitudes for the first decay mode.
Amplitude of the non-resonant component.
Definition at line 328 of file DtoKPiPiCLEO.h.
Magnitudes and phases of the amplitudes for the second decay mode.
Amplitude for the resonance.
Definition at line 453 of file DtoKPiPiCLEO.h.
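In an isobar-model Dalitz fit, the fitted magnitudes and phases stored in these members combine the individual resonance amplitudes coherently, A_total = Σ_k a_k e^{iφ_k} A_k. A hedged sketch of that combination (function and argument names are invented for illustration):

```cpp
#include <complex>
#include <vector>
#include <cmath>

// Coherent isobar sum: A_total = sum_k a_k * exp(i*phi_k) * A_k.
// The fitted magnitudes (mags) and phases correspond to the members
// documented above; amps holds the dynamical amplitudes A_k.
std::complex<double> totalAmplitude(const std::vector<double>& mags,
                                    const std::vector<double>& phases,
                                    const std::vector<std::complex<double>>& amps) {
    std::complex<double> total(0.0, 0.0);
    for (std::size_t k = 0; k < amps.size(); ++k)
        total += std::polar(mags[k], phases[k]) * amps[k];
    return total;
}
```

Because the sum is coherent, relative phases matter: two equal-magnitude components with a relative phase of π cancel exactly.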
Masses, widths and related parameters.
Whether to use local values for the masses and widths or those from the ParticleData objects.
Definition at line 173 of file DtoKPiPiCLEO.h.
Parameters for the phase-space integration.
Maximum weights for the modes.
Definition at line 638 of file DtoKPiPiCLEO.h.
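Maximum weights like these are typically used in hit-or-miss (accept/reject) phase-space generation: a candidate configuration with weight w is kept with probability w / w_max. A minimal sketch of that acceptance step (not Herwig's actual generator code; names are illustrative):

```cpp
#include <random>

// Hit-or-miss sampling: accept a candidate with probability w / wMax.
// If wMax underestimates the true maximum weight, the generated
// distribution is biased, which is why generators store tuned
// per-mode maximum weights like the member documented above.
bool acceptCandidate(double w, double wMax, std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    return u(rng) < w / wMax;
}
```

Over many trials the acceptance fraction approaches w / wMax, so a candidate with half the maximum weight is kept about half the time.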
Masses for the Breit-Wigner.
The pion mass.
Definition at line 653 of file DtoKPiPiCLEO.h.
Parameters for the Blatt-Weisskopf form-factors.
Radial size for the Blatt-Weisskopf form-factor.
Definition at line 623 of file DtoKPiPiCLEO.h.
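Blatt-Weisskopf barrier factors suppress high-angular-momentum amplitudes at low break-up momentum, and the radial size R documented above sets the scale of that suppression. Below is a sketch of the textbook L = 0, 1, 2 factors; normalisation conventions vary between fits, so this is not necessarily Herwig's exact form:

```cpp
#include <cmath>

// Blatt-Weisskopf barrier factor B_L(q, q0, R) for orbital angular
// momentum L, break-up momentum q, reference momentum q0 (evaluated
// at the resonance mass) and radial size R.
double blattWeisskopf(int L, double q, double q0, double R) {
    const double z  = (q  * R) * (q  * R);   // (q R)^2
    const double z0 = (q0 * R) * (q0 * R);   // (q0 R)^2
    switch (L) {
        case 0: return 1.0;
        case 1: return std::sqrt((1.0 + z0) / (1.0 + z));
        case 2: return std::sqrt((9.0 + 3.0 * z0 + z0 * z0) /
                                 (9.0 + 3.0 * z  + z  * z ));
        default: return 1.0;  // higher L not needed for this sketch
    }
}
```

With this normalisation the factor is exactly 1 at q = q0, and for L > 0 it falls below 1 as q grows beyond q0.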
The static object used to initialize the description of this class.
Indicates that this is a concrete class with persistent data.
Definition at line 155 of file DtoKPiPiCLEO.h. | http://herwig.hepforge.org/doxygen/classHerwig_1_1DtoKPiPiCLEO.html | CC-MAIN-2018-05 | refinedweb | 235 | 61.73 |
In this post, we will understand the difference between local and global variables.
It is generally declared inside a function.
If it isn’t initialized, a garbage value is stored inside it.
It is created when the function begins its execution.
It is lost when the function is terminated.
Data sharing is not possible since the local variable/data can be accessed by a single function.
Parameters need to be passed to local variables so that they can access the value in the function.
It is stored on a stack, unless mentioned otherwise.
They can be accessed using statement inside the function where they are declared.
When the changes are made to local variable in a function, the changes are not reflected in the other function.
Local variables can be accessed with the help of statements, inside a function in which they are declared.
Following is an example −
#include <stdio.h> int main () { /* local variable declaration */ int a, b; int c; /* actual initialization */ a = 10; b = 20; c = a + b; printf ("value of a = %d, b = %d and c = %d\n", a, b, c); return 0; }
It is declared outside the function.
If it isn’t initialized, the value of zero is stored in it as default.
It is created before the global execution of the program.
It is lost when the program terminates.
Data sharing is possible since multiple functions can access the global variable.
They are visible throughout the program, hence passing parameters is not required.
It can be accessed using any statement within the program.
It is stored on a specific location inside the program, which is decided by the compiler.
When changes are made to the global variable in one function, these changes are reflected in the other parts of the program as well.
Following is an example −
#include /* global variable declaration */ int g; int main () { /* local variable declaration */ int a, b; /* actual initialization */ a = 10; b = 20; g = a + b; printf ("value of a = %d, b = %d and g = %d\n", a, b, g); return 0; } | https://www.tutorialspoint.com/difference-between-local-and-global-variable | CC-MAIN-2022-21 | refinedweb | 341 | 64.51 |
This action might not be possible to undo. Are you sure you want to continue?
Structure
1.0 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 Introduction Objectives Mathematical Background Process of Analysis Calculation of Storage Complexity Calculation of Time Complexity Summary Solutions/Answers Further Readings
Analysis of Algorithms
Page Nos.
7 7 8 12 18 19 21 22 22
1.0
INTRODUCTION
A common person’s belief is that a computer can do anything. This is far from truth. In reality, computer can perform only certain predefined instructions. The formal representation of this model as a sequence of instructions is called an algorithm, and coded algorithm, in a specific computer language is called a program. Analysis of algorithms has been an area of research in computer science; evolution of very high speed computers has not diluted the need for the design of time-efficient algorithms. Complexity theory in computer science is a part of theory of computation dealing with the resources required during computation to solve a given problem. The most common resources are time (how many steps (time) does it take to solve a problem) and space (how much memory does it take to solve a problem). It may be noted that complexity theory differs from computability theory, which deals with whether a problem can be solved or not through algorithms, regardless of the resources required. Analysis of Algorithms is a field of computer science whose overall goal is understand the complexity of algorithms. While an extremely large amount of research work is devoted to the worst-case evaluations, the focus in these pages is methods for average-case. One can easily grasp that the focus has shifted from computer to computer programming and then to creation of an algorithm. This is algorithm design, heart of problem solving.
1.1 OBJECTIVES
After going through this unit, you should be able to: • • • • • • understand the concept of algorithm; understand the mathematical foundation underlying the analysis of algorithm; to understand various asymptotic notations, such as Big O notation, theta notation and omega (big O, Θ, Ω ) for analysis of algorithms; understand various notations for defining the complexity of algorithm; define the complexity of various well known algorithms, and learn the method to calculate time complexity of algorithm.
7
Introduction to Algorithms and Data Structures
1.2
MATHEMATICAL BACKGROUND
To analyse an algorithm is to determine the amount of resources (such as time and storage) that are utilized by to execute. Most algorithms are designed to work with inputs of arbitrary length. Algorithm analysis is an important part of a broader computational complexity theory, which provides theoretical estimates for the resources needed by any algorithm which solves a given computational problem. These estimates provide an insight into reasonable directions of search for efficient algorithms.
Definition of Algorithm
Algorithm should have the following five characteristic features: 1. 2. 3. 4. 5. Input Output Definiteness Effectiveness Termination.
Therefore, an algorithm can be defined as a sequence of definite and effective instructions, which terminates with the production of correct output from the given input.
Complexity classes
All decision problems fall into sets of comparable complexity, called complexity classes. The complexity class P is the set of decision problems that can be solved by a deterministic machine in polynomial time. This class corresponds to set of problems which can be effectively solved in the worst cases. We will consider algorithms belonging to this class for analysis of time complexity. Not all algorithms in these classes make practical sense as many of them have higher complexity. These are discussed later. The complexity class NP is a set of decision problems that can be solved by a nondeterministic machine in polynomial time. This class contains many problems like Boolean satisfiability problem, Hamiltonian path problem and the Vertex cover problem.
What is Complexity?
Complexity refers to the rate at which the required storage or consumed. It may be noted that we are dealing with complexity of an algorithm not that of a problem. For example, the simple problem could have high order of time complexity and vice-versa.
8
Asymptotic Analysis
Asymptotic analysis is based on the idea that as the problem size grows, the complexity can be described as a simple proportionality to some known function. This idea is incorporated in the “Big O”, “Omega” and “Theta” notation for asymptotic performance. The notations like “Little Oh” are similar in spirit to “Big Oh” ; but are rarely used in computer science for asymptotic analysis.
Analysis of Algorithms
Tradeoff between space and time complexity
We may sometimes seek a tradeoff between space and time complexity. For example, we may have to choose a data structure that requires a lot of storage in order to reduce the computation time. Therefore, the programmer must make a judicious choice from an informed point of view. The programmer must have some verifiable basis based on which a data structure or algorithm can be selected Complexity analysis provides such a basis. We will learn about various techniques to bind the complexity function. In fact, our aim is not to count the exact number of steps of a program or the exact amount of time required for executing an algorithm. In theoretical analysis of algorithms, it is common to estimate their complexity in asymptotic sense, i.e., to estimate the complexity function for reasonably large length of input ‘n’. Big O notation, omega notation Ω and theta notation Θ are used for this purpose. In order to measure the performance of an algorithm underlying the computer program, our approach would be based on a concept called asymptotic measure of complexity of algorithm. There are notations like big O, Θ, Ω for asymptotic measure of growth functions of algorithms. The most common being big-O notation. The asymptotic analysis of algorithms is often used because time taken to execute an algorithm varies with the input ‘n’ and other factors which may differ from computer to computer and from run to run. The essences of these asymptotic notations are to bind the growth function of time complexity with a function for sufficiently large input.
The Θ-Notation (Tight Bound)
This notation bounds a function to within constant factors. We say f(n) = Θ(g(n)) if there exist positive constants n0, c1 and c2 such that to the right of n0 the value of f(n) always lies between c1g(n) and c2g(n), both inclusive. The Figure 1.1 gives an idea about function f(n) and g(n) where f(n) = Θ(g(n)) . We will say that the function g(n) is asymptotically tight bound for f(n).
Figure 1.1 : Plot of f(n) = Θ(g(n)) 9
10 . we have to find three positive constants. So. the value of f(n) always lies on or below cg(n). Therefore 6n3 ≠ Θ (n2). But this fails for sufficiently large n. Now we may show that the function f(n) = 6n3 ≠ Θ (n2). let us assume that c3 and no exist such that 6n3 ≤ c3n2 for n ≥ no. c 1. f(n) = an + c is O(n) is also O(n2). let us show that the function f(n) = 1 2 n − 4n = Θ(n2).2: Plot of f(n) = O(g(n)) Mathematically for a given function g(n). we use O-notation to define the upper bound on a function by using a constant factor c. by selecting no = 13 c1 ≤ 1/39. Similarly. Figure 1. To prove this. we denote a set of functions by O(g(n)) by the following notation: O(g(n)) = {f(n) : There exists a positive constant c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0 } Clearly. it follows that 1/3 n2 – 4n = Θ (n2). the right hand inequality holds true. c2 and no. The big O notation (Upper Bound) This notation gives an upper bound for a function to within a constant factor. We write f(n) = O(g(n)) if there are positive constants n0 and c such that to the right of n0. c2 = 1/3 and no ≥ 13.2 shows the plot of f(n) = O(g(n)) based on big O notation. for c1 = 1/39 . Figure 1. c 2 and no such that c1n2 ≤ ⇒ 1 2 n – 4n ≤ c2 n2 for all n ≥ no 3 1 4 c1 ≤ − ≤ c2 3 n By choosing no = 1 and c2 ≥ 1/3 the right hand inequality holds true. Certainly. We can see from the earlier definition of Θ that Θ is a tighter notation than big-O notation.Introduction to Algorithms and Data Structures For example. there are other choices for c1. but O (n) is asymptotically tight whereas O(n2) is notation. 3 Now.
it is used to bound the best case running time of an algorithm. it is often used to describe the worst case running time of algorithms. Analysis of Algorithms The Ω-Notation (Lower Bound) This notation gives a lower bound for a function to within a constant factor. every polynomial of degree k can be bounded by the function nk. 3n + 4 is also O(n²). . f(n) = O(f(n)) for any function f. ak ∈ R. Figure 1. In other words. As big-O notation is upper bound of function. . 2. loga n = O(logb n) for any bases a.. Asymptotic notation Let us define a few functions in terms of above asymptotic notation. i. we may define Ω(g(n)) as the set of functions. Smaller order terms can be ignored in big-O notation. Basis of Logarithm can be ignored in big-O notation i. if there are positive constants n0 and c such that to the right of n0. b. We generally write O(log n) to denote a logarithm n to any base. Ω(g(n)) = { f(n) : there exists a constant c and n0 ≥ 0 such that 0 ≤ cg(n) ≤ f(n) for all n ≥ n0 }. but as a convention.3 depicts the plot of f(n) = Ω(g(n)). a1. as 4n + 3 is of O (n) = 3n3+ O (n2).Whereas in terms of Θ notation. Since Ω notation describes lower bound. too. as 2n2 + O (n) is O (n2) = O (n3) Example: f(n) = n² + 3n + 4 is O(n²). we use the tighter bound to the function. Here are some rules about big-O notation: 1. . the value of f(n) always lies on or above cg(n). every function is bounded by itself. Figure 1. . We write f(n) = Ω(g(n)). In other words. aknk + ak−1nk−1 + · · · + a1n + a0 = O(nk) for all k ≥ 0 and for all a0.e. since n² + 3n + 4 < 2n² for all n > 10. 11 3. .3: Plot of f(n) = Ω(g(n)) Mathematically for a given function g(n). O(n).e. By definition of big-O. Example: f(n) = 3n3 + 2n2 + 4n + 3 = 3n3 + 2n2 + O (n). the above function f(n) is Θ (n).
5. as the size of the problem being solved gets larger and larger? For example. compiler. This depends on the algorithm. Efficiency is dependent on the resources that are used by the algorithm. Any exponential function can be bound by the factorial function.e. the time and memory requirements of an algorithm which 12 .). an = O(n!) for any base a. 6. There are two important attributes to analyse an algorithm.Introduction to Algorithms and Data Structures 4. True/False If a function f(n) = O(g(n)) and h(n) = O(g(n)). logb n = O(nc) for any b (base of logarithm) and any positive exponent c > 0.e.) True/False The asymptotic complexity of algorithms depends on hardware and other factors. Check Your Progress 1 1) 2) 3) 4) 5) The function 9n+12 and 1000n+400000 are both O(n).3 PROCESS OF ANALYSIS The objective analysis of an algorithm is to find its efficiency. For example. • • • • CPU utilization (Time complexity) Memory utilization (Space complexity) Disk usage (I/O) Network usage (bandwidth). nk = O (bn. machine. True/False Give simplified big-O notation for the following growth functions: • • • • • 30n2 10n3 + 6n2 5nlogn + 30n log n + 3n log n + 32 …………………………………………………………………………………………………… …………………………………………………………………………………………………… …………………………………………………………………………………………………… …………………………………………………………………………………………………… 1. In other words. Any polynomial function can be bounded by an exponential function i. etc. what happens to the performance of an algorithm. They are: Performance: How much time/memory/disk/network bandwidth is actually used when a program is run. then f(n)+h(n) = O(g(n)). Complexity: How do the resource requirements of a program or algorithm scale (the growth of resource requirements as a function of input). True/False If f(n) = n2 + 3n and g(n) = 6000n + 34000 then O(f(n)) < O (g(n). Any logarithmic function can be bounded by a polynomial i. For example.
. It depends on the kinds of statements used in the program. but depend upon the size of the data and the contents of a subroutine. Linear-time method is “order n”: O(n). the time required will increase by four times. Loops and subroutine calls are not simple operations. The time required is proportional to the input size. Each memory access takes exactly 1 step. then. =. if. The total time can be found out by adding the times for all statements: Total time = time(statement 1) + time(statement 2) + . Time Complexity: The maximum time required by a Turing machine to execute on any input of length n.g. Statement k. then.. Consider the following example: Example 1: Simple sequence of statements Statement 1.. The time required is proportional to the square of the input size. The time required is constant independent of the input size. -. e. call) takes exactly 1 step. Space Complexity: The amount of storage space required by an algorithm varies with the size of the problem being solved.. The space complexity is normally expressed as an order of magnitude of the size of the problem. O(n2) means that if the size of the problem (n) doubles then the working storage (memory) requirement will become four times. 13 .computes the sum of 1000 numbers is larger than the algorithm which computes the sum of 2 numbers. If the input size doubles. the time to run the algorithm also doubles. + time(statement k). Determination of Time Complexity The RAM Model The random access model (RAM) of computation was devised by John von Neumann to study algorithms.. The complexity of algorithms using big-O notation can be defined in the following way for a problem of size n: • • • Constant-time method is “order 1” : O(1). . Statement 2.. If the input size doubles.. We will do all our design and analysis of algorithms based on RAM model of computation: • • • Analysis of Algorithms Each “simple” operation (+. Quadratic-time method is “order N squared”: O(n2). 
The process of analysis of algorithm (program) involves analyzing each step of the algorithm. Algorithms are studied in computer science because they are independent of machine and language. .
Thus. the inner loop also executes n times). the sequence of statements also executes n times. Assuming that each of the above statements involve only basic operation. i + +) { for (j = 0. assume the statements are simple unless noted otherwise. the total time for the loop is n * O(1). i < n. the outer loop executes n times.. i < n. consider a function that calculates partial sum of an integer n. Since we assume the time complexity of the statements are O(1). which is O(n). the inner loop executes m times. the time complexity is O(n * m). then the total complexity for the nested loop is O(n2). the time for each simple statement is constant and the total time is also constant: O(1). As a result of this. if sequence 1 is O(N2) and sequence 2 is O(1). the number of statements does not matter as it will increase the running time by a constant factor and the overall complexity will be same O(n). then the worst-case time for the whole if-then-else statement would be O(N2). Every time the outer loop executes. Example 4: Now. the loop executes n times. The worst-case time in this case is the slower of the two possibilities. j + +) { sequence of statements } } Here. Here. Example 4:nested for loop for (i = 0. if-else statement. we observe that.Introduction to Algorithms and Data Structures It may be noted that time required by each statement will greatly vary depending on whether each statement is simple (involves only basic operations) or otherwise. partial_sum. If we modify the conditional variables. statements in the inner loop execute a total of n * m times. So. 14 . For example.e. j < m. or sequence 2 will execute depending on the boolean condition. Example 3: for loop for (i = 0. i + +) { sequence of statements } Here. where the condition of the inner loop is j < n instead of j < m (i. Example 2: if-then-else statements In this example. if-then-else statements if (cond) { sequence of statements 1 } else { sequence of statements 2 } In this. 
either sequence 1 will execute. int psum(int n) { int i.
The function f(n) = 4n+6 = O(n) (by choosing c appropriately as 5). we can count a statement as one. since n2 > nlog(n) > 2n+3. for (i = 1. psum = 12 + 22+ 32 + …………. The for loop on line 2 are actually 2n+2 statements: • • • i = 1. then we see that cn > 2n+3. i. we will choose the smallest function that describes the order of the function and it is O(n). Line 1 2 3 4 Pseudocode For j=2 to length [A] do { key = A[j] i=j−1 while (i > 0) and (A[i] > key) do Cost factor c1 c2 c3 c4 c4 No. hence one statement. It is again reiterated here that smaller order terms and constants may be ignored while describing asymptotic notation. 4n+6 = Ω(n) (by choosing c = 1). and therefore Ω(n) too. and 2n+3 = Ω(n). statement is executed once for each value of i from 1 to n+1 (till the condition becomes false). if f(n) = 4n+6 instead of f(n) = 2n +3 in terms of big-O. By looking at the definition of Omega notation and Theta notation.e. too. this does not change the order of the function. i++ is executed once for each execution of the body of the loop. This is executed n times. } return partial_sum. the sum is 1+ (n+1) + n+1 = 2n+ 3 times. As we have to determine the running time for each statement in this program. The essence of this analysis is that in these asymptotic notation.e. then we see that cn < 2n+3. For example. big-O notation only provides a upper bound to the function. and should not worry about their relative execution time which may depend on several hardware and other implementation factors. Exact analysis of insertion sort: Let us consider the following pseudocode to analyse the exact runtime complexity of insertion sort. i. of iterations (n−1) + 1 (n−1) (n−1) ∑T j= 2 n j= 2 n j 5 { A[i+1] = A[I] ∑T −1 j 15 . However. partial_sum = 0. + n2 . as long as it is of the order of 1. it implies that 2n+3 = Θ(n) . this function is O(n). Since 2n+3 = O(n). As we have already noted earlier. because if we choose c=3. it is also clear that it is of Θ(n). 
i++) { partial_sum = partial_sum + i*i. i <= n. statement : simple assignment. O(1). In terms of big-O notation defined above. Ω and Θ./* Line 4 */ } This function returns the sum from i = 1 to n of i squared. we have to count the number of statements that are executed in this procedure. it is also O(nlog(n)) and O(n2). /* Line 1 */ /* Line 2 */ /* Line 3 */ Analysis of Algorithms Thus. Because if we choose c=1. i <= n. The code at line 1 and line 4 are one statement each. hence Ω(n) . and therefore 4n+6 = Θ(n). The statement is executed n+1 times.
The running time for any given size of input will be the average number of operations over all problem instances for a given size. The statement at line 4 will execute Tj number of times. It guarantees that. which indicates that the time complexity is linear. So. the case is where the list was already sorted. T (n) = c1n + c2(n −1) + c3(n−1) + c4 ∑T j =2 n j + c5 ∑T j =2 n j − 1 + c6 ∑T j =2 n j − 1 + c7 (n−1) Three cases can emerge depending on the initial configuration of the input list. First. Average case : This gives the average running time of algorithm. So. < T(n) worst . So. rest of the lines in the inner loop will not execute. second case is the case wherein the list is sorted in reverse order and third case is the case where in the list is in random order (unsorted). the list will be in some random order. the boolean condition at line 4 will be true for execution of line 1. T (n) best 16 < T(n) Avg. Average case : In most of the cases. Best Case : It guarantees that under any cirumstances the running time of algorithms will at least take this much time. step line 4 is executed ∑ j = n(n+1)/2 j =2 n − 1 times T (n) = c1n + c2(n −1) + c3(n −1) + c4 (n(n+1)/2 − 1) + c5(n(n −1)/2) + c6(n(n−1)/2) + c7 (n −1) = O (n2). The statements at lines 5 and 6 will execute Tj − 1 number of times (one step less) each Line 7 will excute (n−1) times So. The best case scenario will emerge when the list is already sorted.Introduction to Algorithms and Data Structures 6 7 } i = I –1 } c5 c6 ∑T −1 j j= 2 n A[I+1] = key } n−1 Tj is the time taken to execute the statement during jth iteration. Worst Case: Worst case running time is an upper bound for running time with any input. Best Case : If the list is already sorted then A[i] <= key at line 4. total time is the sum of time taken for each line multiplied by their cost factor. it neither sorted in ascending or descending order and the time complexity will lie some where between the best and the worst case. 
the algorithm will not take any longer than the worst case time. That is. Then. Worst Case: This case arises when the list is sorted in reverse order. T (n) = c1n + c2(n −1) + c3(n −1) + c4 (n −1) = O (n). irrespective of the type of input.
If the running time of algorithms is not good then it will 17 .Figure 1. } What is the running time of the above program segment in big O notation? ………………………………………………………………………………………… 4) Prove that if f(n) = n2 + 2n + 5 and g(n) = n2 then f(n) = O (g(n)). end. Average and Worst case scenarios Check Your Progress 2 1) 2) The set of algorithms whose order is O (1) would run in the same time. True/False Find the complexity of the following program in big O notation: printMultiplicationTable(int max){ for(int i = 1 . it is very much important to analyse the amount of memory used by a program.4 : Best. the prominence of runtime complexity is increasing.4 depicts the best. j <= max . Worst case Analysis of Algorithms Average case Best case Time Input size Figure 1. 5) How many times does the following for loop will run for (i=1. However. average and worst case run time complexities of algorithms. i <= n.4 CALCULATION OF STORAGE COMPLEXITY As memory is becoming more and more cheaper. 1. j + +) cout << (i * j) << “ “ . i *= 2) { j = 1. cout << endl . i*2) k = k + 1. i + +) { for(int j = 1 . i <= max . i<= n. } //for ……………………………………………………………………………………… 3) Consider the following program segment: for (i = 1.
Introduction to Algorithms and Data Structures take longer to execute. So. General recursive calls use linear space. n = t. } The space-complexity of the above algorithm is a constant. But. But.. Subtract n from m. Time Complexity: O(exp n) Space Complexity: O(exp n) Example: Find the greatest common divisor (GCD) of two integers. Consider the following example: Binary Recursion (A binary-recursive routine (potentially) calls itself twice). and also some space is allocated for remembering where each call should return to. It is therefore more critical than run time complexity. for n recursive calls. e. Each recursive call takes a constant amount of space and some space for local variables and function arguments. The analysis of recursive program with respect to space complexity is more complicated as the space used at any time is the total space used by all recursive calls active at that time. 18 . 3. the matter of respite is that memory is reutilized during the course of program execution. the space complexity is O(1). swap m and n.n). That is. it is usually just a matter of looking at the variable declarations and storage allocation calls. int n) /* The precondition are : m>0 and n>0. the space complexity is O(n). n is the GCD Code in C int gcd(int m. if it takes more memory (the space complexity is more) beyond the capacity of the machine then the program will not execute at all. We will analyse this for recursive and iterative programs. 2. */ { while( m > 0 ) { if( n > m ) { int t = m. } return n. } /* swap m and n*/ /* m >= n > 0 */ m − = n. n and t. It just requires space for three integers m. m = n. 4. number of variables. If n equals 0 or 1. then return 1 Recursively calculate f (n−1) Recursively calculate f (n−2) Return the sum of the results from steps 2 and 3. For an iterative program. m and n.g. Let g = gcd(m. length of an array etc. The algorithm for GCD may be defined as follows: While m is greater than zero: If n is greater than m. 1.
N/2.5 CALCULATION OF TIME COMPLEXITY Example 1: Consider the following of code : x = 4y + 3 z=z+1 p=1 As we have seen.. the space complexity is defined as follows: Space complexity of a Turing Machine: The (worst case) maximum length of the tape required to process an input string of length n. Example 2: Binary search Binary search in a sorted list is carried out by dividing the list into two parts based on the comparison of the key.. 1 The number of iterations (number of elements in the series) is not so evident from the above series. space can be reused during the execution of the program. log2 N−3.. The important concept behind space required is that unlike time. z and p are all scaler variables and the running time is constant irrespective of the value of x. if m = 1 there are n iterations) O(n). then there is just one iteration. O(1) Worst case : If n = 1. y... Here. 0 19 .. But. but the bottom line is that they will take constant amount of time. and unlimited time..then there are m iterations.. Thus. log2 N−2.. The space complexity of a computer program is the amount of memory required for its proper execution. The search interval will look like following after each iteration N. 8.. As the search interval halves each time. The real issue is.. the class PSPACE is the set of decision problems that can be solved by a Turing machine using a polynomial amount of memory. 2. the iteration takes place in the search. N/8 ... to execute.y. how many iterations take place? The answer depends on both m and n. we emphasize that each line of code may take different time. there is often a trade-off between the time and space required to run a program. Analysis of Algorithms In complexity theory... In formal definition... 4. log2 N −1.z and p. then log2 N . if we take logs of each element of the series.. 3. . Check Your Progress 3 1) Why space complexity is more critical than time complexity? 
……………………………………………………………………………………
……………………………………………………………………………………

2) What is the space complexity of the Euclid Algorithm?

……………………………………………………………………………………
……………………………………………………………………………………
As the sequence decrements by 1 each time, the total number of elements in the above series is log2 N + 1. So, the number of iterations is log2 N + 1, which is of the order of O(log2 N).

Example 3: Travelling Salesman problem

Given: n connected cities and the distances between them.
Find: a tour of minimum length that visits every city.

Solution: How many tours are possible? n*(n−1)* ... *1 = n!

Because n! > 2^(n−1), we have n! = Ω(2^n) (lower bound). As of now, there is no algorithm that finds a tour of minimum length as well as covers all the cities in polynomial time. However, there are numerous very good heuristic algorithms.

The complexity Ladder:

• T(n) = O(1). This is called constant growth. T(n) does not grow at all as a function of n; it is a constant. For example, array access has this characteristic: A[i] takes the same time independent of the size of the array A.

• T(n) = O(log2(n)). This is called logarithmic growth. T(n) grows proportional to the base 2 logarithm of n. Actually, the base of the logarithm does not matter. For example, binary search has this characteristic.

• T(n) = O(n). This is called linear growth. T(n) grows linearly with n. For example, looping over all the elements in a one-dimensional array of n elements would be of the order of O(n).

• T(n) = O(n log(n)). This is called nlogn growth. T(n) grows proportional to n times the base 2 logarithm of n. The time complexity of Merge Sort has this characteristic. In fact, no sorting algorithm that uses comparison between elements can be faster than n log n.

• T(n) = O(n^k). This is called polynomial growth. T(n) grows proportional to the k-th power of n. For example, selection sort is an O(n^2) algorithm. We rarely consider algorithms that run in time O(n^k) where k is bigger than 2, because such algorithms are very slow and not practical.

• T(n) = O(2^n). This is called exponential growth. T(n) grows exponentially. Exponential growth is the most dangerous growth pattern in computer science. Algorithms that grow this way are basically useless for anything except for very small input sizes.

The growth patterns above have been listed in order of increasing size. That is,

O(1) < O(log(n)) < O(n log(n)) < O(n^2) < O(n^3), ...

Table 1.1 compares various algorithms in terms of their complexities; Table 1.2 compares the typical running time of algorithms of different orders.

Array size | Logarithmic: log2 N | Linear: N | Quadratic: N^2 | Exponential: 2^N
8          | 3                   | 8         | 64             | 256
128        | 7                   | 128       | 16,384         | 3.4*10^38
256        | 8                   | 256       | 65,536        | 1.15*10^77
1000       | 10                  | 1000      | 1 million      | 1.07*10^301
100,000    | 17                  | 100,000   | 10 billion     | ……

Table 1.1: Comparison of various algorithms and their complexities

Notation   | Name                               | Example
O(1)       | Constant                           | Accessing one element A[i] of an array
O(log n)   | Logarithmic                        | Binary search
O(n)       | Linear                             | Looping over n elements of an array of size n (normally)
O(n log n) | Sometimes called “linearithmic”    | Merge sort
O(n^2)     | Quadratic                          | Worst time case for insertion sort
O(n^c)     | Polynomial, sometimes “geometric”  | Matrix multiplication
O(c^n)     | Exponential                        |
O(n!)      | Factorial                          |

Table 1.2: Comparison of typical running time of algorithms of different orders

1.6 SUMMARY

Computational complexity of algorithms is generally referred to by space complexity (the space required for running the program) and time complexity (the time required for running the program). In the field of computer science, the concept of runtime complexity has been studied vigorously, and enough research is being carried out to find more efficient algorithms for existing problems. We studied various asymptotic notations, namely the big-O, Omega and Theta notations, to describe the time complexity and space complexity of algorithms. These asymptotic orders of time and space complexity describe how good or bad an algorithm is for a sufficiently large input. We studied the process of calculating the runtime complexity of various algorithms. The exact analysis of insertion sort was discussed to describe the best case, worst case and average case scenarios.

1.7 SOLUTIONS / ANSWERS

Check Your Progress 1

1) True
2) True
3) False
4) False
5) O(n^2), O(n^3), O(log n), O(n log n), O(log n)

Check Your Progress 2

1) True
2) O(max*(2*max)) = O(2*max*max) = O(2*n*n) = O(2n^2) = O(n^2)
3) O(log(n))
5) log n

Check Your Progress 3

1) If the running time of an algorithm is not good, it will merely take longer to execute. But, if it takes more memory (the space complexity is more) beyond the capacity of the machine, then the program will not execute at all.
2) O(1)

1.8 FURTHER READINGS

1. Fundamentals of Data Structures in C++; E. Horowitz, S. Sahni and D. Mehta; Galgotia Publications.
2. Data Structures and Program Design in C; Kruse, C. L. Tondo and B. Leung; Pearson Education.

Reference Websites

http://en.wikipedia.org/wiki/Big_O_notation
http://www.webopedia.com
UNIT 2 ARRAYS

Structure

Page Nos.

2.0 Introduction 23
2.1 Objectives 24
2.2 Arrays and Pointers 24
2.3 Sparse Matrices 25
2.4 Polynomials 28
2.5 Representation of Arrays 30
    2.5.1 Row Major Representation
    2.5.2 Column Major Representation
2.6 Applications 31
2.7 Summary 32
2.8 Solutions/Answers 32
2.9 Further Readings 32

2.0 INTRODUCTION

This unit introduces a data structure called Arrays. The simplest form of array is a one-dimensional array, which may be defined as a finite ordered set of homogeneous elements stored in contiguous memory locations. That is, an array may contain all integers or all characters or any other data type, but may not contain a mix of data types.

The general form for declaring a single dimensional array is:

data_type array_name[expression];

where data_type represents the data type of the array (integer, char, float etc.), array_name is the name of the array, and expression indicates the number of elements in the array. For example, consider the following C declaration:

int a[100];

It declares an array of 100 integers.

The amount of storage required to hold an array is directly related to its type and size. For a single dimension array, the total size in bytes required for the array is computed as shown below:

Memory required (in bytes) = size of (data type) X length of array

The first array index value is referred to as its lower bound, and in C it is always 0; the maximum index value is called its upper bound. The number of elements in the array, called its range, is given by (upper bound − lower bound) + 1.

We store values in the arrays during program execution. Let us now see the process of initializing an array while declaring it:

int a[4] = {34, 60, 93, 2};
int b[] = {2, 4, 5, 5, 5};
float c[] = {-4, 1, 2, 6.81, -60};

We conclude the following facts from these examples:

(i) If the array is initialized at the time of declaration, then the dimension of the array is optional.
(ii) Till the array elements are not given any specific values, they contain garbage values.
A string constant is a one-dimensional array of characters terminated by a null character (\0). For example, consider the following string which is stored in an array:

“sentence\n”

Figure 2.1 shows the way a character array is stored in memory. Each character in the array occupies one byte of memory, and the last character is always ‘\0’. Note that ‘\0’ and ‘0’ are not the same. The elements of the character array are stored in contiguous memory locations.

s | e | n | t | e | n | c | e | \n | \0

Figure 2.1: String in Memory

C concedes the fact that the user would use strings very often and hence provides a shortcut for initialization of strings. For example, the string used above can also be initialized as

char name[] = “sentence\n”;

Note that, in this declaration, ‘\0’ is not necessary: C inserts the null character automatically. One of the most common arrays is a string, which is simply an array of characters terminated by a null character. The value of the null character is zero. A string may also be initialized character by character; for example:

char message[] = {‘e’, ‘x’, ‘a’, ‘m’, ‘p’, ‘l’, ‘e’, ‘\0’};

Till the array elements are not given any specific values, they contain garbage values.

Multidimensional arrays are defined in the same manner as one-dimensional arrays, except that a separate pair of square brackets is required for each subscript. Thus, a two-dimensional array will require two pairs of square brackets, a three-dimensional array will require three pairs of square brackets, and so on.

2.1 OBJECTIVES

After going through this unit, you will be able to:

• use Arrays as a proper data structure in programs;
• know the advantages and disadvantages of Arrays;
• use multidimensional arrays; and
• know the representation of Arrays in memory.

2.2 ARRAYS AND POINTERS

The C compiler does not check the bounds of arrays. It is your job to do the necessary work for checking boundaries wherever needed.
The format of declaration of a multidimensional array in C is given below:

data_type array_name[expr 1][expr 2] … [expr n];

where data_type is the type of the array, such as int, char etc., array_name is the name of the array, and expr 1, expr 2, …, expr n are positive valued integer expressions.

The schematic of a two-dimensional array of size 3 × 5 is shown in Figure 2.2.

Row 0: a[0][0] a[0][1] a[0][2] a[0][3] a[0][4]
Row 1: a[1][0] a[1][1] a[1][2] a[1][3] a[1][4]
Row 2: a[2][0] a[2][1] a[2][2] a[2][3] a[2][4]

Figure 2.2: Schematic of a Two-Dimensional Array

In the case of a two-dimensional array, the following formula yields the number of bytes of memory needed to hold it:

bytes = size of 1st index × size of 2nd index × size of (base type)

Pointers and arrays are closely related. As you know, an array name without an index is a pointer to the first element in the array. Consider the following array:

char p[10];

p and &p[0] are identical, because the address of the first element of an array is the same as the address of the array. Conversely, a pointer can be indexed as if it were declared to be an array. For example, consider the following program fragment:

int *x, a[10];
x = a;
x[5] = 100;
*(x+5) = 100;

Both assignment statements place the value 100 in the sixth element of a. Furthermore, the (0,4) element of a two-dimensional array may be referenced in the following two ways: either by array indexing, a[0][4], or by the pointer *((int *)a + 4). In general, for any two-dimensional array, a[j][k] is equivalent to:

*((base type *)a + (j * row length) + k)

2.3 SPARSE MATRICES

Matrices with a good number of zero entries are called sparse matrices.
Consider the matrices of Figure 2.3. A triangular matrix is a square matrix in which all the elements either above or below the main diagonal are zero. Triangular matrices are sparse matrices. A tridiagonal matrix is a square matrix in which all the elements except for the main diagonal and the diagonals on the immediate upper and lower side are zeroes. Tridiagonal matrices are also sparse matrices.

[Figure 2.3: (a) Triangular Matrix (b) Tridiagonal Matrix]

Let us consider a sparse matrix from the storage point of view. Suppose that the entire sparse matrix is stored. Then, a considerable amount of the memory which stores the matrix consists of zeroes. This is nothing but wastage of memory. In real life applications, such wastage may count to megabytes. So, an efficient method of storing sparse matrices has to be looked into.

Figure 2.4 shows a sparse matrix of order 7 × 6.

[Figure 2.4: Representation of a sparse matrix of order 7 × 6]

A common way of representing the non zero elements of a sparse matrix is the 3-tuple form. The first row of the 3-tuple representation always specifies the number of rows, the number of columns and the number of non zero elements in the matrix. For the matrix of Figure 2.4, the number 7 represents the total number of rows and the number 6 represents the total number of columns in the matrix. The number 8 represents the total number of non zero elements in the matrix. Each non zero element is stored from the second row onwards, with the 1st and 2nd elements of the row indicating the row number and column number respectively in which the element is present in the original matrix. The 3rd element in the row stores the actual value of the non zero element.

[Figure 2.5: 3-tuple representation of the matrix of Figure 2.4]

The following program 1.1 accepts a matrix as input, which is sparse, and prints the corresponding 3-tuple representation.

Program 1.1

/* The program accepts a matrix as input and prints the 3-tuple representation of it */
#include<stdio.h>
void main()
{
    int a[5][5], rows, columns, i, j;

    printf("enter the order of the matrix. The order should be less than 5 × 5:\n");
    scanf("%d %d", &rows, &columns);
    printf("Enter the elements of the matrix:\n");
    for(i = 0; i < rows; i++)
        for(j = 0; j < columns; j++)
        {
            scanf("%d", &a[i][j]);
        }
    printf("The 3-tuple representation of the matrix is:\n");
    for(i = 0; i < rows; i++)
        for(j = 0; j < columns; j++)
        {
            if (a[i][j] != 0)
            {
                printf("%d %d %d\n", (i+1), (j+1), a[i][j]);
            }
        }
}

Output:
enter the order of the matrix. The order should be less than 5 × 5:
3 3
Enter the elements of the matrix:
1 2 3
0 1 0
0 0 4
The 3-tuple representation of the matrix is:
1 1 1
1 2 2
1 3 3
2 2 1
3 3 4
The program initially prompted for the order of the input matrix with a warning that the order should not be greater than 5 × 5. After accepting the order, it prompts for the elements of the matrix. After accepting the matrix, it checks each element of the matrix for a non zero value. If the element is non zero, then it prints the row number and column number of that element along with its value.

Check Your Progress 1

1) If the array is _______ at the time of declaration, then the dimension of the array is optional.
2) A sparse matrix is a matrix which is having good number of _____ elements.
3) True/False

2.4 POLYNOMIALS

Polynomials like 5x4 + 2x3 + 7x2 + 10x – 8 can be represented using arrays. Arithmetic operations like addition and multiplication of polynomials are common, and most often we need to write a program to implement these operations.

The simplest way to represent a polynomial of degree ‘n’ is to store the coefficients of the (n+1) terms of the polynomial in an array. To achieve this, each element of the array should consist of two values, namely, coefficient and exponent. While maintaining the polynomial, it is assumed that the exponent of each successive term is less than that of the previous term. Once we build an array to represent a polynomial, we can use such an array to perform common polynomial operations like addition and multiplication.

Program 1.2 accepts two polynomials as input and adds them.

Program 1.2

/* The program accepts two polynomials as input and prints the resultant polynomial due to the addition of input polynomials */
#include<stdio.h>
void main()
{
    int poly1[6][2], poly2[6][2], term1, term2, match, proceed, i, j;

    printf("Enter the number of terms in the first polynomial. They should be less than 6:\n");
    scanf("%d", &term1);
    printf("Enter the coefficient and exponent of each term of the first polynomial:\n");
    for(i = 0; i < term1; i++)
        scanf("%d %d", &poly1[i][0], &poly1[i][1]);
    printf("Enter the number of terms in the second polynomial. They should be less than 6:\n");
    scanf("%d", &term2);
    printf("Enter the coefficient and exponent of each term of the second polynomial:\n");
    for(i = 0; i < term2; i++)
        scanf("%d %d", &poly2[i][0], &poly2[i][1]);
    printf("The resultant polynomial due to the addition of the input two polynomials:\n");
    /* terms whose exponents match: add their coefficients */
    for(i = 0; i < term1; i++)
    {
        match = 0;
        for(j = 0; j < term2; j++)
        {
            if (match == 0)
                if (poly1[i][1] == poly2[j][1])
                {
                    printf("%d %d\n", (poly1[i][0] + poly2[j][0]), poly1[i][1]);
                    match = 1;
                }
        }
    }
    /* terms of the first polynomial with no matching exponent in the second */
    for(i = 0; i < term1; i++)
    {
        proceed = 1;
        for(j = 0; j < term2; j++)
        {
            if (proceed == 1)
            {
                if (poly1[i][1] != poly2[j][1])
                    proceed = 1;
                else
                    proceed = 0;
            }
        }
        if (proceed == 1)
            printf("%d %d\n", poly1[i][0], poly1[i][1]);
    }
    /* terms of the second polynomial with no matching exponent in the first */
    for(i = 0; i < term2; i++)
    {
        proceed = 1;
        for(j = 0; j < term1; j++)
        {
            if (proceed == 1)
            {
                if (poly2[i][1] != poly1[j][1])
                    proceed = 1;
                else
                    proceed = 0;
            }
        }
        if (proceed == 1)
            printf("%d %d\n", poly2[i][0], poly2[i][1]);
    }
}

Program 1.2: Addition of two polynomials

Output:
Enter the number of terms in the first polynomial. They should be less than 6: 5
Enter the number of terms in the second polynomial. They should be less than 6: 4
Enter the coefficient and exponent of each term of the first polynomial:
1 2
2 4
3 6
1 8
5 7
Enter the coefficient and exponent of each term of the second polynomial:
5 2
6 9
3 6
5 7
The resultant polynomial due to the addition of the input two polynomials:
6 2
6 6
10 7
2 4
1 8
6 9

The program initially prompted for the number of terms of the two polynomials. Then, it prompted for the entry of the terms of the two polynomials one after another. Initially, it adds the coefficients of the corresponding terms of both the polynomials whose exponents are the same. Then, it prints the terms of the first polynomial which do not have corresponding terms in the second polynomial with the same exponent. Finally, it prints the terms of the second polynomial which do not have corresponding terms in the first polynomial.

2.5 REPRESENTATION OF ARRAYS

It is not uncommon to find a large number of programs which process the elements of an array in sequence. But, does it mean that the elements of an array are also stored in sequence in memory? The answer depends on the operating system under which the program is running. If they are being stored in sequence, then how are they sequenced? Is it that the elements are stored row wise or column wise? Again, it depends on the operating system. The former is called row major order and the latter is called column major order.

2.5.1 Row Major Representation

The first method of representing a two-dimensional array in memory is the row major representation. Under this representation, the first row of the array occupies the first set of the memory locations reserved for the array, the second row occupies the next set, and so forth.

Let us consider the following two-dimensional array:

a b c d
e f g h
i j k l

To make its equivalent row major representation, we perform the following process: move the elements of the second row, starting from the first element, to the memory locations adjacent to the last element of the first row. When this step is applied to all the rows except for the first row, you have a single row of elements. This is the row major representation.

By application of the above mentioned process, we get {a, b, c, d, e, f, g, h, i, j, k, l}.

Row 0 | Row 1 | Row 2 | … | Row i

Figure 2.6: Schematic of a Row major representation of an Array

2.5.2 Column Major Representation

The second method of representing a two-dimensional array in memory is the column major representation. Under this representation, the first column of the array occupies the first set of the memory locations reserved for the array, the second column occupies the next set, and so forth. Consider the same two-dimensional array:

a b c d
e f g h
i j k l

To make its equivalent column major representation, we perform the following process: transpose the elements of the array. Then, the representation will be the same as that of the row major representation. By application of the above mentioned process, we get {a, e, i, b, f, j, c, g, k, d, h, l}.

Col 0 | Col 1 | Col 2 | … | Col i

Figure 2.7: Schematic of a Column major representation of an Array

Check Your Progress 2

1) An array can be stored either __________ or _________.
2) In __________, the elements of an array are stored row wise.
3) In __________, the elements of an array are stored column wise.

2.6 APPLICATIONS

Arrays are simple, but reliable to use in more situations than you can count. Arrays are used in those problems where the number of items to be stored is fixed. They are easy to traverse, search and sort. It is very easy to manipulate an array rather than other, more elaborate data structures. Arrays are used in those situations where the size of the array can be established beforehand. Also, they are used in situations where the insertions and deletions are minimal or not present, since insertion and deletion operations will lead to wastage of memory or will increase the time complexity of the program due to the reshuffling of elements.
2.7 SUMMARY
In this unit, we discussed the data structure arrays from the application point of view and representation point of view. Two applications namely representation of a sparse matrix in a 3-tuple form and addition of two polynomials are given in the form of programs. The format for declaration and utility of both single and two-dimensional arrays are covered. Finally, the most important issue of representation was discussed. As part of it, row major and column major orders are discussed.
2.8 SOLUTIONS / ANSWERS
Check Your Progress 1
1) 2) 3) Initialized Zero False
Check Your Progress 2
1) 2) 3) Row wise, column wise Row major representation Column major representation
2.9 FURTHER READINGS
Reference Books 1. 2. Data Structures using C and C++, Yedidyah Langsam, Moshe J.Augenstein, Aaron M Tanenbaum, Second Edition, PHI Publications. Data Structures, Seymour Lipscutz, Schaum’s outline series, McGraw Hill
Reference Websites
UNIT 3 LISTS
Structure
Page Nos.

3.0 Introduction 33
3.1 Objectives 33
3.2 Abstract Data Type-List 33
3.3 Array Implementation of Lists 34
3.4 Linked Lists-Implementation 38
3.5 Doubly Linked Lists-Implementation 44
3.6 Circularly Linked Lists-Implementation 46
3.7 Applications 54
3.8 Summary 56
3.9 Solutions/Answers 56
3.10 Further Readings 56
3.0 INTRODUCTION
In the previous unit, we have discussed arrays. Arrays are data structures of fixed size. Insertion and deletion involves reshuffling of array elements. Thus, array manipulation is time-consuming and inefficient. In this unit, we will see abstract data type-lists, array implementation of lists and linked list implementation, Doubly and Circular linked lists and their applications. In linked lists, items can be added or removed easily to the end or beginning or even in the middle.
3.1 OBJECTIVES
After going through this unit, you will be able to: • • • • • define and declare Lists; understand the terminology of Singly linked lists; understand the terminology of Doubly linked lists; understand the terminology of Circularly linked lists, and use the most appropriate list structure in real life situations.
3.2 ABSTRACT DATA TYPE-LIST
Abstract Data Type (ADT) is a useful tool for specifying the logical properties of data type. An ADT is a collection of values and a set of operations on those values. Mathematically speaking, “a TYPE is a set, and elements of set are Values of that type”.
ADT List
A list of elements of type T is a finite sequence of elements of type T together with the operations of create, update, delete, testing for empty, testing for full, finding the size, traversing the elements. In defining Abstract Data Type, we are not concerned with space or time efficiency as well as about implementation details. The elements of a list may be integers, characters, real numbers and combination of multiple data types. Consider a real world problem, where we have a company and we want to store the details of employees. To store this, we need a data type which can store the type details containing names of employee, date of joining, etc. The list of employees may
increase depending on the recruitment and may decrease on retirements or termination of employees. To make it very simple and for understanding purposes, we take only the name field of the employee and ignore the date of joining etc. The operations we have to perform on this list of employees are creation, insertion, deletion, visiting, etc. We define employee_list as

typedef struct
{
    char name[20];
    ……………….
    ………………….
} emp_list;

Operations on emp_list can be defined as

Create_emplist (emp_list * emp_list)
{
    /* Here, we will be writing the create function with the help of the ‘C’ programming language. */
}

The list has been created; name is a valid entry in emp_list, and position p specifies the position in the list where name has to be inserted:

insert_emplist (emp_list * emp_list, char *name, int position)
{
    /* Here, we will be writing the insert function with the help of the ‘C’ programming language. */
}

delete_emplist (emp_list * emp_list, char *name)
{
    /* Here, we will be writing the delete function with the help of the ‘C’ programming language. */
}

visit_emplist (emp_list * emp_list)
{
    /* Here, we will be writing the visit function with the help of the ‘C’ programming language. */
}

The list can be implemented in two ways: the contiguous (Array) implementation and the linked (pointer) implementation. In the contiguous implementation, the entries in the list are stored next to each other within an array. The linked list implementation uses pointers and dynamic memory allocation. We will be discussing the array and linked list implementations in our next sections.
3.3 ARRAY IMPLEMENTATION OF LISTS
In the array implementation of lists, we will use an array to hold the entries and a separate counter to keep track of the number of positions that are occupied. A structure will be declared which consists of the array and the counter:

typedef struct
{
    int count;
    int entry[100];
} list;

For simplicity, we have taken the list entry as an integer. Of course, we can also take the list entry as a structure of an employee record or student record, etc.
34
Count = 7
Position: 1  2  3  4  5  6  7
Value:   11 22 33 44 55 66 77

Insertion

In the array implementation of lists, elements are stored in continuous locations. To add an element to the list at the end, we can add it without any problem. But, suppose we want to insert the element at the beginning or middle of the list. Then, we have to rewrite all the elements after the position where the element has to be inserted: we have to shift the (n)th element to the (n+1)th position, the (n–1)th element to the (n)th position, and this continues until the (r)th element is shifted to the (r+1)th position, where ‘r’ is the position of insertion and ‘n’ is the number of elements in the list. Then, the count is incremented.

For example, if we want to add the element ‘35’ after the element ‘33’ in the list above, we have to shift 77 to the 8th position, 66 to the 7th position, 55 to the 6th position and 44 to the 5th position, and then place 35 in the 4th position. [Figure: the five shifting steps of the insertion]

Program 3.1 will demonstrate the insertion of an element at the desired position.

/* Inserting an element into contiguous list (Linear Array) at specified position */
/* contiguous_list.C */
#include<stdio.h>

/* definition of linear list */
typedef struct
{
    int data[10];
    int count;
} list;

/* prototypes of functions */
void insert(list *, int, int);
void create(list *);
void traverse(list *);

/* definition of the insert function */
void insert(list *start, int position, int element)
{
    int temp = start->count;
    while (temp >= position)
    {
        start->data[temp + 1] = start->data[temp];
        temp--;
    }
    start->data[position] = element;
    start->count++;
}

/* definition of create function to read data values into the list */
void create(list *start)
{
    int i = 0, test = 1;
    while (test)
    {
        fflush(stdin);
        printf("\n input value for: %d: (zero to come out) ", i);
        scanf("%d", &start->data[i]);
        if (start->data[i] == 0)
            test = 0;
        else
            i++;
    }
    start->count = i;
}

/* output function to print on the console */
void traverse(list *start)
{
    int i;
    for (i = 0; i < start->count; i++)
    {
        printf("\n Value at the position: %d: %d ", i, start->data[i]);
    }
}

/* main function */
void main()
{
    int position, element;
    list l;
    create(&l);
    printf("\n Entered list as follows:\n");
    fflush(stdin);
    traverse(&l);
    fflush(stdin);
    printf("\n input the position where you want to add a new data item:");
    scanf("%d", &position);
    fflush(stdin);
    printf("\n input the value for the position:");
    scanf("%d", &element);
    insert(&l, position, element);
    traverse(&l);
}

Program 3.1: Insertion of an element into a linear array

Deletion

To delete an element in the list at the end, we can delete it without any problem. But, suppose we want to delete the element at the beginning or middle of the list. Then, we have to rewrite all the elements after the position where the element that has to be deleted exists: we have to shift the (r+1)th element to the rth position, the (r+2)th element to the (r+1)th position, and this continues until the (n)th element is shifted to the (n–1)th position, where ‘r’ is the position of the deleted element and ‘n’ is the number of elements in the list. Then, the count is decremented.

For example, if we want to delete the element ‘44’ from the list 11 22 33 44 55 66 77, we have to shift 55 to the 4th position, 66 to the 5th position and 77 to the 6th position. [Figure: the three shifting steps of the deletion]

Program 3.2 will demonstrate the deletion of an element from the linear array.

/* declaration of delete_list function */
void delete_list(list *, int);

/* definition of delete_list function */
/* the position of the element is given by the user and the element is deleted from the list */
void delete_list(list *start, int position)
{
    int temp = position;
    printf("\n information which we have to delete: %d", start->data[position]);
    while (temp <= start->count - 1)
    {
        start->data[temp] = start->data[temp + 1];
        temp++;
    }
    start->count = start->count - 1;
}

/* main function */
void main()
{
    ………………
    printf("\n input the position of element you want to delete:");
    scanf("%d", &position);
    fflush(stdin);
    delete_list(&l, position);
    traverse(&l);
}

Program 3.2: Deletion of an element from the linear array

3.4 LINKED LISTS - IMPLEMENTATION

The Linked list is a chain of structures in which each structure consists of data as well as a pointer, which stores the address (link) of the next logical structure in the list. A linked list is a data structure used to maintain a dynamic series of data. Think of a linked list as a line of bogies of a train, where each bogie is connected on to the next bogie. If you know where the first bogie is, you can follow its link to the next one. By following links, you can find any bogie of the train. When you get to a bogie that isn’t holding (linked) on to another bogie, you know you are at the end. Linked lists work in the same way, except programmers usually refer to nodes instead of bogies.

We will now see how the linked list is stored in the memory of the computer. In the following Figure 3.1, we can see that start is a pointer which is pointing to the node which contains data as madan; the node madan is pointing to the node mohan; and the last node, babu, is not pointing to any node. 1000, 1050 and 1200 are memory addresses.

start = 1000 → [madan | 1050] → [mohan | 1200] → [babu | null]

Figure 3.1: A Singly linked list

Consider the following definition:

typedef struct node
{
    int data;
    struct node *next;
} list;

A single node is defined in the same way as any other user defined type or object, except that it also contains a pointer to a variable of the same type as itself.
/* initialize list head to NULL */ if (head == NULL) { printf("The list is empty!\n"). printf("\n number of elements in the list %d \n". It is as simple as that! You now have a linked list data structure. traverse(head).h> #define NULL 0 struct linked_list { int data. int main() { list *head = NULL. head=(list *)malloc(sizeof(list)). } void create(list *start) { printf("inputthe element -1111 for coming oout of the loop\n"). &start->data).h> typedef struct node { int data. struct linked_list *next. } } Program 3. void main() { list *head.h> #include<stdlib. }. printf(" \n traversing the list \n"). #include <stdio. called the “head”.4).Once you have a definition for a list node. You can see if the list is empty. void create(list *). #include<stdio. int count(list *). void traverse(list *). typedef struct linked_list list. } list. List can be defined as list *head. we shall look to the process of addition of new nodes to the list with the function create_list(). you can create a list simply by declaring a pointer to the first element. It isn’t altogether useful at the moment. We will be seeing how to declare and define list-using pointers in the following program 3. scanf("%d". 39 .3.3: Creation of a linked list Lists In the next example (Program 3. struct node *next. A pointer is generally used instead of a regular variable. create(head). count(head)).
start->data). } } void traverse(list *start) { if(start->next!=NULL) { printf("%d --> ". then insert the new element as the end element.2. } } int count(list *start) { if(start->next == NULL) return 0. find function returns the address of the found element to the insert_list function.3 depicts the scenario of a linked list after insertion of a new element into the linked list of Figure 3. then insert the new element as start element. traverse(start->next). else. } Program 3. else return (1+count(start->next)).4: Insertion of elements into a Linked list ALGORITHM (Insertion of element into a linked list) Step 1 Step 2 Step 3 Step 4 Step 5 Begin if the list is empty or a new element comes before the start (head) element. Before insertion f next f NULL new element NULL 40 .2 depicts the scenario of a linked list of two elements and a new element which has to be inserted between them. insert the new element in the list by using the find function. create(start->next). Figure 3. Figure 3. else. End. if the new element comes after the last element.Introduction to Algorithms and Data Structures if(start->data == -1111) start->next=NULL. else { start->next=(list*)malloc(sizeof(list)).
Figure 3.2: A linked list of two elements and an element that is to be inserted

Figure 3.3: Insertion of a new element into linked list

Program 3.5 depicts the code for the insertion of an element into a linked list by searching for the position of insertion with the help of a find function. The find function returns the address of the found element to the insert_list function.

INSERT FUNCTION

/* prototypes of insert and find functions */
list * insert_list(list *);
list * find(list *, int);

/* definition of insert function */
list * insert_list(list *start)
{
    list *n, *f;
    int key, element;
    printf("enter value of new element");
    scanf("%d", &element);
    printf("enter value of key element");
    scanf("%d", &key);
    if (start->data == key)
    {
        n = (list *) malloc(sizeof(list));
        n->data = element;
        n->next = start;
        start = n;
    }
    else
    {
        f = find(start, key);
        if (f == NULL)
            printf("\n key is not found \n");
        else
        {
            n = (list *) malloc(sizeof(list));
            n->data = element;
            n->next = f->next;
            f->next = n;
        }
    }
    return (start);
}

/* definition of find function */
list * find(list *start, int key)
{
    if (start->next->data == key)
        return (start);
    if (start->next->next == NULL)
        return (NULL);
    else
        return find(start->next, key);
}

void main()
{
    list *head;
    void create(list *);
    int count(list *);
    void traverse(list *);
    head = (list *) malloc(sizeof(list));
    create(head);
    printf(" \n traversing the created list \n");
    traverse(head);
    printf("\n number of elements in the list %d \n", count(head));
    head = insert_list(head);
    printf(" \n traversing the list after insert \n");
    traverse(head);
}

Program 3.5: Insertion of an element into a linked list at a specific position

ALGORITHM (Deletion of an element from the linked list)

Step 1  Begin
Step 2  if the list is empty, then the element cannot be deleted
Step 3  else, if the element to be deleted is the first node, then make the start (head) point to the second element
Step 4  else, delete the element from the list by calling the find function and returning the found address of the element
Step 5  End

Figure 3.4 depicts the process of deletion of an element from a linked list.
int count(list *).4: Deletion of an element from the linked list (Dotted line depicts the link prior to deletion) Program 3. head=(list *)malloc(sizeof(list)). It includes a function which specifically searches for the element to be deleted.Lists f key node Figure 3. list * f. * temp. f->next=temp. DELETE_LIST FUNCTION /* prototype of delete_function */ list *delete_list(list *). &key). if(start->data == key) { temp=start->next. 43 . list *find(list *. scanf(“%d”. /*definition of delete_list */ list *delete_list(list *start) { int key. start=temp. } void main() { list *head. } } return(start).key). else { temp = f->next->next. free(f->next). void create(list *). printf(“\n enter the value of element to be deleted \n”). } else { f = find(start. free(start).6 depicts the deletion of an element from the linked list. if(f==NULL) printf(“\n key not fund”). void traverse(list *). create(head). int).
5) is defined as a collection of elements. head=insert(head). } Program 3. traverse(head). that is. traverse(head).5: A Doubly Linked List Doubly linked list (Figure 3. traverse(head). To enable this.Introduction to Algorithms and Data Structures printf(“ \n traversing the created list \n”). traversing is possible only in one direction. each element consisting of three fields: • • • pointer to left element. head=delete_list(head). data field.6: Deletion of an element from the linked list by searching for element that is to be deleted 3. right link of the rightmost element is set to NULL which means that there is no right element to that. RIGHT LINK LEFT LINK RIGHT LINK LEFT LINK NULL DATA DATA DATA NULL Figure 3. count(head)). Left link of the leftmost element is set to NULL which means that there is no left element to that. we have to traverse the list in both directions to improve performance of algorithms. And. ALGORITHM (Creation) Step 1 Step 2 begin define a structure ELEMENT with fields Data Left pointer Right pointer declare a pointer by name head and by using (malloc()) memory allocation function allocate space for one element and store the address in head pointer Head = (ELEMENT *) malloc(sizeof(ELEMENT)) read the value for head->data head->left = NULL head->right = (ELEMENT *) malloc(size of (ELEMENT)) repeat step3 to create required number of elements Step 3 Step 4 Step 5 44 .5 DOUBLY LINKED LISTS-IMPLEMENTATION In a singly linked list. we require links in both the directions. We have seen this before. the element should have pointers to the right element as well as to its left element. printf(“ \n traversing the list after insert \n”). In single linked list. Sometimes. and pointer to right element. This type of list is called doubly linked list. each element contains a pointer to the next element. printf(“\n number of elements in the list %d \n”. printf(“ \n traversing the list after delete_list \n”).
start=start->left. }while(start->right). }. struct dl_list *right.7 depicts the creation of a Doubly linked list. /* CREATION OF A DOUBLY LINKED LIST */ /* DBLINK. do { printf(" %d =". } 45 .C */ # include <stdio. if(start->data != -1111) { start->right = (dlist *) malloc(sizeof(dlist)). /* Show value of last start only one time */ printf("\n traversing the list using left pointer\n").Step 6 end Lists Program 3. /* Function creates a simple doubly linked list */ void dl_create(dlist *start) { printf("\n Input the values of the element -1111 to come out : "). } /* Display the list */ void traverse (dlist *start) { printf("\n traversing the list using right pointer\n"). struct dl_list *left. start->data). void dl_create (dlist *). start->data). start = start->left. void traverse (dlist *). } else start->right = NULL. do { printf(" %d = ". start->right->left = start. } while (start->right). start = start->right. &start->data). scanf("%d". dl_create(start->right).h> # include <malloc.h> struct dl_list { int data. typedef struct dl_list dlist. start->right->right = NULL.
head->left=NULL. The chains do not indicate first or last element. } Program 3. and Traversing Figure 3.6 CIRCULARLY LINKED LISTS IMPLEMENTATION A linked list in which the last element points to the first element is called CIRCULAR linked list. dl_create(head). The external pointer provides a reference to starting element. Figure 3. last element does not contain the NULL pointer.Introduction to Algorithms and Data Structures void main() { dlist *head. head = (dlist *) malloc(sizeof(dlist)). traverse(head).6: A Circular Linked List head 46 .6 depicts a Circular linked list.7: Creation of a Doubly Linked List OUTPUT Input the values of the element -1111 to come out : 1 Input the values of the element -1111 to come out : 2 Input the values of the element -1111 to come out : 3 Input the values of the element -1111 to come out : -1111 Created doubly linked list is as follows traversing the list using right pointer 1=2=3= traversing the list using left pointer 3=2=1= 3. Deletion. printf("\n Created doubly linked list is as follows"). head->right=NULL. The possible operations on a circular linked list are: • • • Insertion.
h> #define NULL 0 struct linked_list { int data. void main() { void create_clist(clist *). create_clist(start->next). else { start->next=(clist*)malloc(sizeof(clist)). traverse(head).8 depicts the creation of a Circular linked list. s=head. &start->data).h> #include<stdlib. printf("\n number of elements in the clist %d \n". if(start->data == -1111) start->next=s. } void create_clist(clist *start) { printf("input the element -1111 for coming out of the loop\n"). printf(" \n traversing the created clist and the starting address is %u \n". clist *head. count(head)). head=(clist *)malloc(sizeof(clist)).6 : A Circular Linked List Program 3. *s. #include<stdio. struct linked_list *next. head). void traverse(clist *). }. } } void traverse(clist *start) { 47 . typedef struct linked_list clist.Lists Figure 3. int count(clist *). scanf("%d". create_clist(head).
8: Creation of a Circular linked list ALGORITHM (Insertion of an element into a Circular Linked List) Step 1 Step 2 Step 3 Begin if the list is empty or new element comes before the start (head) element.Introduction to Algorithms and Data Structures if(start->next!=s) { printf("data is %d \t next element address is %u\n". start>next).7. traverse(start->next). then insert the new element at the end element and adjust the pointer of last element to the start element. else. if the new element comes after the last element. else return(1+count(start->next)). End. then insert the new element as start element. start->data. The new element is inserted before the ‘key’ element by using above algorithm. insert the new element in the list by using the find function.8 depicts a Circular linked list with the new element inserted between first and second nodes of Figure 3.start->data. Step 4 Step 5 If new item is to be inserted after an existing element. } int count(clist *start) { if(start->next == s) return 0. then. find function returns the address of the found element to the insert_list function. } if(start->next == s) printf("data is %d \t next element address is %u\n". else. call the find function recursively to trace the ‘key’ element. Figure 3. } Program 3. start>next).7 depicts the Circular linked list with a new element that is to be inserted. 48 f next . Figure 3.
&x). #include<stdio.&key). printf("eneter value of key element").8: A Circular Linked List after insertion of the new element between first and second nodes (Dotted lines depict the links prior to insertion) Program 3. struct linked_list *next.h> #define NULL 0 struct linked_list { int data. /*definition of insert_clist function */ clist * insert_clist(clist *start) { clist *n. *s.h> #include<stdlib. *n1. clist * insert_clist(clist *).Lists f f next new element NULL Figure 3. if(start->data ==key) { 49 . clist *head. int key. x. /* prototype of find and insert functions */ clist * find(clist *. int). printf("enter value of new element"). scanf("%d".9 depicts the code for insertion of a node into a Circular linked list. scanf("%d". typedef struct linked_list clist. }.
n->next=n1->next. scanf("%d". if(start->data == -1111) start->next=s. } else { n1 = find(start. int count(clist *).head). traverse(head). start=n. n->data=x. else { start->next=(clist*)malloc(sizeof(clist)). s=head. n->next = start. &start->data). void traverse(clist *). else find(start->next. count(head)). printf(" \n traversing the created clist and the starting address is %u \n". printf("\n traversing the clist after insert_clist and starting address is %u \n". key). head=insert_clist(head). } } return(start). head). printf("\n number of elements in the clist %d \n". int key) { if(start->next->data == key) return(start). traverse(head). n1->next=n. } /*definition of find function */ clist * find(clist *start. head=(clist *)malloc(sizeof(clist)). create_clist(head). 50 . if(start->next->next == NULL) return(NULL). if(n1 == NULL) printf("\n key is not found\n"). n->data=x. } void main() { void create_clist(clist *). } void create_clist(clist *start) { printf("inputthe element -1111 for coming oout of the loop\n"). key). else { n=(clist*)malloc(sizeof(clist)). create_clist(start->next).Introduction to Algorithms and Data Structures n=(clist *)malloc(sizeof(clist)).
Step 5 End.h> #include<stdlib. then element cannot be deleted. start>next). start->data. ALGORITHM (Deletion of an element from a Circular Linked List) Step 1 Begin Step 2 if the list is empty.} } void traverse(clist *start) { if(start->next!=s) { printf("data is %d \t next element address is %u\n". then make the start (head) to point to the second element. Step 4 else.h> #define NULL 0 struct linked_list { int data. struct linked_list *next.9 A Circular Linked List from which an element was deleted (Dotted line shows the linked that existed prior to deletion) Program 3. } if(start->next == s) printf("data is %d \t next element address is %u\n".start->data. Step 3 else. }. 51 . delete the element from the list by calling find function and returning the found address of the element. Lists f f next Figure 3. } int count(clist *start) { if(start->next == s) return 0.10 depicts the code for the deletion of an element from the Circular linked list.9 Insertion of a node into a Circular Linked List Figure 3.9 depicts a Circular linked list from which an element was deleted. } Program 3. else return(1+count(start->next)). start>next). if element to be deleted is first node. traverse(start->next). #include<stdio.
clist *head. clist * f. int key) { if(start->next->data == key) return(start). } void main() { void create_clist(clist *). clist * find(clist *. /*definition of delete_clist */ clist *delete_clist(clist *start) { int key. if(start->next->next == NULL) return(NULL). } } return(start). key). else find(start->next. scanf("%d". printf("\n number of elements in the clist %d \n".key). if(f==NULL) printf("\n key not fund"). s=head. else { temp = f->next->next. *s. * temp. int count(clist *).Introduction to Algorithms and Data Structures typedef struct linked_list clist. &key). free(f->next). void traverse(clist *). start=temp. } /*definition of find function */ clist * find(clist *start. if(start->data == key) { temp=start->next. head=delete_clist(head). printf("\n enter the value of element to be deleted \n"). traverse(head). int). head=(clist *)malloc(sizeof(clist)). printf(" \n traversing the created clist and the starting address is %u \n". } else { f = find(start. head). create_clist(head). count(head)). free(start). 52 . /* prototype of find and delete_function*/ clist * delete_clist(clist *). f->next=temp.
traverse(head).start->data. traverse(start->next). start->data. For example. we have a function f(x)= 7x5 + 9x4 – 6x³ + 3x². else return(1+count(start->next)). 1000. start>next).head). scanf("%d". } int count(clist *start) { if(start->next == s) return 0.10 depicts the representation of a Polynomial using a singly linked list.10: Deletion of an element from the circular linked list Lists 3. } if(start->next == s) printf("data is %d \t next element address is %u\n". } } void traverse(clist *start) { if(start->next!=s) { printf("data is %d \t next element address is %u\n".1050.10: Representation of a Polynomial using a singly linked list 53 .7 APPLICATIONS Lists are used to maintain POLYNOMIALS in the memory. Figure 3. create_clist(start->next). if(start->data == -1111) start->next=s. else { start->next=(clist*)malloc(sizeof(clist)).printf(" \n traversing the clist after delete_clistand starting address is %u \n".1300 are memory addresses. } void create_clist(clist *start) { printf("inputthe element -1111 for coming oout of the loop\n"). &start->data). } Program 3.1200. 1000 7 5 1000 1050 9 4 1050 1200 −6 3 1200 1300 3 2 1300 Start Figure 3. start>next).
It uses linked list to represent the Polynomial. i+1). int coef. create_poly(start->next). printf("\n Input the exponent value: %d: ". printf("\n Input the coefficient value: %d: ". i+1).Introduction to Algorithms and Data Structures Polynomial contains two components. printf(" %d". /* Function create a ploynomial list */ void create_poly(poly *start) { char ch. struct link *next. void create_poly(poly *). void display(poly *). printf("X^%d". scanf("%d". }. start->coef). coefficient and an exponent. void insertion(poly *). } } /* counting the number of nodes */ 54 . &start->sign). In computer. /* Representation of Polynomial using Linked List */ # include <stdio. if(ch != 'n') { printf("\n Input the sign: %d: ". display(start->next).h> struct link { char sign. each of which consists of coefficient and an exponent. int expo. we implement the polynomial as list of structures consisting of coefficients and an exponents. printf("\n Input choice n for break: ").h> # include <malloc. ch = getchar(). start->expo). } /* Display the polynomial */ void display(poly *start) { if(start->next != NULL) { printf(" %c". The polynomial is a sum of terms. start->sign). &start->expo). fflush(stdin). static int i.11 accepts a Polynomial as input. scanf("%c". } else start->next=NULL. start->next = (poly *) malloc(sizeof(poly)). It also prints the input polynomial along with the number of nodes in it. typedef struct link poly. &start->coef). i+1). i++. and ‘x’ is a formal parameter. Program 3. scanf("%d".
………………………………………………………………………………………… ………………………….. .………………………………………………………………………………………… …………………………………………………………………………………………. 2) Can we use doubly linked list as a circular linked list? If yes.11: Representation of Polynomial using Linked list Lists Check Your Progress 1) Write a function to print the memory location(s) which are used to store the data in a single linked list ? ………………………………………………………………………………………… ……………. …………………………………………………………………………………………. The drawback of lists is that the links themselves take space which is in addition to the space that may be needed for data. else return(1+count_poly(start->next)). 4) Write a program to count the number of items stored in a single linked list. } Program 3. we need to traverse a long path to reach a desired node.8 SUMMARY The advantage of Lists over Arrays is flexibility. } /* Function main */ void main() { poly *head = (poly *) malloc(sizeof(poly)). With lists. . 5) Write a function to check the overflow condition of a list represented by an array.. it may be difficult to determine the amount of contiguous storage that might be in need for the required arrays... One more drawback of lists is that they are not suited for random access.. Changes in list. more quickly than in the contiguous lists. ……….…………………………………………………….………. Explain. there is no need to attempt to allocate in advance. display(head).…………………………………………………………………………… .int count_poly(poly *start) { if(start->next == NULL) return 0. Over flow is not a problem until the computer memory is exhausted. count_poly(head)). With dynamic allocation. create_poly(head). ………………………………………………………………………………………… ………………………………………………………………………………………… 3.………………………………………………………………………………… 3) Write the differences between Doubly linked list and Circular linked list. When the individual records are quite large. printf("\n Total nodes = %d \n". insertion and deletion can be made in the middle of the list. 55 .
3.9 SOLUTIONS/ANSWERS

1)
void print_location(struct node *head)
{
    struct node *temp;
    temp = head;
    while (temp->next != NULL)
    {
        printf("%u", temp);
        temp = temp->next;
    }
    printf("%u", temp);
}

4)
void count_items(struct node *head)
{
    int count = 0;
    struct node *temp;
    temp = head;
    while (temp->next != NULL)
    {
        count++;
        temp = temp->next;
    }
    count++;
    printf("total items = %d", count);
}

5)
void Is_Overflow(int max_size, int last_element_position)
{
    if (last_element_position == max_size)
        printf("List Overflow");
    else
        printf("not Overflow");
}

3.10 FURTHER READINGS

1. Fundamentals of Data Structures in C++ by E. Horowitz, S. Sahni and D. Mehta; Galgotia Publications.
2. Data Structures and Program Design in C by Kruse, C. Tondo and B. Leung; Pearson Education.

Reference Websites
www.webopedia.com
UNIT 4 STACKS

Structure                                               Page Nos.
4.0  Introduction                                       5
4.1  Objectives                                         6
4.2  Abstract Data Type-Stack                           7
4.3  Implementation of Stack                            7
     4.3.1  Implementation of Stack Using Arrays
     4.3.2  Implementation of Stack Using Linked Lists
4.4  Algorithmic Implementation of Multiple Stacks      13
4.5  Applications                                       14
4.6  Summary                                            14
4.7  Solutions / Answers                                15
4.8  Further Readings                                   15

4.0 INTRODUCTION

One of the most useful concepts in computer science is the stack. In this unit, we shall examine this simple data structure and see why it plays such a prominent role in the area of programming. There are certain situations when we can insert or remove an item only at the beginning or the end of a list. A stack is a linear structure in which items may be inserted or removed only at one end, called the top of the stack. We also call these lists "piles" or "push-down lists". Generally, two operations are associated with stacks, named Push and Pop:

• Push is an operation used to insert an element at the top.
• Pop is an operation used to delete an element from the top.

A stack may be seen in our daily life; for example, Figure 4.1 depicts a stack of dishes. We can observe that any dish may be added or removed only from the top of the stack. It follows that the item added last will be the item removed first. Therefore, stacks are also called LIFO (Last In First Out) or FILO (First In Last Out) lists.

Figure 4.1: A stack of dishes

Example 4.1

Now we see the effects of push and pop operations on an empty stack. Figure 4.2(a) shows (i) an empty stack, (ii) a list of the elements to be inserted on to the stack,
Stacks, Queues and Trees
and (iii) a variable top which helps us keep track of the location at which insertion or removal of the item would occur.
[Figure 4.2 shows the stack and the remaining list at each step: in panel (a), the Push operation moves A, B and C from the list on to the stack, with top rising from 1 to 3; in panel (b), the Pop operation returns C, B and A to the list, with top falling to 0.]
Figure 4.2: Demonstration of (a) Push operation, (b) Pop operation
Initially, in Figure 4.2(a), top contains 0, which implies that the stack is empty. The list contains three elements, A, B and C. In Figure 4.2(a), we remove the element A from the list and push it on to the stack. The value of top becomes 1, pointing to the location of the stack at which A is stored. Similarly, we remove the elements B and C from the list one by one and push them on to the stack, incrementing top accordingly. The top finally contains the value 3 and points to the location of the last inserted element, C. On the other hand, Figure 4.2(b) shows the working of the pop operation. Since only the top element can be removed from the stack, we must remove the top element C first (we have no other choice). C goes back to the list of elements and the value of top is decremented by 1; top now contains the value 2, pointing to B (the new top element of the stack). Similarly, we remove the elements B and A from the stack one by one and add them to the list, decrementing top accordingly. There is no upper limit on the number of items that may be kept in a stack. However, if a stack contains a single item and the stack is popped, the resulting stack is called an empty stack. The pop operation cannot be applied to such a stack as there is no element to pop, whereas the push operation can be applied to any stack.
4.1 OBJECTIVES
After going through this unit, you should be able to:
6
• understand the concept of stack;
• implement the stack using arrays;
• implement the stack using linked lists;
• implement multiple stacks, and
• give some applications of stack.
4.2 ABSTRACT DATA TYPE-STACK

Conceptually, the stack abstract data type is defined by the operations it supports. A stack lets us:
• start a new stack;
• place new information on the top of a stack;
• take the top item off of the stack;
• read the item on the top; and
• determine whether a stack is empty. (There may be nothing at the spot where the stack should be.)
When discussing these operations, it is conventional to call the addition of an item to the top of the stack as a push operation and the deletion of an item from the top as a pop operation. (These terms are derived from the working of a spring-loaded rack containing a stack of cafeteria trays. Such a rack is loaded by pushing the trays down on to the springs as each diner removes a tray, the lessened weight on the springs causes the stack to pop up slightly).
4.3 IMPLEMENTATION OF STACK
Before programming a problem solution that uses a stack, we must decide how to represent a stack using the data structures that exist in our programming language. Stacks may be represented in the computer in various ways, usually by means of a one-way list or a linear array. Each approach has its advantages and disadvantages. A stack is generally implemented with two basic operations – push and pop. Push means to insert an item on to stack. The push algorithm is illustrated in Figure 4.3(a). Here, tos is a pointer which denotes the position of top most item in the stack. Stack is represented by the array arr and MAXSTACK represents the maximum possible number of elements in the stack. The pop algorithm is illustrated in Figure 4.3(b).
Step 1: [Check for stack overflow] if tos >=MAXSTACK print “Stack overflow” and exit Step 2: [Increment the pointer value by one] tos=tos+1 Step 3: [Insert the item] arr[tos]=value Step 4: Exit
Figure 4.3(a): Algorithm to push an item onto the stack
7
The pop operation removes the topmost item from the stack. After removal of top most value tos is decremented by 1.
Step 1: [Check whether the stack is empty] if tos = 0 print “Stack underflow” and exit Step 2: [Remove the top most item] value=arr[tos] tos=tos-1 Step 3: [Return the item of the stack] return(value)
Figure 4.3(b): Algorithm to pop an element from the stack
4.3.1 Implementation of Stack Using Arrays
A Stack contains an ordered list of elements and an array is also used to store ordered list of elements. Hence, it would be very easy to manage a stack using an array. However, the problem with an array is that we are required to declare the size of the array before using it in a program. Therefore, the size of stack would be fixed. Though an array and a stack are totally different data structures, an array can be used to store the elements of a stack. We can declare the array with a maximum size large enough to manage a stack. Program 4.1 implements a stack using an array.

#include<stdio.h>
int choice, stack[10], top, element;
void menu();
void push();
void pop();
void showelements();

void main()
{
    choice = element = 1;
    top = 0;
    menu();
}

void menu()
{
    printf("Enter one of the following options:\n");
    printf("PUSH 1\n POP 2\n SHOW ELEMENTS 3\n EXIT 4\n");
    scanf("%d", &choice);
    if (choice == 1)
    {
        push();
        menu();
    }
    if (choice == 2)
    {
        pop();
        menu();
    }
    if (choice == 3)
    {
        showelements();
        menu();
    }
}

void push()
{
    if (top <= 9)
    {
        printf("Enter the element to be pushed to stack:\n");
        scanf("%d", &element);
        stack[top] = element;
        ++top;
    }
    else
    {
        printf("Stack is full\n");
    }
    return;
}

void pop()
{
    if (top > 0)
    {
        --top;
        element = stack[top];
        printf("Popped element:%d\n", element);
    }
    else
    {
        printf("Stack is empty\n");
    }
    return;
}

void showelements()
{
    if (top <= 0)
        printf("Stack is empty\n");
    else
        for (int i = 0; i < top; ++i)
            printf("%d\n", stack[i]);
}

Program 4.1: Implementation of stack using arrays

Explanation
showelements will display the elements of the stack. pop. then s/he will select exit option.h> #include<stdlib. struct node *next. in a program. temp=(node*)malloc(sizeof(node)).3. we need to provide two more options . namely. The main operations that can be performed on a stack are push and pop. its size cannot be increased or decreased once it is declared. As a result. showelements and exit operations.int item) { node *temp.2 implements a stack using linked lists. In case. push and pop will perform the operations of pushing the element to the stack and popping the element from the stack respectively. }. the user is not interested to perform any operation on the stack and would like to get out of the program. The array stack can hold at most 10 elements. one ends up reserving either too much space or too less space for an array and in turn for a stack.h> #include<conio. we shall push and pop nodes from one end of a linked list. as linked list is represented as a singly connected list. /* put the item in the data portion of node*/ temp->next=*tos. choice is a variable which will enable the user to select the option from the push. *tos=temp. The stack. } else { temp->data=item. top points to the index of the free location in the stack to where the next element can be pushed. 4. showelements and exit. element is the variable which accepts the integer that has to be pushed to the stack or will hold the top element of the stack that has to be popped from the stack.2 Implementation of Stack Using Linked Lists In the last subsection. we have implemented a stack using an array. /* Definition of push function */ void push(node **tos. How ever. /* return NULL to temp */ getch().h> /* Definition of the structure node */ typedef struct node { int data. This problem can be overcome if we implement a stack using a linked list. it suffers from the basic limitation of an array – that is. 
/* create a new node dynamically */ if(temp==NULL) /* If sufficient amount of memory is */ { /* not available. In the case of a linked stack.Stacks. stack cannot hold more than 10 elements. It will log the user out of the program. return. #include<stdio. Queues and Trees The size of the stack was declared as 10. When a stack is implemented using arrays. /*insert this node at the front of the stack */ /* managed by linked list*/ /* otherwise*/ 10 . So. Each node in the linked list contains the data and a pointer that gives location of the next node in the list. Program 4. the function malloc will */ printf("\nError: Insufficient Memory Space").
/* Definition of pop function */
/* To pop an element from the stack, remove the front node of the stack managed by the linked list */
int pop(node **tos)
{
    node *temp;
    int item;
    if (*tos == NULL)            /* check whether the stack is empty */
    {
        printf("\nStack is empty");
        return (NULL);
    }
    temp = *tos;
    *tos = (*tos)->next;
    item = temp->data;
    free(temp);
    return (item);
}   /* end of function pop */

/* Definition of display function */
void display(node *tos)
{
    node *temp = tos;
    if (temp == NULL)            /* check whether the stack is empty */
    {
        printf("\nStack is empty");
        return;
    }
    else
    {
        while (temp != NULL)
        {
            printf("\n%d", temp->data);   /* display all the values of the stack */
            temp = temp->next;            /* from the front node to the last node */
        }
    }
}   /* end of function display */

/* Definition of main function */
void main()
{
    int item, ch;
    char choice = 'y';
    node *p = NULL;
    do
    {
        clrscr();
        printf("\t\t\t\t*****MENU*****");
        printf("\n\t\t\t1. To PUSH an element");
        printf("\n\t\t\t2. To POP an element");
        printf("\n\t\t\t3. To DISPLAY the elements of stack");
        printf("\n\t\t\t4. Exit");
        printf("\n\n\n\t\t\tEnter your choice:-");
        scanf("%d", &ch);
        switch (ch)
        {
        case 1:
            printf("\n Enter an element which you want to push ");
            scanf("%d", &item);
            push(&p, item);
            break;
        case 2:
            item = pop(&p);
            if (item != NULL)
                printf("\n Deleted item is %d", item);
            break;
        case 3:
            printf("\nThe elements of stack are");
            display(p);
            break;
        case 4:
            exit(0);
        }   /* switch closed */
        printf("\n\n\t\t Do you want to run it again y/n");
        scanf("%c", &choice);
    } while (choice == 'y');
}   /* end of function main */

Program 4.2: Implementation of Stack using Linked Lists

Explanation

Initially, we defined a structure called node. Each node contains two portions: data and a pointer that keeps the address of the next node in the list. The push function will insert a node at the front of the linked list, whereas the pop function will delete the node from the front of the linked list. The function display will print the elements of the stack. There is no need to declare the size of the stack in advance, as we did in the implementation of stack using arrays, since we create nodes dynamically as well as delete them dynamically.

Similarly, to know the working of this program, we executed it thrice and pushed 3 elements (10, 20, 30). Then we call the function display in the next run to see the elements in the stack.

Check Your Progress 1

1) State True or False.
(a) Stacks are sometimes called FIFO lists.
(b) Stack allows Push and Pop from both ends.
(c) TOS (top of the stack) gives the bottom most element in the stack.
2) Comment on the following:
(a) Why is the linked list representation of the stack better than the array representation of the stack?
(b) Discuss the underflow and overflow problem in stacks.

4.4 ALGORITHMIC IMPLEMENTATION OF MULTIPLE STACKS

So far, we have been concerned only with the representation of a single stack. What happens when a data representation is needed for several stacks? Let us see an array X whose dimension is m. For convenience, we shall assume that the indexes of the array commence from 1 and end at m. If we have only 2 stacks to implement in the same array X, then the solution is simple.

Suppose A and B are two stacks. We can define an array stack A with n1 elements and an array stack B with n2 elements. Overflow may occur when either stack A contains more than n1 elements or stack B contains more than n2 elements. Suppose, instead of that, we define a single array stack with n = n1 + n2 elements for stack A and B together. Let the stack A "grow" to the right, and stack B "grow" to the left. In this case, overflow will occur only when A and B together have more than n = n1 + n2 elements. It does not matter how many elements individually are there in each stack. See Figure 4.4 below.

1   2   3   4   ....   n-3   n-2   n-1   n
Stack A grows to the right; Stack B grows to the left.
(X(1) holds the bottom most element of Stack A; X(n) holds the bottom most element of Stack B.)

Figure 4.4: Implementation of multiple stacks using arrays

But, in the case of more than 2 stacks, we cannot represent these in the same way because a one-dimensional array has only two fixed points X(1) and X(m) and each stack requires a fixed point for its bottom most element. When more than two stacks, say n, are to be represented sequentially, we can initially divide the available memory X(1:m) into n segments. If the sizes of the stacks are known, then, we can allocate the segments to them in proportion to the expected sizes of the various stacks. If the sizes of the stacks are not known, then, X(1:m) may be divided into equal segments.

For each stack i, we shall use BM(i) to represent a position one less than the position in X for the bottom most element of that stack. TM(i), 1 ≤ i ≤ n, will point to the topmost element of stack i. We shall use the boundary condition BM(i) = TM(i) iff the ith stack is empty (refer to Figure 4.5). If we grow the ith stack in lower memory indexes than the (i+1)st stack, then, with roughly equal initial segments, we have BM(i) = TM(i) = m/n (i – 1), 1 ≤ i ≤ n, as the initial values of BM(i) and TM(i).

(Figure 4.5 shows the array X(1:m) divided into segments at positions m/n, 2m/n, ..., m.)
Figure 4.5: Initial configuration for n stacks in X(1:m) — all stacks are empty and memory is divided into roughly equal segments (BM(1) = TM(1), BM(2) = TM(2), BM(3) = TM(3), ... mark the segment boundaries)

Figure 4.6 depicts an algorithm to add an element to the ith stack. Figure 4.7 depicts an algorithm to delete an element from the ith stack.

ADD(i,e)
Step 1: if TM(i) = BM(i+1)
          Print "Stack is full" and exit
Step 2: [Increment the pointer value by one]
          TM(i) ← TM(i) + 1
          X(TM(i)) ← e
Step 3: Exit

Figure 4.6: Algorithm to add an element to ith stack

DELETE(i,e)   // delete the topmost element of stack i
Step 1: if TM(i) = BM(i)
          Print "Stack is empty" and exit
Step 2: [Remove the topmost item]
          e ← X(TM(i))
          TM(i) ← TM(i) – 1
Step 3: Exit

Figure 4.7: Algorithm to delete an element from ith stack

4.5 APPLICATIONS

Stacks are frequently used in the evaluation of arithmetic expressions. An arithmetic expression consists of operands and operators. The computer evaluates an arithmetic expression written in infix notation in two steps: first, it converts the infix expression to a postfix expression, and then it evaluates the postfix expression. In each step, a stack is used to accomplish the task. Conversions of different notations (Prefix, Postfix, Infix) into one another are performed using stacks. Polish notations are evaluated by stacks. Stacks are also widely used inside the computer when recursive functions are called.
4.6 SUMMARY

In this unit, we have studied how stacks are implemented using arrays and using linked lists. Also, the advantages and disadvantages of using these two schemes were discussed. For example, when a stack is implemented using arrays, it suffers from the basic limitations of an array (fixed memory). To overcome this problem, stacks are implemented using linked lists. This unit also introduced learners to the concepts of multiple stacks. The problems associated with the implementation of multiple stacks are also covered.

Check Your Progress 2

1) Multiple stacks can be implemented using _________.
2) _________ are evaluated by stacks.
3) Stack is used whenever a __________ function is called.

4.7 SOLUTIONS / ANSWERS

Check Your Progress 1
1) (a) False  (b) False  (c) False

Check Your Progress 2
1) Arrays or Pointers
2) Postfix expressions
3) Recursive

4.8 FURTHER READINGS

1. Data Structures Using C and C++, Yedidyah Langsam, Moshe J. Augenstein, Aaron M. Tenenbaum, Second Edition, PHI publications.
2. Data Structures, Seymour Lipschutz, Schaum's Outline series, McGraw-Hill.

Reference Websites
www.cs.queensu.ca
UNIT 5 QUEUES

Structure
5.0 Introduction
5.1 Objectives
5.2 Abstract Data Type-Queue
5.3 Implementation of Queue
    5.3.1 Array Implementation of a Queue
    5.3.2 Linked List Implementation of a Queue
5.4 Implementation of Multiple Queues
5.5 Implementation of Circular Queues
    5.5.1 Array Implementation of a Circular Queue
    5.5.2 Linked List Implementation of a Circular Queue
5.6 Implementation of Dequeue
    5.6.1 Array Implementation of a Dequeue
    5.6.2 Linked List Implementation of a Dequeue
5.7 Summary
5.8 Solutions / Answers
5.9 Further Readings

5.0 INTRODUCTION

Queue is a linear data structure used in various applications of computer science. Like people stand in a queue to get a particular service, various processes will wait in a queue for their turn to avail a service. In computer science, it is also called a FIFO (first in first out) list. In this chapter, we will study various types of queues.

5.1 OBJECTIVES

After going through this unit, you should be able to:
• define the queue as an abstract data type;
• understand the terminology of various types of queues such as simple queues, multiple queues, circular queues and dequeues; and
• get an idea about the implementation of different types of queues using arrays and linked lists.

5.2 ABSTRACT DATA TYPE-QUEUE

An important aspect of Abstract Data Types is that they describe the properties of a data structure without specifying the details of its implementation. The properties can be implemented independent of any implementation in any programming language.

Queue is a collection of elements, or items, for which the following operations are defined:

createQueue(Q) : creates an empty queue Q;
isEmpty(Q) : is a boolean type predicate that returns "true" if Q exists and is empty, and returns "false" otherwise;
addQueue(Q, item) : adds the given item to the queue Q;
deleteQueue(Q, item) : deletes an item from the queue Q; and
next(Q) : removes the least recently added item that remains in the queue Q, and returns it as the value of the function.
Further:
isEmpty(createQueue(Q)) : is always true; and
deleteQueue(createQueue(Q)) : error.

The primitive isEmpty(Q) is required to know whether the queue is empty or not, because calling next on an empty queue should cause an error. Like a stack, a queue also (usually) holds data elements of the same type. Abstract Data Types describe the properties of a structure without specifying an implementation in any way; thus, an algorithm which works with a "queue" data structure will work wherever it is implemented. Different implementations are usually of different efficiencies.

5.3 IMPLEMENTATION OF QUEUE

A physical analogy for a queue is a line at a booking counter. At a booking counter, customers go to the rear (end) of the line, and customers are attended to various services from the front of the line. Unlike a stack, customers are added at the rear end and deleted from the front end in a queue (FIFO), in which customers are dealt with in the order in which they arrive, i.e., first in first out (FIFO) order. The word "queue" is like the queue of customers at a counter for any service: the first customer in the queue is the first to be served. Like a stack, the situation may be such that the queue is "full" in the case of a finite queue; but we avoid defining this here, as it would depend on the actual length of the queue defined in a specific problem.

An example of the queue in computer science is print jobs scheduled for printers. These jobs are maintained in a queue: the job fired for the printer first gets printed first. The same is the scenario for job scheduling in the CPU of a computer.

As pointed out earlier, we usually graphically display a queue horizontally. Figure 5.1 depicts a queue of characters. The rule followed in a queue is that elements are added at the rear and come off the front of the queue.

-------------------------------------
| a | b | c | d | e | f |
-------------------------------------
  front                rear

Figure 5.1: A queue of characters

After the addition of an element to the above queue, the position of the rear pointer changes as shown below. Now the rear is pointing to the new element 'g' added at the rear of the queue (refer to Figure 5.2).

-------------------------------------
| a | b | c | d | e | f | g |
-------------------------------------
  front                     rear

Figure 5.2: Queue of Figure 5.1 after addition of a new element
After the removal of element 'a' from the front, the queue changes to the following, with the front pointer pointing to 'b' (refer to Figure 5.3).

-------------------------------------
| b | c | d | e | f | g |
-------------------------------------
  front                rear

Figure 5.3: Queue of Figure 5.2 after deletion of an element

Algorithm for addition of an element to the queue
Step 1: Create a new element to be added
Step 2: If the queue is empty, then go to step 3, else perform step 4
Step 3: Make the front and rear point to this element
Step 4: Add the element at the end of the queue and shift the rear pointer to the newly added element

Algorithm for deletion of an element from the queue
Step 1: Check for the queue empty condition. If empty, then go to step 2, else go to step 3
Step 2: Message "Queue Empty"
Step 3: Delete the element from the front of the queue. If it is the last element in the queue, then perform step a, else step b
        a) make front and rear point to null
        b) shift the front pointer ahead to point to the next element in the queue

5.3.1 Array implementation of a queue

As the stack is a list of elements, the queue is also a list of elements. The stack and the queue differ only in the position where the elements can be added or deleted. Like other linear data structures, queues can also be implemented using arrays. Program 5.1 lists the implementation of a queue using arrays.

#include "stdio.h"
#define QUEUE_LENGTH 50

struct queue
{
    int element[QUEUE_LENGTH];
    int front, rear, choice, x, y;
};
struct queue q;

main()
{
    int choice, x;
    printf("enter 1 for add and 2 to remove element front the queue");
    printf("Enter your choice");
    scanf("%d", &choice);
    switch (choice)
    {
    case 1:
        printf("Enter element to be added :");
        scanf("%d", &x);
        add(&q, x);
        break;
    case 2:
        delete();
        break;
    }
}

add(y)
{
    ++q.rear;
    if (q.rear < QUEUE_LENGTH)
        q.element[q.rear] = y;
    else
        printf("Queue overflow");
}

delete()
{
    int x;
    if (q.front > q.rear)
        printf("Queue empty");
    else
    {
        x = q.element[q.front];
        q.front++;
    }
    return x;
}

Program 5.1: Array implementation of a Queue

5.3.2 Linked List Implementation of a queue

The basic element of a linked list is a "record" structure of at least two fields. The object that holds the data and refers to the next element in the list is called a node (refer to Figure 5.4).

| Data | Ptrnext |

Figure 5.4: Structure of a node

The data component may contain data of any type. Ptrnext is a reference to the next element in the queue structure. Figure 5.5 depicts the linked list representation of a queue.

Figure 5.5: A linked list representation of a Queue
Program 5.2 gives the program segment for the addition of an element to the queue. Program 5.3 gives the program segment for the deletion of an element from the queue.

add(int value)
{
    struct queue *new;
    new = (struct queue*)malloc(sizeof(queue));
    new->value = value;
    new->next = NULL;
    if (front == NULL)
    {
        queueptr = new;
        front = rear = queueptr;
    }
    else
    {
        rear->next = new;
        rear = new;
    }
}

Program 5.2: Program segment for addition of an element to the queue

delete()
{
    int delvalue = 0;
    if (front == NULL)
        printf("Queue Empty");
    else
    {
        delvalue = front->value;
        if (front->next == NULL)
        {
            free(front);
            queueptr = front = rear = NULL;
        }
        else
        {
            queueptr = front;        /* remember the old front node ...  */
            front = front->next;
            free(queueptr);          /* ... and free it after advancing  */
            queueptr = front;
        }
    }
}

Program 5.3: Program segment for deletion of an element from the queue

Check Your Progress 1

1) The queue is a data structure where addition takes place at _________ and deletion takes place at _____________.
2) The queue is also known as ________ list.
3) Compare the array and linked list representations of a queue. Explain your answer.
5.4 IMPLEMENTATION OF MULTIPLE QUEUES

So far, we have seen the representation of a single queue, but many practical applications in computer science require several queues. Multiqueue is a data structure where multiple queues are maintained. This type of data structure is used for process scheduling. We may use a one-dimensional array or a multidimensional array to represent a multiple queue.

A multiqueue implementation using a single one-dimensional array with m elements is depicted in Figure 5.6. Each queue has n elements, which are mapped to a linear array of m elements.

1 ... n | n+1 ... 2n | 2n+1 ... | kn+1 ... m
front[1] rear[1] | front[2] rear[2] | ... | front[k+1] rear[k+1]

Figure 5.6: Multiple queues in an array

Array Implementation of a multiqueue

Program 5.4 gives the program segment using arrays for the addition of an element to a queue in the multiqueue. Program 5.5 gives the program segment for the deletion of an element from the queue.

addmq(i, x)      /* Add x to queue i */
{
    int i, x;
    if (rear[i] == front[i+1])
        printf("Queue is full");
    else
    {
        rear[i] = rear[i] + 1;
        mqueue[rear[i]] = x;
    }
}

Program 5.4: Program segment for the addition of an element to the queue

delmq(i)         /* Delete an element from queue i */
{
    int i, x;
    if (front[i] == rear[i])
        printf("Queue is empty");
    else
    {
        x = mqueue[front[i]];
        front[i] = front[i] - 1;
        return x;
    }
}

Program 5.5: Program segment for the deletion of an element from the queue
5.5 IMPLEMENTATION OF CIRCULAR QUEUES

One of the major problems with the linear queue is the lack of proper utilisation of space. Suppose that the queue can store 100 elements and the entire queue is full; it means that the queue is holding 100 elements. In case some of the elements at the front are deleted, the element at the last position in the queue continues to be at the same position, and there is no efficient way to find out that the queue is not full. In this way, space utilisation in the case of linear queues is not efficient. This problem arises due to the representation of the queue.

The alternative representation is to depict the queue as circular. In case we are representing the queue using arrays, then a queue with n elements starts from index 0 and ends at n-1. So, clearly, the first element in the queue will be at index 0 and the last element will be at n-1 when all the positions between index 0 and n-1 (both inclusive) are filled. Under such circumstances, front will point to 0 and rear will point to n-1. However, when a new element is to be added and the rear is pointing to n-1, it needs to be checked whether the position at index 0 is free. If yes, then the element can be added to that position and rear can be adjusted accordingly. In this way, the utilisation of space is increased in the case of a circular queue.

In a circular queue, front will point to one position less than the first element anti-clockwise. So, if the first element is at position 4 in the array, then the front will point to position 3. When the circular queue is created, both front and rear point to index 1. Also, we can conclude that the circular queue is empty in case both front and rear point to the same index. Figure 5.7 depicts a circular queue.

Figure 5.7: A circular queue (Front = 0, Rear = 4; the elements 10, 15, 20, 50 are stored at indexes 1 to 4 of an n-element array)
Algorithm for addition of an element to the circular queue:
Step-1: If the "rear" of the queue is pointing to the last position, then go to step-2, else go to step-3
Step-2: Make the "rear" value 0
Step-3: Increment the "rear" value by one
Step-4: a. If the "front" points where "rear" is pointing and the queue holds a not NULL value for it, then it is a "queue overflow" state, so quit;
        b. else, add the new value for the queue position pointed to by the "rear"

Algorithm for deletion of an element from the circular queue:
Step-1: If the queue is empty, then say "queue is empty" and quit; else continue
Step-2: Delete the "front" element
Step-3: If the "front" is pointing to the last position of the queue, then go to step-4, else go to step-5
Step-4: Make the "front" point to the first position in the queue and quit
Step-5: Increment the "front" position by one

5.5.1 Array implementation of a circular queue

A circular queue can be implemented using arrays or linked lists. Program 5.6 gives the array implementation of a circular queue.

#include "stdio.h"
void add(int);
void deleteelement(void);
int max = 10;                /* the maximum limit for queue has been set */
static int queue[10];
int front = 0, rear = -1;    /* queue is initially empty */

void main()
{
    int choice, x;
    printf("enter 1 for addition and 2 to remove element front the queue and 3 for exit");
    printf("Enter your choice");
    scanf("%d", &choice);
    switch (choice)
    {
    case 1:
        printf("Enter the element to be added :");
        scanf("%d", &x);
        add(x);
        break;
    case 2:
        deleteelement();
        break;
    }
}

void add(int y)
{
    if (rear == max - 1)
        rear = 0;
    else
        rear = rear + 1;
    if (front == rear && queue[front] != NULL)
        printf("Queue Overflow");
    else
        queue[rear] = y;
}

void deleteelement()
{
    int deleted_front = 0;
    if (front == NULL)
        printf("Error - Queue empty");
    else
    {
        deleted_front = queue[front];
        queue[front] = NULL;
        if (front == max - 1)
            front = 0;
        else
            front = front + 1;
    }
}

Program 5.6: Array implementation of a Circular queue

5.5.2 Linked list implementation of a circular queue

Linked list representation of a circular queue is more efficient, as it uses space more efficiently, of course with the extra cost of storing the pointers. Program 5.7 gives the linked list representation of a circular queue.

#include "stdio.h"
struct cq
{
    int value;
    struct cq *next;
};
typedef struct cq *cqptr;
cqptr p, front, rear;

main()
{
    int choice, x;
    /* Initialise the circular queue */
    p = front = rear = NULL;
    printf("Enter 1 for addition and 2 to delete element from the queue");
    printf("Enter your choice");
    scanf("%d", &choice);
    switch (choice)
    {
    case 1:
        printf("Enter the element to be added :");
        scanf("%d", &x);
        add(x);
        break;
    case 2:
        delete();
        break;
    }
}

/************ Add element ******************/
add(int value)
{
    struct cq *new;
    new = (struct cq*)malloc(sizeof(struct cq));
    new->value = value;
    new->next = NULL;
    if (front == NULL)
    {
        p = new;
        front = rear = p;
    }
    else
    {
        rear->next = new;
        rear = new;
    }
}

/*************** delete element ***********/
delete()
{
    int delvalue = 0;
    if (front == NULL)
        printf("Queue is empty");
    else
    {
        delvalue = front->value;
        if (front->next == NULL)
        {
            free(front);
            p = front = rear = NULL;
        }
        else
        {
            p = front;
            front = front->next;
            free(p);
            p = front;
        }
    }
}

Program 5.7: Linked list implementation of a Circular queue

5.6 IMPLEMENTATION OF DEQUEUE

Dequeue (a double ended queue) is an abstract data type similar to queue, where addition and deletion of elements are allowed at both the ends. Like a linear queue and a circular queue, a dequeue can also be implemented using arrays or linked lists.
5.6.1 Array implementation of a dequeue

If a dequeue is implemented using arrays, then it will suffer from the same problems that a linear queue suffered from. Program 5.8 gives the array implementation of a dequeue.

#include "stdio.h"
#define QUEUE_LENGTH 10
int dq[QUEUE_LENGTH];
int front, rear, choice, x, y;

main()
{
    int choice, x;
    front = rear = -1;   /* initialize the front and rear to null, i.e., empty queue */
    printf("enter 1 for addition and 2 to remove element from the front of the queue");
    printf("enter 3 for addition and 4 to remove element from the rear of the queue");
    printf("Enter your choice");
    scanf("%d", &choice);
    switch (choice)
    {
    case 1:
        printf("Enter element to be added :");
        scanf("%d", &x);
        add_front(x);
        break;
    case 2:
        delete_front();
        break;
    case 3:
        printf("Enter the element to be added :");
        scanf("%d", &x);
        add_rear(x);
        break;
    case 4:
        delete_rear();
        break;
    }
}

/**************** Add at the front ***************/
add_front(int y)
{
    if (front == 0)
    {
        printf("Element can not be added at the front");
        return;
    }
    else
    {
        front = front - 1;
        dq[front] = y;
        if (front == -1)
            front = 0;
    }
}

/**************** Delete from the front ***************/
delete_front()
{
    if (front == -1)
        printf("Queue empty");
    else
        return dq[front];
    if (front == rear)
        front = rear = -1;
    else
        front = front + 1;
}

/**************** Add at the rear ***************/
add_rear(int y)
{
    if (front == QUEUE_LENGTH - 1)
    {
        printf("Element can not be added at the rear");
        return;
    }
    else
    {
        rear = rear + 1;
        dq[rear] = y;
        if (rear == -1)
            rear = 0;
    }
}

/**************** Delete at the rear ***************/
delete_rear()
{
    if (rear == -1)
        printf("deletion is not possible from rear");
    else
    {
        if (front == rear)
            front = rear = -1;
        else
        {
            rear = rear - 1;
            return dq[rear];
        }
    }
}

Program 5.8: Array implementation of a Dequeue

5.6.2 Linked list implementation of a dequeue

Double ended queues are implemented with doubly linked lists. A doubly linked list can be traversed in both directions, as it has two pointers, namely left and right. The right pointer points to the next node on the right, whereas the left pointer points to the previous node on the left. Program 5.9 gives the linked list implementation of a dequeue.

#include "stdio.h"
#define NULL 0
struct dq
{
    int info;
    struct dq *left;
    struct dq *right;
};
typedef struct dq *dqptr;
dqptr p, tp;
switch (choice) { case 1: create_list(). 28 . case 4: dq_front().Stacks. p = getnode(). Queues and Trees dqptr head. break. scanf(“%d”. p->left = getnode(). break. dqptr tail. x. break. printf(“\n Enter 1: Start 2 : Add at Front 3 : Add at Rear 4: Delete at Front 5: Delete at Back”). case 2: eq_front(). while (1) { printf(“\n 1: Start 2 : Add at Front 3 : Add at Back 4: Delete at Front 5: Delete at Back 6 : exit”). case 5: dq_back(). return p. break. } } } create_list() { int I. return. &choice). dqptr n. case 3: eq_back(). } dq_empty(dq q) { return q->head = = NULL. dqptr t. break. x. tp = p. p->info = 10. p_right = getnode(). main() { int choice. dqptr getnode(). I. } dqptr getnode() { p = (dqptr) malloc(sizeof(struct dq)). case 6 : exit(6).
return info. void *info) { if (dq_empty(q)) q->head = q->tail = dcons(info. q->tail -> right -> left = q->tail. NULL. q->head -> left ->right = q->head. if (q->head = = NULL) q -> tail = NULL. q ->head = q->head-> right. } } dq_front(dq q) { if dq is not empty { dq tp = q-> head. NULL. free(tp).9 : Linked list implementation of a Dequeue Queues 29 . NULL. if (q->tail = = NULL) q -> head = NULL. NULL) else { q-> tail -> right =dcons(info. NULL). *info = tp -> info. else q -> head -> left = NULL. NULL). q ->head = q->head ->left. } } dq_back(dq q) { if (q!=NULL) { dq tp = q-> tail. else { q-> head -> left =dcons(info. void *info) { if (dq_empty(q)) q->head = q->tail = dcons(info. q ->tail = q->tail-> left. free(tp). } } eq_back(dq q. return info.} eq_front(dq q. else q -> tail -> right = NULL. } } Program 5. void *info = tp -> info. NULL). NULL. q ->tail = q->tail ->right.
5.7 SUMMARY

In this unit, we discussed the data structure Queue. It has two ends: one is the front, from where the elements can be deleted, and the other is the rear, to where the elements can be added. A queue can be implemented using arrays or linked lists. Each representation has its own advantages and disadvantages. The problem with arrays is that they are limited in space; hence, the queue has a limited capacity. If queues are implemented using linked lists, then this problem is solved: there is no limit on the capacity of the queue. The only overhead is the memory occupied by the pointers.

There are a number of variants of queues. Apart from linear queues, we also discussed circular queues in this unit. Normally, queues mean circular queues. A special type of queue called Dequeue was also discussed in this unit. Dequeues permit elements to be added or deleted at either the rear or the front. We also discussed the array and linked list implementations of Dequeue.

Check Your Progress 2

1) _________ allows elements to be added and deleted at the front as well as at the rear.
2) It is not possible to implement multiple queues in an Array. (True/False)
3) The index of a circular queue starts at __________.

5.8 SOLUTIONS / ANSWERS

Check Your Progress 1
1) rear, front
2) First in First out (FIFO) list

Check Your Progress 2
1) Dequeue
2) False
3) 0

5.9 FURTHER READINGS

Reference Books
1. Data Structures using C by Aaron M. Tenenbaum, Yedidyah Langsam, Moshe J. Augenstein, PHI publications.
2. Algorithms + Data Structures = Programs by Niklaus Wirth, PHI publications.

Reference Websites
www.ee.uwa.edu.au/~morris/Year2/PLDS210/queues.html
www.cs.toronto.edu/~wayne/libwayne/libwayne.html
UNIT 6 TREES

Structure
6.0 Introduction
6.1 Objectives
6.2 Abstract Data Type-Tree
6.3 Implementation of Tree
6.4 Tree Traversals
6.5 Binary Trees
6.6 Implementation of a Binary Tree
6.7 Binary Tree Traversals
    6.7.1 Recursive Implementation of Binary Tree Traversals
    6.7.2 Non-Recursive Implementation of Binary Tree Traversals
6.8 Applications
6.9 Summary
6.10 Solutions/Answers
6.11 Further Readings

6.0 INTRODUCTION

Have you ever thought how the operating system manages our files? Why do we have a hierarchical file system? How do files get saved and deleted under hierarchical directories? Well, we have answers to all these questions in this section through a hierarchical data structure called Trees! Tree is a data structure which allows you to associate a parent-child relationship between various pieces of data, and thus allows us to arrange our records, data and files in a hierarchical fashion.

Consider a Tree representing your family structure. Let us say that we start with your grand parent, then come to your parent and finally, you and your brothers and sisters. Although the most general form of a tree can be defined as an acyclic graph, we will consider in this section only rooted trees, as a general tree does not have a parent-child relationship. In this unit, we will go through the basic tree structures first (general trees), and then go into the specific and more popular tree called binary trees.

6.1 OBJECTIVES

After going through this unit, you should be able:
• to define a tree as abstract data type (ADT);
• to learn the different properties of a Tree and a Binary tree;
• to implement the Tree and Binary tree; and
• to give some applications of Tree.

6.2 ABSTRACT DATA TYPE-TREE

Definition: A set of data values and associated operations that are precisely specified independent of any particular implementation.

Since the data values and operations are defined with mathematical precision, rather than as an implementation in a computer language, we may reason about effects of the
operations, whether a programming language implements the particular data type, relationship to other abstract data types, etc.

Consider the following abstract data type:

Structure Tree
type Tree = nil | fork (Element, Tree, Tree)   // nil is an empty tree; fork carries an
                                               // element e and two sub trees T and T'
Operations:
null  : Tree -> Boolean
leaf  : Tree -> Boolean
fork  : (Element, Tree, Tree) -> Tree
left  : Tree -> Tree    // It depicts the property of a tree that the left of a tree is also a tree.
right : Tree -> Tree
contents : Tree -> Element

Rules:
null(nil) = true
null(fork(e, T, T')) = false
leaf(fork(e, nil, nil)) = true
leaf(fork(e, T, T')) = false if not null(T) or not null(T')
leaf(nil) = error
left(fork(e, T, T')) = T
left(nil) = error
right(fork(e, T, T')) = T'
right(nil) = error
contents(fork(e, T, T')) = e
contents(nil) = error
height(nil) = 0 | height(fork(e, T, T')) = 1 + max(height(T), height(T'))
weight(nil) = 0 | weight(fork(e, T, T')) = 1 + weight(T) + weight(T')

Figure 6.1: A binary tree (a root with a left tree and a right tree)

Look at the definition of Tree (ADT). A way to think of a binary tree is that it is either empty (nil) or contains an element and two sub trees which are themselves binary trees (refer to Figure 6.1). The fork operation joins two sub trees with a parent node and produces another binary tree.
. respectively.produces another Binary tree. 33 . The general tree is a generic tree that has one root node. Branching factor defines the maximum number of children to any node.3 : A rooted tree In a more formal way. It does not contain any cycles (circuits. A rooted tress has a single root node which has no parents. Tk each of which is a tree. The root is at level 1. and edges which represent a relationship between two nodes.2 : Tree as a connected acyclic graph Root Level 1 Internal node Edge Level 2 Leaf node Level 3 Figure 6. So. In Figure 6. are children of r.. or closed paths). and may be converted into the more familiar form by designating a node as the root. we can define a tree T as a finite set of one or more nodes such that there is one designated node r called the root of T.3. which would imply the existence of more than one path between two nodes. • • • • • • A tree consists of nodes connected by edges. It may be noted that a tree consisting of a single leaf is defined to be of height 1.. Definition : A tree is a connected. T2. We can represent a tree as a construction consisting of nodes. One popular use of this kind of tree is a Family Tree.. . The depth (height) of a Binary tree is equal to the number of levels in it.. The child nodes of nodes at level 2 are at level 3 and so on. and every node in the tree can have an unlimited number of child nodes. we will consider most common tree called rooted tree. This is the most general kind of tree. a branching factor of 2 means a binary tree. A root is a node without parent. Figure 6. Trees It is so connected that any node in the graph can be reached from any other node by exactly one path. and whose roots r1 . Leaves are nodes with no children.2).. rk . The child nodes of root are at level 2. A tree is an instance of a more general category called graph. and the remaining nodes in (T – { r } ) are partitioned into n > 0 disjoint subsets T1. r2 . acyclic graph (Refer to Figure 6. .
Tree is a dynamic data structure. Trees can expand and contract as the program executes and are implemented through pointers. A tree deallocates memory when an element is deleted.

Non-linear data structures: Linear data structures have properties of ordering relationship (can the elements/nodes of a tree be sorted?). There is no first node or last node. There is no ordering relationship among elements of a tree. Items of a tree can be partially ordered into a hierarchy via the parent-child relationship. The root node is at the top of the hierarchy and leaves are at the bottom layer of the hierarchy. Hence, trees can be termed hierarchical data structures, since each node can be considered a tree on its own.

The following are the properties of a Tree:
• Breadth defines the number of nodes at a level.
• The depth of a node M in a tree is the length of the path from the root of the tree to M.
• A node in a Binary tree has at most 2 children.
• The number of edges = the number of nodes – 1 (Why? Because every node has a parent except the root, and an edge represents the relationship between a child and a parent).
• A tree of height h and degree d has at most d^h – 1 elements.

There are a number of specialized trees: binary trees, binary search trees, AVL-trees, 2-3 trees, red-black trees, etc.

Full Tree: A tree with all the leaves at the same level, and all the non-leaves having the same degree.
• Level h of a full tree has d^(h-1) nodes.
• The first h levels of a full tree have 1 + d + d^2 + d^3 + ... + d^(h-1) = (d^h – 1)/(d – 1) nodes, where d is the degree of nodes.

Complete Trees: A complete tree is a k-ary position tree in which all levels are filled from left to right.

6.3 IMPLEMENTATION OF TREE

As nodes are added and deleted dynamically from a tree, trees are often implemented by linked lists. However, it is simpler to write algorithms for a data representation where the number of nodes is fixed. The most common way to add nodes to a general tree is to first find the desired parent of the node you want to insert, then add the node to the parent's child list. The most common implementations insert the nodes one at a time; other implementations build up an entire sub-tree before adding it to a larger tree.

Data | link1 | link2 | ... | link k
Figure 6.4: Node structure of a general k-ary tree
The number of pointers required to implement a general tree depends on the maximum degree of nodes in the tree. Figure 6.5 depicts a tree in which each node has one data element and three pointers.

Figure 6.5: A linked list representation of a tree (3-ary tree)

6.4 TREE TRAVERSALS

There are three types of tree traversals, namely, Preorder, Postorder and Inorder.

Preorder traversal: Each node is visited before its children are visited; the root is visited first.

Algorithm for preorder traversal:
1. visit root node
2. traverse left sub-tree in preorder
3. traverse right sub-tree in preorder

Example of preorder traversal: reading of a book, as we do not read the next chapter unless we complete all sections of the previous chapter and all its sections (refer to Figure 6.6).

book: Preface, Chapter 1 ... Chapter 8, Summary; within a chapter: Section 1, Section 1.1, ..., Section 4, Section 4.1, Section 4.1.1, Section 4.1.2
Figure 6.6: Reading a book: A preorder tree traversal
As each node is traversed only once, the time complexity of preorder traversal is T(n) = O(n), where n is the number of nodes in the tree.

Postorder traversal: The children of a node are visited before the node itself; the root is visited last. Every node is visited after its descendents are visited.

Algorithm for postorder traversal:
1. traverse left sub-tree in postorder
2. traverse right sub-tree in postorder
3. visit root node

Finding the space occupied by files and directories in a file system requires a postorder traversal, as the space occupied by a directory requires calculation of the space required by all files in the directory (children in the tree structure) (refer to Figure 6.7).

/root: /dir1, /dir2, /dir3; /dir1: File 1 ... File n
Figure 6.7: Calculation of space occupied by a file system: A postorder traversal

As each node is traversed only once, the time complexity of postorder traversal is T(n) = O(n), where n is the number of nodes in the tree.

Inorder traversal: The left sub-tree is visited, then the node, and then the right sub-tree.

Algorithm for inorder traversal:
1. traverse left sub-tree
2. visit node
3. traverse right sub-tree

+ over (* over (/ over 4, 2), 7) and (− over 3, 1)
Figure 6.8: An expression tree: An inorder traversal
Inorder traversal can be best described by an expression tree, where the operators are at parent nodes and operands are at leaf nodes. Let us consider the above expression tree (refer to Figure 6.8). The preorder, postorder and inorder traversals are given below:

preorder traversal  : + * / 4 2 7 − 3 1
postorder traversal : 4 2 / 7 * 3 1 − +
inorder traversal   : ((4 / 2) * 7) + (3 − 1)

There is another tree traversal (of course, not very common) called level order, where all the nodes of the same level are travelled first, starting from the root (refer to Figure 6.9).

Figure 6.9: Tree Traversal: Level Order

Check Your Progress 1
1) If a tree has 45 edges, how many vertices does it have?
2) Suppose a full 4-ary tree has 100 leaves. How many internal vertices does it have?
3) Suppose a full 3-ary tree has 100 internal vertices. How many leaves does it have?
4) Prove that if T is a full m-ary tree with v vertices, then T has ((m-1)v+1)/m leaves.

6.5 BINARY TREES

A binary tree is a special tree where each non-leaf node can have at most two child nodes. The most important types of trees, which are used to model yes/no, on/off, higher/lower, i.e., binary decisions, are binary trees.

Recursive Definition: A binary tree is either empty or a node that has left and right sub-trees that are binary trees. Empty trees are represented as boxes (but we will almost always omit the boxes).

In a formal way, we can define a binary tree as a finite set of nodes which is either empty or partitioned into the sets T0, Tl, Tr, where T0 is the root and Tl and Tr are the left and right binary trees, respectively.
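Exercise 4 of Check Your Progress 1 above can be settled by counting edges. The following derivation uses i for the number of internal vertices and l for the number of leaves; every edge joins a child to its parent, and in a full m-ary tree every internal vertex has exactly m children.

```latex
% Full m-ary tree with v vertices, i internal vertices, l leaves:
e = v - 1 = m\,i \quad\Rightarrow\quad i = \frac{v-1}{m}
% Since every vertex is either internal or a leaf, v = i + l:
l = v - i = v - \frac{v-1}{m} = \frac{(m-1)v + 1}{m}
```

Setting m = 3 and i = 100 gives v = 301 and l = 201, which agrees with the solutions at the end of the unit.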
Properties of a binary tree:
• If a binary tree contains n nodes, then it contains exactly n – 1 edges.
• If a binary tree has n nodes at a level l, then it has at most 2n nodes at level l + 1.
• The total number of nodes in a binary tree with depth d (root has depth zero) is at most N = 2^0 + 2^1 + 2^2 + ... + 2^d = 2^(d+1) – 1.
• A Binary tree of height h has 2^h – 1 nodes or fewer.
• If a binary tree has n nodes, then its height is at most n and at least ceiling(log2(n + 1)).

Full Binary Trees: A binary tree of height h which has exactly 2^h – 1 elements is called a Full Binary Tree.

Complete Binary Trees: A binary tree whereby, if the height is d, all levels, except possibly level d, are completely full. If the bottom level is incomplete, then it has all its nodes to the left side. That is, the tree has been filled in level order from left to right.

6.6 IMPLEMENTATION OF A BINARY TREE

Like general trees, binary trees are implemented through linked lists. A typical node in a Binary tree has the following structure (refer to Figure 6.10):

struct NODE
{
    struct NODE *leftchild;
    int nodevalue;              /* this can be of any data type */
    struct NODE *rightchild;
};

left child | value | right child
Figure 6.10: Node structure of a binary tree

The 'left child' and 'right child' are pointers to other tree nodes. A leaf node (not shown) will have NULL values for these pointers.

The binary tree creation follows a very simple principle. For the new element to be added, compare it with the current element in the tree. If its value is less than the current element in the tree, then move towards the left side of that element, else to its right. Exactly the same logic is often suitable to search for a key value in the binary tree.

Algorithm for the implementation of a Binary tree:
Step-1: If value of new element < current element, then go to step-2, else step-3
Step-2: If the current element does not have a left sub-tree, then make your new element the left child of the current element; else make the existing left child your current element and go to step-1
Step-3: If the current element does not have a right sub-tree, then make your new element the right child of the current element; else make the existing right child your current element and go to step-1

Though this logic is followed for the creation of a Binary tree, the same has to be done for the case when your new element is greater than the current element in the tree, but this time with the right child. Program 6.1 depicts the segment of code for the creation of a binary tree.

struct NODE
{
    struct NODE *left;
    int value;
    struct NODE *right;
};

create_tree(struct NODE *curr, struct NODE *new)
{
    if (new->value <= curr->value)
    {
        if (curr->left != NULL)
            create_tree(curr->left, new);
        else
            curr->left = new;
    }
    else
    {
        if (curr->right != NULL)
            create_tree(curr->right, new);
        else
            curr->right = new;
    }
}

Program 6.1: Binary tree creation

Array-based representation of a Binary Tree

Consider a complete binary tree T having n nodes where each node contains an item (value). Label the nodes of the complete binary tree T from top to bottom and from left to right 0, 1, ..., n – 1. Associate with T the array A where the ith entry of A is the item in the node labelled i of T, i = 0, 1, ..., n – 1. Given the index i of a node, we can easily and efficiently compute the index of its parent and of its left and right children:

Index of Parent: (i – 1)/2
Index of Left Child: 2i + 1
Index of Right Child: 2i + 2

Figure 6.11 depicts the array representation of the Binary tree of Figure 6.16.

Node #       0  1  2  3  4  5  6  7  8  9
Item         A  B  C  D  E  G  H  I  J  ?
Left child   1  3 -1  5  7 -1 -1 -1 -1  ?
Right child  2  4 -1  6  8 -1 -1 -1 -1  ?

Figure 6.11: Array Representation of a Binary Tree
The first column represents the index of the node, the second column consists of the item stored in the node, and the third and fourth columns indicate the positions of the left and right children (–1 indicates that there is no child at that position).

6.7 BINARY TREE TRAVERSALS

We have already discussed the three tree traversal methods in the previous section on general trees. The same three ways to do the traversal – preorder, inorder and postorder – are applicable to the binary tree also. Let us discuss the inorder binary tree traversal for the following binary tree (refer to Figure 6.12).

* over (+ over 4, 5) and 3
Figure 6.12: A binary tree

We start from the root, i.e. *. We are supposed to visit its left sub-tree, then visit the node itself and then its right sub-tree. Here, the root has a left sub-tree rooted at +. So, we move to + and check for its left sub-tree (we are supposed to repeat this for every node). Again, + has a left sub-tree rooted at 4. So, we have to check for 4's left sub-tree now, but 4 doesn't have any left sub-tree and thus we will visit node 4 first (print it, in our case) and check for its right sub-tree. As 4 doesn't have any right sub-tree, we go back and visit node +, and then check for the right sub-tree of +. It has a right sub-tree rooted at 5 and so we move to 5. 5 doesn't have any left or right sub-tree, so we just visit 5 (print 5) and track back to +. As we have already visited +, we track back to *. As we are yet to visit the node itself, we visit * before checking for the right sub-tree of *, which is 3. As 3 does not have any left or right sub-trees, we visit 3. So, the inorder traversal results in 4 + 5 * 3.

Algorithm: Inorder
Step-1: For the current node, check whether it has a left child. If it has, then go to step-2, else go to step-3
Step-2: Repeat step-1 for this left child
Step-3: Visit (i.e. print, in our case) the current node
Step-4: For the current node, check whether it has a right child. If it has, then go to step-5
Step-5: Repeat step-1 for this right child

The preorder and postorder traversals are similar to those of a general tree. The general thing we have seen in all these tree traversals is that the traversal mechanism is inherently recursive in nature.

6.7.1 Recursive Implementation of Binary Tree Traversals

There are three classic ways of recursively traversing a binary tree. In each of these, the left and right sub-trees are visited recursively and the distinguishing feature is when the element in the root is visited or processed. Program 6.2, Program 6.3 and Program 6.4 depict the inorder, preorder and postorder traversals of a Binary tree.

struct NODE
{
    struct NODE *left;
    int value;                  /* can be of any type */
    struct NODE *right;
};

inorder(struct NODE *curr)
{
    if (curr->left != NULL)
        inorder(curr->left);
    printf("%d", curr->value);
    if (curr->right != NULL)
        inorder(curr->right);
}

Program 6.2: Inorder traversal of a binary tree

struct NODE
{
    struct NODE *left;
    int value;                  /* can be of any type */
    struct NODE *right;
};

preorder(struct NODE *curr)
{
    printf("%d", curr->value);
    if (curr->left != NULL)
        preorder(curr->left);
    if (curr->right != NULL)
        preorder(curr->right);
}

Program 6.3: Preorder traversal of a binary tree

struct NODE
{
    struct NODE *left;
    int value;                  /* can be of any type */
    struct NODE *right;
};

postorder(struct NODE *curr)
{
    if (curr->left != NULL)
        postorder(curr->left);
    if (curr->right != NULL)
        postorder(curr->right);
    printf("%d", curr->value);
}

Program 6.4: Postorder traversal of a binary tree

In a preorder traversal, the root is visited first (pre) and then the left and right sub-trees are traversed. In a postorder traversal, the left sub-tree is visited first, followed by the right sub-tree, which is then followed by the root. In an inorder traversal, the left sub-tree is visited first, followed by the root, followed by the right sub-tree.
6.7.2 Non-recursive Implementation of Binary Tree Traversals

As we have seen, since the traversal mechanisms were inherently recursive, the implementation was also simple through a recursive procedure. However, a non-recursive method for traversal has to be an iterative procedure, meaning all the steps for the traversal of a node have to be under a loop so that the same can be applied to all the nodes in the tree.

Algorithm: Non-recursive preorder binary tree traversal

Stack S
push root onto S
repeat until S is empty
{
    v = pop S
    if v is not NULL
        visit v
        push v's right child onto S
        push v's left child onto S
}

Program 6.5 depicts the program segment for the implementation of non-recursive preorder traversal.

/* preorder traversal of a binary tree, implemented using a stack */
void preorder(binary_tree_type *tree)
{
    stack_type *stack;
    stack = create_stack();
    push(tree, stack);               /* push the first element of the tree to the stack */
    while (!empty(stack))
    {
        tree = pop(stack);
        visit(tree);
        push(tree->right, stack);    /* push right child to the stack */
        push(tree->left, stack);     /* push left child to the stack */
    }
}

Program 6.5: Non-recursive implementation of preorder traversal

In the worst case, for preorder traversal, the stack will grow to size n/2, where n is the number of nodes in the tree, which makes use of a lot of memory and time. Another method of traversing a binary tree non-recursively, which does not use a stack, requires pointers to the parent node (called a threaded binary tree).

A threaded binary tree is a binary tree in which every node that does not have a right child has a THREAD (a third link) to its INORDER successor. By doing this threading we avoid the recursive method of traversing a tree and the use of a stack. The node structure for a threaded binary tree varies a bit, and is like this:

struct NODE
{
    struct NODE *leftchild;
    int node_value;
    struct NODE *rightchild;
    struct NODE *thread;    /* third pointer to its inorder successor */
};

6.8 APPLICATIONS

Trees are used enormously in computer programming. These can be used for improving database search times (binary search trees, AVL trees, red-black trees, 2-3 trees), game programming (minimax trees, decision trees, pathfinding trees), 3D graphics programming (quadtrees, octrees), arithmetic scripting languages (arithmetic precedence trees), data compression (Huffman trees), and file systems (B-trees, sparse indexed trees, tries).

The General tree (also known as Linked Trees) is a generic tree that has one root node, and every node in the tree can have an unlimited number of child nodes. One popular use of this kind of tree is in Family Tree programs.

In game programming, many games use these types of trees for decision-making processes, as shown below for tic-tac-toe. A computer program might need to make a decision based on an event that happened. Figure 6.13 depicts a tic-tac-toe game tree showing various stages of the game.

Figure 6.13: A tic-tac-toe game tree showing various stages of game

In all of the above scenarios except the first one, the player (playing with X) ultimately loses in subsequent moves. The interesting thing about using a tree for decision-making is that the options are cut down for every level of the tree as we go down, greatly simplifying the subsequent moves and improving the speed at which the AI program makes a decision. But this is just a simple tree for demonstration; a more complex AI decision tree would definitely have a lot more options.

The big problem with tree-based level progressions, however, is that sometimes the tree can get too large and complex as the number of moves (levels in a tree) increases. Imagine a game offering just two choices for every move to the next level at the end of each level in a ten-level game. This would require a tree of 1023 nodes to be created.
Binary trees are used for searching keys. A Binary Search Tree (BST) is a binary tree with the following properties:
1. The key of a node is always greater than the keys of the nodes in its left sub-tree.
2. The key of a node is always smaller than the keys of the nodes in its right sub-tree.

Such trees are called Binary Search trees (refer to Figure 6.14).

root 14, with 10 (8, 11) on the left and 16 (15, 18) on the right
Figure 6.14: A binary search tree (BST)

It may be seen that when the nodes of a BST are traversed by inorder traversal, the keys appear in sorted order:

inorder(root)
{
    inorder(root.left)
    print(root.key)
    inorder(root.right)
}

Binary trees are also used for evaluating expressions. A binary tree can be used to represent and evaluate arithmetic expressions:
1. If a node is a leaf, then the element in it specifies the value.
2. If it is not a leaf, then evaluate the children and combine them according to the operation specified by the element.

Figure 6.15 depicts a tree which is used to evaluate expressions.

+ over (* over 1, 5) and (− over 8, 6)
Figure 6.15: Expression tree for 1 * 5 + 8 − 6
Check Your Progress 2

A at the root; B, C its children; D, E children of B; G, H children of D; I, J children of E
Figure 6.16: A binary tree

1) With reference to Figure 6.16, find
   a) the leaf nodes in the binary tree
   b) sibling of J
   c) parent node of G
   d) depth of the binary tree
   e) level of node J
2) Give preorder, postorder, inorder and level order traversals of the above binary tree.
3) Give the array representation of the binary tree of Figure 6.12.
4) Show that in a binary tree of N nodes, there are N+1 children with both the links as null (leaf node).

6.9 SUMMARY

Tree is one of the most widely used data structures employed for representing various problems. We studied tree as a special case of an acyclic graph. However, rooted trees are the most prominent of all trees. We discussed the definition and properties of general trees with their applications. Binary trees are the special case of trees which have at most two children. Binary trees are mostly implemented using linked lists. Binary trees have wider applications in two-way decision making problems which use yes/no, true/false etc. Various tree traversal mechanisms include inorder, preorder and postorder. These tree traversals can be implemented using recursive procedures and non-recursive procedures.

6.10 SOLUTIONS / ANSWERS

Check Your Progress 1
1) If a tree has e edges and n vertices, then e = n – 1. Hence, if a tree has 45 edges, then it has 46 vertices.
2) A full 4-ary tree with 100 leaves has i = (100 – 1)/(4 – 1) = 33 internal vertices.
3) A full 3-ary tree with 100 internal vertices has l = (3 – 1)*100 + 1 = 201 leaves.
Check Your Progress 2
1) Answers
   a) C, G, H, I and J
   b) I
   c) D
   d) 4
   e) 4
2) Preorder    : A B D G H E I J C
   Postorder   : G H D I J E B C A
   Inorder     : G D H B I E J A C
   Level-order : A B C D E G H I J
3) Array representation of the tree in Figure 6.12:

Index of Node  0  1  2  3  4  5
Item           *  +  3  4  5  ?
Left child     1  3 -1 -1 -1  ?
Right child    2  4 -1 -1 -1  ?

6.11 FURTHER READINGS

1. Fundamentals of Data Structures in C++ by E. Horowitz, S. Sahni and D. Mehta; Galgotia Publications.
2. Data Structures and Program Design in C by Kruse, C.L. Tondo and B. Leung; Pearson Education.

Reference websites:
www.umbc.edu
www.ucsc.edu
UNIT 7 ADVANCED TREES

Structure
7.0 Introduction
7.1 Objectives
7.2 Binary Search Trees
    7.2.1 Traversing a Binary Search Tree
    7.2.2 Insertion of a node into a Binary Search Tree
    7.2.3 Deletion of a node from a Binary Search Tree
7.3 AVL Trees
    7.3.1 Insertion of a node into an AVL tree
    7.3.2 Deletion of a node from an AVL tree
    7.3.3 AVL tree rotations
    7.3.4 Applications of AVL trees
7.4 B-Trees
    7.4.1 Operations on B-trees
    7.4.2 Applications of B-trees
7.5 Summary
7.6 Solutions/Answers
7.7 Further Readings

7.0 INTRODUCTION

Linked list representations have great advantages of flexibility over the contiguous representation of data structures. But they have a few disadvantages also. Data structures organised as trees have a wide range of advantages in various applications and are best suited for problems related to information retrieval. These data structures allow the searching, insertion and deletion of a node in an ordered list to be achieved in the minimum amount of time. The data structures that we discuss primarily in this unit are Binary Search Trees, AVL trees and B-Trees. Some of these trees are special cases of other trees, and trees have a large number of applications in real life. We cover only the fundamentals of these data structures in this unit.

7.1 OBJECTIVES

After going through this unit, you should be able to:
• know the fundamentals of Binary Search trees;
• perform different operations on Binary Search Trees;
• understand the concept of AVL trees;
• understand the concept of B-trees; and
• perform various operations on B-trees.

7.2 BINARY SEARCH TREES

A Binary Search Tree is a binary tree that is either empty or a node containing a key value, left child and right child.
By analysing the above definition, we note that a BST comes in two variants, namely empty BST and non-empty BST. The empty BST has no further structure, while the non-empty BST has three components.

The non-empty BST satisfies the following conditions:
a) The key in the left child of a node (if it exists) is less than the key in its parent node.
b) The key in the right child of a node (if it exists) is greater than the key in its parent node.
c) The left and right subtrees of the root are again binary search trees.

The following are some of the operations that can be performed on Binary Search Trees:
• Creation of an empty tree
• Traversing the BST
• Counting internal nodes (non-leaf nodes)
• Counting external nodes (leaf nodes)
• Counting total number of nodes
• Finding the height of the tree
• Insertion of a new node
• Searching for an element
• Finding the smallest element
• Finding the largest element
• Deletion of a node

7.2.1 Traversing a Binary Search Tree

A Binary Search Tree allows three types of traversals through its nodes. They are as follows:
1. Pre Order Traversal
2. In Order Traversal
3. Post Order Traversal

In Pre Order Traversal, we perform the following three operations:
1. Visit the node
2. Traverse the left subtree in preorder
3. Traverse the right subtree in preorder

In In Order Traversal, we perform the following three operations:
1. Traverse the left subtree in inorder
2. Visit the node
3. Traverse the right subtree in inorder
In Post Order Traversal, we perform the following three operations:
1. Traverse the left subtree in postorder
2. Traverse the right subtree in postorder
3. Visit the root

Consider the BST of Figure 7.1.

K at the root; J, S its children; F under J with child G; M, U under S; L, P under M
Figure 7.1: A Binary Search Tree (BST)

The following are the results of traversing the BST of Figure 7.1:
Preorder  : K J F G S M L P U
Inorder   : F G J K L M P S U
Postorder : G F J L P M U S K

7.2.2 Insertion of a node into a Binary Search Tree

A binary search tree is constructed by the repeated insertion of new nodes into a binary tree structure. Insertion must maintain the order of the tree: the value to the left of a given node must be less than that node, and the value to the right must be greater.

In inserting a new node, the following two tasks are performed:
• The tree is searched to determine where the node is to be inserted.
• On completion of the search, the node is inserted into the tree.

Example: Consider the BST of Figure 7.2. After insertion of a new node consisting of value 5, the BST of Figure 7.3 results.

10 at the root, with 7 (and 3 below it) on the left and 15 on the right
Figure 7.2: A non-empty BST
10 at the root, with 7 (3 and 5 below it) on the left and 15 on the right
Figure 7.3: Figure 7.2 after insertion of 5

7.2.3 Deletion of a node from a Binary Search Tree

The algorithm to delete a node with a given key from a binary search tree is not simple, as many cases need to be considered. The order of the binary tree must be kept intact.

• If the node to be deleted has no sons, then it may be deleted without further adjustment to the tree.
• If the node to be deleted has only one subtree, then its only son can be moved up to take its place.
• If the node p to be deleted has two subtrees, then its inorder successor s must take its place. The inorder successor cannot have a left subtree. Thus, the right son of s can be moved up to take the place of s. This case is complex.

Example: Consider the following cases in which node 5 needs to be deleted.

1. The node to be deleted has no children.
2. The node has one child.
3. The node to be deleted has two children.
Check Your Progress 1
1) What are the different ways of traversing a Binary Search Tree?
2) What are the major features of a Binary Search Tree?

7.3 AVL TREES

An AVL tree is a binary search tree which has the following properties:
• The sub-trees of every node differ in height by at most one.
• Every sub-tree is an AVL tree.

Figure 7.4: Balance requirement for an AVL tree: the left and right subtrees differ by at most one in height

Figure 7.4 depicts an AVL tree. AVL stands for the names of G.M. Adelson-Velskii and E.M. Landis, two Russian mathematicians who came up with this method of keeping the tree balanced. An AVL tree which remains balanced guarantees O(log n) search time, even in the worst case. Here, n is the number of nodes. The AVL data structure achieves this property by placing restrictions on the difference in heights between the sub-trees of a given node and rebalancing the tree even if it violates these restrictions.

An AVL tree is a binary search tree which has the balance property and, in addition to its key, each node stores an extra piece of information: the current balance of its subtree. The three possibilities are:

LEFT-HIGH (balance factor –1): The left child has a height that is greater than the right child's by 1.
BALANCED (balance factor 0): Both children have the same height.
RIGHT-HIGH (balance factor +1): The right child has a height that is greater by 1.

7.3.1 Insertion of a node into an AVL tree

Nodes are initially inserted into an AVL tree in the same manner as into an ordinary binary search tree.
However, the insertion algorithm for an AVL tree travels back along the path it took to find the point of insertion and checks the balance at each node on the path. If a node is found that is unbalanced (if it has a balance factor of either –2 or +2), then a rotation is performed, based on the inserted node's position relative to the node being examined (the unbalanced node).

7.3.2 Deletion of a node from an AVL tree

The deletion algorithm for AVL trees is a little more complex, as there are several extra steps involved in the deletion of a node. If the node to be deleted was originally a leaf node, then it can simply be removed. If the node is not a leaf node, then it has at least one child, and the node must be swapped with either its in-order successor or predecessor. Once the node has been swapped, we can delete it. As done in insertion, we traverse back up the path to the root node, checking the balance of all nodes along the path. If a node is unbalanced, an appropriate rotation is performed to balance that node.

7.3.3 AVL tree rotations

AVL trees and the nodes they contain must meet strict balance requirements to maintain O(log n) search time. These balance restrictions are maintained using various rotation functions. The four possible rotations that can be performed on an unbalanced AVL tree are given below. The before and after status of an AVL tree requiring the rotation are shown (refer to Figures 7.5, 7.6, 7.7 and 7.8).

Figure 7.5: LL Rotation
Figure 7.6: RR Rotation

Figure 7.7: LR Rotation

Figure 7.8: RL Rotation
Example (single rotation in an AVL tree, when a new node is inserted into the AVL tree (LL Rotation)) (refer to Figure 7.9):

Figure 7.9: LL Rotation

The rectangles marked A, B and C are trees of equal height. The shaded rectangle stands for a new insertion in tree C. Before the insertion, the tree was balanced, for the right child was taller than the left child by one. The balance was broken when we inserted a node into the right child of 7, since the difference in height became 2. To fix the balance we make 8 the new root, make C the right child, move the old root (7) down to the left together with its left subtree A, and finally move subtree B across and make it the new right child of 7.

Example (double left rotation when a new node is inserted into the AVL tree (RL rotation)) (refer to Figure 7.10 (a), (b), (c)):
Figure 7.10: Double left rotation when a new node is inserted into the AVL tree

A node was inserted into the subtree C, making the tree off balance by 2 at the root. We first make a right rotation around the node 9, placing the C subtree into the left child of 9. Then a left rotation around the root brings node 9 (together with its children) up a level, and subtree A is pushed down a level (together with node 7). As a result we get a correct AVL tree of equal balance.

An AVL tree can be represented by the following structure:

struct avl
{
    struct node *left;
    int info;
    int bf;
    struct node *right;
};

Here, bf is the balance factor and info is the value in the node.

7.3.4 Applications of AVL Trees

AVL trees are applied in the following situations:
• There are few insertion and deletion operations
• Short search time is needed
• Input data is sorted or nearly sorted

AVL tree structures can be used in situations which require fast searching. But the large cost of rebalancing may limit the usefulness.
Consider the following:

1. A classic problem in computer science is how to store information dynamically so as to allow for quick look-up. This searching problem arises often in dictionaries, telephone directories, symbol tables for compilers and while storing business records etc. The records are stored in a balanced binary tree, based on the order of the keys (alphabetical or numerical). The balanced nature of the tree limits its height to O(log n), where n is the number of inserted records.

2. AVL trees are very fast on searches and replacements, but have a moderately high cost for addition and deletion. If an application does a lot more searches and replacements than additions and deletions, the balanced (AVL) binary tree is a good choice for a data structure.

3. AVL trees also have applications in file systems.

Check Your Progress 2

1) Define the structure of an AVL tree.

…………………………………………………………………………
…………………………………………………………………………

2) Define a multiway tree of order m.

…………………………………………………………………………

7.4 B-TREES

B-trees are special m-ary balanced trees used in databases because their structure allows records to be inserted, deleted and retrieved with guaranteed worst case performance. A B-tree is a specialised multiway tree. In a B-tree each node may contain a large number of keys, and the number of subtrees of each node may also be large. A B-tree is designed to branch out in this large number of directions and to contain a lot of keys in each node, so that the height of the tree is relatively small. This means that only a small number of nodes must be read from disk to retrieve an item.

A B-tree of order m is a multiway search tree of order m such that:

• All leaves are on the bottom level
• All internal nodes (except the root node) have at least m/2 (non-empty) children
• The root node can have as few as 2 children if it is an internal node, and can have no children if the root node is a leaf node
• Each leaf node must contain at least (m/2) - 1 keys

Figure 7.11 depicts a B-tree of order 5. The following is the structure for a B-tree:

struct btree
{
    int count;           /* number of keys stored in the current node */
    item_type key[3];    /* array to hold 3 keys */
    long branch[4];      /* array of fake pointers (record numbers) */
};
Figure 7.11: A B-tree of order 5 (root M; internal nodes E H and P T X; leaves B D, F G, I K L, N O, Q S, V W, Y Z)

7.4.1 Operations on B-Trees

The following are various operations that can be performed on B-trees:

• Search
• Create
• Insert

A B-tree strives to minimise disk accesses, and the nodes are usually stored on disk. All the nodes are assumed to be stored in secondary storage rather than primary storage. All references to a given node are preceded by a read operation. Similarly, once a node is modified and is no longer needed, it must be written out to secondary storage with a write operation.

The following is the algorithm for searching a B-tree:

B-Tree-Search (x, k)
    i ← 1
    while i ≤ n[x] and k > keyi[x]
        do i ← i + 1
    if i ≤ n[x] and k = keyi[x]
        then return (x, i)
    if leaf[x]
        then return NIL
    else Disk-Read (ci[x])
        return B-Tree-Search (ci[x], k)

The search operation is similar to searching a binary search tree. Instead of choosing between a left and a right child as in a binary tree, a B-tree search must make an n-way choice.
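The search above can be sketched in C. This is a minimal sketch, not the text's own program: it assumes the nodes are held in an in-memory array indexed by record number (standing in for Disk-Read), and the node layout follows the struct btree given earlier, with -1 as the null record number.

```c
#define MAX_KEYS 3

typedef int item_type;

struct btree {
    int count;                   /* number of keys stored in this node */
    item_type key[MAX_KEYS];     /* keys, kept in ascending order */
    long branch[MAX_KEYS + 1];   /* record numbers of children; -1 = none */
};

/* Search for k starting at record `root` inside an in-memory array of
   nodes.  Returns the record number of the node containing k, or -1 if
   k is absent. */
long btree_search(const struct btree nodes[], long root, item_type k)
{
    while (root != -1) {
        const struct btree *x = &nodes[root];
        int i = 0;
        while (i < x->count && k > x->key[i])
            i++;                     /* linear scan, as in the text */
        if (i < x->count && k == x->key[i])
            return root;             /* found in this node */
        root = x->branch[i];         /* descend into the i-th child */
    }
    return -1;
}
```

In a real database the descent would issue one disk read per level, which is exactly why the B-tree's small height matters.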
The correct child is chosen by performing a linear search of the values in the node. After finding the value greater than or equal to the desired value, the child pointer to the immediate left of that value is followed. The exact running time of the search operation depends upon the height of the tree.

The following is the algorithm for the creation of a B-tree:

B-Tree-Create (T)
    x ← Allocate-Node ( )
    leaf[x] ← TRUE
    n[x] ← 0
    Disk-Write (x)
    root[T] ← x

The above algorithm creates an empty B-tree by allocating a new root node that has no keys and is a leaf node.

The following is the algorithm for insertion into a B-tree:

B-Tree-Insert (T, k)
    r ← root[T]
    if n[r] = 2t - 1
        then s ← Allocate-Node ( )
            root[T] ← s
            leaf[s] ← FALSE
            n[s] ← 0
            c1[s] ← r
            B-Tree-Split-Child (s, 1, r)
            B-Tree-Insert-Nonfull (s, k)
        else B-Tree-Insert-Nonfull (r, k)

To perform an insertion on a B-tree, the appropriate node for the key must first be located. Next, the key must be inserted into the node. If the node is not full prior to the insertion, no special action is required. If the node is full, it must be split to make room for the new key. Since splitting the node results in moving one key to the parent node, the parent node must not be full; otherwise, another split operation is required. This process may repeat all the way up to the root and may require splitting the root node.

Example: Insertion of key 33 into a B-tree (with split) (refer to Figure 7.12)

Step 1: Search the first node for the key nearest to 33. Key 30 is found.

10 20 30 | [2 4 6] [12 15 17 19] [21 27] [32 35 36 41 53]
Step 2: The node pointed to by key 30 is searched for inserting 33. The node is split and 36 is shifted upwards.

10 20 30 | [2 4 6] [12 15 17 19] [21 27] [32 35 36 41 53]

Step 3: Key 33 is inserted between 32 and 35.

10 20 30 36 | [2 4 6] [12 15 17 19] [21 27] [32 33 35] [41 53]

Figure 7.12: A B-tree

Deletion of a key from a B-tree is possible, but care must be taken to ensure that the properties of a B-tree are maintained. If the deletion reduces the number of keys in a node below the minimum degree of the tree, this violation must be corrected by combining several nodes and possibly reducing the height of the tree. If the key has children, the children must be rearranged.

Example: Searching a B-tree for key 21 (refer to Figure 7.13)

Step 1: Search for key 21 in the first node. 21 is between 20 and 30.

10 20 30 | [2 4 6] [12 15 17 19] [21 27] [32 35 36 41 53]

Step 2: Searching is conducted in the child node lying between 20 and 30, where 21 is found.

10 20 30 | [2 4 6] [12 15 17 19] [21 27] [32 35 36 41 53]

Figure 7.13: A B-tree
7.4.2 Applications of B-trees

A database is a collection of data organised in a fashion that facilitates updation, retrieval and management of the data. B-trees are used extensively to insert, delete and retrieve records from databases. Indexing large amounts of data can significantly improve search performance: searching an unindexed database containing n keys will have a worst case running time of O(n), whereas if the same data is indexed with a B-tree, the same search operation will run in O(log n) time.

Check Your Progress 3

1) Create a B-tree of order 5 for the following: CNGAHEKQMSWLTZDPRXYS

…………………………………………………………………………………
…………………………………………………………………………………

7.5 SUMMARY

In this unit, we discussed Binary Search Trees, AVL trees and B-trees. The major feature of a Binary Search Tree is that all the elements whose values are less than the root reside in the nodes of the left subtree of the root, and all the elements whose values are larger than the root reside in the nodes of the right subtree. The same rule is applicable for all the subtrees in a BST. An AVL tree is a height balanced tree: the heights of the left and right subtrees of the root differ by at most one, and the same rule is applicable to all the subtrees of the AVL tree. A B-tree is a balanced m-ary search tree; there can be multiple elements in each node of a B-tree.

7.6 SOLUTIONS/ANSWERS

Check Your Progress 1

1) preorder, postorder and inorder

2) The major feature of a Binary Search Tree is that all the elements whose values are less than the root reside in the nodes of the left subtree of the root, and all the elements whose values are larger than the root reside in the nodes of the right subtree. The same rule is applicable to all the left and right subtrees of a BST.

Check Your Progress 2

1) The following is the structure of an AVL tree:

struct avl
{
    struct node *left;
    int info;
    int bf;
    struct node *right;
};
2) A multiway tree of order m is an ordered tree where each node has at most m children. For each node, if k is the actual number of children in the node, then k-1 is the number of keys in the node. If the keys and subtrees are arranged in the fashion of a search tree, then this is a multiway search tree of order m.

Check Your Progress 3

1) The resulting B-tree of order 5: root M; internal nodes D G and Q T; leaves A C, E F, H K L, N P, R S, W X Y Z.

7.7 FURTHER READINGS

1. Data Structures using C and C++ by Yedidyah Langsam, Moshe J. Augenstein and Aaron M. Tenenbaum, PHI Publications.
2. Fundamentals of Data Structures in C by R.B. Patel, PHI Publications.

Reference Websites

http://www.cs.edu
http://www.fredosaurus.com
UNIT 8 GRAPHS

Structure

8.0 Introduction
8.1 Objectives
8.2 Definitions
8.3 Shortest Path Algorithms
    8.3.1 Dijkstra's Algorithm
    8.3.2 Graphs with Negative Edge Costs
    8.3.3 Acyclic Graphs
    8.3.4 All Pairs Shortest Paths Algorithm
8.4 Minimum Cost Spanning Trees
    8.4.1 Kruskal's Algorithm
    8.4.2 Prim's Algorithm
    8.4.3 Applications
8.5 Breadth First Search
8.6 Depth First Search
8.7 Finding Strongly Connected Components
8.8 Summary
8.9 Solutions/Answers
8.10 Further Readings

8.0 INTRODUCTION

In this unit, we will discuss a data structure called Graph. In fact, a graph is a general tree with no parent-child relationship. Graphs have many applications in computer science and other fields of science. In general, graphs represent a relatively less restrictive relationship between data items. We shall discuss both undirected graphs and directed graphs. The unit also includes information on different algorithms which are based on graphs.

8.1 OBJECTIVES

After going through this unit, you should be able to:

• know about graphs and related terminologies;
• know about directed and undirected graphs along with their representations;
• know different shortest path algorithms;
• construct minimum cost spanning trees;
• apply depth first search and breadth first search algorithms, and
• find strongly connected components of a graph.

8.2 DEFINITIONS

A graph G may be defined as a finite set V of vertices and a set E of edges (pairs of connected vertices). The notation used is as follows: Graph G = (V, E). Consider the graph of Figure 8.1.
Figure 8.1: A graph

The set of vertices for the graph is V = {1, 2, 3, 4, 5}. The set of edges for the graph is E = {(1,2), (1,3), (1,5), (2,5), (3,4), (4,5)}.

It may be noted that, unlike nodes of a tree, a graph has a very limited relationship between its vertices (nodes). For example, in Figure 8.1, vertices 5 and 4 are adjacent, whereas there is no direct relationship between vertices 1 and 4, although they are connected through 3.

Graph terminologies:

Adjacent vertices: Two vertices a and b are said to be adjacent if there is an edge connecting a and b.

Path: A path is defined as a sequence of distinct vertices, in which each vertex is adjacent to the next. For example, the path from 1 to 4 can be defined as the sequence of adjacent vertices (1,3), (3,4).

Path length: It is the number of edges on the path.

Simple path: It is a path on which all vertices are distinct (except possibly the first and last).

Cycle: A graph contains a cycle if there is a path of non-zero length through the graph, p = <v0, v1, ..., vk>, such that v0 = vk.

Edge weight: It is the cost associated with an edge.

Loop: It is an edge of the form (v,v).

Directed graph and undirected graph: If every edge (a,b) in a graph is marked by a direction from a to b, then we call it a directed graph (digraph). On the other hand, if directions are not marked on the edges, then the graph is called an undirected graph. In a directed graph, the edges (1,5) and (5,1) represent two different edges, whereas in an undirected graph, (1,5) and (5,1) represent the same edge.

Spanning trees: A spanning tree of a graph, G, is a set of |V|-1 edges that connect all vertices of the graph.

Graphs are used in various types of modelling. For example, graphs can be used to represent connecting roads between cities.
There are different representations of a graph. They are:

• Adjacency list representation
• Adjacency matrix representation

Adjacency list representation

An adjacency list representation of a graph G = (V, E) consists of an array of adjacency lists, denoted by adj, of |V| lists. For each vertex u ∈ V, adj[u] consists of all vertices adjacent to u in the graph G. Consider the graph of Figure 8.2.

Figure 8.2: A graph

The following is the adjacency list representation of the graph of Figure 8.2:

adj[1] = {2, 3, 5}
adj[2] = {1, 4, 5}
adj[3] = {1, 4, 5}
adj[4] = {2, 3, 5}
adj[5] = {1, 2, 3, 4}

An adjacency matrix representation of a graph G = (V, E) is a matrix A = (aij) such that

aij = 1 if edge (i, j) belongs to E
    = 0 otherwise

The adjacency matrix for the graph of Figure 8.2 is given below:

        1   2   3   4   5
  1     0   1   1   0   1
  2     1   0   0   1   1
  3     1   0   0   1   1
  4     0   1   1   0   1
  5     1   1   1   1   0

Observe that the matrix is symmetric along the main diagonal. If we define the adjacency matrix as A and its transpose as AT, then for an undirected graph G as above, A = AT.
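These two facts, building the matrix from an undirected edge list and its symmetry, can be sketched in C. This is a minimal sketch, not the text's own program; it uses the edge list of Figure 8.2 as reconstructed above.

```c
#include <stdbool.h>

#define ADJ_N 5   /* vertices 1..5 of Figure 8.2, stored 0-based */

/* Fill the adjacency matrix a from an undirected edge list. */
void build_adjacency(int a[ADJ_N][ADJ_N], int edges[][2], int nedges)
{
    for (int i = 0; i < ADJ_N; i++)
        for (int j = 0; j < ADJ_N; j++)
            a[i][j] = 0;
    for (int e = 0; e < nedges; e++) {
        int u = edges[e][0] - 1;   /* 1-based vertex names to 0-based rows */
        int v = edges[e][1] - 1;
        a[u][v] = 1;
        a[v][u] = 1;               /* undirected: keep the matrix symmetric */
    }
}

/* True when a equals its transpose, i.e. A = AT as the text claims
   for undirected graphs. */
bool is_symmetric(int a[ADJ_N][ADJ_N])
{
    for (int i = 0; i < ADJ_N; i++)
        for (int j = 0; j < ADJ_N; j++)
            if (a[i][j] != a[j][i])
                return false;
    return true;
}
```

An adjacency list would instead store, for each row, only the column indices holding 1; the matrix form trades memory for O(1) edge tests.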
Graph connectivity: A connected graph is a graph in which a path exists between every pair of vertices. A complete graph is a graph in which there exists an edge between every pair of vertices. A strongly connected graph is a directed graph in which every pair of distinct vertices is connected with each other. A weakly connected graph is a directed graph whose underlying undirected graph is connected, but which is not strongly connected.

Check Your Progress 1

1) A graph with no cycle is called _______ graph.
2) Adjacency matrix of an undirected graph is __________ on the main diagonal.
3) Represent the following graphs (Figure 8.3 and Figure 8.4) by adjacency matrix:

Figure 8.3: A Directed Graph

Figure 8.4: A Graph

8.3 SHORTEST PATH ALGORITHMS

A driver takes the shortest possible route to reach a destination. The problem that we will discuss here is similar to this kind of finding the shortest route in a graph. The graphs here are weighted directed graphs. The weight could be time, cost, or losses other than distance, designated by numerical values.

Single source shortest path problem: To find a shortest path from a single source to every vertex of the graph. Consider a graph G = (V, E). We wish to find out the shortest path from a single source vertex s ∈ V to every vertex v ∈ V. The single source shortest path algorithm (Dijkstra's algorithm) is based on the assumption that no edges have negative weights.

The procedure followed to find shortest paths is based on a concept called relaxation. The basic operation of Dijkstra's algorithm is edge relaxation: if there is an edge from u to v, then the shortest known path from s to u can be extended to a path from s to v by adding edge (u,v) at the end. This path will have length d[u] + w(u,v). If this is less than d[v], we can replace the current value of d[v] with the new value. Please note that a shortest path between two vertices contains other shortest paths within it.

8.3.1 Dijkstra's Algorithm

Dijkstra's algorithm (named after its discoverer, Dutch computer scientist E.W. Dijkstra) solves the problem of finding the shortest path from a point in a graph (the source) to a destination with non-negative edge weights. It turns out that one can find the shortest paths from a given source to all vertices (points) in a graph in the same time; hence, this problem is sometimes called the single-source shortest paths problem. Dijkstra's algorithm is a greedy algorithm. Before describing the algorithm formally, let us study the method through an example.

Figure 8.5: A directed graph with no negative edge(s)

Dijkstra's algorithm keeps two sets of vertices:

S: the set of vertices whose shortest paths from the source have already been determined;
Q = V-S: the set of remaining vertices.

The other data structures needed are:

d: an array of best estimates of the shortest path to each vertex from the source;
pi: an array of predecessors for each vertex. The predecessor list is an array of indices, one for each vertex of the graph; each vertex entry contains the index of its predecessor on a path through the graph.
This method repeatedly decreases the upper bound on the actual shortest-path weight of each vertex from the source until it equals the shortest-path weight.
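The relaxation step itself is small enough to write out directly. This is a minimal sketch in C, not the unit's own program: d, pi and the weight w follow the arrays described above, with INT_MAX standing in for infinity.

```c
#include <limits.h>

/* One edge relaxation: if going through u improves the current estimate
   for v, update the estimate d[v] and record u as v's predecessor. */
void relax(int u, int v, int w, int d[], int pi[])
{
    /* guard: u must already be reachable, or d[u] + w would overflow */
    if (d[u] != INT_MAX && d[u] + w < d[v]) {
        d[v] = d[u] + w;
        pi[v] = u;
    }
}
```

Dijkstra's algorithm is essentially a disciplined order in which to call this function, so that each vertex needs to be relaxed from only once.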
Operation of the Algorithm

The following sequence of diagrams illustrates the operation of Dijkstra's algorithm. The bold vertices indicate the vertices to which the shortest path has been determined.

Initialise the graph: all vertices have infinite costs except the source vertex, which has zero cost.

From all the vertices, choose the closest vertex to the source s. As we initialised d[s] to 0, it is s itself. Add it to S (shown as a bold circle). Relax all vertices adjacent to s, i.e. u and x: update vertices u and x with 10 and 5 as their distances from s. Predecessor of u = s, predecessor of x = s.

Choose the nearest vertex, x, and add x to S. Relax all vertices adjacent to x, i.e. u, v and y, and update their predecessors: predecessor of v = x, predecessor of y = x.

Now y is the closest vertex; add it to S. Relax v and adjust its predecessor.
u is now closest, add it to S and adjust its adjacent vertex, v.
Finally, add v to S. The predecessor list now defines the shortest path from each node to s.
Dijkstra's algorithm

/* Initialise d and pi */
for each vertex v in V(g)
    g.d[v] := infinity
    g.pi[v] := nil
g.d[s] := 0

/* Set S to empty */
S := { }
Q := V(g)

/* While (V-S) is not empty */
while not Empty(Q)
    1. Sort the vertices in V-S according to the current best estimate of
       their distance from the source:
           u := Extract-Min(Q)
    2. Add vertex u, the closest vertex in V-S, to S:
           AddNode(S, u)
    3. Relax all the vertices still in V-S connected to u:
           relax(Node u, Node v, double w[][])
               if d[v] > d[u] + w[u][v] then
                   d[v] := d[u] + w[u][v]
                   pi[v] := u

In summary, this algorithm starts by assigning a weight of infinity to all vertices, and then selecting a source and assigning a weight of zero to it. Vertices are added to the set for which shortest paths are known. When a vertex is selected, the weights of its adjacent vertices are relaxed and their predecessors are updated (pi). The cycle of selection, weight relaxation and predecessor update is repeated until the shortest path to all vertices has been found.

Complexity of the Algorithm

The simplest implementation of Dijkstra's algorithm stores the vertices of set Q in an ordinary linked list or array, and the operation Extract-Min(Q) is simply a linear search through all vertices in Q. In this case, the running time is Θ(n²).
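The Θ(n²) array-based implementation just described can be sketched in C. This is a minimal sketch, not the unit's own program: it works on an adjacency matrix with INT_MAX standing in for infinity, and the graph used in the test is hypothetical rather than the graph of Figure 8.5.

```c
#include <limits.h>

#define NV 5
#define INF INT_MAX

/* O(n^2) Dijkstra: repeatedly extract the closest unvisited vertex by
   linear search, then relax its outgoing edges.  d[] receives the
   distances from s, pi[] the predecessor of each vertex (-1 for none). */
void dijkstra(int w[NV][NV], int s, int d[NV], int pi[NV])
{
    int done[NV] = {0};
    for (int v = 0; v < NV; v++) { d[v] = INF; pi[v] = -1; }
    d[s] = 0;
    for (int iter = 0; iter < NV; iter++) {
        int u = -1;
        for (int v = 0; v < NV; v++)       /* Extract-Min by linear scan */
            if (!done[v] && (u == -1 || d[v] < d[u]))
                u = v;
        if (u == -1 || d[u] == INF)
            break;                         /* remaining vertices unreachable */
        done[u] = 1;
        for (int v = 0; v < NV; v++)       /* relax edges leaving u */
            if (w[u][v] != INF && d[u] + w[u][v] < d[v]) {
                d[v] = d[u] + w[u][v];
                pi[v] = u;
            }
    }
}
```

Replacing the linear scan with a binary heap brings the running time down to O((V + E) log V), which is the form usually quoted for sparse graphs.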
8.3.2 Graphs with Negative Edge Costs
We have seen that the above Dijkstra's single-source shortest-path algorithm works for graphs with non-negative edges (like road networks). The following two scenarios can emerge out of negative cost edges in a graph:

• A negative edge with a non-negative weight cycle reachable from the source.
• A negative edge with a negative weight cycle reachable from the source.
Figure 8.6: A graph with a negative edge and a non-negative weight cycle
The net weight of the cycle is 2 (non-negative) (refer to Figure 8.6).

Figure 8.7: A graph with a negative edge and a negative weight cycle
The net weight of the cycle is -3 (negative) (refer to Figure 8.7). The shortest path from S to B is not well defined, as the cost of a path to this vertex can be made arbitrarily small: by travelling around the cycle once more, we can decrease the cost of the path by 3. That is, if (S, A, B) is a path, then (S, A, B, A, B) is a path with less cost, and so on. Dijkstra's algorithm works only for directed graphs with non-negative weights (costs).
8.3.3 Acyclic Graphs
A path in a directed graph is said to form a cycle if there exists a path (A, B, C, ..., P) such that A = P. A graph is called acyclic if there is no cycle in the graph.
8.3.4 All Pairs Shortest Paths Algorithm
In the last section, we discussed a shortest path algorithm which starts with a single source and finds the shortest path to all vertices in the graph. In this section, we shall discuss the problem of finding the shortest path between all pairs of vertices in a graph. This problem is helpful in finding the distance between all pairs of cities in a road atlas. The all pairs shortest paths problem is the mother of all shortest paths problems.

In this algorithm, we will represent the graph by an adjacency matrix. The weight of an edge Cij in the adjacency matrix representation of a directed graph is represented as follows:

Cij = 0                                             if i = j
    = weight of the directed edge (i, j)            if i ≠ j and (i, j) belongs to E
    = ∞                                             if i ≠ j and (i, j) does not belong to E
Given a directed graph G = (V, E), where each edge (v, w) has a non-negative cost C(v, w), for all pairs of vertices (v, w) we want to find the lowest cost path from v to w. The all pairs shortest paths problem can be considered as a generalisation of the single-source shortest path problem, obtained by using Dijkstra's algorithm and varying the source node among all the nodes in the graph. If negative edges are allowed, then we can't use Dijkstra's algorithm.

In this section we shall use a recursive solution to the all pairs shortest paths problem known as the Floyd-Warshall algorithm, which runs in O(n³) time. This algorithm is based on the following principle. For graph G, let V = {1, 2, 3, ..., n}, and consider the subset {1, 2, 3, ..., k} of these vertices. For any pair of vertices i, j belonging to V, consider all paths from i to j whose intermediate vertices are from {1, 2, 3, ..., k}, and let p be a minimum-weight path among them. The algorithm exploits the relationship between path p and shortest paths from i to j whose intermediate vertices are from {1, 2, 3, ..., k-1}, with the following two possibilities:

1. If k is not an intermediate vertex in the path p, then all the intermediate vertices of the path p are in {1, 2, 3, ..., k-1}. Thus, the shortest path from i to j with intermediate vertices in {1, 2, 3, ..., k-1} is also the shortest path from i to j with intermediate vertices in {1, 2, 3, ..., k}.

2. If k is an intermediate vertex of the path p, we break down the path p into a path p1 from vertex i to k and a path p2 from vertex k to j. Then path p1 is the shortest path from i to k with intermediate vertices in {1, 2, 3, ..., k-1}, and similarly p2 is the shortest path from k to j with intermediate vertices in {1, 2, 3, ..., k-1}.

During the iteration process we find the shortest path from i to j using only vertices {1, 2, 3, ..., k-1}, and in the next step we find the cost of using the kth vertex as an intermediate step. If this results in a lower cost, then we store it. After n iterations (all possible iterations), we find the lowest cost path from i to j using all vertices (if necessary).
Note the following. Initialise the matrix:

C[i][j] = ∞ if (i, j) does not belong to E for graph G = (V, E)
C[i][j] = 0 if i = j

Initially, D[i][j] = C[i][j]. We also define a path matrix P, where P[i][j] holds the intermediate vertex k on the least cost path from i to j that leads to the shortest path from i to j.

Algorithm (All Pairs Shortest Paths)

N = number of rows of the graph
D[i][j] = C[i][j]
For k from 1 to n
    Do for i = 1 to n
        Do for j = 1 to n
            D[i][j] = minimum( dij(k-1), dik(k-1) + dkj(k-1) )
        Enddo
    Enddo
Enddo

where dij(k-1) = minimum path from i to j using k-1 intermediate vertices,
dik(k-1) = minimum path from i to k using k-1 intermediate vertices, and
dkj(k-1) = minimum path from k to j using k-1 intermediate vertices.

Program 8.1 gives the program segment for the all pairs shortest paths algorithm.

AllPairsShortestPaths(int N, Matrix C, Matrix D, Matrix P)
{
    int i, j, k;
    for (i = 0; i < N; i++) {
        for (j = 0; j < N; j++) {
            D[i][j] = C[i][j];
            P[i][j] = -1;
        }
        D[i][i] = 0;
    }
    for (k = 0; k < N; k++) {
        for (i = 0; i < N; i++) {
            for (j = 0; j < N; j++) {
                if (D[i][k] + D[k][j] < D[i][j]) {
                    D[i][j] = D[i][k] + D[k][j];
                    P[i][j] = k;
                }
            }
        }
    }
}
/*********** End *************/
Program 8.1: Program segment for the all pairs shortest paths algorithm

From the above algorithm, it is evident that it has O(N³) time complexity. Shortest path algorithms have numerous applications in the areas of operations research, computer science, electrical engineering and other related areas.

Check Your Progress 2

1) _________ is the basis of Dijkstra's algorithm.
2) What is the complexity of the all pairs shortest paths algorithm?

………………………………………………………………………………

8.4 MINIMUM COST SPANNING TREES

A spanning tree of a graph is just a subgraph that contains all the vertices and is a tree (with no cycle). A graph may have many spanning trees. Consider the graph of Figure 8.8. Its spanning trees are shown in Figure 8.9.

Figure 8.8: A graph

Figure 8.9: Spanning trees of the graph of Figure 8.8

Now, suppose the graph is a weighted graph (a length is associated with each edge). The weight of a tree is just the sum of the weights of its edges. Different spanning trees generally have different weights or lengths; our objective is to find the minimum length (weight) spanning tree.

Suppose we have a group of islands that we wish to link with bridges so that it is possible to travel from one island to any other in the group. The set of bridges which will enable one to travel from any island to any other at minimum capital cost to the government is the minimum cost spanning tree.

8.4.1 Kruskal's Algorithm

Kruskal's algorithm uses the concept of a forest of trees. Initially the forest consists of n single-node trees (and no edges). At each step, we add one edge (the cheapest one) so that it joins two trees together. If the edge were to form a cycle, it would simply mean that it links two nodes that were already connected, so we reject it.
The steps in Kruskal's algorithm are as follows:

1. The forest is constructed from the graph G, with each node as a separate tree in the forest.
2. The edges are placed in a priority queue.
3. Do until we have added n-1 edges to the graph:
   (i) Extract the cheapest edge from the queue.
   (ii) If it forms a cycle, then a link already exists between the concerned nodes; hence reject it.
   (iii) Else add it to the forest. Adding it to the forest will join two trees together.

The forest of trees is a partition of the original set of nodes. Initially all the trees have exactly one node in them. As the algorithm progresses, we form a union of two of the trees (sub-sets), until eventually the partition has only one sub-set containing all the nodes.

Let us see the sequence of operations to find the minimum cost spanning tree (MST) in a graph using Kruskal's algorithm. Consider the graph of Figure 8.10. Figure 8.11 shows the construction of the MST of the graph of Figure 8.10.

Figure 8.10: A graph (edge weights 3, 4, 6, 7, 8, 9, 12, 14 and 22)
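The steps above can be sketched in C. This is a minimal sketch, not the text's own program: it uses qsort in place of the priority queue, and a simple union-find forest to detect whether a candidate edge would form a cycle.

```c
#include <stdlib.h>

struct edge { int u, v, w; };

/* Find the root of x's tree, halving paths as we go. */
static int find_root(int parent[], int x)
{
    while (parent[x] != x) {
        parent[x] = parent[parent[x]];   /* path halving */
        x = parent[x];
    }
    return x;
}

static int cmp_edge(const void *a, const void *b)
{
    return ((const struct edge *)a)->w - ((const struct edge *)b)->w;
}

/* Kruskal: sort the edges by weight, then add each edge whose endpoints
   lie in different trees of the forest.  Returns the total MST weight
   over vertices 0..n-1. */
int kruskal(int n, struct edge edges[], int nedges)
{
    int parent[64];                      /* sketch assumes n <= 64 */
    for (int i = 0; i < n; i++)
        parent[i] = i;                   /* n single-node trees */
    qsort(edges, nedges, sizeof edges[0], cmp_edge);
    int total = 0, added = 0;
    for (int e = 0; e < nedges && added < n - 1; e++) {
        int ru = find_root(parent, edges[e].u);
        int rv = find_root(parent, edges[e].v);
        if (ru != rv) {                  /* no cycle: union the two trees */
            parent[ru] = rv;
            total += edges[e].w;
            added++;
        }
    }
    return total;
}
```

The union-find forest is exactly the "partition of the original set of nodes" mentioned above: two vertices share a root precisely when they are already connected.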
Figure 8.11: Construction of minimum cost spanning tree for the graph of Figure 8.10 by application of Kruskal's algorithm

The following are the various steps in the construction of the MST for the graph of Figure 8.10 using Kruskal's algorithm:

Step 1: The lowest cost edge which is not in the MST is selected from the graph (initially the MST is empty). The lowest cost edge is 3, which is added to the MST (shown in bold edges).

Step 2: The next lowest cost edge which is not in the MST is added (edge with cost 4).

Step 3: The next lowest cost edge which is not in the MST is added (edge with cost 6).

Step 4: The next lowest cost edge which is not in the MST is added (edge with cost 7).

Step 5: The next lowest cost edge which is not in the MST is 8, but it will form a cycle, so it is discarded. The next lowest cost edge, 9, is added. Now the MST contains all the vertices of the graph. This results in the MST of the original graph.

8.4.2 Prim's Algorithm

Prim's algorithm uses the concept of sets. Rather than building a sub-graph by adding one edge at a time, Prim's algorithm builds a tree one vertex at a time. Instead of processing the graph in sorted order of edges, this algorithm processes the edges in the graph randomly by building up disjoint sets. It uses two disjoint sets A and A'. Prim's algorithm works by iterating through the nodes and then finding the shortest edge from the set A to the set A' (i.e., outside A), followed by the addition of that node to the new graph. When all the nodes are processed, we have a minimum cost spanning tree.
The steps in Prim's algorithm are as follows:

Let G be the graph with n vertices for which the minimum cost spanning tree is to be generated, and let T be the minimum spanning tree. Let T begin as a single vertex x.

while (T has fewer than n vertices)
{
    find the smallest edge connecting T to G-T
    add it to T
}

Consider the graph of Figure 8.10. Figure 8.12 shows the various steps involved in the construction of the minimum cost spanning tree of the graph of Figure 8.10 using Prim's algorithm.

Figure 8.12: Construction of minimum cost spanning tree for the graph of Figure 8.10 by application of Prim's algorithm

Step 1: We start with a single vertex (node). Now the set A contains this single node, and the set A' contains the rest of the nodes. The edge with cost 4 is added.
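The loop above can be sketched in C. This is a minimal O(n²) sketch on an adjacency matrix, not the text's own program; the graph used in the test is hypothetical, not the graph of Figure 8.10.

```c
#include <limits.h>

#define PN 5
#define PINF INT_MAX

/* Prim: grow the tree from vertex 0.  best[v] holds the cheapest edge
   weight from the tree set A to vertex v outside it; each step moves
   the cheapest such vertex into the tree.  Returns the MST weight. */
int prim(int w[PN][PN])
{
    int in_tree[PN] = {0};
    int best[PN];
    for (int v = 0; v < PN; v++)
        best[v] = PINF;
    best[0] = 0;                       /* start the tree at vertex 0 */
    int total = 0;
    for (int step = 0; step < PN; step++) {
        int u = -1;
        for (int v = 0; v < PN; v++)   /* cheapest crossing edge */
            if (!in_tree[v] && (u == -1 || best[v] < best[u]))
                u = v;
        in_tree[u] = 1;
        total += best[u];
        for (int v = 0; v < PN; v++)   /* re-check edges leaving new vertex */
            if (!in_tree[v] && w[u][v] != PINF && w[u][v] < best[v])
                best[v] = w[u][v];
    }
    return total;
}
```

Note the contrast with Kruskal: here the frontier of cheapest crossing edges is maintained per vertex, so no global sort of the edges is needed.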
Step 2: The lowest cost edge from the shaded portion of the graph (set A) to the rest of the graph (edge with cost 3) is selected and added to the MST.

Step 3: The lowest cost edge from the shaded portion of the graph to the rest of the graph (edge with cost 6) is selected and added to the MST.

Step 4: The lowest cost edge from the shaded portion of the graph to the rest of the graph (edge with cost 7) is selected and added to the MST.

Step 5: The next lowest cost edge to the set not in the MST is 8, but it forms a cycle, so it is discarded. The next lowest cost edge, 9, is added. Now the MST contains all the vertices of the graph. This results in the MST of the original graph.

Comparison of Kruskal's algorithm and Prim's algorithm

                 Kruskal's algorithm                   Prim's algorithm
Principle        Based on the generic minimum          A special case of the generic minimum
                 cost spanning tree algorithm          cost spanning tree algorithm; operates
                                                       like Dijkstra's algorithm for finding
                                                       the shortest path in a graph
Operation        Operates on a single set of           Operates on two disjoint sets of
                 edges in the graph                    edges in the graph
Running time     O(E log E), where E is the            O(E log V), which is asymptotically
                 number of edges in the graph          the same as Kruskal's algorithm

From the above comparison, it may be observed that for dense graphs, having a larger number of edges for a given number of vertices, Prim's algorithm is more efficient.

8.4.3 Applications

The minimum cost spanning tree has wide applications in different fields. It represents many complicated real world problems, like:

1. Minimum distance for travelling all cities at most once (travelling salesman problem).
2. In electronic circuit design, connecting n pins by using n-1 wires, using the least wire.
3. Obtaining an independent set of circuit equations for an electrical network.

8.5 BREADTH FIRST SEARCH (BFS)

When BFS is applied, the vertices of the graph are divided into two categories: the vertices which are visited as part of the search, and those which are not visited as part of the search. The strategy adopted in breadth first search is to start the search at a vertex (source). Once you have started at the source, the number of vertices that are visited as part of the search is 1, and all the remaining vertices need to be visited. Then, search the vertices which are adjacent to the visited vertex in left to right order. In this way, all the vertices of the graph are searched.

Consider the digraph of Figure 8.13. Suppose that the search starts from S. The vertices (from left to right) adjacent to S which are not visited as part of the search are B, C and A.
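Before continuing the walkthrough, the level-by-level, left-to-right strategy just described can be sketched in C. This is a minimal sketch on a small hypothetical adjacency matrix (it does not reproduce the exact graph of Figure 8.13): the source is marked visited, and then each dequeued vertex enqueues its unvisited neighbours in index order.

```c
#define BN 6

/* Breadth first search over an adjacency matrix using an array-based
   queue.  order[] receives the visit sequence; the return value is the
   number of vertices reached from the source. */
int bfs(int adj[BN][BN], int source, int order[BN])
{
    int visited[BN] = {0};
    int queue[BN], head = 0, tail = 0;
    visited[source] = 1;
    queue[tail++] = source;
    int n = 0;
    while (head < tail) {
        int u = queue[head++];
        order[n++] = u;
        for (int v = 0; v < BN; v++)      /* neighbours in index order */
            if (adj[u][v] && !visited[v]) {
                visited[v] = 1;           /* mark on enqueue, not dequeue */
                queue[tail++] = v;
            }
    }
    return n;
}
```

Marking a vertex when it is enqueued (rather than dequeued) is what keeps each vertex in the queue at most once.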
So B, C and A are visited after S as part of the BFS. Then F is the unvisited vertex adjacent to B, so the visit to B, C and A is followed by F. The unvisited vertex adjacent to C is D, so the visit to F is followed by D. There are no unvisited vertices adjacent to A. Finally, the unvisited vertex E adjacent to D is visited. Hence, the sequence of vertices visited as part of the BFS is S, B, C, A, F, D and E.

Figure 8.13: A digraph (vertices S, A, B, C, D, E and F)

8.6 DEPTH FIRST SEARCH (DFS)

The strategy adopted in depth first search is to search deeper whenever possible.
D and E. if a vertex is found visited in this process. it backtracks to previous vertex to find out whether there are still unvisited vertices. Find vertex that is adjacent to the source and not previously visited using adjacency matrix and mark it visited. F. at last to S. the unvisited vertex E adjacent to D is visited.14 shows a DFS tree with a sequence of visits. A. Now S has an unvisited vertex B.13. backtrack to previous vertex D as it also has no unvisited vertex. This algorithm repeatedly searches deeper by visiting unvisited vertices and whenever an unvisited vertex is not found. Now there are no adjacent vertices of E to be visited next. The DFS is more or less similar to pre-order tree traversal. Start with S and mark it visited. Now backtrack to C. Start DFS with B as a root node and then visit F. The first number indicates the time at which the vertex is visited first and the second number indicates the time at which the vertex is visited during back tracking. As seen. Hence. now. We can find a very simple recursive procedure to visit the vertices in a depth first search. C. Now all the nodes of the graph are visited. the sequence of vertices visited as part of BFS is S.13 : A Digraph Consider the digraph of Figure 8. Graphs 8. then DFS from the originally selected source is complete and start DFS using any unvisited vertex. Then visit the next vertex A. If returning back to source is not possible. 35 . S A E B C D F Figure 8. The process can be described as below: Start from any vertex (source) in the graph and mark it visited.6 DEPTH FIRST SEARCH (DFS) The strategy adopted in depth first search is to search deeper whenever possible. then A. So. then C and then D and at last E. the search defined above is inherently recursive. Figure 8. Finally.unvisited vertices adjacent to A. Repeat this process for all vertices that is not visited. B. then return to the previous step and start the same procedure from there.
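The BFS and DFS strategies described above can be sketched in C. This is a minimal illustration, not code from the unit: it assumes the graph is stored as an adjacency matrix, and the function and variable names are our own.

```c
#define MAXV 20

/* Breadth first search: visit the source, then its unvisited
   neighbours from left to right, using a queue, as described in
   Section 8.5. The visit sequence is written into order[]. */
void bfs(int adj[MAXV][MAXV], int n, int source, int order[])
{
    int queue[MAXV], front = 0, rear = 0;
    int visited[MAXV] = {0};
    int v, w, count = 0;

    visited[source] = 1;
    queue[rear++] = source;                /* enqueue the source */
    while (front < rear) {
        v = queue[front++];                /* dequeue */
        order[count++] = v;
        for (w = 0; w < n; w++)            /* neighbours, left to right */
            if (adj[v][w] && !visited[w]) {
                visited[w] = 1;
                queue[rear++] = w;
            }
    }
}

/* Depth first search: the simple recursive procedure mentioned in
   Section 8.6; backtracking happens when the recursion returns. */
void dfs(int adj[MAXV][MAXV], int n, int v, int visited[],
         int order[], int *count)
{
    int w;
    visited[v] = 1;
    order[(*count)++] = v;
    for (w = 0; w < n; w++)
        if (adj[v][w] && !visited[w])
            dfs(adj, n, w, visited, order, count);
}
```

On a small digraph with edges 0-1, 0-2 and 1-3, BFS from 0 visits 0, 1, 2, 3 while DFS from 0 visits 0, 1, 3, 2, matching the left-to-right rule described in the text.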
Figure 8.14: DFS tree of the digraph of Figure 8.13 (each vertex is labelled with its discovery time and finishing time, e.g. 1/10, 2/9, 3/8, 4/7, 5/6, 11/14, 12/13)

The DFS forest is shown with shaded arrows in Figure 8.14.

Algorithm for DFS

Step 1: Select a vertex in the graph, make it the source vertex and mark it visited.
Step 2: Find a vertex that is adjacent to the source vertex and start a new search if it is not already visited.
Step 3: Repeat step 2 using a new source vertex. When all adjacent vertices are visited, return to the previous source vertex and continue the search from there.

If n is the number of vertices in the graph and the graph is represented by an adjacency matrix, then the total time taken to perform DFS is O(n2). If G is represented by an adjacency list and the number of edges of G is e, then the time taken to perform DFS is O(e).

8.7 FINDING STRONGLY CONNECTED COMPONENTS

A beautiful application of DFS is finding a strongly connected component of a graph. For a graph G = (V, E), where V is the set of vertices and E is the set of edges, we define a strongly connected component as follows: U is a subset of V such that for all u, v belonging to U, there is a path from u to v and from v to u. That is, all pairs of vertices in U are reachable from each other.

In this section we will use another concept called the transpose of a graph. Given a directed graph G, its transpose GT is defined as a graph with the same number of vertices and edges, with only the direction of the edges being reversed. That is, GT is obtained by transposing the adjacency matrix of the directed graph G.

Definition: For graph G = (V, E), GT = (V, ET), where ET = { (u, v) : (v, u) belongs to E }

The algorithm for finding these strongly connected components uses the transpose of G, GT.
Figure 8.15 shows a directed graph with the sequence in DFS (for each vertex, the first number shows the discovery time and the second number shows the finishing time during DFS). Figure 8.16 shows the transpose of the graph in Figure 8.15, whose edges are reversed. The strongly connected components are shown circled in Figure 8.16.

Figure 8.15: A Digraph (vertices labelled with discovery/finishing times 11/12, 9/14, 1/8, 6/7, 10/13, 3/4, 2/5)

Figure 8.16: Transpose and strongly connected components of the digraph of Figure 8.15

To find the strongly connected components, we start with the vertex with the highest finishing time and start DFS in the graph GT, and then continue in decreasing order of finishing time. DFS with the vertex with finishing time 14 as root finds a strongly connected component. Similarly, the vertices with finishing times 8 and then 5, when selected as source vertices, also lead to strongly connected components.

Algorithm for finding strongly connected components of a Graph:

Strongly Connected Components (G)
where d[u] = discovery time of a vertex u during DFS, f[u] = finishing time of a vertex u during DFS, GT = transpose of the adjacency matrix

Step 1: Use DFS(G) to compute f[u] for all u in V
Step 2: Compute GT
Step 3: Execute DFS in GT
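The steps above can be sketched in C with two DFS passes. This is a minimal adjacency-matrix illustration with names of our own choosing, not code from the unit; the second pass reads adj[w][v] instead of adj[v][w], which amounts to running DFS on GT without building it explicitly.

```c
#define MAX 10

static int adj[MAX][MAX], visited[MAX], finish_order[MAX], fcount;

/* First pass: DFS on G, recording vertices in order of finishing time. */
static void dfs1(int n, int v)
{
    int w;
    visited[v] = 1;
    for (w = 0; w < n; w++)
        if (adj[v][w] && !visited[w])
            dfs1(n, w);
    finish_order[fcount++] = v;      /* v finishes now */
}

/* Second pass: DFS on the transpose GT, labelling one component. */
static void dfs2(int n, int v, int comp[], int label)
{
    int w;
    comp[v] = label;
    for (w = 0; w < n; w++)
        if (adj[w][v] && comp[w] < 0)   /* adj[w][v]: edge reversed */
            dfs2(n, w, comp, label);
}

/* Strongly connected components by the two-pass method outlined
   above; returns the number of components found. */
int scc(int n, int comp[])
{
    int i, v, label = 0;
    fcount = 0;
    for (i = 0; i < n; i++) { visited[i] = 0; comp[i] = -1; }
    for (i = 0; i < n; i++)
        if (!visited[i]) dfs1(n, i);
    for (i = n - 1; i >= 0; i--) {      /* decreasing finishing time */
        v = finish_order[i];
        if (comp[v] < 0) dfs2(n, v, comp, label++);
    }
    return label;
}
```

Each tree grown in the second pass is output as one strongly connected component, exactly as Step 4 of the algorithm states.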
Step 4: Output the vertices of each tree in the depth-first forest of Step 3 as a separate strongly connected component.

Check Your Progress 3

1) Which graph traversal uses a queue to hold vertices that are to be processed next?
2) Which graph traversal is recursive by nature?
3) For a dense graph, Prim's algorithm is faster than Kruskal's algorithm. True/False
4) Which graph traversal technique is used to find the strongly connected components of a graph?

8.8 SUMMARY

Graphs are data structures that consist of a set of vertices and a set of edges that connect the vertices. A graph where the edges are directed is called a directed graph. Otherwise, it is called an undirected graph. Graphs are represented by adjacency lists and adjacency matrices. Graphs can be used to represent a road network where the edges are weighted as the distance between the cities. Visiting all nodes in a graph systematically in some manner is called traversal. The two most common methods are depth-first and breadth-first searches. Finding the minimum distance between a single source and all other vertices is called the single source shortest path problem. Dijkstra's algorithm is used to find the shortest path from a single source to every other vertex in a directed graph. Finding the shortest path between every pair of vertices is called the all pairs shortest paths problem. Kruskal's and Prim's algorithms find a minimum cost spanning tree in a graph. A spanning tree of a graph is a tree consisting of only those edges of the graph that connect all vertices of the graph with minimum cost.

8.9 SOLUTIONS/ANSWERS

Check Your Progress 1

1) an acyclic
2) symmetric
3) The adjacency matrices of the directed graph and the undirected graph are as follows:

   0 0 0 1        0 1 1 1
   1 0 0 1        1 0 0 1
   1 0 0 0        1 0 0 1
   0 0 1 0        1 1 1 0

   (Refer to Figure 8.3)

Check Your Progress 2

1) Node relaxation
2) O(N3)

Check Your Progress 3

1) BFS
2) DFS
3) True
4) DFS

8.10 FURTHER READINGS

Reference Books

1. Fundamentals of Data Structures in C++ by E. Horowitz, S. Sahni and D. Mehta; Galgotia Publications.
2. Data Structures and Program Design in C by Kruse, C.L. Tonodo and B. Leung; Pearson Education.
3. Data Structures and Algorithms by Alfred V. Aho; Addison Wesley.

Reference Websites

http://www.microsoft.com/vcsharp/programming/datastructures/
http://www.onesmartclick.com/engineering/data-structure.html
http://www.wikipedia.org/wiki/Graph_theory
Graph Algorithms and Searching Techniques

UNIT 9 SEARCHING

Structure
9.0 Introduction
9.1 Objectives
9.2 Linear Search
9.3 Binary Search
9.4 Applications
9.5 Summary
9.6 Solutions / Answers
9.7 Further Readings

9.0 INTRODUCTION

Searching is the process of looking for something: finding one piece of data that has been stored within a whole group of data. It is often the most time-consuming part of many computer programs. There are a variety of methods, or algorithms, used to search for a data item, depending on how much data there is to look through, what kind of data it is, what type of structure the data is stored in, and even where the data is stored (inside computer memory or on some external medium).

Searching is a very common task in day-to-day life, where we are involved some or other time in searching either for some needful at home or office or market, or searching a word in a dictionary. We see that if the things are organised in some manner, then search becomes efficient and fast.

All the above facts apply to our computer programs also. Suppose we have a telephone directory stored in the memory in an array which contains Name and Number fields. Now, what happens if we have to find a number? The answer is to search that number in the array according to the name (given). If the names were organised in some order, searching would have been fast.

So, basically a search algorithm is an algorithm which accepts an argument 'a' and tries to find the corresponding data where the match of 'a' occurs in a file or in a table.

Till now, we have studied a variety of data structures, their types, their use and so on. In this unit, we will concentrate on some techniques to search a particular data or piece of information from a large amount of data. There are basically two types of searching techniques: Linear or Sequential Search and Binary Search.

9.1 OBJECTIVES

After going through this unit, you should be able to:
• know the basic concepts of searching;
• know the process of performing the Linear Search;
• know the process of performing the Binary Search; and
• know the applications of searching.
9.2 LINEAR SEARCH

Linear search is not the most efficient way to search for an item in a collection of items. However, it is very simple to implement. Moreover, if the array elements are arranged in random order, it is the only reasonable way to search. In addition, efficiency becomes important only in large arrays; if the array is small, there aren't many elements to search and the amount of time it takes is not even noticed by the user. Thus, for many situations, linear search is a perfectly valid approach.

Before studying Linear Search, let us define some terms related to search. A file is a collection of records, and a record is in turn a collection of fields. A field, which is used to differentiate among various records, is known as a 'key'. For example, the telephone directory that we discussed in the previous section can be considered as a file, where each record contains two fields: name of the person and phone number of the person. Now, it depends on the application whose field will be the 'key'. It can be the name of the person (usual case) and it can also be the phone number. We will locate any particular record by matching the input argument 'a' with the key value.

The simplest of all the searching techniques is Linear or Sequential Search. As the name suggests, all the records in a file are searched sequentially, one by one, for the matching of the key value, until a match occurs. The Linear Search is applicable to a table which is organised as an array. Let us assume that a file contains 'n' records and a record has 'a' fields but only one key. The values of the key are organised in an array, say 'm'. As the file has 'n' records, the size of the array will be 'n' and the value at position R(i) will be the key of the record at position i. Also, let us assume that 'el' is the value for which the search has to be made, i.e., it is the search argument.

Now, let us write a simple algorithm for Linear Search.

Algorithm

Here, m represents the unordered array of elements, n represents the number of elements in the array, and el represents the value to be searched in the list.

Step 1: [Initialize]
        k=0
        flag=1
Step 2: Repeat step 3 for k=0,1,2,…,n-1
Step 3: if (m[k]=el) then
            flag=0
            print "Search is successful" and element is found at location (k+1)
            stop
        endif
Step 4: if (flag=1) then
            print "Search is unsuccessful"
        endif
Step 5: stop
Program 9.1 gives the program for Linear Search.

/* Program for Linear Search */

/* Header Files */
#include <stdio.h>
#include <stdlib.h>

/* Global Variables */
int search;

/* Function Declarations */
int input(int *, int, int);
void linear_search(int *, int, int);
void display(int *, int);

/* Functions */
void linear_search(int m[], int n, int el)
{
    int k;
    int flag = 1;
    for (k = 0; k < n; k++)
    {
        if (m[k] == el)
        {
            printf("\n Search is successful \n");
            printf("\n Element : %i Found at location : %i", el, k + 1);
            flag = 0;
        }
    }
    if (flag == 1)
        printf("\n Search is unsuccessful");
}

int input(int m[], int n, int el)
{
    int i;
    n = 20;
    el = 30;
    for (i = 0; i < 20; i++)
    {
        m[i] = rand() % 100;
    }
    printf("Number of elements in the list : %d", n);
    printf("\n Element to be searched : %d", el);
    search = el;
    return n;
}

void display(int m[], int n)
{
    int i;
    for (i = 0; i < 20; i++)
    {
        printf("%d ", m[i]);
    }
}

/* Main Function */
void main()
{
    int n, el, m[200];
    n = input(m, n, el);
    el = search;
    printf("\n Entered list as follows: \n");
    display(m, n);
    linear_search(m, n, el);
    printf("\n In the following list\n");
    display(m, n);
}

Program 9.1: Linear Search

Program 9.1 examines each of the key values in the array 'm', one by one, and stops when a match occurs or the total array is searched.

Example: A telephone directory with n = 10 records and the Name field as key. Let us assume that the names are stored in an array 'm', i.e., m(0) to m(9), and the search has to be made for the name "Radha Sharma", which is stored at position 7, i.e., index 6.

Telephone Directory

Name               Phone No.
Nitin Kumar        25161234
Preeti Jain        22752345
Sandeep Singh      23405678
Sapna Chowdhary    22361111
Hitesh Somal       24782202
R.S. Singh         26254444
Radha Sharma       26150880
S.N. Singh         25513653
Arvind Chittora    26252794
Anil Rawat         26257149

The above algorithm will search for element = "Radha Sharma", will stop at the 6th index of the array, and the required phone number is "26150880".

Efficiency of Linear Search

How many comparisons are there in this search for a given element? The number of comparisons depends upon where the record with the argument key appears in the array. If the record is at the first place, the number of comparisons is '1'; if the record is at the last position, 'n' comparisons are made. If it is equally likely for the record to appear at any position in the array, then a successful search will take (n+1)/2 comparisons and an unsuccessful search will take 'n' comparisons. In any case, the order of the above algorithm is O(n).
Check Your Progress 1

1) Linear search uses an exhaustive method of checking each element in the array against a key value. When a match is found, the search halts. Will sorting the array before using the linear search have any effect on its order of efficiency?

2) In a best case situation, the element was found with the fewest number of comparisons. Where, in the list, would the key element be located?

9.3 BINARY SEARCH

An unsorted array is searched by linear search that scans the array elements one by one until the desired element is found. The reason for sorting an array is that we search the array "quickly". Now, if the array is sorted, we can employ binary search, which brilliantly halves the size of the search space each time it examines one array element. As the name suggests, binary means two, so it divides an array into two halves for searching. This search is applicable only to an ordered table (in either ascending or in descending order).

An array-based binary search selects the middle element in the array and compares its value to that of the key value. Because the array is sorted, if the key value is less than the middle value, then the key must be in the first half of the array. Likewise, if the value of the key item is greater than that of the middle value in the array, then it is known that the key lies in the second half of the array. In either case, we can, in effect, "throw out" one half of the search space or array with only one comparison. Now, knowing that the key must be in one half of the array or the other, the binary search examines the mid value of the half in which the key must reside. The algorithm thus narrows the search area by half at each step until it has either found the key data or the search fails.

Let us write an algorithm for Binary Search and then we will discuss it. The array consists of elements stored in ascending order.

Algorithm

Step 1: Declare an array 'k' of size 'n', i.e., k(n) is an array which stores all the keys of a file containing 'n' records
Step 2: i <- 0
Step 3: low <- 0, high <- n-1
Step 4: while (low <= high) do
            mid = (low + high)/2
            if (key = k[mid]) then
                write "record is at position", mid+1   // as the array starts from the 0th position
            else
                if (key < k[mid]) then
                    high = mid - 1
                else
                    low = mid + 1
                endif
            endif
        endwhile
Step 5: Write "Sorry, key value not found"
Step 6: Stop

Program 9.2 gives the program for Binary Search.

/* Program for Binary Search */

/* Header Files */
#include <stdio.h>

/* Functions */
void binary_search(int array[], int value, int size)
{
    int found = 0;
    int mid, high = size - 1, low = 0;
    printf("\n\n Looking for %d\n", value);
    while (!found && high >= low)
    {
        mid = (low + high) / 2;
        printf("Low %d Mid %d High %d\n", low, mid, high);
        if (value == array[mid])
        {
            printf("Key value found at position %d\n", mid + 1);
            found = 1;
        }
        else
        {
            if (value < array[mid])
                high = mid - 1;
            else
                low = mid + 1;
        }
    }
    if (found == 1)
        printf("Search successful\n");
    else
        printf("Key value not found\n");
}

/* Main Function */
void main(void)
{
    int array[100], i;
    /* Inputting values to the array */
    for (i = 0; i < 100; i++)
    {
        printf("Enter the value: ");
        scanf("%d", &array[i]);
    }
    binary_search(array, 33, 100);
    binary_search(array, 75, 100);
}

Program 9.2: Binary Search
Example: Let us consider a file of 5 records, i.e., n = 5, and k is a sorted array of the keys of those 5 records.

    position:  0   1   2   3   4
    k:        11  22  33  44  55

Let key = 55, low = 0, high = 4.

Iteration 1: mid = (0+4)/2 = 2
             k(mid) = k(2) = 33. Now key > k(mid), so low = mid + 1 = 3
Iteration 2: low = 3, high = 4 (low <= high)
             mid = (3+4)/2 = 3.5 ~ 3 (integer value)
             Here key > k(mid), so low = 3 + 1 = 4
Iteration 3: low = 4, high = 4 (low <= high)
             mid = (4+4)/2 = 4
             Here key = k(mid), so the record is at the mid+1 position, i.e., 5.

Efficiency of Binary Search

Each comparison in the binary search reduces the number of possible candidates, where the key value can be found, by a factor of 2, as the array is divided into two halves in each iteration. Thus, the maximum number of key comparisons is approximately log n. So, the order of binary search is O(log n).

Comparative Study of Linear and Binary Search

Binary search is lots faster than linear search. Here are some comparisons:

NUMBER OF ARRAY ELEMENTS EXAMINED

array size | linear search (avg. case) | binary search (worst case)
         8 |                         4 |                          4
       128 |                        64 |                          8
       256 |                       128 |                          9
      1000 |                       500 |                         11
   100,000 |                    50,000 |                         18

A binary search on an array is O(log2 n) because at each test you can "throw out" one half of the search space or array, whereas a linear search on an array is O(n). It is noteworthy that, for very small arrays, a linear search can prove faster than a binary search. However, as the size of the array to be searched increases, the binary
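The worked example above can be checked mechanically. The following is a minimal sketch (the function name is our own, not from the unit) that runs the same binary search while counting loop iterations:

```c
/* Binary search over a sorted array k[0..n-1], counting the
   iterations performed. Returns the 1-based position of key,
   or 0 if it is not found; *iters receives the iteration count. */
int binary_search_count(int k[], int n, int key, int *iters)
{
    int low = 0, high = n - 1, mid;
    *iters = 0;
    while (low <= high) {
        (*iters)++;
        mid = (low + high) / 2;
        if (key == k[mid])
            return mid + 1;        /* record is at position mid+1 */
        else if (key < k[mid])
            high = mid - 1;
        else
            low = mid + 1;
    }
    return 0;                      /* key value not found */
}
```

Running it on k = {11, 22, 33, 44, 55} with key = 55 reports position 5 after exactly 3 iterations, as traced in the example.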
Let us discuss some of the applications of Searching in the world of computers. the array has to be sorted in ascending order only. it uses the concept of Linear Search. It is based on a program for checking spelling. The most important application of searching is to track a particular record from a large file. it sends information back to its main site to be indexed. The program looks up a word in a list of words from a dictionary.e. Because Web documents are one of the least static forms of publishing (i. they change a lot). The best matches are then returned to you as hits. finding a record from database. your input is checked against the search engine’s keyword indices. then it is probably to better do a linear search in most cases. Any word that isn’t found is assumed to be spelled wrong.
3. String Pattern Matching

Document processing is rapidly becoming one of the dominant functions of computers. Computers are used to edit, search and transport documents over the Internet, and to display documents on printers and computer screens. For example, the Internet document formats HTML and XML are primarily text formats, with added tags for multimedia content. Web 'surfing' and Web searching are becoming significant and important computer applications. Making sense of the many terabytes of information on the Internet requires a considerable amount of text processing, and many of the key computations in all of this document processing involve character strings and string pattern matching. This is accomplished using a trie data structure, which is a tree-based structure that allows for faster searching in a collection of strings.

9.5 SUMMARY

Searching is the process of looking for something. Searching a list consisting of 100000 elements is not the same as searching a list consisting of 10 elements. We discussed two searching techniques in this unit, namely Linear Search and Binary Search. Linear Search will directly search for the key value in the given list. Binary search will directly search for the key value in the given sorted list; so, the major difference is the way the given list is presented. Binary search is efficient in most of the cases. Though it has the overhead that the list should be sorted before search can start, it is very well compensated through the time (which is very less when compared to linear search) it takes to search. There are a large number of applications of Searching, out of which a few were discussed in this unit.

9.6 SOLUTIONS / ANSWERS

Check Your Progress 1

1) No
2) It will be located at the beginning of the list

Check Your Progress 2

1) (a) F (b) F (c) F

9.7 FURTHER READINGS

Reference Books

1. Fundamentals of Data Structures in C++ by E. Horowitz, S. Sahni and D. Mehta; Galgotia Publications.
2. Data Structures using C and C++ by Yedidyah Langsam, Moshe J. Augenstein and Aaron M. Tanenbaum; PHI Publications.
3. Fundamentals of Data Structures in C by R.B. Patel; PHI Publications.

Reference Websites

http://www.fredosaurus.com
http://www.cs.umbc.edu
UNIT 10 SORTING

Structure
10.0 Introduction
10.1 Objectives
10.2 Internal Sorting
    10.2.1 Insertion Sort
    10.2.2 Bubble Sort
    10.2.3 Quick Sort
    10.2.4 2-way Merge Sort
    10.2.5 Heap Sort
10.3 Sorting on Several Keys
10.4 Summary
10.5 Solutions/Answers
10.6 Further Readings

10.0 INTRODUCTION

Retrieval of information is made easier when it is stored in some predefined order. Sorting is, therefore, a very important computer application activity. Many sorting algorithms are available. Different environments require different sorting methods. Sorting algorithms can be characterised in the following two ways:

1. Simple algorithms which require the order of n2 (written as O(n2)) comparisons to sort n items.
2. Sophisticated algorithms that require the O(nlog2n) comparisons to sort n items.

The difference lies in the fact that the first method moves data only over small distances in the process of sorting, whereas the second method moves data over large distances, so that items settle into the proper order sooner, thus resulting in fewer comparisons. Performance of a sorting algorithm can also depend on the degree of order already present in the data.

There are two basic categories of sorting methods: Internal Sorting and External Sorting. Internal sorting is applied when the entire collection of data to be sorted is small enough so that the sorting can take place within the main memory. The time required to read or write is not considered to be significant in evaluating the performance of internal sorting methods. External sorting methods are applied to larger collections of data which reside on secondary devices. Read and write access times are a major concern in determining the sorting performance of such methods.

In this unit, we will study some methods of internal sorting. The next unit will discuss methods of external sorting.

10.1 OBJECTIVES

After going through this unit, you should be able to:

• list the names of some sorting methods,
• discuss the performance of several sorting methods, and
• describe sorting methods on several keys.
10.2 INTERNAL SORTING

In internal sorting, all the data to be sorted is available in the high speed main memory of the computer. We will study the following methods of internal sorting:

1. Insertion sort
2. Bubble sort
3. Quick sort
4. Two-way Merge sort
5. Heap sort

10.2.1 Insertion Sort

This is a naturally occurring sorting method exemplified by a card player arranging the cards dealt to him. He picks up the cards as they are dealt and inserts them into the required position. Thus at every step, we insert an item into its proper place in an already ordered list.

We will illustrate insertion sort with an example (refer to Figure 10.1) before presenting the formal algorithm.

Example: Sort the following list using the insertion sort method:

Figure 10.1: Insertion sort

Thus to find the correct position, search the list till an item just greater than the target is found. Shift all the items from this point one down the list. Insert the target in the vacated slot. Repeat this process for all the elements in the list. This results in a sorted list.

10.2.2 Bubble Sort

In this sorting algorithm, multiple swappings take place in one pass. Adjacent members of the list to be sorted are compared; if the item on top is greater than the item immediately below it, then they are swapped. This process is carried on till the list is sorted. Smaller elements move or 'bubble' up to the top of the list, hence the name given to the algorithm.

The detailed algorithm follows:

Algorithm: BUBBLE SORT

1. Begin
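The shift-and-insert procedure of Section 10.2.1 can be sketched in C as follows. This is a minimal illustration; the function name and array handling are our own, not from the unit.

```c
/* Insertion sort: insert each item into its proper place in the
   already ordered part of the list, shifting larger items one
   position down, exactly as the card-player analogy describes. */
void insertion_sort(int a[], int n)
{
    int i, j, target;
    for (i = 1; i < n; i++) {
        target = a[i];                    /* item to insert */
        j = i - 1;
        while (j >= 0 && a[j] > target) { /* search for the position */
            a[j + 1] = a[j];              /* shift item one down */
            j--;
        }
        a[j + 1] = target;                /* insert in the vacated slot */
    }
}
```

Each pass moves the item only as far as needed, so the method is fast when the list is already nearly in order.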
2. Read the n elements
3. for i = 1 to n
       for j = n downto i+1
           if a[j] <= a[j-1]
               swap(a[j], a[j-1])
4. End // of Bubble Sort

Total number of comparisons in Bubble sort:

(N-1) + (N-2) + ... + 2 + 1 = (N-1)*N/2 = O(N2)

This inefficiency is due to the fact that an item moves only to the next position in each pass.

10.2.3 Quick Sort

This is the most widely used internal sorting algorithm. In its basic form, it was invented by C.A.R. Hoare in 1960. Its popularity lies in the ease of implementation, moderate use of resources and acceptable behaviour for a variety of sorting cases. The basis of quick sort is the divide and conquer strategy, i.e., divide the problem [list to be sorted] into sub-problems [sub-lists], until solved sub-problems [sorted sub-lists] are found. This is implemented as follows:

Choose one item A[I] from the list A[ ]. Rearrange the list so that this item is in the proper position, i.e., all preceding items have a lesser value and all succeeding items have a greater value than this item.

1. Place A[0], A[1] .. A[I-1] in sublist 1
2. A[I]
3. Place A[I+1], A[I+2] ... A[N] in sublist 2

Repeat steps 1 & 2 for sublist1 & sublist2 till A[ ] is a sorted list. As can be seen, this algorithm has a recursive structure.

The 'divide' procedure is of utmost importance in this algorithm. This is usually implemented as follows:

1. Choose A[I] as the dividing element.
2. From the left end of the list (A[0] onwards), scan till an item A[R] is found whose value is greater than A[I].
3. From the right end of the list (A[N] backwards), scan till an item A[L] is found whose value is less than A[I].
4. Swap A[R] & A[L].
5. Continue steps 2, 3 & 4 till the scan pointers cross. Stop at this stage.
6. At this point, sublist1 & sublist2 are ready.
7. Now do the same for each of sublist1 & sublist2.
The Quick sort algorithm uses O(N log2 N) comparisons on average. The performance can be improved by keeping in mind the following points:

1. Switch to a faster sorting scheme like insertion sort when the sublist size becomes comparatively small.
2. Use a better dividing element in the implementations.

It is also possible to write the non-recursive Quick sort algorithm.

Program 10.1 gives the program segment for Quick sort. It uses recursion.

Quicksort(A, m, n)
int A[ ], m, n;
{
    int i, j, k, temp;
    if (m < n)
    {
        i = m; j = n + 1; k = A[m];
        do
        {
            do ++i; while (A[i] < k);
            do --j; while (A[j] > k);
            if (i < j)
            {
                temp = A[i];
                A[i] = A[j];
                A[j] = temp;
            }
        } while (i < j);
        temp = A[m];
        A[m] = A[j];
        A[j] = temp;
        Quicksort(A, m, j-1);
        Quicksort(A, j+1, n);
    }
}

Program 10.1: Quick Sort

10.2.4 2-Way Merge Sort

Merge sort is also one of the 'divide and conquer' class of algorithms. The basic idea in this is to divide the list into a number of sublists, sort each of these sublists and merge them to get a single sorted list. This is also called Concatenate sort.

The illustrative implementation of 2-way merge sort sees the input initially as n lists of size 1. These are merged to get n/2 lists of size 2. These n/2 lists are merged pairwise and so on till a single list is obtained. This can be better understood by the following example. Figure 10.2 depicts 2-way merge sort.

Mergesort is the best method for sorting linked lists in random order. The total computing time is of the O(n log2 n).
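The pass-by-pass merging just described can be sketched in C as follows. This is a minimal bottom-up illustration of our own (not code from the unit), and for simplicity it assumes at most 100 elements for the auxiliary array.

```c
#include <string.h>

/* Merge two adjacent sorted runs a[lo..mid-1] and a[mid..hi-1]
   into b[lo..hi-1]. */
static void merge(int a[], int b[], int lo, int mid, int hi)
{
    int i = lo, j = mid, k = lo;
    while (i < mid && j < hi)
        b[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i < mid) b[k++] = a[i++];
    while (j < hi)  b[k++] = a[j++];
}

/* 2-way merge sort: merge runs of size 1, then 2, 4, ... until a
   single sorted list remains, as in Figure 10.2. Note the second
   array b[], illustrating the 2n space requirement of the method. */
void merge_sort(int a[], int n)
{
    int width, lo, mid, hi;
    int b[100];                           /* auxiliary space, n <= 100 */
    for (width = 1; width < n; width *= 2) {
        for (lo = 0; lo < n; lo += 2 * width) {
            mid = lo + width;     if (mid > n) mid = n;
            hi  = lo + 2 * width; if (hi > n)  hi = n;
            merge(a, b, lo, mid, hi);
        }
        memcpy(a, b, n * sizeof(int));    /* copy the merged pass back */
    }
}
```

Each outer pass halves the number of runs, giving the O(n log2 n) total computing time stated above.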
The disadvantage of using mergesort is that it requires two arrays of the same size and space for the merge phase. That is, to sort a list of size n, it needs space for 2n elements.

Figure 10.2: 2-way merge sort

10.2.5 Heap Sort

We will begin by defining a new structure called Heap. Trees can be represented as arrays, by first numbering the nodes (starting from the root) from left to right. The key values of the nodes are then assigned to array positions whose index is given by the number of the node. Figure 10.3 illustrates a Binary tree.

Figure 10.3: A Binary Tree

For the example tree, the corresponding array is depicted in Figure 10.4.

Figure 10.4: Array for the binary tree of Figure 10.3

The relationships of a node can also be determined from this array representation. If a node is at position j, its children will be at positions 2j and 2j+1. Its parent will be at position └j/2┘. Consider the node M. It is at position 5. Its parent node is, therefore, at position └5/2┘ = 2, i.e., the parent is R. Its children are at positions 2 × 5 and (2 × 5) + 1, i.e., 10 and 11 respectively, i.e., E and I are its children.

A complete binary tree is said to satisfy the 'heap condition' if the key of each node is greater than or equal to the keys in its children. Thus the root node will have the largest key value.

A Heap is a complete binary tree, represented as an array, in which each node satisfies the heap condition. We will now study the operations possible on a heap and see how these can be combined to generate a sorting algorithm.

The operations on a heap work in 2 steps:

1. The required node is inserted/deleted/or replaced.
2. The above operation may cause violation of the heap condition, so the heap is traversed and modified to rectify any such violations.

Example: Consider the insertion of a node R in the heap:

1. Initially R is added as the right child of J and given the number 13.
2. But R > J, so the heap condition is violated.
3. Move R up to position 6 and move J down to position 13.
4. R > P, so the heap condition is still violated.
5. Swap R and P.
6. The heap condition is now satisfied by all nodes, to get the heap of Figure 10.5.

Figure 10.5: A Heap

This algorithm is guaranteed to sort n elements in (n log2 n) time. We will first see two methods of heap construction and then removal in order from the heap to sort the list.

1. Top down heap construction
   • Insert items into an initially empty heap, satisfying the heap condition at all steps.

2. Bottom up heap construction
   • Build a heap with the items in the order presented.
   • From the right most node, modify to satisfy the heap condition.

We will exemplify this with an example.
Example: Build a heap of the following using the top down approach for heap construction: PROFESSIONAL. Figure 10.6 shows the different steps, (a) to (k), of the top down construction of the heap.

Figure 10.6: Heap Sort (Top down Construction)

Example: The input file is (2, 3, 81, 64, 4, 25, 36, 16, 9, 49). When the file is interpreted as a binary tree, it results in Figure 10.7. Figure 10.8 depicts the corresponding heap.

Figure 10.7: A Binary tree

Figure 10.8: Heap of Figure 10.7

Figure 10.9 illustrates the various steps of the heap of Figure 10.8 as the sorting takes place. At each step, the root (the largest remaining key) is removed, the heap size reduces by one, and the heap condition is restored; the removed keys give the sorted result 81, 64, 49, 36, 25, 16, 9, 4, 3, 2.

Figure 10.9: Various steps of Figure 10.8 for a sorted file

10.3 SORTING ON SEVERAL KEYS

So far, we have been considering sorting based on single keys. But, in real life applications, we may want to sort the data on several keys. The simplest example is that of sorting a deck of cards. The first key for sorting is the suit - clubs, spades, diamonds and hearts. Then, within each suit, sorting the cards in ascending order from Ace, twos to king. This is thus a case of sorting on 2 keys. Now, this can be done in 2 ways:

1. Sort the 52 cards into 4 piles according to the suit. Sort each of the 4 piles according to face value of the cards.
2. Sort the 52 cards into 13 piles according to face value. Stack these piles in order and then sort into 4 piles based on suit.

The first method is called the MSD (Most Significant Digit) sort and the second method is called the LSD (Least Significant Digit) sort. Digit stands for a key. Though they are called sorting methods, MSD and LSD sorts only decide the order of sorting. The actual sorting could be done by any of the sorting methods discussed in this unit.

Check Your Progress 1

1) The complexity of Bubble sort is _________
2) Quick sort algorithm uses the programming technique of _________
3) Write a program in 'C' language for 2-way merge sort.
4) The complexity of Heap sort is _________

10.4 SUMMARY

Sorting is an important application activity. Many sorting algorithms are available, and each is efficient for a particular situation or a particular kind of data. The choice of a sorting algorithm is crucial to the performance of the application. In this unit we have studied many sorting algorithms used in internal sorting. This is not a conclusive list and the student is advised to read the suggested books for
exposure to additional sorting methods and for detailed discussions of the methods introduced here. The following are the three most important efficiency criteria:

• use of storage space
• use of computer time
• programming effort.

10.5 SOLUTIONS/ANSWERS

1) O(N²), where N is the number of elements in the list to be sorted.
2) Divide and Conquer.
4) O(N log N), where N is the number of elements to be sorted.

10.6 FURTHER READINGS

Reference Books

1. Algorithms + Data Structures = Programs by Niklaus Wirth, PHI publications.
2. Data Structures using C by Aaron M. Tenenbaum, Yedidyah Langsam, Moshe J. Augenstein, PHI publications.

Reference Websites

http://www.it.jcu.edu.au/Subjects/cp2001/1998/LectureNotes/Sorting/
http://oopweb.com/Algorithms/Files/Algorithms.html
UNIT 11 ADVANCED DATA STRUCTURES

Structure

11.0 Introduction
11.1 Objectives
11.2 Splay Trees
  11.2.1 Splaying Steps
  11.2.2 Splaying Algorithm
11.3 Red-Black Trees
  11.3.1 Properties of a Red-Black Tree
  11.3.2 Insertion into a Red-Black Tree
  11.3.3 Deletion from a Red-Black Tree
11.4 AA-Trees
11.5 Summary
11.6 Solutions/Answers
11.7 Further Readings

11.0 INTRODUCTION

In this unit, the following four advanced data structures have been practically emphasized. These may be considered as alternatives to a height balanced tree, i.e., an AVL tree:

• Splay tree
• Red-Black tree
• AA-tree
• Treap

The key factors which have been discussed in this unit about the above mentioned data structures involve the complexity of code in terms of Big oh notation, the cost involved in searching a node, the process of deletion of a node and the cost involved in inserting a node.

11.1 OBJECTIVES

After going through this unit, you should be able to:

• know about Splay trees;
• know about Red-Black trees;
• know about AA-trees; and
• know about Treaps.

11.2 SPLAY TREES

Addition of new records in a Binary tree structure always occurs as leaf nodes, which are further away from the root node, making their access slower. If such a new record is to be accessed very frequently, then we cannot afford to spend much time in reaching it, but would require it to be positioned close to the root node. This would call for readjustment or rebuilding of the tree to attain the desired shape. But, this process of rebuilding the tree every time as the preferences for the records change is tedious and time consuming. There must be some measure so that the tree adjusts itself automatically as the frequency of accessing the records changes. Such a self-adjusting tree is the Splay tree.
Splay trees are self-adjusting binary search trees in which every access for insertion or retrieval of a node lifts that node all the way up to become the root, pushing the other nodes out of the way to make room for this new root of the modified tree. Hence, the frequently accessed nodes will frequently be lifted up and remain around the root position, while the most infrequently accessed nodes would move farther and farther away from the root.

This process of readjusting may at times create a highly imbalanced splay tree, wherein a single access may be extremely expensive. But over a long sequence of accesses, these expensive cases may be averaged out by the less expensive ones to produce excellent results over a long sequence of operations. The analytical tool used for this purpose is Amortized algorithm analysis. This will be discussed in detail in the following sections.

11.2.1 Splaying Steps

Readjusting for tree modification calls for rotations in the binary search tree. Single rotations are possible in the left or right direction for moving a node to the root position. The task would be achieved this way, but the performance of the tree amortized over many accesses may not be good. Instead, the key idea of splaying is to move the accessed node two levels up the tree at each step. Basic terminologies in this context are:

Zig: Movement of one step down the path to the left to fetch a node up.
Zag: Movement of one step down the path to the right to fetch a node up.

With these two basic steps, the possible splay rotations are:

Zig-Zig: Movement of two steps down to the left.
Zag-Zag: Movement of two steps down to the right.
Zig-Zag: Movement of one step left and then right.
Zag-Zig: Movement of one step right and then left.

Figure 11.1 depicts the splay rotations.

Zig:

Zig-Zig:
Zig-Zag:

Figure 11.1: Splay rotations

Splaying may be top-down or bottom-up. In bottom-up splaying, splaying begins at the accessed node, moving up the chain to the root. In top-down splaying, splaying begins from the top while searching for the node to access. In the next section, we would be discussing the top-down splaying procedure.

As top-down splaying proceeds, the tree is split into three parts:

a) Central SubTree: This is initially the complete tree and may contain the target node. Search proceeds by comparison of the target value with the root, and ends with the root of the central tree being the node containing the target if present, or a null node if the target is not present.

b) Left SubTree: This is initially empty and is created as the central subtree is splayed. It consists of nodes with values less than the target being searched.

c) Right SubTree: This is also initially empty and is created similar to the left subtree. It consists of nodes with values more than the target node.

Figure 11.2 depicts the splaying procedure with an example, attempting to splay at 20. Initially, the central subtree is the complete tree and the left and right subtrees are empty. The first step is Zig-Zag:
The next step is Zig-Zig:

The next step is the terminal Zig:

Finally, reassembling the three trees, we get:

Figure 11.2: Splaying procedure

11.2.2 Splaying Algorithm

Splaying is used both for insertion and deletion: in the former case, to find the proper position for the target element and to avoid duplicity, and in the latter case, to bring the desired node to the root position. Hence, insertion and deletion of a target key requires splaying of the tree. In case of insertion, the target is searched by splaying the tree. If the target key is found, then we have a duplicate and the original value is maintained; if it is not found, then the target is inserted as the root. In case of deletion, the tree is splayed to find the target. If it is found, it is deleted from the root position and the remaining trees are reassembled.
Splaying procedure

For splaying, three trees are maintained: the central, left and right subtrees. Initially, the central subtree is the complete tree and the left and right subtrees are empty. The target key is compared to the root of the central subtree, where the following two conditions are possible:

a) Target > Root: If the target is greater than the root, then the search will be more to the right, and in the process, the root and its left subtree are shifted to the left tree.

b) Target < Root: If the target is less than the root, then the search is shifted to the left, moving the root and its right subtree to the right tree.

We repeat the comparison process till either of the following conditions is satisfied:

a) Target is found: In this case, the target becomes the root after reassembling the three trees. For the target node, the largest node in the left subtree is connected as its left child, and the smallest node in the right subtree is connected as its right child. Hence, insertion would detect a duplicate node and the original node is maintained, while deletion would lead to removing the root node.

b) Target is not found and we reach a null node: In this case, the target is inserted in the null node position, which is the new root of our tree. Then, the tree is reassembled.

Amortized Algorithm Analysis

In amortized analysis, the time required to perform a set of n operations is averaged over all the operations performed, i.e., T(n)/n. Amortized analysis considers a long sequence of operations instead of just one, and then gives a worst-case estimate. Every splay tree operation takes O(log n) amortized time. There are three different methods by which the amortized cost can be calculated and can be differentiated from the actual cost:

• Aggregate analysis: It finds the average cost of each operation. The amortized cost is the same for all operations.
• Accounting method: The amortized cost is different for the different operations, and charges a credit as prepaid credit on some operations.
• Potential method: It also has a different amortized cost for each operation, and charges a credit as the potential energy to other operations.

There are different operations, such as stack operations (push, pop, multipop) and an increment, which can be considered as examples to examine the above three methods.

Check Your Progress 1

1) Consider the following tree. Splay at node 2.
11.3 RED-BLACK TREES

A Red-Black Tree (RBT) is a type of Binary Search tree with one extra bit of storage per node: its color, which can either be red or black. These trees are such that they guarantee O(log n) time in the worst case for searching. Each node of a red-black tree contains the fields color, key, left, right and p (parent). If a child or a parent node does not exist, then the pointer field of that node contains the NULL value.

11.3.1 Properties of a Red-Black Tree

Any binary search tree should contain the following properties to be called a red-black tree:

1. Each node of a tree should be either red or black.
2. The root node is always black.
3. If a node is red, then its children should be black.
4. For every node, all the paths from a node to its leaves contain the same number of black nodes.

The nodes can have either of the colors (red, black) from the root down to a leaf node. We define the number of black nodes on any path from, but not including, a node x down to a leaf as the black height of the node, denoted by bh(x). Figure 11.3 depicts a Red-Black Tree.

Figure 11.3: A Red-Black tree

Red-black trees support two main operations, namely INSERT and DELETE. When the tree is modified, the result may violate the red-black properties. To restore the tree properties, we must change the color of the nodes as well as the pointer structure. We can change the pointer structure by using a technique called rotation, which preserves the inorder key ordering. There are two kinds of rotations: left rotation and right rotation (refer to Figures 11.4 and 11.5).

Figure 11.4: Left rotation

Figure 11.5: Right rotation

When we do a left rotation on a node y, we assume that its right child x is non-null. The left rotation makes x the new root of the subtree, with y as x's left child and x's old left child as y's right child. The same procedure is repeated vice versa for the right rotation.

11.3.2 Insertion into a Red-Black Tree

The insertion procedure in a red-black tree is similar to that in a binary search tree, i.e., the insertion proceeds in a similar manner, but after insertion of the node x into the tree T, we color it red. In order to guarantee that the red-black properties are preserved, we then fix up the updated tree by changing the color of the nodes and performing rotations. The following are the two procedures followed for insertion into a Red-Black Tree:

Procedure 1: This is used to insert an element in a given Red-Black Tree. It involves the method of insertion used in a binary search tree, and the inserted node is always colored red.

Procedure 2: Whenever a node is inserted in a tree, it is made red, and after insertion, there may be chances of losing the Red-Black properties in the tree; so, some cases are to be considered in order to retain those properties. It is necessary to identify which of the red-black properties are violated: property 2 is violated if Z is the root and is red, OR property 4 is violated if both Z and P(Z) are red. If either is violated, we consider 3 cases in the fix up algorithm.

Let us now look at the execution of the fix up. Let Z be the node which is to be inserted; it is colored red. At the start of each iteration of the loop:

1. Node Z is red.
2. If P(Z) is the root, then P(Z) is black.

Let us now discuss those cases.

Case 1 (Z's uncle y is red): This is executed when both the parent of Z, P(Z), and the uncle of Z, y, are red in color. So, we can maintain one of the properties of the Red-Black tree by making both P(Z) and y black and making P(P(Z)) red, thereby maintaining one more property. This while loop is repeated again until the color of y is black.

Case 2 (Z's uncle is black and Z is the right child): So, make the parent of Z to be Z itself and then apply a left rotation to the newly obtained Z.

Case 3 (Z's uncle is black and Z is the left child): This case executes by making the parent of Z black and P(P(Z)) red and then performing a right rotation on it, i.e., on P(P(Z)).

Before the execution of any case, we should first check the position of P(Z): if it is towards the left of its parent, then the above cases will be executed; but, if it is towards the right of its parent, then the above 3 cases are considered conversely.

All the different cases can be seen through an example. Consider a red-black tree drawn below with a node z (17) inserted in it (refer to Figure 11.6).

Figure 11.6: A Red-Black Tree after insertion of node 17

Here, it is seen that Z is towards the left of its parent (refer to Figure 11.7). So, the above cases will be executed, and another node called y is assigned, which is the uncle of Z. The cases to be executed are as follows:

Figure 11.7: Z is to the left of it's parent
Now, let us check to see which case is executed.

Case 1: Property 4 is violated, as both Z and parent(Z) are red (refer to Figure 11.8).

Figure 11.8: Both Z and P(Z) are red

Case 2: The application of this case results in Figure 11.9.

Figure 11.9: Result of application of case-2
Case 3: The application of this case results in Figure 11.10.

Figure 11.10: Result of application of case-3

Finally, it has resulted in a perfect Red-Black Tree (Figure 11.10).

11.3.3 Deletion from a Red-Black Tree

Deletion in a RBT uses two main procedures, namely:

Procedure 1: This is used to delete an element from a given Red-Black Tree. It involves the method of deletion used in a binary search tree. Whenever a node (say z) is deleted, the node (say x) which takes the position of the deleted node will be called in Procedure 2.

Procedure 2: Whenever a node is deleted from a tree, there may be chances of losing the Red-Black properties in the tree, and so, some cases are to be considered in order to retain those properties. This procedure is called only when the successor y of the node to be deleted is black; if y is red, the red-black properties still hold, for the following reasons:

• No red nodes have been made adjacent
• No black heights in the tree have changed
• y could not have been the root

Now, this procedure starts with a loop to move the extra black up the tree until:

• x points to a red node, which can then simply be colored black,
• suitable rotations and recoloring can be performed, or
• x is a pointer to the root, in which case the extra black can be easily removed.
This while loop will be executed until x becomes the root or its color is red. Now, a new node (say w) is taken, which is the sibling of x. There are four cases which we will be considering separately, as follows:

Case 1: If the color of w, the sibling of x, is red: Since w must have black children, we can change the colors of w and p(x) and then left rotate p(x); the new value of w is then the right child of the parent of x. Now, the conditions are satisfied and we switch over to case 2, 3 or 4.

Case 2: If the color of w is black and both its children are also black: Since w is black, we make w red, leaving x with only one black, and assign p(x) to be the new value of x. Now, the condition will be checked again.

Figure 11.11: Application of case-1

Figure 11.12: Application of case-2

Figure 11.13: Application of case-3
Figure 11.14: Application of case-4

Case 3: If the color of w is black, its left child is red and w's right child is black: We change the color of the left child of w to black and that of w to red, and then perform a right rotation on w, without violating any of the black properties. The new sibling w of x is now a black node with a red right child, and thus case 4 is obtained.

Case 4: When w is black and w's right child is red: By making some color changes and performing a left rotation on p(x), we can remove the extra black on x, making it single black. Setting x to be the root then causes the while loop to terminate.

Note: In the above Figures 11.11, 11.12, 11.13 and 11.14, α, α', β, β', γ, ε are assumed to be either red or black, depending upon the situation.

11.4 AA-TREES

Red-Black trees have introduced a new property into the binary search tree: an extra property of color (red, black). But, as these trees grow, it becomes difficult to retain all the properties in their operations like insertion and deletion, especially in the case of deletion. Thus, a new type of binary search tree can be described which has no color property, but has a new property, introduced on the basis of the color, which is the information for the new property: level. This information about the level of a node is stored in a small integer (may be 8 bits). AA-trees are thus defined in terms of the level of each node instead of storing a color bit with each node. AA-trees have also been designed in such a way that they should satisfy certain conditions regarding this new property, level.

The level of a node will be as follows:

1. One, if the node is a leaf.
2. The same as that of its parent, if the node is red.
3. One less than the level of its parent, if the node is black.

Any red-black tree can be converted into an AA-tree by translating its color structure to levels, such that the left child is always one level lower than its parent and the right child is always at the same level as, or one level lower than, its parent. When the right child is at the same level as its parent, a horizontal link is established between them. Taking all the above properties into consideration, we conclude that it is necessary that horizontal links are always at the right side and that there may not be two consecutive horizontal links. Thus, we show an AA-tree as follows (refer to Figure 11.15).
Figure 11.15: AA-tree

After having a look at the AA-tree above, we now look at the different operations that can be performed on such trees. The following are various operations on an AA-tree:

1. Searching: Searching is done by using an algorithm that is similar to the search algorithm of a binary search tree.

2. Insertion: The insertion procedure always starts from the bottom level. But, while performing this operation, either of two problems can occur:

(a) Two consecutive horizontal links (on the right side)
(b) A left horizontal link.

Thus, in order to remove conditions (a) and (b), we use two new functions, namely skew( ) and split( ), based on rotations of the node, so that all the properties of AA-trees are retained. Condition (a), two consecutive horizontal links in an AA-tree, can be removed by a left rotation through split( ), whereas condition (b) can be removed by a right rotation through the function skew( ). Either of these functions can remove these conditions, but can also give rise to the other condition. Let us demonstrate it with an example. Suppose, in the AA-tree of Figure 11.15, we have to insert node 50. According to the conditions, the node 50 will be inserted at the bottom level in such a way that it satisfies the Binary Search tree property also (refer to Figure 11.16).

Figure 11.16: After inserting node 50

Figure 11.17: Split at node 39 (left rotation)
File Structures and Advanced Data Structures
Now, we should be aware of how this left rotation is performed. Remember that rotation was introduced for the Red-Black tree, and these rotations (left and right) are the same as the ones we performed on a Red-Black tree. Now, split( ) has removed its condition but has created a skew condition (refer to Figure 11.17). So, the skew( ) function will now be called, and skew and split are applied again and again until a complete AA-tree with no false condition is obtained.
Figure 11.18: Skew at 55 (right rotation)
Figure 11.19: Split at 45
A skew problem arises because node 90 is two levels lower than its parent 75, and so, in order to avoid this, we call the skew / split functions again.
Figure 11.20: The Final AA-tree
Thus, in order to avoid left horizontal links and to turn them into right horizontal links, we make 3 calls to skew, and then 2 calls to split to remove consecutive horizontal links (refer to Figures 11.18, 11.19 and 11.20).

A Treap is another type of Binary Search tree, and has one property different from the other types of trees: each node in the tree stores an item, a left and right pointer, and a priority that is randomly assigned when the node is created. While assigning the priorities, it is necessary that the heap order priority is maintained: a node's priority should be at least as large as its parent's. A treap is thus both a binary search tree with respect to the node elements and a heap with respect to the node priorities.
Check Your Progress 2
1)
Explain the properties of red-black trees along with an example. ………………………………………………………………………………… …………………………………………………………………………………
11.5 SUMMARY
This unit focused on some emerging data structures: Splay trees, Red-Black trees, AA-trees and Treaps were introduced. The learner should explore the possibilities of applying these concepts in real life.

Splay trees are binary search trees which are self adjusting. Self adjusting basically means that whenever the splay tree is accessed for insertion or deletion of a node, then that node is pushed up to become the root, moving the remaining nodes out of the way. So, we can conclude that any node which is accessed frequently will be at the top levels of the Splay tree.

A Red-Black tree is a type of binary search tree in which each node is either red or black. Apart from that, the root is always black. If a node is red, then its children should be black. For every node, all the paths from a node to its leaves contain the same number of black nodes.

AA-trees are defined in terms of the level of each node instead of storing a color bit with each node. AA-trees have also been designed in such a way that they should satisfy certain conditions regarding this new property, i.e., level.

The priorities of the nodes of a Treap should satisfy the heap order; hence, the priority of any node must be as large as its parent's. The Treap is the simplest of all these trees.
11.6 SOLUTIONS/ANSWERS
Check Your Progress 1
Ans. 1
Check Your Progress 2
1) Any binary search tree should contain the following properties to be called a red-black tree:

1. Each node of a tree should be either red or black.
2. The root node is always black.
3. If a node is red, then its children should be black.
4. For every node, all the paths from a node to its leaves contain the same number of black nodes.
Example of a red-black tree:
11.7 FURTHER READINGS
Reference Books
1. Data Structures and Algorithm Analysis in C by Mark Allen Weiss, Pearson Education
Reference Websites
UNIT 12 FILE STRUCTURES

Structure

12.0 Introduction
12.1 Objectives
12.2 Terminology
12.3 File Organisation
12.4 Sequential Files
  12.4.1 Structure
  12.4.2 Operations
  12.4.3 Disadvantages
  12.4.4 Areas of use
12.5 Direct File Organisation
12.6 Indexed Sequential File Organisation
12.7 Summary
12.8 Solutions/Answers
12.9 Further Readings

12.0 INTRODUCTION

The structures of files change from operating system to operating system. In this unit, we shall discuss the fundamentals of file structures along with the generic file organisations. A file may be defined as a collection of records. A text file doesn't conform to this definition at first glance, though it is also a collection of records — the records are the words in the file. We focus, in this unit, on the ways of storing files on external storage devices. Selection of a particular way of storing a file on a device depends on factors such as the way the records are to be retrieved, the way the queries can be put on the file, the total number of keys in each record, etc.

Consider a file consisting of information about students. We may name such a file as Student file. The typical records of such a file are shown in Figure 12.1.

Enum       Name    Address                        State      Country  Programme
012786345  John    D-51, Nebsarai, Maidan Garhi   Delhi      India    BCA
98387123   Suresh  E-345, Banjara Hills           Hyderabad  India    MCA

Figure 12.1: Typical records of a Student file

A file should always be stored in such a way that the basic operations on it can be performed easily. In other words, queries should be able to be executed without much hassle.

12.1 OBJECTIVES

After going through this unit, you should be able to:

• learn the terminology of file structures; and
• learn the underlying concepts of Sequential files.
Also. It’s records contain a key field and a pointer to that record of the data file which has the same value of the key field. and names of courses. 12. The data stored in files is accessed by software which can be divided into the following two categories: i) ii) User Programs: These are usually written by a programmer to manipulate retrieved data in the manner required by the application. File operations can be categorised as • • • • • • CREATION of the file INSERTION of records into the file UPDATION of previously inserted records RETRIEVAL of previously inserted records DELETION of records DELETION of the file. 4) Index: An index file corresponds to a data file.3 FILE ORGANISATION File organisation may be defined as a method of storing records in file. namely. 3) File: Data is organised for storage in files. It has an identifying name. The following are the factors involved in selecting a particular file organisation: 32 . For example. length and type. User programs effectively use file operations through appropriate programming language syntax The File Management System manages the independent files and acts as the software interface between the user programs and the file operations. related records. File Operations: These deal with the physical movement of data.File Structures and Advanced Data Structures 12. For example: A university could use a student record with the fields. A file is a collection of similar. For example. “STUDENT” could be a file consisting of student records for all the students in a university. Name Age : : a character type of size 10 a numeric type 2) Record: It is a collection of related fields that can be treated as a unit from an application point of view. the subsequent implications on the way these records can be accessed. in and out of files. University enrolment no.2 TERMINOLOGY The following are the definitions of some important terms: 1) Field: It is an elementary data item characterised by its size.
•  Ease of retrieval
•  Convenience of updates
•  Economy of storage
•  Reliability
•  Security
•  Integrity

Different file organisations accord different weightages to the above factors. The choice must be made depending upon the individual needs of the particular application in question. We now introduce, in brief, the various commonly encountered file organisations.

•  Sequential Files: Data records are stored in some specific sequence, e.g., order of arrival, value of key field, etc. Records of a sequential file cannot be accessed at random, i.e., to access the nth record, one must traverse the preceding (n–1) records. Sequential files will be dealt with at length in the next section.

•  Relative Files: Each data record has a fixed place in a relative file. Each record must have associated with it an integer key value that will help identify this slot. This key, therefore, will be used for insertion and retrieval of the record. Random as well as sequential access is possible. Relative files can exist only on random access devices like disks.

•  Direct Files: These are similar to relative files, except that the key value need not be an integer. The user can specify keys which make sense to his application.

•  Indexed Files: In this file organisation, no sequence is imposed on the storage of records in the data file; hence no overflow area is needed. The index, however, is maintained in strict sequence. Multiple indexes are allowed on a file to improve access.

•  Indexed Sequential Files: An index is added to the sequential file to provide random access. An overflow area needs to be maintained to permit insertion in sequence.

12.4  SEQUENTIAL FILES

In this section, we shall discuss Sequential file organisation. Sequential files have data records stored in a specific sequence. A sequentially organised file may be stored on either a serial-access or a direct-access storage medium.

12.4.1  Structure

To provide the 'sequence' required, a 'key' must be defined for the data records. Usually a field whose values can uniquely identify data records is selected as the key. If a single field cannot fulfil this criterion, then a combination of fields can serve as the key.

12.4.2  Operations

1. Insertion: Records must be inserted at the place dictated by the sequence of the keys. As is obvious, direct insertions into the main data file would lead to frequent rebuilding of the file. This problem could be mitigated by reserving 'overflow areas' in the file for insertions, but this leads to wastage of space. The common method is to use transaction logging. This works as follows:

   •  Records for insertion are collected in a transaction file in the order of their arrival. The structure of the transaction file's records will be identical to that of the primary file.
   •  When population of the transaction file has ceased, the transaction file is sorted in the order of the key of the primary data file.
   •  The two files are merged on the basis of the key to get a new copy of the primary sequential file.

   Such insertions are usually done in a batch mode, when the activity/program which populates the transaction file has ceased.

2. Retrieval: User programs will often retrieve data for viewing prior to making decisions. Therefore, it is vital that this data reflects the latest state of the data if the merging activity has not yet taken place. Retrieval is usually done for a particular value of the key field. Before returning to the user, the data record should be merged with the transaction record (if any) for that key value.

3. Updation: Updation is a combination of insertion and deletion. The record with the new values is inserted and the earlier version deleted. This is also done using transaction files.

4. Deletion: Deletion is the reverse process of insertion. The space occupied by the record should be freed for use. Usually deletion (like insertion) is not done immediately. The concerned record is written to a transaction file. At the time of merging, the corresponding data record will be dropped from the primary data file.

The other two operations, 'creation' and 'deletion' of files, are achieved by simple programming language statements.

12.4.3  Disadvantages

Following are some of the disadvantages of sequential file organisation:

•  Updates are not easily accommodated.
•  By definition, random access is not possible.
•  All records must be structurally identical. If a new field has to be added, then every record must be rewritten to provide space for the new field.

12.4.4  Areas of Use

Sequential files are most frequently used in commercial batch oriented data processing where there is the concept of a master file to which details are added periodically. An example is payroll applications.

Check Your Progress

1) Describe the record structure to be used for the lending section of a library.

   ……………………………………………………………………………………
   ……………………………………………………………………………………
   ……………………………………………………………………………………
12.5  DIRECT FILE ORGANISATION

It offers an effective way to organise data when there is a need to access individual records directly. To access a record directly (or randomly), a relationship is used to translate the key value into a physical address. This is called the mapping function R:

      R(key value) = Address

Direct files are stored on DASD (Direct Access Storage Devices). A calculation is performed on the key value to get an address; this address calculation technique is often termed hashing, and the calculation applied is called a hash function.

Key space refers to all the possible key values. The address space may not actually match the key values in the file; indeed, we may have a very large key space as compared to the address space. Hence, the calculated address may not be unique: two unequal keys K1 and K2 (K1 ≠ K2) may be calculated to have the same address, R(K1) = R(K2). This is called a Collision, and such keys are called synonyms. There are various approaches to handle the problem of collisions. One of these is to hash to buckets; a bucket is a space that can accommodate multiple records. The student is advised to read some text on bucket addressing and related topics.

Here, we discuss a very commonly used hash function called Division-Remainder.

Division-Remainder Hashing

According to this method, the key value is divided by an appropriate number, generally a prime number, and the remainder of the division is used as the address for the record. If it is known that the file is to contain n records, then, assuming that only one record can be stored at a given address, we must have a divisor of at least n. The choice of an appropriate divisor may not be so simple.

12.6  INDEXED SEQUENTIAL FILE ORGANISATION

When there is a need to access records sequentially by some key value and also to access records directly by the same key value, the collection of records may be organised in an effective manner called Indexed Sequential Organisation.

You must be familiar with the search process for a word in a language dictionary. The data in the dictionary is stored in sequential manner; however, an index is provided in terms of thumb tabs. To search for a word, we do not search sequentially: we access the index, locate an approximate location for the word and then proceed to find the word sequentially.

To implement the concept of indexed sequential file organisation, we consider an approach in which the index part and the data part reside in separate files. The index file has a tree structure and the data file has a sequential structure. Since the data file is sequenced, it is not necessary for the index to have an entry for each record. Consider a sequential file with a two-level index: Level 1 of the index holds an entry for each three-record section of the main file, and Level 2 indexes Level 1 in the same way.
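To make the division-remainder scheme described above concrete, here is a small illustrative sketch in Python (the key values and the divisor are made up for the example, not taken from the unit):

```python
def hash_address(key, divisor=97):
    # Division-remainder hashing: divide the key by the divisor
    # (generally a prime number) and use the remainder as the
    # record's address.
    return key % divisor

print(hash_address(1234))  # 70
print(hash_address(1331))  # 70 -- a collision: 1234 and 1331 are synonyms
```

Since the remainder can never reach the divisor, the divisor bounds the address space; two unequal keys that land on the same remainder, like 1234 and 1331 above, are the synonyms the text describes.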
Consider a file which doesn't have any keys for its records. When a query is executed on such a file, the file has to be sorted on the field on which the query is based, which is cumbersome. Hence, for each query, the time consumed to execute the query is more when compared to a file which has keys.

In the case of files which have keys, the records of the file are usually stored one after another in such a way that the primary keys of the records are in increasing order. Usually, there exists a primary key for each file, and there will be an index file based on the primary key. The index file will have two fields. Of course, we can have more than two index files. For example, consider the file of Figure 12.1: if we designate Enrolment Number (Enum) and Name as keys, then we may have two index files, one based on each key. To deal with queries which use both keys, there may arise the necessity to sort the file on the field(s) on which the query is based. Usually, the different versions of the file which result from sorting on the keys are stored in the directory of that file (a directory is a component of a file). Such files are called index files, and the number of index files varies from file to file. Different software store index files in different manners so that the operations on the records can be performed as soon as possible after a query is submitted. When new records are inserted in the data file, the sequence of records needs to be preserved and the index is accordingly updated.

Two approaches used to implement indexes are static indexes and dynamic indexes. As the main data file changes due to insertions and deletions, the contents of a static index may change, but its structure does not change. In the case of the dynamic indexing approach, insertions and deletions in the main data file may lead to changes in the index structure; recall the change in height of a B-Tree as records are inserted and deleted. Both dynamic and static indexing techniques are useful depending on the type of application.

One of the prominent indexing techniques is Cylinder-Surface indexing, which is useful for index files in which the records are stored with their primary keys in increasing order. It uses two index levels: a cylinder index and corresponding surface indexes. There are multiple cylinders, and multiple surfaces correspond to each cylinder. Suppose that the file needs m cylinders; then the cylinder index will have m entries. Each cylinder has one entry, which corresponds to the largest key value in that cylinder. Assume that the disk has n surfaces which can be used; then each surface index has n entries. The k-th entry in the surface index for the l-th cylinder is the value of the largest key on the l-th track of the k-th surface. Hence, m·n indicates the total number of surface index entries. Usually, each cylinder index occupies only one track, as the number of cylinders is only few.

Suppose that the need arises to search for a record whose key value is B. The first step is to load the cylinder index of the file into memory; the cylinder which holds the desired record is found by searching the cylinder index, and this search takes O(log m) time. After the search of the cylinder index, the corresponding cylinder is determined, and the corresponding surface index is retrieved to look for the record for which the search has started. Since the number of surfaces is small, sequential search is usually used on the surface index. After finding the cylinder and the surface to be accessed, the corresponding track is loaded into memory and that track is searched for the needed record.
12.7  SUMMARY

This unit dealt with the methods of physically storing data in files. The terms fields, records and files were defined. The various file organisation techniques were discussed.

Sequential files are simple to use and can be stored on inexpensive media. Sequential File Organisation finds use in application areas where batch processing is more common; it does not provide adequate support for interactive applications.

In Direct file organisation, there exists a predictable relationship between the key used to identify a particular record and that record's location on secondary storage. A direct file must be stored on a direct access device. Direct files are used extensively in application areas where interactive processing is used, and they are suitable for applications that require direct access to only particular records of the collection.

An Indexed Sequential file supports both sequential access by key value and direct access to a particular record, given its key value. It is implemented by building an index on top of a sequential data file that resides on a direct access storage device.

12.8  SOLUTIONS/ANSWERS

Check Your Progress

1) The following record structure could take care of the general requirements of a lending library:

      Member No., Member Name, Book Classification, Book Name, Author, Issue Date, Due Date.

12.9  FURTHER READINGS

Reference Books

1. Fundamentals of Data Structures in C++ by E. Horowitz, S. Sahni and D. Mehta, Galgotia Publications.
2. Data Structures using C and C++ by Yedidyah Langsam, Moshe J. Augenstein and Aaron M. Tenenbaum, PHI Publications.
3. Fundamentals of Data Structures in C by R.B. Patel, PHI Publications.

Reference Websites

http://www.fredosaurus.com
Creating an Application in Kivy: Part 9
Welcome to the final part of this series on creating a working Jabber client in Kivy. While there are numerous things we could do with our code to take it to Orkiv version 2, I think it’s currently in a lovely state that we can call “finished” for 1.0.
In this article, we're going to talk about releasing and distributing a Kivy application to an Android mobile device. We'll use a tool called buildozer that the Kivy developers have written. Buildozer is expected, in the future, to support other operating systems as well.
We aren’t going to be writing a lot of code in this part (some adaptation to make it run on android is expected). If you’re more interested in deploying than coding, you can check out my state of the orkiv repository as follows:
git clone
git checkout end_part_eight
Remember to create and activate a virtualenv populated with the appropriate dependencies, as discussed in part 1.
Table Of Contents
Here are links to all the articles in this tutorial series:
- Part 1: Introduction to Kivy
- Part 2: A basic KV Language interface
- Part 3: Handling events
- Part 4: Code and interface improvements
- Part 5: Rendering a buddy list
- Part 6: ListView Interaction
- Part 7: Receiving messages and interface fixes
- Part 8: Different views for different screens
- Part 9: Deploying your kivy application
Restructuring the directory
Back in Part 1, we chose to put all our code in
__main__.py so that we could run it from a zip file or directly from the directory. In some ways, this is pretty awesome for making a package that can easily be distributed to other systems (assuming those systems have the dependencies installed). However, Kivy’s build systems presume that the file is called
main.py.
I’m not overly happy about this. Kivy sometimes neglects Python best practices and reinvents tools that are widely used by the Python community. That said, it is possible for us to have the best of both worlds. We can move our code into
orkiv/main.py and still have a
__main__.py that imports it.
First, move the entire
__main__.py file into
main.py, using git:
git mv orkiv/__main__.py orkiv/main.py
Next, edit the new
main.py so that it can be imported from other modules without automatically running the app. There is a standard idiomatic way to do this in python. Replace the code that calls
Orkiv().run() with the following:
def main():
    Orkiv().run()

if __name__ == "__main__":
    main()
There are two things going on here. First, we moved the code that instantiates the application and starts it running in a function called
main(). There is nothing exciting about this function. It does the same thing that used to happen as soon as the module was imported. However, nothing will happen now unless that function is called.
One way to regain the previous functionality of immediately running the app when we type
python main.py would be to just call
main() at the bottom of the module. However, that is not what we want. Instead, our goal is to make the application automatically run if the user runs
python orkiv/main.py, but if they run
import main from a different module or the interpreter, nothing is explicitly run.
That’s what the
if __name__ conditional does. Every module that is ever imported has a magic
__name__ attribute. This is almost always the name of the module file without the
.py extension (eg:
main.py has a
__name__ of
main). However, if the module is not an imported module, but is the script that was run from the command line, that
__name__ attribute is set to
__main__ instead of the name of the module. Thus, we can easily tell if we are an invoked script or an imported script by testing if
__name__ is
__main__. Indeed, we are taking advantage of the same type of magic when we name a module
__main__.py. When we run
python zipfile_or_directory it tries to load the
__main__ module in that directory.
However, we do not currently have a
__main__ module, since we just moved it. Let’s rectify that:
from main import main

main()
Save this as a new
orkiv/__main__.py and test that running
python orkiv from the parent directory still works.
You’ll find that the window runs, however, if you try to connect to a jabber server, you get an exception when it tries to display the buddy list. This is because our
orkiv.kv file was explicitly importing from
__main__. Fix the import at the top of the file so that it imports from
main instead:
#:import la kivy.adapters.listadapter
#:import ok main

OrkivRoot:
While we’re editing code files, let’s also add a
__version__ string to the top of our
main.py that the Kivy build tools can introspect:
__version__ = "1.0"
Deploying to Android with buildozer
Deploying to Android used to be a fairly painful process. Back in December, 2012, I described how to do it using a bulky Ubuntu virtual machine the Kivy devs had put together. Since then, they’ve designed a tool called buildozer that is supposed to do all the heavy lifting. Once again, I have some issues with this plan; it would be much better if Kivy integrated properly with setuptools, the de facto standard python packaging system than to design their own build system. However, buildozer is available and it works, so we’ll use it. The tool is still in alpha, so your mileage may vary. However, I’ve discovered that Kivy products in alpha format are a lot better than final releases from a lot of other projects, so you should be ok.
Let’s take it out for a spin. First activate your virtualenv and install the buildozer app into it:
. venv/bin/activate
pip install buildozer
Also make sure you have a java compiler, apache ant, and the android platform tools installed. This is OS specific; these are the commands I used in Arch Linux:
sudo pacman -S jdk7-openjdk
sudo pacman -S apache-ant
yaourt android-sdk-platform-tools
Then run the init command to create a boilerplate
buildozer.spec:
buildozer init
Now edit the
buildozer.spec to suit your needs. Most of the configuration is either straightforward (such as changing the application name to
Orkiv) or you can keep the default values. One you might overlook is that
source.dir should be set to
orkiv, the directory that contains
main.py.
The
requirements should include not only
kivy and
sleekxmpp but also
dnspython, a library that sleekxmpp depends on. You can use the
pip freeze command to get a list of packages currently installed in your virtualenv. Then make a conscious decision as to whether you need to include each one, remembering that many of those (cython, buildozer, etc) are required for development, but not Android deployment.
For testing, I didn’t feel like creating images for presplash and the application icon, so I just pointed those at
icons/available.png. It’s probably prettier than anything I could devise myself, anyway!
If you want to see exactly what changes I made, have a look at the diff.
The next step is to actually build and deploy the app. This takes a while, as buildozer automatically downloads and installs a wide variety of tools on your behalf. I don’t recommend doing it over 3G!
First, Enable USB debugging on the phone and plug it into your development machine with a USB cable. Then tell buildozer to do all the things it needs to do to run a debug build on the phone:
buildozer android debug deploy run
I had a couple false starts before this worked. Mostly I got pertinent error messages that instructed me to install the dependencies I mentioned earlier. Then, to my surprise, the phone showed a Kivy “loading…” screen. Woohoo! It worked!
And then the app promptly crashed without any obvious error messages or debugging information.
Luckily, such information does exist. With the phone plugged into usb, run
adb logcat from your development machine. Review the Python tracebacks and you’ll discover that it is having trouble finding the sound file
in.wav:
I/python (25368): File "/home/dusty/code/orkiv/.buildozer/android/platform/python-for-android/build/python-install/lib/python2.7/site-packages/kivy/core/audio/__init__.py", line 135, in on_source I/python (25368): File "/home/dusty/code/orkiv/.buildozer/android/platform/python-for-android/build/python-install/lib/python2.7/site-packages/kivy/core/audio/audio_pygame.py", line 84, in load I/python (25368): File "/home/dusty/code/orkiv/.buildozer/android/platform/python-for-android/build/python-install/lib/python2.7/site-packages/android/mixer.py", line 202, in __init__ I/python (25368): IOError: [Errno 2] No such file or directory: '/data/data/ca.archlinux.orkiv/files/orkiv/sounds/in.wav' I/python (25368): Python for android ended.
The problem here is that the code specifies a relative path to the sound file as
orkiv/sounds/in.wav. This worked when we were running
python orkiv/ from the parent directory, but python for android is running the code as if it was inside the
orkiv directory. We can reconcile this later, but for now, let’s focus on getting android working and just hard code the file location:
self.in_sound = SoundLoader.load("sounds/in.wav")
While we’re at it, we probably also need to remove “orkiv” from the location of the icon files in
orkiv.kv:
source: "icons/" + root.online_status + ".png"
Finally, running
buildozer and
adb logcat again indicates the error message hasn’t changed. This is because we aren’t explicitly including the
.wav file with our distribution. Edit
buildozer.spec to change that:
# (list) Source files to include (let empty to include all the files)
source.include_exts = py,png,jpg,kv,atlas,wav
Run the
buildozer command once again and the app fire up and start running! Seeing your Python app running on an Android phone is a bit of a rush, isn’t it? I was able to log into a jabber account, but selecting a chat window goes into “narrow” mode because the phone’s screen has a much higher pixel density than my laptop. We’ll have to convert our size handler to display pixels somehow.
I was able to send a chat message, which was exciting, but when the phone received a response, it crashed. Hard.
adb logcat showed a segmentation fault or something equally horrifying. I initially guessed that some concurrency issue was happening in
sleekxmpp, but it turned out that the problem was in Kivy. I debugged this by putting print statements between each line in the
handle_xmpp_message method and seeing which ones executed before it crashed. It turned out that Kivy is crashing in its attempt to play the
.wav file on an incoming message. Maybe it can’t handle the file format of that particular audio file or maybe there’s something wrong with the media service. Hopefully the media service will be improved in future versions of Kivy. For now, let’s use the most tried and true method of bugfixing: pretend we never needed that feature! Comment out the line:
#self.in_sound.play()
and rerun
buildozer android debug deploy run.
Now the chat application interacts more or less as expected, though there are definitely some Android related quirks. Let’s fix the display pixel issue in orkiv.kv:
<OrkivRoot>:
    mode: "narrow" if self.width < dp(600) else "wide"
    AccountDetailsForm:
All I did was wrap the
600 in a call to
dp, which converts the 600 display pixels into real pixels so the comparison is accurate. Now when we run the app, it goes into “narrow” mode because the screen is correctly reporting as “not wide enough for wide mode”. However, if you start the app in landscape mode, it does allow side-by-side display. Perfect!
And that’s the app running on Android. Unfortunately, in all honesty, it’s essentially useless. As soon as the phone display goes to sleep, the jabber app closes which means the user is logged out. It might be possible to partially fix this using judicious use of Pause mode. However, since the phone has to shut down internet connectivity to preserve any kind of battery life, there’s probably a lot more involved than that.
On my phone, there is also a weird interaction in which the
BuddyList buttons all show up in a green color instead of the colors specified in the Kivy language file and the args_converter.
Third, the app crashes whenever we switch orientations. This is probably a bug fixable in our code rather than in Kivy itself, but I don’t know what it is.
Also, touch events are erratic on the phone. Sometimes the scroll view won’t allow me to scroll when I first touch it, but immediately interprets a touch event on the ListItem. Sometimes touch events appear to be forgotten or ignored and I have to tap several times for a button to work. Sometimes trying to select a text box utterly fails. I think there must be some kind of interaction between Kivy’s machinery and my hardware here, but I’m not certain how to fix it.
Occasionally, when I first log in, the BuddyList refuses to act like a
ScrollView and the first touch event is interpreted as opening a chat instead of scrolling the window. This is not a problem for subsequent touch events on the buddy list.
Finally, there are some issues with the onscreen keyboard. When I touch a text area, my keyboard (I'm using SwiftKey) pops up, and it works well enough. However, the swipe-to-delete behavior, which deletes an entire word in standard Android apps, only deletes one letter here. More alarmingly, when I type into the password field, while the characters I typed are asterisked out in the field, they still show up in the SwiftKey autocomplete area. There seems to be some missing hinting between the OS and the app for password fields.
Before closing, I’d like to fix the problem where the app is no longer displaying icons on the laptop when I run
python orkiv/. My solution for this is not neat, but it’s simple and it works. However, it doesn’t solve the problem if we put the files in a
.zip. I’m not going to worry about this too much, since
buildozer is expected, in the future, to be able to make packages for desktop operating systems. It’s probably better to stick with Kivy’s tool. So in the end, this
__main__.py feature is probably not very useful and could be removed. (Removing features and useless code is one of the most important parts of programming.) However, for the sake of learning, let’s make it work for now! First we need to add a
root_dir field to the
Orkiv app. This variable can be accessed as
app.root_dir in kv files and as
Orkiv.get_running_app().root_dir in python files.
class Orkiv(App):
    def __init__(self, root_dir):
        super(Orkiv, self).__init__()
        self.root_dir = root_dir
        self.xmpp = None
Of course, we also have to change the
main() function that invokes the app to pass a value in. However, we can have it default to the current behavior by making the
root_dir a keyword argument with a default value:
def main(root_dir=""):
    Orkiv(root_dir).run()
Next, we can change the
__main__.py to set this
root_dir variable to
orkiv/:
from main import main

main("orkiv/")
The idea is that whenever a file is accessed, it will be specified relative to
Orkiv.root_dir. So if we ran
python main.py from inside the
orkiv/ directory (this is essentially what happens on android), the root_dir is empty, so the relative path is relative to the directory holding
main.py. But when we run
python orkiv/ from the parent directory, the
root_dir is set to
orkiv, so the icon files are now relative to the directory holding
orkiv/.
Finally, we have to change the icons line in the kivy file to reference
app.root_dir instead of hardcoding the path (a similar fix would be required for the sound file if we hadn’t commented it out):
Image:
    source: app.root_dir + "icons/" + root.online_status + ".png"
And now, the app works if we run
python orkiv/ or
python main.py and also works correctly on Android.
There are a million things that could be done with this Jabber client, from persistent logs to managed credentials to IOS deployment. I encourage you to explore all these options and more. However, this is as far as I can guide you on this journey. Thus part 9 is the last part of this tutorial. I hope you’ve enjoyed the ride and have learned much along the way. Most importantly, I hope you’ve been inspired to start developing your own applications and user interfaces using Kivy.
Financial Feedback
Writing this tutorial has required more effort and consumed more time than I expected. I’ve put on average five hours per tutorial (with an intended timeline of one tutorial per week) into this work, for a total of around 50 hours for the whole project.
The writing itself and reader feedback has certainly been compensation enough. However, I’m a bit used up after this marathon series and I’m planning to take a couple months off before writing anything like this tutorial again in my free time.
That said, I have a couple of ideas for further tutorials in Kivy that would build on this one. I may even consider collecting them into a book. It would be ideal to see this funded on gittip, but I’d want to be making $20 / hour (far less than half my normal wage, so I’d still be “donating” much of my time) for four hours per week before I could take half a day off from my day job to work on such pursuits — without using up my free time.
A recent tweet by Gittip founder Chad Whitacre suggests that people hoping to be funded on gittip need to market themselves. I don’t want to do that. I worked as a freelancer for several years, and I have no interest in returning to that paradigm. If I have to spend time “selling” myself, it is time I’m not spending doing what I love. In my mind, if I have to advertise myself, then my work is not exceptional enough to market itself. So I leave it up to the community to choose whether to market my work. If it is, someone will start marketing me and I’ll see $80/week in my gittip account. Then I’ll give those 4 hours per week to tutorials like this or other open source contributions. If it’s not, then I’ll feel more freedom to spend my free time however I like. Either way, I’ll be doing something that I have a lot of motivation to do.
Regardless of any financial value, I really hope this tutorial has been worth the time you spent reading it and implementing the examples! Thank you for being an attentive audience, and I hope to see you back here next time, whenever that is!
Creating Apps In Kivy: The Book
My request for crowd funding above didn’t take off. I’m making just under $12 per week on Gittip, all of which I’m currently regifting. Instead of my open contribution dream being fulfilled, I opted to follow the traditional publishing paradigm and wrote a book Creating Apps In Kivy, which was published with O’Reilly. I’m really excited about this book; I think it’s the highest quality piece of work I have done. If you enjoyed this tutorial, I encourage you to purchase the book, both to support me and because I think you’ll love it! | http://archlinux.me/dusty/tag/orkiv/ | CC-MAIN-2014-42 | refinedweb | 3,357 | 63.9 |
# General syntax to import specific functions in a library: ##from (library) import (specific library function) from pandas import DataFrame, read_csv from numpy import random # General syntax to import a library but no functions: ##import (library) as (give the library a nickname/alias) import matplotlib.pyplot as plt import pandas as pd
print 'Pandas version ' + pd.__version__
Pandas version 0.11.0?
''' randint(low, high=None, size=None) Return random integers from `low` (inclusive) to `high` (exclusive). ''' randint?
''' len(object) -> integer Return the number of items of a sequence or mapping. ''' len?
''' range([start,] stop[, step]) -> list of integers Return a list containing an arithmetic progression of integers. ''' range?
''' zip(seq1 [, seq2 [...]]) -> [(seq1[0], seq2[0] ...), (...)] Return a list of tuples, where each tuple contains the i-th element from each of the argument sequences. The returned list is truncated in length to the length of the shortest argument sequence. '''.
seed(500) random_names = [names[randint(low=0,high=len(names))] for i in range(1000)] # Print first 10 records print random_names[:10]
['Mary', 'Jessica', 'Jessica', 'Bob', 'Jessica', 'Jessica', 'Jessica', 'Mary', 'Mary', 'Mary']
# The number of births per name for the year 1880 births = [randint(low=0,high=1000) for i in range(1000)] print births[:10]
[968, 155, 77, 578, 973, 124, 155, 403, 199, 191]
BabyDataSet = zip(random_names,births) print = DataFrame(data = BabyDataSet, columns=['Names', 'Births']) df[:10]
''' df.to_csv(self, path_or_buf, sep=',', na_rep='', float_format=None, cols=None, header=True, index=True, index_label=None, mode='w', nanRep=None, encoding=None, quoting=None, line_terminator='\n') Write DataFrame to a comma-separated values (csv) file ''' df.to_csv?
The only parameters we will use is index and header. Setting these parameters to True.
read_csv(filepath_or_buffer, sep=',', dialect=None, compression=None, doublequote=True, escapechar=None, quotechar='"', quoting=0, skipinitialspace=False, lineterminator=None, header='infer', index_col=None, names=None, prefix=None, skiprows=None, skipfooter=None, skip_footer=0, na_values=None, true_values=None, false_values=None, delimiter=None, converters=None, dtype=None, usecols=None, engine='c', delim_whitespace=False, as_recarray=False, na_filter=True, compact_ints=False, use_unsigned=False, low_memory=True, buffer_lines=None, warn_bad_lines=True, error_bad_lines=True, keep_default_na=True, thousands=None, comment=None, decimal='.', parse_dates=False, keep_date_col=False, dayfirst=False, date_parser=None, memory_map=False, nrows=None, iterator=False, chunksize=None, verbose=False, encoding=None, squeeze=False)
read_csv?
Even though this functions has many parameters, we will simply pass it the location of the text file.
Location = C:USERNAME.xy\startups1880.txt
Note: Depending on where you save your notebooks, you may need to modify the location above.
Location = r'C:\Users\hdrojas\.xy\startups\births1880.txt' df = read_csv(Location)
Notice the r before the string. Since the slashes are special characters, prefixing the string with a r will escape the whole string.
df
<class 'pandas.core.frame.DataFrame'> Int64Index: 999 entries, 0 to 998 Data columns (total 2 columns): Mary 999 non-null values 968 999 non-null values dtypes: int64(1), object(1)
When the dataframe is large, pandas will print out a summary of the data.
Summary says:
* There are 999 records in the data set
* There is a column named Mary with 999 values
* There is a column named 539 with 999 values
* Out of the two columns, one is numeric, the other is non numeric = read_csv(Location, header=None) df
<class 'pandas.core.frame.DataFrame'> Int64Index: 1000 entries, 0 to 999 Data columns (total 2 columns): 0 1000 non-null values 1 1000 non-null values dtypes: int64(1), object(1)
Summary now says:
* There are 1000 records in the data set
* There is a column named 0 with 1000 values
* There is a column named 1 with 1000 values
* Out of the two columns, one is numeric, the other is non numeric
Now lets take a look at the last five records of the dataframe
df.tail()
If we wanted to give the columns specific names, we would have to pass another paramter called names. We can also omit the header parameter.
df = csv file now that we are done using it.
import os os.remove(Location)
The data we have consists of baby names and the number of births in the year 1880. We already know that we have 999(self, by=None, axis=0, level=None, as_index=True, sort=True, group_keys=True) Group series using mapper (dict or key function, apply given function to group, return result as series) or by a series of columns ''' df.groupby?
# Create a groupby onject Name = df.groupby(df['Names']) # Apply the sum function to the groupby object df = Name.sum() df
To find the most popular name or the baby name with the higest birth rate, we can do one of the following.
# Method 1: Sorted = df.sort(['Births'], ascending=[0]).
plot() is a convinient attribute where pandas lets you painlessly plot the data in your dataframe. We learned how to find the maximum value of the Births column in the previous section. Now to find the actual baby name of the 998 value looks a bit tricky, so lets go over it.
Explain the pieces:
df['Names'] - This is the entire list of baby names, the entire Names column
df['Births'] - This is the entire list of Births in the year 1880, the entire Births column
df['Births'].max() - This is the maximum value found in the Births column
[df['Births'] == df['Births'].max()] IS EQUAL TO [Find all of the records in the Births column where it is equal to 998]
df['Names'][df['Births'] == df['Births'].max()] IS EQUAL TO Select all of the records in the Names column WHERE [The Births column is equal to 998]
An alternative way could have been to use the Sorted dataframe:
Sorted['Names'].head(1).value
The str() function simply converts an object into a string.
# Create graph df['Births'].plot() # Maximum value in the data set MaxValue = df['Births'].max() # Name associated with the maximum value MaxName = df[df['Births'] == df['Births'].max()].index[0] # Text to display on graph Text = str(MaxValue) + " - " + MaxName # Add text to graph plt.annotate(Text, xy=(1, MaxValue), xytext=(8, 0), xycoords=('axes fraction', 'data'), textcoords='offset points') print "The most popular name" df[df['Births'] == df['Births'].max()] #Sorted.head(1) can also be used
The most popular name | http://nbviewer.ipython.org/urls/bitbucket.org/hrojas/learn-pandas/raw/master/lessons/02%20-%20Lesson.ipynb | CC-MAIN-2013-48 | refinedweb | 1,040 | 52.49 |
I wrote this test code trying to work out a problem. The code gives the correct answer for the example (zero) but to understand the problem i need to understand what the code is doing on a sort of step by step basis. In other words how to do it with a pen and paper.
The answer is 3240 i just don't know why. Can anyone help?
#include <iostream> using namespace std; int fun(int n); int a=50; int b=2000; int c=40; int main() { cout<<fun(0)<<endl; return 0; } int fun(int n) { if(n>b) return n-c; if(n<=b) return fun(a+fun(a+fun(a+fun(a+n)))); } | https://www.daniweb.com/programming/software-development/threads/366458/help-with-recursion | CC-MAIN-2017-09 | refinedweb | 117 | 80.82 |
>
Streaming API
NCache API now allows you to read and write binary data stream in the cache. NCache has implemented a CacheStream which is derived from the standard Stream and provides major functionality of streaming in NCache. However, it does not support 'seeking' which is the querying and modifying of the current position within a stream.
Streaming can be used with the following namespace:
using
Alachisoft.NCache.Web.Caching;
using
System.IO;
Stream Modes:
Stream works in two modes:
CacheStream Reader:
Read mode can either be acquired with lock or without lock. In read with lock mode, multiple read operations can be performed simultaneously on a stream but no write operation is allowed in this mode. While in read without lock, write operations can be done parallel to read operations.
Stream Read: (With Lock):
Multiple read operations can be performed on a stream, no write operation is allowed in this mode. CacheStream Reader class will throw an exception if stream is already in use by other application.
CacheStream
stream = _cache.GetCacheStream(key,
StreamMode
.Read);
stream.Read(readBuffer, 0, readBuffer.Length);
stream.Close();
Stream Read: (Without Lock):
Multiple read and write operations can be performed on a stream.
stream = _cache.GetCacheStream(key,
StreamMode
.ReadWithoutLock);
stream.Read(readBuffer, 0, readBuffer.Length);
stream.Close();
CacheStream Writer:
In write mode only single write operation can be performed on a stream. No read operation is allowed in this mode. CacheStream Writer class will throw an exception if stream is already in use by other application.
stream = _cache.GetCacheStream(key,
StreamMode
.Write);
stream.Write(writeBuffer, 0, writeBuffer.Length);
stream.Close();
Stream Close:
After using the stream in any mode, users are supposed to close the stream in order to release the lock.
Buffered Stream:
Cache stream directly reads and writes data from the cache. It directly calls cache for both operations which thereby increases the calls to the cache and reduces the performance of an application. To overcome the above issue NCache introduces the
Buffer stream
. Buffer stream is the set of bytes used for storing the data up to the certain limit before reading or writing it to the cache. Until the buffer reaches its maximum limit the data will not be read or write from/to the cache. Buffer stream reads or writes a chunk of data to the cache in a single call which reduces the number of calls to the cache. This chunk will be equal to the buffer size. Buffer size varies according to the requirement of application. Following code shows how to
GetBufferedStream
:
Stream wstream = _cache.GetCacheStream(key,
StreamMode
.Write).GetBufferedStream(1000);
wstream.Write(writeBuffer, 0, 200);
wstream.Write(writeBuffer, 200, 600);
wstream.Write(writeBuffer, 600, 1000);
wstream.Close();
See Also
Using Cache Dependency
|
Adding items with priority settings
|
Using Bulk Operations
|
Using Data Grouping
Send comments on this topic. | http://www.alachisoft.com/resources/docs/ncache/help-4-1/streaming-api.html | CC-MAIN-2013-48 | refinedweb | 470 | 66.13 |
The comments period for the XML 1.0 fifth edition revision finished last Friday 16th May. I didn’t make a submission, in part because I felt I have had a good run in the past and my concerns are pretty well known and unchanged.
In XML 1.0, we went strongly against accepted wisdom which held 1) that the future was Unicode so you didn’t need to support existing encodings, 2) that the present was beautifully layered so one standard shouldn’t try to overcome the deficiencies in others, and 3) that we should all live in a Standards Fantasyland (on the map near Boogie Wonderland) where even if the world had gone one way that didn’t agree with what the existing standards said, we should follow the standard. A complete triumph of engineering (systematizing what works) over schematising (insisting on the right way to do things).
So for 1) the XML encoding header allows multiple encodings. Now, ten years later, we are finally reaching the stage where UTF-8 for web pages has exceeded ASCII and 8879/Windows encoded pages (Unicode wrangler Mark Davis, now with Google but for a long time with IBM, recently released some figures on this), so it may indeed be coming closer to the time when XML can be simplified so as to only support UTF-* encodings: I doubt it will have any demand because it is handy, free (everyone has large transcoder libraries) and doesn’t get in anyone’s way.
For 2) the example is that XML adopted what we now call IRIs for System identifiers in entities: it took IETF almost a decade to catch up and formalize this, surely a record for any standard. “Internet time” are you kidding? XML deliberately didn’t use the official URL syntax, but opted for the approach that it was better to have the software shield the user from the details of delimiting. I think there are very few advocates of XML simplification who would be prepared to go using vanilla URL syntax. But now 10 years later, entities are fast disappearing (mind you, just this week I had a seminar where there were surprisingly many questions on trying to use entities schemas) and the IRI spec is out. Namespaces and XLink should be using IRIs now, but there is an underlying problem that character-by-character comparison of IRIs is not robust unless they are canonicalized.
For 3) the example was again the XML header specifying the encoding header, despite the information supposedly being available in the HTTP MIME headers. But the standards got it wrong: the person who creates a file is not the person who sets the HTTP MIME header, in effect. Now 10 years later the relative reduction in the number of encodings in widespread use does make encoding sniffing a much more workable approach, but still too fallible and time-wasting for mission critical data.
In XML 1.1, engineering won again. The decision was made to open up the naming rules from XML 1.0 to remove a dependency on versions of Unicode. However, because this meant in turn that XML 1.1 processors would not as reliably detect encoding errors (when you see “encoding error” think “database corruption” or “spurious data” or “spurious rejected documents”) the treatment of the C1 range of control characters (0×80-FF in IS8859-* encodings) was clarified to be non-well-formed (with special treatment for IBM’s NEL character). Control characters have no place in markup, as confirmed by Unicode Technical Reports and as emphasized recently by the OOXML BRM which required MS to change a couple of places where some control characters could be entered even though harmlessly delimited. I was startled during the OOXML debates how strongly this was held to be a vital, core part of the XML story from all sides.
XML 1.1 was an enormous flopperoony, for the unsurprising reason that if you put
version="1.1" then an XML 1.0 processor would spit the dummy. Some people have tried to claim that it failed because previously well-formed 1.0 documents that had C1 controls in them became non-WF. I have never seen such a document in the last decade, nor have I ever had any credible reports of one, and I can see no cases where putting C1 control characters in a document would be legitimate practice, so I think it is just bluffing: there has always been a wing of users of XML whose life would be easier if they could embed raw binary into XML and they deserve no sympathy or help.
So along comes XML 1.0 (fifth edition) as a draft. It has only a couple of changes of significance. The first is that it finally puts in place a rudimentary versioning system: E10 allows an XML 1.0 processor to parse an XML 1.x document on the understanding that it only reports things in terms of XML 1.0 rules and capabilities.
The second change then makes a mockery of the first. It introduces the lax naming rules from XML 1.1. Now such a change is not required for any reason, because XML 1.1 exists and could be used. So rather than go into a well-managed regime where documents are well-labelled, and XML minor versions chug along, XML 1.0 draft fifth edition just allows a new XML 1.0 parser to accept documents that all the other old XML 1.0 parsers will reject: and remember this is not because of previous bad practice being more consistently exposed, but because some innocent person has created a document with the new name characters and the XML 1.0 processors deployed in the last decade reject it.
Basically, the W3C XML WG is saying that if you get a document that breaks in this way, it is the receiver’s problem. The sender can say “But it is well-formed against the latest version of XML 1.0″ and the XML WG washes their hands. It is the triumph of bad engineering practice, of doing what can be guaranteed to fail, of putting the responsibility on the wrong person. It will cause problems first for the nominal beneficiaries of these extra name characters (since they will be unreliable) and second for people using non-UTF-8 encodings who won’t get as many WF errors. So who will benefit: the makers of standards who will have less housekeeping. They are not an unworthy set of stakeholders.
The W3C XML WG needs to revise the goals of XML (in s 1.1) to accomodate these changes. In particular
6. XML documents should be human-legible and reasonably clear.
no longer holds. The new rules allow a blank check, so you could have a document entirely made with element and attribute names from code points which have never even been allocated a character by Unicode. With the fifth edition, the goal becomes
6. XML documents may be human-legible and reasonably clear.
And the goal 5. needs changing
5. The number of optional features in XML is to be kept to the absolute minimum, ideally zero
because in effect support for these new naming characters becomes an optional feature: does your XML 1.0 parser support editions 1-4 or edition 5?
I didn’t write a comment to the W3C XML WG because nothing has changed over the last 10 years that makes the decisions in XML 1.0 and in XML 1.1 inappropriate. I don’t have any new information that changes anything, and the XML WG certainly has produced none. All that is needed is for the fifth edition to fix up the minor versioning issue, and then we could all transition to 1.1 on an as-needs basis. This minor-versioning fix is already at least five years overdue: fixing it opens the door for XML 1.1 to have a snowflake’s hope and will allow a better transition to XML 1.2 potentially including some other overdue changes (building in xml:id, namespaces, etc.)
To summarize: XML 1.0 (fifth edition) is bad from a standardization and engineering viewpoint, betrays the goals of XML 1.0 which have served well for the last decade, and may hurt the end-users it is intended to support. It sets up a workable versioning mechanism then fails to use it for a significant change. It provides a good foundation for workable minor versioning, then ignores the foundation and builds on sand with its allowing of incompatible names.
I may be wrong, but it looks like a hack to me. However, fortunately it barely impacts anyone in the West, including me nowadays, so who cares? Interoperability, schminteroparibility! Unambiguous labelling of data formats, gedoudahere!
I am not trying to suggest the W3C XML WG is doing this because they prefer to sit by some giddy swimming pool in their floral-printed bathing costumes sipping umbrella-ed beverages, that they clear their desk by making incompatibility problems someone else’s problem, or any laziness! But I think they at least owe it to explain why they are doing a substantive minor version change as an edition change, failing to use the edition mechanism they are setting up at the same time which would allow people who needed this feature to access an already-existing minor version!
(Disclaimer: I speak for myself, not the XML Core WG; nevertheless, I had a lot to do with XML 1.1 and not a little with XML 1.0 5e.)
The fact that (as everyone knows) there were no documents with C1 controls in them is irrelevant to the XML 1.1 flop. What mattered very much is that it became a political stick to beat XML 1.1 with; it helped people who didn't want it anyhow to make it irrelevant. Including that feature hurt XML 1.1's chances of success.
And it's disingenuous to say that XML 1.1 "exists and is available". For people who need and want native element and attribute names, XML 1.1 effectively does not exist and is not available, because there is essentially no support for it.
Goals are goals, not requirements. It's already possible to use names that nobody can read because they use barely-distinguishable or ultra-obscure Chinese characters. Likewise XML 1.0 had plenty of optional features, and in one sense every time XML 1.0 changes in any way an option exists: parsers can be fixed or not fixed. (Talk to Elliotte Rusty Harold about this sometime.)
Is it a hack? Yes. I tried once, the right way; now I'm trying again, the wrong way.
John: Cart before the horse. When there is inadequate provision of the layering infrastructure to positively support plurality (i.e. the old inadequate versioning) then it imposes artificial decisions.
Xerces2 supports XML 1.1, and consequently Java apps. I don't think that is "essentially no support." (Nor disingenuous!)
As for right/wrong: why not try it the systematic way? Each layer builds on the last and allows plurality at the next level. That is the only successful architecture for these standards and for evolution: TCP/*, MIME content types, etc etc. The layering/selection capability goes first.
We *did* try it the systematic way. Now we try it the brutally pragmatic way. And the 800-pound gorilla is nodding and smiling this time. | http://www.oreillynet.com/xml/blog/2008/05/xml_10_draft_fifth_edition_bui.html | crawl-002 | refinedweb | 1,923 | 63.9 |
Dependent Multiple Choice questions
in OpenSesame
Hello,
I'm currently setting up a questionnaire in OpenSesame. It contains some Multiple Choice questions that are dependent from each other in the following sense:
I have a question “1”. Possible answers are "a", "b", "c", "d". Now if the subject selects "a", the experiment continues with question 2. If the subject selects answers "b", "c" or "d", then a question 1b must be answered.
A picture of the sequence to make it more clear:
I tried to solve this problem with inline scripts the following way: I made an overall sequence which contains many sequences. Those all contain a single inline script with either a question 1, 2, 3, etc. or a question 1, 2, 3 AND a question 1b, 2b, 3b, etc. These "b-items" are only presented when the previous question was answered with “b”, “c”, or “d”. So far, so good. The problem I have now is that the question that has been answered already is displayed completely new (the subject has to answer it again, first answer is overwritten). What I would like to have is that the participant’s answer to the first question (the one which he already answered and led to the display of the “b”-question) is already selected on the page where the “b”-question appears. So that it looks as if it wasn't a new page but the "b-item" is displayed on the same page additionally.
This is the code I am using in the "b-items":
def form_validator(): options = [0,1,2,3] return var.q01b in options title = Label(text=u'title') q1 = Label(text=u'question1', center=False) q2 = Label(text=u'question1b', center=False) ratingScale1 = RatingScale( var=u'q01', nodes=[u'a', u'b', u'c', u'd'] ) ratingScale2 = RatingScale( var=u'q01b', nodes=[u'a', u'b', u'c', u'd'] ) nextButton = Button(text=u'Weiter') form = Form(validator=form_validator, rows=[1,1,1,1], cols=[4,7], margins=(10,40,10,40), spacing=25) form.set_widget(title, (0, 0), colspan=2) form.set_widget(q1, (0, 1)) form.set_widget(ratingScale1, (1, 1)) form.set_widget(q2, (0, 2)) form.set_widget(ratingScale2, (1, 2)) form.set_widget(nextButton, (0, 3), colspan=2) form._exec()
Is there any solution to this problem? Thank you in advance,
Nicole
Hi Nicole,
I think the easiest solution is to abandon the sequence structure and have all the elements within a single inline_script (that is within a single trial, across trials you can still use an inline_script)
Like you already do, prepare all the questions in advance, but on separate forms and execute the first one, once a response has been given check the answer and if the variable is '0' move on to question 2, if not, go on to question 1b. This should keep it simple, but still implements the functionality you want.
By the way, I don't think you need a form validator.
Does that make sense, or do I misunderstand you?
Eduard
Hi Eduard
Thank you for your fast reply! This is what I did first but it is rather important to us that the first and the "b-item" are displayed on the same page.
I also tried to put the first and the "b-item" into the same inline script but it didn't work beacause of the following reason: I can only check whether the b-item should be displayed after the first item was executed and the participants chose their answer. Afterwards I would have to set up a new form, which would lead to the same problem as described above.
I need the form validator since we want to prevent people from clicking the next button without answering a question. Or did I miss something there?
Regards,
Nicole
Hi Nicole,
I can only check whether the b-item should be displayed after the first item was executed and the participants chose their answer.
In this case, you are right. You will need to first execute one form without the b-item, once the participant responds the "right" thing, you make the same form again, but add the b field to it, and show it. I think you know how to add that, right? I also think you can set the default option that is shown (the "default" argument?) to the response that triggered the B-item.
I need the form validator since we want to prevent people from clicking the next button without answering a question.
Oh, okay, yeah that makes sense maybe.
Eduard
Hey Eduard
The default option was exactly what I was looking for!
Thank you for your help.
Regards,
Nicole | https://forum.cogsci.nl/discussion/comment/16269/ | CC-MAIN-2019-26 | refinedweb | 780 | 71.34 |
Welcome! Got a question? Do you have -Ypartial-unification turned on? Other FAQs:
Managed to shave off outer
Evals, doesn't look like I lose non-strictness here
def loeb[F[_]: Functor, A](x: F[Eval[F[Eval[A]]] => Eval[A]]): F[Eval[A]] = { x.fmap(a => a(Later(loeb(x)))) }
probably can't get simpler than this? since this is self-referential, i can't avoid both accepting and returning evals like this, which is as close (or as far) from haskell as i can get
Monoid
def foldLeft[F[_], A, B](values: F[A], seed: B)(fold: (B, A) => B)(implicit monad: Monad[F], monoid: Monoid[F[B]]): F[B] = { var accumulation = seed val work = monad.flatMap(values) { value => accumulation = fold(accumulation, value) monoid.empty } monoid.combine(work, monad.pure(accumulation)) }
Listand
fs2.Stream, hence needing the result to be wrapped in an
F. I see that
Streamdoes not implement
Foldablefor this reason (as the result would need to be a single-element stream, not a
B). However, since
Listand
fs2.Streamboth implement
Monadand
Monoid, the above function works. It probably breaks all kinds of laws, but just wondering if there's something equivalent already?
def test(implicit U: UserRepoSC[SampleApp], I: ImageRepoSC[SampleApp]): Free[SampleApp, String] = { import U._ import I._ for { u <- findUserI("001") _ <- EitherT.leftT[Future, String]("err") //Free.liftT would work? _ <- getImageI("002") } yield { print(u) u.get.name } } test.foldMap(sampleApp)
Free.injectbut just check it (disclaimer: I have little experience with free)
I already do this , these are smart constructors
implicit def UserRepoSC[F[_]](implicit I: InjectK[UserRepoAlg, F]): UserRepoSC[F] = new UserRepoSC[F] class ImageRepoSC[F[_]](implicit I: InjectK[ImageRepoAlg, F]) { def getImageI(id: String) = Free.inject[ImageRepoAlg, F](GetImage(id)) } // We need this implicit to convert to the proper instance when required implicit def ImageRepoSC[F[_]](implicit I: InjectK[ImageRepoAlg, F]): ImageRepoSC[F] = new ImageRepoSC[F] }
But my understanding is till limited.
I have a bit silly question and I'm not sure I'll explain it well. So I had the discussion with my F# friend the other day, and while he likes the FP, he doesn't do pure FP. So I tried to explain him referential transparency and
IO monad. He said that in F# you can easily convert code block to function/lazy-value by adding parentheses to variable. So this:
let a = let read = readline() logReading(read) printTimeOfDay() read
Becomes:
let a() = let read = readline() logReading(read) printTimeOfDay() read
I guess that now you have RT because everywhere where you use
a(), you can "swap" it with its chunk of code block - both ways whole code block will be executed every time (not just returning
read).
So since you can do same thing in Scala by making every function accept call-by-name arguments, can we call that kind of programming "pure FP" if we are disciplined and keep away from mutating class' fields? Instead of
flatMaping, we would have
compose and/or our code would start to look like
f(g(h(j(x)))), but I guess it's valid.
Anyway, I started to ask myself the benefits of
IO data structure. I can name a few: you can have whole range of "helper" methods on IO object (
map,
flatMap,
traverse, ...) which you simply don't have on
Function; you structure you code more-or-less sequentially due to
Monad nature (although you have sequential code in
f(g(x)) case too). When it come to mutation, you still have to be disciplined, nothing prevents you from cheating and mutating some field in your
IO.flatMap.
I guess my question is: what are (all) the benefits of
IO, especially compared to
Function since both can be looked in a way as description of computation?
F#
but I think it's misleading to frame IO in terms of evaluation semantics
Anyone can expand on this?
Function1implementation like that one
() => A, which is synchronous
(A => Unit) => Unit
(Either[Throwable, A] => Unit) => Unit)
IOis capable of embedding both things
() => A, in its
=> Aform, it's the argument to
Sync[F].delay
(Either[Throwable, A] => Unit) => Unit)is the argument to
Async[F].async | https://gitter.im/typelevel/cats?at=5c7d766f35c01307537529b2 | CC-MAIN-2020-29 | refinedweb | 704 | 54.32 |
So what's so great about Agents anyway? What's wrong with using the tried and true agentless methods for monitoring AIX hosts, like SNMP?
Several Orion features released throughout the years have previously been available only for nodes running other operating systems, such as Linux or Windows. AIX had largely been left out in the cold. That is, until today.
Last, and unquestionably most important, is the wide array of SAM Application Component Monitors supported by the AIX Agent. From these components, you can create templates to monitor virtually any application: commercial, open source, or homegrown.
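To make the homegrown-application case concrete, here is a rough sketch of what a script-based component monitor can look like. SAM's script monitors generally expect the script to print its result as a numeric "Statistic:" line (optionally paired with a human-readable "Message:" line) and to signal up/down status through the script's exit code. Everything specific below — the queue-depth metric, the threshold, and the function names — is invented for illustration, not taken from any shipped template:

```python
def sam_output(statistic, message=None):
    """Format script output the way SAM script monitors generally
    expect it: a 'Statistic:' line carrying the numeric value, plus
    an optional human-readable 'Message:' line."""
    lines = ["Statistic: %d" % statistic]
    if message:
        lines.append("Message: %s" % message)
    return "\n".join(lines)

def check_queue_depth(depth, warn_at=100):
    """Hypothetical homegrown-app check: report a queue depth and
    return the exit status the real script would pass to sys.exit()
    (0 = component up, non-zero = component down)."""
    status = 0 if depth < warn_at else 1
    report = sam_output(depth, "queue depth is %d (threshold %d)" % (depth, warn_at))
    return status, report

# In a deployed monitor, 'depth' would be measured on the AIX host
# (for example, parsed from a command's output); here it is hard-coded.
status, report = check_queue_depth(42)
print(report)
```

The same pattern works for any metric you can measure from a script on the AIX host, which is what makes the template approach flexible enough to cover in-house applications.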
Hopefully you are already reaping the benefits of the many improvements that were made in Network Performance Monitor 12.1, Server & Application Monitor 6.4, Storage Resource Monitor 6.4, Virtualization Manager 7.1, Netflow Traffic Analyzer 4.2.2, and Network Configuration Manager 7.6. If you haven't yet had a chance to upgrade to these releases, I encourage you to do so at your earliest convenience, as there are a ton of exciting new features that you're missing out on.
Something a few of you who have already upgraded may have seen is one or more deprecation notices within the installer. These may have included references to older Windows operating systems or Microsoft SQL versions. Note that these deprecation notices will only appear when upgrading to any of the product versions listed above, and only if you are installing on one of the Windows OS or SQL versions deprecated in those releases. But what does it mean when a feature or software dependency has been deprecated? Does it mean it's no longer supported, or that those versions can't be used anymore?
Many customers throughout the years have requested advance notice whenever older operating systems and SQL database versions would no longer be supported in future versions of Orion, allowing them sufficient time to properly plan for those upgrades. Deprecation does not mean that those versions can't be used, or that they are no longer supported at the time the deprecation notice is posted. Rather, it means that those deprecated versions remain fully supported today, but that future Orion product releases will likely no longer support them. As such, all customers affected by these deprecation notices should take this opportunity to begin planning their migrations if they wish to stay current with the latest releases. So what exactly was deprecated with the Q1'17 Orion product releases?
Windows Server 2008 R2 was released on October 22, 2009, and Microsoft ended mainstream support for Windows Server 2008 R2 SP1 just over five years later, on January 13, 2015. For customers, this means that while new security updates continue to be made available for the aging operating system, bug fixes for critical issues require the separate purchase of an Extended Hotfix Support contract agreement, in addition to paying for each fix requested. Since so few of our customers have such agreements with Microsoft, the only recourse is often an unplanned, out-of-cycle operating system upgrade.
Microsoft routinely launches new operating system versions, with major releases on average every four years and minor releases approximately every two. As new server operating system versions are released, customer adoption begins immediately thereafter; sometimes even earlier, during the Community Technology Preview, when some organizations place production workloads on the pre-release operating system. Unfortunately, leveraging the technological advances these later versions of Windows provide occasionally requires losing backwards-compatibility support for some older versions along the way. Similar challenges also occur during QA testing whenever a new operating system is released. At some point it's simply not practical to thoroughly and exhaustively test every possible permutation of OS version, language, hotfix rollup, or service pack. Eventually the compatibility matrix becomes so unwieldy that a choice between quality and compatibility must be made; and really, that's not a choice at all.
SQL Server 2008 was released on August 6, 2008, with SQL 2008 R2 following on April 21, 2010. In the seven years since, there have been tremendous advances in Microsoft SQL Server: from the introduction of new redundancy options, to technologies like In-Memory OLTP and columnstore indexes, which provide tremendous performance improvements. Maintaining compatibility with older versions of Microsoft SQL precludes Orion from being able to leverage these and other advances made in later releases of Microsoft SQL Server, many of which have the potential to dramatically accelerate the overall performance and scalability of future releases of the Orion platform.
If you happen to be running SQL Server 2008 or SQL 2008 R2 on Windows Server 2008 or 2008 R2, not to worry. There's no need to forklift your existing SQL server prior to upgrading to the next Orion release. In fact, you don't even need to upgrade the operating system of your SQL server, either. Microsoft has made the in-place upgrade process from SQL 2008/R2 to SQL 2014 extremely simple and straightforward. If your SQL server is running on Windows Server 2012 or later, then we recommend upgrading directly to SQL 2016 SP1 or beyond, so you can limit the potential for additional future upgrades when/if support for SQL 2012 is eventually deprecated.
Once new Orion product module versions are released which no longer support running on Windows Server 2008, 2008 R2, or SQL 2008/R2, SolarWinds will continue to provide official support for previously released Orion module versions running on these older operating system and SQL Server versions. These changes only affect Orion module releases running Orion Core versions later than 2017.1. If you are already running the latest version of an Orion product module on Windows Server 2008/R2 or SQL 2008/R2 and have no ability to upgrade either of those in the near future, not to worry. Those product module versions will continue to be supported on those operating system and SQL versions for quite some time to come.
While the next release of Orion will no longer support running on Windows or SQL 2008/R2, monitoring systems which run on these older versions of Windows and SQL remains fully supported. This also includes systems where the Orion Agent is deployed. That means if you're using the Orion Agent to monitor systems running Windows Server 2008 or Windows Server 2008 R2, rest assured that support for monitoring those older systems with the Orion Agent remains fully intact in the next Orion release. The same is true if you're monitoring Windows or SQL 2008/R2 agentlessly via WMI, SNMP, etc. Your next upgrade will not impact your ability to monitor these older operating systems or SQL versions in any way.
Support for installing evaluations on 32-bit operating systems will also be dropped from all future releases of Orion product modules, allowing us to begin migrating the Orion codebase to 64-bit. This should improve stability, scalability, and performance for larger Orion deployments. Once new product versions begin shipping without support for 32-bit operating systems, users wishing to evaluate Orion-based products on a 32-bit operating system are encouraged to contact Sales to obtain earlier product versions which still support them.
Current Orion product module releases, such as Network Performance Monitor 12.1 and Server & Application Monitor 6.4, require a minimum version of .NET 4.5.1. All future Orion product module releases built atop Core versions later than 2017.1 will require a minimum version of Microsoft's .NET 4.6.2, which was released on 7/20/2016. This version of .NET is also fully compatible with all current shipping and supported versions of Orion product module releases, so there's no need to wait until your next Orion module upgrade to update to this newer version of .NET. Subsequently, .NET 4.7 was released on 5/2/2017 and is equally compatible with all existing Orion product module versions in the event you would prefer to upgrade directly to .NET 4.7 and bypass .NET 4.6.2 entirely.
It's important to note that Microsoft's .NET 4.6.2 has a hard dependency on Windows Update KB2919355, which was released in May 2014 for Windows Server 2012 R2 and Windows 8.1. This Windows Update dependency is rather sizable, weighing in at between 319MB and 690MB. It also requires a reboot before .NET 4.6.2 can be installed and function properly. As a result, if you don't already have .NET 4.6.2 installed, you may want to plan for this upgrade during your next scheduled maintenance window to ensure your next Orion upgrade goes as smoothly and quickly as possible.
With many of the changes referenced above, minimum system requirements have also needed adjustment. Windows Server 2012 and later operating systems utilize more memory than previous versions. Similarly, .NET 4.6 utilizes slightly more memory than .NET 4.5.1. And as we move forward, 64-bit processes inherently use more memory than the same process compiled for 32-bit. To ensure users have a pleasant experience running the next version of Orion products, we will be increasing the absolute minimum memory requirement from 4GB to 6GB of RAM for future versions of Orion product modules. The recommended minimum memory requirement, however, will remain at 8GB.
While most readers today would never consider using a Windows 10 laptop day in and day out with just 4GB of RAM, those same people likely wouldn't imagine running an enterprise-grade, server-based monitoring solution on a system with similar specs either. If you do, however, find yourself running Orion on 4GB of RAM today, an 8GB memory upgrade can typically be had for less than $100.00. This can be done before the next release of Orion product modules and will likely provide a significant and immediate improvement to the overall performance of your Orion server.
All items listed above can be completed prior to the release of the next Orion product module versions and will ensure your next upgrade goes off without a hitch. This posting is intended to provide anyone impacted by these changes with sufficient notice to plan these upgrades during their regularly scheduled maintenance periods, rather than during the upgrade process itself. In-place upgrades of SQL, as stated above, are a fairly simple and effective way to get upgraded quickly with the least possible amount of effort. If you're running Orion on Windows Server 2008 or 2008 R2, in-place OS upgrades are also feasible. If neither of these is feasible or desirable for any reason, you can migrate your Orion installation to a new server or migrate your Orion database to a new SQL server by following the steps outlined in our migration guide.
If for any reason you find yourself running Orion on Windows Server 2008, Server 2008 R2, or on SQL 2008/R2 and unable to upgrade, don't fret. The current releases of Orion product modules will continue to remain fully supported for quite some time to come. There is absolutely zero requirement to be on the latest releases to receive technical support. In almost all cases, you can also utilize newly published content from Thwack's Content Exchange with previous releases, such as Application Templates, Universal Device Pollers, Reports, and NCM Configuration Change Templates. When you're ready to upgrade, we'll be here with plenty of exciting new features, enhancements and improvements.
At any given time, Orion supports running on a minimum of three major versions of the Windows Operating System and SQL database server. When a new server OS or SQL version is released by Microsoft, SolarWinds makes every effort possible to support up to four OS and SQL versions for a minimum of one Orion product module release. If at any time you find yourself four releases behind the most current OS or SQL server version, you may want to begin planning an in-place upgrade or migration to a new server during your next regularly scheduled maintenance window to ensure your next Orion product module upgrade goes flawlessly.
For your reference, below is a snapshot of Windows Operating Systems and SQL Server versions which will be supported for the next release of Orion product modules. This list is not finalized and is still subject to change before release. However, nothing additional will be removed from this list, though there could be additional version support added after this posting.
Up until this point, much of the noise surrounding the Server & Application Monitor 6.2 beta has been focused exclusively on a new optional agent that allows for (among many other things) polling servers that reside in the cloud or DMZ. More information regarding this new optional agent can be found in my three part series entitled "Because Sometimes You Feel Like A Nut" linked below.
If you can believe it, this release is positively dripping with other incredibly awesome new features and it's time now to turn the spotlight onto one that's sure to be a welcome addition to the product.
Since the advent of AppInsight for SQL in SAM 6.0, and AppInsight for Exchange in 6.1, you may have grown accustomed to the inclusion of a new AppInsight application with each new release. Well, I'm happy to announce that this beta release of SAM 6.2 is no different, and includes the oft-requested AppInsight for Microsoft's Internet Information Services (IIS).
AppInsight for IIS leverages PowerShell to collect much of its information about the IIS server. As such, PowerShell 2.0 must be installed on the local Orion server or Additional Poller to which the node is assigned. PowerShell 2.0 must also be installed on the IIS Server being monitored. Windows 2008 R2 and later operating systems include PowerShell 2.0 by default. Only if you are running Orion on Windows 2008 (non-R2) or are planning to monitor servers running IIS 7.0, will you need to worry about the PowerShell 2.0 requirement.
Beyond simply having PowerShell installed, Windows Remote Management (WinRM) must also be configured. This is true both locally on the Orion server, as well as on the remotely monitored IIS host. If you're not at all familiar with how to configure WinRM, don't worry. We've made this process as simple as clicking a button.
After discovering your IIS servers and choosing which of them you wish to monitor, either through the Add Node Wizard, List Resource, or Network Sonar Discovery, you will likely find them listed in the All Applications tree resource on the SAM Summary view in an "Unknown" state. This is because WinRM has not been configured on either the local Orion server or the remotely monitored IIS host. Clicking on any AppInsight for IIS application in an "Unknown" state from the All Applications resource launches the AppInsight for IIS configuration wizard.
When the AppInsight for IIS configuration wizard is launched you will be asked to enter credentials that will be used to configure WinRM. These credentials will also be used for the ongoing monitoring of the IIS application once configuration has successfully completed. By default the same credentials used to manage the node via WMI are selected. Under some circumstances however, the permissions associated with that account may not be sufficient for configuring WinRM. If that is the case, you can select from the list of existing credentials available from your Credential Library, or enter new credentials for use with AppInsight for IIS.
Once you've selected the appropriate existing, or newly defined credential for use with AppInsight for IIS, simply click "Configure Server". The configuration wizard will do the rest. It should only take a minute or two and you're up and monitoring your IIS server.
If configuring WinRM to remotely monitor your IIS server isn't your jam, or if perhaps you'd simply prefer not using any credentials at all to monitor your IIS servers, AppInsight for IIS can be used in conjunction with the new optional agent, also included as part of this SAM 6.2 beta. When AppInsight for IIS is used in conjunction with the agent you can monitor IIS servers running in your DMZ, remote sites, or even in the cloud, over a single encrypted port that's NAT friendly and resilient enough to monitor across high latency low bandwidth links.
As with any AppInsight application, AppInsight for IIS is designed to provide a near complete hands off monitoring experience. All Sites and Application Pools configured on the IIS server each appear in their respective resources. As Sites or Application Pools are added or removed through the Windows IIS Manager, those Sites and Application Pools are automatically added or removed respectively from monitoring by AppInsight for IIS. This low touch approach allows you to spend more time designing and building your IT infrastructure, rather than managing and maintaining the monitoring of it.
Each website listed in the Sites resource displays the current status of the site, its state (started/stopped/etc.), the current number of connections to that site, the average response time, and whether the site is configured to automatically start when the server is booted.
It is all too common for people to simply stop or disable the Default Web Site or other unused sites in IIS rather than delete them entirely. To reduce or eliminate false positive "Down" alert notifications in these scenarios, any sites that are in a Stopped state when AppInsight for IIS is first assigned to a node are placed into an unmanaged state automatically. These sites can of course be easily re-managed from the Site Details view at any time should you wish to monitor them.
The Application Pools resource also displays a variety of useful information, such as the overall status of the Application Pool, its current state (stopped/started/etc.), the current number of worker processes associated with the pool, as well as the total CPU, memory, and virtual memory consumed by those worker processes. It's the perfect at-a-glance view for helping identify runaway worker processes, or noisy-neighbor conditions that can occur as a result of resource contention when multiple Application Pools are competing for the same limited share of resources.
As one might expect, clicking on a Site or Application Pool listed in either of these resources will direct you to the respective Site or Application Pool details view where you will find a treasure trove of valuable performance, health, and availability information.
The release of Network Performance Monitor v11 introduced an entirely new method of monitoring end-user application performance in relation to network latency with the advent of Deep Packet Inspection, from which the Quality of Experience (QoE) dashboard was born. The Top XX Page Requests by Average Server Execution Time resource is powered by the very same agent as the Server Packet Analysis Sensor that was included in NPM v11. AppInsight for IIS complements the network and application response time information provided by QoE by showing you exactly which pages are taking the longest to be served up by the IIS server.
This resource associates user requested URLs with their respective IIS "site", and its average execution time. Expanding any object in the list displays the HTTP verb associated with that request. Also shown is the date and time of the associated web request, the total elapsed time, IP address of the client who made the request, and any URL query parameters passed as part of the request.
New linear gauges found in this resource now make it possible to easily understand how a value relates to the warning and critical thresholds defined. Did I just barely cross over into the "warning" threshold? Am I teetering on the brink of crossing into the "critical" threshold? These are important factors that weigh heavily into the decision-making process of what to do next.

Perhaps you've just barely crossed over into the "warning" threshold and no corrective action is really required. Or maybe you've blown so far past the "critical" threshold that you can barely even see the "good" or "warning" thresholds anymore, and a full-scale investigation into what's going on is in order. In these cases, understanding where you are in relation to the defined thresholds is critical to determining both the severity of the incident and an adequate response.
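The classification a gauge performs is simple to state in code. The sketch below is purely illustrative: the function name, the "higher is worse" assumption, and the threshold values used in the examples are mine, not SAM defaults.

```cpp
#include <string>

// Classify a measured value against its warning and critical thresholds,
// the way a linear gauge does. Assumes higher values are worse (e.g. an
// average response time in milliseconds).
std::string classify(double value, double warning, double critical) {
  if (value >= critical) return "critical";
  if (value >= warning) return "warning";
  return "good";
}
```

Knowing not just the label but how close the value sits to the next boundary is exactly what the gauges visualize.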
Server execution time is the time spent by the server processing the user's request. This includes the web server's CPU processing time, as well as back-end database query time, and everything in between. The values shown in this resource are irrespective of network latency; meaning, page load times will never be better than what's shown here without back-end server improvements, regardless of what network performance looks like. Those improvements could be as simple as rebuilding database indexes, defragmenting the hard drive, or adding additional RAM to the server. Either way, high server execution time means users are waiting on the web server or back-end database queries to complete before the page can be fully rendered.
What good is the entire wealth of information that AppInsight for IIS provides if there is no way to remediate issues when they occur?
These are just a few of the capabilities of AppInsight for IIS. If you'd like to see more, you can try it out for yourself by participating in the SAM 6.2 beta. To do so, simply sign-up here. We thrive on feedback, both positive and negative. So kick the tires on the new SAM 6.2 beta and let us know what you think in the SAM Beta Forum.
Please note that you must currently own Server & Application Monitor and be under active maintenance to participate in this beta. Betas must be installed on a machine separate from your production installation and used solely for testing purposes. Also, if you're already participating in the NPM v12 beta, the SAM 6.2 beta can be run on the same server, alongside the NPM v12 beta.
There's been quite a bit of chatter recently surrounding the hotly anticipated release of Network Performance Monitor v11, featuring the entirely new Quality of Experience (QoE) dashboard. At the center of what makes all of this amazing QoE information possible are Packet Analysis Sensors, which can be deployed either to the servers running the business critical applications themselves, or to a dedicated machine connected to a SPAN port which collects the same information completely out-of-band for multiple servers simultaneously. For all intents and purposes, these Packet Analysis Sensors could be considered specialized agents, solely dedicated to the purpose of collecting packet data from the network. But what if these "agents" could be used to monitor other aspects of the servers they were installed on, or leveraged to address many of the complicating factors and limitations associated with agentless monitoring? These were precisely the kind of questions we asked ourselves as we were developing the Packet Analysis Sensors for NPM.
What are these "complicating factors" you might ask? It depends on your environment's architecture. It's quite possible you have numerous uses for an agent today that you're not even aware of yet. Whether due to network design obstacles or security requirements and concerns, many organizations have had to make compromises regarding what they monitor, how, and to what extent. This has left blind spots on the network, where some servers or applications simply cannot be monitored to the full extent desired, or not at all in some cases. With the soon-to-be-released beta of Server & Application Monitor (SAM) 6.2, we take Orion into a brave new world without compromise.
So what exactly are some of the challenges many of us face when attempting to monitor our server infrastructure and the applications that reside upon them?
Whether it's the NSA, those willing to perform corporate espionage, or the black-hat hacker who hangs out at your local Starbucks, it's important to keep prying eyes from peering into your organization's packets. While SNMPv3 has existed for quite a long time, all versions of Windows up to and including Windows 2012 R2 still rely upon the older and less secure SNMPv2, a protocol which provides no encryption or authentication. Microsoft's WMI protocol addresses the authentication aspects that are sorely lacking in SNMPv2, but encryption is a different matter altogether. While it's possible to force the use of encryption in the WMI protocol, this is not the default behavior and is seldom ever done: it requires modifications to WMI namespaces, a process that must be repeated on each host you wish to manage. Beyond that, your monitoring solution must also work with WMI encryption, something very few solutions on the market today support.
The Agent included in the SAM 6.2 beta has been designed from the ground up with security first and foremost in mind. To that end, the agent utilizes FIPS-compatible 2048-bit TLS encryption to ensure all communication between the Agent and the Orion Poller is fully encrypted and safe from would-be cybercriminals.
Not all protocols are created equal. WMI and RPC may be right at home on today's gigabit Ethernet networks, but that is because these protocols were designed almost two decades ago as LAN protocols. They were never designed to traverse bandwidth-constrained WAN links, nor to function in high-latency environments or across the internet. Attempting to use either of these agentless protocols in those scenarios is very likely to result in frequent polling timeouts. Roughly translated, that means you are completely blind to what's going on.
The Agent in SAM 6.2 eliminates the issues associated with these protocols by utilizing the standards-based HTTPS protocol, which is both bandwidth-efficient and latency-friendly. This means the agent could be used to monitor such extreme scenarios as servers running on a cruise ship or oil platform in the middle of the South Pacific from a datacenter in Illinois via a satellite internet link without issue, something that would be otherwise impossible using traditional agentless protocols such as WMI or RPC.
There are still plenty more challenges this new Agent is aimed at addressing that I will cover in a follow-up post. In the meantime, however, you might be wondering what this means for the future of agentless monitoring capabilities that Orion was built upon.
Absolutely nothing! SolarWinds pioneered the industry in agentless monitoring, and remains 100% committed to our "agentless first" approach in everything that we do. SolarWinds will continue to push the boundaries of agentless technologies to the very limit of their capabilities and beyond. We will continue to lead the industry by being at the forefront of new agentless technologies as they emerge, now or at any time in the future.
The war between agent-based and agentless IT monitoring solutions has gone on as long as there have been things in the environment that needed to be monitored. Agentless monitoring solutions have always had the advantage of not requiring any additional software to be deployed, managed, and maintained throughout the device's lifecycle. There is typically little concern over resource contention on the host being monitored, because there is essentially zero footprint on the machine in an agentless configuration. Due to their nature, agentless monitoring solutions can be deployed and providing value within a couple of hours in most environments. Agent-based monitoring solutions typically require rigorous testing, as well as a tedious internal configuration-change approval process, before any agent software can be deployed into production. Agent deployment is commonly a manual process that requires running the installation locally on each server before it can be monitored. Then there are the security concerns associated with having any piece of software running on a server that could potentially be exploited by a hacker as a means of entry into the system.
If the agent vs agentless war has taught us anything, it is that each approach has its own unique advantages and disadvantages. There is no single method that suits all scenarios best or equally. This is why we fundamentally believe that for full coverage, any monitoring solution you choose must provide excellent agentless monitoring capabilities, as well as provide an optional agent for those scenarios where agentless monitoring simply isn't feasible or prudent.
We here at SolarWinds believe that, given our agentless heritage, we are uniquely qualified to understand and address many of the problems that have plagued agent-based monitoring solutions of the past. It is our intent to make agent-based monitoring as simple and painless as agentless monitoring is today.
The agent included in SAM 6.2 will be capable of monitoring virtually everything you can monitor today on a WMI managed node in SAM. This includes, but is not limited to node status (up/down), response time, latency (all with no reliance on ICMP), CPU, Memory, Virtual Memory, Interfaces, Volumes, Hardware Health, Asset Inventory, Hyper-V virtualization, as well as application monitoring. This very same agent can also be utilized as a Packet Analysis Sensor for deep packet inspection if so desired and appropriately licensed. The agent is officially supported on the following Windows operating systems.
While the agent should also work on Windows 2003 and 2003 R2 hosts, these operating systems are not officially supported. Non-Windows based operating systems such as Linux/Unix are also not supported by the agent at this time. If you are at all interested in a Linux/Unix agent for SAM that provides monitoring of Linux/Unix systems and applications, you can vote for this idea here.
The agent software is essentially free. You remain bound by the limits of the license you own regardless of how you're polling that information, whether via an agent or agentlessly. For example, if I own a SAM AL150 license, I can monitor 150 nodes, volumes, and components. This remains true whether I'm monitoring those servers with an agent installed or agentlessly.
There's still plenty more agent stuff to talk about, including additional scenarios where the agent could be used to overcome common obstacles you might encounter with agentless monitoring. In my follow-up post I will discuss some of those as well as cover the various different agent deployment options and agent management, so stay tuned for more information.
If you're anything like me, you'd much rather try something out yourself than read about it. Fortunately for you, this new Agent is included as part of the SAM 6.2 beta, which will be available soon. If you currently own Server & Application Monitor and are under active maintenance, you can sign up here. You will then be notified via email when the SAM 6.2 beta is available for download.
We've just wrapped up the Server & Application Monitor 6.1.1 Service Release, meant to address any major outstanding issues identified in SAM 6.1 since its official release. Now it's time once again to turn our attention to the future of Server & Application Monitor. With that said, below is a list of items the team is currently working on.
Server & Application Monitor 6.1.1 Release Candidate is now available to all SAM customers under active maintenance through your Customer Portal. Release Candidates are fully supported and upgradable to the final release when available. If you are experiencing any of the issues outlined below, we recommend downloading the SAM 6.1.1 Service Release and upgrading now.
This release of SAM includes the following fixes and additions:
The following table provides the internal Development ID numbers and external support ID numbers for fixed SAM issues as well as new feature requests in this release. Search in the support ID number column for the number assigned to your support case.
We here at SolarWinds are continuously looking to improve our products in both functionality and user experience. Failover Engine (FoE), as you can imagine, doesn't get a tremendous amount of feedback from the Thwack community. This is because FoE is akin to the spare tire sitting in the trunk of your car: you hardly ever think about it until you need it. With that in mind, I've compiled a few thought-provoking questions that I hope will engage those of you in the community to think about how you use FoE. This should help give us a better understanding of how and where we can improve FoE in the future.
What has your experience been like Installing/Upgrading FoE?
In a Failover Engine LAN configuration, how do you maintain the standby host?
Are your Failover Engine member servers joined to the Domain?
How do you prefer to manage administrative tasks in Failover Engine?
What is the primary reason your Orion server is down?
How much redundancy is enough for your environment?
In this article we will discuss ASP.NET MVC interview questions. We are focusing on objective questions that can be asked in an interview. Since we are talking about MVC, we should have some knowledge of MVC.

Objective Interview Questions on ASP.NET MVC

Question 1: What does MVC stand for?
Answer: Model-View-Controller.

Question 2: What are the three main components or aspects of MVC?
Answer: Model, View, and Controller.

Question 3: Which namespace is used for ASP.NET MVC? Or, which assembly is used to define the MVC framework?
Answer: System.Web.Mvc

Question 4: What is the default view engine in ASP.NET MVC?
Answer: The Web Form (ASPX) and Razor view engines.

Question 5: Can we remove the default view engine?
Answer: Yes, by clearing the view engine collection (ViewEngines.Engines.Clear()) in Application_Start.
Question 6: Can we have a custom view engine?
Answer: Yes, by implementing the IViewEngine interface or by inheriting from the VirtualPathProviderViewEngine abstract class.

Question 7: Can we use a third-party view engine?
Answer: Yes. ASP.NET MVC can use third-party view engines such as Spark, NHaml, NDjango, Hasic, Brail, Bellevue, SharpTiles, StringTemplate, Wing Beats, and SharpDOM.

Question 8: What are view engines?
Answer: View engines are responsible for rendering the HTML from your views to the browser.

Question 9: What is the Razor engine?
Answer: The Razor view engine is an advanced view engine from Microsoft, packaged with MVC 3. Razor uses an @ character instead of ASPX's <% %>, and Razor does not require you to explicitly close the code block.

Question 10: What is scaffolding?
Answer: Quickly generating a basic outline of your software that you can then edit and customize.

Question 11: What is the name of the NuGet scaffolding package for ASP.NET MVC 3 scaffolding?
Answer: MvcScaffolding

Question 12: Can we share a view across multiple controllers?
Answer: Yes, it is possible to share a view across multiple controllers by putting the view into the Shared folder.

Question 13: What is unit testing?
Answer: The testing of every smallest testable block of code in an automated manner. Automation makes things more accurate, faster, and reusable.

Question 14: Is unit testing of an MVC application possible without running the controller?
Answer: Yes, by the preceding definition.

Question 15: Can you change the action method name?
Answer: Yes, we can change the action method name using the ActionName attribute. The action method will then be called by the name defined by the ActionName attribute.
Question 16: How do you prevent a controller method from being accessed by a URL?
Answer: By making the method private or protected; but sometimes we need to keep the method public, and that is where the NonAction attribute is relevant.

Question 17: What are the features of MVC5?
Answer:
Question 18: What are the various types of filters in an ASP.NET MVC application?
Answer:
Question 19: If there is no match in the route table for the incoming request's URL, which error will result?
Answer: A 404 HTTP status code.

Question 20: How do you enable attribute routing?
Answer: By adding the routes.MapMvcAttributeRoutes() method to the RegisterRoutes() method of the RouteConfig.cs file.

I think 20 questions are enough for one day. Your comments and suggestions are always welcome.
#include <authorizer.hpp>
Checks with the identity server back end whether the request is allowed by the policies of the identity server, i.e. whether request.subject can perform request.action with request.object. For details on how the request is built and what its parts are, refer to "authorizer.proto".

Returns a future that is set to true if the action is allowed, and to false otherwise. A failed future indicates a problem processing the request, and the request might be retried in the future.

Implements mesos::Authorizer.
Returns an ObjectApprover which can synchronously check authorization on an object. The returned ObjectApprover is valid throughout its whole lifetime or the lifetime of the authorizer, whichever is smaller. Calls to the approved(...) method can return different values depending on the internal state maintained by the authorizer (which can change due to the need to keep the ObjectApprover up-to-date).

Returns an ObjectApprover for the given subject and action.

Implements mesos::Authorizer.
marrshal (Member)
Content Count: 16
Joined
Last visited
Everything posted by marrshal
Transparency problem
marrshal posted a topic in Graphics and GPU Programming: I'm implementing a simple particle system. I draw a textured polygon where the particle is supposed to be. This texture has transparent regions (at the edges). Here is the problem: [attachment=7397:scr.JPG] As you can see, in the transparent regions the background is drawn, not the part of the particle behind. During the rendering of the fountain I didn't disable depth testing or anything similar. Thanks in advance!
Drawing near objects problem
marrshal posted a topic in Graphics and GPU Programming: When I draw objects that are supposed to be really close to each other I get results like these: [attachment=7027:fail01.JPG] [attachment=7030:fail02.JPG] In this case the water is about 0.1f above the ground in the "problem" area. If I zoom in a lot the problem goes away. And if I zoom out the problem appears in many more areas: [attachment=7029:fail03.JPG] Do you know where the problem comes from? I think it is something about the depth testing, but I still can't fix it. Thank you in advance!
- "What toolkit are you using to create your window/OpenGL context? Depth buffer precision has to be specified as you create the context." This helps; I was using a 16-bit depth buffer.
simple c++ project
marrshal replied to phil67rpg's topic in For Beginners's Forum: Well, my first attempt to make a game was a very simple game in the good old console. Actually it was a snake game. You can use this approach, or something better looking (I mean with graphics better than in the first suggestion) and cool. (Actually I have started writing Chicken Invaders using Allegro because it looks too bad in the C++ console.)
- Thank you very much for the answers. I have placed the near and far distances closer together and I have separated the surfaces of the water and the ground, so the water plane and the ground plane are crossing, not overlapping as before. This fixes the problem from the viewpoint of the player, but if I'm looking with my global camera the problem is still there. @swiftcoder how can I check the bits of the depth buffer I'm requesting? Everything I'm doing about my depth buffer is: glEnable (GL_DEPTH); glEnable (GL_DEPTH_TEST);
Rigid body dynamics
marrshal posted a topic in Math and Physics: I'm writing a simple physics simulator and I have a problem with the dynamics of rigid bodies. How does a body (for example a box) rotate and translate when a force (or several forces) is applied at different points of the body?
- In a bitmap this "surface" should be a certain color (if you mean another bitmap it will be more complex). So you can use getpixel to get the value of the pixel in the color format of the bitmap, and then getr, getg, getb, geta to determine the RGBA color of this pixel. And if this color is gray (the color of the road) your IsRoad func returns true. This will work (I hope) but it isn't the safest solution. It would be better to use a separate texture for the whole road and to color the non-road space an "unusable" color or transparent.
marrshal posted a topic in General and Gameplay Programming: For a couple of weeks I've been having big trouble with the game I'm trying to write. For the models and the animations I'm using 3ds Max, but I can't use them in my OpenGL game. Actually I have succeeded in loading a model from an .ASE file, but loading animation from .ASE is impossible for me. I have found a demo where .md2 files are loaded, but I can't find a reliable exporter... I can't install lib3ds (missing file) and PortaLib3D can't even be downloaded (broken links)... I feel I'm going crazy. Please, help me.
- Well this is the easiest solution according to me... Can you post your code and the errors?
- You should have a function like this:

bool IsRoad(int x, int y);

which returns true if the point (x, y) lies on a road and false otherwise. And in the function which moves the character you should perform a movement only if the player's position after the movement is on a road. For example:

class Player
{
    int posX, posY;
    //...
    void Update()
    {
        int newPlayerPosX, newPlayerPosY;
        if (keyIsPressed['w'])
        {
            newPlayerPosX = posX;
            newPlayerPosY = posY - 1;
        }
        if (keyIsPressed['s'])
            //...
        //...
        if (IsRoad(newPlayerPosX, newPlayerPosY))
        {
            posX = newPlayerPosX;
            posY = newPlayerPosY;
        }
    }
};

If you load the world as a texture it will be hard to write the IsRoad function. I think it is easier (and faster) to describe the world in an array such as

0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 T 0 0 0 0 0 R R R 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 R R R 0 0 0 0 0 0 0 0 0 0 0 0 0 0 H 0 0 0 R R 0 0 0 0 0 0 0 0 0 0 0 R R R R R R R R R R 0 0 0 0 ...

where 0 is empty space, R is a road, T is a tree, and H is a house. And now you should have a texture for the tree, for the house, for the road, etc. Now the IsRoad func will be:

bool IsRoad(int x, int y)
{
    return world[x][y] == 'R';
}

Hope I was helpful.
- I found the problem. Somehow in the middle of the night I decided not to draw prop_menu.skin, and I was surprised when I saw all of my small boxes drawn. So, because of the sprite priority, first the "missing" boxes are drawn, then prop_menu.skin, and at the end the "visible" boxes. To fix the problem I should just change

spriteBatch.Begin(SpriteSortMode.BackToFront, BlendState.NonPremultiplied);
base.Draw(gameTime);
spriteBatch.End();

in the main Draw func to

spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.NonPremultiplied);
base.Draw(gameTime);
spriteBatch.End();

and reorder some of the draw func calls.
XNA does not draw everything
marrshal posted a topic in Graphics and GPU Programming: I'm writing a simple game in XNA but some elements are not drawn to the screen. The code is too big to post, so I'm posting only the draw functions.

Main draw func:

protected override void Draw(GameTime gameTime)
{
    GraphicsDevice.Clear(Color.LightGreen);

    spriteBatch.Begin(SpriteSortMode.BackToFront, BlendState.NonPremultiplied);
    base.Draw(gameTime);
    spriteBatch.End();

    //-Draw Cursor
    spriteBatch.Begin();
    if (scr_cursor.unit == null)
        spriteBatch.Draw(scr_cursor.cursor, new Vector2(Mouse.GetState().X, Mouse.GetState().Y), Color.White);
    else
        spriteBatch.Draw(scr_cursor.unit.small, new Vector2(Mouse.GetState().X, Mouse.GetState().Y), Color.White);
    spriteBatch.End();
}

Draw func of one of the Properties_menu components:

public override void Draw(GameTime gameTime)
{
    sBatch = (SpriteBatch)Game.Services.GetService(typeof(SpriteBatch));
    menus[mode].DrawIt();
    sBatch.Draw(skin, position, Color.White);
    base.Draw(gameTime);
}

And Menu.DrawIt():

public void DrawIt()
{
    if (main_obj != null)
        prop_menu.sBatch.Draw(main_obj, prop_menu.position + new Vector2(20, 20), Color.White);
    prop_menu.sBatch.Draw(prop_menu.box_large, prop_menu.position + new Vector2(20, 20), Color.White);
    for (int j, i = 0; i < 3; i++)
        for (j = 0; j < 3; j++)
        {
            prop_menu.sBatch.Draw(prop_menu.box_small, prop_menu.position + new Vector2(i * 70 + 20, j * 70 + 160), Color.White);
        }
}

Here prop_menu is set to this in the Properties_menu constructor. The result is attached. [attachment=4237:img.JPG] Why aren't all the small boxes drawn, and how can I fix this? MarrShal
- for (int j, i = 0; i < 3; i++)
    for (j = 0; j < 3; j++)
    {
        prop_menu.sBatch.Draw(prop_menu.box_small, prop_menu.position + new Vector2(i * 70 + 20, j * 70 + 160), Color.White);
    }

This generates the positions of all the elements. (prop_menu.position is the position of the whole menu.)
XNA 4.0 'Keyboard Input' Help
marrshal replied to FlameTheLoner's topic in Graphics and GPU Programming: Well, when you are wondering what a piece of code does, delete it and see the difference. When you delete this piece you'll see that if you hold a key it will be drawn many times. This code avoids that. It is very badly formatted so it is hard to understand. Look at this:

//Get pressed keys and display them
Keys[] pressedKeys;
pressedKeys = keyState.GetPressedKeys();
// work through each key presently pressed
for (int i = 0; i < pressedKeys.Length; i++)
{
    //set a flag to indicate we have not found the key
    bool foundIt = false;
    //work through each key previously pressed
    for (int j = 0; j < oldKeys.Length; j++)
    {
        if (pressedKeys[i] == oldKeys[j])
        {
            //we found the key in previously pressed keys
            foundIt = true;
            //no need to look further
            break;
        }
    }
    //...
}

oldKeys are the keys pressed (at the same time) in the last call of Update; pressedKeys are the currently pressed keys. The first loop "visits" each currently pressed key. The second loop compares the current key from the first loop against the previously pressed keys. When (if) it is found, the search stops (that is what the break statement in the second loop does) and that key is not drawn. Analogously, if it is not found, the key is drawn. Note: when you release all keys, pressedKeys is empty, and because of that you can draw a single letter several times. Hope I was helpful.
XNA 4.0 'Keyboard Input' Help
marrshal replied to FlameTheLoner's topic in Graphics and GPU Programming: Change the end of Update to:

if (foundIt == false)
{
    //if we get here we didn't find the key in old keys, so
    //add the key to the end of the message string
    string keyString = ""; //empty string
    switch (pressedKeys[i])
    {
        //digits
        case Keys.D0: keyString = "0"; break;
        case Keys.D1: keyString = "1"; break;
        // other keys
        default: keyString = pressedKeys[i].ToString(); break;
    }
    messageString = messageString + keyString;
}
Console Snake game in C++
marrshal posted a topic in For Beginners's Forum: This is a simple snake game I wrote when I was learning C++. About the implementation - it could be more object-oriented, but I think for a person who would like to see a prehistorical game it'd be OK. I'm not permitted to attach the code so I'm posting it:

/*
 * Author: Marin Shalamanov
 * Date: 14.09.2010
 */
#include <iostream>
#include <cstdio>
#include <fstream>
#include <string>
#include <ctime>
#include <conio.h>
#include <queue>
#include <windows.h>
using namespace std;

#define H 30
#define W 30

char scr[H][W];
short int length;
short int i, j;
short int diff;
bool eaten = false;

void init();
void print();
void clear_screen();
void intro();
void game_over();
void set_diff();
void main_menu();
void play();

class snake {
public:
    queue<short int> x; // coordinates of each piece of the snake
    queue<short int> y; // Yes, it would be better if there was a class Coordinate
                        // but I'm too lazy to rewrite it

    void init() {
        x.push(H/2); y.push(W/2 - 1);
        x.push(H/2); y.push(W/2);
    }

    void add() {
        scr[x.back()][y.back()] = scr[x.front()][y.front()] = 'X';
    }

    // Movements
    void up() {
        if (x.back() == 0) game_over();
        x.push(x.back() - 1);
        y.push(y.back());
        if (scr[x.back()][y.back()] == 'X') game_over();
        if (scr[x.back()][y.back()] != 'o') {
            scr[x.front()][y.front()] = ' ';
            x.pop();
            y.pop();
        } else eaten = true;
        scr[x.back()][y.back()] = 'X';
    }

    void down() {
        if (x.back() == H - 1) game_over();
        x.push(x.back() + 1);
        y.push(y.back());
        if (scr[x.back()][y.back()] == 'X') game_over();
        if (scr[x.back()][y.back()] != 'o') {
            scr[x.front()][y.front()] = ' ';
            x.pop();
            y.pop();
        } else eaten = true;
        scr[x.back()][y.back()] = 'X';
    }

    void left() {
        if (y.back() == 0) game_over();
        x.push(x.back());
        y.push(y.back() - 1);
        if (scr[x.back()][y.back()] == 'X') game_over();
        if (scr[x.back()][y.back()] != 'o') {
            scr[x.front()][y.front()] = ' ';
            x.pop();
            y.pop();
        } else eaten = true;
        scr[x.back()][y.back()] = 'X';
    }

    void right() {
        if (y.back() == W - 1) game_over();
        x.push(x.back());
        y.push(y.back() + 1);
        if (scr[x.back()][y.back()] == 'X') game_over();
        if (scr[x.back()][y.back()] != 'o') {
            scr[x.front()][y.front()] = ' ';
            x.pop();
            y.pop();
        } else eaten = true;
        scr[x.back()][y.back()] = 'X';
    }
} sn;

int main() {
    intro();
    main_menu();
    return 0;
}

void main_menu() {
    clear_screen();
    printf("\n\tMAIN MENU \n \n");
    printf("\t1. Start game \n");
    printf("\t2. Exit \n");
    printf("\n\n \n\t");
    short int choice;
    scanf("%d", &choice);
    switch (choice) {
        case 1: set_diff(); play(); break;
        case 2: exit(0); break;
        default: main_menu();
    }
}

void play() {
    sn.init();
    init();
    sn.add();
    length = 2;
    char key;
    short int foodx, foody; // apple's coordinates
    short int moves = 29;
    while (true) {
        clear_screen();
        print();
        if (kbhit()) key = getch(); // If a key is pressed
        switch (key) {
            case 'w': sn.up(); break;
            case 's': sn.down(); break;
            case 'a': sn.left(); break;
            case 'd': sn.right(); break;
            case '0': exit(0); break;
        }
        moves++;
        if (moves == 30 || eaten) { // the apple changes its location
            if (!eaten) scr[foodx][foody] = ' ';
            else length++;
            moves = 0;
            foodx = rand() % H;
            foody = rand() % W;
            scr[foodx][foody] = 'o';
            eaten = false;
        }
        Sleep(diff);
    }
}

void init() {
    for (i = 0; i < H; i++)
        for (j = 0; j < W; j++)
            scr[i][j] = ' ';
}

void print() {
    for (i = 0; i < H; i++) {
        for (j = 0; j < W; j++)
            printf("%c ", scr[i][j]);
        printf("\n");
    }
    printf("Length: %d ", length);
    for (j = 5; j < W; j++)
        printf(" ");
    printf("+");
}

void clear_screen() {                            // Actually, this func does not clear,
    COORD coord = {0};                           // it just sets the cursor at the
    HANDLE h = GetStdHandle(STD_OUTPUT_HANDLE);  // top left corner of the screen
    SetConsoleCursorPosition(h, coord);
}

void intro() {
    clear_screen();
    printf("\n\n\t\tSNAKE\n\n");
    printf("\tControls: WASD\n\n");
    getch();
}

void game_over() {
    clear_screen();
    printf("\n\n\n\n\n\n\n\n\n\n\n\n\n\t\t\t\tGAME OVER!\n\n");
    Sleep(1000);
    main_menu();
}

void set_diff() {
    printf("\n\tDifficulty (1-5): ");
    scanf("%d", &diff);
    diff = (5 - diff) * 25;
}

Note: Because of the "clear" function it only works for Windows. MarrShal
The names of variables, functions, and object classes in VMD follow certain conventions in their capitalization and syntax. Also, the files used for VMD code, documentation, and data are formatted according to specific guidelines. This section describes these style guidelines.
Classes derived from a base class should generally append a descriptive word to the base class name. If the new word begins with a number, the additional word should be added to the end of the base class name. Examples: GraphicsFltkRepDynamicBonds and Displayable3D
void Scene::prepare_draw(DisplayDevice * d)
DisplayDevice.h and DisplayDevice.C; NameList.h (no NameList.C needed)
C source code files should have a .c extension (that is, use a lower case c), while C++ files should have a .C extension. ALL header files should have a .h extension, and any Fortran files should have a .f extension. Latex files should end in .tex.
vmd_macros.tex pg_chapters.tex ug.tex
Many types of files (particularly C/C++/Fortran source code or header files, Latex documentation files, and shell script files) require a CVS header at the beginning of the file. This header should be placed at the very beginning, before any other text in the file. It consists of a set of comment lines which describe the name, purpose, and history of revisions to the file. This is done by using CVS keywords embedded in the comments, which are replaced by the proper values when the file is checked out, and by having a section in the comments for a basic description of the purpose of the file. Templates of CVS headers for each of the different file types which require them are provided in the directory CVS. When a new file is created, a copy of the relevant header template should be placed at the top of the file, and the file description inserted as comments in the section of the template provided for this purpose. The descriptions below of how to format each file also describe the name of the CVS template to use.
All header files should bracket their text between an #ifndef ... #endif pair, and define a macro to indicate the header file has been processed. For example, right after the CVS header should come the text
#ifndef DISPLAYDEVICE_H
#define DISPLAYDEVICE_H
...
#endif
-c"# " added to the options to the rcs program (described in the CVS usage section).
XML Namespaces have been suggested as part of Ant2; however, I think the discussion has concentrated
on what would be the best way to use them that is simple and effective.
To me the only good reason to have namespaces is to be able to use, in the same project, tasks
from different sources that may be using the same names. In current Ant this is not possible
since the namespace is flat. For the use of libraries, we do not really require namespaces
but just a way to import (as Donald mentioned already) the tasks belonging to a particular
library.
I also prefer the manual declaration of the libraries being used as opposed to some auto-install
sort of facility. The reason for this is my belief that it is much better to document in the
buildfile which resources are required than to assume that every user will find out by other
means.
Once you accept this, there is no reason to enforce that libraries are located in some specific
place; they can be anywhere, including a remote location. Ant may provide a convenient location
(${ant.lib}) but it should be just that, a convenience.
The next question is how these libraries interact with ClassLoaders. Should each library have
its own ClassLoader? Should they be able to share them? What about external libraries being
required? Should one be able to specify an additional classpath for the library?
<tasklib name="lib1" location="mydir/mytasks.jar" />
<lib1:mytask .... />
<tasklib location="${ant.lib}/optional.jar">
<classpath path="${ant.lib}/junit.jar" />
</tasklib>
<junit ..... />
This would allow a way to provide namespace support while otherwise simplifying things.
Now, with that many views from each one of us, no wonder we haven't really settled on one
yet.
Jose Alberto
----- Original Message -----
From: "Ted Neward" <tneward@javageeks.com>
To: <ant-dev@jakarta.apache.org>
Sent: Tuesday, October 09, 2001 6:57 AM
Subject: RE: Optional tasks
> What about some kind of idea similar to XML namespaces?
>
> MyTasks.jar
> -----------
> contains com.develop.MyCustomAntTask
> and the manifest holds
>
> Manifest-Version: 1.0
> Ant-Task-Namespace: urn:developmentor.anttasks.v1
> Ant-Task: mytaskdef com.develop.MyCustomAntTask
>
> (or maybe this needs to be a "task descriptor" buried somewhere inside the
> .jar file; either way, the idea is the same)
>
>
> In my build.xml file I write
>
> <ant:project xmlns:ant="..." xmlns:mytasks="urn:developmentor.anttasks.v1">
>
> <mytasks:mytaskdef
>
> </ant:project>
>
>
> When Ant starts up, it'd scan the .jar files in the "tasklib" directory,
> establish the namespace-per-tasklib mapping, and then use the XML
> namespaces-to-prefixes syntax to allow the author of the build script to use
> whatever prefix is desired. Very similar to how JSP taglibs work.
>
> Just a random thought out of the blue.
>
> Ted Neward
> {.NET||Java} Course Author & Instructor
> DevelopMentor ()
>
>
> > -----Original Message-----
> > From: Conor MacNeill [mailto:conor@cortexebusiness.com.au]
> > Sent: Friday, October 05, 2001 4:52 AM
> > To: ant-dev@jakarta.apache.org
> > Subject: Re: Optional tasks
> >
> >
> > Kevin Jones wrote:
> >
> > > How, there isn't much magic. All of the optional tasks that come with
> > Ant are defined in the default.properties resource within Ant, along
> > with the core tasks. The distinction between a core and an optional task
> > is somewhat fuzzy. In general a core task requires no additional
> > resources beyond those provided by the JDK and Ant itself. Optional
> > tasks usually require something more, that some Ant users may not have
> > installed. It may be a third party library (jar) such as JUnit or it
> > could be something from the javax namespace. Further, IMHO, core tasks
> > are somewhat more fundamental in some way. Optional tasks are
> > effectively taskdef'd for you, but without the opportunity to specify
> > the classpath, etc. They are required to be on the classpath, which the
> > optional.jar normally is.
> >
> > Overall the mechanism is rather unsatisfactory and we have long
> > considered a different approach of task libraries which when you drop
> > them into your ANT_HOME/lib area would automatically make the tasks
> > available, subject to some mechanism to handle ambiguous tasknames.
> >
> > Conor
> >
> >
> >
> >
> >
> >
> >
> | http://mail-archives.apache.org/mod_mbox/ant-dev/200110.mbox/%3C00d001c150b9$27705690$0100a8c0@jose%3E | CC-MAIN-2018-30 | refinedweb | 670 | 57.98 |
Asked by:
Getting the state of a computer
Question
Hi,
I don't know whether this is the right place to ask my question. If not, can you please reroute me to the correct place?
I'm developing an NFC (Near Field Communication) device based on an Arduino microcontroller board that acts like a keyboard to the computer and types the password stored in a Proximity Inductive Coupling Card (PICC, an NFC card). Because this device acts as a keyboard, if someone uses the device and the card in Notepad, the stored password will be instantly typed into Notepad. To avoid this I need something like the following:
When the Arduino board asks the computer "Are you locked, and is your username this?",
the computer must reply to the Arduino board with the answer, and the microcontroller will decide what to do next depending on the answer.
I need to know whether such communication is possible. Is there any way that I can get the current state of the computer (whether it's locked or not)? Sorry for my terrible English, as it's not my first language.
Hasitha Dilshan Dissanayaka
- Moved by Bruce Eitman Friday, April 26, 2019 12:54 PM Not a Windows Compact question
All replies
Can you communicate over USB to a computer? Yes, of course you can. USB.org has lots of information about it.
Can you send a message to a computer to ask it about its state? Yes.
Bruce Eitman
Senior Engineer
Bruce.Eitman AT Synopsys DOT com
My BLOG
I work for Synopsys
- Proposed as answer by IoTGirl (Microsoft employee) Tuesday, April 23, 2019 6:44 PM
Dear Bruce Eitman,
Thank you very much for the reply. But I want to know how I can achieve this, i.e. what API/library I must use for this task. However, I want to skip the process of writing a computer program that would have to be installed on the computer to help the Arduino in this case, if possible.
Hasitha Dilshan Dissanayaka
- Edited by Hasitha Dilshan Tuesday, April 23, 2019 6:56 PM
- You have not told us enough for me to answer your questions or discuss possible solutions.
Bruce Eitman
Senior Engineer
Bruce.Eitman AT Synopsys DOT com
My BLOG
I work for Synopsys
As this is an Arduino based question, you might be better off looking in their forums for help: forum.arduino.cc
A search for Arduino and NFC there would probably find you more folks looking at the same scenario and hardware. Also, as the NFC is likely an add-on you could reach out to the NFC part manufacturer for samples of how they expect it to be used.
Hi,
I want to know about a C/C++ library/API that I can use to identify whether a Windows 10 PC is locked or not (and if possible, the current user name whose password I have to enter). I'm not an experienced user and I don't know whether I'm asking a valid question either. But I think that, through their years of experience, people here know such things. Actually, at the current time I only have the idea, and these days I'm trying to play with the NFC hardware and Arduino. I'll try to be more detailed as soon as I dig into this. In simple form, what I want is this: when the Arduino microcontroller asks the computer "Are you locked?", the computer must reply back to the microcontroller. The microcontroller will act like a typical USB device.
If I wasted your time asking a vague question, I'm very sorry. But I'll try to be more detailed as I go further on this project. I highly appreciate your help on this question, and thank you very much.
Hasitha Dilshan Dissanayaka
- Edited by Hasitha Dilshan Wednesday, April 24, 2019 7:25 PM
Hi IoTGirl,
Maybe you are right. I should ask this in the Arduino forum. What I want is a link between the Arduino and the Windows 10 OS, so the Arduino microcontroller can know whether the PC is locked or not.
Hasitha Dilshan Dissanayaka
- Edited by Hasitha Dilshan Wednesday, April 24, 2019 7:22 PM
I want to know about a C/C++ library/API that I can use to identify whether a Windows 10 PC is locked or not
For windows 7 and above, WTS API can be used with WTSQuerySessionInformation:
There is a sample method:
#include <Wtsapi32.h>
#pragma comment(lib, "Wtsapi32.lib")

bool IsSessionLocked()
{
    WTSINFOEXW* pInfo = NULL;
    WTS_INFO_CLASS wtsic = WTSSessionInfoEx;
    bool bRet = false;
    LPTSTR ppBuffer = NULL;
    DWORD dwBytesReturned = 0;
    LONG dwFlags = 0;
    DWORD dwSessionID = WTSGetActiveConsoleSessionId();

    if (WTSQuerySessionInformation(WTS_CURRENT_SERVER_HANDLE, dwSessionID, wtsic, &ppBuffer, &dwBytesReturned))
    {
        if (dwBytesReturned > 0)
        {
            pInfo = (WTSINFOEXW*)ppBuffer;
            if (pInfo->Level == 1)
            {
                dwFlags = pInfo->Data.WTSInfoExLevel1.SessionFlags;
            }
            if (dwFlags == WTS_SESSIONSTATE_LOCK)
            {
                bRet = true;
            }
        }
        WTSFreeMemory(ppBuffer);
        ppBuffer = NULL;
    }
    return bRet;
}
hope it helps.. | https://social.msdn.microsoft.com/Forums/en-US/b0031856-c912-4bb3-bf18-93c0d58e3c11/getting-the-state-of-a-computer?forum=vssmartdevicesnative | CC-MAIN-2019-35 | refinedweb | 810 | 68.91 |
Introduction
Sentiment Analysis, or opinion mining, is the analysis of the emotions behind words using Natural Language Processing and Machine Learning. With everything shifting online and brands and businesses giving utmost importance to customer reviews, sentiment analysis has been an active area of research for the past 10 years. Businesses are investing hugely to come up with an efficient sentiment classifier.
Why Fine-Grained Sentiment Analysis?
While exploring, I mostly found classifiers which use binary classification (just positive and negative sentiment); one good reason, which I faced myself, is that fine-grained classifiers are a bit more challenging, and also there are not many resources available for them.
It’s attention to detail that makes the difference between average and stunning. If you need more precise results, you can use fine-grained analysis. Simply put, you can not only identify who talks about a product but also what exactly is talked about in their feedback. For example, for comparative expressions like “Scam 1992 was way better than Mirzapur 2.” — a fine-grained sentiment analysis can provide much more precise information than a normal binary sentiment classifier. In addition to the above advantage, dual-polarity reviews like The location was truly bad… but the people there were glorious.” can confuse binary sentiment classifiers giving incorrect predictions.
I think the above advantage will give enough motivation to go for fine-grained sentiment analysis.
How to conduct fine-grained sentiment analysis: Approaches and Tools
Data collection and preparation. For data collection, we scraped the top 100 smartphone reviews from Amazon using Python with the Selenium and BeautifulSoup libraries. If you don't know how to use Python with the BeautifulSoup and requests libraries for web scraping, here is a quick tutorial. Selenium Python bindings provide a simple API to write functional/acceptance tests using Selenium WebDriver.
Let’s begin coding now!!
import requests
from fake_useragent import UserAgent
import csv
import re
from selenium import webdriver
from bs4 import BeautifulSoup
We begin by importing some libraries. The requests library is used to send requests to the URL and receive the content of the webpage. BeautifulSoup is used to parse the content of the webpage into a more readable format. Selenium is used to automate the process of scraping the web page; without Selenium you have to send the headers and cookies yourself, and I found that process more tedious.
Searching for products and getting the ASIN (Amazon Standard Identification Number)
Now we will create helper functions that, based on the search query, get the ASIN numbers of all the products. These ASIN numbers will help us create the URL of each product later. We created two functions, searching() and asin(), which fetch the webpage and store all the ASIN numbers in a list. We found that when we search for a particular product on amazon.in, the URL can be broken into three parts: the base search URL + the search query + the page number. So we searched for smartphones up to page 7; you can extend this to as many pages as you like.
def searching(url, query, page_no):
    """
    Search for the results page based on the url and the query.

    Parameters:
        url     = main site from which the data is to be parsed
        query   = product/word that is to be searched
        page_no = results page to fetch
    Returns:
        the page content if found, or else "Error"
    """
    path = url + query + "&page=" + str(page_no)
    page = requests.get(path, headers=header)  # header: user-agent dict defined elsewhere
    if page.status_code == 200:
        return page.content
    else:
        return "Error"


def asin(url, query, page_no):
    """
    Get the ASINs (Amazon Standard Identification Numbers) of the products.

    Parameters:
        url     = main url from which the asins need to be scraped
        query   = product category for which the asins are to be scraped
        page_no = results page to fetch
    Returns:
        list of asins of the products
    """
    product_asin = []
    response = searching(url, query, page_no)
    soup = BeautifulSoup(response, 'html.parser')
    for i in soup.find_all("div", {"class": "sg-col-20-of-24 s-result-item s-asin sg-col-0-of-12 sg-col-28-of-32 sg-col-16-of-20 sg-col sg-col-32-of-36 sg-col-12-of-16 sg-col-24-of-28"}):
        product_asin.append(i['data-asin'])
    return product_asin
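As a quick sanity check of what asin() extracts, the data-asin attribute lookup can be mimicked on a canned HTML snippet without hitting Amazon; here a stdlib regex stands in for the BeautifulSoup call, and the markup below is made up for illustration:

```python
import re

# Made-up stand-in for a search-results page; the real asin() parses
# Amazon's live markup with BeautifulSoup instead of a regex.
html = '''
<div class="s-result-item" data-asin="B08XYZ1234">Phone A</div>
<div class="s-result-item" data-asin="B07ABC5678">Phone B</div>
'''

# Pull every data-asin attribute value out of the snippet.
asins = re.findall(r'data-asin="([^"]+)"', html)
print(asins)  # -> ['B08XYZ1234', 'B07ABC5678']
```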
Getting product details
Now the next step is to create a URL for each product, go to that URL, and scrape all the necessary details we need from that page. For this, we use Selenium to automate the process of extracting details. For amazon.in, the URL for each product can be broken down as the base product URL + the ASIN.
We then created a function to go to each of the URLs made using the ASIN numbers and get the reviews, ratings, and names of each of the products. Then we store these values in a CSV file using the csv module in Python.
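The Selenium loop itself depends on Amazon's page structure, but the CSV-saving step described above can be sketched roughly as follows (the function name, file name, and column layout are my assumptions, not the article's actual code):

```python
import csv

def save_reviews(rows, path="reviews.csv"):
    """Write scraped (name, rating, review) rows to a CSV file with a header."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["name", "rating", "review"])  # header row
        writer.writerows(rows)                         # one row per review

# Dummy rows standing in for the values scraped from each product page:
save_reviews([("Phone A", 5, "Great battery"), ("Phone B", 1, "Don't buy")])
```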
Pre-processing and Exploratory Data Analysis.
We load the saved CSV file using the pandas library and do some EDA, such as the distribution of ratings, the word counts of reviews, and which words are most dominant in positive and negative reviews. We then preprocess the data, for example by cleaning the reviews and titles.
Distribution of positive and negative scores in the dataset.
The above graph shows the distribution of the number of words in positive and negative reviews. The word-count frequencies are higher for positive reviews than for negative ones, and negative reviews are generally shorter than positive reviews.
Positive Reviews
We can’t make much out of this, perhaps because of the small dataset, but we can notice that the word “good” is one of the dominant words in positive reviews.
Negative Review
In the above word cloud, “don’t”, “buy”, and “phone” are the dominant words.
Word Count
For negative reviews, the word-count distribution looks roughly normal, but for positive reviews there is no clear pattern.
Textblob for fine-grained sentiment analysis:
TextBlob is a Python library for Natural Language Processing (NLP). TextBlob actively uses the Natural Language Toolkit (NLTK) to accomplish its tasks. NLTK is a library that gives easy access to many lexical resources and allows users to work with categorization, classification, and many other tasks. TextBlob is a simple library that supports complex analysis of, and operations on, textual data.
We will create a function that returns the polarity score of the sentiment, and then use this function to predict a sentiment score from 1–5.
from textblob import TextBlob

def textblob_score(sentence):
    return TextBlob(sentence).sentiment.polarity
Pass each review to the above function, store the returned score, and save it to the dataframe.
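The article stops at the raw polarity, so the 1–5 mapping below is an assumption: it buckets TextBlob's [-1, 1] polarity into five equal-width bins. TextBlob itself is not needed here; the function just takes the polarity returned by textblob_score.

```python
def polarity_to_score(polarity):
    # Map a polarity in [-1, 1] to a 1-5 rating using equal-width bins.
    # The bin edges are chosen arbitrarily for illustration.
    bins = [-0.6, -0.2, 0.2, 0.6]
    score = 1
    for edge in bins:
        if polarity > edge:
            score += 1
    return score
```

For example, a strongly negative review (polarity near -1) maps to 1 and a strongly positive one (near +1) maps to 5.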
| https://www.analyticsvidhya.com/blog/2020/11/fine-grained-sentiment-analysis-of-smartphone-review/ | CC-MAIN-2021-17 | refinedweb | 1,100 | 52.39 |
we could use baixar dragon vpn a static IP address here, and it saves us editing this file each time we want to use the connection. But this is useful because many users IPs are prone to change,
Baixar dragon vpn
"Guides FAQ" section. Configure the bot as baixar dragon vpn usual and enjoy! CHANGELOG Initial release! IMPORTANT! (Current version: 1.12b)) HOW TO USE THE BOT. You can find anything you need on the mBot forum,
add vpnv6 uni! Route-policy RP_IPV4_BGP_LU_OUT if community matches-any (ios-regex _.:1 then pass endif end-policy!) route-policy RP_PASSALL pass end-policy! Vrf TOR2 mpls activate interface GigabitEthernet.11! Router static vrf TOR2 address-family ipv4 unicast vpn details free /32 GigabitEthernet.11!!! Router bgp 65000 bgp router-id bgp log neigh chan det add vpnv4 uni! :
This value is used in level of service and capacity analysis. The equivalency is dependent upon size, weight, and operating characteristics of the large vehicle, and the design speed and gradient of the highway. Anchor: #i1010024 passenger trip A passenger trip is the number of.
India: Baixar dragon vpn!
11) However, congress passed the last update Monday Holiday Law, how to openweb vpn for George Washingtons birthday was originally celebrated on February baixar dragon vpn 22 for. In 1968, on more than a century (though records indicate he was born on Feb.)
visit the. Host Name User baixar dragon vpn Agent Your Host Name: how to download via proxy server m Your User Agent: Mozilla/4.0 MSIE 4.5. For even more information, more Info About You page.
.
AirVPN up against NordVPN. In this contest, Ill be looking at the 10 most critical categories to pay attention to when deciding on a. VPN service. Ill explain why each category is critical, reveal how well both providers performed and then declare a winner for.
dedicated connection from your infrastructure into AWS. AWS baixar dragon vpn Direct Connect bypasses the public Internet and establishes a secure, aWS Direct Connect.
samsung Smart TV DNS addresses of this baixar dragon vpn tutorial. Go step-by-step through following instructions: Part I. Smart DNS manually, change your. Validate Your IP Address. If you have already validated your IP address go straight to the Part II.check point VPN setup in windows 10 Checkpoint VPN connection fails with build 10049 Please baixar dragon vpn remember to mark the replies as answers if they help,although the Samsung Smart TV is unable to directly connect to a Virtual Private Network. As a result, vPN services can baixar dragon vpn also be used to unlock Netflix,
pSV. Read more baixar dragon vpn NETGEAR.. XR500,this is not hard, this results in problems for companies that only know how to vpn jailbreak ios 9 run Microsoft Windows based systems, as suddenly they are going to need to be able to run a Unix or Linux system for a new application.there are also many aggregators, some are offered by companies to promote other paid Internet services. Some even charge a subscription baixar dragon vpn fee to provide easy access to the services listed. Choosing a proxy service Many proxy services are free. Few free services, most are ad supported in one way or another. Providing constantly updated lists of free proxy servers. Such as Proxy 4 Free,
Cisco vpn tunnel troubleshoot:
aDSL bonding technology allowing multiple ADSL lines to be bonded to create larger internet baixar dragon vpn connections. With the use of multiple Internet connections you can create a single virtual connection. INS offers advanced.pIA (Private Internet Access)), baixar dragon vpn some are better than others. There are many different VPN providers. They are both competitively priced with good speeds. PIA is slightly cheaper but has less gateways. The two which we would recommend are. And IP Vanish.
logging in to your router is very simple and easy and you will see that in a baixar dragon vpn minute. Public IP address is being used. Believe us, well, for a public access a so called. How to use? Although it already looks complicated,so I can t actually fiddle with the knobs, iPSec over ADSL Need expert opinion. Odd situation as we have outsourced IT infrastructure baixar dragon vpn to a local IT firm,( ) . , .
so don't worry about that if you see it the next time you open the window) Press vpn free ios 10 Generate Shared Secret and then Save Configuration. (This always defaults to Feitian Serial for some reason,)
02:27 PM #1 My brother has set up a VPN connection with my iPad 2 few weeks ago. As this setup is no longer needed, what should I do baixar dragon vpn for that? I want to remove the corresponding configuration from my iPad.it is more in depth and baixar dragon vpn connects at start up with no trouble at all.proxy IP:Port Response Time. Here we provide free HTTP proxy lists full of IP addresses that you can freely download and use. If you want more than HTTP proxies, you can buy proxy list for a very reliable price baixar dragon vpn of 6.55 per month. Proxy IP List - Download Proxy List - USA Proxy List 3128. A paid VPN service with dedicated new IPs for each of your connections and the highest anonymous,
openElec v7 onwards and LibreElec v5 onwards already include OpenVPN. This can be found in the Unofficial OpenElec repository ikev2 vpn server setup which baixar dragon vpn sits in the repository category of the official OpenElec repository. If you have previous versions then you will need to install OpenVPN. | http://babyonboard.in/tipps/baixar-dragon-vpn.html | CC-MAIN-2019-39 | refinedweb | 940 | 66.54 |
how it would function. Later features would also include templates, exceptions, namespaces, new casts, and a Boolean type. For users' convenience, the language has received many important updates and has also had an influence on the creation of Java.
Deriving much of its language from C and C++, Java was released in 1995 by James Gosling at Sun Microsystems. Built as a general-purpose, class-based, and object-oriented language, Java is now an invisible force behind numerous applications and devices we use every day. Java was released with the intent of allowing application developers to run code on one platform after writing it for another, without recompiling it. It accomplishes this by compiling code to bytecode, which runs on any Java Virtual Machine regardless of the underlying computer architecture. In addition, with the Internet starting to take off in the 1990s, Java jumped right in, announcing in 1995 that the Netscape Navigator would incorporate Java technology. Using such tech, developers could now create programs within a Web browser and access available Web services. Today, 1.1 billion desktops run Java, 3 billion mobile phones run Java, and there are more than 930 million Java Runtime Environment downloads each year.
This notebook builds a machine learning model that can be used to predict professional soccer games. The notebook was created for the "Predicting the Future with the Google Cloud Platform" talk at Google I/O 2014 by Jordan Tigani and Felipe Hoffa. A link to the presentation is here:
Once the machine learning model is built, we use it to predict outcomes in the World Cup. If you are seeing this after the world cup is over, you can use it to predict hypothetical matchups (how would the 2010 World Cup winners do against the current champions?). You can also see how various different strategies would affect prediction outcomes. Maybe you'd like to add player salary data and see how that affects predictions (likely it will help a lot). Maybe you'd like to try Poisson Regression instead of Logistic Regression. Or maybe you'd like to try data coercion techniques like whitening or PCA.
The model uses Logistic Regression, built from touch-by-touch data about three different soccer leagues (English Premier League, Spanish La Liga, and American Major League Soccer) over multiple seasons. Because the data is licensed, only the aggregated statistics about those games are available. (If you have ideas of other statistics you'd like to see, create a new issue in the GitHub repo and we'll see what we can do.) The match_stats.py file shows the raw queries that were used to generate the stats.
There are four python files that are used by this notebook. They must be in the path. These are:
Since we're providing this notebook as part of a Docker image that can be run on Google Compute Engine, we'll override the authorization used in the Pandas BigQuery connector to use GCE auth. This will mean that you don't have to do any authorization on your own. You must, however, have the BigQuery API enabled in your Google Cloud Project (). Because the data sizes (after aggregation) are quite small, you may not need to enable billing.
from oauth2client.gce import AppAssertionCredentials
from bigquery_client import BigqueryClient
from pandas.io import gbq

def GetMetadata(path):
    import urllib2
    BASE_PATH = ''
    request = urllib2.Request(BASE_PATH + path, headers={'Metadata-Flavor': 'Google'})
    return urllib2.urlopen(request).read()

credentials = AppAssertionCredentials(scope='')
client = BigqueryClient(credentials=credentials,
                        api='',
                        api_version='v2',
                        project_id=GetMetadata('project/project-id'))
gbq._authenticate = lambda: client
from pandas.io import gbq

# Import the four python modules that we use.
import match_stats
import features
import world_cup
import power

query = "SELECT * FROM (%(summary_query)s) LIMIT 1" % {
    'summary_query': match_stats.team_game_summary_query()}
gbq.read_gbq(query)
Waiting on bqjob_r337c26ad9dfd06bd_00000147233b4372_1 ... (0s) Current status: DONE
1 rows × 20 columns
This will return a pandas dataframe that contains the features that will be used to build a model.
The features query will read from the game summary table that has prepared per-game statistics that will be used to predict outcomes. The data has been aggregated from touch-by-touch data from Opta. However, since that data is not public, we use these prepared statistics instead of the raw data.
In order to predict a game, we look at the previous N games of history for each team, where N is defined here as history_size.
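The notebook builds these rollups in BigQuery, but the windowing idea can be sketched in plain Python (a toy stand-in, not the notebook's actual query): keep each team's last history_size games in a fixed-size window and average it, excluding the game being predicted so it never leaks into its own features.

```python
from collections import defaultdict, deque

def rolling_history_features(games, history_size):
    # games: list of dicts with "teamid" and "goals", in chronological order.
    # For each game, average the team's previous `history_size` goals values;
    # the current game is excluded from its own feature window.
    recent = defaultdict(lambda: deque(maxlen=history_size))
    out = []
    for g in games:
        window = recent[g["teamid"]]
        hist = sum(window) / len(window) if window else None
        out.append(dict(g, hist_goals=hist))
        window.append(g["goals"])
    return out
```

A team's first game has no history (None here); the real pipeline simply has no feature row to predict in that case.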
import features
reload(features)

# Sets the history size. This is how far back we will look before each game to aggregate statistics
# to predict the next game. For example, a history size of 5 will look at the previous 5 games played
# by a particular team in order to predict the next game.
history_size = 6

game_summaries = features.get_game_summaries()
data = features.get_features(history_size)
Waiting on bqjob_rdad0a47a2ca5106_0000014722ccfb80_2 ... (0s) Current status: DONE
Waiting on bqjob_r61ce926a2f57863e_0000014722cd178a_3 ... (0s) Current status: DONE
The features include rollups from the last N games. Most of them are averages that are computed per minute of game time. Per-minute stats are used in order to be able to normalize for games in the world cup that go into overtime.
The following columns are the features that will be used to build the prediction model:
The following columns are included as metadata about the match:
The following columns are target variables that we will be attempting to predict. These columns must be dropped before any prediction is done, but are useful when building a model. The models that we will build below will just try to predict outcome (points) but other models may choose to predict goals, which is why they are also included here.
# Partition the world cup data and the club data. We're only going to train our model using club data.
club_data = data[data['competitionid'] <> 4]
# Show the features for the latest game in competition id 4, which is the world cup.
data[data['competitionid'] == 4].iloc[0]
matchid                          731828
teamid                              366
op_teamid                           632
competitionid                         4
seasonid                           2013
is_home                               0
team_name                   Netherlands
op_team_name                  Argentina
timestamp    2014-07-09 21:00:00.000000
goals                                 0
op_goals                              0
points                                1
avg_points                     2.166667
avg_goals                             2
op_avg_goals                  0.8333333
pass_70                        0.412262
pass_80                       0.1391892
op_pass_70                    0.3897345
op_pass_80                     0.114534
expected_goals                 1.799292
op_expected_goals             0.7054955
passes                         3.518422
bad_passes                     1.014758
pass_ratio                    0.7588293
corners                      0.04906867
fouls                         0.1302936
cards                          2.666667
shots                         0.1469179
op_passes                      4.158118
op_bad_passes                  1.018166
op_corners                   0.04081354
op_fouls                      0.1938453
op_cards                            2.5
op_shots                      0.1107791
goals_op_ratio                     1.75
shots_op_ratio                 1.428914
pass_op_ratio                 0.9701803
Name: 0, dtype: object
Compute the crosstabs for goals scored vs outcomes. Scoring more than 5 goals means you're guaranteed to win, and scoring no goals means you lose about 75% of the time (sometimes you tie!).
import pandas as pd
pd.crosstab(
    club_data['goals'],
    club_data.replace(
        {'points': {
            0: 'lose',
            1: 'tie',
            3: 'win'}})['points'])
We're going to train a logistic regression model based on the club data only. This will use an external code file world_cup.py to build the model.
The output of this cell will be a logistic regression model and a test set that we can use to check how good we are at predicting outcomes. The cell will also print out the Rsquared value for the regression. This is a measure of how well the model fits the data (higher is better).
import world_cup
reload(world_cup)
import match_stats

pd.set_option('display.max_rows', 5000)
pd.set_option('display.max_columns', 500)
pd.set_option('display.width', 1000)

# Don't train on games that ended in a draw, since they have less signal.
train = club_data.loc[club_data['points'] <> 1]
# train = club_data
(model, test) = world_cup.train_model(
    train, match_stats.get_non_feature_columns())
print "\nRsquared: %0.03g" % model.prsquared
Rsquared: 0.164
The logistic regression model is built using regularization; this means that it penalizes complex models. It has the side effect of helping us with feature selection. Features that are not important will be dropped out of the model completely.
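The exact-zero coefficients come from the L1 penalty: in coordinate terms, the solver soft-thresholds each coefficient, so weak effects collapse to exactly zero rather than merely shrinking. Below is a toy version of that operator, an illustration only, not the regularized-logit solver the notebook actually calls.

```python
def soft_threshold(coef, alpha):
    # Proximal operator of the L1 penalty: shrink the coefficient toward
    # zero by alpha, clamping to exactly zero when it is weaker than alpha.
    if coef > alpha:
        return coef - alpha
    if coef < -alpha:
        return coef + alpha
    return 0.0
```

This is why the "Dropped features" list below contains exact zeros instead of tiny values.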
We can divide the features into three buckets:
def print_params(model, limit=None):
    params = model.params.copy()
    params.sort(ascending=False)
    del params['intercept']
    if not limit:
        limit = len(params)
    print("Positive features")
    params.sort(ascending=False)
    print np.exp(params[[param > 0.001 for param in params]]).sub(1)[:limit]
    print("\nDropped features")
    print params[[param == 0.0 for param in params]][:limit]
    print("\nNegative features")
    params.sort(ascending=True)
    print np.exp(params[[param < -0.001 for param in params]]).sub(1)[:limit]

print_params(model, 10)
Positive features
is_home                  0.712618
pass_70                  0.215699
opp_op_expected_goals    0.198712
opp_op_corners           0.180812
shots                    0.146956
opp_bad_passes           0.145576
op_passes                0.091629
expected_goals           0.079620
avg_points               0.075306
fouls                    0.047963
dtype: float64

Dropped features
op_avg_goals         0
goals_op_ratio       0
op_cards             0
op_bad_passes        0
op_shots             0
corners              0
cards                0
opp_pass_op_ratio    0
pass_ratio           0
passes               0
dtype: float64

Negative features
opp_pass_70          -0.177428
op_expected_goals    -0.165771
op_corners           -0.153125
opp_shots            -0.128127
bad_passes           -0.127077
opp_op_passes        -0.083938
opp_expected_goals   -0.073748
opp_avg_points       -0.070032
opp_fouls            -0.045768
opp_avg_goals        -0.020472
dtype: float64
This cell uses the test set (which was not used during the creation of the model) to predict outcomes. We can a few of the predictions to see how well we did. We'll show 5 each from two buckets: cases where we got it right, and cases where we got it wrong. We can see if these make sense. When we display these, the home team is always on the left.
For example, it might show that we predicted Manchester United playing at home beating Sunderland. This is completely reasonable and we'd expect that the outcome would be 3 points (a victory).
The columns of the output are:
reload(world_cup)
results = world_cup.predict_model(model, test,
    match_stats.get_non_feature_columns())

predictions = world_cup.extract_predictions(
    results.copy(), results['predicted'])

print 'Correct predictions:'
predictions[(predictions['predicted'] > 50) & (predictions['points'] == 3)][:5]
Correct predictions:
print '\nIncorrect predictions:'
predictions[(predictions['predicted'] > 50) & (predictions['points'] < 3)][:5]
Incorrect predictions:
Next, we want to quantify how good our predictions are. We can compute the lift ("how much better are we doing than random chance?"), compute the AUC (the area under the ROC curve), and plot the ROC curve. AUC is arguably the most interesting number: it ranges between 0.5 (your model is no better than dumb luck) and 1.0 (perfect prediction).
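For reference, AUC has a direct interpretation: the probability that a randomly chosen win receives a higher predicted score than a randomly chosen non-win. A small hand-rolled version of that definition follows (ties counted as half; this is an O(n²) illustration, not the routine the notebook's validate() uses):

```python
def auc(labels, scores):
    # Probability that a random positive example is scored above a random
    # negative one; ties count as half a win.
    pos = [s for y, s in zip(labels, scores) if y]
    neg = [s for y, s in zip(labels, scores) if not y]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.745, as below, means a randomly chosen win outranks a randomly chosen non-win about three times out of four.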
import pylab as pl

# Compute a baseline, which is the percentage of overall outcomes that are actually wins.
# (Remember in soccer we can have draws too.)
baseline = (sum([yval == 3 for yval in club_data['points']]) * 1.0 / len(club_data))
y = [yval == 3 for yval in test['points']]
world_cup.validate(3, y, results['predicted'], baseline, compute_auc=True)
pl.show()
(3) Lift: 1.45 Auc: 0.745
One thing that is missing, if you're predicting the next game based on the previous few games, is that some teams may have just played a really tough schedule, while other teams have played against much weaker competition.
We can solve for schedule difficulty by running another regression; this one computes a power ranking, similar to the FIFA/Coca-Cola power ranking for international soccer teams (there are power rankings for other sports, like college (American) football, that may be familiar).
Once we compute the power ranking (which creates a stack ranking of all of the teams), we can add that power ranking as a feature to our model, then rebuild it and re-validate it. The regression essentially automates the process of looking at relationships like "Well, team A beat team B and team B beat team C, so A is probably better than C".
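To see how a regression can recover that transitive ordering, here is a least-squares toy (a stand-in for the notebook's actual logistic-regression ranking): treat each game's score margin as the difference of two latent ratings and solve by Gauss-Seidel iteration, setting each team's rating to the average of (opponent rating + observed margin).

```python
def power_ratings(games, iterations=200):
    # games: list of (team_a, team_b, margin) with margin = goals_a - goals_b.
    # Gauss-Seidel on the least-squares system rating[a] - rating[b] ~= margin:
    # each team's rating becomes the average of (opponent rating + margin).
    teams = sorted({t for a, b, _ in games for t in (a, b)})
    rating = {t: 0.0 for t in teams}
    for _ in range(iterations):
        for t in teams:
            obs = [rating[b] + m for a, b, m in games if a == t]
            obs += [rating[a] - m for a, b, m in games if b == t]
            rating[t] = sum(obs) / len(obs)
    # Center around zero, since ratings are only defined up to a constant.
    mean = sum(rating.values()) / len(rating)
    return {t: r - mean for t, r in rating.items()}
```

Given only A-beats-B and B-beats-C results, the solver still places A above C even though they never played, which is exactly the schedule-strength signal we want.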
The output here will show the power ranking for various teams. This can be useful to spot check the ranking, since if we rank Wigan at 1.0 and Chelsea at 0.0, something is likely wrong.
Note that because there isn't a strict ordering to the data (if team A beats team B and team B beats team C, sometimes team C will then beat team A) we sometimes fail to assign ordering to all of the teams (especially where the data is sparse). For teams that we can't rank, we put them in the middle (0.5).
Additionally, because the rankings for international teams are noisy and sparse, we chunk the rankings into quartiles. So teams that have been ranked will show up as 0, .33, .66, or 1.0.
Once we add this to the model, the performance generally improves significantly.
import power
reload(power)
reload(world_cup)

def points_to_sgn(p):
    if p > 0.1:
        return 1.0
    elif p < -0.1:
        return -1.0
    else:
        return 0.0

power_cols = [
    ('points', points_to_sgn, 'points'),
]

power_data = power.add_power(club_data, game_summaries, power_cols)
power_train = power_data.loc[power_data['points'] <> 1]
# power_train = power_data

(power_model, power_test) = world_cup.train_model(
    power_train, match_stats.get_non_feature_columns())
print "\nRsquared: %0.03g, Power Coef %0.03g" % (
    power_model.prsquared, math.exp(power_model.params['power_points']))

power_results = world_cup.predict_model(power_model, power_test,
    match_stats.get_non_feature_columns())

power_y = [yval == 3 for yval in power_test['points']]
world_cup.validate(3, power_y, power_results['predicted'], baseline,
                   compute_auc=True, quiet=False)
pl.plot([0, 1], [0, 1], '--', color=(0.6, 0.6, 0.6), label='Luck')

# Add the old model to the graph
world_cup.validate('old', y, results['predicted'], baseline,
                   compute_auc=True, quiet=True)
pl.legend(loc="lower right")
pl.show()

print_params(power_model, 8)
New season 2014
New season 2013
QC check did not pass for 19 out of 20 parameters
Try increasing solver accuracy or number of iterations, decreasing alpha, or switch solvers
Could not trim params automatically due to failed QC check. Trimming using trim_mode == 'size' will still work.
New season 2013
New season 2012
QC check did not pass for 24 out of 24 parameters
Try increasing solver accuracy or number of iterations, decreasing alpha, or switch solvers
Could not trim params automatically due to failed QC check. Trimming using trim_mode == 'size' will still work.
New season 2012
New season 2011
QC check did not pass for 24 out of 24 parameters
Try increasing solver accuracy or number of iterations, decreasing alpha, or switch solvers
Could not trim params automatically due to failed QC check. Trimming using trim_mode == 'size' will still work.
[u'Blackburn Rovers: 0.000', u'Real Betis: 0.000', u'D.C. United: 0.000', u'Celta de Vigo: 0.004', u'Deportivo de La Coru\xf1a: 0.009', u'Wolverhampton Wanderers: 0.021', u'Reading: 0.022', u'Real Zaragoza: 0.026', u'Real Valladolid: 0.044', u'Granada CF: 0.062', u'Queens Park Rangers: 0.073', u'Mallorca: 0.089', u'Aston Villa: 0.092', u'Bolton Wanderers: 0.102', u'Osasuna: 0.109', u'Espanyol: 0.112', u'Wigan Athletic: 0.124', u'Sunderland: 0.130', u'Rayo Vallecano: 0.138', u'Almer\xeda: 0.145', u'Levante: 0.148', u'Elche: 0.154', u'Getafe: 0.170', u'Swansea City: 0.192', u'Southampton: 0.197', u'Norwich City: 0.206', u'Toronto FC: 0.211', u'Chivas USA: 0.218', u'West Ham United: 0.220', u'West Bromwich Albion: 0.224', u'Villarreal: 0.231', u'Stoke City: 0.255', u'Fulham: 0.274', u'Valencia: 0.296', u'Valencia CF: 0.296', u'M\xe1laga: 0.305', u'Newcastle United: 0.342', u'Sevilla: 0.365', u'Columbus Crew: 0.366', u'Athletic Club: 0.386', u'Liverpool: 0.397', u'Everton: 0.417', u'Philadelphia Union: 0.466', u'Montreal Impact: 0.470', u'Chelsea: 0.530', u'Real Sociedad: 0.535', u'Tottenham Hotspur: 0.551', u'Arsenal: 0.592', u'Houston Dynamo: 0.593', u'FC Dallas: 0.612', u'Chicago Fire: 0.612', u'Vancouver Whitecaps: 0.615', u'San Jose Earthquakes: 0.632', u'New England Revolution: 0.634', u'Atl\xe9tico de Madrid: 0.672', u'Colorado Rapids: 0.743', u'Barcelona: 0.759', u'Seattle Sounders FC: 0.781', u'New York Red Bulls: 0.814', u'Sporting Kansas City: 0.854', u'LA Galaxy: 0.882', u'Real Salt Lake: 0.922', u'Manchester City: 0.928', u'Real Madrid: 1.000', u'Manchester United: 1.000', u'Portland Timbers: 1.000']

Rsquared: 0.238, Power Coef 2.22

(3) Lift: 1.48
Auc: 0.762
Base: 0.375
Acc: 0.682
P(1|t): 0.742
P(0|f): 0.646
Fp/Fn/Tp/Tn p/n/c: 100/228/288/416 516/516/1032
(old) Lift: 1.45
Auc: 0.745
Positive features
power_points      1.222950
is_home           0.692184
pass_70           0.178619
op_passes         0.140863
fouls             0.138612
opp_op_corners    0.122122
opp_avg_points    0.055252
opp_op_fouls      0.039738
dtype: float64

Dropped features
avg_goals            0
op_bad_passes        0
corners              0
op_shots             0
op_cards             0
opp_pass_op_ratio    0
pass_ratio           0
passes               0
dtype: float64

Negative features
opp_power_points   -0.550147
opp_pass_70        -0.151549
opp_op_passes      -0.123470
opp_fouls          -0.121738
op_corners         -0.108831
avg_points         -0.052359
op_fouls           -0.038220
bad_passes         -0.028956
dtype: float64
Now that we've got a model that we like, let's look at predicting the world cup. We can build the same statistics (features) for the world cup games that we did for the club games. In this case, however, we don't have the targets; that is, we don't know who won (for some of the previous games, we do know who won, but let's predict them all equally as if we didn't know).
features.get_wc_features() will return build features from the world cup games.
import world_cup
import features
reload(match_stats)
reload(features)
reload(world_cup)

wc_data = world_cup.prepare_data(features.get_wc_features(history_size))
wc_labeled = world_cup.prepare_data(features.get_features(history_size))
wc_labeled = wc_labeled[wc_labeled['competitionid'] == 4]
wc_power_train = game_summaries[game_summaries['competitionid'] == 4].copy()
Waiting on bqjob_r771c340a8483b8a6_0000014722cd55df_4 ... (0s) Current status: DONE
Waiting on bqjob_r5df9ca3d043b572b_0000014722cd5dbe_5 ... (0s) Current status: DONE
Once we have the model and the features, we can start predicting.
There are a couple of differences between the world cup and club data. For one, while home team advantage is important in club games, who is really at home? Is it only Brazil? What about other south american teams? Some models give the 'is home' status to only Brazil, others give partial status to other teams from the same continent, since historical data shows that teams from the same continent tend to outperform.
We use a slightly modified model that is, however, somewhat subjective. We assign a value for is_home between 0.0 and 1.0 depending on the fan support (both numbers and enthusiasm) that a team enjoys. This is a result of noticing, in the early rounds, that the teams that had the more enthusiastic supporters did better. For example, Chile's fans were deafening in support of their team, but Spain's fans barely showed up (Chile upset Spain 2-0). There were a number of other cases like this; many involving South American sides, but many involving other teams that had sent a lot of supporters (Mexico, for example). Some teams, like the USA, had a lot of fans, but they were more reserved... they got a lower score. This factor was set based on first-hand reports from the group games.
import pandas as pd
wc_home = pd.read_csv('wc_home.csv')

def add_home_override(df, home_map):
    for ii in xrange(len(df)):
        team = df.iloc[ii]['teamid']
        if team in home_map:
            df['is_home'].iloc[ii] = home_map[team]
        else:
            # If we don't know, assume not at home.
            df['is_home'].iloc[ii] = 0.0

home_override = {}
for ii in xrange(len(wc_home)):
    row = wc_home.iloc[ii]
    home_override[row['teamid']] = row['is_home']

# Add home team overrides.
add_home_override(wc_data, home_override)
The lattice of teams playing each other in the world cup is pretty sparse. Many teams haven't played each other for decades. Many European teams rarely play South American ones, and even more rarely play Asian ones. We can use the same technique as we did for the club games, but we have to be prepared for failure.
We'll output the power rankings from the previous games. We should eyeball them to make sure they make sense.
# When training power data, since the games span multiple competitions, just set is_home to 0.5.
# Otherwise when we looked at games from the 2010 world cup, we'd think Brazil was still at
# home instead of South Africa.
wc_power_train['is_home'] = 0.5
wc_power_data = power.add_power(wc_data, wc_power_train, power_cols)

wc_results = world_cup.predict_model(power_model, wc_power_data,
    match_stats.get_non_feature_columns())
New season 2013
New season 2009
New season 6
QC check did not pass for 45 out of 50 parameters
Try increasing solver accuracy or number of iterations, decreasing alpha, or switch solvers
Could not trim params automatically due to failed QC check. Trimming using trim_mode == 'size' will still work.
[u'Australia: 0.000', u'USA: 0.017', u'Nigeria: 0.204', u"C\xf4te d'Ivoire: 0.244", u'Costa Rica: 0.254', u'Algeria: 0.267', u'Paraguay: 0.277', u'Greece: 0.284', u'Switzerland: 0.291', u'Ecuador: 0.342', u'Uruguay: 0.367', u'Japan: 0.406', u'Mexico: 0.409', u'Chile: 0.413', u'England: 0.460', u'Portugal: 0.487', u'Ghana: 0.519', u'France: 0.648', u'Spain: 0.736', u'Argentina: 0.793', u'Italy: 0.798', u'Brazil: 0.898', u'Netherlands: 0.918', u'Germany: 1.000']
Now's the moment we've been waiting for. Let's predict some world cup games. Let's start with predicting the ones that have already happened.
We will output 4 columns:
But wait! These predictions are different from the ones you published!
There are three reasons why the prediction numbers might be different from the numbers you may have seen as published predictions:
pd.set_option('display.max_rows', 5000)
pd.set_option('display.max_columns', 500)
pd.set_option('display.width', 1000)

wc_with_points = wc_power_data.copy()
wc_with_points.index = pd.Index(
    zip(wc_with_points['matchid'], wc_with_points['teamid']))
wc_labeled.index = pd.Index(
    zip(wc_labeled['matchid'], wc_labeled['teamid']))
wc_with_points['points'] = wc_labeled['points']

wc_pred = world_cup.extract_predictions(wc_with_points, wc_results['predicted'])

# Reverse our predictions to show the most recent first.
wc_pred.reindex(index=wc_pred.index[::-1])

# Show our predictions for the games that have already happened.
wc_pred[wc_pred['points'] >= 0.0]
Let's look at the stats for the teams in the final. We can compare them by eyeball to see which one we think will win:
final = wc_power_data[wc_power_data['matchid'] == '731830']
final
Now let's look at the games that made up the decisions:
op = game_summaries

def countryStats(d, name):
    pred = d['team_name'] == name
    return d[pred]

fr = countryStats(op, 'France')
ge = countryStats(op, 'Germany')
ar = countryStats(op, 'Argentina')
br = countryStats(op, 'Brazil')
ne = countryStats(op, 'Netherlands')

ge[:6]
OK now that we've looked at the data every which way possible, let's predict the final results:
wc_pred[~(wc_pred['points'] >= 0)][[
    'team_name', 'op_team_name', 'predicted']]
On Mon, 2005-09-05 at 18:50 -0500, Eugene Lazutkin wrote:
> Yep. It depends on size. It seems that web server sends the first part of
> page up to some limit and stalls for a while. After that it may send the
> rest or append "internal_error.html" nonsense.
>
> Now I have to figure out who is the culprit: Apache, FastCGI server, Django
> (e.g., FastCGI portion of it), or some weird interaction of all of them. My
> bet is it is not Django, but who knows...
I have had a similar problem with django under fcgi (using the fcgi-wsgi
connector from flup). On longer pages, most of the page is sent, then
it stalls until mod_fcgi times out listening to the fcgi-server, then
sends the rest of the page. I have found a workaround for this, which
is to use flup's gzip middleware like so:
django-fcgi.py:
--------------------------------------------------
#!/usr/bin/python
from flup.server.fcgi_fork import WSGIServer
#from flup.server.fcgi import WSGIServer
from flup.middleware.gzip import GzipMiddleware
from django.core.handlers.wsgi import WSGIHandler
handler = WSGIHandler()
handler = GzipMiddleware(handler)
WSGIServer(handler).run()
This prevents the stalls, at least for browsers that support gzip
encoding!
I am not, at this time, sure where the problem lies; whether it is in
django's WSGI interface, in flup's fcgi-wsgi adapter, or in Apache's
mod_fcgi.
--
+----------------------------------------------------------------+
| Jason F. McBrayer jmcbray <at> carcosa.net |
| "If you wish to make Pythocles wealthy, don't give him more |
| money; rather, reduce his desires." -- Epicurus | | http://article.gmane.org/gmane.comp.python.django.user/515 | crawl-002 | refinedweb | 255 | 67.96 |
NetBeans Lookup Example
By Geertjan-Oracle on Apr 25, 2008
Lookup example, producing a new message automatically every 2 seconds:
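(The code embed for this example did not survive extraction. In its place, here is a small self-contained stand-in that mimics what the example does — a content holder pushing a new message to a listener every 2 seconds. The real example would use org.openide.util.lookup.InstanceContent, AbstractLookup and a Lookup.Result with a LookupListener; the class and method names below are otherwise made up.)

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Stand-in for the Lookup pattern: content changes are pushed to listeners,
// the way a LookupListener sees changes in a Lookup.Result.
public class MessageProducer {

    static class Content {
        private final List<Consumer<String>> listeners = new ArrayList<>();

        void addListener(Consumer<String> l) {   // like Result.addLookupListener
            listeners.add(l);
        }

        void set(String message) {               // like InstanceContent.set(...)
            for (Consumer<String> l : listeners) {
                l.accept(message);
            }
        }
    }

    // Publishes `count` messages, one every `intervalMs` milliseconds,
    // and returns what the listener observed.
    public static List<String> produceMessages(int count, long intervalMs)
            throws InterruptedException {
        Content content = new Content();
        List<String> received = new ArrayList<>();
        content.addListener(received::add);      // like resultChanged(...)
        for (int i = 0; i < count; i++) {
            content.set("Message " + i);
            Thread.sleep(intervalMs);
        }
        return received;
    }

    public static void main(String[] args) throws InterruptedException {
        // The blog example used a 2-second interval.
        for (String s : produceMessages(3, 2000)) {
            System.out.println(s);
        }
    }
}
```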
Hi Geertjan,
at the moment I am really interested in things like service locators, dependency injection (DI) and that kind of stuff, too. I only knew Spring, but some days ago I discovered this container:
(and Guice ...)
I like PicoContainer, because it is quite small (<200 KB), there are a lot of interesting features and the authors are quite helpful. If you use nanocontainer.org on top of it you can e.g. configure the container via XML, beanshell, Groovy ...
And I have learned that a singleton could be an anti-pattern:
They (from picocontainer) wouldn't use "Lookup Selection.getSelection()" they would use DI:
private HelloTopComponent(Lookup lookup){
...
}
But at the moment I don't know if the PicoContainer API would provide something similar to the result.addLookupListener method. Maybe the start/stop methods there could help; I'll figure it out if you are interested.
If you would like to see my first (stupid) swing app you can visit:
To build it, just copy dist/lib/* to lib and use NetBeans ;-)
Regards,
Peter.
PS: Keep on good posting ;-)
Posted by Peter on April 25, 2008 at 06:54 AM PDT #
Geertjan,
thanks for the nice example. I found one very weird thing.
When I create MyLookup as a public class and try to invoke MyLookup.run() directly (from the EDT), it does not send information into the lookup.
When I use new Thread(new Runnable() { ... }).start() to fire MyLookup.run(), then it works.
I don't understand why. Can you clarify it to us?
Thanks
Tom
Posted by Tom on June 09, 2008 at 05:39 PM PDT #
Hi, I am a newbie to NetBeans and working to create an application. I have created a login screen with NetBeans, extending ModuleInstall. I want to stop/abort loading the main window (other modules) if login fails. Could you please provide some ideas on that?
Posted by sanjay on May 27, 2013 at 03:15 AM PDT #
Hi Sanjay, you need to do it as described here:
Posted by Geertjan on May 27, 2013 at 03:59 AM PDT # | https://blogs.oracle.com/geertjan/entry/lookup_example | CC-MAIN-2016-07 | refinedweb | 372 | 74.69 |
Programs of any complexity make use of functions. A function is a collection of declarations and statements that carries out a specific action and/or returns a value. Functions are either defined by the user or have been previously defined and made available to the user. Previously defined functions that have related functionality or are commonly used (e.g., math or graphics routines) are stored in object code format in library (archive) files. Object code format is a special file format that is generated as an intermediate step when an executable program is produced. Like executable files, object code files are also not displayed to the screen or printed. Functions stored in library files are often called library functions or runtime library routines.
The standard location for library files in most UNIX systems is the directory /usr/lib. Ancillary library files may also be found in the /usr/local/lib directory. Two basic types of libraries are used in compilations: static libraries and shared object libraries. Static libraries are collections of object files that are used during the linking phase of a program. Referenced code is extracted from the library and incorporated in the executable image. Shared libraries contain relocatable objects that can be shared by more than one application. During compilation the object code from the library is not incorporated in the executable code; only a reference to the object is made. When the executable that uses a shared object library is loaded into memory, the appropriate shared object library is loaded and attached to the image. If the shared object library is already in memory, this copy is referenced. As might be expected, shared object libraries are more complex than static libraries. In Linux, by default, shared object libraries are used if present; otherwise static libraries are used. Most, but not all, compiler installations include both types of libraries. In the examples below we will focus on the more ubiquitous static libraries.
By convention, the three-letter prefix for a library file is lib and the file extension for a static library is .a. The UNIX archive utility ar, which creates, modifies, and extracts members from an archive, can be used to examine library file contents. [1] For example, the command
[1] The archive utility is one of the many exceptions to the rule that all command-line options for system utilities begin with a hyphen (-).
linux$ ar t /usr/lib/libc.a | pr -4 -t
will pipe the table of contents (indicated by the t command-line option) of the standard C library file (libc.a) to the pr utility, which will display the output to the screen in a four-column format. The object code in this library is combined by default with all C programs when they are compiled. Therefore, in a C program when a reference is made to printf, the object code for the printf function is obtained from the /usr/lib/libc.a library file. Similarly, the command
linux$ ar t /usr/lib/libstdc++-3-libc6.2-2-2.10.0.a | pr -4 -t
will display the table of contents of the C++ library file used by the gcc compiler. Remember that the versions (and thus the names) of library files can change when the compiler is updated.
Additional information can be extracted from library files using the nm utility. For example, the command
linux$ nm -C /usr/lib/libstdc++-3-libc6.2-2-2.10.0.a | grep 'bool operator=='
will find all the C++ equality operators in the referenced library file. The -C command-line option for nm demangles the compiler-generated C++ function names and makes them a bit more readable.
The ar command can also be used to create a library. For example, say we have two functions. The first function, called ascii, is stored in a file called ascii.cxx. This function generates and returns an ASCII string when passed the start and end points for the string. The second function, called change_case (stored in the file change_case.cxx), accepts a string and inverts the case of all alphabetic characters in the string. The listing for the two programs is shown in Figure 1.1.
Figure 1.1 Source code for two functions to be stored in archive libmy_demo.a.
File : ascii.cxx

char *
ascii( int start, int finish ){
  char *b = new char[finish-start+2];      // room for the characters + '\0'
  for (int i=start; i <= finish; ++i)
    b[i-start] = char( i );
  b[finish-start+1] = '\0';
  return b;
}
____________________________________________________________________________________
File : change_case.cxx

#include <ctype.h>

char *
change_case( char *s ){
  char *t = &s[0];
  while ( *t ){
    if ( isalpha(*t) )
      *t += islower(*t) ? -32 : 32;
    ++t;
  }
  return s;
}
Each file is compiled into object code, the archive libmy_demo.a generated, and the object code added to the archive with the following command sequence:
linux$ g++ -c change_case.cxx linux$ g++ -c ascii.cxx linux$ ar cr libmy_demo.a ascii.o change_case.o
The prototypes for the functions in the my_demo library are placed in a corresponding header file called my_demo.h. Preprocessor directives are used in this file to prevent it from being inadvertently included more than once. A small C++ program, main.cxx, is created to exercise the functions. With the "" notation for the include statement in main.cxx, the compiler will look for the my_demo.h header file in the current directory. The contents of the my_demo.h header file and the main.cxx program are shown in Figure 1.2.
Figure 1.2 Header file and test program for libmy_demo.a.
File : my_demo.h

/* Prototypes for my_demo library functions */

#ifndef MY_DEMO_H
#define MY_DEMO_H

char * ascii( int, int );
char * change_case( char * );

#endif
____________________________________________________________________________________
File : main.cxx

#include <iostream>
#include "my_demo.h"
using namespace std;

int
main( ) {
  int  start, stop;
  char b[20];                              // temp string buffer

  cout << "Enter start and stop value for string: ";
  cin  >> start >> stop;
  cout << "Created string  : " << ascii(start, stop) << endl;

  cin.ignore(80,'\n');
  cout << "Enter a string  : ";
  cin.getline(b,20);
  cout << "Converted string: " << change_case( b ) << endl;
  return 0;
}
The compilation shown below uses the -L command-line option to indicate that when the compiler searches for library files it should also include the current directory. The name of the library is passed using the -l command-line option. As source files are processed sequentially by the compiler, it is usually best to put linker options at the end of the command sequence to avoid the generation of any undefined reference errors.
linux$ g++ -o main main.cxx -L. -lmy_demo
A sample run of the main.cxx program is shown in Figure 1.3.
Figure 1.3 Sample run testing the archived functions.
linux$ main                                         <-- 1
Enter start and stop value for string: 56 68
Created string  : 89:;<=>?@ABCD
Enter a string  : This is a TEST!
Converted string: tHIS IS A test!
(1) If your distribution of Linux does not include "." as part of its login path, you will need to invoke the program as ./main.
If your system supports the apropos command, you may issue the following command to obtain a single-line synopsis of the entire set of predefined library function calls described in the manual pages on your system:
linux$ apropos '(3'
As shown, this command will search a set of system database files containing a brief description of system commands, returning those that contain the argument passed. In this case, the '(3' indicates all commands in Section 3 of the manual should be displayed. Section 3 (with its several subsections) contains the subroutine and library function manual pages. The single quotes are used in the command sequence so the shell will pass the parenthesis on to the apropos command. Without this, the shell would attempt to interpret the parenthesis, which would then produce a syntax error.
Another handy utility that searches the same database used by the apropos command is the whatis command. The command
linux$ whatis exit
would produce a single-line listing of all manual entries for exit. If the database for these commands is not present, the command /usr/sbin/makewhatis, providing you have the proper access privileges, will generate it.
A more expansive overview of the library functions may be obtained by viewing the intro manual page entry for Section 3. On most systems the command
linux$ man 3 intro
will return the contents of the intro manual page. In this invocation the 3 is used to notify man of the appropriate section. For some versions of the man command, the option -s3 would be needed to indicate Section 3 of the manual. Additional manual page information addressing manual page organization and use can be found in Appendix A, "Using Linux Manual Pages."
In addition to manual pages, most GNU/Linux systems come with a handy utility program called info. This utility displays documentation written in Info format as well as standard manual page documents. The information displayed is text-based and menu-driven. Info documents can support limited hypertext-like links that will bring the viewer to a related document when selected. When present, Info documentation is sometimes more complete than the related manual page. A few of the more interesting Info documents are listed in Table 1.1.
Table 1.1. Partial Listing of Info Documents.
The info utility should be invoked on the command line and passed the item (a general topic or a specific command, system call, library function, etc.) to be looked up. If an Info document exists, it is displayed by the info utility. If no Info document exists but there is a manual page for the item, then it is displayed (at the top of the Info display will be the string *manpages* to notify you of the source of the information). If neither an Info document nor a manual page can be found, then info places the user in the info utility at the topmost level. When in the info utility, use the letter q to quit or a ? to have info list the commands it knows. Entering the letter h will direct info to display a primer on how to use the utility.
03 February 2009 18:22 [Source: ICIS news]
[Adds response from Rohm and Haas in paragraphs 9 to 14, updates share price data]
TORONTO (ICIS news)--Forcing through Dow Chemical's merger with Rohm and Haas would benefit only one party - the current shareholders of Rohm and Haas - but it could hurt the companies' employees, Dow said in a court filing on Tuesday.
Dow, which responded to Rohm and Haas' lawsuit from last week, reiterated that under current financial market conditions and the prevailing uncertainty, the merger would threaten the viability of the new entity and was contrary to the interests of the 55,000 Rohm and Haas and Dow employees, the associated communities, suppliers and customers.
“The interests of all who make up Dow and Rohm and Haas, and not just the narrow interests of some, must be balanced carefully before the drastic remedy of specific performance is ordered,” it said.
In a point-by-point response reviewing the history of the merger deal, Dow said that until late December - and even in the ongoing financial market turmoil - it stood by its deal to acquire Rohm and Haas for $18.5bn (€14.4bn), including debt, or $78/share.
However,
"Over a period of mere days and weeks, a confluence of dramatic and unforeseeable shocks - to Dow, to the chemical industry as a whole, and to financial markets - upset all reasonable expectations and cast a dark shadow of uncertainty over the viability of the Rohm and Haas acquisition," Dow said.
However, Rohm and Haas insisted on Tuesday that Dow close the transaction.
"The difficult conditions in the chemical industry and financial markets commenced before Dow agreed to acquire Rohm and Haas and were widely expected to worsen at the time we entered into the transaction,” Rohm said in a prepared press statement.
Dow, rather than Rohm and Haas shareholders, had assumed those risks and should now honour its obligations and close the transaction, it said.
Rohm and Haas also made available a letter to Dow’s board of directors from Monday in which it urged Dow’s board to “take control of the situation”.
Dow, not Rohm and Haas, took the economic risks, and in particular the risk that the K-Dow deal might not close, it wrote.
“If Dow is in the terrible financial condition that your chairman suggests, we do not know how you could have paid the 30 January 2009 cash dividend of almost $400m,” Rohm said in the letter.
The letter outlined a number of actions Dow could take to close the merger, including the suspension of dividend payments, asset sales, and equity issues in private and public markets.
Dow’s court filing and Rohm and Haas’ letter to Dow’s board are on the companies’ respective websites.
Earlier on Tuesday, Dow reported a net loss of $1.55bn for the fourth quarter of 2008 as demand dropped away sharply and the company took restructuring and other charges, and it warned of more plant closures and job cuts in a weak 2009.
Dow Chemical’s shares were priced at $11.08, up 0.27%, in Tuesday afternoon trading after jumping 5.8% in early morning trading. Rohm and Haas’ shares were up 3.4% to $53.78.
($1 = €0.78) | http://www.icis.com/Articles/2009/02/03/9189842/forced-merger-would-only-help-rohm-shareholders-dow.html | CC-MAIN-2014-42 | refinedweb | 551 | 56.59 |
Scala Developer Journey into Rust - Part 7 : Type Classes

This is the seventh post in the series. In this post, I will be talking about type classes. You can find all the other posts in the series here.
Type Class
A type class is a programming pattern which associates behaviour with types. The type class design pattern allows the programmer to implement various behaviours without using inheritance. This pattern is very popular in Scala for implementing libraries such as serialization ones. You can learn more about type classes from this excellent talk.

Rust also has first-class support for type classes. Rust traits follow the type class pattern.

So both Scala and Rust have similar support for type classes. This post explores the type class implementation in Scala and Rust in more detail.
Implementing Type Class
In this section, we will see different steps in implementing type classes.
Defining Behaviour using Trait in Scala
First step of type class is to define the behaviour. We can do the same using trait in Scala.
trait Serializable[T] {
  def serialize(v: T): String
}
Here we have defined a behaviour called Serializable which defines how to convert a given type T to String.
Defining Behaviour using Trait in Rust
As we used trait in Scala, we can use trait in Rust to define the behavior.
pub trait Serializable<'a> {
    fn serialize(self: &Self) -> Cow<'a, str>;
}
We are using Copy on Write pointer to return a string from the method. You can read more about it in this blog.
Defining the Types in Scala
Once we have defined the behaviour, we need to define the types on which this behaviour will be implemented. We can use case classes for the same.
case class Person(name: String, age: Int)
case class Restaurant(name: String, brunch: Boolean)
Defining the Types in Rust
In Rust, we will use struct to define the types.
struct Person<'a> {
    name: &'a str,
    age: i32
}

struct Restaurant<'a> {
    name: &'a str,
    brunch: bool
}
Implementing Serializable Behaviour for Types in Scala
Once we have defined the behaviour and types, the next step is to implement serialization for Person and Restaurant.
implicit object Person extends Serializable[Person] {
  def serialize(v: Person): String =
    "Person(" + v.name + "," + v.age + ")"
}
We use an implicit object to attach the serialization behaviour to the type Person.
Implementing Serializable Behaviour for Types in Rust
In Rust, there is no concept of implicits. But in Rust, once a given trait is implemented for a type, importing the trait brings all of its behaviour into scope automatically. This makes implicits redundant. This feature is inspired by the type classes of Haskell.
impl<'a> Serializable<'a> for Person<'a> {
    fn serialize(self: &Self) -> Cow<'a, str> {
        Cow::Owned(self.name.to_owned() + " " + &self.age.to_string())
    }
}
Using Type Classes in Scala
Once we have implemented the behaviour, we can define a generic method which can use this.
def serializeMethod[T](value: T)(implicit serializer: Serializable[T]) = {
  serializer.serialize(value)
}
Using Type Classes in Rust
As in Scala, we can define a generic method for serialization in Rust also.
pub fn serialize_method<'a, T>(v: &T) -> Cow<'a, str>
where
    T: Serializable<'a>,
{
    T::serialize(v)
}
Extending Built-in Types
In the earlier example, we added serialization behaviour for user-defined types. But both Scala and Rust have the ability to add this behaviour to built-in types as well. In this section, we will see how to serialize the List type.
Implementing Serialization for List in Scala
A list of serializable objects can be serialized. The below code shows the same.
implicit def ListSerializable[T: Serializable] = new Serializable[List[T]] {
  def serialize(v: List[T]) =
    v.map(serializeMethod(_)).mkString("List(", ",", ")")
}
Implementing Serialization for List in Rust
impl<'a, T: Serializable<'a>> Serializable<'a> for Vec<T> {
    fn serialize(self: &Self) -> Cow<'a, str> {
        let result = self.iter()
            .map(|x| serialize_method(x))
            .collect::<Vec<Cow<'a, str>>>();
        let join_result = result.join(",");
        Cow::Owned(join_result)
    }
}
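Putting the pieces together, a condensed runnable version of the snippets above looks like this (`main` and the sample data are mine, and `serialize_method` here simply delegates to the trait method):

```rust
use std::borrow::Cow;

pub trait Serializable<'a> {
    fn serialize(&self) -> Cow<'a, str>;
}

struct Person<'a> {
    name: &'a str,
    age: i32,
}

impl<'a> Serializable<'a> for Person<'a> {
    fn serialize(&self) -> Cow<'a, str> {
        Cow::Owned(self.name.to_owned() + " " + &self.age.to_string())
    }
}

// A Vec of serializable things is itself serializable.
impl<'a, T: Serializable<'a>> Serializable<'a> for Vec<T> {
    fn serialize(&self) -> Cow<'a, str> {
        let parts: Vec<Cow<'a, str>> =
            self.iter().map(|x| x.serialize()).collect();
        Cow::Owned(parts.join(","))
    }
}

pub fn serialize_method<'a, T: Serializable<'a>>(v: &T) -> Cow<'a, str> {
    v.serialize()
}

fn main() {
    let people = vec![
        Person { name: "Ada", age: 36 },
        Person { name: "Alan", age: 41 },
    ];
    println!("{}", serialize_method(&people));   // prints: Ada 36,Alan 41
}
```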
Code
You can find complete code for Scala on github.
You can find complete code for Rust on github.
Conclusion
Type classes are one of the powerful patterns in Scala and Rust which allow the programmer to add different generic behaviours to types.
I am working on a large open source Semantic CMS named Clerezza, an Apache incubator, which relied on Maven. It uses Java for the infrastructure and Scala for HTML and many other fancy things.
I have noticed that a lot of maven imports are missing. They are somehow acknowledged by IDEA 10.0.1 Build IC-99.32, that is some things work and others don't.
- In the packages view the org.apache.clerezza... packages are only very partially listed. The screen shot below shows how many package dependencies are required in Maven, but it shows only a few in the Libraries section. It seems that only org.apache.clerezza.scripting shows up, where the maven tab in the top right pane shows clearly the following being listed
- org.apache.clerezza.rdf.core
- org.apache.clerezza.rdf.scala.utils
- org.apache.clerezza.rdf.ontologies
- org.apache.clerezza.jaxrs.utils
- org.apache.clerezza.triaxrs
- and many more
- As a result (I guess) the Scala editor has a problem finding the FOAF class even though the package in which it is to be found is declared at the top of the file
import org.apache.clerezza.rdf.ontologies._
On the other hand - oddly enough - the Scala editor recognises the package the FOAF class should be in, and asks to add the required import statement.
Attachment(s):
Maven Dependency problem.png
I've often noticed that Maven has lots of problems figuring out (and subsequently downloading) nested dependencies.
Top-level dependencies of course are listed in the POM and recognised just fine, but the dependencies of these dependencies are not properly parsed from their POMs, leading to compiler and/or runtime failures.
As a result, we've taken to always listing all dependencies (including nested ones) as top-level dependencies in the main POM (if we use Maven at all; I still prefer Ant for its more fine-grained control that doesn't force things upon teams) to ensure everything needed gets downloaded and stays at the version we desire.
Hello Henry,
is the needed Maven artifact really accessible by IDEA/Maven? A lot of these problems arise when the <version> tag in the pom.xml is missing and you have multiple possible versions of the artifact available. Please check in the project view (View as: Project) under External Libraries that the needed Maven dependency is listed as something like this:
Maven: org.apache.clerezza:org.apache.clerezza.rdf.core:VERSION
If you see none or multiple entries with different versions, you have to exclude the wrong versions out of your pom.xml.
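Such an exclusion typically looks like this in the pom.xml (the artifact names in the exclusion are illustrative placeholders, not real Clerezza dependencies):

<dependency>
  <groupId>org.apache.clerezza</groupId>
  <artifactId>org.apache.clerezza.rdf.core</artifactId>
  <version>${clerezza.version}</version>
  <exclusions>
    <exclusion>
      <!-- drop the transitively-pulled wrong version -->
      <groupId>org.example</groupId>
      <artifactId>old-artifact</artifactId>
    </exclusion>
  </exclusions>
</dependency>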
You can check whether Maven is able to get the right dependency with the dependency plugin. Run this in a shell:
mvn dependency:tree -Dincludes=org.apache.clerezza
I never saw such a problem with Maven. In fact, this is one advantage (if not the only advantage, besides convention-building) of Maven over Ant. Usually the cause of these problems is misconfigured dependency sections in the pom.xml. If transitive dependencies are not downloaded, it is almost always due to a version conflict between other dependencies or to versions not being set correctly. Sometimes it is because a transitive dependency is marked as optional, because more than one provider is available.
I have had the best experience with explicitly setting the versions of all dependencies. To keep life simple, use properties for this (IDEA has a nice refactoring for this, btw). After that, the most important task is to check for multiple versions with the dependency plugin, like I mentioned in my answer below. And the Maven versions plugin is useful, too.
I made the best experience with explicitly setting all versions of all dependencies. To keep life simple, use properties for this (IDEA has a nice refactoring for this, btw). After this, the most important task is to check for mutliple versions with the help-plugin, like I mentioned in my answer below. And the Maven versions plugin is useful, too.
There are no problems with Maven here. Clerezza builds fine using Maven. This is an issue of IntelliJ not picking up the dependencies correctly. Furthermore, NetBeans works for Clerezza here.
There is no problem running
mvn dependency:tree -Dincludes=org.apache.clerezza
it completes successfully. The whole project also compiles successfully with this
$ export MAVEN_OPTS=-Xmx512m
More details on getting it working are in this e-mail: "Clerezza and WebID: How to get it going"
(note: now the shell has changed, so you need to add :f before any command)
Also recent changes have made Clerezza require somewhat more memory. I need this now
$ java -Dfile.encoding=utf-8 -Xmx512m -XX:MaxPermSize=248M -jar org.apache.clerezza.platform.launcher.sesame-0.5-incubating-SNAPSHOT.jar --https_keystore_clientauth want --https_port 8443 --https_keystore_path /Users/hjs/tmp/cert/KEYSTORE.jks --https_keystore_password secret
(The https_port and keystore path attributes should probably be dropped initially, if you have not created yourself a local key...)
it's worth trying out to see what is going on here....
Henry
I have been using NetBeans since my last post to see if the problems would get solved with a new release of IntelliJ. As I saw there was a new release of IntelliJ, I thought it would be worth trying it to see how things stand now. The good news: one of my Scala problems seems to have been solved. On the other hand, this Maven problem remains in IntelliJ 10.0.2.
I have attached two screenshots of IntelliJ with the maven tab open. Perhaps that can help spot the mistake. The first attached picture shows the following message at the top of the yellow pop up:
"Problem resolving expression ${project.groupid} ..."
But I can't see where I should set that. Also from the command line I don't need to set it.
The project is still available in the same place from svn, though the directory structure has changed recently. It can be downloaded from svn from here
$ svn co
The files are in the parent directory. Perhaps that should help to find out what IntelliJ is not picking up.
Attachment(s):
Screen shot 2011-02-17 at 23.01.39.png
Screen shot 2011-02-17 at 23.01.25.png
This seems to have been fixed in the trunk of IntelliJ Open Source. I built it following the recipe detailed on the page "Check Out & Build Community Edition"
Today is 18 February 2011, btw.
This bug remains. I use IntelliJ IDEA 11.1 (117.84) and I still experience this problem. | https://intellij-support.jetbrains.com/hc/en-us/community/posts/206903525-Maven-dependencies-missing | CC-MAIN-2021-17 | refinedweb | 1,028 | 65.32 |
Red Hat Bugzilla – Full Text Bug Listing
The command:
cobbler import --name=xdist-rhel6-latest --path=/mnt/data/devel/trees/rhel6-latest --available-as=
No longer works for me with latest f16 cobbler. It worked with cobbler 2.0 IIRC.
The following patch fixes a few issues, pasted inline for explanation:
--- cobbler/modules/manage_import_redhat.py 2011-10-05 20:49:16.000000000 -0400
+++ /usr/lib/python2.7/site-packages/cobbler/modules/manage_import_redhat.py 2011-10-27 16:24:47.905975596 -0400
@@ -246,7 +246,7 @@ class ImportRedhatManager:
if not self.network_root.endswith("/"):
self.network_root = self.network_root + "/"
- self.path = os.path.normpath( self.mirror )
+ self.path = self.rootdir = os.path.normpath( self.mirror )
valid_roots = [ "nfs://", "ftp://", "http://" ]
for valid_root in valid_roots:
if self.network_root.startswith(valid_root):
rootdir needs to match self.path here, otherwise rootdir was pointing to somewhere under /var, when we need it to point to the media being imported.
@@ -273,6 +273,10 @@ class ImportRedhatManager:
# FIXME: this automagic is not possible (yet) without mirroring
self.repo_finder(distros_added)
+ if not distros_added:
+ utils.die(self.logger,
+ "Failed to detect any distros in %s" % self.path)
+
# find the most appropriate answer files for each profile object
self.logger.info("associating kickstarts")
This makes 'cobbler import' appropriately fail if no distros are actually successfully detected from the passed tree.
@@ -281,11 +285,12 @@ class ImportRedhatManager:
# ensure bootloaders are present
self.api.pxegen.copy_bootloaders()
+
return True
# required function for import modules
def get_valid_arches(self):
- return ["i386", "ia64", "ppc", "ppc64", "s390", "s390x", "x86_64", "x86",]
+ return ["i386", "ia64", "ppc", "ppc64", "s390", "s390x", "x86_64"]
# required function for import modules
def get_valid_breeds(self):
Ignoring the accidental whitespace change, this removes x86 from the expected package arch list. AFAIK red hat has never used 'x86' in RPM package strings. It's a problem anyways, because x86 will always also match x86_64, so importing an x86_64 distro will cause problems since cobbler detects it as a multiarch tree.
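The substring problem described above is easy to see in isolation (a toy check mirroring the `find`-based matching in the patch; the filename is made up):

```python
# The arch detection used substring matching, so "x86" matches "x86_64" too.
def arches_in(filename, valid_arches):
    return [a for a in valid_arches if filename.find(a) != -1]

print(arches_in("kernel-2.6.32-71.el6.x86_64.rpm", ["i386", "x86", "x86_64"]))
# → ['x86', 'x86_64']  (looks multi-arch even though the tree is pure x86_64)
```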
self.get_entry should also be changed to never return None (which it does in 2 error paths), since callers expect a list for a return value. The offending areas should probably just use utils.die.
Created attachment 530565 [details]
Fix various issues importing RH distros
A couple more fixes were needed for importing an F16 tree. The first basic one is that fedora16 needs to be added to the whitelist in cobbler/codes.py.
The second problem is that scanning an x86_64 tree again detects 2 architectures, because the kernel package scanning picks up an i686 kernel-tools package. This patch fixes that:
@@ -835,12 +844,22 @@ class ImportRedhatManager:
"""
Is the given filename a kernel filename?
"""
+ blacklist = ["kernel-tools"]
+ whitelist = ["kernel-header", "kernel-source", "kernel-smp",
+ "kernel-largesmp", "kernel-hugemem", "linux-headers-",
+ "kernel-devel", "kernel-"]
if not filename.endswith("rpm") and not filename.endswith("deb"):
return False
- for match in ["kernel-header", "kernel-source", "kernel-smp", "kernel-largesmp", "kernel-hugemem", "linux-headers-", "kernel-devel", "kernel-"]:
+
+ for match in blacklist:
+ if filename.find(match) != -1:
+ return False
+
+ for match in whitelist:
if filename.find(match) != -1:
return True
+
return False
def scan_pkg_filename(self, rpm):
Y'all should probably be parsing .treeinfo files for Red Hat distros; it's much, much easier. It would also be nice to have some logic that makes an unknown Fedora value just default to the highest supported Fedora OS. We do similar things in python-virtinst:;a=blob_plain;f=virtinst/OSDistro.py;hb=HEAD
This package has changed ownership in the Fedora Package Database. Reassigning to the new owner of this component.
I believe this has been fixed in the most recent release (2.2.x). Can you please verify if this is still an issue? I have imported SL6 on a F16 system running cobbler without issue.
Things don't obviously fail as they did previously, but now there is a very annoying change in behavior.
I have a RHEL6 tree on my local machine. I want cobbler to learn about it and advertise a kickstart for it. I don't want it to copy it to it's preferred location.
sudo cobbler import --name=xdist-rhel6-latest --path=/mnt/data/devel/trees/rhel6-latest --available-as=
Previously the above invocation was a very quick operation. With current version from updates-testing, it begins rsyncing the distro tree to /var/www/cobbler/ks_mirror/xdist-rhel6-latest which would leave 2 copies on my physical machine.
Maybe there was a regression and import just isn't taking into account --available-as ?
Actually it looks like the change was deliberate (and done by you) :)
But the commit message provides little justification for this change. All the cobbler documentation for --available-as touted its main feature that it didn't try and mirror the distro to another location. So this change is a regression and an API break since the cli tool is the admin's API to cobbler.
IMO this change should be reverted. If you need a way to pass to specify network_root but still do the mirroring, what you want is a new cli option to force mirroring, not changing existing option semantics.
My workflow has me regularly blow away and reimport distros with cobbler (I'm a virt developer and use pxe to advertise multiple distros).
Yes, that is correct. That commit was to address a chicken & egg problem with the new import modules, which have to scan files before they can figure out what distro to import. Doing the full rsync was supposed to be a stop-gap measure so that --available-as imports didn't fail outright (which was the situation before).
I will start working on correcting this situation, which is going to involve using a file containing a whitelist of just the files it needs to rsync while excluding the rest. This bug will be closed as fixed, and I'll open a new issue on github to track the fix if that works for you.
Got this working for RedHat-based distros:
As you can see, the downloaded data for Centos 5.8 was just 78M - much better than the full DVD (or more). I should get the rest of the distros fixed up the same way, and will aim to get fix in 2.2.3.
Sounds good, thanks James. I'd also appreciate a redhat bugzilla so I can track when the issue is fixed in fedora.
That's fine, I'll just leave this open as well. The github issue is
Code merged into master to correct this:
If there's enough testing we may be able to get it into 2.2.3, otherwise it will be in 2.2.4. | https://bugzilla.redhat.com/show_bug.cgi?format=multiple&id=749667 | CC-MAIN-2017-13 | refinedweb | 1,102 | 56.45 |
A lot of algorithms are using record() to keep track of leverage. The problem is that record() only shows the last value seen per day and drops the rest. Leverage can change in any minute so 389 out of 390 minutes of the trading day are missed. If record() is set to operate on just the first and second minute of the day, the second minute will show up, it works like that.
Why is leverage so important? Because leverage above 1 often means margin, spending beyond the amount in the account. If you start with $100,000 and the code hits a leverage of 1.5 at any time during the run, it typically means you would have had to supply $150,000 to achieve that result by borrowing from the broker, requiring a margin account with fees/interest. (It's a little more involved, that's just the general idea).
A lot can happen intraday (occurring within/throughout a day). To catch leverage jumps (new highs), here is some minimal code for copy/paste. This will read leverage on every minute processed and chart the maximum every day, looks like a stair-step showing when leverage increases happen. Quick and easy.
Edit, from below ...
def initialize(context): ... context.mxlv = 0 for i in range(1, 391): schedule_function(mxlv, date_rules.every_day(), time_rules.market_open(minutes=i)) def mxlv(context, data): if context.account.leverage > context.mxlv: context.mxlv = context.account.leverage record(MxLv = context.mxlv)
Earlier ...
def handle_data(context, data): if 'mx_lvrg' not in context: # Max leverage context.mx_lvrg = 0 # Init this instead in initialize() for better efficiency if context.account.leverage > context.mx_lvrg: context.mx_lvrg = context.account.leverage record(MxLv = context.mx_lvrg) # Record maximum leverage encountered | https://www.quantopian.com/posts/max-intraday-leverage | CC-MAIN-2018-43 | refinedweb | 287 | 67.76 |
Customizing ASP.NET Core Part 08: ModelBinders
Jürgen Gutsch - 17 October, 2018
In the last post about
OutputFormatters I wrote about sending data out to the clients in different formats. In this post we are going to do it the other way. This post is about data you get into your Web API from outside. What if you get data in a special format or what if you get data you need to validate in a special way.
ModelBinders will help you handling this. - This article
- Customizing ASP.NET Core Part 09: ActionFilter
- Customizing ASP.NET Core Part 10: TagHelpers
- Customizing ASP.NET Core Part 11: WebHostBuilder
- customizing ASP.NET Core Part 12: Hosting
About ModelBinders
ModelBinders are responsible to bind the incoming data to specific action method parameters. It binds the data sent with the request to the parameters. The default binders are able to bind data that are sent via the QueryString or sent within the request body. Within the body the data can be sent in URL format or JSON.
The model binding tries to find the values in the request by the parameter names. The form values, the route data and the query string values are stored as a key-value pair collection and the binding tries to find the parameter name in the keys of the collection.
Preparation of the test project
In this],"1985 Devon Avenue ",Sansom Park,(357) 274-3606
So let's start by creating a new project using the .NET CLI:
dotnet new webapi -n ModelBinderSample -o ModelBinderSample
This creates a new Web API project.
In this new project I created a new controller with a small action inside:
namespace ModelBinderSample.Controllers { [Route("api/[controller]")] [ApiController] public class PersonsController : ControllerBase { public ActionResult<object> Post(IEnumerable<Person> persons) { return new { ItemsRead = persons.Count(), Persons = persons }; } } }
This looks basically like any other action. It accepts a list of persons and returns an anonymous object that contains the number of persons as well as the list of persons. This action is pretty useless, but helps us to debug the ModelBinder using Postman.
We also need the Person class:
public class Person { public int Id { get; set; } public string FirstName { get; set; } public string LastName { get; set; } public int Age { get; set; } public string EmailAddress { get; set; } public string Address { get; set; } public string City { get; set; } public string Phone { get; set; } }
This actually will work fine, if we would send JSON based data to that action.
As a last preparation step, we need to add the CsvHelper NuGet package to easier parse the CSV data. I also love to use the .NET CLI here:
dotnet package add CsvHelper
Creating a CsvModelBinder
To create the
ModelBinder add a new class called
CsvModelBinder, which implements the
IModelBinder. The next snippet shows a generic binder that should work with any list of models:
public class CsvModelBinder : IModelBinder { public Task BindModelAsync(ModelBindingContext bindingContext) { if (bindingContext == null) { throw new ArgumentNullException(nameof(bindingContext)); } // Specify a default argument name if none is set by ModelBinderAttribute var modelName = bindingContext.ModelName; if (String.IsNullOrEmpty(modelName)) { modelName = "model"; } // Try to fetch the value of the argument by name var valueProviderResult = bindingContext.ValueProvider.GetValue(modelName); if (valueProviderResult == ValueProviderResult.None) { return Task.CompletedTask; } bindingContext.ModelState.SetModelValue(modelName, valueProviderResult); var value = valueProviderResult.FirstValue; // Check if the argument value is null or empty if (String.IsNullOrEmpty(value)) { return Task.CompletedTask; } var stringReader = new StringReader(value); var reader = new CsvReader(stringReader); var modelElementType = bindingContext.ModelMetadata.ElementType; var model = reader.GetRecords(modelElementType).ToList(); bindingContext.Result = ModelBindingResult.Success(model); return Task.CompletedTask; } }
In the method
BindModelAsync we get the
ModelBindingContext with all the information in it we need to get the data and to de-serialize it.
First the
context get's checked against null values. After that we set a default argument name to model, if none is specified. If this is done we are able to fetch the value by the name we previously set.
If there's no value, we shouldn't throw an exception in this case. The reason is that maybe the next configured
ModelBinder is responsible. If we throw an exception the execution of the current request is broken and the next configured
ModelBinder doesn't have the chance to get executed.
With a
StringReader we read the value into the
CsvReader and de-serialize it to the list of models. We get the type for the de-serialization out of the
ModelMetadata property. This contains all the relevant information about the current model.
Using the ModelBinder
The Binder isn't used automatically, because it isn't registered in the dependency injection container and not configured to use within the MVC framework.
The easiest way use this model binder is to use the ModelBinderAttribute on the argument of the action where the model should be bound:
[HttpPost] public ActionResult<object> Post( [ModelBinder(binderType: typeof(CsvModelBinder))] IEnumerable<Person> persons) { return new { ItemsRead = persons.Count(), Persons = persons }; }
Here the type of our
CsvModelBinder is set as
binderType to that attribute.
Steve Gordon wrote about a second option in his blog post: Custom ModelBinding in ASP.NET MVC Core. He uses a
ModelBinderProvider to add the
ModelBinder to the list of existing ones.
I personally prefer the explicit declaration, because the most custom
ModelBinders will be pretty specific to an action or to an specific type and
theres no hidden magic in the background.
Testing the ModelBinder
To test it, we need to create a new Request in Postman. I set the request type to POST and put the URL in the address bar. No I need to add the CSV data in the body of the request. Because it is a URL formatted body, I needed to put the data as
persons variable into the body:
persons
After pressing send, I got the result as shown below:
Now the clients are able to send CSV based data to the server.
Conclusion
This is a good way to transform the input in a way the action really needs. You could also use the ModelBinders to do some custom validation against the database or whatever you need to do before the model get's passed to the action.
To learn more about ModelBinders, you need to have a look into the pretty detailed documentation:
While playing around with the
ModelBinderProvider Steve describes in his blog, I stumbled upon
InputFormatters. Would this actually be the right way to transform CSV input into objects? I definitely need to learn some more details about the
InputFormattersand will use this as 12th topic of this series.
Please follow the introduction post of this series to find additional customizing topics I will write about.
In the next part I will show you what you can do with ActionFilters: Customizing ASP.NET Core Part 09: ActionFilter | https://asp.net-hacker.rocks/2018/10/17/customizing-aspnetcore-08-modelbinders.html?utm_source=feedburner&utm_medium=twitter&utm_campaign=Feed%3A%20jgutsch%20%28ASP.NET%20Hacker%29 | CC-MAIN-2020-24 | refinedweb | 1,131 | 55.03 |
HTML and CSS Reference
In-Depth Information
figure 9-30
Gargoyle collection page
Complete the following:
1. Use your text editor to open the gargtxt.htm and maatxt.css files from the
tutorial.09\case2 folder included with your Data Files. Enter your name and the
date in the comment sections, and then save the files as gargoyle.htm and maa.css ,
respectively.
2. Go to the gargoyle.htm file in your text editor, and then insert an XML prolog at the
top of the file. Use the standard attribute defaults for an XHTML file.
3. After the XML prolog, insert a DOCTYPE declaration for the XHTML 1.0 strict DTD.
4. Set the default namespace of the document to the XHTML namespace.
5. Link the file to the maa.css style sheet.
Search WWH ::
Custom Search | http://what-when-how.com/Tutorial/topic-654h9f5/HTML-CSS-and-Dynamic-HTML-709.html | CC-MAIN-2017-47 | refinedweb | 137 | 71.92 |
How to delay assignment of class values in Scala?
scala local variables must be initialized
scala global variable
scala lazy var
a variable requires an initial value to be defined
scala lazy val
scala var vs val
how to print the variable in scala
I have a class that gets created using data
a read from a .conf file. The class also contains a value
b which does not come from the conf file.
The class:
case class Configuration(a: String, b: String)
The sequence of calls to instantiate a Configuration looks like this:
User ->
ConfigurationCompiler ->
ConfigurationReader ->
readConfig(): Configuration
(The user gets a Configuration object back)
Value
a gets read and set by the .conf file, but value
b is specified by the user. I don't want to pass value
b all the way down to readConfig() so it can be set on instantiation of
Configuration.
I have experimented with
Option, but to me it looks ugly because you first instantiate
b with
None, and then set it later. Also, the tests look weird because you must test
Some(String) instead of
String. Option also doesn't seem like it fits here because the field is actually not optional, it is just set at a later time.
The
Option class:
case class Configuration(a: String, var b: Option[String])
Some solution might be to use
implicit, but
implicit String doesn't look good - implicit types are resolved by type, so it should be something less general than
String.
What I can think of is something like
PartialConfiguration returned by
readConfig(). Then you can add it's values to the values provided by
User to create full
Configuration.
How to assign a Scala class field to a (lazy) block or function , You want to initialize a field in a Scala class using a block of code, or by calling a function. Solution. Set the field equal to the desired block of Re: Re assignment to Val Immutable values can't be changed, ever, this is by design. It might first look like you cant doing a lot of things, but to see how this idea works it helps to think about the way that numbers behave.
You could create the original configuration object with a default value in the
b position. When you get the actual value of
b, then create a copy that is used by your program using:
val realConfig = originalConfig.copy( b = bValue )
The copy will have the field
b replaced with the desired value for actual use.
[PDF] Programming in Scala, 4 First-Class Functions. 29 Web services address the message delay prob- whole class hierarchy, not just values of a single type. Rational conforms to AnyRef, so it is legal to assign a Rational value to a variable. This.
I would go for a
builder. Here's an example:
import cats.implicits._ import cats.data.ValidatedNel case class ConfigBuilder(a: Option[String] = None, b: Option[String] = None) { def withA(a: String): ConfigBuilder = copy(a = Some(a)) def withB(b: String): ConfigBuilder = copy(b = Some(b)) def build: ValidatedNel[String, Configuration] = (a.toValidNel("'a' must be defined"), b.toValidNel("'b' must be defined")) .mapN(Configuration) }
Validating both fields:
ConfigurationBuilder() .build // Invalid(NonEmptyList('a' must be defined, 'b' must be defined))
Getting valid configuration:
ConfigurationBuilder() .withA("test") .withB("test2") .build // Valid(Configuration(test,test2))
initializing later, to be later (and something has to be there):. class Test { // will hold a Foo class lazy val foo = null. def validate(bar:SomeFactory) { if (test == null). Let us extend our above class and add one more class method.
Scala Basic Tutorial, 6 Case Classes and Pattern Matching. 51 Web services address the message delay prob- lem by increasing granularity, Rational conforms to AnyRef, so it is legal to assign a Rational value to a variable of type AnyRef:. You want to provide a default value for a Scala constructor parameter, which gives other classes the option of specifying that parameter when calling the constructor, or not. Solution Give the parameter a default value in the constructor declaration.
Using the Timer and TimerTask Classes, In this Scala Tutorial, you will learn the basics of Scala such as how to declare tutorial, immutability is a first class citizen in the Scala programming language. variable named donutsToBuy of type Int and assign its value to 5. Sometimes you may wish to delay the initialization of some variable until at You can also assign the results from a Scala if expression in a simple function, like this absolute value function: As shown, the Scala if/then/else syntax is similar to Java, but because Scala is also a functional programming language, you can do a few extra things with the syntax, as shown in the last two examples.
Scala Programming Projects: Build real world projects using , is an example of using a timer to perform a task after a delay: import java.util. public class Reminder { Timer timer; public Reminder(int seconds) { timer = new The comparison of pattern matching decomposition approach (using scala trait and case class) with the object oriented approach (trait and class implementations) is highlighted in Expr.scala, Expr2.scala, and Expr3.scala.
- I think this is the simplest solution that doesn't use defaults which I feel are not elegant if you intend to set them later. I like this approach because it gives back a complete object, that is explicitly stated as being a
PartialConfiguration, instead of a
Configurationwith implied missing parts.
- @jcallin Do you have to be able to inspect the
PartialConfigurationobjects? If not, you could just use a function
ConfInput => UserInput => Configurationdirectly, without inventing any new classes to hold intermediate results. If you define
Configurationin the simplest way possible as
case class Configuration(a: String, b: String), then the function
(confStr: String) => (userStr: String) => Configuration(confStr, userStr)of type
String => String => Configurationseems sufficient. | https://thetopsites.net/article/51619817.shtml | CC-MAIN-2021-25 | refinedweb | 977 | 51.89 |
Opened 7 years ago
Closed 2 years ago
Last modified 2 years ago
#15815 closed New feature (duplicate)
Support memcached binary protocol in PyLibMCCache
Description
As per , enabling the binary protocol requires the
binary=True argument to be passed into pylibmc.Client when it is initialized.
The binary protocol is available in Memcached 1.3+ and provides a performance boost in high-load circumstances:
pylibmc ignores any unknown options passed into Client.behaviors (see below), so putting this in
OPTIONS seems like the way to go.
>>> import pylibmc >>> a=pylibmc.Client(['127.0.0.1:55838',]) >>>} >>> a.behaviors = {"binary":True} # should be ignored by pylibmc >>>}
Attachments (1)
Change History (19)
Changed 7 years ago by
comment:1 Changed 7 years ago by
comment:2 Changed 7 years ago by
The patch isn't ideal - special-casing "binary" is a bit annoying - but this is totally worth having.
comment:3 Changed 7 years ago by
I’ll agree with that, though I wasn’t sure how to make this semantically better.
For the older cache framework, django-newcache accepted
binary as a cache param (with
timeout,
cull_frequency, et. al.), with
CACHE_BEHAVIORS as a separate settings option. (Using django-newcache as a comparison point since some of the newer 1.3+ cache features like versioning and key prefixing appear to have been based on the
newcache implementation.)
But I didn’t quite feel that special casing
PyLibMCCache to accept a new base parameter was correct, either …
CACHES = { 'default' : { "BACKEND" : 'django.core.cache.backends.memcached.PyLibMCCache', "LOCATION" : '127.0.0.1:11211', "BINARY" : True, "OPTIONS" : dict(tcp_nodelay=True, ketama=True), } }
… since the description of
OPTIONS reads, “Any options that should be passed to cache backend. The list options understood by each backend vary with each backend. […] Cache backends backed by a third-party library will pass their options directly to the underlying cache library.”
In particular, that seems to imply that for consistency’s sake, all implementation-specific options regarding a backend should go into
OPTIONS and that it’s up to the backend to do what it needs to provide the correct information to the underlying library.
Technically, a more semantically-correct option would be to do something like:
CACHES = { 'default' : { "BACKEND" : 'django.core.cache.backends.memcached.PyLibMCCache', "LOCATION" : '127.0.0.1:11211', "OPTIONS" : { "binary": True, "behaviors" : dict(tcp_nodelay=True, ketama=True), } } }
Not really sure what the best patch would be at this point.
comment:4 Changed 7 years ago by
comment:5 Changed 7 years ago by
I recently updated django-pylibmc, a 3rd party cache backend, with a few features we wanted at Mozilla. Namely, to make a timeout of zero mean an infinite timeout, add compression support (which pylibmc handles), and support the binary protocol. The package can be found here:.
I'd be happy to help push this forward or bring some code over into Django. Mapping
OPTIONS to pylibmc "behaviors" and adding the extra
BINARY=True parameter made sense to me. But whichever way is decided, this should be an easy thing to add to this backend.
comment:6 Changed 6 years ago by
comment:7 Changed 6 years ago by
Enabling binary mode breaks the tests at the moment, because keys with spaces are actually allowed in binary mode.
comment:8 Changed 6 years ago by
Working on it here:
Still needs some docs to demystify the options for pylibmc a bit. Currently all options are set as behaviors except binary (which still feels a bit ugly)
comment:9 Changed 5 years ago by
I really don't think special casing is a big deal here -- it's not far off of what BaseCache does already, although I'd rather
pop binary off of _options in the _cache method before setting behaviors to it.
Happy to finish this off if bpeschier can't get to it, but it's almost done at this point I think.
comment:10 Changed 5 years ago by
So this is already implemented in.
Shouldn't we just merge that cache provider back?
comment:11 Changed 3 years ago by
comment:12 Changed 3 years ago by
How come this is not supported? What's blocking it exactly?
How can I help?
comment:13 Changed 3 years ago by
I don't see a reviewable patch.
comment:14 Changed 3 years ago by
any news about it?
comment:15 Changed 3 years ago by
comment:16 Changed 3 years ago by
In case this helps anyone working on this in the future, if you get test failures running the Django pylibmc cache tests with binary mode enabled in
test_zero_timeout() - I believe it's due to a bug in older versions of libmemcached-dev.
This appears to only affect
.set() and not
.add(), and only appears if using binary mode. The bug means that a Django zero timeout (which is converted to
-1 when passed to pylibmc) is ignored by libmemcached and the key is in fact set after all.
Affected libmemcached versions include that running on Travis on their Ubuntu precise images (which presuming they are using the official package is). libmemcached-dev 1.0.8 on Ubuntu trusty works fine. I couldn't find a related bug or commit on but Launchpad is pretty painful to use so I may have just missed it.
comment:17 Changed 3 years ago by
<obsolete comment>
comment:18 Changed 2 years ago by
This has been fixed in ticket #20892, by adding generic support for passing parameters through to the memcached client constructor (and for all backends, not just
PyLibMCCache).
pylibmc's constructor is:
def __init__(self, servers, behaviors=None, binary=False, username=None, password=None):
So on Django master (will be 1.11), binary mode can be enabled using eg:
CACHES = { 'default': { 'BACKEND': 'django.core.cache.backends.memcached.PyLibMCCache', 'LOCATION': '127.0.0.1:11211', 'OPTIONS': { 'binary': True, 'username': 'user', 'password': 'pass', 'behaviors': { 'ketama': True, } } } }
For more examples, see:
I'm going to try and backport these changes (plus ticket #27152) to django-pylibmc, so the same Django settings file will work for both, to make transitioning from "older Django+django-pylibmc backend" to "Django 1.11+ with stock backend" easier.
Not sure if patch is in the proper format; have also opened a github pull request, tracking this bug.
Looked into updating the docs, however the documentation appears to leave out implementation-specific details of the different memcached backends. Please let me know if I’m mistaken. | https://code.djangoproject.com/ticket/15815?cversion=0&cnum_hist=16 | CC-MAIN-2018-39 | refinedweb | 1,077 | 53.61 |
Support models¶
Support models are abstractions over “raw” objects within a Pdf. For example, a page
in a PDF is a Dictionary with /Type set to /Page. The Dictionary in
that case is the “raw” object. Upon establishing what type of object it is, we
can wrap it with a support model that adds features to ensure consistency with
the PDF specification.
In version 2.x, pikepdf did not apply support models to “raw” objects automatically.
Version 3.x automatically applies support models to
/Page objects.
- class pikepdf.Page¶
Support model wrapper around a page dictionary object.
- add_content_token_filter(self: pikepdf.Page, tf: pikepdf.TokenFilter) → None ¶
Attach a pikepdf.TokenFilter to a page’s content stream.
This function applies token filters lazily, if/when the page’s content stream is read for any reason, such as when the PDF is saved. If the content stream is never accessed, the token filter is never applied.
Multiple token filters may be added to a page/content stream.
Token filters may not be removed after being attached to a Pdf. Close and reopen the Pdf to remove token filters.
If the page’s contents is an array of streams, it is coalesced.
- add_overlay(other, rect=None, *, push_stack=True)¶
Overlay another object on this page.
Overlays will be drawn after all previous content, potentially drawing on top of existing content.
- Parameters
other (Union[pikepdf.objects.Object, pikepdf._qpdf.Page]) – A Page or Form XObject to render as an overlay on top of this page.
rect (Optional[pikepdf._qpdf.Rectangle]) – The PDF rectangle (in PDF units) in which to draw the overlay. If omitted, this page’s trimbox, cropbox or mediabox will be used.
push_stack (Optional[bool]) – If True (default), push the graphics stack of the existing content stream to ensure that the overlay is rendered correctly. Officially PDF limits the graphics stack depth to 32. Most viewers will tolerate more, but excessive pushes may cause problems. Multiple content streams may also be coalesced into a single content stream where this parameter is True, since the PDF specification permits PDF writers to coalesce streams as they see fit.
New in version 2.14.
Changed in version 3.3.0.
- add_resource(res, res_type, name=None, *, prefix='', replace_existing=True)¶
Adds a new resource to the page’s Resources dictionary.
If the Resources dictionaries do not exist, they will be created.
- Parameters
self – The object to add to the resources dictionary.
res (pikepdf.objects.Object) – The dictionary object to insert into the resources dictionary.
res_type (pikepdf.objects.Name) – Should be one of the following Resource dictionary types: ExtGState, ColorSpace, Pattern, Shading, XObject, Font, Properties.
name (Optional[pikepdf.objects.Name]) – The name of the object. If omitted, a random name will be generated with enough randomness to be globally unique.
prefix (str) – A prefix for the name of the object. Allows conveniently namespacing when using random names, e.g. prefix=”Im” for images. Mutually exclusive with name parameter.
replace_existing (bool) – If the name already exists in one of the resource dictionaries, remove it.
- Returns
The name of the object.
- Return type
pikepdf.objects.Name
Example
>>> resource_name = pdf.pages[0].add_resource(formxobj, Name.XObject)
New in version 2.3.
Changed in version 2.14: If res does not belong to the same Pdf that owns this page, a copy of res is automatically created and added instead. In previous versions, it was necessary to copy the object manually in this case.
- add_underlay(other, rect=None)¶
Underlay another object beneath this page.
Underlays will be drawn before all other content, so they may be overdrawn partially or completely.
- Parameters
other (Union[pikepdf.objects.Object, pikepdf._qpdf.Page]) – A Page or Form XObject to render as an underlay underneath this page.
rect (Optional[pikepdf._qpdf.Rectangle]) – The PDF rectangle (in PDF units) in which to draw the underlay. If omitted, this page’s MediaBox will be used.
New in version 2.14.
- as_form_xobject(self: pikepdf.Page, handle_transformations: bool = True) → pikepdf.Object ¶
Return a form XObject that draws this page.
This is useful for n-up operations, underlay, overlay, thumbnail generation, or any other case in which it is useful to replicate the contents of a page in some other context. The dictionaries are shallow copies of the original page dictionary, and the contents are coalesced from the page’s contents. The resulting object handle is not referenced anywhere.
- calc_form_xobject_placement(self: pikepdf.Page, formx: pikepdf.Object, name: pikepdf.Object, rect: pikepdf.Rectangle, *, invert_transformations: bool = True, allow_shrink: bool = True, allow_expand: bool = False) → bytes ¶
Generate content stream segment to place a Form XObject on this page.
The content stream segment must then be added to the page’s content stream.
The default keyword parameters will preserve the aspect ratio.
- Parameters
formx – The Form XObject to place.
name – The name of the Form XObject in this page’s /Resources dictionary.
rect – Rectangle describing the desired placement of the Form XObject.
invert_transformations – Apply /Rotate and /UserUnit scaling when determining FormX Object placement.
allow_shrink – Allow the Form XObject to take less than the full dimensions of rect.
allow_expand – Expand the Form XObject to occupy all of rect.
New in version 2.14.
- contents_add(contents, *, prepend=False)¶
Append or prepend to an existing page’s content stream.
- Parameters
contents (Union[pikepdf.objects.Stream, bytes]) – An existing content stream to append or prepend.
prepend (bool) – Prepend if true, append if false (default).
New in version 2.14.
- contents_coalesce(self: pikepdf.Page) → None ¶
Coalesce a page’s content streams.
A page’s content may be a stream or an array of streams. If this page’s content is an array, concatenate the streams into a single stream. This can be useful when working with files that split content streams in arbitrary spots, such as in the middle of a token, as that can confuse some software.
- property cropbox¶
This page’s effective /CropBox, in PDF units.
If the /CropBox is not defined, the /MediaBox is returned.
- externalize_inline_images(self: pikepdf.Page, min_size: int = 0) → None ¶
Convert inline images to normal (external) images.
- get_filtered_contents(self: pikepdf.Page, tf: pikepdf.TokenFilter) → bytes ¶
Apply a pikepdf.TokenFilter to a content stream, without modifying it.
This may be used when the results of a token filter do not need to be applied, such as when filtering is being used to retrieve information rather than edit the content stream.
Note that it is possible to create a subclassed TokenFilter that saves information of interest to its object attributes; it is not necessary to return data in the content stream.
To modify the content stream, use
pikepdf.Page.add_content_token_filter().
- Returns
The filtered content stream.
- property index¶
Returns the zero-based index of this page in the pages list.
That is, returns n such that pdf.pages[n] == this_page. A ValueError exception is thrown if the page is not attached to a Pdf.
Requires O(n) search.
New in version 2.2.
- property label¶
Returns the page label for this page, accounting for section numbers.
For example, if the PDF defines a preface with lower case Roman numerals (i, ii, iii…), followed by standard numbers, followed by an appendix (A-1, A-2, …), this function returns the appropriate label as a string.
It is possible for a PDF to define page labels such that multiple pages have the same labels. Labels are not guaranteed to be unique.
Note that this requires a O(n) search over all pages, to look up the page’s index.
New in version 2.2.
Changed in version 2.9: Returns the ordinary page number if no special rules for page numbers are defined.
- property obj¶
Get the underlying
pikepdf.Object.
- parse_contents(self: pikepdf.Page, arg0: pikepdf.StreamParser) None ¶
Parse a page’s content streams using a
pikepdf.StreamParser.
The content stream may be interpreted by the StreamParser but is not altered.
If the page’s contents is an array of streams, it is coalesced.
- remove_unreferenced_resources(self: pikepdf.Page) None ¶
Removes from the resources dictionary any object not referenced in the content stream.
A page’s resources dictionary maps names to objects elsewhere in the file. This method walks through a page’s contents and keeps tracks of which resources are referenced somewhere in the contents. Then it removes from the resources dictionary any object that is not referenced in the contents. This method is used by page splitting code to avoid copying unused objects in files that used shared resource dictionaries across multiple pages.
- property resources: pikepdf.objects.Dictionary¶
Return this page’s resources dictionary.
- rotate(self: pikepdf.Page, angle: int, relative: bool) None ¶
Rotate a page.
If
relativeis
False, set the rotation of the page to angle. Otherwise, add angle to the rotation of the page.
anglemust be a multiple of
90. Adding
90to the rotation rotates clockwise by
90degrees.
- class pikepdf.PdfMatrix(*args)¶
Support class for PDF content stream matrices
PDF content stream matrices are 3x3 matrices summarized by a shorthand
(a, b, c, d, e, f)which correspond to the first two column vectors. The final column vector is always
(0, 0, 1)since this is using homogenous coordinates.
PDF uses row vectors. That is,
vr @ A'gives the effect of transforming a row vector
vr=(x, y, 1)by the matrix
A'. Most textbook treatments use
A @ vcwhere the column vector
vc=(x, y, 1)'.
(
@is the Python matrix multiplication operator.)
Addition and other operations are not implemented because they’re not that meaningful in a PDF context (they can be defined and are mathematically meaningful in general).
PdfMatrix objects are immutable. All transformations on them produce a new matrix.
- f¶
Return one of the six “active values” of the affine matrix.
eand
fcorrespond to x- and y-axis translation respectively. The other four letters are a 2×2 matrix that can express rotation, scaling and skewing;
a=1 b=0 c=0 d=1is the identity matrix.
- class pikepdf.PdfImage(obj)¶
Support class to provide a consistent API for manipulating PDF images
The data structure for images inside PDFs is irregular and complex, making it difficult to use without introducing errors for less typical cases. This class addresses these difficulties by providing a regular, Pythonic API similar in spirit (and convertible to) the Python Pillow imaging library.
- as_pil_image()¶
Extract the image as a Pillow Image, using decompression as necessary.
Caller must close the image.
- Return type
PIL.Image.Image
- extract_to(*, stream=None, fileprefix='')¶
Attempt to extract the image directly to a usable image file
If possible, the compressed data is extracted and inserted into a compressed image file format without transcoding the compressed content. If this is not possible, the data will be decompressed and extracted to an appropriate format.
Because it is not known until attempted what image format will be extracted, users should not assume what format they are getting back. When saving the image to a file, use a temporary filename, and then rename the file to its final name based on the returned file extension.
Images might be saved as any of .png, .jpg, or .tiff.
Examples
>>> im.extract_to(stream=bytes_io) '.png'
>>> im.extract_to(fileprefix='/tmp/image00') '/tmp/image00.jpg'
- Parameters
-
- Returns
If fileprefix was provided, then the fileprefix with the appropriate extension. If no fileprefix, then an extension indicating the file type.
- Return type
-
- property filter_decodeparms¶
PDF has a lot of optional data structures concerning /Filter and /DecodeParms. /Filter can be absent or a name or an array, /DecodeParms can be absent or a dictionary (if /Filter is a name) or an array (if /Filter is an array). When both are arrays the lengths match.
Normalize this into: [(/FilterName, {/DecodeParmName: Value, …}), …]
The order of /Filter matters as indicates the encoding/decoding sequence.
- get_stream_buffer(decode_level=<StreamDecodeLevel.specialized: 2>)¶
Access this image with the buffer protocol.
- property icc¶
If an ICC profile is attached, return a Pillow object that describe it.
Most of the information may be found in
icc.profile.
- Returns
PIL.ImageCms.ImageCmsProfile
- property mode¶
PIL.Image.modeequivalent for this image, where possible
If an ICC profile is attached to the image, we still attempt to resolve a Pillow mode.
- property palette: Optional[pikepdf.models.image.PaletteData]¶
Retrieves the color palette for this image if applicable.
- read_bytes(decode_level=<StreamDecodeLevel.specialized: 2>)¶
Decompress this image and return it as unencoded bytes.
- class pikepdf.PdfInlineImage(*, image_data, image_object)¶
Support class for PDF inline images. Implements the same API as
PdfImage.
- Parameters
image_data (pikepdf.objects.Object) –
-
- class pikepdf.models.PdfMetadata(pdf, pikepdf_mark=True, sync_docinfo=True, overwrite_invalid_xml=True)¶
Read and edit the metadata associated with a PDF
The PDF specification contain two types of metadata, the newer XMP (Extensible Metadata Platform, XML-based) and older DocumentInformation dictionary. The PDF 2.0 specification removes the DocumentInformation dictionary.
This primarily works with XMP metadata, but includes methods to generate XMP from DocumentInformation and will also coordinate updates to DocumentInformation so that the two are kept consistent.
XMP metadata fields may be accessed using the full XML namespace URI or the short name. For example
metadata['dc:description']and
metadata['{}description']both refer to the same field. Several common XML namespaces are registered automatically.
See the XMP specification for details of allowable fields.
To update metadata, use a with block.
Example
>>> with pdf.open_metadata() as records: records['dc:title'] = 'New Title'
See also
pikepdf.Pdf.open_metadata()
- load_from_docinfo(docinfo, delete_missing=False, raise_failure=False)¶
Populate the XMP metadata object with DocumentInfo
- Parameters
-
- Return type
-
A few entries in the deprecated DocumentInfo dictionary are considered approximately equivalent to certain XMP records. This method copies those entries into the XMP metadata.
- property pdfa_status: str¶
Returns the PDF/A conformance level claimed by this PDF, or False
A PDF may claim to PDF/A compliant without this being true. Use an independent verifier such as veraPDF to test if a PDF is truly conformant.
- Returns
The conformance level of the PDF/A, or an empty string if the PDF does not claim PDF/A conformance. Possible valid values are: 1A, 1B, 2A, 2B, 2U, 3A, 3B, 3U.
- property pdfx_status: str¶
Returns the PDF/X conformance level claimed by this PDF, or False
A PDF may claim to PDF/X compliant without this being true. Use an independent verifier such as veraPDF to test if a PDF is truly conformant.
- Returns
The conformance level of the PDF/X, or an empty string if the PDF does not claim PDF/X conformance.
- class pikepdf.models.Encryption(*, owner, user, R=6, allow=Permissions(accessibility=True, extract=True, modify_annotation=True, modify_assembly=False, modify_form=True, modify_other=True, print_lowres=True, print_highres=True), aes=True, metadata=True)¶.
- class pikepdf.models.Outline(pdf, max_depth=15, strict=False)¶
Maintains a intuitive interface for creating and editing PDF document outlines, according to the PDF 1.7 Reference Manual section 12.3.
- Parameters
pdf (pikepdf._qpdf.Pdf) – PDF document object.
max_depth (int) – Maximum recursion depth to consider when reading the outline.
strict (bool) – If set to
False(default) silently ignores structural errors. Setting it to
Trueraises a
pikepdf.OutlineStructureErrorif any object references re-occur while the outline is being read or written.
See also
pikepdf.Pdf.open_outline()
- class pikepdf.models.OutlineItem(title, destination=None, page_location=None, action=None, obj=None, *, left=None, top=None, right=None, bottom=None, zoom=None)¶
Manages a single item in a PDF document outlines structure, including nested items.
- Parameters
title (str) – Title of the outlines item.
destination (Optional[Union[pikepdf.objects.Array, pikepdf.objects.String, pikepdf.objects.Name, int]]) – Page number, destination name, or any other PDF object to be used as a reference when clicking on the outlines entry. Note this should be
Noneif an action is used instead. If set to a page number, it will be resolved to a reference at the time of writing the outlines back to the document.
page_location (Optional[Union[pikepdf.models.outlines.PageLocation, str]]) – Supplemental page location for a page number in
destination, e.g.
PageLocation.Fit. May also be a simple string such as
'FitH'.
action (Optional[pikepdf.objects.Dictionary]) – Action to perform when clicking on this item. Will be ignored during writing if
destinationis also set.
obj (Optional[pikepdf.objects.Dictionary]) –
Dictionaryobject representing this outlines item in a
Nonefor creating a new object. If present, an existing object is modified in-place during writing and original attributes are retained.
left (Optional[float]) – Describes the viewport position associated with a destination.
top (Optional[float]) – Describes the viewport position associated with a destination.
bottom (Optional[float]) – Describes the viewport position associated with a destination.
right (Optional[float]) – Describes the viewport position associated with a destination.
zoom (Optional[float]) – Describes the viewport position associated with a destination.
This object does not contain any information about higher-level or neighboring elements.
- Valid destination arrays:
[page /XYZ left top zoom] generally [page, PageLocationEntry, 0 to 4 ints]
- classmethod from_dictionary_object(obj)¶
Creates a
OutlineItemfrom a PDF document’s
Dictionaryobject. Does not process nested items.
- Parameters
obj (pikepdf.objects.Dictionary) –
Dictionaryobject representing a single outline node.
- to_dictionary_object(pdf, create_new=False)¶
Creates a
Dictionaryobject from this outline node’s data, or updates the existing object. Page numbers are resolved to a page reference on the input
- Parameters
pdf (pikepdf._qpdf.Pdf) – PDF document object.
create_new (bool) – If set to
True, creates a new object instead of modifying an existing one in-place.
- Return type
pikepdf.objects.Dictionary
- class pikepdf.Permissions(accessibility=True, extract=True, modify_annotation=True, modify_assembly=False, modify_form=True, modify_other=True, print_lowres=True, print_highres=True)¶
Stores the user-level permissions for an encrypted PDF.
A compliant PDF reader/writer should enforce these restrictions on people who have the user password and not the owner password. In practice, either password is sufficient to decrypt all document contents. A person who has the owner password should be allowed to modify the document in any way. pikepdf does not enforce the restrictions in any way; it is up to application developers to enforce them as they see fit.
Unencrypted PDFs implicitly have all permissions allowed. Permissions can only be changed when a PDF is saved.
- class pikepdf.models.EncryptionMethod¶
Describes which encryption method was used on a particular part of a PDF. These values are returned by
pikepdf.EncryptionInfobut are not currently used to specify how encryption is requested.
- aes¶
The AES-based algorithm was used as described in the PDF 1.7 Reference Manual.
- aesv3¶
An improved version of the AES-based algorithm was used as described in the Adobe Supplement to the ISO 32000, requiring PDF 1.7 extension level 3. This algorithm still uses AES, but allows both AES-128 and AES-256, and improves how the key is derived from the password.
- class pikepdf.models.EncryptionInfo(encdict)¶
Reports encryption information for an encrypted PDF.
This information may not be changed, except when a PDF is saved. This object is not used to specify the encryption settings to save a PDF, due to non-overlapping information requirements.
- property user_password: bytes¶
If possible, return the user password.
The user password can only be retrieved when a PDF is opened with the owner password and when older versions of the encryption algorithm are used.
The password is always returned as
byteseven if it has a clear Unicode representation.
- class pikepdf.Annotation¶
Describes an annotation in a PDF, such as a comment, underline, copy editing marks, interactive widgets, redactions, 3D objects, sound and video clips.
See the PDF 1.7 Reference Manual section 12.5.6 for the full list of annotation types and definition of terminology.
New in version 2.12.
- property appearance_state¶
Returns the annotation’s appearance state (or None).
For a checkbox or radio button, the appearance state may be
pikepdf.Name.Onor
pikepdf.Name.Off.
- get_appearance_stream(*args, **kwargs)¶
Overloaded function.
get_appearance_stream(self: pikepdf.Annotation, which:.
get_appearance_stream(self: pikepdf.Annotation, which: pikepdf.Object, state:.
- state: The appearance state. For checkboxes or radio buttons, the
appearance state is usually whether the button is on or off.
- get_page_content_for_appearance(self: pikepdf.Annotation, name: pikepdf.Object, rotate: int, required_flags: int = 0, forbidden_flags: int = 3) bytes ¶
Generate content stream text that draws this annotation as a Form XObject.
- Parameters
name (pikepdf.Name) – What to call the object we create.
rotate – Should be set to the page’s /Rotate value or 0.
Note
This method is done mainly with QPDF. Its behavior may change when different QPDF versions are used.
- class pikepdf._qpdf.Attachments¶
This interface provides access to any files that are attached to this PDF, exposed as a Python
collections.abc.MutableMappinginterface.
The keys (virtual filenames) are always
str, and values are always
pikepdf.AttachedFileSpec.
Use this interface through
pikepdf.Pdf.attachments..
- update([E, ]**F)
- class pikepdf.AttachedFileSpec¶
In a PDF, a file specification provides name and metadata for a target file.
Most file specifications are simple file specifications, and contain only one attached file. Call
get_file()to get the attached file:
pdf = Pdf.open(...) fs = pdf.attachments['example.txt'] stream = fs.get_file()
To attach a new file to a PDF, you may construct a
AttachedFileSpec.
pdf = Pdf.open(...) fs = AttachedFileSpec.from_filepath(pdf, Path('somewhere/spreadsheet.xlsx')) pdf.attachments['spreadsheet.xlsx'] = fs
PDF supports the concept of having multiple, platform-specialized versions of the attached file (similar to resource forks on some operating systems). In theory, this attachment ought to be the same file, but encoded in different ways. For example, perhaps a PDF includes a text file encoded with Windows line endings (
\r\n) and a different one with POSIX line endings (
\n). Similarly, PDF allows for the possibility that you need to encode platform-specific filenames. pikepdf cannot directly create these, because they are arguably obsolete; it can provide access to them, however.
If you have to deal with multiple versions, use
get_all_filenames()to enumerate those available.
Described in the PDF 1.7 Reference Manual section 7.11.3.
New in version 3.0.
- __init__(self: pikepdf.AttachedFileSpec, q: pikepdf.Pdf, data: bytes, *, description: str = '', filename: str = '', mime_type: str = '', creation_date: str = '', mod_date: str = '') None ¶
Low-level constructor for attached file spec from data.
- Parameters
data – Resource to load.
description – Any description text for the attachment. May be shown in PDF viewers.
filename – Filename to display in PDF viewers.
mime_type – Helps PDF viewers decide how to display the information.
creation_date – PDF date string for when this file was creation.
mod_date – PDF date string for when this file was last modified.
- property filename¶
The main filename for this file spec.
In priority order, getting this returns the first of /UF, /F, /Unix, /DOS, /Mac if multiple filenames are set. Setting this will set a UTF-8 encoded Unicode filename and write it to /UF.
- from_filepath(path, *, description='')¶
Construct a file specification from a file path.
This function will automatically add a creation and modified date using the file system, and a MIME type inferred from the file’s extension.
- Parameters
pdf (pikepdf._qpdf.Pdf) – The Pdf to attach this file specification to.
path (Union[pathlib.Path, str]) – A file path for the file to attach to this Pdf.
description (str) – An optional description. May be shown to the user in PDF viewers.
- get_all_filenames(self: pikepdf.AttachedFileSpec) dict ¶
Return a Python dictionary that describes all filenames.
The returned dictionary is not a pikepdf Object.
Multiple filenames are generally a holdover from the pre-Unicode era. Modern PDFs can generally set UTF-8 filenames and avoid using punctuation or other marks that are forbidden in filenames.
- get_file(*args, **kwargs)¶
Overloaded function.
get_file(self: pikepdf.AttachedFileSpec) -> pikepdf._qpdf.AttachedFile
Return the primary (usually only) attached file.
get_file(self: pikepdf.AttachedFileSpec, arg0: pikepdf.Object) -> pikepdf._qpdf.AttachedFile
Return an attached file selected by
pikepdf.Name.
Typical names would be
/UFand
/F. See PDF 1.7 Reference Manual for other obsolete names.
- class pikepdf._qpdf.AttachedFile¶
An object that contains an actual attached file. These objects do not need to be created manually; they are normally part of an AttachedFileSpec.
New in version 3.0.
- class pikepdf.NameTree¶
An object for managing name tree data structures in PDFs.
A name tree is a key-value data structure. The keys are any binary strings (that is, Python
bytes). If
strselected is provided as a key, the UTF-8 encoding of that string is tested. Name trees are (confusingly) not indexed by PDF name objects.
The keys are ordered; pikepdf will ensure that the order is preserved.
The value may be any PDF object. Typically it will be a dictionary or array.
If the name tree is invalid in any way, pikepdf will automatically repair it if it is able to. There should not be any reason to access the internal nodes of a name tree; use this interface instead. Likewise, pikepdf will automatically rebalance the tree as appropriate (all thanks to libqpdf).
NameTrees are used to store certain objects like file attachments in a PDF. Where a more specific interface exists, use that instead, and it will manipulate the name tree in a semantic correct manner for you.
Do not modify the internal structure of a name tree while you have a
NameTreereferencing it. Access it only through the
NameTreeobject.
Names trees are described in the PDF 1.7 Reference Manual section 7.9.6. See section 7.7.4 for a list of PDF objects that are stored in name trees.. | https://pikepdf.readthedocs.io/en/latest/api/models.html | CC-MAIN-2021-49 | refinedweb | 4,168 | 51.34 |
I am doing cross compiling between C++ and C compiler for header files and desperately need help on figuring out namespace issue error generated by C++ compiler.
Here are two header files:abc1.h, abc2.h and a C/C++ file containing the main().
When I compile this code on C, it works fine. However whenever I compile this on C++ compiler. It gives me the following error:
g++ -g -Wall cpptest.c -o cpptest
"abc2.h:3: error: using typedef-name ‘abc’ after ‘struct’
abc1.h:9: error: ‘abc’ has a previous declaration here"
I need to have both data structure available (struct _abc and struct abc)
someone typedef the _abc to abc inside abc1.h and I cannot change this file. I can only change abc2.h and the C/C++ file.
Anyone can explain to me why C++ compiler cannot accept this?
Anyone know of solution without changing abc1.h? (a work around)
/*********** Header file abc1.h************/ #ifndef __ABC1__ struct _abc { int a; int b; }/*abc*/; /* if I comment out The line below and uncommented the abc above, it works well. typedef seems to be an issue in C++ compiler. However I can not touch this header file. */ typedef struct _abc abc; #endif
/*********** Header file abc2.h************/ #ifndef __ABC2__ struct abc { char *c; void *d; }; #endif
/* C/C++ file named abc.c or abc.cpp */ #include "abc1.h" #include "abc2.h" #include <stdio.h> int main() { printf("testing \n"); return 0; } | https://www.daniweb.com/programming/software-development/threads/88863/header-files-c-vs-c-compiler-question | CC-MAIN-2017-13 | refinedweb | 244 | 79.46 |
Hello,
I'm stumped by a problem that I think should have a fairly straightforward answer...
I have a few hundred features with a number of attributes, all numerical. Some features have all attributes filled in, but some have a handful of null values. Is there any way of calculating the mean value for each feature that takes into account these null values? For instance, it calculates how many attributes are not null for each feature and uses that to generate the mean?
Would really appreciate any help on this!
Thanks
what is the source type of your data? Featureclass tables? dbase? excel?
And I assume you want to automate the whole process rather than doing the process table by table and field by field.
You could use numpy, and convert the tables to numpy arrays and do the columns all at once.
#..... stuff before includes FeatureclassToNumPyArray or TabletoNumPyArray # to get the table into an array... Then use nanmean to the mean b Out[6]: array([[ nan, 5., nan, nan, 5.], [ nan, nan, 8., 6., nan], [ 5., nan, 7., 4., 7.], [ nan, 9., nan, 9., nan], [ nan, nan, 4., 9., 6.], [ nan, 4., nan, 6., 5.]]) np.nanmean(b, axis=1) Out[7]: array([ 5. , 7. , 5.75 , 9. , 6.33333333, 5. ])
Then you could cycle through the tables and get the mean
PS 'nan' is the numpy equivalent of None in python and nodata in tables.
Thanks Dan, I'm not sure I fully follow you. The data source is a featureclass attribute table and ideally I'd be looking to use the Field Calculator or a python script to find the means. Unfortunately, I really don't follow the code block you've copied in, can you suggest how I might make use of it? I had thought numpy might be the answer.
Fairly simple database manipulation; you want the mean that includes the number records where field-of-choice is null, right? Try this: right click on the field name and select Statistics. That will give you mean. Select where field-of-choice is null. Calculate the field-of-choice value = mean from the Statistics results.
With all due respect to Dan and his elegant solution, sometimes going with the basics can be useful too....
Hi Joe, I'm looking for statistics to be done on a field array, not only one field. So each feature has ten fields that may or may not be null... I want a script that figures out how many are null to then sum and correctly produce the mean (e.g. if only 8 of the 10 fields are scored, the sum of those will be divided by 8). If you're answer addresses this, please could you rephrase it as I don't follow it.
I had thought of that of course.. but I think Luke has multiple fields in a table, and there is a need to do this all at once, rather than one field at a time, hence, the all fields all at once solution, which can then be wrapped in an all files, for all fields all at once solution.
Of course, the problem could be simpler than assumed
Yep... I get it.. But there are so many "it's gotta be automated, but I can't describe it" types of questions here....
I am trying the following but still not having any success:
def mean(Welcoming, Access, Community, Safe, Provision, Quality, Security, Dog, Litter, Grounds): fieldList = [Welcoming, Access, Community, Safe, Provision, Quality, Security, Dog, Litter, Grounds] validList = [] for i in fieldList: if i != None: validList.append(i) meanVal = sum(validList)/len(validList) return meanVal
I get an error that 9 required positional arguments are missing. Any suggestions please?
[edited to use Python syntax highlighting, with proper indent - VA]
How are you calling your function from the field calculator? Obviously the function definition is going into the code block, but what are you putting into the expression box. The error message makes me think you are calling your function incorrectly.
Hi Joshua, I'm putting this in:
mean([ !Welcoming!, !Access!, !Community!, !Safe!, !Provision!, !Quality!, !Security!, !Dog!, !Litter!, !Grounds! ]) | https://community.esri.com/t5/data-management-questions/how-to-field-calculate-the-mean-of-a-field-array/td-p/658815 | CC-MAIN-2022-21 | refinedweb | 692 | 72.76 |
how can I link a button that I made to modifier
On 16/07/2015 at 00:03, xxxxxxxx wrote:
hi ,
I have some knowledge in xpresso and trying to make simple tool , I want to link initialize button to user data button that I had created ,
please give me a simple answer because I am beginner with python
**
**
On 16/07/2015 at 02:06, xxxxxxxx wrote:
Hello and welcome,
could you please elaborate what exactly you want to do? What modifier do you talk about? What exactly do you mean with "link initialize button to user data button", what workflow do you want to create?
Best wishes,
Sebastian
On 16/07/2015 at 13:53, xxxxxxxx wrote:
the modifier that I am talking about is mesh deformer and I want to link the button in the picture below
and thanks for your support
On 16/07/2015 at 13:58, xxxxxxxx wrote:
and I want to link that button to this user data that I created
On 17/07/2015 at 07:20, xxxxxxxx wrote:
Hello,
I wouldn't know any Xpresso way. But you could add your userdata to a Python Generator or Python Tag. Then you could catch the event when your userdata button is pressed to use CallButton() to "press" the button on the deformer. Something like this:
def message(id,data) : if id == c4d.MSG_DESCRIPTION_CHECKUPDATE: # check if button if data["descid"][1].dtype == c4d.DTYPE_BUTTON: # get id buttonID = data["descid"][1].id # lets assume that the userdata button has the ID 1 if buttonID == 1: # get linked "Mesh" deformer from the userdata link 2 meshDeformer = op[c4d.ID_USERDATA,2] if meshDeformer is not None: c4d.CallButton(meshDeformer, c4d.ID_CA_MESH_DEFORMER_OBJECT_INITIAL)
The code assumes that the userdata button has the ID 1 and that the target modifier is referenced in a userdata link field with the ID 2.
best wishes,
Sebastian
On 19/07/2015 at 15:03, xxxxxxxx wrote:
thanks a lot | https://plugincafe.maxon.net/topic/8929/11859_how-can-i-link-a-button-that-i-made-to-modifier | CC-MAIN-2019-18 | refinedweb | 327 | 65.56 |
[367
Comment
Thanks Thomas. I was trying the latest version. I did not notice there was another version of Arduino, I was trying 0022 and 1.0.1 I used 0100 now and it had uploaded the file. Where did you read this I missed it. Now I have to test it out. Thanks again.
I can't compile AMP220 if the AMP_Config.h is altered for simulation mode;
#include "APM_Config_HILmode.h" // for test in HIL mode with AeroSIM Rc 3.83
//#include "APM_Config_Rover.h" // to be used with the real Traxxas model Monster Jam Grinder
Here is the error message:
C:\Users\Chris\Downloads\arduino-0100-relax-windows\arduino-0100-relax\libraries\GCS_MAVLink/include/mavlink/v1.0/ardupilotmega/../common/./mavlink_msg_rc_channels_scaled.h: In function 'void send_servo_out(mavlink_channel_t)':
C:\Users\Chris\Downloads\arduino-0100-relax-windows\arduino-0100-relax\libraries\GCS_MAVLink/include/mavlink/v1.0/ardupilotmega/../common/./mavlink_msg_rc_channels_scaled.h:191: error: too few arguments to function 'void mavlink_msg_rc_channels_scaled_send(mavlink_channel_t, uint32_t, uint8_t, int16_t, int16_t, int16_t, int16_t, int16_t, int16_t, int16_t, int16_t, uint8_t)'
GCS_Mavlink:412: error: at this point in file
I'm using Arduino 0100-relax.
Help please!
Developer Comment by Jean-Louis Naudin on June 20, 2012 at 11:05pm
Hello Chris,
To compile successfully with HIL mode selected, you need to add two lines in the GCS_Mavlink.pde below the line 380, see below:
I update the GIT repo with this update.
Regards, Jean-Louis
3D Robotics Comment by Chris Anderson on June 21, 2012 at 3:25pm
Adam: thanks for the catch. That link is now fixed. When I get back from travelling, I'll start work on a proper ArduRover manual on the Google Code site.
That works!
Thanks
Where is this Beginner's guide?
Thanks!
Alan
@KM6VV: The entrance is the "Getting Started" link in the top nav bar.
Thanks!
I guess I was looking for a PDF.
Alan
I'm a bit embarassed to ask this. How do I run a mission?? I've created a mission and uploaded it to rover.
Now what? What do I button should I push to make rover do somthing?
Admin Comment by Thomas J Coyle III on June 24, 2012 at 4:42am
@Chris C,
This page from the ArduPlane Wiki may give you an idea of what you should do:
Unless you have the Oilpan attached, I suspect that all you have to wait for is the GPS lock and you should be ready to go. I assume that your starting point is where all of your programmed waypoints are starting from?
@JLN,
Have you already discussed starting and running a waypoint course elsewhere in this thread, or can you provide some basic starup instructions in response to Chris' question above?
Regards,
TCIII | http://diydrones.com/profiles/blog/show?id=705844%3ABlogPost%3A844341&commentId=705844%3AComment%3A898601&xg_source=activity | CC-MAIN-2013-20 | refinedweb | 454 | 65.73 |
How Do I Write a REST API in Node.js?
When building a back end for a REST API, Express.js is often the first choice among Node.js frameworks. While it also supports building static HTML and templates, in this series, we’ll focus on back-end development using TypeScript. The resulting REST API will be one that any front-end framework or external back-end service would be able to query.
You’ll need:
- Basic knowledge of JavaScript and TypeScript
- Basic knowledge of Node.js
- Basic knowledge of REST architecture (cf. this section of my previous REST API article if needed)
- A ready installation of Node.js (preferably version 14+)
In a terminal (or command prompt), we’ll create a folder for the project. From that folder, run
npm init. That will create some of the basic Node.js project files we need.
Next, we’ll add the Express.js framework and some helpful libraries:
```shell
npm install --save express debug winston express-winston cors
```
There are good reasons these libraries are Node.js developer favorites:
- `debug` is a module that we will use to avoid calling `console.log()` while developing our application. This way, we can easily filter debug statements during troubleshooting. They can also be switched off entirely in production instead of having to be removed manually.
- `winston` is responsible for logging requests to our API and the responses (and errors) returned.
- `express-winston` integrates directly with Express.js, so that all standard API-related `winston` logging code is already done.
- `cors` is a piece of Express.js middleware that allows us to enable cross-origin resource sharing. Without this, our API would only be usable from front ends being served from the exact same subdomain as our back end.
Our back end uses these packages when it’s running. But we also need to install some development dependencies for our TypeScript configuration. For that, we’ll run:
```shell
npm install --save-dev @types/cors @types/express @types/debug source-map-support tslint typescript
```
These dependencies are required to enable TypeScript for our app’s own code, along with the types used by Express.js and other dependencies. This can save a lot of time when we’re using an IDE like WebStorm or VSCode by allowing us to complete some function methods automatically while coding.
The final dependencies in package.json should look like this:
"dependencies": { "debug": "^4.2.0", "express": "^4.17.1", "express-winston": "^4.0.5", "winston": "^3.3.3", "cors": "^2.8.5" }, "devDependencies": { "@types/cors": "^2.8.7", "@types/debug": "^4.1.5", "@types/express": "^4.17.2", "source-map-support": "^0.5.16", "tslint": "^6.0.0", "typescript": "^3.7.5" }
Now that we have all our required dependencies installed, let’s start to build up our own code!
TypeScript REST API Project Structure
For this tutorial, we are going to create just three files:
./app.ts
./common/common.routes.config.ts
./users/users.routes.config.ts
The idea behind the project structure’s two folders (
common and
users) is to have individual modules that have their own responsibilities. In this sense, we are eventually going to have some or all of the following for each module:
- Route configuration to define the requests our API can handle
- Services for tasks such as connecting to our database models, doing queries, or connecting to external services that are required by the specific request
- Middleware for running specific request validations before the final controller of a route handles its specifics
- Models for defining data models matching a given database schema, to facilitate data storage and retrieval
- Controllers for separating the route configuration from the code that finally (after any middleware) processes a route request, calls the above service functions if necessary, and gives a response to the client
This folder structure provides an early starting point for the rest of this tutorial series and enough to start practicing.
A Common Routes File in TypeScript
In the
common folder, let’s create the
common.routes.config.ts file to look like the following:
import express from 'express';

export class CommonRoutesConfig {
    app: express.Application;
    name: string;

    constructor(app: express.Application, name: string) {
        this.app = app;
        this.name = name;
    }

    getName() {
        return this.name;
    }
}
The way that we are creating the routes here is optional. But since we are working with TypeScript, our routes scenario is an opportunity to practice using inheritance with the
extends keyword, as we’ll see shortly. In this project, all route files have the same behavior: They have a name (which we will use for debugging purposes) and access to the main Express.js
Application object.
Now, we can start to create the users route file. At the
users folder, let’s create
users.routes.config.ts and start to code it like this:
import {CommonRoutesConfig} from '../common/common.routes.config';
import express from 'express';

export class UsersRoutes extends CommonRoutesConfig {
    constructor(app: express.Application) {
        super(app, 'UsersRoutes');
    }
}
Here, we are importing the
CommonRoutesConfig class and extending it to our new class, called
UsersRoutes. With the constructor, we send the app (the main
express.Application object) and the name UsersRoutes to
CommonRoutesConfig’s constructor.
This example is quite simple, but when scaling to create several route files, this will help us avoid duplicate code.
Suppose we would want to add new features in this file, such as logging. We could add the necessary field to the
CommonRoutesConfig class, and then all the routes that extend
CommonRoutesConfig will have access to it.
Using TypeScript Abstract Functions for Similar Functionality Across Classes
What if we would like to have some functionality that is similar between these classes (like configuring the API endpoints), but that needs a different implementation for each class? One option is to use a TypeScript feature called abstraction.
Let’s create a very simple abstract function that the
UsersRoutes class (and future routing classes) will inherit from
CommonRoutesConfig. Let’s say that we want to force all routes to have a function (so we can call it from our common constructor) named
configureRoutes(). That’s where we’ll declare the endpoints of each routing class’ resource.
To do this, we’ll add three quick things to
common.routes.config.ts:
- The keyword abstract to our class line, to enable abstraction for this class.
- A new function declaration at the end of our class,
abstract configureRoutes(): express.Application;. This forces any class extending
CommonRoutesConfig to provide an implementation matching that signature—if it doesn't, the TypeScript compiler will throw an error.
- A call to
this.configureRoutes(); at the end of the constructor, since we can now be sure that this function will exist.
The result:
import express from 'express';

export abstract class CommonRoutesConfig {
    app: express.Application;
    name: string;

    constructor(app: express.Application, name: string) {
        this.app = app;
        this.name = name;
        this.configureRoutes();
    }

    getName() {
        return this.name;
    }

    abstract configureRoutes(): express.Application;
}
With that, any class extending
CommonRoutesConfig must have a function called
configureRoutes() that returns an
express.Application object. That means
users.routes.config.ts needs updating:
import {CommonRoutesConfig} from '../common/common.routes.config';
import express from 'express';

export class UsersRoutes extends CommonRoutesConfig {
    constructor(app: express.Application) {
        super(app, 'UsersRoutes');
    }

    configureRoutes() {
        // (we'll add the actual route configuration here next)
        return this.app;
    }
}
As a recap of what we’ve made:
We are first importing the
common.routes.config file, then the
express module. We then define the
UserRoutes class, saying that we want it to extend the
CommonRoutesConfig base class, which implies that we promise that it will implement
configureRoutes().
To send information along to the
CommonRoutesConfig class, we are using the
constructor of the class. It expects to receive the
express.Application object, which we will describe in greater depth in the next step. With
super(), we pass to
CommonRoutesConfig’s constructor the application and the name of our routes, which in this scenario is UsersRoutes. (
super(), in turn, will call our implementation of
configureRoutes().)
Configuring the Express.js Routes of the Users Endpoints
The
configureRoutes() function is where we will create the endpoints for users of our REST API. There, we will use the application and its route functionalities from Express.js.
The idea in using the
app.route() function is to avoid code duplication, which is easy since we’re creating a REST API with well-defined resources. The main resource for this tutorial is users. We have two cases in this scenario:
- When the API caller wants to create a new user or list all existing users, the endpoint should initially just have
users at the end of the requested path. (We won't be getting into query filtering, pagination, or other such queries in this article.)
- When the caller wants to do something specific to a specific user record, the request’s resource path will follow the pattern
users/:userId.
The way
.route() works in Express.js lets us handle HTTP verbs with some elegant chaining. This is because
.get(),
.post(), etc., all return the same instance of the
IRoute that the first
.route() call does. The final configuration will be like this:
configureRoutes() {
    this.app.route(`/users`)
        .get((req: express.Request, res: express.Response) => {
            res.status(200).send(`List of users`);
        })
        .post((req: express.Request, res: express.Response) => {
            res.status(200).send(`Post to users`);
        });

    this.app.route(`/users/:userId`)
        .all((req: express.Request, res: express.Response, next: express.NextFunction) => {
            // this middleware function runs before any request to /users/:userId
            // but it doesn't accomplish anything just yet---
            // it simply passes control to the next applicable function below using next()
            next();
        })
        .get((req: express.Request, res: express.Response) => {
            res.status(200).send(`GET requested for id ${req.params.userId}`);
        })
        .put((req: express.Request, res: express.Response) => {
            res.status(200).send(`PUT requested for id ${req.params.userId}`);
        })
        .patch((req: express.Request, res: express.Response) => {
            res.status(200).send(`PATCH requested for id ${req.params.userId}`);
        })
        .delete((req: express.Request, res: express.Response) => {
            res.status(200).send(`DELETE requested for id ${req.params.userId}`);
        });

    return this.app;
}
The above code lets any REST API client call our
users endpoint with a
POST or a
GET request. Similarly, it lets a client call our
/users/:userId endpoint with a
GET,
PUT,
PATCH, or
DELETE request.
But for
/users/:userId, we’ve also added generic middleware using the
all() function, which will be run before any of the
get(),
put(),
patch(), or
delete() functions. This function will be beneficial when (later in the series) we create routes that are meant to be accessed only by authenticated users.
You might have noticed that in our
.all() function—as with any piece of middleware—we have three types of fields:
Request,
Response, and
NextFunction.
- The Request is the way Express.js represents the HTTP request to be handled. This type upgrades and extends the native Node.js request type.
- The Response is likewise how Express.js represents the HTTP response, again extending the native Node.js response type.
- No less important, the
NextFunction serves as a callback function, allowing control to pass through any other middleware functions. Along the way, all middleware will share the same request and response objects before the controller finally sends a response back to the requester.
Our Node.js Entry-point File,
app.ts
Now that we have configured some basic route skeletons, we will start configuring the application’s entry point. Let’s create the
app.ts file at the root of our project folder and begin it with this code:
import express from 'express';
import * as http from 'http';
import * as bodyparser from 'body-parser';
import * as winston from 'winston';
import * as expressWinston from 'express-winston';
import cors from 'cors';
import {CommonRoutesConfig} from './common/common.routes.config';
import {UsersRoutes} from './users/users.routes.config';
import debug from 'debug';
Only two of these imports are new at this point in the article:
http is a Node.js-native module. It's required to start our Express.js application.
body-parser is middleware that comes with Express.js. It parses the request (in our case, as JSON) before control goes to our own request handlers.
Now that we’ve imported the files, we will start declaring the variables that we want to use:
const app: express.Application = express();
const server: http.Server = http.createServer(app);
const port: Number = 3000;
const routes: Array<CommonRoutesConfig> = [];
const debugLog: debug.IDebugger = debug('app');
The
express() function returns the main Express.js application object that we will pass around throughout our code, starting with adding it to the
http.Server object. (We will need to start the
http.Server after configuring our
express.Application.)
We’ll listen on port 3000 instead of the standard ports 80 (HTTP) or 443 (HTTPS) because those would typically be used for an app’s front end.
Why Port 3000?
There is no rule that the port should be 3000—if unspecified, an arbitrary port will be assigned—but 3000 is used throughout the documentation examples for both Node.js and Express.js, so we continue the tradition here.
Can Node.js Share Ports With the Front End?
We can still run locally at a custom port, even when we want our back end to respond to requests on standard ports. This would require a reverse proxy to receive requests on port 80 or 443 with a specific domain or a subdomain. It would then redirect them to our internal port 3000.
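To make that concrete, here is a minimal sketch of such a reverse-proxy setup (nginx is just one common choice, and the domain name is a made-up placeholder; the article itself doesn't prescribe a specific proxy):

```nginx
# Forward public traffic on port 80 for our API's (hypothetical) domain
# to the Node.js process listening internally on port 3000.
server {
    listen 80;
    server_name api.example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

With TLS, the same idea applies on port 443, with the proxy terminating HTTPS before forwarding plain HTTP to port 3000.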
The
routes array will keep track of our routes files for debugging purposes, as we’ll see below.
Finally,
debugLog will end up as a function similar to
console.log, but better: It’s easier to fine-tune because it’s automatically scoped to whatever we want to call our file/module context. (In this case, we’ve called it “app” when we passed that in a string to the
debug() constructor.)
Now, we’re ready to configure all our Express.js middleware modules and the routes of our API:
// here we are adding middleware to parse all incoming requests as JSON
app.use(bodyparser.json());

// here we are adding middleware to allow cross-origin requests
app.use(cors());

// here we are configuring the expressWinston logging middleware,
// which will automatically log all HTTP requests handled by Express.js
app.use(expressWinston.logger({
    transports: [
        new winston.transports.Console()
    ],
    format: winston.format.combine(
        winston.format.colorize(),
        winston.format.json()
    )
}));

// here we are adding the UserRoutes to our array,
// after sending the Express.js application object to have the routes added to our app!
routes.push(new UsersRoutes(app));

// here we are configuring the expressWinston error-logging middleware,
// which doesn't *handle* errors per se, but does *log* them
app.use(expressWinston.errorLogger({
    transports: [
        new winston.transports.Console()
    ],
    format: winston.format.combine(
        winston.format.colorize(),
        winston.format.json()
    )
}));

// this is a simple route to make sure everything is working properly
app.get('/', (req: express.Request, res: express.Response) => {
    res.status(200).send(`Server up and running!`)
});
You might have noticed that the
expressWinston.errorLogger is set after we define our routes. This is not a mistake! As the express-winston documentation states:
The logger needs to be added AFTER the express router (
app.router) and BEFORE any of your custom error handlers (
express.handler).
Finally and most importantly:
server.listen(port, () => {
    debugLog(`Server running at ${port}`);
    routes.forEach((route: CommonRoutesConfig) => {
        debugLog(`Routes configured for ${route.getName()}`);
    });
});
This actually starts our server. Once it’s started, Node.js will run our callback function, which reports that we’re running, followed by the names of all the routes we’ve configured—so far, just
UsersRoutes.
Updating
package.json to Transpile TypeScript to JavaScript and Run the App
Now that we have our skeleton ready to run, we first need some boilerplate configuration to enable TypeScript transpilation. Let’s add the file
tsconfig.json in the project root:
{
    "compilerOptions": {
        "target": "es2016",
        "module": "commonjs",
        "outDir": "./dist",
        "strict": true,
        "esModuleInterop": true,
        "inlineSourceMap": true
    }
}
Then we just need to add the final touches to
package.json in the form of the following scripts:
"scripts": { "start": "tsc && node ./dist/app.js", "debug": "export DEBUG=* && npm run start", "test": "echo \"Error: no test specified\" && exit 1" },
The
test script is a placeholder that we’ll replace later in the series.
The tsc in the
start script belongs to TypeScript. It’s responsible for transpiling our TypeScript code into JavaScript, which it will output into the
dist folder. Then, we just run the built version with
node ./dist/app.js.
The
debug script calls the
start script but first defines a
DEBUG environment variable. This has the effect of enabling all of our
debugLog() statements (plus similar ones from Express.js itself, which uses the same
debug module we do) to output useful details to the terminal—details that are (conveniently) otherwise hidden when running the server in production mode with a standard
npm start.
Try running
npm run debug yourself, and afterward, compare that with
npm start to see how the console output changes.
Tip: You can limit the debug output to our
app.ts file’s own
debugLog() statements using
DEBUG=app instead of
DEBUG=*. The
debug module is generally quite flexible, and this feature is no exception.
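A few illustrative invocations of that filtering, based on the debug package's documented namespace syntax (comma-separated patterns, with a leading minus sign to exclude a pattern):

```shell
DEBUG=app npm start            # only our own debugLog() statements
DEBUG=express:* npm start      # only Express.js's internal debug output
DEBUG=*,-express:* npm start   # everything except Express.js internals
```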
Windows users will probably need to change the
export to
SET since
export is how it works on Mac and Linux. If your project needs to support multiple development environments, the cross-env package provides a straightforward solution here.
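For instance, a cross-platform take on our scripts might look like the following sketch (assuming cross-env has been installed as a dev dependency):

```json
"scripts": {
  "start": "tsc && node ./dist/app.js",
  "debug": "cross-env DEBUG=* npm run start",
  "test": "echo \"Error: no test specified\" && exit 1"
},
```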
Testing the Live Express.js Back End
With
npm run debug or
npm start still going, our REST API will be ready to service requests on port 3000. At this point, we can use cURL, Postman, Insomnia, etc. to test the back end.
Since we’ve only created a skeleton for the users resource, we can simply send requests without a body to see that everything is working as expected. For example:
curl --location --request GET 'localhost:3000/users/12345'
Our back end should send back the answer
GET requested for id 12345.
As for
POSTing:
curl --location --request POST 'localhost:3000/users' \ --data-raw ''
This and all other types of requests that we built skeletons for will look quite similar.
Poised for Rapid Node.js REST API Development with TypeScript
In this article, we started to create a REST API by configuring the project from scratch and diving into the basics of the Express.js framework. Then, we took our first step toward mastering TypeScript by building a pattern with
UsersRoutesConfig extending
CommonRoutesConfig, a pattern that we will reuse for the next article in this series. We finished by configuring our
app.ts entry point to use our new routes and
package.json with scripts to build and run our application.
But even the basics of a REST API made with Express.js and TypeScript are fairly involved. In the next part of this series, we focus on creating proper controllers for the users resource and dig into some useful patterns for services, middleware, controllers, and models.
The full project is available on GitHub, and the code as of the end of this article is found in the
toptal-article-01 branch.
Understanding the basics
Can I use TypeScript with Node.js?
Absolutely! It's very common for popular npm packages (including Express.js) to have corresponding TypeScript type definition files. This is true about Node.js itself, plus included subcomponents like its debug package.
Is Node.js good for REST APIs?
Yes. Node.js can be used by itself to create production-ready REST APIs, and there are also several popular frameworks like Express.js to reduce the inevitable boilerplate.
Is TypeScript difficult to learn?
No, it's not difficult to start learning TypeScript for those with a modern JavaScript background. It's even easier for those with experience in object-oriented programming. But mastering all of TypeScript's nuances and best practices takes time, as with any skill.
Should I use TypeScript?
It depends on the project, but it's definitely recommended for Node.js programming. It's a more expressive language for modeling real-world problem domains on the back end. This makes code more readable and reduces the potential for bugs.
What is TypeScript used for?
TypeScript is used anywhere JavaScript is found, but it's especially well suited to larger applications. It uses JavaScript as a base, adding static typing and much better support for the object-oriented programming (OOP) paradigm. This, in turn, supports a more advanced development and debugging experience. | https://www.toptal.com/express-js/nodejs-typescript-rest-api-pt-1 | CC-MAIN-2021-10 | refinedweb | 3,398 | 59.3 |
I have the following simple code. It consists of an array with 4 items. It first displays the items, then asks the user to delete as many as they want (with an option to continue without deleting anything). After the user has removed or not removed items, the program will display the new, updated list.
I do not know how to delete specific things from my array. This is the code (it works until you delete something; if you choose not to delete anything, it works fine):
#include <iostream>
#include <string>
using namespace std;

class test {
public:
    int del;
    string word;
    void one(test array[]);
};

void test::one(test array[]) {
    array[0].word = "0 cat";
    array[1].word = "1 dog";
    array[2].word = "2 bird";
    array[3].word = "3 fish";
    for (int i = 0; i < 4; i++) {
        cout << array[i].word << endl;
    }
    cout << "DELETE NOW. input 4 to confirm/finish" << endl;
    for (int i = 0; i < 3; i++) {
        cin >> del;
        if (del == 4)
            break;
        delete &array[del].word;
    }
    for (int i = 0; i < 4; i++) {
        cout << array[i].word << endl;
    }
}

int main() {
    test array[10];
    test tes;
    tes.one(array);
    system("pause");
    return 0;
}
any help is greatly appreciated | https://www.daniweb.com/programming/software-development/threads/420295/how-to-delete-an-an-object-element-thing-from-an-array#post1792999 | CC-MAIN-2016-40 | refinedweb | 200 | 78.55 |
Hi

On Sun, Oct 15, 2006 at 02:27:55AM +0800, jserv at linux2.cc.ntu.edu.tw wrote:
> On Sat, Oct 14, 2006 at 08:07:23PM +0200, Diego Biurrun wrote:
> > I recently cleaned up all license headers in FFmpeg, please fix this one
> > using the others as a template. "This library" should be FFmpeg, LGPL
> > version should be 2.1.
>
> hi Diego,
>
> Thanks for noticing. I have attached the new patch in this mail.
>
> Sincerely,
> Jim Huang

[...]

> +#ifndef min
> +#define min(a,b) ((a < b) ? a : b) ///< Return the smaller of 2 values.
> +#define max(a,b) ((a > b) ? a : b) ///< Return the larger of 2 values.
> +#endif

there's FFMIN/FFMAX in ffmpeg

[...]

> +/**
> + * Initialization callback for av_read_image.
> + * @param opaque Context pointer.
> + * @param info Image info from codec.
> + * @return 0 for success; otherwise failure.
> + */
> +static int read_image_alloc_cb(void *opaque, AVImageInfo *info);

AVImageInfo and the other image1-related things are deprecated and no patch
which depends on them will be accepted

rest not reviewed as the whole code depends on deprecated code

also the recommendation to enable the disabled image1 formats is VERY
dangerous as that code contains several possibly exploitable buffer overflows
Say you have a project named SampleProject. And you want to create a new unit test suite. So you Command-N to make a new file, and select “Unit Test Case Class.”
If we give it the name AppleTests, here’s what Apple provides:
//
//  AppleTests.swift
//  SampleProjectTests
//
//  Created by Jon Reid on 12/12/20.
//

import XCTest

class AppleTests: XCTestCase {

    override func setUpWithError() throws {
        // Put setup code here. This method is called before the invocation of each test method in the class.
    }

    override func tearDownWithError() throws {
        // Put teardown code here. This method is called after the invocation of each test method in the class.
    }

    func testExample() throws {
        // This is an example of a functional test case.
        // Use XCTAssert and related functions to verify your tests produce the correct results.
    }

    func testPerformanceExample() throws {
        // This is an example of a performance test case.
        self.measure {
            // Put the code you want to measure the time of here.
        }
    }
}
It’s instructive… the first time. After that, it’s only noisy. So I use a customized file template for new unit test suites. Command-N and select “Swift XCTest Test Suite.”
It suggests a file name ending with Tests. If we give it the name QualityCodingTests, here’s what I provide:
@testable import SampleProject
import XCTest

final class QualityCodingTests: XCTestCase {
    func test_zero() throws {
        XCTFail("Tests not yet implemented in QualityCodingTests")
    }
}
Isn’t that better? You can download it here:
Curious about the problems I have with Apple’s template and the decisions I made for my custom template? Read on…
What's In Apple’s Template?
Let’s look more closely at what each file template provides. We’ll start with Apple’s “Unit Test Case Class” template.
Prompting for Unnecessary Inputs
If you select the template, Xcode displays a large dialog:
It feels like this large, clunky dialog handles various dynamic options. For test suites:
- It asks for a class name but doesn’t suggest any pattern.
- It asks if we want it to be a subclass of XCTestCase, which of course we do.
- It asks for the programming language.
What do we get next? Another dialog. Xcode prompts us for the location, group, and target.
File Comment Block
Then we get the file content. It starts with a file comment block:
//
//  AppleTests.swift
//  SampleProjectTests
//
//  Created by Jon Reid on 12/12/20.
//
What do you do with these? I delete them, every time. They serve no useful purpose in a project. Even if you work at a company that requires a standard file comment block at the top of each file, it doesn’t look like this. Delete.
Import Lacking Production Code
Next, we have the import statements. Or rather, import statement, singular:
import XCTest
This is incomplete. To access your production code, you need to @testable import the module.
Placeholders for Set-Up and Tear-Down
After the class declaration, we get placeholders for set-up and tear-down:

override func setUpWithError() throws {
    // Put setup code here. This method is called before the invocation of each test method in the class.
}

override func tearDownWithError() throws {
    // Put teardown code here. This method is called after the invocation of each test method in the class.
}
These are instructive, with explanatory comments. But I prefer not to create set-up and tear-down when I start creating a new test suite. I don’t want to make assumptions about what belongs there. Instead, I code a test, then another test. Then I can begin to see what might belong in set-up.
Set-up is there to serve the tests. Wait until you have tests so you can discover what belongs there. Delete them, comments and all.
…Wait, Really Delete Those Placeholders?
You may resist the idea of deleting these function placeholders. You may want them there because you don’t want to type them in later. That’s where my test-oriented code snippets come in. I’m lazy, and don’t enjoy typing the same things over and over. So my code snippets define:
These code snippets are available separately by subscribing to Quality Coding:
Two Test Placeholders, Including a Performance Test
Finally, we get a place to put our test. But again, they are mini-tutorials:

func testExample() throws {
    // This is an example of a functional test case.
    // Use XCTAssert and related functions to verify your tests produce the correct results.
}

func testPerformanceExample() throws {
    // This is an example of a performance test case.
    self.measure {
        // Put the code you want to measure the time of here.
    }
}
Instructive code with explanatory comments is nice the first time. After that, it’s noise. I want to start writing a new test case by typing something new, not by deleting comments.
And can I tell you how many performance test cases I’ve written? Zero. They probably have their place, just not for my needs. Delete.
I want less code, not more.
What’s In My Template?
Now let’s look at the workflow of my “Swift XCTest Test Case” template.
Simple Prompt Suggesting Naming Pattern
Here’s what Xcode shows when you select my template:
First, notice that it suggests a naming pattern. I use the suffix Tests to name test suites because a test suite holds a group of test cases. Just hit Up-Arrow to move the cursor to the beginning of the field, and start typing the rest.
Note that it doesn’t ask you what to subclass, or what programming language to use. You already selected “Swift XCTest Test Case” so we know. (The download also includes an Objective-C version.)
Then specify the location, group, and target.
No File Comment Block
The file doesn’t start with a file comment block. There’s nothing to delete. Move along, move along.
Useful import Statements
The first thing in the file is not one, but two import statements:
@testable import SampleProject
import XCTest
The file template makes an educated guess about the name of your production code module. It assumes it’s the same as your project name.
This isn’t always true, of course. But even when it’s wrong, at least it shows that you should use @testable import to access the production code.
Class Declared final
The class declaration has a subtle difference from Apple’s template.
final class QualityCodingTests: XCTestCase {
I like to declare my test suites as final. Why? It’s very unusual to subclass test suites, so we don’t need dynamic dispatch to call test helpers. Private test helpers become direct function calls instead of dynamic messaging.
I doubt this makes much difference. But why leave any performance on the table? I want tests to run as fast as they possibly can.
Test Zero
Before I write the first test, I like to execute what I call Test Zero:
func test_zero() throws {
    XCTFail("Tests not yet implemented in QualityCodingTests")
}
This is a trick I describe in my book iOS Unit Testing by Example. Test Zero helps check that the new test suite does nothing well. It’s the first check of our new infrastructure.
Once the test fails correctly, I delete it. Then using my test-oriented code snippets, I type “test” to begin writing a test case. The test suite is gloriously empty.
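For the curious: a custom Xcode file template like this is typically backed by a stencil file that uses Xcode's text macros. The sketch below shows roughly what the Swift stencil could contain; the macro names come from Xcode's documented text-macro set, but the exact contents of the downloadable template are an assumption here:

```swift
// ___FILEBASENAME___.swift (template stencil; not compilable as-is)
@testable import ___PROJECTNAME___
import XCTest

final class ___FILEBASENAMEASIDENTIFIER___: XCTestCase {
    func test_zero() throws {
        XCTFail("Tests not yet implemented in ___FILEBASENAMEASIDENTIFIER___")
    }
}
```

When Xcode instantiates the template, it substitutes the macros with the file and project names you entered in the New File dialog.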
I’m a lazy programmer and don’t want to waste my time deleting things I don’t need, and typing things I do need. I hope you find my XCTestCase file template useful!
Be lazy. Don't waste time deleting test code you don't need, and typing test code you do need. This XCTestCase template helps!
This is a great post and very helpful in cutting time down doing repetitive work. Thanks Jon!
Yay! You’re welcome, Tim. | https://qualitycoding.org/swift-unit-testing-template/ | CC-MAIN-2022-27 | refinedweb | 1,102 | 76.52 |
Opened 4 years ago
Closed 4 years ago
#17771 closed Bug (wontfix)
weird problem db with autocommit
Description
Hello,
Here is problematic code (standard django setup, mysql backend):
import time
import os, sys, re

sys.path.append(os.path.abspath(os.path.dirname(__file__)) + '/..')
os.environ['DJANGO_SETTINGS_MODULE'] = 'settings'

from django.conf import settings
from django.contrib.auth.models import User

#from django.db import connection
#cursor = connection.cursor()
#cursor.execute('SET autocommit = 1')

while True:
    u = User.objects.get(pk=1)
    print u.first_name
    time.sleep(1)
It displays user first_name each second. On the other hand with mysql client :
$ update auth_user set first_name = "foo" where id=1;
The value does not update in my loop (Wireshark shows the old value in the dumped MySQL packets, too). If I restart the process, it fetches the correct new value.
I can fix the problem by adding the 3 autocommit lines commented out.
Problem do not occur on my ubuntu 32b desktop (32bits django 1.3.1 / MySQL-python 1.2.3, mysql 5.1.58) nor a debian squeeze server (32bits mysql 5.1.49).
Problem occurs on a 64 bits debian squeeze server (64bits mysql 5.1.49) and a ubuntu 64 server (64 bits mysql 5.1.41).
Thanks.
Change History (4)
comment:1 Changed 4 years ago by meister <admin@…>
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
comment:2 follow-up: ↓ 3 Changed 4 years ago by akaariai
The problem seems to be that you are running in a transaction with repeatable read semantics. I don't think Django supports autocommit for MySQL.
comment:3 in reply to: ↑ 2 ; follow-up: ↓ 4 Changed 4 years ago by meister <admin@…>

Replying to akaariai:

> The problem seems to be that you are running in a transaction with repeatable read semantics. I don't think Django supports autocommit for MySQL.

It doesn't explain why my code works on some environments and does not work on others...

comment:4 in reply to: ↑ 3 Changed 4 years ago by meister <admin@…>

- Resolution set to wontfix
- Status changed from new to closed

Replying to meister <admin@…>:

> It doesn't explain why my code works on some environments and does not work on others...

It works when using MyISAM and creates some problems with InnoDB. My problem is explained in.

An easier way to reproduce it:
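For illustration (this is a sketch of the effect, not the reporter's actual reproduction): InnoDB's default REPEATABLE READ isolation can be demonstrated with two mysql sessions, which matches the behavior described in this ticket:

```sql
-- session A
START TRANSACTION;                               -- InnoDB default: REPEATABLE READ
SELECT first_name FROM auth_user WHERE id = 1;   -- snapshot established here

-- session B
UPDATE auth_user SET first_name = 'foo' WHERE id = 1;

-- session A, same transaction
SELECT first_name FROM auth_user WHERE id = 1;   -- still returns the old value

-- session A
COMMIT;
SELECT first_name FROM auth_user WHERE id = 1;   -- now returns 'foo'
```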
11-05-2012 05:32 PM
How can I connect to the touch event in C++? This code:
QObject::connect(m_Title, SIGNAL(touch(TouchEvent)), this, SLOT(onTitleTouched(TouchEvent)));
is telling me Label.touch(TouchEvent) is not defined. I'm under the impression, since Label is inheriting from VisualNode, I should have access to that SIGNAL.
Solved! Go to Solution.
11-05-2012 05:43 PM
You may require a different signature for the signal/slot. The samples use this:
QObject::connect(m_Title, SIGNAL(touch(TouchEvent *)), this, SLOT(onTitleTouched(TouchEvent *)));
Or possibly even "bb::cascades::TouchEvent *", but I'm assuming you know how to handle namespaces.
Build Your Own redux from scratch
Sai gowtham
Redux is a state management library for React apps: it helps manage the app state in a single object, which means the whole app state lives in one object.
If you try to connect a Redux store, you have to do some boilerplate setup in your React app, which is often confusing.
So that's why we'll write it from scratch.
Create a store.js file in your React app.
First, we need to create a dispatch function, a subscribe function, and a thunk function.
1. The getState function helps to get the app's current state.
2. thunk is used to do async things; you can even delay the network request.
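The store.js code itself isn't shown here (in the original post it was an embedded gist), so below is a hedged sketch reconstructed purely from how the store is used in the rest of this article; the internal names, the inlined reducer, and the exact thunk signature are assumptions:

```javascript
// store.js (sketch) -- in the real file each of these would be exported:
//   export { getState, dispatch, subscribe, thunk };

// The article keeps the reducer in reducers.js; a tiny one is inlined
// here so the sketch runs on its own.
function reducer(state = { count: 0, todos: [] }, action) {
  switch (action.type) {
    case 'INC': return { ...state, count: state.count + 1 };
    case 'DEC': return { ...state, count: state.count - 1 };
    default: return state;
  }
}

let state = reducer(undefined, { type: '@@INIT' }); // initial app state
const listeners = [];                               // render callbacks

function getState() {
  return state;
}

function dispatch(action) {
  state = reducer(state, action); // compute the next state
  listeners.forEach(fn => fn()); // notify subscribers (re-render)
}

function subscribe(fn) {
  listeners.push(fn);
  fn(); // render once immediately
}

// thunk(onResponse, makeRequest, delay): after `delay` ms, run the
// request function, handing it a callback that forwards the response.
function thunk(onResponse, makeRequest, delay = 0) {
  setTimeout(() => makeRequest(res => onResponse(res)), delay);
}
```

With this shape, dispatch recomputes the state through the reducer and notifies every subscriber, which is what lets index.js re-render the whole app on each action.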
Create a reducers.js file.
Reducer
When we dispatch an action, the reducer returns the new app state instead of mutating the old state.
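The reducers.js code is likewise not shown (it was an embedded gist), so here is a hedged reconstruction; the 'INC', 'DEC', and 'GET_USERS' action types come from the dispatch calls later in the post, while 'ADD_TODO' and the payload field names are assumptions:

```javascript
// reducers.js (sketch) -- in the real file this function would be the
// default export consumed by the store.
const initialState = { count: 0, todos: [] };

function reducer(state = initialState, action) {
  switch (action.type) {
    case 'INC':
      // counter button "+"
      return { ...state, count: state.count + 1 };
    case 'DEC':
      // counter button "-"
      return { ...state, count: state.count - 1 };
    case 'ADD_TODO':
      // copy the old list and append the payload -- no mutation
      return { ...state, todos: [...state.todos, action.todo] };
    case 'GET_USERS':
      // network response dispatched from the thunk
      return { ...state, users: action.users };
    default:
      return state;
  }
}
```

Each case copies the previous state with spread syntax and overrides one field, so the old state object is never mutated.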
How to connect our redux to the React app?
Open your index.js file and import subscribe from the store. That's it: you are connected to the store, as in the code below.
import React from "react";
import { render } from "react-dom";
import "./index.css";
import App from "./App";
import { subscribe } from './store';

subscribe(() => render(
  <App />,
  document.getElementById("root")
));
Now let's implement a counter and a todo list, and send some network requests, so that we can check whether our Redux is working correctly.
todo.js file
In the above code, we first imported getState and dispatch from the store.
When we click the Add Todo button, we dispatch the action type with a payload; getState helps to get the added todos from the store.
counterbuttons.js file
import React from "react";
import { dispatch } from './store';

function Inc() {
  dispatch({ type: 'INC' });
}

function Dec() {
  dispatch({ type: 'DEC' });
}

const width = { width: '2rem', fontSize: '1.2rem' };

const CounterButtons = () => (
  <div>
    <button onClick={Inc} style={width}>+</button>
    <button onClick={Dec} style={width}>-</button>
  </div>
);

export default CounterButtons;
It's time to send network requests; thunks are used to make them.
create a thunks.js file
import { dispatch, thunk } from "./store";
import axios from "axios";

export const users = () => thunk(
  function (res) {
    dispatch({ type: "GET_USERS", users: res.data });
  },
  (cb) => {
    axios.get('')
      .then(response => cb(response))
      .catch(err => cb({ err: 'Error occurred' }));
  },
  5000 // delay time
);
The thunk function takes three arguments: the first two are callback functions, and the last
argument is the delay, which is optional.
In the first callback function, you need to invoke dispatch with the action type and payload.
In the second callback, you make the network request; when the response comes back, wrap it with the cb (callback) function, so that you can receive the response in the first callback's parameter.
FetchData Component
import React from "react";
import { getState } from "./store";
import { users } from "./thunks";

function Loading() {
  return <h1 style={{ color: "red" }}>Loading</h1>;
}

class FetchData extends React.Component {
  componentDidMount() {
    users();
  }

  Users = () => {
    if (getState().users) {
      return getState().users.map(user => (
        <ul key={user.id}>
          <li>{user.name}</li>
          <li>{user.email}</li>
        </ul>
      ));
    } else {
      return <h1 style={{ color: 'red' }}>Delaying request for 5 seconds</h1>;
    }
  };

  render() {
    return (
      <div>
        <ul>
          <li>{getState().data ? getState().data : <Loading />}</li>
        </ul>
        <hr />
        <h1>Users</h1>
        <hr />
        {this.Users()}
      </div>
    );
  }
}

export default FetchData;
That's it; we are done creating all the components.
Now we need to import these components in the App.js file, because our app isn't aware of them yet.
App.js file
Woohoo, successfully completed!
final output
Hope you guys enjoyed...👍🏻
Code Repository
Unity has a class in the UnityEngine namespace called Object, which acts as a base class for all objects that Unity can reference in the editor. Classes which inherit from UnityEngine.Object have special functionality, which means they can be dragged and dropped into fields in the Inspector, or picked using the Object Picker next to an Object field.
This page provides an overview of the Object class and its common uses when scripting with it. For an exhaustive reference of every member of the Object class, see the Object script reference.
When creating your own objects via scripting, you typically do not want to inherit directly from Object. Instead, you should inherit from a class designed to be more specific to your goal.
For example, you should inherit from MonoBehaviour if you want to write a custom component which you can add to a GameObject, to control what the GameObject does or provide some functionality relating to it.
Or, you should inherit from ScriptableObject if you want to create custom assets which can store serialized data. Both of these inherit from Unity’s Object class, but provide extra functionality to suit those purposes.
Note: Unity’s Object class is in the UnityEngine namespace. This is different from .NET’s base Object class, which has the same name but is in the System namespace, which is not included in the default script template, so that the names do not clash. You can still inherit your classes from .NET’s System.Object if you want to create classes in your script which do not need to be assigned in the inspector.
Unity’s Object class acts as the base class for most of Unity’s built-in classes such as GameObject, Component, Material, Texture, Sprite, and many more, which means all of these types can be dragged and dropped into these reference fields in the inspector.
If a field in the inspector specifies a specific type of class (such as Texture), then Unity restricts you from dropping any other type of object into that field, and the object picker will only show objects of the correct type.
The above image shows three types of object field in the inspector.
The first is of type Object, meaning any Unity Object can be assigned to this field. You could drop any type of object into this field, whether it was a GameObject, a Texture, or anything else. This is not usually very useful, and it’s better to make your fields more specific about what they should accept.
The second shows that its type is “Texture”, as shown in the parentheses. Texture is a built-in Unity class, and this means you can drop any Texture Asset into this field. Unity has two classes which inherit from this, Texture2D and RenderTexture, which means you can drop either of these types into this field.
The third shows that its type is “Food”. There’s no built-in Unity class with this name, so this example is showing a custom user-made class which inherits from Object. If you were to subsequently create classes which inherit from “Food”, such as “Apple” and “Banana”, you would be able to assign references to instances of these classes into the Food field, because they inherit from that type.
The Object class provides a few methods which allow you to Instantiate and Destroy them properly, as well as finding references to Objects of a specific type.
For more information on the API for the Object class, see the script reference page for Object. | https://docs.unity3d.com/Manual/class-Object.html | CC-MAIN-2021-25 | refinedweb | 692 | 59.74 |
Adding Uli to the Cc list to make sure this system call is useful for glibc / can be exported by it. Otherwise it's rather pointless to add it.

> (6) BSD stat compatibility: Including more fields from the BSD stat such as
>     creation time (st_btime) and inode generation number (st_gen) [Jeremy
>     Allison, Bernd Schubert]

How is this different from (1) and (4)?

> (7) Extra coherency data may be useful in making backups [Andreas Dilger].

What do you mean with that?

> (8) Allow the filesystem to indicate what it can/cannot provide: A filesystem
>     can now say it doesn't support a standard stat feature if that isn't
>     available.

What for?

> (9) Make the fields a consistent size on all arches, and make them large.

Why make them large for the sake of it? We'll need massive changes all through libc and applications to ever make use of this. So please coordinate the types used with Uli.

> The following structures are defined for the use of these new system calls:
>
> struct xstat_parameters {
>         unsigned long long request_mask;
> };

Just pass this as a single flag by value. And just make it an unsigned long to make the calling convention a lot simpler.

> struct xstat_dev {
>         unsigned int major, minor;
> };
>
> struct xstat_time {
>         unsigned long long tv_sec, tv_nsec;
> };

No point in adding special types here that aren't generically useful. Also this is the first and only system call using split major/minor values for the dev_t. All this just creates more churn than it helps.

> struct xstat {
>         unsigned long long st_result_mask;

Just st_mask?

>         unsigned long long st_data_version;

st_version?

>         unsigned long long st_inode_flags;

> XSTAT_REQUEST__INODE_FLAGS      Want/got st_inode_flags
> XSTAT_REQUEST__EXTENDED_STATS   The stuff in the xstat struct
> XSTAT_REQUEST__ALL_STATS        The defined set of requestables

What's the point of the REQUEST in the name? Also no double underscores inside the identifier. Instead adding a _MASK postfix for masks would make it a lot more clear.

> The defined bits in st_inode_flags are the usual FS_xxx_FL flags in the LSW,
> plus some extra flags in the MSW:
>
> FS_SPECIAL_FL           Special kernel file, such as found in procfs
> FS_AUTOMOUNT_FL         Specific automount point
> FS_AUTOMOUNT_ANY_FL     Free-form automount directory
> FS_REMOTE_FL            File is remote
> FS_ENCRYPTED_FL         File is encrypted
> FS_SYSTEM_FL            File is marked system (DOS/NTFS/CIFS)
> FS_TEMPORARY_FL         File is temporary (NTFS/CIFS)
> FS_OFFLINE_FL           File is offline (CIFS)

Please don't overload the FL_ namespace even more. It's already a complete mess given that it overloads the extN on-disk namespace. You're much better off just adding a clean new namespace.

> The system calls are:
>
> ssize_t ret = xstat(int dfd,
>                     const char *filename,
>                     unsigned flags,
>                     const struct xstat_parameters *params,
>                     struct xstat *buffer,
>                     size_t buflen);

If you already have a buflen parameter there is absolutely no need for the extra results field. Just define new fields at the end and include them if the bufsize is big enough and it's in the mask of requested fields.

> When the system call is executed, the request_mask bitmask is read from the
> parameter block to work out what the user is requesting. If params is NULL,
> then request_mask will be assumed to be XSTAT_REQUEST__BASIC_STATS.

Why add a special case like that? Especially if we make the request flags a pass-by-value scalar, initializing it is trivial.

Please don't introduce tons of special cases. Instead use a simple rule like:

 - a filesystem must return all requested attributes, or return an error if it can't.
 - a filesystem may return additional attributes; the caller can detect this by looking at st_mask.

plus possibly a list of attributes the filesystem must be able to provide if requested. I don't see a reason to make that mask different from the attributes required by Posix.
02 March 2011 18:00 [Source: ICIS news]
HOUSTON (ICIS)--Here is Wednesday’s midday markets summary:
CRUDE: Apr WTI: $101.55/bbl, up $1.92; Apr Brent: $117.13/bbl, up $1.71
NYMEX WTI crude futures surged in response to geopolitical worries and to weekly EIA statistics showing a drawdown in crude and gasoline inventories. WTI topped out at $102.20/bbl before retreating.
RBOB: Apr: $3.0424/gal, up 5.90 cents
Reformulated gasoline blendstock for oxygenate blending (RBOB) broke through the $3.00/gal level on rising crude oil futures and EIA data that showed US stockpiles falling by 3.6m bbl week on week.
NATURAL GAS: Apr: $3.809/MMBtu, down 6.4 cents
The front-month natural gas contract picked up where Tuesday’s slide left off, and selling continued, as two-week weather forecasts began to show much milder temperatures. Also, analysts predicted a narrow withdrawal in Thursday’s storage report.
ETHANE: up at 73.0-74.5 cents/gal
Mont Belvieu ethane prices picked up steam to start Wednesday, gaining more than 2-3 cents/gal while tracking surging crude futures.
AROMATICS: benzene up at $4.17-4.22/gal
Prompt US benzene spot prices were 2 cents/gal higher this morning, traders said. The range was up following the increase in crude on Tuesday.
OLEFINS: RGP up at 66.75 cents/lb
US Gulf refinery-grade propylene (RGP) was heard traded at 66.75 cents/lb, up more than 5 cents from deals at 61.00-61.50 cents/lb | http://www.icis.com/Articles/2011/03/02/9440247/noon-snapshot-americas-markets-summary.html | CC-MAIN-2014-49 | refinedweb | 257 | 79.77 |
There is one ioctl call to read the status of all counters, and one ioctl call to program the function of each counter. All require the following includes:
#include <sys/types.h>
#include <machine/cpu.h>
#include <machine/pctr.h>
The status of all the counters is read with the PCIOCRD ioctl, which takes an argument of type struct pctrst:

#define PCTR_NUM 4

struct pctrst {
	u_int pctr_fn[PCTR_NUM];
	pctrval pctr_tsc;
	pctrval pctr_hwc[PCTR_NUM];
};
The individual counters are programmed with the PCIOCS0, PCIOCS1, PCIOCS2, and PCIOCS3 ioctls, which require a writeable file descriptor and take an argument of type unsigned int.
The meaning of this integer is dependent on the particular CPU.
The time stamp counter (TSC) can be read with the rdtsc() macro, which returns a 64-bit value of type pctrval. The following example illustrates a simple use of rdtsc() to measure the execution time of a hypothetical subroutine called functionx():

void
time_functionx(void)
{
	pctrval tsc;

	tsc = rdtsc();
	functionx();
	tsc = rdtsc() - tsc;
	printf("Functionx took %llu cycles.\n", tsc);
}
The TSC value is also returned by the PCIOCRD ioctl, so that one can get an exact timestamp on readings of the hardware event counters. The performance counters can be read directly from user mode without the need to invoke the kernel. The macro PCTR_UM_MESI contains the bitwise OR of all of the above. For event types dealing with bus transactions, there is another flag that can be set in the unit mask:

PCTR_UM_A

Events marked (MESI) can take the PCTR_UM_[MESI] bits in the unit mask. Events marked (A) can take the PCTR_UM_A bit. Finally, the least significant byte of the counter function is the event type to count. A list of possible event functions can be obtained by running the pctr(1) command with the -l option.
[ENODEV]
[EINVAL]
	An invalid value was specified with a PCIOCSx ioctl.
[EPERM]
The pctr device first appeared in OpenBSD 2.0. Support for the amd64 architecture appeared in OpenBSD 4.3.
The pctr device was written by David Mazieres <dm@lcs.mit.edu>. Support for the amd64 architecture was written by Mike Belopuhov <mikeb@openbsd.org>.
There are caveats concerning the use of rdpmc() and/or rdtsc() that can potentially decrease the accuracy of measurements.
Formatting strings is an important skill to have when you are coding in Python. Whenever you write a program, you have to format the output into a string before you print or display it in some form.
There are times when you want to control the formatting of your output rather than simply printing it. There are four different ways to perform string formatting:
Formatting Strings with the % Operator
This is the oldest method of string formatting; it uses the modulo (%) operator. Let's see an example:
name = 'world'
print('Hello, %s!' % name)

year = 2022
print('Hello %s, this year is %d.' % (name, year))
Output:
Hello, world! Hello world, this year is 2022.
This operator formats a set of variables, enclosed in a tuple, together with a format string. The string contains normal text together with argument specifiers: special symbols such as %s and %d. These special symbols act as placeholders.
In the second example, there are two argument specifiers, the first representing a string and the second representing an integer, and the actual values are enclosed in a tuple (parentheses).
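The %s and %d specifiers shown above are only two of the standard printf-style conversion types; the same operator also handles floats with a precision, integers in other bases, and field widths (this goes slightly beyond the examples above, but all of it is standard Python behavior):

```python
# %.2f formats a float rounded to two decimal places
price = 49.987
print('Total: %.2f dollars' % price)   # Total: 49.99 dollars

# %x formats an integer as lowercase hexadecimal
value = 255
print('Value in hex: %x' % value)      # Value in hex: ff

# A width can be given too: %10s right-aligns in a 10-character field
print('[%10s]' % 'hi')                 # [        hi]
```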
Formatting Strings with the format() method
This method inserts the specified values inside the string's placeholders. A placeholder is defined by a pair of curly braces { }. Placeholders can be written as named indexes {name}, numbered indexes {0}, or empty placeholders { }.
Let’s understand through an example:
# Default / empty arguments
print('Hello {}, this year is {}.'.format('world', 2022))

# Positional arguments
print('Hello {0}, this year is {1}.'.format('world', 2022))

# Keyword arguments
print('Hello {name}, this year is {yr}.'.format(name='world', yr=2022))

# Mixed arguments
print('Hello {0}, this year is {yr}.'.format('world', yr=2022))
Output:
Hello world, this year is 2022. Hello world, this year is 2022. Hello world, this year is 2022. Hello world, this year is 2022.
The first example is the basic example with the
default arguments. In this case, empty placeholders are used.
The second example is of the
positional arguments. In this case, a number in the bracket refers to the position of the object passed into the .format() method. In other words, values of these arguments can be accessed using an index number inside the { }.
The third example is of the
keyword arguments. In this case, the argument consists of a key-value pair. Values of these arguments can be accessed by using the key inside the { }.
The fourth example is of the
mixed arguments. In this case, positional and keyword arguments can be used together.
Note: In the case of mixed arguments, keyword arguments always follow positional arguments.
Formatted String Literals (f-strings)
This method is known as Literal String Interpolation or f-strings. It makes the string interpolation simpler by embedding the Python expressions inside string constants.
It works by prefixing the string either with f or F. The string can be formatted in the same way as the str.format() method. It provides a very concise and convenient way of formatting string literals.
Let’s go through with the example:
# Simple example
name = 'world'
print(f'Hello, {name}!')
Output:
Hello, world!
This was a simple example. This method also evaluates expressions in real-time. Let’s see another simple example:
# Performing an arithmetic operation
print(F'Two minus Ten is {2 - 10}')
Output:
Two minus Ten is -8
This method is very easy and powerful as the real-time implementation makes it faster.
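Besides plain expressions, f-strings accept the same format specifications as str.format(), written after a colon inside the braces; for example (standard Python behavior, not specific to any library):

```python
pi = 3.14159

# Limit a float to two decimal places
print(f'pi is roughly {pi:.2f}')    # pi is roughly 3.14

# Pad and right-align a value in a 10-character field
name = 'world'
print(f'[{name:>10}]')              # [     world]
```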
Formatting with Template Class
This method uses the standard library: all you have to do is import the
Template class from Python's built-in string module. It is simple but less powerful. Let's see an example.
from string import Template

t = Template('Hello $name, this year is $yr')
print(t.substitute(name='world', yr=2022))
Output:
Hello world, this year is 2022
This format uses $ as the placeholder marker. First, you create a template string containing the placeholders, and then, in the print statement, you pass the parameters into the template.
.substitute() is used to replace the placeholders.
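One related detail from the standard library (not shown in the example above): .substitute() raises a KeyError if any placeholder is missing a value, while .safe_substitute() leaves unknown placeholders untouched instead of failing:

```python
from string import Template

t = Template('Hello $name, this year is $yr')

# substitute() requires every placeholder to be supplied
try:
    t.substitute(name='world')
except KeyError as missing:
    print('missing placeholder:', missing)   # missing placeholder: 'yr'

# safe_substitute() never raises; $yr is simply left in the result
print(t.safe_substitute(name='world'))       # Hello world, this year is $yr
```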
Conclusion
In this tutorial, we have used four types of methods to format a string in python. These are the basic examples that will help you to understand which method to use. To know more about them, refer to their official documentation.
There are different ways to handle string formatting in Python, and each method has its pros and cons. Which method you use depends on your use case.
What's New in Neo4j Databridge
Neo4j-Databridge can help you import large amounts of data into Neo4j. Read on to find out what's new with Databridge and what it means to you.
Since our first post a few months back, Neo4j Databridge has seen a number of improvements and enhancements. In this post, we’ll take a quick tour of the latest features.
Streaming Endpoint
Although Databridge is primarily designed for bulk data import, which requires Neo4j to be offline, we recently added the capability to import data into a running Neo4j instance.
This was prompted by a specific request from a user who pointed out that in many cases, people want to do a fast bulk-load of an initial large dataset with the database offline and then subsequently apply small incremental updates to that data with the database running. This seemed like a great idea, so we added the streaming endpoint to enable this feature.
The streaming endpoint uses Neo4j’s Bolt binary protocol, and the good news is that you don’t need to change any of your existing import configuration to use it. Simply pass the
-s option to the import command, and it will automatically use the streaming endpoint:
Example: Use the
-s option to import the
hawkeye dataset into a running instance of Neo4j.
bin/databridge import -s hawkeye
The streaming endpoint connects to Neo4j using the following defaults:
neo4j.url=bolt://localhost
neo4j.username=neo4j
neo4j.password=password
You can override these defaults by creating a file
custom.properties in the Databridge
config folder and setting the values as appropriate for your particular Neo4j installation.
Please note that despite using the Bolt protocol, the streaming endpoint will take quite a bit longer to run than the offline endpoint for large datasets, so it isn’t really intended to replace bulk import. For small incremental updates, however, this should not be a problem.
Updates from the streaming endpoint are batched, with the transaction commit size currently set to 1000, and the plan is to make the commit size user-configurable in the near future.
Specifying the Output Database Folder
By default, Neo4j-Databridge creates a new
graph.db database in the same folder as the import task. We’ve now added the ability for you to define the output path to the database explicitly. To do this, use the
-o option to specify the output folder path to the import command:
Example: Use the
-o option to import the
hawkeye dataset into a user-specified database.
bin/databridge import -o /databases/common hawkeye
In the example above, the
hawkeye dataset will be imported into
/databases/common/graph.db, instead of the default location
hawkeye/graph.db.
Among other things, this new feature allows you to import different datasets into the same physical database:
Example: Use the
-o option to allow the
hawkeye and
epsilon datasets to co-exist in the same Neo4j database.
bin/databridge import -o /databases/common hawkeye
bin/databridge import -o /databases/common epsilon
Simpler Commands
The eagle-eyed among you will have spotted that the above examples use the
import command, while in our first blog post, our examples all used the
run command, which was invoked with a variety of different option flags. The original
run command still exists, but we’ve added some additional commands to make life a bit simpler.
All the new commands also now support a
-l option, to limit the number of rows imported. This can be very useful when testing a new import task for example. The new commands are:
import: Runs the specified import task.
Usage:
import [-cdsq] [-o target] [-l limit]
c: Allow multiple copies of this import to co-exist in the target database
d: Delete any existing dataset prior to running this import
s: Stream data into a running instance of Neo4j
q: Run the import task in the background, logging output to import.log instead of the console.
o target: Use the specified target database for this import.
l limit: The maximum number of rows to process from each resource during the import.
test: Performs a dry run of the specified import task, but does not create a database.
Usage:
test [-l limit]
l limit: The maximum number of rows to process from each resource during the dry run.
profile: Profiles the resources for an import task.
Databridge uses a profiler at the initial phase of every import. The profiler examines the various data resources that will be loaded during the import and generates tuning information for the actual import phase.
Usage:
profile [-l limit]
l limit: The maximum number of rows to profile from each resource.
The profiler displays the statistics that will be used to tune the import. For nodes, these statistics include the average key length
akl of the unique identifiers for each node type, as well as an upper bound
max on the number of nodes of each type.
For relationships, the statistics include an upper bound on the number of edges of each type. (The
max values are upper bounds because the profiler doesn’t attempt to detect possible duplicates.)
Profile statistics are displayed in JSON format:
{
  nodes: [
    { 'Orbit':        {'max':11, 'akl':10.545455} },
    { 'Satellite':    {'max':11, 'akl':8.909091} },
    { 'SpaceProgram': {'max':11, 'akl':9.818182} },
    { 'Location':     {'max':11, 'akl':4.818182} }
  ],
  edges: [
    { 'LOCATION': {'max':11} },
    { 'ORBIT':    {'max':11} },
    { 'LAUNCHED': {'max':11} },
    { 'LIVE':     {'max':11} }
  ]
}
Deleting and Copying Individual Datasets
In order to support the new streaming endpoint as well as the ability to host multiple import datasets in the same database, Databridge only creates a brand new database the first time you run an import task.
If you run the same import task multiple times with the same datasets, Databridge will not create any new nodes or relationships in the graph during the second and subsequent imports.
If you want to force Databridge to clear down any previous data and re-import it again, you can use the
-d option, which will delete the existing dataset first.
Example: Use the
-d option to delete an existing dataset prior to re-importing it.
bin/databridge import hawkeye
bin/databridge import -d hawkeye
On the other hand, if you want to create a
copy of an existing dataset, you can use the
-c option instead:
Example: Use the
-c option to create a copy of a previously imported dataset.
bin/databridge import hawkeye
bin/databridge import -c hawkeye
Deleting All the Things
If you need to delete everything in the graph database and start again with a completely clean slate, you can use the
purge command:
bin/databridge purge hawkeye
Note that if you have imported multiple datasets into the same physical database, you should
purge
each of them individually, specifying the database path each time:
bin/databridge purge -o /databases/common hawkeye
bin/databridge purge -o /databases/common epsilon
Conclusion
Well, that about wraps up this quick survey of what’s new in Databridge from GraphAware. If you’re interested in finding out more, please take a look at the project WIKI, and in particular the Tutorials section.
Published at DZone with permission of Vince Bickers, DZone MVB. See the original article here.
Haskell Quiz/Index and Query/Solution Jethr0
From HaskellWiki
< Haskell Quiz | Index and Query
Unfortunately this solution doesn't really address the problem :)
Neither are bit-arrays used nor is this solution saving much space. I just wanted to experiment with the State Monad, and I'm quite happy with what I learned.
Example:
> let docs = [("Doc1", "The quick brown fox") ,("Doc2", "Jumped over the brown dog") ,("Doc3", "Cut him to the quick")] > finder docs "brown" ["Doc1","Doc2"] > finder docs "the" ["Doc2","Doc3"]
Solution:
import qualified Control.Monad.State as State import qualified Data.Map as Map import qualified Data.Set as Set data Rd = Rd {rdN :: Integer ,rdMap :: Map.Map String Integer } deriving (Show) -- process words of a file and return the set of indices processWords :: [String] -> State.State Rd (Set.Set Integer) processWords = foldM step (Set.empty) where step ws x = do mp <- State.gets rdMap i <- case Map.lookup x mp of Nothing -> do n <- State.gets rdN State.modify (\s -> s{rdN=(n+1), rdMap=Map.insert x n (rdMap s)}) return n Just a -> return a return $ Set.insert i ws processFile :: (String,String) -> State.State Rd (String, [Integer]) processFile (doc,str) = do indices <- processWords (words str) return (doc, Set.toList indices) -- find all documents containing string "str" as a word. findDocs :: String -> [(String,[Integer])] -> State.State Rd [String] findDocs str indices = do mp <- State.gets rdMap case Map.lookup str mp of Nothing -> return [] Just i -> return . map fst . filter (\(_,is) -> i `elem` is) $ indices runIt f = State.evalState f (Rd {rdN=0, rdMap=Map.empty}) finder ds str = runIt (mapM processFile ds >>= findDocs str) | https://wiki.haskell.org/index.php?title=Haskell_Quiz/Index_and_Query/Solution_Jethr0&oldid=10226 | CC-MAIN-2015-27 | refinedweb | 272 | 70.39 |
Opening dynamically created buttons
In my qt c++ application I create buttons dynamically based on the contents of a QStringList(i.e number of buttons is equal to the number of elements in the QStringlist and the text of the buttons are the elements in the list).
following is my code
#include "dialog.h"
#include "ui_dialog.h"
#include "QFrame"
#include "QLabel"
#include "QPushButton"
Dialog::Dialog(QWidget *parent) :
QDialog(parent),
ui(new Ui::Dialog)
{
ui->setupUi(this);
}
Dialog::~Dialog()
{
delete ui;
}
void Dialog::createButtons(){
for(int i=0;i<List.size();i++){ f1 = new QFrame(); a= new QPushButton(); a->setText(List[i]); ui->horizontalLayout->addWidget(a); }
}
void Dialog::on_pushButton_clicked()
{
createButtons()
}
Here "List"is the respective QStringList that I used!
when I call the createButtons() method in a button click as shown in my code the buttons are dynamically created!
When I select each dynamically created button and click that button I want to display an interface( let us assume it to be a blank interface for the moment). How can I achieve it?
You need to connect each Button to a Slot, as Example. The Slot opens a new Window, if you mean that with Interface.
connect(button, SIGNAL(clicked()), this, SLOT(mySlot()));
Is that what you mean?
@Fuel I want a way to distinguish the buttons that I created dynamically from one another! Since they are created dynamically I cant right click the buttons and select goto slot option like normal ordinary buttons that we add statically
- mrjj Lifetime Qt Champion last edited by
@Kushan
The connect statement @Fuel shows is how you would
make the same as right clicking in Designer.
( sort of, as Designer makes a special name and its found at runtime and hooked up)
So you hook up each button to different slots and hence different things will happen when you click it.
If you have many , it might be awkward creating that many slots.
You can also use
sender() inside a slot to know whom sent it. But the answer is just some Buttons so
in what way do you need to distinguish the buttons ?
@mrjj Thanx where should I place
connect(button, SIGNAL(clicked()), this, SLOT(mySlot()));
this code?
- mrjj Lifetime Qt Champion last edited by mrjj
Just after
a= new QPushButton();
connect(a, SIGNAL(clicked()), this, SLOT(mySlot()));
Note that you should create a slot in Dialog .h
called
void mySlot();
and in Dialog.cpp
void mySlot() {
}
Just like when you rightclick,
Designer adds such function to the class.h and in .cpp
@mrjj Thanx! But the problem I face is if 2 buttons that are dynamically created and I want one to perform a specific function when it is clicked and the other button to perform another action when its clicked! how can I achieve it? Here "a"is cmmon to both in such a scenario
- mrjj Lifetime Qt Champion last edited by
@Kushan
Hi
you could simply make another slot
and connect to that instead.
connect(a, SIGNAL(clicked()), this, SLOT(mySlot2()));
But how do you know what the button you are creating should do ?
its it part of the data you read in ?
@mrjj The buttons are displaying the names of elements in the qstringlist! Each element has a method name! so when I click a button a method resembling that method name shouldget executed!
eg-List<<"Run"<< "Stop"
so 2 buttons are created displaying Run and Stop! so when I click the button displaying with the word "Run" the Run() method should execute and when I click the button displaying the word "Stop" the Stop() method should execute.
- jsulm Qt Champions 2019 last edited by
@Kushan You should use | https://forum.qt.io/topic/85611/opening-dynamically-created-buttons/11 | CC-MAIN-2020-16 | refinedweb | 610 | 63.49 |
XML::XPath::Node - internal representation of a node
The Node API aims to emulate DOM to some extent, however the API isn't quite compatible with DOM. This is to ease transition from XML::DOM programming to XML::XPath. Compatibility with DOM may arise once XML::DOM gets namespace support.
Creates a new node. See the sub-classes for parameters to pass to new().
Returns one of ELEMENT_NODE, TEXT_NODE, COMMENT_NODE, ATTRIBUTE_NODE, PROCESSING_INSTRUCTION_NODE or NAMESPACE_NODE. UNKNOWN_NODE is returned if the sub-class doesn't implement getNodeType - but that means something is broken! The constants are exported by default from XML::XPath::Node. The constants have the same numeric value as the XML::DOM versions.
Returns the parent of this node, or undef if this is the root node. Note that the root node is the root node in terms of XPath - not the root element node.
Generates sax calls to the handler or handlers. See the PerlSAX docs for details (not yet implemented correctly).
See the sub-classes for the meaning of the rest of the API: | http://search.cpan.org/~msergeant/XML-XPath/XPath/Node.pm | CC-MAIN-2017-09 | refinedweb | 176 | 59.7 |
Unlike most other languages, Elm doesn’t use syntactic markers such as curly brackets, parentheses, or semicolons to specify code boundaries. It uses whitespace and indentation instead. There are situations where if we don’t place our code in a certain order or provide proper indentation, Elm will throw a syntax error. Let’s go through them one by one.
Function Definitions
Let’s use an example from the Function section to understand the implications of using incorrect order and indentations when defining a function.
module Playground exposing (..) import Html escapeEarth velocity speed = if velocity > 11.186 then "Godspeed" else if speed == 7.67 then "Stay in orbit" else "Come back" main = Html.text (escapeEarth 11.2 7.2)
All Elm files begin with a
module definition. It’s perfectly fine to add comments above the
module definition, but no other code can go above it.
{- The Playground module is used to experiment with various concepts in Elm programming language. -} module Playground exposing (..) . .
import lines are optional. If they’re included, they’re listed right below the
module definition. Both
module and
import lines must start at the left most column. The top-level function definitions are placed below the
import lines. They too must start at the left most column. Later you will be introduced to other concepts in Elm such as
type that also go below the
import lines.
Notice how two blank lines are used to separate the definition for functions
escapeEarth and
main? Because Elm doesn’t use delimiters such as curly braces to surround functions, using only a single blank line to separate definitions can make our code less readable. Elm borrowed this convention from Python, another great language. However, the
module and
import lines are separated with only one blank line. These spacing rules are enforced by
elm-format. It is a little confusing that
elm-format uses two blank lines between some definitions, and one blank line between others. Maybe that will change by the time
1.0.0 version is out.
At the time of this writing,
elm-format uses four spaces to indent a function body. Because it’s still in alpha, don’t be surprised if it uses two spaces instead when
1.0.0 is out. In the earlier sections, you were led to believe that the body of a function,
if,
let, and
case expressions must be indented with at least one space. That still holds true. Syntactically speaking, all Elm cares about is an indentation with at least one space. But, using more than one space improves the readability of our code.
If you’re interested, here is a vigorous debate between the Elm community members on whether to use two or four spaces for indentation. We programmers are so nitpicky, aren’t we?
If, Let, and Case Expression
An
if expression must be placed inside a function definition, otherwise Elm will throw an error.
module Playground exposing (..) import Html -- This is invalid code if velocity > 11.186 then "Godspeed" else if speed == 7.67 then "Stay in orbit" else "Come back"
The part after
then and final
else should be placed on the next line indented with four spaces. It’s perfectly fine to place an
if expression inside a
let or
case expression as long as they themselves are placed inside a function. We already saw an example of this in the Let Expression section. Here it is again."
The body of
let and
case expressions must also be indented with at least one space. As of this writing,
elm-format uses four spaces in both cases to improve the readability. The list below summarizes the indentation rules we have covered so far.
- Basic Indentation Rules
module,
import, and top-level
functiondefinitions must start at the left most column.
- If an expression is split into multiple lines, the code that is part of that expression must be indented under that expression with at least one space.
- Parts of the expression that are grouped together should be indented with equal number of spaces. This rule is particularly important in a
letexpression. | http://elmprogramming.com/indentation.html | CC-MAIN-2017-34 | refinedweb | 685 | 67.15 |
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
V9 - Enable a currency when installing a module
We're developing modules for our company and since we use CAD currency I want to enable it in the system. How can I do that from a module?
EDIT:
I can believe this is taking so long to accomplish this little task!
I get the following error:
ParseError: "Invalid field 'name' in leaf "<osv.ExtendedLeaf: ('name', '=', 'CAD') on res_currency (ctx: )>"" while parsing, near
<function model="res.currency" name="_enable_currencies"/>
I don't get it... when I check the fields of res.currency, I only see those:
{
'create_uid': <openerp.osv.fields.many2one object at 0x0000000009A8D138>,
'create_date': <openerp.osv.fields.datetime object at 0x0000000009A8C3C8>,
'id': <openerp.osv.fields.integer object at 0x0000000009A8C588>,
'write_date': <openerp.osv.fields.datetime object at 0x0000000009A8C4A8>,
'write_uid': <openerp.osv.fields.many2one object at 0x0000000009A8D228>
}
Where are the name, rounding, symbol, position and active fields?? In the fields from res.currency are from the base modules. What's going on??
I have the following files:
res_currency.py
from import openerp models
class Currency(models.Model):
_name = "res.currency"
def _enable_currencies(self, cr, uid, ids=Nonecontext=None, ):
cad_currency_ids = self.search(cr, uid, [('name', '=', 'CAD')])
if cad_currency_ids:
self.write(cr, uid, cad_currency_ids, {'active' : 'True'})
res_currency.xml
<openerp>
<data>
<function model="res.currency" name="_enable_currencies"/>
</data>
</openerp>
__openerp__.py
{
'name' :"Canadian data" ,
'description':"Create the canadian provinces and territories plus activate CAD currency." ,
'author' :"Transcontinental" ,
'category' :'' ,
'version' :'1.1' ,
'depends' : [],
'data' : [
'res_currency.xml',
],
'auto_install': False,
'installable': True,
}
__init__.py
import res_currency
Hello Mathieu,
In v9, all other currencies are DE-activated by default (active=False).
So from your code, first you have to search for your currency based on currency code and then enable that currency (active=True). Now, update currency_id field in Company and set your currency in that field.
#TIP: If you want to show Currency menu to user, you have to assign "Multi Currencies" group to your user(s).
EDIT:
I can see there is a problem in your code.
Instead of _inherit, you have used _name which consider as a new object and will have only those fields which you have defined in your class.
Try this:
class res_currency(...):Hope this helps you.
_inherit = 'res.currency' # use _inherit to extend the functionality of the object
Thanks but I wonder where I should write that code. I tried making an xml to change the data but since the original data was imported with noupdate it doesn't work (I don't get why they use noupdate here!). There should be a way to run code at installation time.
Ok I search a bit more and I found this answer:
Yes that will work if you call the function from XML and will follow the steps we mentioned in
See my updated answer.
Your module should depends on base module. | https://www.odoo.com/forum/help-1/question/v9-enable-a-currency-when-installing-a-module-95260 | CC-MAIN-2017-09 | refinedweb | 498 | 61.02 |
Acquisition
Acquisition. to
class statement defines a class and a
def statement inside of a class statement defines a method. A class statement followed by one or more words inside (Parenthesis) causes that class to inherit behavior from the classes named in the parenthesis.: 'another. In the case of the Sub class above, it and if it doesn't find it there, it will search in the SuperB class.
Note that when we called the
another.
Acquisition is about Containment
The concept behind acquisition is simple:
- Objects are situated inside other objects. These objects act as their "containers". For example, the container of a DTML Method named "amethod" inside the DTML_Example folder is the DTML_Example, the acquisition hierarchy is searched.
Say What?
Let's toss aside the formal explanations. Acquisition can be best explained with a simple example.
Place a DTML Method named
acquisition_test in your Zope root folder. Give it the following body:
AcquisitionTestFolder folder that it didn't have before (by way of giving it an
acquisition_test method).
Providing Services.
Getting Deeper with Multiple Levels
If you place a method in the root folder, and create a subfolder in the root folder, you can acquires possible. However, if you want more information about acquiring via a context and you are ready to have your brain explode, please see the presentation named Acquisition Algebra.
Summary. | https://engineering.purdue.edu/ECN/Support/KB/Docs/ZopeBook/Acquisition.whtml | CC-MAIN-2016-26 | refinedweb | 226 | 54.02 |
Directory Listing
update the netmount script to kill processes again
fix uml/colinux checks in clock init script
make keymaps more flexible #72225
rm with impunity
use find|xargs instead of find -exec #59732.
new hostname/domainname scripts; move configuration to the standard conf.d/{hostname,domainname} files
make sure users stop setting CLOCK in rc.conf instead of conf.d/clock
dump the error message returned by hwclock
Only add files not managed by udev to device tarball, bug #57110.
unify all the uml checks into one function, is_uml_sys()
dont pass adjust options with --hctosys
handle read only filesystems nicer
use [ = ] and [[ == ]]
add UNICODE #32111 and UML support #29707
only run cache-building scripts when the cache is out of date #67976
convert to using $NET_FS_LIST
dont run /sbin/pam_console_apply if using udev #50315
add support for coLinux to clock
clean up clock and make it more configurable (ideas from redhat and #15834)
move the utmp clearing code out of bootmisc and into rc itself #61727
use sort -u instead of uniq #36453
after-boot dmesg logging #55421
let domainname override settings obtained by dhcp/etc... #48277
reorder mount arguements to be POSIX standard #66225
make sure we check out what happened with swapon #39834
respect fs_passno for / #39212
dont try to fsck a network-ed root (like NFS) #36624
nfs4 support in netmount #25106
update the cryptfs check to include [ -x /bin/cryptsetup ]
clean up the output of dm-crypt
move nscd back to glibc
Fix bug 64034: simplify netmount script's stop function so that mountpoints containing spaces work correctly
fix whitespacing
Commit dm-crypt enablement patch from Tigger (Rob Holland) in bug 26953
fix copyright lines
import better, gentoo-specific serial script #16079
Fix bug 46680: Add cifs support to localmount and netmount
Fix bug 51351: Quote parsed output of /proc/filesystems to handle octal sequences in mountpoint such as encoded spaces (\040)
Fix bug 58805: net.eth0 should use bridge so that bridge interfaces are configured prior to net.br0 running 25975: support adsl in net.eth0. Thanks to Patrick McLean for the initial pass at the code.
Fix bug 50448: wrong conditional syntax in for loops.
update copyright years.
Patch init.d/checkroot to list / (root) only once in mtab; see bug 38360. Patch from Gustavoz to livecd-functions.sh to run bash instead of login on serial consoles, necessary due to scrambled root password.
Add RC_DEVICE_TARBALL to /etc/conf.d/rc to control use of device tarball. Also modified /sbin/rc and /etc/init.d/halt.sh for this. Start udevd if present..
Update /etc/init.d/consolefont to use newer kbd. Should also close bug #39864.
Fix wrong logic in /etc/init.d/halt.sh which did not umount all mounts _but_ /mnt/cdrom and /mnt/livecd.
More livecd fixes
Fix type-o in /etc/init.d/checkfs, bug #37113.
Misc udev fixes
Make sure we mount already mounted mount (done in /sbin/rc) with correct permissions, etc, bug #33764. Modified /etc/init.d/checkroot for this.
Revert carrier detection check, as there is currently too many issues with it, bug #33272. carrier detection to /etc/init.d/net.eth0 closing bug #25480; patch by Jordan Ritter <jpr5+gentoo@darkridge.com>.
LiveCD fixes
Add support for --tty switch added to setfont and remove consolechars support; modified /etc/init.d/consolefont for this. Also remove consoletools support from /etc/init.d/keymaps.
Add a fix to /etc/init.d/keymaps for bug #32111 (we should not have '-u' in the call to loadkeys when using unicode).
Fix return code checking of fsck in /etc/init.d/checkfs, bug #31349.
Remove the killall5 stuff from /etc/init.d/halt.sh, as it messes with bootsplash. Add support to kill processes still using non-critical mounts with fuser though.
Add initial bootsplash patch. Add more tty's to numlock, bug #28252.
Remove changing group of /tmp/.{X,ICE}-unix, as it it not needed, bug #28861. the '-k' switch to dhcpcd to '-z' in /etc/init.d/net.eth0.
Fix small logic error (changed things, but forgot to update i)
Add LVM2 support thanks to Max Kalika <max@gentoo.org> (bug #21908).
adelie fixes, add better logger support
Adelie updates and some other fixes
Add EVMS2 support
Fix the /dev/root entry in /etc/mtab, bug #24916.
use uname -r to get kernel version, bug #23923
Add mdadm support to /etc/init.d/checkfs, bug #23437
vlan support, bug #15588
fix keymap issues, bug #24084
more fixes
various fixes
small fixes
remove lockfile
Add frozen lock support
cleanup for bug #21438
really fix bootmisc, bug #21438
Fix domainname to start before bootmisc; new file /etc/issue.logo
type-o
fixes
fix net.ppp0 issues for kppp; dependency fixes
some more fixes
new release; supporting parallel startup and new dep system with many fixes
hostname again; add domainname
updates from Rach
bugfixes; new version
fix net.ppp0 and add save to clock rc-script
some fixes
small fixes
various fixes; moved .c files to src
raid tweaks
cleanups
some fixes and new release
some fixes
small tweaks; sysfs support
some updates and fixes
fix trying to unmount / on 2.4 kernel
cleanup checkroot
slight tweaks
many fixes/optimizations
fix unmount of non critical mounts
remove greps that could be called if /usr not mounted
bug fixes
add isapnp to modules use
some fixes to halt.sh
fix retval check in checkroot and checkfs
add unicode keymap support
misc fixes
new release
bugfix
misc fixes
lot of changes; hopeful release of rc-scripts-1.4.3.0
small fixes and enhancements
use 'usbfs' for kernel 2.5
add crypto-loop
mips support among things
odd fixes
also remove fam-oss temp files
bugfixes
add some flexiblity to net dependency
import Adelie Cluster stuff
change license
fix misc deps
fix modules rc-script to handle moduleless kernel
fix update-modules path in init.d/modules
fix race condition
misc form updates
more fixes
minor fixes
many misc fixes and updates
more fixes
ngpt fixes
fixor
swraid fixes
NGPT support
small fixor
bork
fix type-o in netmount
release again
quick release
bugfixes
fix invalid check for env-update
fix LVM bork
add nfs to USE
new release
small fixes
fix keymaps
raid support fixor | https://sources.gentoo.org/cgi-bin/viewvc.cgi/baselayout/branches/baselayout-1_12/init.d/?view=log&pathrev=1821 | CC-MAIN-2016-30 | refinedweb | 1,046 | 64.2 |
19332/avoid-killing-children-when-parent-process-is-killed
I use the library multiprocessing in a flask-based web application to start long-running processes. The function that does it is the following:
def execute(self, process_id):
self.__process_id = process_id
process_dir = self.__dependencies["process_dir"]
self.fit_dependencies()
process = Process(target=self.function_wrapper, name=process_id, args=(self.__parameters, self.__config, process_dir,))
process.start()
When I want to deploy some code on this web application, I restart a service that restarts gunicorn, served by nginx. My problem is that this restart kills all children processes started by this application as if a SIGINT signal were sent to all children. How could I avoid that ?
EDIT: After reading this post, it appears that this behavior is normal. The answer suggests to use the subprocess library instead. So I reformulate my question: how should I proceed if I want to start long-running tasks (which are python functions) in a python script and make sure they would survive the parent process OR make sure the parent process (which is a gunicorn instance) would survive a deployement ?
FINAL EDIT: I chose @noxdafox answer since it is the more complete one. First, using process queuing systems might be the best practice here. Then as a workaround, I can still use multiprocessing but using the python-daemon context (see here ans here) inside the function wrapper. Last, @Rippr suggests using subprocess with a different process group, which is cleaner than forking with multiprocessing but involves having standalone functions to launch (in my case I start specific functions from imported libraries).
I would recommend against your design as it's quite error prone. Better solutions would de-couple the workers from the server using some sort of queuing system (RabbitMQ, Celery, Redis, ...).
Nevertheless, here's a couple of "hacks" you could try out.
Instruct your child processes to ignore the SIGINT signal. The service orchestrator might work around that by issuing a SIGTERM or SIGKILL signal if child processes refuse to die. You might need to disable such feature.
To do so, just add the following line at the beginning of the function_wrapper function:
signal.signal(signal.SIGINT, signal.SIG_IGN)
I cant really seem to reproduce the ...READ MORE
Only in Windows, in the latter case, ...READ MORE
Q1. feed_dict is used in this case to set ...READ MORE
A multiprocessing.Process can p.terminate()
In the cases where I want to ...READ MORE
Hey @nmentityvibes, you seem to be using ...READ MORE
Try using ingress itself in this manner
except ...READ MORE
Consider this - In 'extended' Git-Flow, (Git-Multi-Flow, ...READ MORE
It can work if you try to put ...READ MORE
I would recommend against your design as ...READ MORE
OR
Already have an account? Sign in. | https://www.edureka.co/community/19332/avoid-killing-children-when-parent-process-is-killed | CC-MAIN-2020-50 | refinedweb | 461 | 59.5 |
Every time I customize an Expander in WPF using a HeaderTemplate, I make a critical mistake. I forget to set the binding for the header. Here’s a contrived example to demonstrate the problem – and the solution.
Here’s what we’re aiming for. A simple Expander with a title and a few lines of text contained within. Of course, a HeaderTemplate is overkill here, but it’s necessary in order to demonstrate the problem.
Let’s start by creating a simple view model for our Expander to bind to:
public class DemoViewModel { public string Title { get; set; } public string ContentLine1 { get; set; } public string ContentLine2 { get; set; } public string ContentLine3 { get; set; } }
Now create an instance of the view model and set it as the data context for the window:
public partial class MainWindow : Window { public DemoViewModel ViewModel { get; set; } public MainWindow() { InitializeComponent(); InitializeViewModel(); } private void InitializeViewModel() { ViewModel = new DemoViewModel { Title = "Expander Title", ContentLine1 = "This is line 1", ContentLine2 = "This is line 2", ContentLine3 = "This is line 3" }; this.DataContext = ViewModel; } }
Switch to XAML mode and create an Expander with a HeaderTemplate:
<Expander Width="200"> <Expander.HeaderTemplate> <DataTemplate> <StackPanel Orientation="Horizontal"> <TextBlock Text="{Binding Title}" /> </StackPanel> </DataTemplate> </Expander.HeaderTemplate> <StackPanel> <TextBlock Text="{Binding ContentLine1}" /> <TextBlock Text="{Binding ContentLine2}" /> <TextBlock Text="{Binding ContentLine3}" /> </StackPanel> </Expander>
When you run this code you’ll notice that the header is blank. The bindings inside the HeaderTemplate don’t bind. I’m no expert, but this seems to be because the data context for the header is not inherited from the data context of the Expander… which seems a little odd to me, but I’m sure there’s a good reason.
The fix is simple enough, just add the attribute Header={Binding} to the Expander:
<Expander Header="{Binding}" Width="200"> ...
The bindings inside the HeaderTemplate should now work:
This is really helpful..I too had faced similar issue
Yes, That’s odd to me too, and finally thanks your solution.
I constantly spent my half an hour to read this blog’s posts every day along with a mug of coffee.
Instead of “Expander Title” can I show dynamic data from my viewmodel, in header title everytime in expander??
I don’t see why not – provided you raise a Property Changed Notification whenever the value changes in the viewmodel. See
A great Thank!!
I understand over read a long time, what I need for costumize my EXPANDER is a controltemplate, but is just not the time for me at the moment..;;(
You help me go further without frustration. The most Questions in the Web is about align the Header.
I think the Problem is not this really. If you need Expander.Header, the Design is perfect in Design Time, but you see only the Header-Text (first entry) in Lifetime. (If it is not so (by you..), please a comment for me..:)
You take me a middle way to understand, so I can go further..
Greetings from Leipzig
Ellen
PS. I’m a native German writer, and over years old.
Great… Thank you.
thanks. Very helpful! | https://codeoverload.wordpress.com/2012/03/04/wpf-expander-headertemplates-dont-forget-the-binding/ | CC-MAIN-2019-18 | refinedweb | 513 | 64.1 |
Hi I've been trying to code a bst based upon this link below:
I've adopted a similar strategy, whilst making some modifications, however for me I'm having a problem in that every time I go to add a new node to the bst, the root is always null, and thus no to *bst occur once the insert function has terminated.
I think I could use a pointer to a pointer to fix this, but what I don't understand is how the code in the above link works, when mine is basically the same.
Here's my code:
#include <stdio.h> #include <stdlib.h> #include <string.h> struct node{ struct node *left, *right; char *data; int count; }; struct bst{ struct node *root; }; bst *bst_create( ){ struct bst *bstTree; if ((bstTree = (struct bst *)malloc(sizeof(struct bst))) != NULL) { bstTree->root = NULL; return bstTree; } } node *create_node(char *data){ struct node *tempNode; if ((tempNode = (struct node *)malloc(sizeof(struct node))) != NULL) { tempNode->data = data; tempNode->left = NULL; tempNode->right = NULL; tempNode->count = 0; return tempNode; } } int bst_insert(bst *bstTree, char *data){ struct bst *tempBst = bstTree; struct tldnode *current = bstTree->root; struct tldnode *newNode = create_node(data); if ( current == NULL){ current = newNode; return 1; } for ( ; ;){ cmp = strcmp(data,current->data); if (cmp == 0){ current->count++; return 1; } if (cmp < 0) current = current->left; else current = current->right; if (current == NULL) break; } if (cmp < 0) current->left = newNode; else current->right = newNode; return 1; } | https://www.daniweb.com/programming/software-development/threads/388663/c-bst-tree-implementation | CC-MAIN-2017-43 | refinedweb | 240 | 53.14 |
Testing Guidelines¶
This section describes the testing framework and format standards for tests in Astropy core packages (this also serves as recommendations for affiliated packages).
Testing Framework
The testing framework used by astropy (and packages using the Astropy
package template) is the pytest framework,
accessed through the
python setup.py test command.
Note
The
pytest project was formerly called
py.test, and you may
see the two spellings used interchangeably in the documentation.
Testing Dependencies
As of Astropy 3.0, the dependencies used by the Astropy test runner are
provided by a separate package called pytest-astropy. This package provides
the
pytest dependency itself, in addition to several
pytest plugins
that are used by Astropy, and will also be of general use to other packages.
Since the testing dependencies are not actually required to install or use
Astropy, they are not included in
install_requires in
setup.py.
However, for technical reasons it is not currently possible to express these
dependencies in
tests_require either. Therefore,
pytest-astropy is
listed as an extra dependency using
extras_require in
setup.py.
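As an illustration (this is a sketch, not astropy's actual setup.py), declaring test-only dependencies through extras_require looks roughly like this:

```python
# Illustrative sketch only -- not astropy's actual setup.py.  A package
# declares optional dependency groups via extras_require; pip then installs
# them on request with `pip install packagename[test]`.
extras_require = {
    "test": ["pytest-astropy"],
}

# setuptools.setup(..., extras_require=extras_require) would register the
# group, so plain installs skip the testing packages entirely.
print(extras_require["test"])  # ['pytest-astropy']
```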
Developers who want to run the test suite will need to install the testing
package using pip:
> pip install pytest-astropy
A detailed description of the plugins can be found in the Pytest Plugins section.
Running Tests
There are currently three different ways to invoke Astropy tests. Each method invokes pytest to run the tests but offers different options when calling. To run the tests, you will need to make sure you have the pytest package (version 3.1 or later) installed.
In addition to running the Astropy tests, these methods can also be called so that they check Python source code for PEP8 compliance. All of the PEP8 testing options require the pytest-pep8 plugin, which must be installed separately.
setup.py test
The astropy core package and the Astropy package template provide a
test
setup command, invoked by running
python setup.py test while in the
package root directory. Run
python setup.py test --help to see the
options to the test command.
Since
python setup.py test wraps the widely-used pytest framework, you may
from time to time want to pass options to the
pytest command itself. For
example, the
-x option to stop after the first failure can be passed
through with the
--args argument:
> python setup.py test --args "-x"
pytest will look for files that look like tests in the current directory and all of its subdirectories recursively, and then run all the code that looks like tests within those files.
Turn on PEP8 checking by passing
--pep8 to the
test command. This will
turn off regular testing and enable PEP8 testing.
Note also that this test runner actually installs astropy into a temporary
directory and uses that for running the tests. This means that tests of things
like entry points or data file paths should act just like they would once
astropy is installed. The other two approaches described below do not do
this, and hence may give different results when run from the astropy source
code. Hence if you’re running the tests because you’ve modified code that might
be impacted by this, the
setup.py test approach is the recommended method.
astropy.test()
Tests can be run from within Astropy with:
import astropy astropy.test()
This will run all the default tests for Astropy.
Tests for a specific package can be run by specifying the package in the call
to the
test() function:
astropy.test(package='io.fits')
This method works only with package names that can be mapped to Astropy
directories. As an alternative you can test a specific directory or file
with the
test_path option:
astropy.test(test_path='wcs/tests/test_wcs.py')
The
test_path must be specified either relative to the working directory
or absolutely.
By default, astropy.test() will skip tests that retrieve data from the
internet. To turn these tests on, use the remote_data flag:
remote_data flag:
astropy.test(package='io.fits', remote_data=True)
In addition, the
test function supports any of the options that can be
passed to pytest.main(),
and convenience options
verbose= and
pastebin=.
Enable PEP8 compliance testing with
pep8=True in the call to
astropy.test. This will enable PEP8 checking and disable regular tests.
pytest
The test suite can be run directly from the native
pytest command. In this
case, it is important for developers to be aware that they must manually
rebuild any extensions by running
setup.py build_ext before testing.
In contrast to the case of running from
setup.py, the
--doctest-plus
and
--doctest-rst options are not enabled by default when running the
pytest command directly. These flags should be given explicitly if they are
needed.
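For reference, the --doctest-plus option collects doctests embedded in docstrings. The following is a minimal, illustrative sketch (the function is hypothetical, not part of astropy) of the kind of docstring example such a run would pick up:

```python
import doctest


def add_one(x):
    """Add one to the argument.

    Examples
    --------
    >>> add_one(3)
    4
    """
    return x + 1


if __name__ == "__main__":
    # The standard doctest module runs the embedded example in much the same
    # way pytest --doctest-plus would when it collects this module.
    failures = doctest.testmod().failed
    assert failures == 0
```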
Test-running options
Running parts of the test suite
It is possible to run only the tests for a particular subpackage or set of
subpackages. For example, to run only the
wcs tests from the
commandline:
python setup.py test -P wcs
Or, to run only the
wcs and
utils tests:
python setup.py test -P wcs,utils
Or from Python:
>>> import astropy
>>> astropy.test(package="wcs,utils")
You can also specify a single file to test from the commandline:
python setup.py test -t astropy/wcs/tests/test_wcs.py
When the
-t option is given a relative path, it is relative to the
installed root of astropy. When
-t is given a relative path to a
documentation
.rst file to test, it is relative to the root of the
documentation, i.e. the
docs directory in the source tree. For
example:
python setup.py test -t units/index.rst
Testing for open files
Astropy can test whether any of the unit tests inadvertently leave files open. Since this check greatly slows down the test run, it is turned off by default.
To use it from the commandline, do:
python setup.py test --open-files
To use it from Python, do:
>>> import astropy
>>> astropy.test(open_files=True)
For more information, see the documentation for the pytest-openfiles plugin.
Test coverage reports
Astropy can use coverage.py to generate test coverage reports. To generate a test coverage report, use:
python setup.py test --coverage
There is a coveragerc file that
defines files to omit as well as lines to exclude. It is installed
along with astropy so that the
astropy testing framework can use
it. In the source tree, it is at
astropy/tests/coveragerc.
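The entries below sketch what such a file typically contains; the values are illustrative, not the exact contents of astropy/tests/coveragerc:

```ini
# Illustrative coveragerc sketch -- see astropy/tests/coveragerc for the
# real file.
[run]
source = astropy
omit =
    */tests/*
    */setup_package.py

[report]
exclude_lines =
    pragma: no cover
    raise NotImplementedError
```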
Running tests in parallel
It is possible to speed up astropy’s tests using the pytest-xdist plugin. This plugin can be installed using pip:
pip install pytest-xdist
Once installed, tests can be run in parallel using the
'--parallel'
commandline option. For example, to use 4 processes:
python setup.py test --parallel=4
Pass
--parallel=auto to create the same number of processes as cores
on your machine.
Similarly, this feature can be invoked from Python:
>>> import astropy >>> astropy.test(parallel=4)
Running tests to catch permissions errors¶
It is possible to write code or tests that write into the source directory. This is not desirable because Python packages can be (and frequently are) installed in locations where the user may not have write permissions. To check for these cases, the test runner has an option to have the test-runner directory be set as read-only to ensure the tests are not writing to that location. This mode can be triggered by running the tests like so:
python setup.py test --readonly
Writing tests¶
pytest has the following test discovery rules:
-
test_*.pyor
*_test.pyfiles
-
Testprefixed classes (without an
__init__method)
-
test_prefixed functions and methods
Consult the test discovery rules for detailed information on how to name files and tests so that they are automatically discovered by pytest.
Simple example¶
The following example shows a simple function and a test to test this function:
def func(x): """Add one to the argument.""" return x + 1 def test_answer(): """Check the return value of func() for an example argument.""" assert func(3) == 5
If we place this in a
test.py file and then run:
pytest test.py
The result is:
============================= test session starts ============================== python: platform darwin -- Python 3.6.0 -- pytest-3.2.0 test object 1: /Users/username/tmp/test.py test.py F =================================== FAILURES =================================== _________________________________ test_answer __________________________________ def test_answer(): > assert func(3) == 5 E assert 4 == 5 E + where 4 = func(3) test.py:5: AssertionError =========================== 1 failed in 0.07 seconds ===========================
Where to put tests¶
Package-specific tests¶
Each package should include a suite of unit tests, covering as many of the public methods/functions as possible. These tests should be included inside each sub-package, e.g:
astropy/io/fits/tests/
tests directories should contain an
__init__.py file so that
the tests can be imported and so that they can use relative imports.
Regression tests¶
Any time a bug is fixed, and wherever possible, one or more regression tests should be added to ensure that the bug is not introduced in future. Regression tests should include the ticket URL where the bug was reported.
Working with data files¶
Tests that need to make use of a data file should use the
get_pkg_data_fileobj or
get_pkg_data_filename functions. These functions
search locally first, and then on the astropy data server or an arbitrary
URL, and return a file-like object or a local filename, respectively. They
automatically cache the data locally if remote data is obtained, and from
then on the local copy will be used transparently. See the next section for
note specific to dealing with the cache in tests.
They also support the use of an MD5 hash to get a specific version of a data
file. This hash can be obtained prior to submitting a file to the astropy
data server by using the
compute_hash function on a
local copy of the file.
Tests that may retrieve remote data should be marked with the
@pytest.mark.remote_data decorator, or, if a doctest, flagged with the
REMOTE_DATA flag. Tests marked in this way will be skipped by default by
astropy.test() to prevent test runs from taking too long. These tests can
be run by
astropy.test() by adding the
remote_data='any' flag. Turn on
the remote data tests at the command line with
python setup.py test
--remote-data=any.
It is possible to mark tests using
@pytest.mark.remote_data(source='astropy'), which can be used to indicate
that the only required data is from the server. To
enable just these tests, you can run the
tests with
python setup.py test --remote-data=astropy.
For more information on the
pytest-remotedata plugin, see
pytest-remotedata.
Examples¶
from ...config import get_data_filename def test_1(): """Test version using a local file.""" #if filename.fits is a local file in the source distribution datafile = get_data_filename('filename.fits') # do the test @pytest.mark.remote_data def test_2(): """Test version using a remote file.""" #this is the hash for a particular version of a file stored on the #astropy data server. datafile = get_data_filename('hash/94935ac31d585f68041c08f87d1a19d4') # do the test def doctest_example(): """ >>> datafile = get_data_filename('hash/94935') # doctest: +REMOTE_DATA """ pass
The
get_remote_test_data will place the files in a temporary directory
indicated by the
tempfile module, so that the test files will eventually
get removed by the system. In the long term, once test data files become too
large, we will need to design a mechanism for removing test data immediately.
Tests that use the file cache¶
By default, the Astropy test runner sets up a clean file cache in a temporary
directory that is used only for that test run and then destroyed. This is to
ensure consistency between test runs, as well as to not clutter users’ caches
(i.e. the cache directory returned by
get_cache_dir) with
test files.
However, some test authors (especially for affiliated packages) may find it
desirable to cache files downloaded during a test run in a more permanent
location (e.g. for large data sets). To this end the
set_temp_cache helper may be used. It can be used either as
a context manager within a test to temporarily set the cache to a custom
location, or as a decorator that takes effect for an entire test function
(not including setup or teardown, which would have to be decorated separately).
Furthermore, it is possible to set an option
astropy_cache_dir in the
pytest config file which sets the cache location for the entire test run. A
--astropy-cache-dir command-line option is also supported (which overrides
all other settings). Currently it is not directly supported by the
./setup.py test command, so it is necessary to use it with the
-a
argument like:
$ ./setup.py test -a "--astropy-cache-dir=/path/to/custom/cache/dir"
Tests that create files¶
Tests may often be run from directories where users do not have write permissions so tests which create files should always do so in temporary directories. This can be done with the pytest tmpdir function argument or with Python’s built-in tempfile module.
Setting up/Tearing down tests¶
In some cases, it can be useful to run a series of tests requiring something to be set up first. There are four ways to do this:
Module-level setup/teardown¶
If the
setup_module and
teardown_module functions are specified in a
file, they are called before and after all the tests in the file respectively.
These functions take one argument, which is the module itself, which makes it
very easy to set module-wide variables:
def setup_module(module): """Initialize the value of NUM.""" module.NUM = 11 def add_num(x): """Add pre-defined NUM to the argument.""" return x + NUM def test_42(): """Ensure that add_num() adds the correct NUM to its argument.""" added = add_num(42) assert added == 53
We can use this for example to download a remote test data file and have all the functions in the file access it:
import os def setup_module(module): """Store a copy of the remote test file.""" module.DATAFILE = get_remote_test_data('94935ac31d585f68041c08f87d1a19d4') def test(): """Perform test using cached remote input file.""" f = open(DATAFILE, 'rb') # do the test def teardown_module(module): """Clean up remote test file copy.""" os.remove(DATAFILE)
Class-level setup/teardown¶
Tests can be organized into classes that have their own setup/teardown functions. In the following
def add_nums(x, y): """Add two numbers.""" return x + y class TestAdd42(object): """Test for add_nums with y=42.""" def setup_class(self): self.NUM = 42 def test_1(self): """Test behavior for a specific input value.""" added = add_nums(11, self.NUM) assert added == 53 def test_2(self): """Test behavior for another input value.""" added = add_nums(13, self.NUM) assert added == 55 def teardown_class(self): pass
In the above example, the
setup_class method is called first, then all the
tests in the class, and finally the
teardown_class is called.
Method-level setup/teardown¶
There are cases where one might want setup and teardown methods to be run
before and after each test. For this, use the
setup_method and
teardown_method methods:
def add_nums(x, y): """Add two numbers.""" return x + y class TestAdd42(object): """Test for add_nums with y=42.""" def setup_method(self, method): self.NUM = 42 def test_1(self): """Test behavior for a specific input value.""" added = add_nums(11, self.NUM) assert added == 53 def test_2(self): """Test behavior for another input value.""" added = add_nums(13, self.NUM) assert added == 55 def teardown_method(self, method): pass
Function-level setup/teardown¶
Finally, one can use
setup_function and
teardown_function to define a
setup/teardown mechanism to be run before and after each function in a module.
These take one argument, which is the function being tested:
def setup_function(function): pass def test_1(self): """First test.""" # do test def test_2(self): """Second test.""" # do test def teardown_function(function): pass
Parametrizing tests¶
If you want to run a test several times for slightly different values, then
it can be advantageous to use the
pytest option to parametrize tests.
For example, instead of writing:
def test1(): assert type('a') == str def test2(): assert type('b') == str def test3(): assert type('c') == str
You can use the
parametrize decorator to loop over the different
inputs:
@pytest.mark.parametrize(('letter'), ['a', 'b', 'c']) def test(letter): """Check that the input is a string.""" assert type(letter) == str
Tests requiring optional dependencies¶
For tests that test functions or methods that require optional dependencies (e.g. Scipy), pytest should be instructed to skip the test if the dependencies are not present. The following example shows how this should be done:
import pytest try: import scipy HAS_SCIPY = True except ImportError: HAS_SCIPY = False @pytest.mark.skipif('not HAS_SCIPY') def test_that_uses_scipy(): ...
In this way, the test is run if Scipy is present, and skipped if not. No tests should fail simply because an optional dependency is not present.
Using pytest helper functions¶
If your tests need to use pytest helper functions, such as
pytest.raises, import
pytest into your test module like so:
import pytest
Prior to Astropy 2.0, it was possible to import pytest from a bundled version using e.g.:
from ...tests.helper import pytest
but this is no longer the recommended method.
Testing warnings¶
In order to test that warnings are triggered as expected in certain
situations, you can use the
astropy.tests.helper.catch_warnings
context manager. Unlike the
warnings.catch_warnings context manager
in the standard library, this one will reset all warning state before
hand so one is assured to get the warnings reported, regardless of
what errors may have been emitted by other tests previously. Here is
a real-world example:
from astropy.tests.helper import catch_warnings with catch_warnings(MergeConflictWarning) as warning_lines: # Test code which triggers a MergeConflictWarning out = table.vstack([t1, t2, t4], join_type='outer') assert warning_lines[0].category == metadata.MergeConflictWarning assert ("In merged column 'a' the 'units' attribute does not match (cm != m)" in str(warning_lines[0].message))
Note
Within pytest there is also the option of using the
recwarn
function argument to test that warnings are triggered. This method has
been found to be problematic in at least one case (pull request 1174)
so the
astropy.tests.helper.catch_warnings context manager is
preferred.
Testing configuration parameters¶
In order to ensure reproducibility of tests, all configuration items are reset to their default values when the test runner starts up.
Sometimes you’ll want to test the behavior of code when a certain
configuration item is set to a particular value. In that case, you
can use the
astropy.config.ConfigItem.set_temp context manager to
temporarily set a configuration item to that value, test within that
context, and have it automatically return to its original value.
For example:
def test_pprint(): from ... import conf with conf.set_temp('max_lines', 6): # ...
Marking blocks of code to exclude from coverage¶
Blocks of code may be ignored by the coverage testing by adding a
comment containing the phrase
pragma: no cover to the start of the
block:
if this_rarely_happens: # pragma: no cover this_call_is_ignored()
Image tests with pytest-mpl¶
Running image tests¶
We make use of the pytest-mpl plugin to write tests where we can compare the output of plotting commands with reference files on a pixel-by-pixel basis (this is used for instance in astropy.visualization.wcsaxes).
To run the Astropy tests with the image comparison, use:
python setup.py test -a "--mpl" --remote-data
However, note that the output can be very sensitive to the version of Matplotlib as well as all its dependencies (e.g. freetype), so we recommend running the image tests inside a Docker container which has a frozen set of package versions (Docker containers can be thought of as mini virtual machines). We have made a set of Docker container images that can be used for this. Once you have installed Docker, to run the Astropy tests with the image comparison inside a Docker container, make sure you are inside the Astropy repository (or the repository of the package you are testing) then do:
docker run -it -v ${PWD}:/repo astropy/image-tests-py35-mpl300:1.3 /bin/bash
This will start up a bash prompt in the Docker container, and you should see something like:
root@8173d2494b0b:/#
You can now go to the
/repo directory, which is the same folder as
your local version of the repository you are testing:
cd /repo
You can then run the tests as above:
python3 setup.py test -a "--mpl" --remote-data
Type
exit to exit the container.
You can find the names of the available Docker images on the Docker Hub.
Writing image tests¶
The README.rst
for the plugin contains information on writing tests with this plugin. The only
key addition compared to those instructions is that you should set
baseline_dir:
from astropy.tests.image_tests import IMAGE_REFERENCE_DIR @pytest.mark.mpl_image_compare(baseline_dir=IMAGE_REFERENCE_DIR)
This is because since the reference image files would contribute significantly to the repository size, we instead store them on the site. The downside is that it is a little more complicated to create or re-generate reference files, but we describe the process here.
Generating reference images¶
Once you have a test for which you want to (re-)generate reference images, start up one of the Docker containers using e.g.:
docker run -it -v ${PWD}:/repo astropy/image-tests-py35-mpl300:1.3 /bin/bash
then run the tests inside
/repo with the
--mpl-generate-path argument, e.g:
cd repo python3 setup.py test -a "--mpl --mpl-generate-path=reference_tmp" --remote-data
This will create a
reference_tmp folder and put the generated reference
images inside it - the folder will be available in the repository outside of
the Docker container. Type
exit to exit the container.
Make sure you generate images for the different supported Matplotlib versions using the available containers.
Uploading the reference images¶
Next, we need to add these images to the server. To do this, open a pull request to this repository. The reference images for Astropy tests should go inside the testing/astropy directory. In that directory are folders named as timestamps. If you are simply adding new tests, add the reference files to the most recent directory.
If you are re-generating baseline images due to changes in Astropy, make a new
timestamp directory by copying one the most recent one, then replace any
baseline images that have changed. Note that due to changes between Matplotlib
versions, we need to add the whole set of reference images for each major
Matplotlib version. Therefore, in each timestamp folder, there are folders named
e.g.
1.4.x and
1.5.x.
Once the reference images are merged in and available on, update the timestamp in the
IMAGE_REFERENCE_DIR
variable in the
astropy.tests.image_tests sub-module. Because the timestamp
is hard-coded, adding a new timestamp directory will not mess with testing for
released versions of Astropy, so you can easily add and tweak a new timestamp
directory while still working on a pull request to Astropy.
Writing doctests¶
A doctest in Python is a special kind of test that is embedded in a
function, class, or module’s docstring, or in the narrative Sphinx
documentation, and is formatted to look like a Python interactive
session–that is, they show lines of Python code entered at a
>>>
prompt followed by the output that would be expected (if any) when
running that code in an interactive session.
The idea is to write usage examples in docstrings that users can enter verbatim and check their output against the expected output to confirm that they are using the interface properly.
Furthermore, Python includes a
doctest module that can detect these
doctests and execute them as part of a project’s automated test suite. This
way we can automatically ensure that all doctest-like examples in our
docstrings are correct.
The Astropy test suite automatically detects and runs any doctests in the
astropy source code or documentation, or in packages using the Astropy test
running framework. For example doctests and detailed documentation on how to
write them, see the full
doctest documentation.
Note
Since the narrative Sphinx documentation is not installed alongside
the astropy source code, it can only be tested by running
python
setup.py test, not by
import astropy; astropy.test().
For more information on the
pytest-doctestplus plugin used by Astropy, see
pytest-doctestplus.
Skipping doctests¶
Sometimes it is necessary to write examples that look like doctests but that are not actually executable verbatim. An example may depend on some external conditions being fulfilled, for example. In these cases there are a few ways to skip a doctest:
Next to the example add a comment like:
# doctest: +SKIP. For example:
>>> import os >>> os.listdir('.') # doctest: +SKIP
In the above example we want to direct the user to run
os.listdir('.')but we don’t want that line to be executed as part of the doctest.
To skip tests that require fetching remote data, use the
REMOTE_DATAflag instead. This way they can be turned on using the
--remote-dataflag when running the tests:
>>> datafile = get_data_filename('hash/94935') # doctest: +REMOTE_DATA
Astropy’s test framework adds support for a special
__doctest_skip__variable that can be placed at the module level of any module to list functions, classes, and methods in that module whose doctests should not be run. That is, if it doesn’t make sense to run a function’s example usage as a doctest, the entire function can be skipped in the doctest collection phase.
The value of
__doctest_skip__should be a list of wildcard patterns for all functions/classes whose doctests should be skipped. For example:
__doctest_skip__ = ['myfunction', 'MyClass', 'MyClass.*']
skips the doctests in a function called
myfunction, the doctest for a class called
MyClass, and all methods of
MyClass.
Module docstrings may contain doctests as well. To skip the module-level doctests include the string
'.'in
__doctest_skip__.
To skip all doctests in a module:
__doctest_skip__ = ['*']
In the Sphinx documentation, a doctest section can be skipped by making it part of a
doctest-skipdirective:
.. doctest-skip:: >>> # This is a doctest that will appear in the documentation, >>> # but will not be executed by the testing framework. >>> 1 / 0 # Divide by zero, ouch!
It is also possible to skip all doctests below a certain line using a
doctest-skip-allcomment. Note the lack of
::at the end of the line here:
.. doctest-skip-all All doctests below here are skipped...
__doctest_requires__is a way to list dependencies for specific doctests. It should be a dictionary mapping wildcard patterns (in the same format as
__doctest_skip__) to a list of one or more modules that should be importable in order for the tests to run. For example, if some tests require the scipy module to work they will be skipped unless
import scipyis possible. It is also possible to use a tuple of wildcard patterns as a key in this dict:
__doctest_requires__ = {('func1', 'func2'): ['scipy']}
Having this module-level variable will require
scipyto be importable in order to run the doctests for functions
func1and
func2in that module.
In the Sphinx documentation, a doctest requirement can be notated with the
doctest-requiresdirective:
.. doctest-requires:: scipy >>> import scipy >>> scipy.hamming(...)
Skipping output¶
One of the important aspects of writing doctests is that the example output can be accurately compared to the actual output produced when running the test.
The doctest system compares the actual output to the example output verbatim
by default, but this not always feasible. For example the example output may
contain the
__repr__ of an object which displays its id (which will change
on each run), or a test that expects an exception may output a traceback.
The simplest way to generalize the example output is to use the ellipses
.... For example:
>>> 1 / 0 Traceback (most recent call last): ... ZeroDivisionError: integer division or modulo by zero
This doctest expects an exception with a traceback, but the text of the
traceback is skipped in the example output–only the first and last lines
of the output are checked. See the
doctest documentation for
more examples of skipping output.
Ignoring all output¶
Another possibility for ignoring output is to use the
# doctest: +IGNORE_OUTPUT flag. This allows a doctest to execute (and
check that the code executes without errors), but allows the entire output
to be ignored in cases where we don’t care what the output is. This differs
from using ellipses in that we can still provide complete example output, just
without the test checking that it is exactly right. For example:
>>> print('Hello world') We don't really care what the output is as long as there were no errors...
Handling float output¶
Some doctests may produce output that contains string representations of floating point values. Floating point representations are often not exact and contain roundoffs in their least significant digits. Depending on the platform the tests are being run on (different Python versions, different OS, etc.) the exact number of digits shown can differ. Because doctests work by comparing strings this can cause such tests to fail.
To address this issue, the
pytest-doctestplus plugin provides support for a
FLOAT_CMP flag that can be used with doctests. For example:
>>> 1.0 / 3.0 # doctest: +FLOAT_CMP 0.333333333333333311
When this flag is used, the expected and actual outputs are both parsed to find
any floating point values in the strings. Those are then converted to actual
Python
float objects and compared numerically. This means that small
differences in representation of roundoff digits will be ignored by the
doctest. The values are otherwise compared exactly, so more significant
(albeit possibly small) differences will still be caught by these tests.
Continuous integration¶
Overview¶
Astropy uses the following continuous integration (CI) services:
These continuously test the package for each commit and pull request that is pushed to GitHub to notice when something breaks.
Astropy and many affiliated packages use an external package called
ci-helpers to provide
support for the generic parts of the CI systems.
ci-helpers consists of
a set of scripts that are used by the
.travis.yml and
appveyor.yml
files to set up the conda environment, and install dependencies.
Dependencies can be customized for different packages using the appropriate
environment variables in
.travis.yml and
appveyor.yml. For more
details on how to set up this machinery, see the package-template and ci-helpers.
The 32-bit tests on CircleCI use a pre-defined Docker image defined here which includes a 32-bit Python environment. If you want to run tests for packages in the same way, you can use the same set-up on CircleCI as the core package, but just be sure to install Astropy first using:
easy_install pip pip install astropy
For convenience, you can also use the
astropy/affiliated-32bit-test-env
Docker image instead of
astropy/astropy-32bit-test-env - the former includes
the latest stable version of Astropy pre-installed.
In some cases, you may see failures on continuous integration services that you do not see locally, for example because the operating system is different, or because the failure happens with only 32-bit Python. The following sections explain how you can reproduce specific builds locally.
Reproducing failing 32-bit builds¶
If you want to run your tests in the same 32-bit Python environment that CircleCI uses, start off by installing Docker if you don’t already have it installed. Docker can be installed on a variety of different operating systems.
Then, make sure you have a version of the git repository (either the main Astropy repository or your fork) for which you want to run the tests. Go to that directory, then run Docker with:
$ docker run -i -v ${PWD}:/astropy_src -t astropy/astropy-32bit-test-env:1.6 bash
This will put you in the bash shell inside the Docker container. Once inside,
you can go to the
astropy_src directory, and you should see the files that
are in your local git repository:
root@5e2b89d7b07c:/# cd /astropy_src root@5e2b89d7b07c:/astropy_src# ls ah_bootstrap.py CONTRIBUTING.md pip-requirements-doc appveyor.yml docs README.rst astropy examples readthedocs.yml astropy_helpers ez_setup.py setup.cfg cextern licenses setup.py CHANGES.rst MANIFEST.in static circle.yml pip-requirements CITATION pip-requirements-dev
You can then run the tests with:
root@5e2b89d7b07c:/astropy_src# python setup.py test
Pytest Plugins¶
The following
pytest plugins are maintained and used by Astropy. They are
included in the
pytest-astropy package, which is now required for testing
Astropy. More information on all of the plugins provided by the
pytest-astropy package (including dependencies not maintained by Astropy)
can be found here.
pytest-remotedata¶
The pytest-remotedata plugin allows developers to control whether to run tests that access data from the internet. The plugin provides two decorators that can be used to mark individual test functions or entire test classes:
@pytest.mark.remote_datafor tests that require data from the internet
@pytest.mark.internet_offfor tests that should run only when there is no internet access. This is useful for testing local data caches or fallbacks for when no network access is available.
The plugin also adds the
--remote-data option to the
pytest command
(which is also made available through the Astropy test runner).
If the
--remote-data option is not provided when running the test suite, or
if
--remote-data=none is provided, all tests that are marked with
remote_data will be skipped. All tests that are marked with
internet_off will be executed. Any test that attempts to access the
internet but is not marked with
remote_data will result in a failure.
Providing either the
--remote-data option, or
--remote-data=any, will
cause all tests marked with
remote_data to be executed. Any tests that are
marked with
internet_off will be skipped.
Running the tests with
--remote-data=astropy will cause only tests that
receive remote data from Astropy data sources to be run. Tests with any other
data sources will be skipped. This is indicated in the test code by marking
test functions with
@pytest.mark.remote_data(source='astropy'). Tests
marked with
internet_off will also be skipped in this case.
Also see Working with data files.
pytest-doctestplus¶
The pytest-doctestplus plugin provides advanced doctest features, including:
- handling doctests that use remote data in conjunction with the
pytest-remotedataplugin above (see Working with data files)
- approximate floating point comparison for doctests that produce floating point results (see Handling float output)
- skipping particular classes, methods, and functions when running doctests (see Skipping doctests)
- optional inclusion of
*.rstfiles for doctests
This plugin provides two command line options:
--doctest-plus for enabling
the advanced features mentioned above, and
--doctest-rst for including
*.rst files in doctest collection.
The Astropy test runner enables both of these options by default. When running
the test suite directly from
pytest (instead of through the Astropy test
runner), it is necessary to explicitly provide these options when they are
needed.
pytest-openfiles¶
The pytest-openfiles plugin allows for the detection of open I/O resources
at the end of unit tests. This plugin adds the
--open-files option to the
pytest command (which is also exposed through the Astropy test runner).
When running tests with
--open-files, if a file is opened during the course
of a unit test but that file not closed before the test finishes, the test
will fail. This is particularly useful for testing code that manipulates file
handles or other I/O resources. It allows developers to ensure that this kind
of code properly cleans up I/O resources when they are no longer needed.
Also see Testing for open files. | http://docs.astropy.org/en/latest/development/testguide.html | CC-MAIN-2019-26 | refinedweb | 5,910 | 55.24 |
HTTP provides a mechanism, using the "Upgrade" request header, to bootstrap a new TCP protocol from HTTP. Basically, an upgrade token is sent in the request, and if the server understands the protocol specified by the token it responds with a 101; after that HTTP response is complete, the bidirectional TCP stream speaks the new protocol.
The websocket protocol uses this approach.
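For illustration, a websocket upgrade handshake looks roughly like this (the protocol-specific headers, e.g. the Sec-WebSocket-* fields, are omitted here; this is a sketch of the generic upgrade mechanism, not an exact trace from the patch):

```http
GET /chat HTTP/1.1
Host: example.com
Connection: Upgrade
Upgrade: websocket

HTTP/1.1 101 Switching Protocols
Connection: Upgrade
Upgrade: websocket

(from this point on, both directions of the TCP stream speak the
 websocket protocol instead of HTTP)
```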
The patch here provides a new interface for upgradable http channels, and teaches our http stack to implement it.
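The interface names and signatures below are from memory and may not match the patch exactly; they are only meant to illustrate the shape of the API. Roughly, a consumer asks the channel to negotiate an upgrade and supplies a listener that receives the raw transport once the 101 response completes:

```cpp
// Illustrative XPIDL sketch -- names and signatures here are
// assumptions for illustration, not necessarily what the patch lands.
interface nsIHttpUpgradeListener : nsISupports
{
    // Called after the 101 response is complete; from this point on the
    // streams carry the new protocol, not HTTP.
    void onTransportAvailable(in nsISocketTransport aTransport,
                              in nsIAsyncInputStream aSocketIn,
                              in nsIAsyncOutputStream aSocketOut);
};

// On the channel side, something like:
//   httpChannelInternal.HTTPUpgrade("websocket", listener);
```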
Created attachment 518091 [details] [diff] [review]
http upgrade v1
Created attachment 520295 [details] [diff] [review]
http upgrade v2
Updated this patch to deal with data past the 101 response that might have accidentally been read by the transaction instead of being passed to the upgraded stream.
Few nits:
- couldn't we use a pipe [1] for what nsPreloadedStream is used for? I assume it will do the same for you. No need to duplicate code we already have.
- please add good documentation to your new interfaces; it is not clear which one is for what
- you are missing tests for all the new code paths you add
[1]
(In reply to comment #3)
> Few nits:
> - couldn't we use a pipe [1] for what nsPreloadedStream is used for? I assume it
> will do the same for you. No need to duplicate code we already have.
I'm not seeing it - maybe you can help me. I could use a pipe, but then I would be responsible for producing _all_ of the data that is read out of the input stream inside websockets.
I don't really want to do that - it is generally fine to read right off the socket, except for the first few bytes, which might need to be replayed because we read too much at the HTTP layer. The preloaded stream lets me replay that handful of bytes and then just get out of the way, instead of having one service reading from the socket and writing to the pipe while another reads it back out of the pipe. But maybe I misunderstand how the pipe works?
> - you are missing tests for all the new code paths you add
the websocket mochitests actually exercise this thoroughly, though it would be nice to add some dedicated unit tests, yes. But it is exercised by the test base.
(In reply to comment #4)
> (In reply to comment #3)
> > Few nits:
> > - couldn't we use a pipe [1] for what nsPreloadedStream is used for? I assume it
> > will do the same for you. No need to duplicate code we already have.
>
> I'm not seeing it -
Ah, I missed the stream you read after the buffer is consumed. Then you really need a new class, there is nothing known to me that could do this for you. We have a multiplex input stream, but it is not async.
You might want to move this new class and the interface to xpcom/io, side by side with nsIAsync*Streams.
> the websocket mochitests actually exercise this thoroughly
Cool.
(In reply to comment #5)
> You might want to move this new class and the interface to xpcom/io, side by
> side with nsIAsync*Streams.
Hmm. This is just a class, maybe create an interface as this could be useful for other as well and put it to xpcom? Up to you.
Created attachment 526522 [details] [diff] [review]
http upgrade v3
udpate for bitrot only
How does this related to bug 276813?
(In reply to comment #8)
> How does this related to bug 276813?
I have my doubts about implementing that bug as its written.. but setting that aside, this patch enables our HTTP stack to negotiate other protocols to be bootstrapped out of HTTP and hand over a naked stream to the new protocol handler if the negotiation completes.
That is something 276813 needs and therefore, I guess this is a necessary component of that.
It is of course also something websockets needs.
Interestingly, it is something SPDY needs too.
Comment on attachment 526522 [details] [diff] [review]
http upgrade v3
Review of attachment 526522 [details] [diff] [review]:
-----------------------------------------------------------------
::: netwerk/base/src/nsPreloadedStream.cpp
@@ +42,5 @@
> +#include "nsThreadUtils.h"
> +#include "nsAlgorithm.h"
> +#include "prmem.h"
> +
> +namespace mozilla { namespace net {
Looks like netwerk puts the two declarations on two separate lines, please follow that style.
@@ +55,5 @@
> + mOffset(0),
> + mLen(datalen)
> +{
> + mBuf = (char *) moz_xmalloc(datalen);
> + memcpy (mBuf, data, datalen);
Please remove the space before (
@@ +60,5 @@
> +}
> +
> +nsPreloadedStream::~nsPreloadedStream()
> +{
> + moz_free (mBuf);
same
@@ +91,5 @@
> + if (!mLen)
> + return mStream->Read(aBuf, aCount, _retval);
> +
> + PRUint32 toRead = NS_MIN(mLen, aCount);
> + memcpy (aBuf, mBuf + mOffset, toRead);
No space before (
@@ +108,5 @@
> + return mStream->ReadSegments(aWriter, aClosure, aCount, result);
> +
> + *result = 0;
> + while (mLen > 0) {
> + PRUint32 toRead = NS_MIN(mLen, aCount);
If mLen > aCount, your loop does the wrong thing, calling aWriter with 0 bytes. The loop condition should probably be mLen > 0 && aCount > 0.
@@ +115,5 @@
> +
> + rv = aWriter(this, aClosure, mBuf + mOffset, *result, toRead, &didRead);
> +
> + if (NS_FAILED(rv)) {
> + return (*result > 0) ? NS_OK : rv;
This should always return NS_OK
@@ +140,5 @@
> + mLen = 0;
> + return mStream->CloseWithStatus(aStatus);
> +}
> +
> +class RunOnThread : public nsIRunnable
You could inherit from nsRunnable instead and save the XPCOM boilerplate
@@ +176,5 @@
> + aEventTarget);
> +
> + nsCOMPtr<nsIRunnable> event =
> + new RunOnThread(this, aCallback);
> + return aEventTarget->Dispatch(event, nsIEventTarget::DISPATCH_NORMAL);
aEventTarget may be null:
I think you want:
if (!aEventTarget) {
aCallback->OnInputStreamReady(this);
} else {
// your existing code
}
::: netwerk/base/src/nsPreloadedStream.h
@@ +42,5 @@
> +
> +#include "nsIAsyncInputStream.h"
> +#include "nsCOMPtr.h"
> +
> +namespace mozilla { namespace net {
Same as in the .h file - two different lines, please.
@@ +44,5 @@
> +#include "nsCOMPtr.h"
> +
> +namespace mozilla { namespace net {
> +
> +class nsPreloadedStream : public nsIAsyncInputStream
Please, add a comment about what this class does and how it is meant to be used.
@@ +53,5 @@
> + NS_DECL_NSIASYNCINPUTSTREAM
> +
> + nsPreloadedStream(nsIAsyncInputStream *aStream,
> + const char *data, PRUint32 datalen);
> + virtual ~nsPreloadedStream();
Make this private and nonvirtual
::: netwerk/protocol/http/nsHttpChannel.cpp
@@ +665,5 @@
>
> + if (mUpgradeProtocolCallback) {
> + mRequestHead.SetHeader(nsHttp::Upgrade, mUpgradeProtocol, PR_FALSE);
> + mRequestHead.SetHeader(nsHttp::Connection,
> + nsCAutoString(nsHttp::Upgrade.get()), PR_TRUE);
nsCAutoString -> nsDependentCString
@@ +4071,3 @@
> // authentication request over it. this applies to connection based
> // authentication schemes only. for request based schemes, conn is not
> // needed, so it may be null.
You really should rewrite this comment now that the connection is always initialized
@@ +4855,5 @@
> +nsHttpChannel::HTTPUpgrade(const nsACString &aProtocolName,
> + nsIHttpUpgradeChannelListener *aListener)
> +{
> + if (!aListener || aProtocolName.IsEmpty())
> + return NS_ERROR_NOT_INITIALIZED;
NOT_INITIALIZED -> INVALID_ARG?
I might also use NS_ENSURE_ARG(_POINTER) so that there's a warning message on the console when this happens.
::: netwerk/protocol/http/nsHttpConnection.cpp
@@ +440,5 @@
> + if (!upgradeReq || !upgradeResp ||
> + !nsHttp::FindToken(upgradeResp, upgradeReq,
> + HTTP_HEADER_VALUE_SEPS)) {
> + LOG(("HTTP 101 Upgrade header mismatch req = %s, resp = %s\n",
> + upgradeReq ? upgradeReq : "N/A",
You don't really need the check, PR_LOG can handle a null string:
@@ +516,5 @@
> +{
> + LOG(("nsHttpConnection::PushBack [this=%p, length=%d]\n", this, length));
> +
> + if (mInputOverflow) {
> + LOG(("nsHttpConnection::PushBack only one buffer supported"));
Maybe make this an NS_ERROR/NS_ABORT_IF_FALSE?
::: netwerk/protocol/http/nsHttpPipeline.c?
::: netwerk/protocol/http/nsHttpTransaction.h
@@ +210,5 @@
> PRPackedBool mStatusEventPending;
> PRPackedBool mHasRequestBody;
> PRPackedBool mSSLConnectFailed;
> PRPackedBool mHttpResponseMatched;
> + PRPackedBool mPreserveStream : 1; // in case of Upgrade
I'd prefer you not to use the :1 for just one variable here. It won't save you any space, anyway.
::: netwerk/protocol/http/nsIHttpUpgradeChannel.idl
@@ +44,5 @@
> +
> +[scriptable, uuid(5644af88-09e1-4fbd-83da-f012b3b30180)]
> +interface nsIHttpUpgradeChannelListener : nsISupports
> +{
> + void OnTransportAvailable(in nsISocketTransport aTransport,
lowercase O
@@ +51,5 @@
> +};
> +
> +
> +[scriptable, uuid(9363fd96-af59-47e8-bddf-1d5e91acd336)]
> +interface nsIHttpUpgradeChannel : nsISupports
Please, add a comment about how this interface can be used.
Thank you for the review, I greatly appreciate it. There is nothing in here very structural at all so I can get it turned around asap.
> > + PRPackedBool mPreserveStream : 1; // in case of Upgrade
>
> I'd prefer you not to use the :1 for just one variable here. It won't save
> you any space, anyway.
>
ha! that's a botched merge of my out-of-tree patches with a recently landed patch that changed all those other flags from int foo:1 to be prpackedbool. Thanks for catching?
My inclination is strongly that this is the correct code for the class, which I suppose could be changed to an assert of some sort. There is no logical reason you cannot pipeline an upgrade (though I wouldn't pipeline after one!), though its true we don't for reasons of conservatism. I didn't make it an assert pretty much because I have an otherwise unrelated changeset that relies on the nshttppipeline structure a lot more often (to preserve the option of pipelining at a wider variety of points in time) and that change makes sense in that context.. but the change you quote is due to the logic of this patch, so carrying it over there is a bit of a catch-22 with them both unlanded. I'd prefer to just leave it in the upgrade patch as a harmless belt-plus-suspenders thing for now.
> @@ +4071,3 @@
> > // authentication request over it. this applies to connection based
> > // authentication schemes only. for request based schemes, conn is not
> > // needed, so it may be null.
>
> You really should rewrite this comment now that the connection is always
> initialized
>
you lost me on that one. Can you rephrase?
Created attachment 530771 [details] [diff] [review]
http upgrade v4
Websockets on the move - I love it!
Patch updated to reflect review comments in comment 10, modulo my small reservations in comment 11 and comment 12.
I'm not a process expert, but as I understand it this needs a sr because it adds an idl? Can you provide that service as well?
(In reply to comment #12)
> > You really should rewrite this comment now that the connection is always
> > initialized
> >
>
> you lost me on that one. Can you rephrase?
The comment implies that conn can be null sometimes. But with your change, it won't be null. So you should change the comment so that it still makes sense.
(In reply to comment #13)
> I'm not a process expert, but as I understand it this needs a sr because it
> adds an idl? Can you provide that service as well?
I can't; the superreview must be done by someone else. Quoting:
"This means that one reviewer cannot provide both review and super-review on a single patch."
Created attachment 530800 [details] [diff] [review]
http upgrade v5
updated comment as per comment 14
Comment on attachment 530800 [details] [diff] [review]
http upgrade v5
Review of attachment 530800 [details] [diff] [review]:
-----------------------------------------------------------------
Seems like there really should be unit tests for this, but I guess our frameworks wouldn't support that too well?
::: netwerk/base/src/nsPreloadedStream.cpp
@@ +155,5 @@
> + return NS_OK;
> + }
> +
> +private:
> + ~RunOnThread() {}
so this will work, but since it does inherit from nsRunnable, this destructor will be virtual anyway, so I'd make it public and virtual here.
::: netwerk/protocol/http/nsIHttpUpgradeChannel.idl
@@ +51,5 @@
> + * used as the bootstrapping channel and provide an implementation of
> + * nsIHttpUpgradeChannelListener that will have its onTransportAvailable()
> + * method invoked if a matching 101 is processed. The arguments to
> + * onTransportAvailable provide the new protocol the low level tranport
> + * streams that are no longer used by HTTP.
Please also add a note that onStartRequest/onStopRequest will be called even for the upgrade case, but then the listener gets full control over the socket. (That is what happens, right?)
Created attachment 531385 [details] [diff] [review]
http upgrade v6
updates from comment 16, carrying forward r=biesi.. asking for sr?
Comment on attachment 531385 [details] [diff] ...
Please put the big comment describing what the class does _above_ the include guard, and formatted like so:
/**
* Comment
* here
*/
so that it will show up in mxr.
Is there a good reason to add the new upgrade channel interface instead of just putting a new method on nsIHttpChannel or nsIHttpChannelInternal?
In the interface comments in the upgrade channel idl, s/nsIReqestObserver/nsIRequestObserver/ (missing 'u').
sr=me given an adequate explanation for the new upgrade channel interface.
Boris - thanks for taking the time for the review!
(In reply to comment #18)
> Comment on attachment 531385 [details] [diff] [review] ...
It's probably more accurate to say I wasn't away of multiplex input stream because I worked my way through the possibilities by seeing what classes implemented nsIAsyncInputStream... other than the "async" issue the other mismatch is that the multiplexed input stream takes N streams, while I really had 1 stream and 1 small buffer.. that buffer certainly could be turned into a stream but it is just adding work.
> Is there a good reason to add the new upgrade channel interface instead of
> just putting a new method on nsIHttpChannel or nsIHttpChannelInternal?
I have a strong bias against changing interfaces when just adding things. The fact that COM lets me extend implementations with multiple interfaces is something (maybe the only thing) I really like about it. This is all doubly true when dealing with a prominent interface like nsIHttpChannel.
That being said, I've received feedback recently from a number of sources along the lines of "just break it". So I'll probably incorporate that pov going forward ;)..
As for "Internal", the interface has applicability beyond internal things.. so I didn't think it apropos.
> sr=me given an adequate explanation for the new upgrade channel interface.
I'll update the patch for the changes in comments. Please let me know if I should proceed or you would like changes in the idl arrangement. Thanks!
Created attachment 532219 [details] [diff] [review]
http upgrade v7
updated for sr comments in comment 18 and comment 19
carry forward r=biesi sr=bz
> As for "Internal", the interface has applicability beyond internal things
Ah, ok. We've been using nsIHttpChannelInternal as effectively nsIHttpChannel2.
There's a cost to adding new interfaces: it makes objects bigger, makes QI on them slower, and increases cognitive load for future developers. I agree about willy-nilly interface changes being bad, but I think in this case it would make some sense to add to one of the existing http channel interfaces. Probably nsIHttpChannelInternal, honestly.
Created attachment 532262 [details] [diff] [review]
640213-http-upgrade v8
ok, I've reorganized it a bit to get rid of the nsIHttpUpgradeChannel.idl and merge that into the existing nsIHttpChannelInternal.. let me know if that is what you were envisioning..
Comment on attachment 532262 [details] [diff] [review]
640213-http-upgrade v8
sr=me
I just realized that the new stream doesn't actually read all the available data in Read/ReadSegments; this is probably not an issue since consumers shouldn't rely on that anyway.... though some OnDataAvailable implementations do, so I hope it's not passed there.
(In reply to comment #23)
> I just realized that the new stream doesn't actually read all the available
> data in Read/ReadSegments; this is probably not an issue since consumers
> shouldn't rely on that anyway.... though some OnDataAvailable
> implementations do, so I hope it's not passed there.
Is that really true? ODA should not read all available data, it should only read as much data as was passed into it as the length, and this impl still alows that.
nsStreamLoader, for example, just does:
return inStr->ReadSegments(WriteSegmentFun, this, count, &countRead);
and assumes that this will read |count| data. Same thing for nsDownloader. nsHtml5StreamParser does equivalent things with a Read() call.
So more precisely, if any of those consumers are passed this stream then whoever is doing the passing should not use Available() to determine the length to pass in.
Perhaps Available() should return mLen if mLen is not 0 and call through to mStream otherwise instead of adding the two together?
Or put another way, if someone is to pass this stream to OnDataAvailable, how would they do that right?
Now as far as I can tell we never do that with mSocketIn, so we should be ok, I think...
Pushed
For future reference Patrick, it's much appreciated if when you use checkin-needed you create a final patch using |hg export| containing a "User" line.
Backed out in - within the next four pushes, we filed bug 657177 and bug 657185, so I don't think video networking on Windows likes this patch very much. from a retriggered run on this bug's push (so much for it being all-green), a nsHttpConnectionMgr::nsConnectionHandle::Release() crash in content/base/test/test_CrossSiteXHR_origin.html, so apparently it was just coincidence that the first few were in media tests.
Though causing the failures in
was a handy service, since it pointed out bug 657190.
Created attachment 532946 [details] [diff] [review]
640213-http-upgrade v9
Fix the backout issue - the bug caused an AuthRetry cycle to be passed a sticky connection that was not always sticky. The fix is to split the connection references into two - one for the stickyAuthRetry (that may be null depending on sticky and keep-alive) like it used to be and a separate one for the http upgrade.
carrying forward the sr+ because this fix doesn't touch any interfaces. Let me know if that's wrong.
try was (eventually) able to reproduce the bug that got it backed out on mochi-1 on windows, and has passed 8+ runs of that config with this change.
Comment on attachment 532946 [details] [diff] [review]
640213-http-upgrade v9
removing r? as try has rejected this (intermittently on xp only). puzzling.
Created attachment 533282 [details] [diff] [review]
640213-http-upgrade v10
Try likes this one.
The missing insight was that the connection reference can only be meaningfully held across the transaction release if the connection is sticky (which upgrades are).
The interdiff against 8 is really simple, I'd like to get this landed quickly as it is the only websockets piece unlanded that really meaningfully tocuches non-ws things.
Comment on attachment 533282 [details] [diff] [review]
640213-http-upgrade v10
+ if (mUpgradeProtocolCallback && stickyConn &&
just one space before &&, please
Documentation added:
And mentioned on Firefox 6 for developers. | https://bugzilla.mozilla.org/show_bug.cgi?id=640213 | CC-MAIN-2017-09 | refinedweb | 2,976 | 62.88 |
Summary of JDK1.5 Language Changes
Re:Looking to Get Back into Java (Score:4, Informative)
My roommate told me about it, and once I started using it I never looked back.
Re:Give billg his due... (Score:2, Informative)
I see two trends: being a better C++ (typesafe enums and parameterized types), and borrowing features from Lisp (code metadata, auto boxing/unboxing). I don't like to tie developments like this to particular people, but I wonder how much Guy Steele had to do with the Lisp-like features, if in fact he is still working at Sun.
Re:Looking to Get Back into Java (Score:1, Informative)
Re:Write once, Rewrite forever? (Score:5, Informative)
Re:enumerators (Score:4, Informative)
The type-safe enum pattern shows the correct way of handling enumerations. And you can use the Jakarta Commons Lang library [apache.org] to make it a bit easier.
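For readers who haven't seen it, the pattern above can be sketched as follows (a minimal sketch; the class and constant names are illustrative, not taken from Commons Lang):

```java
// Minimal pre-JDK 1.5 type-safe enum pattern: a final class with a private
// constructor, so the only instances that can ever exist are the named
// constants declared below.
final class Suit {
    private final String name;

    private Suit(String name) { this.name = name; }

    public String toString() { return name; }

    public static final Suit CLUBS = new Suit("CLUBS");
    public static final Suit SPADES = new Suit("SPADES");
}
```

Because the constructor is private, a method taking a Suit can only ever receive one of the declared constants, which is what makes the pattern type-safe compared with plain int constants.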
Netbeans (Score:3, Informative)
Re:Give billg his due... (Score:1, Informative)
The idea of autoboxing came from C++ where you can define your own conversions. Autoboxing becomes necessary to reduce syntax clutter when you add Generics to Java. This is because the Java implementation of Generics only works for Objects, not base types.
Foreach and enum is in more languages than you can shake a stick at, so you can't say they came from C#.
The "static import" idea is new. If C# has it, then it's likely Java took it from C#. Other than that, I can't see anything that Java took from C#.
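The interaction between generics and autoboxing mentioned above can be sketched in JDK 1.5 syntax (the list contents here are arbitrary): generics only work with reference types, so storing an int in a List<Integer> relies on autoboxing to keep the syntax uncluttered.

```java
import java.util.ArrayList;
import java.util.List;

class BoxingDemo {
    // Stores an int in a List<Integer> and reads it back as a plain int.
    static int firstPlusOne() {
        List<Integer> xs = new ArrayList<Integer>();
        xs.add(41);            // autoboxing: int -> Integer (no "new Integer(41)")
        int x = xs.get(0);     // auto-unboxing: Integer -> int
        return x + 1;
    }
}
```

Without autoboxing, every add and get on the list would need an explicit wrap or unwrap through Integer, which is exactly the syntax clutter the comment describes.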
A better solution than Generics (Score:2, Informative)
As a pro Java developer, I want to use the native 'int' type in order to save memory, have less garbage collection, and perform better. Catching errors at compile time is helpful too. I think it is unreasonable for Sun not to include specializations for native data types. If I want to have an ArrayList of 10,000 ints I should be able to use 'int'.
The link on this page states up to 10x performance but I've seen it work up to 30x performance - and you can run the code below to see this for yourself.
NOTE:
30 = 7272727/236966 where:
1. 7272727 = 2nd iteration of Int2IntHashMap!
2. 236966 = 15th iteration of HashMap (Hot spot had 12 more iterations to optimize)
package com.wss.utils.test;
import java.util.*;
import it.unimi.dsi.fastUtil.*;

public class TestFastUtil {
    public static void main(String[] args) throws Exception {
        int count = 400000;
        int timerCount = 20;
        long start, end;
        Integer tmp;
        HashMap hashMap = new HashMap(count);
        for (int timer = 0; timer < timerCount; ++timer) {
            start = System.currentTimeMillis();
            for (int i = 0; i < count; ++i) {
                hashMap.put(new Integer(i), new Integer(i));
            }
            end = System.currentTimeMillis();
            System.out.println("HashMap put(Integer, Integer) count:" + count +
                ", put/s:" + (count / ((float)(end-start) / (float)1000)));
            start = System.currentTimeMillis();
            for (int i = 0; i < count; ++i) {
                Integer in = new Integer(i);
                tmp = (Integer)hashMap.get(in);
                if (!tmp.equals(in))
                    throw new Exception("failed equals()");
            }
            end = System.currentTimeMillis();
            System.out.println("HashMap get(Integer) count:" + count +
                ", get/s:" + (count / ((float)(end-start) / (float)1000)));
        }
        timerCount = 100;
        Int2IntHashMap int2IntHM = new Int2IntHashMap(count);
        int j;
        for (int timer = 0; timer < timerCount; ++timer) {
            start = System.currentTimeMillis();
            for (int i = 0; i < count; ++i) {
                int2IntHM.put(i, i);
            }
            end = System.currentTimeMillis();
            System.out.println("Int2Int put(int, int) count:" + count +
                ", put/s:" + (count / ((float)(end-start) / (float)1000)));
            start = System.currentTimeMillis();
            for (int i = 0; i < count; ++i) {
                j = int2IntHM.get(i);
                if (i != j)
                    throw new Exception("Int2Int failed equals()");
            }
            end = System.currentTimeMillis();
            System.out.println("Int2Int get(int) count:" + count +
                ", get/s:" + (count / ((float)(end-start) / (float)1000)));
        }
    }
}
Re:Write once, Rewrite forever? (Score:3, Informative)
Re:Generics (Score:5, Informative).
Generic Java (Score:2, Informative)
A Compiler for generic Java has been available for years:
You can check out Pizza/GJ here [unisa.edu.au] or here [luc.edu].
not just sugar (Score:3, Informative)
-- Jack
What Bjarne Stroustrup has to say about Java (Score:5, Informative)
This is what he said about Java [att.com] and the [att.com] likes [att.com].
Also here [eptacom.net].
Re:Generics (Score:3, Informative)
The method compareTo is supposed to override the method in Comparable, which takes an object. So they create a bridge method that overrides it normally:
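As a sketch of what that means (with an illustrative Name class, not the poster's original example): the hand-written compareTo(Name) is the "real" method, and the compiler emits a synthetic compareTo(Object) bridge so that the erased Comparable.compareTo(Object) still dispatches to it.

```java
// compareTo(Name) is the declared method; the compiler also generates a
// bridge method roughly equivalent to the commented-out compareTo(Object)
// below, which casts its argument and delegates.
class Name implements Comparable<Name> {
    private final String s;

    Name(String s) { this.s = s; }

    public int compareTo(Name other) { return s.compareTo(other.s); }

    // Generated bridge, roughly:
    // public int compareTo(Object o) { return compareTo((Name) o); }
}
```

Calling compareTo through a raw Comparable reference goes through the bridge, which is why the cast-and-delegate stub has to exist at all.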
Re:Where is operator overloading? (Score:1, Informative)
JCP strikes again (Score:5, Informative)
Re:Give billg his due... (Score:1, Informative), IIRC.
In the meantime, perhaps we should write a new preprocessor, the "Operator Overloading Preprocessor System" or OOPS. jpp seemed like a good idea (it offered several features beyond operator overloading). Java coupled with a good preprocessor is a fine idea.
;-)
Re:Looking to Get Back into Java (Score:4, Informative)
So basically C# minus generics (Score:2, Informative)
Generics
No such thing in C#.
Enhanced for loop
expression needs to implement IEnumerable, or declare a GetEnumerator that returns an IEnumerator
Autoboxing/unboxing
Typesafe enums
Metadata
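For comparison with the IEnumerable requirement above, the Java 1.5 analogue can be sketched like this (a sketch; in Java the enhanced for loop accepts arrays or anything implementing java.lang.Iterable):

```java
class ForEachDemo {
    // Sums an array using the 1.5 enhanced for loop; no explicit index
    // variable or Iterator is needed.
    static int sum(int[] xs) {
        int total = 0;
        for (int x : xs) {
            total += x;
        }
        return total;
    }
}
```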
Re:How this compares to C++ (Score:3, Informative)
struct Coin { enum { penny, nickel, dime, quarter }; };
Not equivalent, the Java version also supports writing as a String
System.out.println("coin: " + Coin.PENNY);
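Spelled out as a runnable sketch (JDK 1.5 syntax): the Java enum constant carries its name at runtime, so string concatenation produces a readable value rather than an integer.

```java
class CoinDemo {
    enum Coin { PENNY, NICKEL, DIME, QUARTER }

    // Concatenation calls Coin.PENNY.toString(), which returns the
    // constant's declared name.
    static String describe() {
        return "coin: " + Coin.PENNY;
    }
}
```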
Re:I think these are all great... (Score:4, Informative)
Re:Article didn't mention new concurrency stuff (Score:4, Informative)
Re:So basically C# minus generics (Score:1, Informative)
Furthermore, next version will include other nifty features such as lambda-style anonymous methods.
pizzacompiler (Score:2, Informative)
my favorite language (extension) for the vm has always been pizza [sf.net]. It gives you said generics, but also
But to be honest: this seems to be a real great step for java. Programmers with a certain need for aesthetics (and self-regard) can now really use this language...
XDoclet (Score:2, Informative)
Uh, read the article (Score:4, Informative)
Re:Looking to Get Back into Java (Score:3, Informative)
Programming shortcuts (Score:3, Informative)
(this next bit is a very simple and very halfassed explanation in pseudo C. flames from C programmers about the bad syntax will be ignored. i got a D in C so F it)
Inlining works like this. You write a function and assign it a variable name. Then, any place you want that function, you use the variable name. The "precompiler" converts any instance of the variable name into the original function. EX: Later, if you use the term PISETUP it will be replaced with that code by the precompiler.
This was a good way to facilitate rewriting the same thing over and over, while maintaining speed, and a single location to change/fix the function. Unfortunately, it was also a good way for lazy programmers to obfuscate code by creating precompiler directives and variables for common language patterns. EX: Used to make: into: Shorter, yes. But harder to read, much harder to understand, and absolutely confusing for the poor newbies. Kind of like learning how to read from logs of an AOL chatroom.
One of the big "innovations" of Java was the elimination of the precompiler, which made sense. Java runs object code, not directly runnable byte code, so it is in essence already performing the tasks of the precompiler. The virtual machine "compiles" the code while it is running, to optimize for the individial physical machine, no matter what it may be.
The idea worked IMO because, now that you can cut, paste, and find and replace (with regular expressions), you don't really need these replacements. Might as well just be as verbose as you want to be. Might as well use tiny little functions since inlining doesn't work anyway.
Besides, since Java is interpreted and recompiled while it's running, you don't gain anything from inlining. Any "contextual" function (the function as written in the code) might become "inlined" when run by a good virtual machine program that performs "Just In Time" (JIT) compilation. Call void incrementI(){ i++; } a lot? The compiler will notice this, and replace calls in the stack to incrementI() with the actual code this function contains.
This is why Metadata is a bad idea, or could be. No real benefit to the code, not much of a benefit for the "thorough" developer, and yet there's a real chance for lazy folks to create disgusting, hard to maintain code. This is never a good idea...a lot of coders think that obfuscated code makes them more valuable to employers. Not half...they'll axe you first if they think you're playing the obscurity game and will not give great references if you leave first.
Of course, if all Metadata does is replace: with: Then it may be worthwhile. Time, and the Java Community, will tell.
(PS: Used to work for/with a Pat Doyle, but he's not you)
Re:So basically C# minus generics (Score:1, Informative)
C# steals these ideas from Java (Score:3, Informative)
C# didn't add generics at the start as they were waiting for Java to solidify how to do them.
Also, Java has a bit more of a wait for new features since Java goes through a real standards body instead of just being defined by what Microsoft wants. And of course lots of real production code that can break if you get things wrong.
You can only use Generics in 1.5 VMs (Score:1, Informative)
Sun made this decision by itself without listening to its users and even censored its discussion. You can read about it in the Generics message board:
3 useful Python decorators
It was around a year ago that I first came across the concept of a decorator in Python. I was immediately intrigued: the use of a function of function appealed to me as a mathematician. However, other than the “time” example often used to advocate the use of decorators, I didn’t immediately find much use for them.
Largely, I found that though many promoted the use of decorators, simple examples demonstrating their usefulness were scarce. A year later decorators have become part of my everyday work flow, and so I wanted to share three useful decorators I use regularly.
If you’re unfamiliar with decorators Stackoverflow provides a surprisingly good starting point. As with all things Python a quick Google search will return a plethora of resources. This post is not intended to be an introduction to decorators, instead I hope to persuade you of their usefulness by way of three simple examples .
1. Timer 2.0.1
Before I introduce this decorator you should note that if you wish to time fast, performance critical sections of code you should probably be using ipython’s
%timeit functionality, not the decorator I suggest below. The decorator I describe here is useful for longer runs.
The most common example I stumbled upon when reading about decorators was the “timer” example. This oft used example wraps any function and prints out the time it took to execute in seconds. When running scripts that take hours to complete, reading the run time in seconds is irritating. However, it’s easy to alter this decorator to format the run time more intelligently.
I use the decorator below on a daily basis. This decorator prints and formats the run time depending on the length of the run. If the run takes less than an hour, the run time is printed in minutes and seconds. If the run takes longer than an hour, the run time is printed in hours and minutes.
import time from functools import wraps def my_time(function): """ Wrapper used to time how long a function takes to execute, and intelligently print run time. """ # Preserve docstring (and other attributes) of function_to_time @wraps(function) def wrapper(*args, **kwargs): t0 = time.time() results = function(*args, **kwargs) t1 = time.time() # If function took over an hour, print time in hours and minutes if t1 - t0 > 60**2: print('This run took {:.0f} hr(s) {:.0f} min(s) to complete'.format((t1 - t0) // 60**2, ((t1 - t0) % 60**2) // 60 )) else: print('This run took {:.0f} min(s) {:.0f} sec(s) to complete'.format((t1 - t0) // 60, (t1 - t0) % 60)) return results return wrapper
2. Completion alarm
Often I run scripts which take hours to complete. Instead of having to continually check up on my code, I wanted to be alerted (by way of sound) when my script completed. Cue our second decorator: the beeper.
I wrote the below decorator to play a sound once my run completes. This decorator allows you to concentrate on other work whilst your code runs, without the continual distraction of checking whether your run is complete.
You’ll notice that this decorator utilises the library
pygame. Pygame is a large project which does so much more than playing sounds, and its use here is certainly overkill. However, after playing around with snack, pyaudio and others, I found that Pygame was the easiest to get working straight out of the box.
Here I use the notification
Amsterdam.ogg which I found in my Ubuntu install at
/usr/share/sounds/ubuntu/notifications. Ensure that
music.load() can find your desired sound file, either by way of an absolute or relative path.
import pygame from functools import wraps def beeper(function): """ Decorator that plays a sound when function completes """ @wraps(function) def wrapper(*args, **kwargs): results = function(*args, **kwargs) # Try play beeper try: pygame.mixer.init() pygame.mixer.music.load("/data/surf_scoter/infer/inputs/Amsterdam.ogg") pygame.mixer.music.play() # If can't play (for example: running script over ssh), don't flip except: pass return results return wrapper
3. Capture std_out
Many of my scripts print summaries and other information about their run. In addition to the
my_time decorator introduced earlier, a single script may print from various different functions and places.
Sometimes, however, you may wish to print all this output to file for later reference. This could be done by altering all the print functions individually to write to file. However, this is not ideal and quickly becomes cumbersome for larger projects.
Instead, we can employ a decorator which captures all prints to
std_out. I first came across this context manager on stack exchange, where I found the following snippet:
from functools import wraps from io import StringIO import sys class Capturing(list): """ Used to capture printed output of running script. """ def __enter__(self): self._stdout = sys.stdout sys.stdout = self._stringio = StringIO() return self def __exit__(self, *args): self.extend(self._stringio.getvalue().splitlines()) # Free up memory del self._stringio sys.stdout = self._stdout
With this snippet taken, it’s easy for us to go ahead and wrap this context manager up into a decorator.
def capture_stdout(function): """ Decorator to be used to capture stdout of function """ # Preserve docstring (and other attributes) of function_to_listen @wraps(function) def wrapper(*args, **kwargs): with Capturing() as printed: results = function_to_listen(*args, **kwargs) return results, printed return wrapper
Summary
The examples above were chosen to be of general use, rather than more technical decorators used for specific projects. With this, you can hopefully start using the decorators introduced above immediately. I hope you've now been convinced that decorators can be useful and deserve the attention: so get decorating!
I'm in the process of writing a Python script that takes two arguments that will allow me to output the contents of a folder to a text file for me to use for another process. The snippet I have is below:
#!/usr/bin/python
import cv2
import numpy as np
import random
import sys
import os
import fileinput
#Variables:
img_path= str(sys.argv[1])
file_path = str(sys.argv[2])
print img_path
print file_path
cmd = 'find ' + img_path + '/*.png | sed -e "s/^/\"/g;s/$/\"/g" >' + file_path + '/desc.txt'
print "command: ", cmd
#Generate desc.txt file:
os.system(cmd)
sh: 1: s/$//g: not found
images/*.png | sed -e "s/^/\"/g;s/$/\"/g" > desc.txt
It's not sending the full text of your regular expression through to bash because of how Python processes and escapes string content, so the quickest solution would be to just manually escape the backslashes in the string, because Python thinks they currently are escape codes. So change this line:
cmd = 'find ' + img_path + '/*.png | sed -e "s/^/\"/g;s/$/\"/g" >' + file_path + '/desc.txt'
to this:
cmd = 'find ' + img_path + '/*.png | sed -e "s/^/\\"/g;s/$/\\"/g" >' + file_path + '/desc.txt'
and that should work for you.
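To see why the extra backslash matters, here's a quick check — in plain Python, no shell needed — of what each version of the command string actually contains:

```python
# What the shell receives in each case:
single = 'sed -e "s/^/\"/g"'    # \" in Python source is just a quote character...
double = 'sed -e "s/^/\\"/g"'   # ...while \\" survives as backslash-then-quote

print(single)  # sed -e "s/^/"/g"   -> unbalanced quoting once the shell parses it
print(double)  # sed -e "s/^/\"/g"  -> the backslash reaches sed intact

assert len('\"') == 1    # Python consumed the backslash
assert len('\\"') == 2   # backslash, then quote
```

So the original command handed the shell an unbalanced quote inside the sed expression, which is exactly why sh complained with `s/$//g: not found`.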
Although, the comment on your question has a great point: you could totally just do it from Python, something like:
import os
import sys

def main():
    # variables
    img_path = str(sys.argv[1])
    file_path = str(sys.argv[2])
    with open(file_path, 'w') as f:
        f.writelines(['{}\n'.format(line) for line in os.listdir(img_path)
                      if line.endswith('.png')])

if __name__ == "__main__":
    main()
Excellent Work !!!
To keep from polluting the Space nav driver development thread with my glovepie scripting, i decided to make my own thread..
If you want to get your Space nav working as something that sends key commands, follow the directions here to get PPjoy and GlovePIE installed. Then load up my attached script in glovepie below..
Since i use roadrunner, here are some snippets from my ini file's to get it working with the mp3 player..
execTBL.ini:
Code:
6038,"SN_UP"
6040,"SN_DN"
6037,"SN_LT"
6039,"SN_RT"
6049,"SN_1"
and keyTBL:
Code:
"SN_RT","RRNEXT"
"SN_LT","RRPREV"
"SN_UP","VOL+"
"SN_DN","VOL-"
"SN_1","PLAY"
For those of you not using RR, the keymappings are ctrl + alt + (number 1, and the four directional arrow keys)
Edit your files accordingly, and then load up my script in GlovePIE and have some fun
as it is, press down on the hat to play/pause (ctrl + alt + 1)
tilt left and right to skip forewards/backwards through songs (ctrl + alt + (left/right arrow))
tilt up and down to toggle through the list, just like you were pressing the up and down arrow keys..
script:
Code:
//ver 1.0 by inh
//Routine to determine which axis is being acted upon the most, due to it being
//inevitable that more than one axis will change at a time.
//This checks all values after a specified time and determines which one changed the most.

//get a position reading from all axes
var.roll.1 = MapRange(Joystick1.roll, -1,1, 0,1)
var.yaw.1 = MapRange(Joystick1.yaw, -1,1, 0,1)
var.pitch.1 = MapRange(Joystick1.pitch, -1,1, 0,1)
var.x.1 = MapRange(Joystick1.x, -1,1, 0,1)
var.y.1 = MapRange(Joystick1.y, -1,1, 0,1)
var.z.1 = MapRange(Joystick1.z, -1,1, 0,1)

//wait a bit, then get positions again
wait 200ms
var.roll.2 = MapRange(Joystick1.roll, -1,1, 0,1)
var.yaw.2 = MapRange(Joystick1.yaw, -1,1, 0,1)
var.pitch.2 = MapRange(Joystick1.pitch, -1,1, 0,1)
var.x.2 = MapRange(Joystick1.x, -1,1, 0,1)
var.y.2 = MapRange(Joystick1.y, -1,1, 0,1)
var.z.2 = MapRange(Joystick1.z, -1,1, 0,1)

//compare second reading to first using subtraction, then use the absolute function so that even negative values
//return whole numbers. the greatest var.whatever.diff value will be the axis that has changed the most
var.roll.diff = abs(var.roll.2 - var.roll.1)
var.yaw.diff = abs(var.yaw.2 - var.yaw.1)
var.pitch.diff = abs(var.pitch.2 - var.pitch.1)
var.x.diff = abs(var.x.2 - var.x.1)
var.y.diff = abs(var.y.2 - var.y.1)
var.z.diff = abs(var.z.2 - var.z.1)

//Filter to filter out very slight movements of the hat (it must cause an increase (or decrease) of more than 0.04 to continue...)
if ((var.roll.diff >= 0.04) or (var.yaw.diff >= 0.04) or (var.pitch.diff >= 0.04) or (var.x.diff >= 0.04) or (var.y.diff >= 0.04) or (var.z.diff >= 0.04))
  //super-sloppy if tree to figure out which var.*.diff is the greatest, and then act accordingly..
  //as of now it just spits out which axis is being acted upon in the debug bar at the top
  //of the scripting window..
  if ((var.roll.diff > var.yaw.diff) and (var.roll.diff > var.pitch.diff) and (var.roll.diff > var.x.diff) and (var.roll.diff > var.y.diff) and (var.roll.diff > var.z.diff))
    var.roll.dir = sign(var.roll.2 - var.roll.1)
    if (var.roll.dir = -1)
      debug = "Roll (Twist Left)"
      key.ctrl = 1
      key.alt = 1
      key.down = 1
      wait 100ms
      key.ctrl = 0
      key.alt = 0
      key.down = 0
    elseif (var.roll.dir = 1)
      debug = "Roll (Twist Right)"
      key.ctrl = 1
      key.alt = 1
      key.up = 1
      wait 100ms
      key.ctrl = 0
      key.alt = 0
      key.up = 0
    endif
  else if ((var.yaw.diff > var.roll.diff) and (var.yaw.diff > var.pitch.diff) and (var.yaw.diff > var.x.diff) and (var.yaw.diff > var.y.diff) and (var.yaw.diff > var.z.diff))
    var.yaw.dir = sign(var.yaw.2 - var.yaw.1)
    if (var.yaw.dir = -1)
      debug = "Yaw (Tilt Right)"
      key.ctrl = 1
      key.alt = 1
      key.right = 1
      wait 600ms
      key.ctrl = 0
      key.alt = 0
      key.right = 0
    elseif (var.yaw.dir = 1)
      debug = "Yaw (Tilt Left)"
      key.ctrl = 1
      key.alt = 1
      key.left = 1
      wait 600ms
      key.ctrl = 0
      key.alt = 0
      key.left = 0
    endif
  else if ((var.pitch.diff > var.roll.diff) and (var.pitch.diff > var.yaw.diff) and (var.pitch.diff > var.x.diff) and (var.pitch.diff > var.y.diff) and (var.pitch.diff > var.z.diff))
    var.pitch.dir = sign(var.pitch.2 - var.pitch.1)
    if (var.pitch.dir = -1)
      debug = "Pitch (Tilt Forewards)"
      key.up = 1
      wait 100ms
      key.up = 0
    elseif (var.pitch.dir = 1)
      debug = "Pitch (Tilt Backwards)"
      key.down = 1
      wait 100ms
      key.down = 0
    endif
  elseif ((var.x.diff > var.yaw.diff) and (var.x.diff > var.pitch.diff) and (var.x.diff > var.roll.diff) and (var.x.diff > var.y.diff) and (var.x.diff > var.z.diff))
    var.x.dir = sign(var.x.2 - var.x.1)
    if (var.x.dir = -1)
      debug = "X (Slide Left)"
    elseif (var.x.dir = 1)
      debug = "X (Slide Right)"
    endif
  else if ((var.y.diff > var.roll.diff) and (var.y.diff > var.pitch.diff) and (var.y.diff > var.x.diff) and (var.y.diff > var.yaw.diff) and (var.y.diff > var.z.diff))
    var.y.dir = sign(var.y.2 - var.y.1)
    if (var.y.dir = -1)
      debug = "Y (Slide Forewards)"
    elseif (var.y.dir = 1)
      debug = "Y (Slide Backwards)"
    endif
  else if ((var.z.diff > var.roll.diff) and (var.z.diff > var.yaw.diff) and (var.z.diff > var.x.diff) and (var.z.diff > var.y.diff) and (var.z.diff > var.pitch.diff))
    var.z.dir = sign(var.z.2 - var.z.1)
    if (var.z.dir = -1)
      debug = "Z (Pull Up)"
    elseif (var.z.dir = 1)
      debug = "Z (Push Down)"
      key.1 = 1
      key.ctrl = 1
      key.alt = 1
      wait 500ms
      key.1 = 0
      key.ctrl = 0
      key.alt = 0
    endif
  else
    debug = " "
  endif
else
  debug = "Press harder/longer girly-man"
endif

var.roll.1 = 0
var.yaw.1 = 0
var.pitch.1 = 0
var.x.1 = 0
var.y.1 = 0
var.z.1 = 0
var.roll.2 = 0
var.yaw.2 = 0
var.pitch.2 = 0
var.x.2 = 0
var.y.2 = 0
var.z.2 = 0
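Stripped of the GlovePIE syntax, the script's core idea is: sample all axes twice, diff the samples, and act on whichever axis changed the most (with a dead zone to ignore tiny movements). A Python sketch of that logic — function and axis names are my own, not part of the script:

```python
def dominant_axis(first, second, threshold=0.04):
    """Return (axis, direction) for the axis that moved most between two
    snapshots, or None if nothing moved more than `threshold`.
    `first`/`second` are dicts of axis name -> reading in [0, 1]."""
    diffs = {axis: abs(second[axis] - first[axis]) for axis in first}
    axis = max(diffs, key=diffs.get)       # the super-sloppy if tree, in one line
    if diffs[axis] < threshold:
        return None                        # "Press harder/longer"
    direction = 1 if second[axis] > first[axis] else -1
    return axis, direction

before = {"roll": 0.50, "yaw": 0.50, "pitch": 0.50}
after  = {"roll": 0.52, "yaw": 0.65, "pitch": 0.51}
print(dominant_axis(before, after))   # ('yaw', 1): yaw changed the most
print(dominant_axis(before, before))  # None: below the dead-zone threshold
```

The (axis, direction) pair is then what gets mapped to a key combination, e.g. ('yaw', 1) → ctrl + alt + left.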
Excellent Work !!!
Thanks
I hope some other people took the time to get glovepie setup for use with the space nav.. i cant wait till we have a dedicated driver released, this thing is AWESOME for working with roadrunner
Does GlovePIE let you use Activex/Com objects in your code ?
Not doing carpc anymore
not sure, why would you need to?
cashtexts - Earn money for receiving text messaged offers
cashtexts review not a scam
Space Navigator - 6 Axis input device: Take it apart - Driver App
RRCam - Video/webcam capture, text overlay, and recording: 2.0 Stable
I was thinking of doing maybe some different functions other than keypress when a certain skin in RR is started like my GPS which has no keyboard shortcuts or API yet. If we can call the RR SDK we could get information from RR about what screen is active, maybe there is another way of doing this, i don't know.
ah ok gotcha.. take a look around the glovepie scripting, though i dont think it is able to do what ya want =[ why not implement keycombos in your gps? or is it not your gps app.. =[
Will look through GlovePie site
I can't implement keyboard shortcuts in my gps appl. it's simply not supported, so i am thinking of doing a mouse emulation when RR starts my GPS, for this i would want to have RR send me window messages about current screen.
I'm not sure if I'm in the right forum to expose my problem, but someone with GlovePIE/SpaceNavigator knowledge might have a solution. My situation is as follows: I got the Space Navigator device from 3Dconnexion. Having installed PPJoy and 3dxppjoy, I'm sending MIDI messages to music programs through MIDI Yoke.
Here is an example of the very basic "script"
midi.BankSelectLevel = MapRange(Joystick2.x, -1,1, 0,127)/127
midi.ModWheel = MapRange(Joystick2.y, -1,1, 0,127)/127
midi.Breath = MapRange(Joystick2.z, -1,1, 0,127)/127
midi.Control3 = MapRange(Joystick2.pitch, -1,1, 0,127)/127
midi.FootPedal = MapRange(Joystick2.yaw, -1,1, 0,127)/127
midi.PortamentoTime = MapRange(Joystick2.roll, -1,1, 0,127)/127
Well, the data is sent and read with no problem from the external music apps, except for a very annoying detail: normally MIDI data consists of integer values between 0 and 127. Well, in my case the data received by all tested applications shows that it is going beyond these limits. The result being that when, for example, a knob gets to its maximum, it "virtually" continues to increase above 127. It is very annoying as afterwards, the knob's value won't decrease before it gets back to 127 (which, depending on how long you held it, could take some time).
Is there any way of preventing this by, let's say, limiting the output data threshold?
Thanks in advance for any suggestions
P.S. I have also tried to use EnsureMap, and EnsureRangeMap with no success.
Works great! Thanks a bunch. Only thing I'm curious about is whether there is any way to add context switching?
later in this chapter. The most
frequently used methods and overloads of a
PythonInterpreter instance
interp are the following.
PyObject interp.eval(String s)
Evaluates, in interp's
namespace, the Python expression held in Java string
s, and returns the
PyObject that is the expression's
result.
void interp.exec(String s)
void interp.exec(PyObject code)
Executes, in interp's
namespace, the Python statements held in Java string
s or in compiled
PyObject code (produced
by function __builtin__.compile of package
org.python.core, covered later in this chapter).
void interp.execfile(String name)
void interp.execfile(java.io.InputStream s)
void interp.execfile(java.io.InputStream s,String name)
Executes, in interp's
namespace, the Python statements read from the stream
s or from the file named
name. When you pass both
s and name,
execfile reads the statements from
s, and uses
name as the filename in error messages.
PyObject interp.get(String name)
Object interp.get(String name,Class javaclass)
Fetches the value of the attribute named
name from
interp's namespace. The
overload with two arguments also converts the value to the specified
javaclass, throwing a Java
PyException exception that wraps a Python
TypeError if the conversion is unfeasible. Either
overload raises a NullPointerException if
name is unbound. Typical use of the
two-argument form might be a Java statement such as:
String s = (String)interp.get("attname", String.class);
void interp.set(String name,PyObject value)
void interp.set(String name,Object value)
Binds the attribute named name in
interp's namespace to
value. The second overload also converts
the value to a PyObject.
The org.python.core
package supplies a class __builtin__ whose
static methods let your Java code access the functionality of Python
built-in functions. The compile method, in
particular, is quite similar to Python built-in function
compile, covered in Chapter 8 and Chapter 13. Your Java
code can call compile with three
String arguments (a string of source code, a
filename to use in error messages, and a
kind that is normally
"exec"), and compile returns a
PyObject instance p
that is a precompiled Python bytecode object. You can repeatedly call
interp.exec(p)
to execute the Python statements in p
without the overhead of compiling the Python source for each
execution. The advantages are the same as covered in Chapter 13.
Seen from Java, all Jython objects are
instances of classes that extend PyObject. Class
PyObject supplies methods named like Python
objects' special methods, such as __len__,
__str__, and so on. Concrete
subclasses of PyObject override some special
methods to supply meaningful implementations. For example,
__len__ makes sense for Python sequences and mappings, but
not for numbers; __add__ makes sense for numbers
and sequences, but not for mappings. When your Java code calls a
special method on a PyObject instance that does
not in fact supply the method, the call raises a Java
PyException exception wrapping a Python
AttributeError.
PyObject methods that set, get, and delete
attributes exist in two overloads, as the attribute name can be a
PyString or a Java String.
PyObject methods that set, get, and delete items
exist in three overloads, as the key or index can be a
PyObject, a Java String, or an
int. The Java String instances
that you use as attribute names or item keys must be Java interned
strings (i.e., either string literals or the result of calling
s.intern( ) on any Java
String instance s). In
addition to the usual Python special methods __getattr__
and __getitem__, class
PyObject also provides similar methods
__findattr__ and __finditem__, the
difference being that, when the attribute or item is not found, the
__find methods return a Java
null, while the __get methods
raise exceptions.
Every PyObject
instance p has a method
__tojava__ that takes a single argument, a Java
Class c, and returns an
Object that is the value of
p converted to
c (or raises an exception if the
conversion is unfeasible). Typical use might be a Java statement such
as:
String s = (String)mypyobj.__tojava__(String.class);
Method __call__ of PyObject has several
convenience overloads, but the semantics of all the overloads come
down to __call__'s fundamental
form:
PyObject p.__call__(PyObject args[], String keywords[]);
When array keywords has length
L, array args
must have length N greater than or equal
to L, and the last
L items of args
are taken as named actual arguments, the names being the
corresponding items in keywords. When
args has length
N greater than
L,
args's first
N-L
items are taken as positional actual arguments. The equivalent Python
code is therefore similar to:
def docall(p, args, keywords):
assert len(args) >= len(keywords)
deltalen = len(args) - len(keywords)
return p(*args[:deltalen], ** dict(zip(keywords, args[deltalen:])))
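The positional/keyword splitting rule is easy to check in plain Python; here the equivalent docall is exercised against a toy function (greet is hypothetical, just for illustration):

```python
def docall(p, args, keywords):
    """Mirror of PyObject.__call__(args, keywords): the last len(keywords)
    entries of args are keyword values, the rest are positional."""
    assert len(args) >= len(keywords)
    deltalen = len(args) - len(keywords)
    return p(*args[:deltalen], **dict(zip(keywords, args[deltalen:])))

def greet(greeting, name="world", punct="!"):
    return greeting + ", " + name + punct

# Three values in args, the last one named 'punct' by keywords:
print(docall(greet, ["hello", "jython", "?"], ["punct"]))  # hello, jython?
# No keywords at all: everything is positional
print(docall(greet, ["hi"], []))                           # hi, world!
```

So a Java caller passing `args = {a, b, c}` and `keywords = {"punct"}` gets the same effect as the Python call `p(a, b, punct=c)`.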
Jython supplies concrete subclasses of PyObject
that represent all built-in Python types. You can sometimes usefully
instantiate a concrete subclass in order to create a
PyObject for further use. For example, class
PyList extends PyObject,
implements a Python list, and has constructors that take an array or
a java.util.Vector of PyObject
instances, as well as an empty constructor that builds the empty list
[].
The Py class supplies
several utility class attributes and static methods.
Py.None is Python's
None. Method Py.java2py takes a
single Java Object argument and returns the
corresponding PyObject. Methods
Py.py2type, for all
values of type that name a Java primitive
type (boolean, byte,
long, short, etc.), take a
single PyObject argument and return the
corresponding value of the given primitive Java type. | http://etutorials.org/Programming/Python+tutorial/Part+V+Extending+and+Embedding/Chapter+25.+Extending+and+Embedding+Jython/25.2+Embedding+Jython+in+Java/ | CC-MAIN-2014-42 | refinedweb | 962 | 54.22 |
ASP.NET # MVC # 6 – ASP.NET MVC RadioButton(List)[Check/Uncheck ASP.NET MVC Radiobutton from controller and call OnClick event on it]
Hi Geeks,
Today we will see following things regarding ASP.NET MVC RadioButton
1) How to check/uncheck specific radio button in the list of two radio buttons (e.g.
)
2) To call the JavaScript method on onClick event of it.
Example:
Step I –
We have a class Employee_Gender as follows.
public class Employee_Gender
{
    public int Gender { get; set; }
}
Step II –
We create one strongly typed view, named Employee, inheriting the class Employee_Gender created above.
In the following view, we have two radio buttons, and we call a JavaScript method onRadioClick on their onClick event.
<%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<Radio.Models.clsEmployee>" %>
<asp:Content
Male <%:Html.RadioButtonFor(c => c.Gender, 0, new { @onClick = "onRadioClick(this);" })%>
Female <%:Html.RadioButtonFor(c => c.Gender, 1, new { @onClick = "onRadioClick(this);" })%>
<script type="text/javascript">
function onRadioClick(e) {
alert(e.value);
}
</script>
</asp:Content>
Step III –
In the Controller we will write one method called GetEmployee.
The return type of the method is ActionResult, so it will return the model to the view named Employee defined above.
public ActionResult GetEmployee()
{
Radio.Models.clsEmployee obj = new Radio.Models.clsEmployee();
obj.Gender = 1;
return View("Employee",obj);
}
When you click on the radio buttons you will have output as
0 for male and 1 for female respectively
For more information on the ASP.NET MVC DropDownList control refer to the link ASP.NET MVC DropDownList.
For More on Microsoft technologies visit our site Dactolonomy of WebResource
Thank you.
csEventTree Class Reference

This class is used to represent the event namespace (tree). More...
#include <csutil/cssubscription.h>
Detailed DescriptionThis class is used to represent the event namespace (tree).
Each node represents an event name (e.g., "crystalspace.input.mouse") and contains two data structures: a partial order graph representing subscribers to this event (including those who subscribed to parent event names) along with their ordering constraints, and a queue of subscribers representing a valid total order of that graph (used to speed up the common case).
Definition at line 41 of file cssubscription.h.
Member Function Documentation
Send the provided event to all subscribers, using the normal return and Broadcast rules.
Find a node with a given name in the event tree owned by q .
If no such node yet exists, create it (along with any necessary parent nodes, including the name root).
Return a csEventTree::SubscriberIterator for all subscribers to this event name (and to its parents).
Send the provided event to all subscribers, regardless of their return values.
Subscribe a given handler to a given event name subtree via a given event queue.
This is wrapped by csEventQueue::Subscribe which may be easier to use in some situations.
- See also:
- csEventQueue::Subscribe
Unubscribe a given handler to a given event name subtree via a given event queue.
This is wrapped by csEventQueue::Unsubscribe which may be easier to use in some situations. Note that unsubscribing is reentrant (an event handler can unsubscribe itself) but NOT thread-safe (only event handlers in the same thread as the event queue can unsubscribe).
- See also:
- csEventQueue::Unsubscribe
The documentation for this class was generated from the following file:
- csutil/cssubscription.h
Generated for Crystal Space 1.2.1 by doxygen 1.5.3 | http://www.crystalspace3d.org/docs/online/api-1.2/classcsEventTree.html | CC-MAIN-2015-27 | refinedweb | 292 | 56.05 |
Can someone please help me out here. I've got an assignment to finish up in C++ and it just doesn't run properly. It's a car hiring program that asks for details and prints them out at the end. When I enter my house number, address, and phone number it just goes loco. Help will be greatly appreciated. Here's the code:
#include <iostream>
#include <cstdlib>
#include <string>
#include <cmath>
void line ();
using namespace std;
// Declarations
float price;
float cost;
char choice;
char option;
int numdays;
// Function for car choice
void carchoice ()
{
do { // Start of loop
cout << "\n\t\t\t\tCARS FOR HIRE\n\n\n\tProduct Code\t\tItem Description\tCost Per Day\n";
line ();
cout << "\n\n\tA\t\t\tNissan\t\t\t$7.00\t\t";
cout << "\n\n\tB\t\t\tFord Focus\t\t$7.00\t\t";
cout << "\n\n\tC\t\t\tPeugeot 107\t\t$9.00\t\t";
cout << "\n\n\tD\t\t\tVolvo V40\t\t$7.00\t\t";
cout << "\n\n\tE\t\t\tVolksWagon\t\t$7.50\t\t";
cout << "\n\n\n\tEnter choice of vehicle you wish to hire. (A, B, C, D or E):";
cin >> choice;
// Switch case statement
switch (choice)
{
case 'A':
case 'a':
cout<< "\n\tYou selected Nissan\n\t" << endl;
price = 7.00;
break;
case 'B':
case 'b':
cout<< "\n\tYou selected Ford Focus\n\t" << endl;
price = 7.00;
break;
case 'C':
case 'c':
cout<< "\n\tYou selected Peugeot 107\n\t" << endl;
price = 9.00;
break;
case 'D':
case 'd':
cout<< "\n\tYou selected Volvo V40\n\t" << endl;
price = 7.00;
break;
case 'E':
case 'e':
cout<< "\n\tYou selected VolksWagon\n\t" << endl;
price = 7.50;
break;
default:
cout << "\n\tThe choice of vehicle can not be found!\n\t"<<endl;
break;
}
cout << "\tEnter number of days you wish to hire it for. ";
cin >> numdays;
cost = price*numdays;
cout << "\n\tThe cost of hire for this car is "<<char(156)<< cost << endl;
cout << "\n\tDo you wish to hire this car at this rate? (Y/N): ";
cin >> option;
system ("cls");
// Entry of neither "y" or "Y" will initiate loop
}while (option != 'Y' && option != 'y');
system("cls");
cin.get();
}
// function for entry of customer details
void customerdetails (char name[],char surname[],int addressn,
char adres[],char PNum[])
{
cout << "\nEnter first name: "; //Print out
cin >> name; // input
cout << "\nEnter surname name: ";
cin >> surname;
cout << "\nEnter house number: ";
cin >> addressn;
cout << "\nEnter your address: ";
cin >> adres;
cout << "\nPlease enter Phone Number: ";
cin >> PNum;
cout << "\n\tYour invoice will now be displayed\n"<< endl;
system("pause");
}
// Main Function
int main ()
{
using namespace std;
char name[50], surname[50], adres[50], PNum[50];
int addressn;
system ("color f6"); // System Colour
carchoice (); // Call up carchoice function#
cout << "\n\t\t\tHire and Personal Details Form\t\t\n\n";
// Call up customer details function
customerdetails (name,surname,addressn,adres,PNum);
system("cls");
cin.get();
// Display invoice
cout << "\n\n\t======================================================="<< endl;
cout << "\n\t\t\tInvoice Details" << endl;
cout << "\n\n\tCustomer Name: "<< name <<" "<< surname << endl;
cout << "\n\n\tVehicle Type: ";
//If,else statement to display the the car that user has selected in the invoice
if (choice == 'a' || choice == 'A' )
{ cout << "Nissan" << endl; }
else if ( choice == 'b' || choice == 'B' )
{ cout << "Ford Focus" << endl; }
else if ( choice == 'c' || choice == 'C')
{ cout << "Peugeot 107" << endl; }
else if ( choice == 'd' || choice == 'D')
{ cout << "Volvo V40" << endl; }
else if ( choice == 'e' || choice == 'E')
{ cout << "VolksWagon" << endl; }
cout << "\n\n\tNumber of days: "<< numdays << endl;
cout << "\n\n\tRental Cost: "<< cost << endl;
cout << "\n\n\tHome Address: "<< addressn <<" "<< adres << endl;
cout << "\n\n\tPhone Number: "<< PNum << endl;
cout << "\n\n\t======================================================="<< endl;
cin.ignore();
return 0;
}
void line()
{
for(int i=1; i < 41; i++)
cout <<"__";}
Firstly, when posting code please use code tags. Go advanced, select the code and click '#'.
it just goes loco
In what way? What happens? Have you tried to debug the code using the debugger to see where execution deviates from that expected from the program design?
All advice is offered in good faith only. You are ultimately responsible for effects of your programs and the integrity of the machines they run on.
Code:
void customerdetails (char name[],char surname[],int addressn,
char adres[],char PNum[])
Note that you are passing addressn by value and not by reference, so the changes made to addressn in the function will not be passed back to the caller.
When you try this program, do the details you enter have fewer characters than the sizes of the arrays you are declaring? If, say, the entered address has more than 49 chars then a buffer overflow will occur. Why not use the class string rather than an array of char? Using class string means that you don't need to bother about the size of the string or whether the input will overflow it.
Introduction.
Main Article.
First, let’s take a look at the way the JVM uses memory. There are two main areas of memory in the JVM – the ‘Heap’ and the ‘Permanent Generation.’ In the diagram below, the permanent generation is shown in green. The remainder (to the left) is the heap.
The permanent generation is used only by the JVM itself, to keep data that it requires. You cannot place any data in the permanent generation. One of the things the JVM uses this space for is keeping metadata about the objects you create. So every time you create an object, the JVM will store some information in the permanent generation. So the more objects you create, the more room you need in the permanent generation.
The size of the permanent generation is controlled by two JVM parameters. -XX:PermSize sets the minimum, or initial, size of the permanent generation, and -XX:MaxPermSize sets the maximum size. When running large Java applications, we often set these two to the same value, so that the permanent generation will be created at its maximum size initially. This can improve performance because resizing the permanent generation is an expensive (time consuming) operation. If you set these two parameters to the same size, you can avoid a lot of extra work in the JVM to figure out if it needs to resize, and actually performing resizes of, the permanent generation.
The heap is the main area of memory. This is where all of your objects will be stored. The heap is further divided into the ‘Old Generation’ and the ‘New Generation.’ The new generation in turn is divided into ‘Eden’ and two ‘Survivor’ spaces.
This size of the heap is also controlled by JVM paramaters. You can see on the diagram above the heap size is -Xms at minimum and -Xmx at maximum. Additional parameters control the sizes of the various parts of the heap. We will see one of those later on, the others are beyond the scope of this post.
When you create an object, e.g. when you say byte[] data = new byte[1024], that object is created in the area called Eden. New objects are created in Eden. In addition to the data for the byte array, there will also be a reference (pointer) for ‘data.’
The following explanation has been simplified for the purposes of this post. When you want to create a new object, and there is not enough room left in eden, the JVM will perform ‘garbage collection.’ This means that it will look for any objects in memory that are no longer needed and get rid of them.
Garbage collection is great! If you have ever programmed in a language like C or Objective-C, you will know that managing memory yourself is somewhat tedious and error prone. Having the JVM automatically find unused objects and get rid of them for you makes writing code much simpler and saves a lot of time debugging. If you have never used a language that does not have garbage collection – you might want to go write a C program – it will certainly help you to appreciate what you are getting from your language for free!
There are in fact a number of different algorithms that the JVM may use to do garbage collection. You can control which algorithms are used by changing the JVM paramaters.
Let’s take a look at an example. Suppose we do the following:
String a = "hello";
String b = "apple";
String c = "banana";
String d = "apricot";
String e = "pear";
//
// do some other things
//
a = null;
b = null;
c = null;
e = null;
This will cause five objects to be created, or ‘allocated,’ in eden, as shown by the five yellow boxes in the diagram below. After we have done ‘some other things,’ we free a, b, c and e – by setting the references to null. Assuming there are no other references to these objects, they will now be unused. They are shown in red in the second diagram. We are still using String d, it is shown in green.
If we try to allocate another object, the JVM will find that eden is full, and that it needs to perform garbage collection. The most simple garbage collection algorithm is called ‘Copy Collection.’ It works as shown in the diagram above. In the first phase (‘Mark’) it will mark (illustrated by red colour) the unused objects. In the second phase (‘Copy’) it will copy the objects we still need (i.e. d) into a ‘survivor’ space – the little box on the right. There are two survivor spaces and they are smaller than eden in size. Now that all the objects we want to keep are safe in the survivor space, it can simply delete everything in eden, and it is done.
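The copy-collection behaviour of the example above can be sketched in a few lines of Python (a toy model for illustration only — not what the JVM actually does internally):

```python
def copy_collect(eden, survivor, live):
    """Toy copy collection: evacuate still-referenced objects from eden into
    the survivor space, then clear eden. `live` is the set of objects that
    the program still holds references to."""
    for obj in eden:
        if obj in live:           # "mark": the object is still reachable
            survivor.append(obj)  # "copy": move it to the survivor space
    eden.clear()                  # everything left behind was garbage
    return eden, survivor

eden = ["hello", "apple", "banana", "apricot", "pear"]
survivor = []
live = {"apricot"}                # only d ("apricot") is still referenced
eden, survivor = copy_collect(eden, survivor, live)
print(eden)      # []
print(survivor)  # ['apricot']
```

Note that the cost of the algorithm is proportional to the number of *live* objects, not the number of dead ones — which is why it works well when most of eden is garbage.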
This kind of garbage collection creates something known as a ‘stop the world’ pause. While the garbage collection is running, all other threads in the JVM are paused. This is necessary so that no thread tries to change memory after we have copied it, which would cause us to lose the change. This is not a big problem in a small application, but if we have a large application, say with a 8GB heap for example, then it could actually take a significant amount of time to run this algorithm – seconds or even minutes. Having your application stop for a few minutes every now and then is not suitable for many applications. That is why other garbage collection algorithms exist and are often used. Copy Collection works well when there is a relatively large amount of garbage and a small amount of used objects.
In this post, we will just discuss two of the commonly used algorithms. For those who are interested, there is plenty of information available online and several good books if you want to know more!
The second garbage collection algorithm we will look at is called ‘Mark-Sweep-Compact Collection.’ This algorithm uses three phases. In the first phase (‘Mark’), it marks the unused objects, shown below in red. In the second phase (‘Sweep’), it deletes those objects from memory. Notice the empty slots in the diagram below. Then in the final phase (‘Compact’), it moves objects to ‘fill up the gaps,’ thus leaving the largest amount of contiguous memory available in case a large object is created.
So far this is all theoretical – let’s take a look at how this actually works with a real application. Fortunately, the JDK includes a nice visual tool for watching the behaviour of the JVM in ‘real time.’ This tool is called jvisualvm. You should find it right there in bin directory of your JDK installation. We will use that a little later, but first, let’s create an application to test.
I used Maven to create the application and manage the builds and dependencies and so on. You don’t need to use Maven to follow this example. You can go ahead and type in the commands to compile and run the application if you prefer.
I created a new project using the Maven archetype generate goal:
mvn archetype:generate -DarchetypeGroupId=org.apache.maven.archetypes -DgroupId=com.redstack -DartifactId=memoryTool
I took type 98 – for a simple JAR – and the defaults for everything else. Next, I changed into my memoryTool directory and edited my pom.xml as shown below. I just added the part shown in red. That will allow me to run my application directly from Maven, passing in some memory configuration and garbage collection logging parameters.
<project xmlns="" xmlns: <modelVersion>4.0.0</modelVersion> <groupId>com.redstack</groupId> <artifactId>memoryTool</artifactId> <version>1.0-SNAPSHOT</version> <packaging>jar</packaging> <name>memoryTool</name> <url></url> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> </properties> <build> <plugins> <plugin> <artifactId>maven-compiler-plugin</artifactId> <version>2.0.2</version> <configuration> <source>1.6</source> <target>1.6</target> </configuration> </plugin> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>exec-maven-plugin</artifactId> <configuration> <executable>java</executable> <arguments> <argument>-Xms512m</argument> <argument>-Xmx512m</argument> <argument>-XX:NewRatio=3</argument> <argument>-XX:+PrintGCTimeStamps</argument> <argument>-XX:+PrintGCDetails</argument> <argument>-Xloggc:gc.log</argument> <argument>-classpath</argument> <classpath/> <argument>com.redstack.App</argument> </arguments> </configuration> </plugin> </plugins> </build> <dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>3.8.1</version> <scope>test</scope> </dependency> </dependencies> </project>
If you prefer not to use Maven, you can start the application using the following command:
java -Xms512m -Xmx512m -XX:NewRatio=3 -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -Xloggc:gc.log -classpath <whatever> com.redstack.App
The switches are telling the JVM the following:
I have chosen these options so that you can see pretty clearly what is going on and you wont need to spend all day creating objects to make something happen!
Here is the code in that main class. This is a simple program that will allow us to create objects and throw them away easily, so we can understand how much memory we are using, and watch what the JVM does with it.
# com.redstack; import java.io.*; import java.util.*; public class App { private static List objects = new ArrayList(); private static boolean cont = true; private static String input; private static BufferedReader in = new BufferedReader(new InputStreamReader(System.in)); public static void main(String[] args) throws Exception { System.out.println("Welcome to Memory Tool!"); while (cont) { System.out.println( "\n\nI have " + objects.size() + " objects in use, about " + (objects.size() * 10) + " MB." + "\nWhat would you like me to do?\n" + "1. Create some objects\n" + "2. Remove some objects\n" + "0. Quit"); input = in.readLine(); if ((input != null) && (input.length() >= 1)) { if (input.startsWith("0")) cont = false; if (input.startsWith("1")) createObjects(); if (input.startsWith("2")) removeObjects(); } } System.out.println("Bye!"); } private static void createObjects() { System.out.println("Creating objects..."); for (int i = 0; i < 2; i++) { objects.add(new byte[10*1024*1024]); } } private static void removeObjects() { System.out.println("Removing objects..."); int start = objects.size() - 1; int end = start - 2; for (int i = start; ((i >= 0) && (i > end)); i--) { objects.remove(i); } } }
If you are using Maven, you can build, package and execute this code using the following command:
mvn package exec:exec
Once you have this compiled and ready to go, start it up, and fire up jvisualvm as well. You might like to arrange your screen so you can see both, as shown in the image below. If you have never used JVisualVM before, you will need to install the VisualGC plugin. Select Plugins from the Tools menu. Open the Available Plugins tab. Place a tick next to the entry for Visual GC. Then click on the Install button. You may need to restart it.
Back in the main panel, you should see a lit of JVM processes. Double click on the one running your application, com.redstack.App in this example, and then open the Visual GC tab. You should see something like what is shown below.
Notice that you can visually see the permanent generation, the old generation and eden and the two survivor spaces (S0 and S1). The coloured bars indicate memory in use. On the right hand side, you can also see a historical view that shows you when the JVM spent time performing garbage collections, and the amount of memory used in each space over time.
In your application window, start creating some objects (by selecting option 1). Watch what happens in Visual GC. Notice how the new objects always get created in eden. Now throw away some objects (option 2). You will probably not see anything happen in Visual GC. That is because the JVM will not clean up that space until a garbage collection is performed.
To make it do a garbage collection, create some more objects until eden is full. Notice what happens when you do this. If there is a lot of garbage in eden, you should see the objects in eden move to a survivor space. However, if eden had little garbage, you will see the objects in eden move to the old generation. This happens when the objects you need to keep are bigger than the survivor space.
Notice as well that the permanent generation grows slowly as you create new objects.
Try almost filling eden, don’t fill it completely, then throw away almost all of your objects – just keep 20MB. This will mean that eden is mostly full of garbage. Then create some more objects. This time you should see the objects in eden move into the survivor space.
Now, let’s see what happens when we run out of memory. Keep creating objects until you have around 460MB. Notice that both eden and the old generation are nearly full. Create a few more objects. When there is no more space left, your application will crash and you will get an OutOfMemoryException. You might have got those before and wondered what causes them – especially if you have a lot more physical memory on your machine, you may have wondered how you could possibly be ‘out of memory’ – now you know! If you happen to fill up your permanent generation (which will be pretty difficult to do in this example) you would get a different exception telling you PermGen was full.
Finally, another way to look at this data is in that garbage collection log we asked for. Here are the first few lines from one run on my machine:
13.373: [GC 13.373: [ParNew: 96871K->11646K(118016K), 0.1215535 secs] 96871K->73088K(511232K), 0.1216535 secs] [Times : user=0.11 sys=0.07, real=0.12 secs] 16.267: [GC 16.267: [ParNew: 111290K->11461K(118016K), 0.1581621 secs] 172732K->166597K(511232K), 0.1582428 secs] [Ti mes: user=0.16 sys=0.08, real=0.16 secs] 19.177: [GC 19.177: [ParNew: 107162K->10546K(118016K), 0.1494799 secs] 262297K->257845K(511232K), 0.1495659 secs] [Ti mes: user=0.15 sys=0.07, real=0.15 secs] 19.331: [GC [1 CMS-initial-mark: 247299K(393216K)] 268085K(511232K), 0.0007000 secs] [Times: user=0.00 sys=0.00, real =0.00 secs] 19.332: [CMS-concurrent-mark-start] 19.355: [CMS-concurrent-mark: 0.023/0.023 secs] [Times: user=0.01 sys=0.01, real=0.02 secs] 19.355: [CMS-concurrent-preclean-start] 19.356: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 19.356: [CMS-concurrent-abortable-preclean-start] CMS: abort preclean due to time 24.417: [CMS-concurrent-abortable-preclean: 0.050/5.061 secs] [Times: user=0.10 sys= 0.01, real=5.06 secs] 24.417: [GC[YG occupancy: 23579 K (118016 K)]24.417: [Rescan (parallel) , 0.0015049 secs]24.419: [weak refs processin g, 0.0000064 secs] [1 CMS-remark: 247299K(393216K)] 270878K(511232K), 0.0016149 secs] [Times: user=0.00 sys=0.00, rea l=0.00 secs] 24.419: [CMS-concurrent-sweep-start] 24.420: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 24.420: [CMS-concurrent-reset-start] 24.422: [CMS-concurrent-reset: 0.002/0.002 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 24.711: [GC [1 CMS-initial-mark: 247298K(393216K)] 291358K(511232K), 0.0017944 secs] [Times: user=0.00 sys=0.00, real =0.01 secs] 24.713: [CMS-concurrent-mark-start] 24.755: [CMS-concurrent-mark: 0.040/0.043 secs] [Times: user=0.08 sys=0.00, real=0.04 secs] 24.755: [CMS-concurrent-preclean-start] 24.756: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 24.756: 
[CMS-concurrent-abortable-preclean-start] 25.882: [GC 25.882: [ParNew: 105499K->10319K(118016K), 0.1209086 secs] 352798K->329314K(511232K), 0.1209842 secs] [Ti mes: user=0.12 sys=0.06, real=0.12 secs] 26.711: [CMS-concurrent-abortable-preclean: 0.018/1.955 secs] [Times: user=0.22 sys=0.06, real=1.95 secs] 26.711: [GC[YG occupancy: 72983 K (118016 K)]26.711: [Rescan (parallel) , 0.0008802 secs]26.712: [weak refs processin g, 0.0000046 secs] [1 CMS-remark: 318994K(393216K)] 391978K(511232K), 0.0009480 secs] [Times: user=0.00 sys=0.00, rea l=0.01 secs]
You can see from this log what was happening in the JVM. Notice it shows that the Concurrent Mark Sweep Compact Collection algorithm (it calls it CMS) was being used. You can see when the different phases ran. Also, near the bottom notice it is showing us the ‘YG’ (young generation) occupancy.
You can leave those same three settings on in production environments to produce this log. There are even some tools available that will read these logs and show you what was happening visually.
Well, that was a short, and by no means exhaustive, introduction to some of the basic theory and practice of JVM garbage collection. Hopefully the example application helped you to clearly visualise what happens inside the JVM as your applications run.
Thanks to Rupesh Ramachandran who taught me many of the things I know about JVM tuning and garbage collection. | http://www.ateam-oracle.com/visualising-garbage-collection-in-the-jvm | CC-MAIN-2019-35 | refinedweb | 2,882 | 69.38 |
Add a priority attribute to XMLHttpRequest.
Spec is as discussed in this thread on public-webapps:
Created attachment 63294 [details]
Patch
I added a comment here:
Comment on attachment 63294 [details]
Patch
This should at least be behind an ifdef, so that platforms that ignore this flag wouldn't look like they honor it to JS code. Maybe this shouldn't be in WebKit trunk at all, as it's purely an experiment.
+ const char* const names[5] = { "critical", "high", "normal", "low", "lowest" };
Normally, constants are numbers, not strings. See e.g. XMLHttpRequest.readyState.
(In reply to comment #3)
> (From update of attachment 63294 [details])
> This should at least be behind an ifdef, so that platforms that ignore this flag wouldn't look like they honor it to JS code. Maybe this shouldn't be in WebKit trunk at all, as it's purely an experiment.
The proposal () says it's safe to ignore:
"Browsers are not required to support the priority requested by applications, and may ignore it altogether."
I think it's fine that most WebKit implementations ignore it. Chrome will use it soon.
>
> + const char* const names[5] = { "critical", "high", "normal", "low", "lowest" };
>
> Normally, constants are numbers, not strings. See e.g. XMLHttpRequest.readyState.
It's modeled after lineCap and lineJoin in the canvas tag. I think the thinking is that it'd be easier to add to or change the priorities later.
(In reply to comment #2)
> I added a comment here:
Sorry I missed that. I'll send a new patch in a bit.
Created attachment 63363 [details]
Patch
Comment on attachment 63363 [details]
Patch
r- for the same reasons as before.
Guarding this behind a USE macro seems like the right thing to do.
Also, it's worth pointing out there is a mozilla patch (but it appears to be stalled):
FWIW, the moz patch goes the route of a integral constants instead of strings like ap suggests.
To clarify the ints vs strings issue:
- the very first draft (prior to feedback from the mlist) was ints.
- the mozilla patch was implemented before the feedback and has not been committed (as far as I know)
- based on mlist feedback we changed to strings
I don't think ints vs strings are critical, but the latest version is what we had achieved consensus around on the mlist.
> Guarding this behind a USE macro seems like the right thing to do.
My understanding is that these should be ENABLE macros, not USE ones. I don't have a link to any document describing the differences, unfortunately.
Created attachment 63539 [details]
Patch
I've added the ENABLE macro.
I left the enum as a string as it's defined in the proposal. I'm happy to change it to ints if you'd like, but I think we should change the proposal to match.
For the layout tests, I believe the correct thing to do is to have them expect to fail. Once this feature is enabled in Chrome (in a separate patch), I'll add a different -expected.txt for Chrome where all tests pass. Please let me know if this is wrong.
Thanks for adding the ENABLE guards!
+FAIL xhr.priority should be normal (of type string). Was undefined (of type undefined).
I'm not super happy with the addition of priority to xmlhttprequest-default-attributes test. Now we're getting a "FAIL" line in output for something that isn't a failure. This experimental feature may or may not get added to XHR spec, and may or may not be enabled in shipping products at some point, so startling developers with FAIL in common test results isn't great.
FYI, I expect to add this to the XMLHttpRequest Level 2 specification once I revised the original XMLHttpRequest test suite to match today's specification. I got nothing but positive feedback from people about this feature.
Oh, interesting. In general, I'm not thrilled by features that will likely be only used by 5-10 super incredibly optimized sites on the Web. If someone is willing to invest such enormous resources in optimizing a site (and manually specifying loading priorities is not easy), they are probably much better served by making a native application instead. For others, this is just an opportunity to make a mistake, and to slow down loading.
Created attachment 63617 [details]
Patch
I reverted the default value layout test and instead moved the priority default value test into the priority-enum layout test.
Comment on attachment 63617 [details]
Patch
WebCore/xml/XMLHttpRequest.cpp:401
+ const char* const names[5] = { "critical", "high", "normal", "low", "lowest" };
Since we have this list...
WebCore/xml/XMLHttpRequest.cpp:412
+ if (s == "critical") {
why not just use it here. And use a for loop for the search? Sure, it's "slow", but this code is not hot. And will be less code (and less error prone).
Otherwise this looks OK.
I agree with AP that this is a bad feature.
I'm not sure we want this in WebKit. I think this is just going to cause trouble for sites and browsers and not speed up the web.
I recommend we close this as WONTFIX.
My r- is more about my preference for implementation using a for instead of an if cascade. A "nit" for sure, but I think it might make the code slightly less error-prone.
I'm ambivalent about this. I feel very uninformed. Mike Belshe attempted to inform me some this afternoon.
I don't wish to stand in the way of progress.
Mike explained that this is designed to be an experiment to see if better performance can be delivered to complex applications like Google Maps. The "priority" level is intended as a hint to the browser, not as a service guarantee.
I wonder if the scheduling could/should be implemented at the WebCore level instead of in the network stack. Maybe it has to be implemented in the network stack.
How does the "priority" of XHR requests relate to the priority of requests generated from the browser?
Mike mentions that the desire here is to have an experiment which can be run for a few (3-4) months.
To clarify on some points for Eric:
* The browser implementation is intentionally left open so that we don't lock browsers into a very specific implementation. Thus, the attribute is a hint, not a mandate.
* We intentionally designed it to be backward compatible with the existing API, and, it can also easily be removed with no negative effect.
* It's not true that this can be implemented completely in JS today. In JS, you can schedule a lot of your own requests. But, JS applications cannot schedule their requests with knowledge of other work the browser is doing, cannot understand network properties like if there is a proxy, if it is a high speed link, a slow-speed link, etc.
* Note that we specifically only applied it to XHR, which is already an advanced corner of the web. It's not exposed onto HTML for a reason - to keep it as an advanced feature.
I do concede that we don't know that this will be a big win. I'd highly recommend most websites stay far away from this feature for a long time until it is thoroughly understood. But I also know that we can't research and test it without getting this API plumbed through.
Finally, just to make sure people don't think this is willy-nilly, here is some history.
* Various teams have been asking chrome for this for some time (maybe a year?)
* We put together a proposal about 3 months ago after we had solid improvement data from what the maps team was able to do in JS alone.
* That proposal went over to the W3C, and had no real negative feedback (as Anne noted already)
* Here we are.
I hope this background is useful.
Created attachment 63779 [details]
Patch
(In reply to comment #21)
> I wonder if the scheduling could/should be implemented at the WebCore level instead of in the network stack. Maybe it has to be implemented in the network stack.
I think it's doable to implement this in WebCore. We'd basically need to simulate the network layer's backlog when several XHRs have been sent from WebCore. I think we'd do this by placing a cap on the number of XHRs sent to the network layer from WebCore and queuing those that exceed the cap based on the description in the proposed spec. I can hack this up in a separate bug.
Are there any further comments on this bug?
Comment on attachment 63779 [details]
Patch
View in context:
Nits below. I don't have an opinion on whether WebKit wants this feature.
> WebCore/xml/XMLHttpRequest.cpp:176
> +#endif // ENABLE(XMLHTTPREQUEST_PRIORITY)
We don't need the comment here.
> WebCore/xml/XMLHttpRequest.cpp:406
> +void XMLHttpRequest::setPriority(const String& s, ExceptionCode& ec)
Please don't use one-letter variables names. Perhaps s => priority ?
>.
> WebCore/xml/XMLHttpRequest.cpp:418
> + }
I'd assert the invariants of m_priority at the end of this function, just to be clear about what's going on.
Created attachment 68286 [details]
Patch
Sorry for the extremely slow turnaround. I've addressed all of your comments.
(In reply to comment #25)
> (From update of attachment 63779 [details])
> View in context:
> >.
I used sizeof(array)/sizeof(array[0]) and stored that in a global constant. This seems to be the common way for WebKit code to determine array size.
Comment on attachment 68286 [details]
Patch
These ChangeLogs don't tell me anything interesting. Why are we making this change? Is there more that needs to be done to complete this feature? etc. Also, this patch doesn't seem to do anything. Has this feature actually been added to the XMLHttpRequest spec?
It's not. It is probably better implemented using a prefix for now, i.e. webkitPriority.
What is the state of this bug?
I was thinking to update
but before doing that we should agree what the API should look like.
In my patch I was using consts, you're using string, although
internally those are just converted to consts.
I think we could just allow any
priority between some minimum (0?) and max(100?) and then
it is up to the implementation to use those values as hints.
Having numbers makes it easier to increase and decrease the values.
(Mozilla's implementation does allow changing priority while the
XHR is processing the request. It is then up to the network layer to
handle that change.)
So the API could be for example
XMLHttpRequest {
...
const unsigned short LOW_PRIORITY = 0;
const unsigned short HIGH_PRIORITY = 100;
// when setting the value, if bigger than HIGH_PRIORITY
// priorityHint is set to HIGH_PRIORITY.
unsigned short priorityHint;
}
Also, seems like your setPriority may throw an exception.
The patch for Mozilla doesn't do that.
And as Anne says, .priority should be prefixed.
I still think that this would be a misfeature.
Since @ap replied that he thinks this is a misfeature, I wanted to counter that.
Support for XHR Prioritization:
1) Today browsers don't have a priority for XHR requests and cannot determine which order is best. Apps can help browsers run faster by hinting at which requests are needed most.
2) The solution is simple, backward-compatible, easy to implement, and advisory. It gives the browser more information to react intelligently without requiring specific behavior.
3) many websites today are writing their own XHR loaders to load all content in order to have priority based loading. As they do this, they make it more difficult for the browser to help them going forward, and throw a lot of code into javascript. These solutions are also sub-optimal, as a single webpage never has as much intelligence about network activity as the browser itself. Sites known that do this today include Google (maps, search, docs, and others) and Facebook. There are probably more.
4) as we build new web protocols which support priorities natively, XHR prioritization ensures that apps can leverage the new feature.
5) As Anne pointed out (comment #14), there really hasn't been any negative feedback on this feature from the XHR group.
Negative comments on XHR Prioritization
1) ap (comment #15) said that he doesn't like features which are "only used by 5-10 super incredibly optimized sites on the Web". I'd agree, except that those 5-10 websites represent a much larger percentage of web traffic.
2) the rest is syntax nits, and minor implementation notes.
Hi,
For our WebGL application, we stream down hundreds of individual assets to load a scene. Some of these assets (skeletons, meshes, low-res textures) are far more important than others (high-res textures). In addition, some objects in a 3D scene are more important than others. Your own 3D character is more important than others. The room is more important than props in the room.
To minimize load times, we want to make full use of the customer's pipe while also receiving data in order of decreasing importance.
In our native applications, we have the ability to prioritize network traffic appropriately, but on the web, we don't. Being able to prioritize XMLHttpRequest would be a large improvement to our customer experience.
Thanks,
Chad | https://bugs.webkit.org/show_bug.cgi?format=multiple&id=43400 | CC-MAIN-2020-34 | refinedweb | 2,235 | 64.81 |
std::binder1st<std::plus<int> > f(int n) { return std::bind1st(std::plus<int>(), n); }This is standard C++, though some people regard it as ugly. Not so much the use of 'std::', but the code itself, especially the syntactically significant white space between the nested >'s. That is why some people prefer languages with true first class functions. FunctoidsInCpp discusses FC++ which is a library implementing this and much more. -- JohnFletcher Some languages have built-in convenience mechanisms to do this. There are some examples in everyone's favorite languages below, as well as on the CommonHigherOrderFunctions page. It is rather simple to accomplish this in the PythonLanguage:
def f(n): def g(x): return x + n return gBlocksInRuby lists some great usages of higher order functions. In Ruby, they're called "blocks", they are a special language construct, and you cannot pass more than one to any routine. Incorrect, blocks are anonymous functions, not higher order functions. The higher order functions would be the function to which you pass the block. The combination of anonymous functions and higher order functions together are where you get the power from, the anonymous functions essentially specialize the higher order function. Any language that supports passing functions as parameters can support higher order functions, but without anonymous functions, they won't get used too often, CsharpLanguage is such a beast. -- RamonLeon [Although not for much longer - version 2 of the language will feature anonymous functions (which are fully-fledged LexicalClosures), along with a standard set of HigherOrderFunction-style methods on the collection classes (usual map/filter/forall/iter/etc, but with different names (presumably they don't want to "confuse" people who haven't seen them before... at the expense of confusing those of us who have seen them before...)). See for some preliminary details of the collection methods. -- MikeRoome] Not suprising, it is standard Microsoft practice to rename and reimplement and then pretend they invented something new. They'll be nice to have though. -- rl No, it's standard Microsoft practice to take something, rename it, patent it and pretend they invented something new. No, it's everyone else's practice to say "look at all these languages with feature x - isn't it great that we all build on each other like that?" unless someone at Microsoft does exactly the same thing. (But you didn't grasp the anti-Microsoft complain. 
Patenting means to take control and to stop everyone else to "build on that" without consent, so the "great we build on each other" will become "look, they have built on Microsoft's", plus "let's see if we can sue them" when applicable... Microsoft does not want you to say "we all build on each other", but "we all build on Microsoft's great original ideas", though they have just build on the other. See the difference? Of course, it does not happen to be always like this, but the complain is about this MS general attitude, which tears MS away from a community behaviour where you, like everyone else, can say happily "great that we build on each other")
(dolist (func (find-all-if #'suitable-p *my-objs-containing-a-func)) (funcall (object-function-slot func)))Without dolist it looks much better. ;-)
(mapc #'funcall (find-all-if #'suitable-p *my-objs-containing-a-func))But, as I say, that's just the built-ins. The real power comes from using this naturally in your own algorithms. For example, the StrategyPattern totally disappears in such a language: you just pass the function to use instead of creating classes for the strategy. (As a side note, many, if not most, of the patterns in the patterns book are unnecessary or considerably simpler in languages which support functions as first class objects. See PeterNorvig's DesignPatternsInDynamicProgramming. -- AlainPicard (but feel free to RefactorMe)
sub double { return 2 * shift; } @a = ( 1, 2, 3, 4, 5 ); @b = map { double( $_ ) } @a; ... or perl's "list-comprehension-ish" 'gather' clause use Perl6::Gather; @b = gather { take $_ * 2 } foreach 1..5;RubyLanguage example:
def double ( num ) return 2 * num end a = [ 1, 2, 3, 4, 5 ] b = a.collect { |value| double( value ) }Alternative Ruby version(s):
double = lambda { |num| 2 * num } b = a.map(&double)or even
b = a.map { |num| 2 * num }SmalltalkLanguage example: (very similar to the RubyLanguage example)
Number>>double ^ 2 * self a := #(1 2 3 4 5). b := a collect: [ :value | value double].or
(1 to: 5) collect: [ :i | 2 * i ]PythonLanguage example:
... with explicit function declaration def doubleit( num ): return 2*num a = ( 1, 2, 3, 4, 5 ) b = map( doubleit, a ) ... or with lambda b = map(lambda num: 2 * num, a) ... or with ListComprehension: b = [ 2 * num for num in a ]SchemeLanguage example:
(define (double num) (* 2 num)) (define a '(1 2 3 4 5)) (define b (map double a))or
(define b (map (lambda (x) (* 2 x)) a))CommonLisp example:
(defun double (num) (* 2 num)) (defvar *a* (list 1 2 3 4 5)) (defvar *b* (mapcar #'double *a*))or
(defvar *b* (mapcar #'(lambda (x) (* 2 x)) *a*)ObjectiveCaml example:
let b = let a = [1;2;3;4;5] in List.map (( * ) 2) a;;HaskellLanguage example:
b = map (*2) [1..5]In all cases, array b equals [ 2, 4, 6, 8, 10 ]. ErlangLanguage example:
... with higher-order map and a LambdaExpression lists:map(fun(X) -> 2*X end, lists:seq(1,5)). ... or with a ListComprehension: [ 2*X || X <- lists:seq(1,5) ].CeePlusPlus example:
vector<double> a, b; // set a values to whatever transform ( a.begin(), a.end(), // take this range back_inserter(b), // append to b bind1st(multiplies<double>(), 2)); // after multiplying by 2.or with boost::lambda (BoostLambdaLibrary):
transform (a.begin(), a.end(), back_inserter(b), _1 * 2);You can use a template function that uses one or more of its arguments as a FunctorObject in CeePlusPlus to make a Higher Order Function. GroovyLanguage example: (basically the same as RubyLanguage and SmalltalkLanguage)
a = [1,2,3,4,5] b = a.collect { 2 * it}CsharpLanguage v2 example:
int[] a = { 1, 2, 3, 4, 5 }; int[] b = Array.ConvertAll(a, delegate(int x) { return 2 * x; });or using the List collection class:
List<int> a = new List<int>(new int[] { 1, 2, 3, 4, 5 }); List<int> b = a.ConvertAll(delegate(int x) { return 2 * x; });
Array.prototype.map=function(aBlock){ var result=new Array(); for(var index=0;index<this.length;index++) result.push(aBlock(this[index])); return result; }usage...
var a = [1,2,3,4,5]; var b = a.map(function(x){return x*2;});or
var b=[1,2,3,4,5].map(function(x){return x*2});
function double($number) { return $number * 2; } $array = array(1,2,3,4,5); $result = array_map("double", $array); // perhaps using the array_map function is cheating? ;)PhpLanguage has a few interesting built-in functions to deal with higher-order functions. is_callable() will tell you if the given var is in fact a string containing a callable function name, an array containing an object and method name, or an array containing the class name and method name (for static methods). You can also quite easily use the contents of a variable to call a function....
$foo = "myFunkyFunc"; $foo($arg, $arg1); // Calls myFunkyFunk.Although it works, I prefer to use the following method, as it is a little more readable
$foo = "myFunkyFunc"; call_user_func($foo, $arg, $arg1);So, php has () as a postfix EvilEval? PHP has variable variable names and variable function names. I believe it is called reflection in OO terms. So, for example, you can say
$foo = "myVar"; $x = $$foo; // $x contains the same value as $myVar;Getting PHP to return functions is somewhat more problematic. There is the "create_function()" function, to which you pass the parameter list and source of the function body, but its return value is pretty much unusable for creating further functions without a lot of grief (it contains a NUL character that really messes things about when you try and embed it in the source code of a new function). Even a function to implement function composition is problematic. Not since PHP 5.3, which provides syntax for first-order functions (as another type of "callable" thing). As a side effect of the implementation, one can also write objects with a method that is called when the object is called as a function (i.e., as $obj('foo')).
$array = [1,2,3,4,5];
$result = array_map(function($number) { return $number * 2; }, $array);

A function to implement function composition would be:
function o($f, $g) {
    // Variables in the outer scope have to be imported explicitly
    return function($x) use ($f, $g) {
        return $f($g($x));
    };
}

Of course, for proper generality you'd want o() to be curried:
function o($f) {
    return function($g) use ($f) {
        return function($x) use ($f, $g) {
            return $f($g($x));
        };
    };
}
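For comparison, the same curried composition reads like this in Python, where closures capture outer variables implicitly (function and variable names are illustrative):

```python
def o(f):
    def take_g(g):
        def composed(x):
            return f(g(x))
        return composed
    return take_g

double = lambda n: n * 2
increment = lambda n: n + 1
print(o(double)(increment)(5))  # → 12, i.e. double(increment(5))
```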
Keys = Select.Data(Entity.Name)
Loop
    Key = Remove.Field(Keys) ;* Removes Key from Keys
Until Key = EOF
Repeat
Keys = Create.Array.Iterator(Select.Data(Entity.Name))
Iterate(Keys, Get.Object.Code("** Code for each key"))
Iterate(Create.Array.Iterator(Select.Data(Entity.Name)), Get.Object.Code("** Code for each key"))

-- PeterLynch

That is a rather long "line". I find code easier to read if parts are divided over multiple lines. I think C programmers would prefer this version --

Iterate(
    Create.Array.Iterator(Select.Data(Entity.Name)),
    Get.Object.Code("** Code for each key"))

Which would read --

Iterate(
    Create.Array.Iterator(Select.Data(Entity.Name)),
    {Code.Compiler.Directive}"** Code for each key"{/Code.Compiler.Directive})

(where Code.Compiler.Directive is whatever language-specific syntax is used to indicate code.) Is this correct? -- PeterLynch

http://c2.com/cgi/wiki?HigherOrderFunction
mmapobj - map a file object in the appropriate manner
#include <sys/mman.h>

int mmapobj(int fd, uint_t flags, mmapobj_result_t *storage,
     uint_t *elements, void *arg);
fd
The open file descriptor for the file to be mapped.
flags
Indicates that the default behavior of mmapobj() should be modified accordingly. Available flags are:
MMOBJ_INTERPRET
Interpret the contents of the file descriptor instead of just mapping it as a single image. This flag can be used only with ELF and AOUT files.
MMOBJ_PADDING
When mapping in the file descriptor, add an additional mapping before the lowest mapping and after the highest mapping. The size of this padding is at least as large as the amount pointed to by arg. These mappings will be private to the process, will not reserve any swap space and will have no protections. To use this address space, the protections for it will need to be changed. This padding request will be ignored for the AOUT format.
storage
A pointer to the mmapobj_result_t array where the mapping data will be copied out after a successful mapping of fd.
elements
A pointer to the number of mmapobj_result_t elements pointed to by storage. On return, elements contains the number of mappings required to fully map the requested object. If the original value of elements is too small, E2BIG is returned and elements is modified to contain the number of mappings necessary.
arg
A pointer to additional information that might be associated with the specific request. Only the MMOBJ_PADDING request uses this argument. If MMOBJ_PADDING is not specified, arg must be NULL.
The mmapobj() function establishes a set of mappings between a process's address space and a file. By default, mmapobj() maps the whole file as a single, private, read-only mapping. The MMOBJ_INTERPRET flag instructs mmapobj() to attempt to interpret the file and map the file according to the rules for that file format. The following ELF and AOUT formats are supported:
This format results in one or more mappings whose size, alignment and protections are as described by the file's program header information. The address of each mapping is explicitly defined by the file's program headers.
This format results in one or more mappings whose size, alignment and protections are as described by the file's program header information. The base address of the initial mapping is chosen by mmapobj(), and the addresses of adjacent mappings are chosen as they would be with an mmap() of /dev/null.
Mappings created with mmapobj() can be processed individually by other system calls such as munmap(2).
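There is no direct Python binding for mmapobj(), but its default behavior — mapping the whole file as a single read-only image — is loosely analogous to the standard mmap call sketched below (an analogy only, not a wrapper for this system call):

```python
import mmap
import os
import tempfile

# Create a small file to map.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello mapping")
    path = f.name

with open(path, "rb") as fh:
    # Length 0 means "map the whole file"; ACCESS_READ gives a read-only view.
    m = mmap.mmap(fh.fileno(), 0, access=mmap.ACCESS_READ)
    data = m[:5]
    print(data)  # → b'hello'
    m.close()

os.unlink(path)
```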
The mmapobj_result structure contains the following members:
typedef struct mmapobj_result {
     caddr_t   mr_addr;    /* mapping address */
     size_t    mr_msize;   /* mapping size */
     size_t    mr_fsize;   /* file size */
     size_t    mr_offset;  /* offset into file */
     uint_t    mr_prot;    /* the protections provided */
     uint_t    mr_flags;   /* info on the mapping */
} mmapobj_result_t;
Values for mr_flags include:

MR_PADDING   0x1   /* this mapping represents requested padding */
MR_HDR_ELF   0x2   /* the ELF header is mapped at mr_addr */
MR_HDR_AOUT  0x3   /* the AOUT header is mapped at mr_addr */

The macro MR_GET_TYPE(mr_flags) must be used when looking for the above flags in the value of mr_flags.
When MR_PADDING is set, mr_fsize and mr_offset will both be 0.
The mr_fsize member represents the amount of the file that is mapped into memory with this mapping.
The mr_offset member is the offset into the mapping where valid data begins.
The mr_msize member represents the size of the memory mapping starting at mr_addr. This size may include unused data prior to mr_offset that exists to satisfy the alignment requirements of this segment. This size may also include any non-file data that are required to provide NOBITS data (typically .bss). The system reserves the right to map more than mr_msize bytes of memory but only mr_msize bytes will be available to the caller of mmapobj().
Upon successful completion, 0 is returned and elements contains the number of program headers that are mapped for fd. The data describing these elements are copied to storage such that the first elements members of the storage array contain valid mapping data.
On failure, -1 is returned and errno is set to indicate the error. No data is copied to storage.
The mmapobj() function will fail if:
E2BIG
The elements argument was not large enough to hold the number of loadable segments in fd. The elements argument will be modified to contain the number of segments.
MMOBJ_PADDING was not specified in flags and arg was non-null.
The file to be mapped has a length of 0.
The fd argument refers to an object for which mmapobj() is meaningless, such as a terminal.
Insufficient memory is available to hold the program headers.
Insufficient memory is available in the address space to create the mapping.
ENOTSUP
The current user data model does not match the fd to be interpreted; thus, a 32-bit process that tried to use mmapobj() to interpret a 64-bit object would return ENOTSUP.
The fd argument is a file whose type cannot be interpreted and MMOBJ_INTERPRET was specified in flags.
The ELF header contains an unaligned e_phentsize value.
An unsupported filesystem operation was attempted while trying to map in the object.
See attributes(5) for descriptions of the following attributes:
ld.so.1(1), fcntl(2), memcntl(2), mmap(2), mprotect(2), munmap(2), elf(3ELF), madvise(3C), mlockall(3C), msync(3C), a.out(4), attributes(5)
Linker and Libraries Guide

http://docs.oracle.com/cd/E23824_01/html/821-1463/mmapobj-2.html
Proposed features/Pre-School (early childhood education)
From OpenStreetMap Wiki
Proposal
A tag for pre-school education centres, also known as kindergartens.
key=amenity value=preschool
key=name value=<name of the nursery> (optional)
key=operator value=<name of the organisation operating the preschool> may be local council or a private org (optional)
Applies to
nodes and areas
Rendering
something similar to school?
Related
- This fits nicely with amenity=school, college, university from map features. Perhaps it's a shame we didn't go for a collective amenity & sub-tags initially, however this is fine as it stands. --DrMark 06:56, 3 January 2008 (UTC)
- Here in Belgium we have many schools that combine 'basic' school (6-11 year olds) with preschools or kindergartens (2.5-5 year olds). How would that get tagged? What about the Steiner school where children/young adults from 2.5 to 18 years old can go? There happens to be one in my neighbourhood and it was merged a few years ago into one big complex. Polyglot 07:37, 3 January 2008 (UTC)
- There are always going to be problems getting a tag working both generally internationally AND precisely tailored for a particular country's practices. May be what you need is a "creche" (French word?), in a number of English-speaking countries this has the connotation of a place for "very young children" with being specific as to age. MikeCollinson 09:20, 3 January 2008 (UTC)
- My understanding of a creche was that it did not include education; it was more a place to leave kids while parents work, go shopping, etc. They are everywhere: in supermarkets, shopping malls, the gym. Myfanwy 22:33, 3 January 2008 (UTC)
- Sounds about right. So I now have a "creche" I want to map. It's a place in a ski resort where you can leave your small children. amenity=creche? used before 3 times: [1] -- Harry Wood 08:26, 22 January 2011 (UTC)
- I agree with DrMark's comments and will support this. It will generally work in all the countries I know of. Polyglot, I suggest what we need to do is define that amenity=school means "A general educational establishment for schoolchildren (age may vary from country to country but is generally from 5 or 6 to 16 or 18). Such schools may also have facilities for younger children." . Does that work for you? MikeCollinson 09:19, 3 January 2008 (UTC)
- Why not use "amenity=nursery" for pre-school facilities? There are so many country-specific variations of pre-school/nursery facilities that I am afraid this additional tag will only add chaos. --Cbm 09:38, 3 January 2008 (UTC)
- Because nurseries specifically do not do education, whereas preschools are geared for that task. Nurseries are more places that look after children while the parent(s) are working. IME nurseries are aimed towards parents' working times (pickup at 5/6pm) whereas preschool ends perhaps 1-3pm. Which of course brings up the question of before/after-school care... Kleptog 10:03, 3 January 2008 (UTC)
- You can't specify this for international usage because every country does it in different ways. For example, in Germany there is education also in nurseries. Also the times do not clarify this. --Cbm 11:25, 3 January 2008 (UTC)
- So if you have a unit that does both, which way should we tag it? Does one have priority over the other? All the child-care facilities around here that I've seen do both functions. DancingFool 03:07, 4 January 2008 (UTC)
- I agree with Cbm. The nursery value should be sufficient. Translated to German it means: Kindergarten, Kinderkrippe, Hort, Kindertagesstätte, Säuglingsheim, etc. And a nursery school is a pre-school (Vorschule). Thus I see no need for a new value. Toralf 10:56, 4 January 2008 (UTC)
- Unfortunately 'Nursery', in my country, is most commonly a place where young plants are grown. The term 'preschool' however is a compound noun with a very clear meaning. Even though I can think of few instances that are actually called something-or-other pre-school, the general term is that the (Kindergarten|Playcentre|Early Childhood Centre|Kohanga Reo) will consider themselves to be a 'pre-school' educational facility. Most of them would be offended to be called a 'Nursery'. The other general term I hear used is 'Early Childhood Educational Facility' but I think it's too much of a mouthful to be useful here. Karora 12:00, 4 January 2008 (UTC)
- I just looked at the map feature page and couldn't find the value "nursery" for amenity. This isn't an official value yet, it is proposed too. Toralf 21:21, 4 January 2008 (UTC)
- Do we need a more distinctive key? Something like key=amenity value=education, key=ages value=1-3|3-5|5-16|11-16|5-18|16-18|... etc. This is getting bogged down in semantics and regional variations/ambiguities. Should child care (i.e. no education, children left so parents can shop/work) be separate? Myfanwy 22:33, 3 January 2008 (UTC)
- I already find amenity=college to be pretty weird, I admit. In New Zealand a 'College' is almost invariably a (secondary) 'School' for children aged from approximately 13-18 years of age. There does seem to be some international consensus around the terms 'pre-', 'primary', 'secondary' and 'tertiary' education sectors. I chose the term 'preschool' for this proposal because I believe it has greater international consensus and use than the term 'nursery' (which lacks educational associations in some countries, denotes a place for young plants in some countries and might be entirely meaningless in others). Karora 10:35, 4 January 2008 (UTC)
- I support the basic tag, but I think we should do some more work on age groups. Here in Ontario, Canada, we have 3 basic "pre-school" groups -- infants (0-18 months), toddlers (18-30 months) and preschoolers (30 months +). It would be useful if we could indicate what age groups a certain centre caters to. Hours of operation are something we should think about as well.
- Someone already mentioned "after school programs", but it should be pointed out that many centres are located in other buildings, ie Office buildings, schools, etc. Will this overlapping tagging cause issues (rendering, data wise?).
- Finally, should we indicate things like the playground separately? Historybuff 16:49, 4 January 2008 (UTC)
- amenity=university is pretty unambiguous internationally. amenity=school is generic. amenity=college is unfortunately already in use, because it can mean vastly different things internationally. Now adding another internationally problematic tag only makes more problems. Differentiating types of schools could IMHO better be done in a subtag (so amenity=school & school=whatever) or a namespace (amenity=school:whatever). This lets us keep the rendering rules for general maps simple, they can ignore the distinctions if they want. But it gets the data in the database, so you can search on it. (amenity=school|college|university are rendered in the same colour now anyway.) --Cartinus 13:05, 7 January 2008 (UTC)
- As I see it there are many different types of schools and throughout the world the same name is used with different meanings ('college'). For very basic interpretation by rendering engines I suggest using a very basic tag, e.g. amenity=school (see also comment of Cartinus). For more detailed information I suggest to use type=..., e.g. type=university or type=primary school, elementary school. By allowing multiple entries for type a broader range can be covered, hence including e.g. all ages from x to y years of age. Also this would make it possible to include the proposed feature Education Center and the like. More types may include: elementary school, grammar school, primary school, secondary school, high school, college, university, de:Universität, de:Fachhochschule, de:Fachoberschule, de:Volksschule, dancing school, music school, art academy, kindergarten, de:Waldorfschule, Steiner school, preschool, de:Vorschule, xx:nursery, equestrian school (riding horses), ... . To prevent uncontrolled growth I find it important to rethink gathering all kinds of amenities having to do with education under one single tag. Thomas P 10:49, 2 September 2008 (UTC)
- I think it's wrong to use language tags to give different meaning. If you're doing it that way, we should have country tags, and perhaps in some countries state tags as well. The same word may mean completely different things in the same language but at different places in the world. --Eimai 12:32, 2 September 2008 (UTC)
- According to Myfanwy's comment, adding the age might help clear ambiguities in names. E.g.: amenity=school type=college age=13-18 or amenity=school type=de:Volkshochschule age=adult. Thomas P 11:01, 2 September 2008 (UTC)
- Breaking down the type might help too. E.g. type=university subject=electronics, informatics, economy, arts or type=arts academy subject=dancing, painting, music
- Perhaps amenity=education; education=preschool|primary|secondary|tertiary and amenity=child_care? It allows for the distinction between educational facilities and non-educational, and seems to define the type of education provided in more internationally recognised terms. Of course this doesn't deal with the fact that we already have amenity=school, amenity=college, etc.
- Since no one have done anything about this tag and the Tag:amenity=kindergarten tag have already been accepted I recommend abandoning the preschool. --Coax 17:33, 20 October 2010 (BST)
Voting
Voting is not open yet. The tag needs to be proposed and discussed for at least two weeks before it is opened for voting. See the proposed features page for more info.
According to the date above (Proposed-Date: 2007-12-31) it has already been 8 months since the proposal date. When will the voting be? --UrSuS 14:11, 29 September 2008 (UTC)

http://wiki.openstreetmap.org/wiki/Proposed_features/Pre-School_(early_childhood_education)
What is Mezzanine?

Mezzanine is a content management platform built using the Django framework.
Features
Mezzanine is an open source project managed using both the Git and Mercurial version control systems.
Video for Mezzanine blogging
What is Redmine?

Redmine is a flexible project management web application. Written using the Ruby on Rails framework, it is cross-platform and cross-database.
Redmine is open source and released under the terms of the GNU General Public License v2 (GPL).
It is a cross-platform, cross-database, and open source tool that also has issue-tracking features. Users can manage multiple projects and subprojects, and have access to many planning, tracking, and documenting features available from similar commercial products.
Redmine has a news area where members can publish news items. It allows the creation of documents, such as user documentation or technical documentation, which can be downloaded by others. A Files module is a table that lists all uploaded files and its details.
Users can easily create project wikis with the help of a toolbar. Other features include custom fields for creating additional information, and a Repository to view a given revision and the latest commits. The software can be configured to receive emails for issue creation and comments. It also supports particular versions of different databases, such as MySQL, PostgreSQL, MS SQL Server, and SQLite. API and plug-ins are also available.
Video for Redmine Introduction
What is WSGI?
WSGI is the Web Server Gateway Interface.
It is a specification that describes how a web server communicates with web applications, and how web applications can be chained together to process one request.
Main Features
- WSGI gives you flexibility.
Application developers can swap out web stack components for others. For example, a developer can switch from Green Unicorn to uWSGI without modifying the application or framework that implements WSGI (see PEP 3333).
- WSGI servers promote scaling.
Serving thousands of requests for dynamic content at once is the domain of WSGI servers, not frameworks. WSGI servers handle processing requests from the web server and deciding how to communicate those requests to an application framework's process. The segregation of responsibilities is important for efficiently scaling web traffic.
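The interface the specification describes is small: an application is just a callable that takes an environ dict and a start_response function and returns an iterable of byte strings. A minimal sketch, which any WSGI server (Green Unicorn and uWSGI alike) could host:

```python
def application(environ, start_response):
    # A complete WSGI application: the server supplies environ and
    # start_response; the application returns an iterable of byte strings.
    body = b"Hello, WSGI!"
    start_response("200 OK",
                   [("Content-Type", "text/plain"),
                    ("Content-Length", str(len(body)))])
    return [body]

# Exercise it without a real server, the way a WSGI gateway would:
from wsgiref.util import setup_testing_defaults

environ = {}
setup_testing_defaults(environ)
response = application(environ, lambda status, headers: None)
print(b"".join(response))  # → b'Hello, WSGI!'
```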
Video for WSGI
What is Dash?
Dash is a Python framework for building analytical web applications. No JavaScript required.
What is PyTorch?
PyTorch is an open source machine learning library for Python, based on Torch, used for applications such as natural language processing. It is primarily developed by Facebook's artificial-intelligence research group, and Uber's "Pyro" software for probabilistic programming is built on it.
PyTorch is a Python package that provides two high-level features:

- Tensor computation (like NumPy) with strong GPU acceleration
- Deep neural networks built on a tape-based autograd system
Video for PyTorch
What is aiohttp?

aiohttp is an asynchronous HTTP client/server framework for asyncio and Python.
Commands
pip install aiohttp
You may want to install the optional cchardet library as a faster replacement for chardet:
pip install cchardet
For speeding up DNS resolving by client API you may install aiodns as well. This option is highly recommended:
pip install aiodns
Example
import aiohttp
import asyncio

async def fetch(session, url):
    async with session.get(url) as response:
        return await response.text()

async def main():
    async with aiohttp.ClientSession() as session:
        html = await fetch(session, '')
        print(html)

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
Video for aiohttp
What is Seaborn?

Seaborn is a Python data visualization library based on matplotlib. It provides a high-level interface for drawing attractive and informative statistical graphics.
Example Code
import seaborn as sns

sns.set()
tips = sns.load_dataset("tips")
sns.relplot(x="total_bill", y="tip", col="time",
            hue="smoker", style="smoker", size="size",
            data=tips)
Video for Seaborn
2018 © Queryhome
https://www.queryhome.com/tech/174483/small-overview-about-mezzanine