text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91 values | source stringclasses 1 value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
2386/print-the-line-if-it-contains-specific-string-by-using-groovy
I am having a Jenkins log file and I am trying to print those lines which are having string "FOUND" in it.
How do I write the code for it in Groovy as I am new to it? Can anyone help me with this?
Thanks.
You can try the code written below:
def lines = new File('/tmp/test123.txt').readLines()
def result = lines.findAll { it.contains('FOUND') }
println result*.toString()
I hope that it will resolve your query.
The option “Launch agent by connecting it ...READ MORE
Yes, you can create a Jenkins pipeline ...READ MORE
In the illustration below, i have created a ...READ MORE
The command looks something like this:
java -jar ...READ MORE
Hey @nmentityvibes, you seem to be using ...READ MORE
Consider this - In 'extended' Git-Flow, (Git-Multi-Flow, ...READ MORE
It can work if you try to put ...READ MORE
When you use docker-compose down, all the ...READ MORE
While you are trying , just try ...READ MORE
Since the container gets destroyed once the ...READ MORE
OR
Already have an account? Sign in. | https://www.edureka.co/community/2386/print-the-line-if-it-contains-specific-string-by-using-groovy | CC-MAIN-2019-47 | refinedweb | 192 | 78.85 |
Find Questions & Answers
Can't find what you're looking for? Visit the Questions & Answers page!
Hi,
I have a question of how you deal with the situation of returns in WIA (plant abroad) process.
Let's say your company is located in Country_A and there you have your "main plant". In Country_B you have a plant, for which you use Plant abroad functionality.
You create a purchase order with 100kg Material_A and transfer it from plant in Country_A to plant in Country_B. After creating the PO you create the delivery using VL10H and finally book invoice with VF01/VF04. After the goods have arrived at the destination you notice, that there is a problem with for example quality. Now you want to send Material_A back to Plant_A in Country_A.
Do you make a cancellation of above created documents, do you use same document and delivery types to process the return (like a stocktransfer from Country_B to Country_A), or do you set "returns" flag in PO? Or did you create new document and delivery types to process the returns....?
The main issue is to get the right tax and to be able to process intrastat etc correctly...just transfering stock is not the problem...
Best regards,
Mark
Hello,
Can you please share if it was fixed in your case? I have quite simmilar case. | https://answers.sap.com/questions/30605/plants-abroad-wia-returns.html | CC-MAIN-2018-13 | refinedweb | 224 | 73.58 |
1921/how-can-a-war-file-be-deployed-in-tomcat-7
In Java, you can escape quotes with \:
String ...READ MORE
How to manage two JRadioButtons in java ...READ MORE
Here are two ways illustrating this:
Integer x ...READ MORE
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
public class WriteFiles{
...READ MORE
You can run your app with sudo as the ec2-user is ...READ MORE
Hello @Ganesh,
To run and configure multiple instances ...READ MORE
You can use Java Runtime.exec() to run python script, ...READ MORE
First, find an XPath which will return ...READ MORE
public class Test {
...READ MORE
System.arraycopy is the most efficient way, but ...READ MORE
OR
At least 1 upper-case and 1 lower-case letter
Minimum 8 characters and Maximum 50 characters
Already have an account? Sign in. | https://www.edureka.co/community/1921/how-can-a-war-file-be-deployed-in-tomcat-7?show=1922 | CC-MAIN-2021-49 | refinedweb | 147 | 72.63 |
Feeling “bored” can result in a complete lack of motivation. You don’t have the drive to pick something from your list of chores. You can’t bring yourself to do anything except watch YouTube mindlessly.
At the same time, at the back of your mind is a small sense of guilt. A sense that you should be doing something more valuable with your time. Something creative or constructive.
If this sounds like you, you’re not alone. It’s something I used to struggle with.
There are still some days where this feeling does strike.
However, I’ve found a way to overcome boredom…
On my website (built in Jekyll) some of the links used are affiliate links. To let people know this, I include a disclaimer at the top of relevant pages.
Instead of manually copying and pasting this text for each new article I write, here’s how I implemented an affiliate disclaimer to specific posts or pages.
In the _includes folder, create a file called affiliate-disclaimer.html
Add the following code to that file:
{% if page.has-affiliate-link %}
<p><i>This article contains affiliate links.</i></p>
{% endif %}
If you want to implement this on your own Jekyll site, you can change the actual content…
A shutdown routine is something you do at the end of your workday that tells your brain, I’m finished with work today.
By making a shut down routine part of your daily habits, you’ll find that your mind can focus on other things outside of your work.
Once your mind knows that your tasks have been handled for the day, you’ll be less likely to think about that email you need to send or that random task you should prioritize.
You can spend time with family and friends and be free from distraction.
Doing Work Creates Open Loops
The case…
As a UI Designer, you are essentially paid for your ideas. They are your greatest source of value.
Because of this, your job is dependent on always creating more value. This type of tension can result in burnout if you don’t acknowledge that your creativity comes in waves.
It’s not always possible to come up with amazing ideas on the spot. But, if you realize this, you can leverage your creative moments of brilliance and store great ideas for when you need them.
To do this, let’s look at 5 ways you as a UI Designer can generate great ideas.
…
It’s easy to be bombarded by things you should work on or should do. Things from work, things in your life, things you read on social media.
This can cause overwhelm and it can be difficult to see the wood from the trees.
How can you overcome this to get your most important work done each day? How can you prioritize the things that will give you the most value in the long term?
Focus on one thing and make that one thing be consistency.
I'm a designer, developer, and creator. I talk about being a productive designer. | https://mishacreatrix.medium.com/ | CC-MAIN-2021-31 | refinedweb | 517 | 82.14 |
The Atlassian Community can help you and your team get more value out of Atlassian products and practices.
Hi,
I need suggestions for a ScriptRunner behaviour for the below condition which is based on the value of Xray "Test Repository Path"
WHEN the "Test Repository Path" is "NTG7 Widgets"
THEN the custom fields "Review" and "Review Comments" should be made mandatory.
Any help here is highly appreciated.
Thanks in advance,
Venkat
Hi there!
This is currently not possible via Behaviours, however, you can do this via a Simple Scripted Validator.
You can alter the following code to your requirements and add it to your validator.
def testRepo = cfValues["Test Repository Path"] def textbox = cfValues["textbox"] def result = false as Boolean if(testRepo == "test 1") { if(textbox == null) { result = false } else { result = true } } else { result = true } result
I don't have Xray so I can test and figure this out myself, but maybe I can give you some strategies for figuring it out yourself.
The critical thing you need to find out is if behaviour is even able to detect changes to Xray custom fields and getting the data.
So, create a behaviour configuration with the correct mapping and add your Xray field.
Then add a server-side script on that Xray field.
Add a short script to pull out the data from the custom field and display it on your form:
def xrayFld = getFieldById(fieldChanged)
def value = xrayFld.value
def rawValue = xrayFld.value
def formValue = xrayFld.formValue
def helpText = """value=$value (${value.getClass()})<br>
rawValue=$rawValue(${rawValue.getClass()})<br>
formValue =$formValue (${formValue .getClass()})<br>
"""
xrayFld.setHelpText(helpText)
If xray fields are supported by behaviour, you should see some red text below the field that will update each time you change the value in field.
If any of these values contain anything that looks like "NTG7 Widgets", then you should be able to detect that value and conditionally make the other 2 fields required or not.
For example, if the result of 'value' looks like "NTG7 Widgets (String)" then you could try a script like this:
def xrayFld = getFieldById(fieldChanged)
def requiredFieldNames = ['Review', 'Review Comments']
def required = (xrarFld.value == 'NTG Witdget)
requiredFieldNames.each{fieldName->
getFieldByName(fieldName).setRequired. | https://community.atlassian.com/t5/Adaptavist-questions/ScriptRunner-behaviour-based-on-Xray-custom-field/qaq-p/1755336 | CC-MAIN-2022-27 | refinedweb | 366 | 53.31 |
In addition to creating the object, the members of the class that are to be accessible beyond their package need to be declared as public in the class definition.
The members of this object can now be reached by using the dot operator after the instance name.
In this example, the members of the object MyRectangle are x, y and getArea().
package javaapplication19; public class JavaApplication19 { static class MyRectangle { public int x, y; public int getArea() { return x * y; } } public static void main(String[] args) { MyRectangle r = new MyRectangle(); r.x=10; r.y=5; int area = r.getArea(); System.out.println(area); } } | https://codecrawl.com/2014/11/20/java-accessing-object-members/ | CC-MAIN-2019-43 | refinedweb | 103 | 52.39 |
Why opencv2 has two kinds of programming styles?
For example
cv::imshow(WINDOW, cv_ptr->image);
and
cvShowImage( "Image", pImg );
How to know which style should I use?
Thank you~
For example
cv::imshow(WINDOW, cv_ptr->image);
and
cvShowImage( "Image", pImg );
How to know which style should I use?
Thank you~
answered 2012-01-11 02:02:33 -0500
The first is OpenCV in C++, the second is OpenCV in the basic C functions. The first example is calling the C++ version of OpenCv with namespaces and all of the functions included in the namespace. The second style is simply the OpenCV function in C. The C++ is just a wrapper for the C code, so you're essentially calling the same functions either way. Which you use is really up to you. I like the C functions better, mostly for the sake of simplicity and documentation.
Please start posting anonymously - your entry will be published after you log in or create a new account.
Asked: 2012-01-11 01:14:44 -0500
Seen: 259 times
Last updated: Jan 11 '12
groovy OSX install fails building opencv2 because LONG_BIT definition appears wrong for platform
OpenCV 2.3 RC Python SVM: TypeError: <unknown> is not a numpy array
services with sensor_msgs::Image response in diamondback ubuntu10.10
image_transport can't see the webcam image
Source Files failing to Build (C++ image subscriber tutorial)
Cannot Use OpenCV With -std=c++0x/gnu++0x in Fuerte
opencv undefined reference problem
Filling sensor_msgs/Image with uint8_t buffer array [closed]
How to manually build OpenCV to work with ROS? | https://answers.ros.org/question/12595/why-opencv2-has-two-kinds-of-programming-styles/?sort=oldest | CC-MAIN-2019-39 | refinedweb | 263 | 61.26 |
We’ve introduced a SplitPane control in JavaFX 2.0, and today I thought I’d point out an interesting subtlety in the API. For the longest time our SplitPane API primarily consisted of the normal ‘left’ and ‘right’ (or ‘top’ and ‘bottom’) properties (indeed, the JavaDoc as of today still refers to this API). These were synonomous – if you set ‘top’ and ‘bottom’, they were literally copied to the ‘left’ and ‘right’ code, and our SplitPaneSkin just knew to draw with the items stacked vertically, rather than to lay them out horizontally.
In the very, very late stages of the JavaFX 2.0 EA program, we decided we didn’t really like this API all that much. The concept of having both left/right and top/bottom just didn’t really gel with us. After some discussion, we thought we’d use the same API style as we do elsewhere in JavaFX 2.0: expose a collection and allow for developers to place their content into it. This mean that to use a SplitPane, you’d use code such as the following:
import javafx.application.Application; import javafx.geometry.Orientation; import javafx.scene.Group; import javafx.scene.Scene; import javafx.scene.control.Button; import javafx.scene.control.SplitPane; import javafx.scene.layout.HBox; import javafx.scene.paint.Color; import javafx.stage.Stage; public class SplitPaneSample extends Application { @Override public void start(Stage stage) { HBox hbox = new HBox(20); hbox.setTranslateX(20); hbox.setTranslateY(20); SplitPane splitPane1 = new SplitPane(); splitPane1.setPrefSize(200, 200); final Button l = new Button("Left Button"); final Button r = new Button("Right Button"); splitPane1.getItems().addAll(l, r); hbox.getChildren().add(splitPane1); Scene scene = new Scene(new Group(hbox), 560, 240); scene.setFill(Color.GHOSTWHITE); stage.setScene(scene); stage.setTitle("SplitPane"); stage.setVisible(true); } public static void main(String[] args) { launch(args); } }
The main segment of code is the middle block, where we create a SplitPane, set a preferred size, and add in the two buttons we want to display. I hope you see where I’m going with this: now that we aren’t constrained to just having ‘left’ and ‘right’ nodes, we can actually put in any number of nodes into the items collection – and the SplitPane will automatically create N – 1 dividers. For example:
SplitPane splitPane2 = new SplitPane(););
The result of this code is shown in the second SplitPane you see in the picture above. Now, a SplitPane wouldn’t be all that great if it only allows for the SplitPane content to run in a horizontal flow, so we also support setting an orientation on the SplitPane. As mentioned, the default value is HORIZONTAL, which is what you see in the screenshot above. You can also set it to VERTICAL, which you can see in the screenshot below.
SplitPane splitPane1 = new SplitPane(); splitPane1.setOrientation(Orientation.VERTICAL); splitPane1.setPrefSize(200, 200); final Button l1 = new Button("Left Button"); final Button r1 = new Button("Right Button"); splitPane1.getItems().addAll(l1, r1); hbox.getChildren().add(splitPane1); SplitPane splitPane2 = new SplitPane(); splitPane2.setOrientation(Orientation.VERTICAL););
Now, you can’t have a single SplitPane with multiple orientations. However, you can quite easily embed one SplitPane inside another to get this effect:
SplitPane splitPane1 = new SplitPane(); splitPane1.setOrientation(Orientation.VERTICAL); splitPane1.setPrefSize(200, 200); final Button l1 = new Button("Top Button"); final Button r1 = new Button("Bottom Button"); splitPane1.getItems().addAll(l1, r1); SplitPane splitPane2 = new SplitPane(); splitPane2.setOrientation(Orientation.HORIZONTAL); splitPane2.setPrefSize(300, 200); final Button c2 = new Button("Center Button"); final Button r2 = new Button("Right Button"); splitPane2.getItems().addAll(splitPane1, c2, r2); hbox.getChildren().add(splitPane2);
With the code above, you’ll end up with something a little like this:
I hope this helps to introduce you to our brand new SplitPane control, and I look forward to see you using it in your applications soon!
Good to be rid of Swingisms, but I don’t like the way you _have_ to embed one inside the other to get different orientations.
People don’t think in terms of embedded split panes, they just see a space partitioned by moveable dividers.
I’d like to see something more akin to a grid (with cell spanning), where the borders between cells are moveable.
At the very least you get to express things naturally and allow for a way to drag a corner point (eg in a 2×2 split pane, dragging the centre ‘cross’.)
One of the goals for JavaFX controls is that they be simple to use, but offer considerable power.
I think it is reasonable to ask developers to embed SplitPanes to allow for differing orientations in a user interface. The alternative approach, as suggested in your comment, would require considerably more API (and quite complex API at that) to enable all use cases. It would also require a possibly more complex API for the simple use cases as well. Take a look at the GridPane API we offer to get an appreciation of just how much configurability a grid layout requires.
Without actual devloper studies, it’s hard to say which API would be preferable to an end-developer, but my gut is telling me that we should keep the common use cases simple, whilst enabling the kind of UI that you and I are discussing. It’s probably easy to agree that most common use cases require one of the following combinations of SplitPane:
* A single divider.
* Two dividers in the same direction.
* Two dividers, in separate directions.
If you disagree, then I’m interested to learn more about your use cases.
There was in SwingX a JXMultiSplitPane (I think that was the name) that Hans Muller added which had a tree-like model instead of a grid, such nesting was done via nodes with hsplits & vsplits and such. Which is yet another way to handle it.
But I agree, I think having a split view run in a single direction is simpler to understand and use for the common cases. However it could be quite cool to have a MultiSplitPane which was basically the same API as Grid but used dividers along certain borders or some such, for those cases where a more complicated API is worth it. In particular I’d love to see something like this developed in the “wild” and if it is successful we could adopt the approach.
Hans Mullers MultiSplitPane is fine for simple applications (e.g. Applets). It uses a tree model with named leafs to add content. The model can be defined as a String. The parser return a RootNode that can be used as a model.
I admit that the algorithm is a bit complicated. I ported it to JavaFX as an exercise to learn JavaFX. I didn’t manage to create a skinnable control from it yet.
But for normal use cases I guess it would suffice to let SplitPane do the work. In order to get rid of the manual nesting users can extend it to support a simple tree model, like in this quick and dirty example ( don’t look to closely 😉 ):
import java.util.HashMap;
import javafx.geometry.Orientation;
import javafx.scene.Node;
import javafx.scene.control.SplitPane;
import javafx.scene.layout.StackPane;
public class MultiSplitPane extends SplitPane {
HashMap regionContent = new HashMap();
SplitNode root;
public void addNode(String name, Node node) {
StackPane contentPane = regionContent.get(name);
if (contentPane == null) {
throw new IllegalArgumentException(“invalid region ” + name);
}
contentPane.getChildren().add(node);
}
public void setLayout(SplitNode root) {
this.root = root;
setDividerPosition(0, 0.5f);
regionContent.clear();
updateModel(root, this);
}
private void updateModel(SplitNode root, SplitPane splitPane) {
splitPane.setOrientation(root.orientation);
if (root.first.leaf) {
regionContent.put(root.first.name, new StackPane());
splitPane.getItems().add(regionContent.get(root.first.name));
} else {
SplitPane childPane = new SplitPane();
childPane.setDividerPosition(0, 0.5f);
splitPane.getItems().add(childPane);
updateModel(root.first, childPane);
}
if (root.last.leaf) {
regionContent.put(root.last.name, new StackPane());
splitPane.getItems().add(regionContent.get(root.last.name));
} else {
SplitPane childPane = new SplitPane();
childPane.setDividerPosition(0, 0.5f);
splitPane.getItems().add(childPane);
updateModel(root.last, childPane);
}
}
public SplitNode testModel() {
SplitNode root = new SplitNode(false, null);
root.orientation = Orientation.HORIZONTAL;
SplitNode first = new SplitNode(true, “left”);
root.first = first;
SplitNode secondRoot = new SplitNode(false, null);
secondRoot.orientation = Orientation.VERTICAL;
root.last = secondRoot;
SplitNode topRight = new SplitNode(true, “top-right”);
secondRoot.first = topRight;
SplitNode bottomRight = new SplitNode(true, “bottom-right”);
secondRoot.last = bottomRight;
return root;
}
private static class SplitNode {
boolean leaf;
private SplitNode first;
private SplitNode last;
private Orientation orientation;
private String name;
public SplitNode(boolean leaf, String name) {
this.leaf = leaf;
this.name = name;
}
}
}
so you can use it like that:
public void start(Stage primaryStage) {
primaryStage.setTitle(“Hello World!”);
MultiSplitPane pane = new MultiSplitPane();
pane.setLayout(pane.testModel());
pane.addNode(“left”, new Button(“left”));
pane.addNode(“top-right”, new Button(“top-right”));
pane.addNode(“bottom-right”, new Button(“bottom-right”));
primaryStage.setScene(new Scene(pane, 300, 250));
primaryStage.show();
}
Still, I agree that a full blown docking framework with minimize/maximize, floating, docking, etc. like in Netbeans, Eclipse or Jide is definitely missing in JavaFX. It would be fantastic to have that out of the box in JavaFX.
Hello Jonathan,
The tricky part with a split pane is how to set the min/pref/max sizes of the individual splits. Does it take those values directly from the component in the split or can you set it on the split pane?
How do one specify how the splits should grow? This can be tricky to get right if it isn´t supported properly by the split pane since there´s no way to express this in min/pref/max size.
Will the split adhere to the maximum size of the component?
Cheers,
Mikael
Mikael,
I’m bound to get the bulk of these answers wrong, so I’ll ask the current owner of the SplitPane to answer your questions next week when he’s back at work.
The only thing I’ll add is that we support accessing the dividers directly from the SplitPane, such that you can set the position of each divider. We are thinking about adding further API to the SplitPane.Divider API in the future, so any feedback you have in this area would be much appreciated.
— Jonathan
Mikael,
The min/pref/sizes are taken directly from the content in the split. Currently the split does not adhere to the max size of the content. A bug has been filed for this issue already.
Kinsley
Thanks.
Just to be clear, with “splits” I mean the areas between the dividers.
On a separate note, it would be really useful if one could get notified of replies in some way. Now it´s easy to forget to go back and check..
Hmmm, I definitely get notification, so the emails can be sent out. I’ll look into adding some functionality to allow for notifications to be sent if requested.
Thanks,
Jonathan
Ok – I’ve installed a comment notification plugin. You can subscribe at the bottom of the comments section.
I hope it works 🙂
Now I just have to educate myself how remember to respond to a post and not the thread. 🙂
COOL! That was fast!
Hello, Jonathan!
Please, can you clarify a simple binding question, which can help to others, I hope?
I want to put button exactly in center of scene.
I use binding as:
btn.translateXProperty().bind(scene.widthProperty().divide(2).subtract(btn.widthProperty());
All work perfectly, except one thing – for real symmetrical view we must subtract only half of button’s width, of course. And that problem is dead end for me.
At this time, actual width of button equals zero, so naive subtraction of btn.getWidth/2 do nothing.
Best way, that I found for now is using constant preferredSize for button and subtraction of preferredSize/2. But it is quite verbose, I think…
Please, is here more elegant solution?
If you want to do this with binding, you need to use another high level construct like “minus” and “divide” so that when the button width changes, things are repositioned.
However personally I wouldn’t use binding for positioning of stuff like this, because it is too complicated and heavyweight. I would just use a layout pane. I believe StackPane already does what you want. If you had to write one from scratch, that is pretty easy to (just subclass Pane and override the layoutChildren method and do simple layout there).
hey guys…great to see some javafx examples online.
Why you’re not publishing these examples as applets? I would like to see these samples embed in the html page…at the end javafx 2.0 is a competitor to flash..isn’t it?
I had a look at the applet samples included in the SDK but I would like to applet next to the source code.
Do you plan to get the JavaFx ‘Samples’ section back online (it used to be at)
Best
kzer,
This is going to sound really lame – but it’s just because the three of us are swamped with work! Despite appearances, this is our personal blog and we don’t work on this blog in work hours. If writing this blog was our day job, I can guarantee it’ll have a heap more applets, as well as sample applications, tutorials, etc. Our day jobs are actually really, really, really busy right now with just getting JavaFX 2.0 out the door. We’re doing our best to keep this site up to date. If you’re at all interested, you can see what each of us do on the about page.
However, seeing as you’re asking for it I’m sure we’ll consider putting up some applet demos when time permits.
Finally, it would be great to get the JavaFX.com samples section up and running with all new samples. Jasper would be the best guy to ask about that – he is the one responsible for samples in JavaFX 2.0, including applications like Ensemble that is shipping with the beta.
— Jonathan
There is another even more important reason — at the moment you must install the FX SDK in order to have FX applets work in your browser. It won’t be until GA that a random person visiting your website will be able to download and install the FX runtime. So if we put applets up, they wouldn’t work yet except for people who have installed the SDK, which would cause people to get the impression that FX applets simply don’t work. Which would be incorrect.
I think I’ll mention this as I always thought it would be useful in Swing.
Drag and drop of slit panes! Yes I’d like the ability to be able to drag a pane from one area of split pane layout to another. Some kind of snap to feature would be nice. It’s just an idea but I think it would be useful.
If we take some of the examples seen on the site so far. Imagine an application with two information panels on the left, one on top of the other and say a map view as the main view on the right taking 3/4 of the space. Now normally this would be ideal but some users might prefer one of the information panels to be the main panel to be the main panel, the drag it over the map and the map snaps to where the information panel was and the info panel to where the map was.
Obviously there would be a vertical split between the two side panels and the main panel and the panel containing the side panel would have a vertical split for them.
Just an idea!
Hi Steve, this sounds more like a request for a docking framework, since the split pane itself can’t know anything about it (requires some kind of UI gesture based on the content within the split). Typically a docking framework would take TitledPanes (or some specialized subclass) which then give a title bar which can be dragged and so forth.
This is actually the second request for a docking framework I’ve heard, would you like to file a JIRA feature request?
Good idea Richard RT-14039 added.
What’s the fxml look like for this ?
where does one find at schema or DTD or xlt like thing for the xml in the FXML files ? I’m spending too much trial and error on figuring out how to nest stuff in splipanes. If I had a DTD of th javafx* it might help ?
Thanks,
-jim
Hi all,
I have a Swing JTabbedPane where I plot a Swing JInternalFrame, I would like to add a FX SplitPane inside this JInternalFrame, how can I accomplish this?
Thanks all
Alberto
how to restrict splitpane line so that it can’t be resized after running the application..plz mail me.
thanks | http://fxexperience.com/2011/06/splitpane-in-javafx-2-0/ | CC-MAIN-2020-16 | refinedweb | 2,822 | 65.32 |
No, this isn't possible. The reason for this is that a) streams often can't
be read twice and b) ZIP archives need random reading.
The list of files is attached at the end, so you need to skip the data,
find the file entry, locate the position of the data entry and then seek
backwards.
This is why code like Java WebStart downloads and caches the files.
Note that you don't have to write the ZIP archive to disk, though. You can
use ShrinkWrap to create an in-memory filesystem.
The best way to do this would be to sort the results in descending
date-time order (so the latest response is first) and then to limit the
result set by one. This would look something like:
db.proficiencies.find(YOUR QUERY).sort({'date': -1}).limit(1)
You should use the MongoDB Date type. This stores the date in a more
efficient format than a string and includes millisecond accuracy. You can
then have a single index on the field which you can query using any
date/time.
I'd also recommend reading
which has some recommendations for storing time series data in MongoDB,
especially if you're doing a lot of updates.
$hosts_lines = file("hosts.txt");
foreach($hosts_lines as $line) {
$temp = explode(":", $line);
$hosts[$temp[0]][] = $temp[1];
}
$localhost = file("localhost.txt");
foreach ($localhost as $line) {
$temp = explode(":", $line);
unset($hosts[$temp[0]]);
}
foreach($hosts as $ip => $ports) {
foreach ($ports as $port) {
printf("%s:%s", $ip, $port);
}
}
The following code will do the trick
db.searchTest.find({$and:[{$or:[{name:{$exists:false}},{name:"Joe"}]},{$or:[{age:{$exists:false}},{age:55}]},{$or:[{sex:{$exists:false}},{sex:"male"}]},{$or:[{name:{$exists:true}},{age:{$exists:true}},{sex:{$exists:true}}]}]})
To make this efficient, you need to build appropriate indexes on your data
A general solution is to use the always unique pseudo-column ctid:
DELETE FROM foo USING foo b WHERE foo.e2 = b.e1 AND foo.e1 = b.e2
AND foo.ctid > b.ctid;
Incidentally it keeps the tuple whose physical location is nearest to the
first data page of the table.
Try this, a little weird, but :
array_filter($csv_arr, function($v){return array_filter($v) == array();});
Completely untested and I don't remember if this is the proper syntax or
not for closures, but it could work.
Edit (tested and working):
<?php
$csv_arr = array(
0 => array(
'Enfalac' => 'alpha linolenic acid 300 mg',
'Enfapro' => 'alpha linolenic acid 200 mg'
),
1 => array(
'Enfalac' => 'arachidonic acid 170 mg',
'Enfapro' => ''
),
2 => array(
'Enfalac' => '',
'Enfapro' => ''
),
3 => array(
'Enfalac' => 'calcium 410 mg',
'Enfapro' => 'calcium 550 mg'
)
);
$c = function($v){
return array_fi
You could do something like this. This works fine. The header file is
ommitted, where all file opeartions are implemented.
#include <linux/module.h>
#include <linux/init.h>
#include <linux/cdev.h>
#include "my_char_device.h"
MODULE_AUTHOR("alakesh");
MODULE_DESCRIPTION("Char Device");
static int r_init(void);
static void r_cleanup(void);
module_init(r_init);
module_exit(r_cleanup);
static struct cdev r_cdev;
static int r_init(void)
{
int ret=0;
dev_t dev;
dev = MKDEV(222,0);
if (register_chrdev_region(dev, 2, "alakesh")){
goto error;
}
cdev_init(&r_cdev, &my_fops);
ret = cdev_add(&r_cdev, dev, 2);
return 0;
error:
unregister_chrdev_region(dev, 2);
return 0;
}
static void r_cleanup(void)
{
cdev_del(&a
Before you set the constraint, do manual delete first.
DELETE a
FROM tableName a
LEFT JOIN
(
SELECT customer_invoice_id, MAX(id) id
FROM tableName
GROUP BY customer_invoice_id
) b ON a.customer_invoice_id = b.customer_invoice_id AND
a.id = b.id
WHERE b.customer_invoice_id IS NULL
this will preserve the latest record for every customer_invoice_id. and you
can now execute this statement,
ALTER TABLE tableName ADD UNIQUE KEY idx1(customer_invoice_id)
Demo,
SQLFiddle Demo
Use set
line = line.split()
line = list(set(line))
set returns an un-ordered collection of unique elements, then convert it
back to list and then sort the list.
Edit:
line = line.split()
line = list(set(line))
out.write(" ".join(sorted(line, x: (int(x.split(':')[0]),
int(x.split(':')[1])))) + '
')
One way, using Enumerable.GroupBy and then select the highest according to
TimesTested.
var uniques = list1.Concat(list2)
.GroupBy(t => t.ID)
.Select(g => g.OrderByDescending(t => t.TimesTested).First());
Edit ... or in VB.NET (sorry, i didn 't see the tag first) and with your
method:
Friend Shared Function GetNewerEvalQs(list1 As List(Of
EvaluationQuestionData), list2 As List(Of EvaluationQuestionData)) As
List(Of EvaluationQuestionData)
list1 = If(list1 Is Nothing, New List(Of EvaluationQuestionData),
list1)
list2 = If(list2 Is Nothing, New List(Of EvaluationQuestionData),
list2)
If Math.Max(list1.Count, list2.Count) = 0 Then Throw New
ArgumentException("One of both lists must contain data")
Dim newUniqueData = list1.Concat(list2).
OrderByDes
As far as I know there is no single line function. You have to combine some
functionallity.
The first line finds the zeros in your cell array, while the second line
deletes those entries. Note the () parentheses i.s.o. {} for removal.
Try this:
idxZeros = cellfun(@(c)(isequal(c,0)), cell_arr);
cell_arr(idxZeros) = [];
Assuming your set contains the strings you want to remove, you can use the
keySet method and map.keySet().removeAll(keySet);.
keySet returns a Set view of the keys contained in this map. The set is
backed by the map, so changes to the map are reflected in the set, and
vice-versa.
Contrived example:
Map<String, String> map = new HashMap<>();
map.put("a", "");
map.put("b", "");
map.put("c", "");
Set<String> set = new HashSet<> ();
set.add("a");
set.add("b");
map.keySet().removeAll(set);
System.out.println(map); //only contains "c"
If the IList<T> reference happens to refer to an instance of
List<T>, casting to that type and using RemoveAll is apt to yield
better performance than any other approach that doesn't rely upon the
particulars of its implementation.
Otherwise, while the optimal approach will depend upon the relative
fraction of items that are going to be removed and the nature of the
IList<T>, I would suggest that your best bet might be to copy the
IList<T> to a new List<T>, clear it, and selectively re-add
items. Even if the items in the list are not conducive to efficient
hashing, the fact that the items in the IEnumerable<T> are in the
same sequence as those in the IList<T> would render that irrelevant.
Start by reading an item from the IEnumerable<T>.
You want to look at the section "Fetch Request Blocks and Deleting Orphaned
Objects" on this page. It requires you to be using an RKObjectManager
(which you say you are) and describes the way that you tell RestKit how to
find content in the data store that should be deleted (and it checks and
doesn't delete things that it just received from the server).
Have you checked your results? Because I think they are wrong. For example,
if you have
A = [...
1 2 3
4 5 6
7 8 9];
and you want to set element A(1,1) and A(2,3) to NaN. What you are doing is
A([1 2], [1 3]) = NaN
but that gives
A =
NaN 2 NaN
NaN 5 NaN
7 8 9
The easiest and fastest way around this is to not use find, but logical
indexing:
M = rand(360,360,2,4);
maximum = 0.05;
tic;
M(M(:,:,2,4) > maximum) = NaN;
toc
Which gives on my PC:
Elapsed time is 0.003547 seconds.
It's not working because remove takes a query conditions object, not the
list of documents to remove. You also need to put your find inside your
remove callback or it will be executed before the remove completes.
Try this instead:
Article.remove({}, function (err) {
if (!err) {
Article.find(function(err,articles){
if(!err){
console.log(articles);
}else{
console.log(err);
}
});
}
});
Try the following query:
collection.update(
{ _id: id },
{ $pull: { 'contact.phone': { number: '+1786543589455' } } }
);
It will find document with the given _id and remove the phone
+1786543589455 from its contact.phone array.
You can use $unset to unset the value in the array (set it to null), but
not to remove it completely.
Use a script mediator to write a JS to perform this conversion. Its not a
simple < to { conversion, as per your request some part of the
request[1] in XML format needs to be converted to something similar to
JSON.
[1]<Status>200</Status> -> "Status":"200"
Refer to the following link for more information on script mediator
Two options:
You can remove the "_id" field from the map created:
...
resultElementMap.remove("_id");
System.out.println(resultElementMap);
Or you can ask the query results to not include the _id field:
DBObject allQuery = new BasicDBObject();
DBObject removeIdProjection = new basicDBObject("_id", 0);
DBCollection collection = db.getCollection("volume");
DBCursor cursor = collection.find(allQuery, removeIdProjection);
DBObject resultElement = cursor.next();
Map resultElementMap = resultElement.toMap();
System.out.println(resultElementMap);
See the documentation on projections for all of the details.
Try something, like this:
BasicDBObject match = new BasicDBObject("_id", appId); //to match your
direct app document
BasicDBObject update = new BasicDBObject("list", email);
coll.update(match, new BasicDBObject("$pull", update));
It should work.
)))
Are the entries associated with a timestamp such that the new entries will
always have timestamps later than the old entries each time the file is
updated? If so, just keep the timestamp of the last entry you pushed, and
as you parse the XML file, push anything newer and discard anything older.
If not, Python has pretty decent set manipulation algorithms, so I'd try
this: keep a set of entries that have been pushed already.
already_pushed = set()
Each time your script runs, do this:
make another set of the entries from the file
from_file = parse_file()
"subtract" (set difference) the set of already-pushed entries
new_entries = from_file - already_pushed
push the new entries
push_all(new_entries)
go through the original set and prune anything older than 4 hours + 5
minutes
cuto.
it = db.Usage.find({'Usage': "",'Rating': "", 'Average Ratings':
""})[1001:1500]
to_rem = []
for doc in iter:
to_rem.append( doc['_id'] )
try:
db.Usage.remove( {'_id' :{ '$in': to_rem } })
except:
print "Unexpected error:", sys.exc_info()[0]
It should be either:
launchctl unload -w ~/Library/LaunchAgents/org.mongodb.mongod.plist
or
launchctl unload -w ~/Library/LaunchAgents/homebrew.mxcl.mongodb.plist
Probably there is a cleaner solution but this should work:
Create new Data field from date strings:
var cursor = db.user_track.find()
while (cursor.hasNext()) {
var doc = cursor.next();
db.user_track.update(
{_id : doc._id},
{$set : {access_time_ : new Date(doc.access_time)}})
}
Now you can retrieve some records by comparing dates:
db.user_track.find({access_time_: {$lt: new Date("Sep 01 2013 00:00:00
GMT+00:00")}})
If everything works as expected remove obsolete records:
db.user_track.remove({access_time_: {$lt: new Date("Sep 01 2013 00:00:00
GMT+00:00")}})
In the future use date objects not strings
I am using MongoDb for Windows 7, version 2.4.2 and the database was
correctly removed.
But it's possible delete the files from data directory in the variable
"dbpath". That can be found at the second line of "mongod"'s process log.
The files from this db starts with the same name of your database.
The reason for this problem, probably is because the user who runs the
process do not have all the permissions to manipulate the disk files.
There is nothing you can do in the current version to provide this
functionality.
In a future version when user defined roles are available you could define
a role which allows insert() and update() but not remove() or drop() etc.
and therefore make yourself log-in as a different higher-role user, but
that's not available in the current (2.4) version.
I would suggest using a $or query, something like:
DBObject query = QueryBuilder.start().or(
new BasicDBObject("Filename",
java.util.regex.Pattern.compile(KeyWord)),
new BasicDBObject("Content",
java.util.regex.Pattern.compile(KeyWord))).get();
and do it as a single query instead of two separate queries.
Please be advised that MongoDB will not be able to use an index for
non-anchored regular expressions. See for details.
Your syntax looks slightly incorrect. As per docs:
collection.update( { _id: @id }, { $unset: { herField: true } }, { multi:
true });
Need the 'multi' option if you want to update multiple documents. E.g. from
all records on this collection.
A possible solution for 106 values:
int k = 0;
double a;
double b;
double[] coarse;
double[] fine;
for (int i = 0; i < 15; i++)
for (int j = 0; j < 7; j++)
{
a = 1.0 - j * 1.0/7;
b = 1.0 - a;
fine[k++] = a * coarse[i] + b * coarse[i+1];
}
fine[k] = coarse[14];
This assumes that the 15 original values are stored in array coarse[].
The interpolated values will end in array fine[].
The DDL command to add a unique constraint on multiple columns:
ALTER TABLE `table_name`
ADD UNIQUE INDEX (`column1`, `column2`);
A linear search starts at the beginning and compares every element until it
finds what you're looking for.
A binary search splits the list in the middle and looks if your value is
greater or smaller than the pivot value. Then it continues doing so
recursively.
For example in a list of people. You're looking for John. The binary search
looks in the middle of the list and might find Mark. John is lower, so the
search discards the upper half of the list, since John will not be in it,
and repeats this on the lower half (recursion)
A binary search is much more efficient but the list must be sorted.
However - sorting a list is slower than a linear search. You won't win in
efficiency by sorting a unsorted list first.
I assume you have two table:
orders (list of your orders)
order_items (list of itemd of order)
two table are relationed by field fk_order.
try this:
SELECT o.id, (select count(*) from order_items i where i.status =
'delivered' and i.fk_order = o.id) as item_delivered, (select
count(distinct i.status) from order_items i where i.fk_order = o.id) as
status_no
FROM orders o
Most likely, the people whom it didn't work for have had .gif and/or .jpg
associated with a different ProgId than "giffile" or "jpegfile".
You can get Inno to add the registration to whatever the current ProgId
association is like so:
[Registry]
Root: HKLM; Subkey:
"SOFTWAREClasses{reg:HKLMSOFTWAREClasses.jpg|jpegfile}shellHalve size";
Flags: uninsdeletekey
Root: HKLM; Subkey:
"SOFTWAREClasses{reg:HKLMSOFTWAREClasses.jpg|jpegfile}shellHalve
sizecommand"; ValueType: string; ValueName: ""; ValueData:
"""{app}himgr.exe"" ""%1"""
Root: HKLM; Subkey:
"SOFTWAREclasses{reg:HKLMSOFTWAREClasses.png|pngfile}shellHalve size";
Flags: uninsdeletekey
Root: HKLM; Subkey:
"SOFTWAREclasses{reg:HKLMSOFTWAREClasses.png|pngfile}shellHalve
sizecommand"; ValueType: string; ValueName: ""; ValueData: """{app}hi. | http://www.w3hello.com/questions/Efficient-way-to-remove-all-entries-from-mongodb | CC-MAIN-2018-17 | refinedweb | 2,424 | 57.06 |
In C I need to make an array which contains every possible 5 letter string combination of the letters "A", "C", "G", "T". That is,
AAAAA,
AAAAG,
AAAAC,
etc.
And I need these stored in an array. I'm aware there are 1024 possible combinations, and therefore the array would be allocated with that in mind.
I think the memory allocation would look something like this:
char* combinations[] = calloc(1024, 5*sizeof(char));
The following code does what you want.
#include <stdio.h> #include <stdlib.h> char ** getCombinations(){ char letters[] = {'A','C','G','T'}; // memory to hold pointers to our strings // this is less memory efficient, but gives us our char** char ** combinations = (char**)calloc(1024, sizeof(char*)); char * strings = (char*)calloc(1024, 5*sizeof(char)); unsigned i; unsigned int j; for (i = 0; i < 1024; i++){ combinations[i] = &strings[i * 5]; for ( j = 5; j--;){ combinations[i][4 - j] = letters[(i >> (j * 2)) % 4]; } } return combinations; } int main(){ int i; char ** combinations = getCombinations(); for ( i = 0; i < 1024; i++){ printf("%.*s\n", 5, combinations[i]); } free(combinations); }
The important line is the inner loop
combinations[i][4 - j] = letters[(i >> (j * 2)) % 4];
The purpose of this line is to turn an index (0-1023) into a combination by simply counting up.
Lets break this down:
letters[... % 4] returns a letter based on whatever (
...) is. the
% 4 part just makes indexing at
1,
5,
9, ... all return
'C'
(i >> (j * 2)) this basically allows us to select in base
4 (the number of possible letters)
combinations[i][4 - j] sets the value to the
jth letter (counting from the right) of the
ith word in the list. | https://codedump.io/share/ccVvtxSOtNYD/1/how-to-generate-every-possible-string-combination-of-4-letters-and-store-in-array-in-c | CC-MAIN-2017-43 | refinedweb | 278 | 59.03 |
Template:LanguageExisting/doc
Contents
Purpose
This template is used in Template:Languages.
If a page exists in a particular language, this template generates a parameter name for Template:Languages/Interface to show the existence. Don't use this for "en". "en" case has a special naming rule, then it was separated to Template:LanguageExistingEn.
Input parameters
- ns: (optional) leading namespace prefix such as Help: or Template: (default is empty)
- 1st: (mandatory) language code
- 2nd: (mandatory) Page name
Return
- {{{1}}}e, if {{{ns|}}}{{{1}}}:{{{2}}} exists.
- {{{1}}}m, if {{{ns|}}}{{{1}}}:{{{2}}} is missing.
Note
This template should not be used to test the existence of more than 50 languages. So it should be used only on the major languages. For other languages, that will be displayed in an unrollable sublist but not by default, using static links (possibly red links), use the template: LanguageNotTested instead, with the same parameters. | https://wiki.openstreetmap.org/wiki/Template:LanguageExisting/doc | CC-MAIN-2018-34 | refinedweb | 149 | 56.15 |
Bad handling of exceptions with py-execute-region
Bug Description
I just installed python-
Let me show what I mean. Create an new Python code file called say "test.py".
The only contents of this file will be this line:
5=6
Now open the interactive Python shell with 'M-x py-shell'. Mark this line in Emacs and then try to send this marked region to the Python interactive process using this Emacs command:
M-x py-execute-region
As you know, "5=6" is not a valid Python expression. What I normally expect when sending this to Python interpreter is to see the error message that's produced by Python. Instead, I see in the Emacs message buffer a message: "Jumping to exception in file /usr/tmp/
Moreover, the buffer that used to display the Python shell now opens the temporary file with contents:
#! /bin/env python3
# -*- coding: utf-8 -*-
import os; os.chdir(
5=6
Now, I don't know if this was the intended behavior, but I think this is extremely unfriendly. Instead of seeing a Python interpreter error, I see the unexpected and totally uncalled for action of opening the temporary file that I personally never even created myself. After digging through python-mode.el, I find that I can suppress this behavior if I add to my .emacs file the following line:
(setq py-jump-
Apparently, py-jump-
Ok, after setting it to 'nil', I still see another issue. Let's repeat the same experiment. Create an empty python file with 5=6 in it, then then mark it, and run "M-x py-execute-region", again. This time I see the expected Python error in the Python shell buffer. However, at the same time, I am seeing in the Emacs message buffer a message that looks like this:
"Wrote /usr/tmp/
My question is, why is it displayed? Do I really need to see this? After digging though python-mode.el, I see I can suppress this if I replace the line that says:
(write-region (point-min) (point-max) file nil t nil 'ask)
with
(write-region (point-min) (point-max) file nil 0 nil 'ask)
Now, it's up to you to decide if this is a bug or feature. When I type an invalid expression into an interactive Python shell (started with "python -i"), I just see the Python error. With emacs and python-mode.el I see all sorts of distracting messages and actions.
Am 23.03.2013 13:26, schrieb <email address hidden>:
> Public bug reported:
>
Checked in a fix. Please try again from trunk.
Thanks,
Andreas | https://bugs.launchpad.net/python-mode/+bug/1159118 | CC-MAIN-2016-18 | refinedweb | 439 | 71.55 |
BundlingBundling
deno bundle [URL] will output a single JavaScript file for consumption in
Deno, which includes all dependencies of the specified input. For example:
deno bundle colors.bundle.js Bundle Download Download Emit "colors.bundle.js" (9.83KB)
If you omit the out file, the bundle will be sent to
stdout.
The bundle can just be run as any other module in Deno would:
deno run colors.bundle.js
The output is a self contained ES Module, where any exports from the main module supplied on the command line will be available. For example, if the main module looked something like this:
export { foo } from "./foo.js"; export const bar = "bar";
It could be imported like this:
import { bar, foo } from "./lib.bundle.js";
Bundling for the WebBundling for the Web
The output of
deno bundle is intended for consumption in Deno and not for use
in a web browser or other runtimes. That said, depending on the input it may
work in other environments.
If you wish to bundle for the web, we recommend other solutions such as esbuild. | https://deno.land/manual@v1.22.0/tools/bundler | CC-MAIN-2022-40 | refinedweb | 180 | 64.2 |
Opened 10 years ago
Closed 10 years ago
Last modified 7 years ago
#10335 closed (fixed)
tzinfo.py should use getdefaultencoding instead of getdefaultlocale[1]
Description
In some locales on Mac OS X, page rendering fails with an error like:
File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/django/template/debug.py", line 81, in render_node raise wrapped TemplateSyntaxError: Caught an exception while rendering: unknown encoding: mac-icelandic
(where the encoding varies, could be mac-roman or other Mac-specific encodings)
It turns out that Python does not recognize some Mac-specific encodings, including those that are default in some locales (such as our Icelandic one).
That should be okay, we would just configure Python or Django to use a different default encoding. But the trouble is, we can't. In tzinfo.py the default encoding is determined as locale.getdefaultlocale()[1], which calls _locale._getdefaultlocale(), which on a Mac calls CFStringGetSystemEncoding. Of this latter function, the Mac documentation says:
In Mac OS X, this encoding is determined by the user's preferred language setting. The preferred language is the first language listed in the International pane of the System Preferences.
So there appears to be no way, in some locales, to get Django to render pages. One is forced to change the user's global language preference in the operating system settings. That's a bit too inflexible.
In contrast, sys.getdefaultencoding is easily controlled by calling sys.setdefaultencoding in sitecustomize.py.
Thus tzinfo.py should obtain the default encoding using sys.getdefaultencoding instead — at least optionally, or perhaps as a special-case exception for the mac-specific encodings, if one wants to take great care to minimize behavior changes in working installations.
Attachments (4)
Change History (15)
Changed 10 years ago by
comment:1 Changed 10 years ago by
I attached a patch against trunk for the simplest change, just using sys.getdefaultencoding() instead of locale.getdefaultlocale()[1]. I ran the test suite before and after. Before, I got hordes of errors almost all of which included the words "unknown encoding: mac-icelandic". After, I got only this:
====================================================================== FAIL: Doctest: regressiontests.model_regress.models.__test__.API_TESTS ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/gthb/Library/Python/2.5/site-packages/django/test/_doctest.py", line 2180, in runTest raise self.failureException(self.format_failure(new.getvalue())) AssertionError: Failed doctest test for regressiontests.model_regress.models.__test__.API_TESTS File "/Users/gthb/extsvn/django-trunk/tests/regressiontests/model_regress/models.py", line unknown line number, in API_TESTS ---------------------------------------------------------------------- File "/Users/gthb/extsvn/django-trunk/tests/regressiontests/model_regress/models.py", line ?, in regressiontests.model_regress.models.__test__.API_TESTS Failed example: BrokenUnicodeMethod.objects.all() Expected: [<BrokenUnicodeMethod: [Bad Unicode data]>] Got: [<BrokenUnicodeMethod: Názov: Jerry>]
... which I also got when running the test suite on unchanged django trunk.
comment:2 Changed 10 years ago by
Just found this, for what it's worth, in :
This is a known limitation: getdefaultlocale should not be used in new code.
If the intention is to compute the locale's encoding, locale.getpreferredencoding should be used instead.
Still, no such warning is given in
In any case, locale.getpreferredencoding also just returns _locale._getdefaultlocale()[1] on darwin or mac, so switching to that will not resolve this issue.
Changed 10 years ago by
comment:3 Changed 10 years ago by
comment:4 Changed 10 years ago by
I'm nervous about this.
sys.setdefaultencoding should really never be called. It has too many dangerous side effects and is only in the language by accident (long history; search python-dev archives for details if you really care).
A better solution would be preferred. I'm not convinced that Django's behaviour is actually incorrect here: it asks for the system locale and is given the right answer (since the user's system is set up to return their preference). The problem is that we don't necessarily understand that answer. Maybe that's the level it needs to be addressed on.
comment:5 Changed 10 years ago by
Sorry for the late reply, I hadn't set up an email address for notifications.
In response to mtredinnick: good point about
setdefaultencoding, I didn't know that. A few notes:
- The above patch (second one) only calls
getdefaultencoding, and it does so only if the present implementation fails in a way that will break Django. The
setdefaultencodingcall would be made consciously by the user as a workaround, only upon encountering this problem, and they will only encounter it if
asciiis not good enough.
- Based on a quick grep, this
DEFAULT_ENCODINGis used in only one place in all of Django, in this same file:
return smart_unicode(time.tzname[self._isdst(dt)], DEFAULT_ENCODING)
Standard timezone names are probably all representable in
ascii anyway, so maybe
ascii is good enough here. I suppose
DEFAULT_ENCODING (or timezones) must have been used in more places when I ran into this some weeks ago, because Django broke right away, but now it breaks only when I add something involving timezones. I now reproduce it with a view that does this:
from django.utils.tzinfo import LocalTimezone from datetime import datetime LocalTimezone(datetime.now())
So maybe it is enough to settle for
ascii, when the default locale method fails or returns a bogus encoding.
- I will attach a third, even more conservative patch, that just adds the codec lookup within the first try block. The trouble with Mac OS X in the Icelandic locale is that
locale.getdefaultlocale()[1]does not raise an exception but instead just returns an encoding name that will result in an exception later when used. This most minimal patch just makes the failure come up right away, so that the
exceptblock deals with it. This way the comment "Any problems at all determining the locale and we fallback" is made more true, with no other effects.
- On correctness: the premise "since the user's system is set up to return their preference" assumes too much. My locale preference for the overall OS is not necessarily my locale preference for Django. In the case of Mac OS X I don't get to separate the two because of the hardcoding explained above. And if my preference is a locale with a Mac-specific encoding not recognized by Python (such as the Icelandic locale), then that triggers a serious problem in
django.utils.tzinfo. The problem is not that Django doesn't understand the answer, but that Python itself doesn't. This problem can be worked around with one of the options I propose here.
Changed 10 years ago by
Still more conservative patch. This one just checks that the encoding is known to Python (and slightly amends a comment)
comment:6 Changed 10 years ago by
Changed 10 years ago by
improved fix that moved the encoding constant to the encodings module because it can be used in more than one location, fixed a bug in repr
comment:7 Changed 10 years ago by
I attached a new patch that is basically the latest one but the constant moved to the encodings module (this is reusable in multiple places). Also if unicode data is returned the repr method will no longer return a unicode string.
comment:8 Changed 10 years ago by
comment:9 Changed 10 years ago by
comment:10 Changed 10 years ago by
comment:11 Changed 7 years ago by
Milestone 1.1 deleted
Patch for simplest possible change, getdefaultencoding() instead of getdefaultlocale()[1] | https://code.djangoproject.com/ticket/10335 | CC-MAIN-2019-09 | refinedweb | 1,244 | 55.44 |
Sensory is part of a series which focuses on building simple dog behaviors in a simulation game. This week’s article looks at how to build instant conditional checks like isDogNearby in a very similar way to the other actions. Also, you’ll be able to read a few tips for how to architect the source code your game logic, as this becomes more important when building up the complexity in a sensory system.
Video 1: Both dogs start off facing each other by design, then select behaviors randomly. At 0:13 the gray dog picks a new behavior, and since the brown dog is nearby, it picks a growling behavior. Once the brown dog has finished randomly rolling over, it also growls back at the gray dog. They both then back off around 0:17 as part of the same reaction behavior, and resume selecting behaviors randomly as they are out of range again.
Instant Condition
As you learned last time, certain behaviors require preconditions. These are conditions that should be checked just before the behavior runs to make sure it’s applicable. For example, it’s necessary to check if another dog is nearby before starting a behavior that reacts to it!
For this to work, the conditions must be implemented so they:
Complete instantly if the condition is currently true.
Fail silently if the condition doesn’t match at the moment.
The difference between the other actions (e.g. Rotate, Translate, Animate) and these instant conditions is that they last much longer, obviously. However, desipite this difference, it’s wise to use the exact same API to keep the implementation simple. In practice, both actions and conditions implement the same Task object, which means you can plug them together into composite behaviors interchangeably.
Screenshot 2: Two dogs within range of each other growling!
Code Structure
Building a sensory system means that more data will be exchanged within the engine. This can cause various dependency problems if there’s no formal architecture, and you quickly end up with a spaghetti mess as the codebase grows. Of course, having this structure is also very useful when you build your AI actions, and not only conditions.
The whole topic of structuring game logic is certainly worth another whole series of tutorials (it can be a controversial topic too), so this article sticks to describing what’s necessary to support the simple behaviors in the video, which is good starting advice generally. What you need to be aiming for is this:
Establish layering in the codebase, with modular libraries for graphics, animation audio, AI core, scene graph, etc.
Build a very simple Entity class that has optional components from each of these libraries.
Implement Actions and Conditions that depend on the Entity but only access the components they need.
In practice, you can implement a nice modular component system rather trivially in C++ using pointers to forward declared types.
namespace alive { class Brain; } // AI namespace Ogre { class SceneNode; } // World Graph namespace Ogre { class Entity; } // Model & Animation struct Entity { alive::Brain* m_pBrain; Ogre::SceneNode* m_pNode; Ogre::Entity* m_pEntity; };
If you feel so inclined, you should add accessors to set and get each of these objects safely, and possibly give them a NullObject by default. If you’re feeling more ambitious with component systems, you could try something more object oriented using an abstract component type with a visitor — but it’s not entirely necessary here!
Figure 1: The Entity class is composed of three lower-level classes, each from a separate library.
In the game logic, you’ll need to store a reference to all active Entity objects. Using an normal array, or a std::vector in C++, is a perfectly acceptable first choice. If it turns out to be to slow to access specific entities by position or type, then optimizations can be easily added later.
Implementing a Simple Condition
In this tutorial, the behaviors only need one condition for isDogNearby. However, entity checks are very common so it’s wise to implement all common logic once and for all in a base class. (Node, a Condition in the following code is defined to being an Action since there are no differences.)
struct EntityCheck : public Condition { // A customizable entry point (a.k.a. visitor) for // application logic. DEFINE_VISITABLE_AS(EntityCheck); // When initializing this class, the game logic uses a // visitor to call the setup() function. void setup(Entity&, Entities&); // Store a pointer to the entity bound to this condition. Entity* m_pEntity; // Also need to access all other entities in the world. Entities* m_pEntities; };
Any class that derives from this will be initialized with the correct entities executed within a behavior tree. In particular, the isEntityNearby condition simply overrides the main execute() function which implements the logic for the condition:
Status EntityNearby::execute() { if (m_pEntity == NULL || m_pEntities == NULL) { return FAILED; } Vector3 position = m_pEntity->m_pNode->getPosition(); // Loop over all the objects in the world. Entities::iterator i = m_pEntities->begin(); for (; i != m_pEntities->end(); ++i) { Entity* other = *i; if (m_pEntity == *i || other->m_pDogNode == NULL) { continue; } // Check the distance to object compared to the reference. Vector3 d = (position-other->m_pNode->getPosition()); if (d.squaredLength() < m_Settings.getDistanceSq()) { return COMPLETED; } } return FAILED; }
That’s nearly the simplest code you can write to check if another entity is nearby. There’s lots of room for improvement in performance and features, but it’s good enough for a first design.
Responding to Other Dogs
With the isEntityNearby condition implemented, it’s possible for the dogs to react to each other. Here are the two behaviors chosen for the video, which seem the simplest and most appropriate:
A fighting animation, which looks like the dog is biting and shaking something, combined with a growling sound.
A rather short barking animation which can be looped multiple times, combined with the typical woof sound.
The most important part, however, is to make sure the dogs don’t get stuck forever growling and barking at each other. Keep in mind that the dogs don’t have any memory yet, and in fact, they’ve only just started seeing! So the only way to prevent problems is the same solution that was applied to keep the random behaviors consistent: structuring the behavior tree.
This time, after barking or growling, each dog with turn slightly and move backwards. This is achieved by playing a limping animation in reverse, which doesn’t look too bad. (See these animation tricks for ways to get the most out of your assets.)
Figure 3: The behavior tree for the new growl reaction. It’s a sequence that starts with a condition, then a parallel for sound and animation, and finally a parametric motion for limping backwards.
The sequence that represents this behavior can be put in any of the top-level selectors of the behavior, depending on when it should be activated. If you’re unfamiliar with any of the concepts in this behavior tree, see the previous tutorials.
Thinking Ahead
In summary, you can build simple instant conditions just like actions; the difference is that they check for information inside components of the game entities, using the code and data from other libraries (e.g. the scene graph in this case). When you plug such conditions at the start of sequences, they act as pre-conditions that dynamically filter the activation of any behavior. This essentially makes the choices in the tree subject to the constraints of the environment.
Of course, there’s lots of room for improvement in future tutorials and essays, including:
How to allow information from the sensory system to interrupt behaviors and not only affect upcoming decisions.
How behaviors can adapt parametrically to objects in the environment.
Screenshot 4: The condition isDogNearby triggers a static reaction.
Stay tuned to AiGameDev.com for more in this series. Questions welcome of course!
Discussion 0 Comments | http://aigamedev.com/open/tutorial/dynamic-choices-instant-conditions/ | CC-MAIN-2018-22 | refinedweb | 1,308 | 53.21 |
Cycle Detection Hackerrank
A linked list is said to contain a cycle if any node is visited more than once while traversing the list.
Complete the function provided for you in your editor. It has one parameter: a pointer to a Node object named that points to the head of a linked list. Your function must return a boolean denoting whether or not there is a cycle in the list. If there is a cycle, return true; otherwise, return false.
Note: If the list is empty, head will be null.
Input Format
Our hidden code checker passes the appropriate argument to your function. You are not responsible for reading any input from stdin.
Constraints
0<=list size<=100 Inputs
Sample Output
0
1
Explanation
The first list has no cycle, so we return false and the hidden code checker prints 0 to stdout.
The second list has a cycle, so we return true and the hidden code checker prints 1 to stdout.
Solution
There are 3 scenarios to consider:
The list is empty (i.e., head is null).
The list does not contain a cycle, so you can traverse the list and terminate once there are no more nodes (i.e., next is null).
The list contains a cycle, so you will be stuck looping forever if you attempt to traverse it.
To solve this problem, we must traverse the list using two pointers that we’ll refer to as slow and fast. Our slow pointer moves forward 1 node at a time, and our fast pointer moves forward 2 nodes at a time. If at any point in time these pointers refer to the same object, then there is a loop; otherwise, the list does not contain a loop.
We recommend that you check out Floyd’s Tortoise and Hare cycle-finding algorithm.
bool has_cycle(Node* head) { Node* fast = head; Node* slow = head; while(fast != NULL && slow != NULL && fast->next) { fast = fast->next->next; slow = slow->next; if(fast == slow) { return 1; } } return 0; }
boolean hasCycle(Node head) { Node fast = head; while(fast != null && fast.next != null) { fast = fast.next.next; head = head.next; if(head.equals(fast)) { return true; } } return false; }
/* Detect a cycle in a linked list. Note that the head pointer may be 'null' if the list is empty. A Node is defined as: class Node { int data; Node next; } */ boolean hasCycle(Node head) { if(head != null) { if(head.data == -1) { return true; } else { int a = head.data; head.data = -1; boolean x = hasCycle(head.next); head.data = a; return x; } } return false; }
def has_cycle(head): fast = head; while(fast != None and fast.next != None): fast = fast.next.next; head = head.next; if(head == fast): return True; return False;
2 comments: On Cycle Detection Hackerrank problem solution
Hi , I do believe this is a great site. I stumbled upon it
on Yahoo , I shall come back once again.
I wanted to thank you for this great read!! I definitely enjoying every
small touch of it I have you bookmarked to take a look at new material
you post. | https://coderinme.com/cycle-detection-hackerrank-problem-solution/ | CC-MAIN-2018-39 | refinedweb | 512 | 75.1 |
The QWorkspace widget provides a workspace window that can contain decorated windows, e.g. for MDI. More...
#include <qworkspace.h>
Inherits QWidget.
List of all member functions.
An MDI (multiple document interface) application has one main window with a menu bar. The central widget of this window is a workspace. The workspace itself contains zero, one or more document windows, each of which displays a document. the geometry of the MDI windows it is necessary to make the function calls to the parentWidget() of the widget, as this will move or resize the decorated window. Similarily you have to make the function calls to the parentWidget() of the MDI window to get the geometry of decorated window.
A document window becomes active when it gets the keyboard focus. You can activate it using setFocus(), and the user can activate it by moving focus in the normal ways. The workspace emits a signal windowActivated() when it detects the activation change, and the function activeWindow() always returns a pointer to the active document window.
The convenience function windowList() returns a list of all document windows. This is useful to create a popup menu "Windows".
See also Main Window and Related Classes and Organizers.
Example: mdi/application.cpp.
See also tile().
Example: mdi/application.cpp.
Returns TRUE if the workspace provides scrollbars; otherwise returns FALSE. See the "scrollBarsEnabled" property for details.
Sets whether the workspace provides scrollbars to enable. See the "scrollBarsEnabled" property for details.
See also cascade().
Example: mdi/application.cpp.
This signal is emitted when the window widget w becomes active. Note that w can be null, and that more than one signal may be fired for one activation event.
See also activeWindow() and windowList().
Example: mdi/application.cpp.
This property holds.
Set this property's value with setScrollBarsEnabled() and get this property's value with scrollBarsEnabled().
This file is part of the Qt toolkit. Copyright © 1995-2003 Trolltech. All Rights Reserved. | http://doc.trolltech.com/3.1/qworkspace.html | crawl-001 | refinedweb | 321 | 51.85 |
Not everyone knows that InterSystems Caché has a built-in tool for code profiling called Caché Monitor.
Its main purpose (obviously) is the collection of statistics for programs running in Caché. It can provide statistics by program, as well as detailed Line-by-Line statistics for each program.
Using Caché Monitor
Let’s take a look at a potential use case for Caché Monitor and its key features. So, in order to start the profiler, you need to go to the terminal and switch to the namespace that you want to monitor, then launch the %SYS.MONLBL system routine:
zn "<namespace>" do ^%SYS.MONLBL
As the result, you will see the following:
There are just two options here: the first one allows you to actually launch the profiler, the second one – to imitate the launch for assessing the amount of memory needed. Please note that simultaneous monitoring of a large number of programs may require a considerable amount of memory. If you still need it, you may want to increase the value of the gmheap parameter (this can be done in Management Portal => System Administration => Configuration => Additional Settings => Advanced Memory Settings. Caché will have to be restarted after you change this parameter).
Let’s select option one. The program will prompt you for the name of the program to be watched. You can use a mask with a "*" symbol. To view the list of programs that are already being watched, use the following command: "?L".
In this example, I will monitor all the programs in my namespace, so I’ll just type in "*".
An empty line marks the end of input. As you can see, 246 programs have been selected for monitoring.
We will now need to select monitoring metrics. Caché Monitor supports over 50 metrics, including the following ones:
- Number of global references
- Time spent on line execution
- Number of Caché Object Script lines
- Number of executed lines
- The number of executions of a particular line
- The number of blocking commands
- The number of successful blocking commands, etc.
In the majority of cases, the developer will need a minimum of metrics (number of lines, number of executed lines, number of executions of a particular line, time of execution of a particular line). These stats will be collected if you select option 1 (default value). If necessary, you can collect all stats or pick particular metrics only. In my example, I will be using a minimal set of metrics:
On the next step, the system will suggest choosing processes to be monitored. You can collect statistics for particular processes (you’ll need to provide a list of their PID’s), for the current process or for all of the. Again, I will stick with the default option.
Should the start be successful, you will see a “Monitor started” message. Hit Enter to open the monitor menu:
Let’s run through the items.
Stop Monitor – full termination of monitoring activities. This will also delete all statistics collected so far.
Pause Monitor – suspension of monitoring activities with a possibility to resume them and preserve the collected statistics.
Clear Counters – reset statistics.
The following four commands are responsible for the output of collected statistics. The first two are the most popular ones: output of detailed line-by-line information for a particular program and output of combined statistics for all executed programs. When these commands are selected, the system will prompt for a list of programs for which statistics will be shown and the name of the file the information will be saved to (if you do not do this, statistics will be shown right in the terminal). Also, you can always include the source code (if any), in addition to INT code, into the output file, but there will be no detailed statistics for it.
Everything is ready, let’s start the necessary programs and take a look at the statistics.
Attention! Using Caché Monitor may affect the system performance, which is why it’s not recommended to use it in production systems. Please use on development and debugging stages only.
A small bonus
To make the viewing of monitoring results more convenient, I have written a small web application that you can download from our GitHub repository
The application can currently do the following:
- Start, Stop, Pause/Resume Caché Monitor.
- Collect a minimal set of metrics for programs.
- Show a statistics summary for programs.
- Show detailed Line-by-Line statistics for a particular program.
- Highlight syntax.
- Jump from a call to a class-method or program (program label) to the code of this class-method of program (if the code is available and is being tracked by the monitor).
- Namespace changing.
A couple of screenshots
Please note that the application is distributed under an MIT license (that is, distributed “as is”), as it uses undocumented functions of Caché and its operation is not guaranteed on different versions due to possible changes of undocumented features from version to version.
This is a translation of a Russian-language article published on the Habrahabr 12/16/2014: Профилируем код с помощью Caché Monitor
Also see:... | https://community.intersystems.com/post/profiling-code-using-cach%C3%A9-monitor | CC-MAIN-2019-43 | refinedweb | 851 | 52.8 |
COLOR MANAGEMENT PROFILEgiannis72str Apr 8, 2012 10:00 AM
Hello!
I would like to ask this:
I want to export my project and in such a way that the final clip to look the same in every monitor (PC, Mac, TV, even film). The
only way of achieving that is by embeding a color profile. That can be done only in After effects. The problem is that I use After
effects in order to make the effects or some color correction. You can never use After effects to render the whole movie.
So, the final render is done by Premiere Pro. Premiere Pro though is not equiped with color management. So how can I embed
a certain color profile when exporting the final project from Premiere?
If I render the clips consisting the whole project in After effects and render them out with the embeded color profile and then
import them into Premiere for the final editing and exporting, will it be possible to maintain the spesific profile, since during clips
exporting from After effects, I checked "include metadata" (on)?
Thank you!
1. Re: COLOR MANAGEMENT PROFILEHarm Millaard Apr 8, 2012 11:30 AM (in response to giannis72str)
Show me a DVD or BR player and a TV that makes use of embedded color profiles. AFAIK these do not exist, so the short answer is NO, it is not possible.
2. Re: COLOR MANAGEMENT PROFILEgiannis72str Apr 8, 2012 12:45 PM (in response to Harm Millaard)
you don't seem to know much about this matter...
3. Re: COLOR MANAGEMENT PROFILEHarm Millaard Apr 8, 2012 1:15 PM (in response to giannis72str)
No, I'm one of the most well known nitwits around here. OTOH you don't seem to understand display devices, DVD or BR.
4. Re: COLOR MANAGEMENT PROFILEJim Curtis Apr 9, 2012 8:25 AM (in response to giannis72str)
giannis72str wrote:
Hello!
I would like to ask this:
I want to export my project and in such a way that the final clip to look the same in every monitor (PC, Mac, TV, even film).
This isn't possible. Just about every monitor has display settings that can defeat whatever your intention was to view it. This especially goes for digital TVs that have different viewing modes, and Never Twice the Same Color, NTSC.
The Color Management in Ae is extremely untrustworthy (in many professional opinions), and I'd guess that most people using Ae turn it off completely, and instead use LUTs with their monitor, with the possible exception of people doing film outs.
You can never use After effects to render the whole movie.
Wrong again.
I think the best you can hope for is to color your project on a high quality calibrated monitor, and just wish and hope that people down line have made a similar attempt to set up their display devices properly.
5. Re: COLOR MANAGEMENT PROFILEgiannis72str Apr 9, 2012 12:27 PM (in response to Jim Curtis)
Guys, it can be done. There is no doupt about that. Have a look at this tutorial. I am not asking whether it can be done or not. I know it can be done..... I just asked if it can be done with Premiere.
Link:
6. Re: COLOR MANAGEMENT PROFILEHarm Millaard Apr 9, 2012 12:36 PM (in response to giannis72str)
No. Not on DVD, not on BR,
7. Re: COLOR MANAGEMENT PROFILETodd_Kopriva
Apr 9, 2012 3:35 PM (in response to Harm Millaard)
giannis72str,
I think that you're misunderstanding how color management works in general. But, before addressing that, here's the simple statement: Premiere Pro does not do color management.
OK, so about that misunderstanding: You don't need to embed a color profile to use color management. The way that color management works is that you tell the color-managed application (e.g., After Effects) what kind of output device you're targeting, and it creates RGB/YUV values in the output such that the colors appear as you intend on that single kind of device.
Yes, you can embed a color profile, but that is for handing off files between post-production applications, so that you can tell another application (e.g., Photoshop) about that rendering intent mentioned above. A DVD player will not read or use an embedded color profile; it just reads the the RGB/YUV values and displays them in its own color space---which you allowed for by creating the color values from a color-managed application.
(BTW, I'm the one who created the video that you linked to.)
8. Re: COLOR MANAGEMENT PROFILEgiannis72str Apr 10, 2012 7:05 AM (in response to Todd_Kopriva)
Exactly!
I am only interested in creating a project and be sure that when I play it on my HDTV, I will watch exactly the same colors. I don't need to embed a color profile. I will just choose my output device I am targeting, as you said, and that's all.
I just wanted to know if that can be done in premiere because after effects is not suitable for a whole movie final editing and exporting. That's why I asked if the color profile can pass to Premiere through metadata, but it cannot.
By the way the tutorial was very enlightening!
I would like to ask though, since you know a lot about such matters (by saying that I am not being aggressive to Harm Millaard, who has a very very good spherical knowledge, since his responses have helped me a lot in several issues) If there is a way to ensure that color, gamma and contrast quality of a Premiere project can be reflected exactly as it is on other output devices (TV and especialy film), or the only thing we can do is just be sure to be whithin safe broadcasts limits.
Thank you!
9. Re: COLOR MANAGEMENT PROFILETodd_Kopriva
Apr 10, 2012 8:35 AM (in response to giannis72str)
Note what Jim said above about color management not being very useful for television, because there are so many settings that the user can tweak on their set (not to mention other parts of the system along the way).
The best that you can realistically do is to monitor/preview your colors on a broadcast monitor as you work and then do a final check of your output on a device of the same sort that you expect your end-users to use.
If you do want to use color management, you can create a master out of Premiere Pro and then bring that through After Effects for creation of multiple outputs for various output devices---but that isn't necessary for a single output like HDTV. | https://forums.adobe.com/thread/986847 | CC-MAIN-2017-04 | refinedweb | 1,123 | 66.78 |
CodePlexProject Hosting for Open Source Software
Quick start for prism 4 does not have examples of connecting to database using Entity framework. 4. Can Prism team or a prism user can provide some sample application or tutorial how to do so.
Thanks in advance for help.
I am using a service I put into my infrastructure project, that connects to the Data Library. Also, created a controller for the application and the entity, to facilitate initialization. Then inject via the viewmodel and bind accordingly.
public interface IEntityService{
//miscellaneous methods / properties
}
[Export(typeof(IEntityService))]
public class EntityService : IEntityService{
//implementation of IEntityService
}
For validation handling I created partial classes in a separate file for each entity and make the necessary overrides for handling null/length and RegEx input validation. Since they are partial, they are part of the entity, examples can be found by searching
for Beth Massi and EF Validation in WPF, a whole series of videos will appear and you can search more out on this if you need.
Hope this helps.
Are you sure you want to delete this post? You will not be able to recover it later.
Are you sure you want to delete this thread? You will not be able to recover it later. | http://compositewpf.codeplex.com/discussions/238060 | CC-MAIN-2017-39 | refinedweb | 208 | 55.44 |
Django Form Basics an
AuthorForm class.
Create a new file called
forms.py, if not already exists in the blog app i.e
TGDB/django_project/blog (same place where
models.py file is located) and add the following code to it.
TGDB/django_project/blog/forms.py
from django import forms class AuthorForm(forms.Form): name = forms.CharField(max_length=50) email = forms.EmailField() active = forms.BooleanField(required=False) # required=False makes the field optional created_on = forms.DateTimeField() last_logged_in = forms.DateTimeField()
Form fields are similar to model fields in the following ways:
- Both correspond to a Python type.
- Both validate data in the form.
- Both fields are required by default.
- Both types of fields know how to represent them in the templates as HTML. Every form fields are displayed in the browser as an HTML widget. Each form field is assigned a reasonable Widget class, but you can also override this setting.
Here is an important difference between the model fields and form fields.
Model fields know how to represent themselves in the database whereas form fields do not.
Form States #
A form in Django can be either in a Bound state or Unbound state. What is Bound and Unbound state?
Unbound State: In Unbound state the form has no data associated with it. For example, an empty form displayed for the first time is in unbound state.
Bound State: The act of giving data to the form is called binding a form. A form is in the bound state if it has user submitted data. It doesn't matter whether the data is valid or not.
If the form is in bound state but contains invalid data then the form is bound and invalid. On the other hand, if the form is bound and data is valid then the form is bound and valid.
is_bound attribute and is_valid() method #
We can use
is_bound attribute to know whether the form is inbound state or not. If the form is in the bound state then the
is_bound returns
True, otherwise
False.
Similarly, we can use the
is_valid() method to check whether the entered data is valid or not. If the data is valid then
is_valid() returns
True, otherwise
False. It is important to note that if
is_valid() returns
True then
is_bound attribute is bound to return
True.
Accessing Cleaned Data #
When a user enter submits the data via form Django first validate and clean the data. Does this mean that the data entered by the user were not clean? Yes, for the two good reasons:
When it comes to submitting data using forms you must never trust the user. It takes a single malicious user to wreak havoc on your site. That's why Django validates the form data before you can use them.
Any data the user submits through a form will be passed to the server as strings. It doesn't matter which type of form field was used to create the form. Eventually, the browser would will everything as strings. When Django cleans the data it automatically converts data to the appropriate type. For example
IntegerFielddata would be converted to an integer,
CharFielddata would be converted to a string,
BooleanFielddata would be converted to a bool i.e
Trueor
Falseand so on. In Django, this cleaned and validated data is commonly known as cleaned data. We can access cleaned data via
cleaned_datadictionary:
cleaned_date['field_name']
You must never access the data directly using
self.field_nameas it may not be safe.
Django Forms in Django Shell #
In this section we will learn how to bind data and validate a form using Django Shell. Start Django Shell by typing
python manage.py shell command in the command prompt or terminal. Next, import the
AuthorForm class and instantiate a
AuthorForm object:
.forms import AuthorForm >>> f = AuthorForm() >>>
At this point, our form object i.e
f is unbound because there is no data in the form. We can verify this fact by using
is_bound attribute.
>>> >>> f.is_bound False >>>
As expected
is_bound attribute returns
False. We can also check whether the form is valid or not by calling
is_valid() method.
>>> >>> f.is_valid() False >>>
It is important to understand that
is_bound and
is_valid() are related. If
is_bound is
False then
is_valid() will always return
False no matter what. Similarly, if
is_valid() returns
True then
is_bound must be
True.
Calling
is_valid() method results in validation and cleaning() will throw an
AttributeError exception.
Obviously, now the question arises "How do we bind data to the form"?
To bind data to a form simply pass a dictionary as an argument to the form class(in this case
AuthorForm) while creating a new form object.
>>> >>> data = { ... 'name': 'jon', ... 'created_on': 'today', ... 'active': True, ... } >>> >>> >>> f = AuthorForm(data) >>>
Our form object
f has data now, so we can say that it is bound. Let's verify that by using
is_bound attribute.
>>> >>> f.is_bound True >>>
As expected, our form is bound now. We could also get a bound form by passing an empty dictionary (
{}).
>>> >>> data = {} >>> f2 = AuthorForm(data) >>> f2.is_bound True >>>
Okay let's now try accessing
cleaned_data attribute before invoking
is_valid() method.
>>> >>> f.cleaned_data Traceback (most recent call last): File "<console>", line 1, in <module> AttributeError: 'CategoryForm' object has no attribute 'cleaned_data' >>> >>>
As expected, we got an
AttributeError exception. Now we will validate the form by calling
is_valid() method.
>>> >>> f.is_valid() False >>> >>> f.cleaned_data {'active': True, 'name': 'jon'} >>>
Our validation fails but we now have
cleaned_data dictionary available. Notice that there is no
created_on key in the
cleaned_data dictionary because Django failed to validate this field. In addition to that, the form validation also failed to validate
last_logged_in field of the
AuthorForm because we haven't provided any data to it.
Always remember that the
cleaned_data attribute will only contain validated and cleaned data, nothing else.
To access errors, the form object provides an
errors attribute which is an object of type
ErrorDict, but for the most part you can use it as a dictionary. Here is how it works:
>>> >>> f.errors {'created_on': ['Enter a valid date/time.'], 'email': ['This field is required.' ], 'last_logged_in': ['This field is required.']} >>> >>>
Notice that there are three fields which failed the validation process. By default,
f.errors returns error messages for all the fields which failed to pass validation. Here is how to get the error message for a particular field.
>>> >>> f.errors['email'] ['This field is required.'] >>> >>> >>> f.errors['created_on'] ['Enter a valid date/time.'] >>> >>> >>> f.errors['last_logged_in'] ['This field is required.'] >>>
The
errors object provides two methods to ouput errors in different formats:
>>> >>> >>> f.errors {'created_on': ['Enter a valid date/time.'], 'email': ['This field is required.' ], 'last_logged_in': ['This field is required.']} >>>
>>> >>> f.errors.as_data() {'created_on': [ValidationError(['Enter a valid date/time.'])], 'email': [Valida tionError(['This field is required.'])], 'last_logged_in': [ValidationError(['Th is field is required.'])]} >>> >>>
>>> >>> f.errors.as_json() '{"created_on": [{"code": "invalid", "message": "Enter a valid date/time."}], "e mail": [{"code": "required", "message": "This field is required."}], "last_logge d_in": [{"code": "required", "message": "This field is required."}]}' >>> >>>
Note that unlike the
cleaned_data attribute the
errors attribute is available to you all the time without first calling the
is_valid() method. But there is a caveat, trying to access
errors attribute before calling
is_valid() method results in validation and cleaning of form data first, consequently creating
cleaned_data attribute in the process. In other words, trying to access
errors attribute first will result in a call to
is_valid() method implicitly. However, in your code should always call
is_valid() method explicitly.
To demonstrate the whole process one more time, let's create another form object, but this time we will bind the form with data that will pass the validation.
>>> >>> import datetime >>> >>> >>> data = { ... 'name': 'tim', ... 'email': 'tim@mail.com', ... 'active': True, ... 'created_on': datetime.datetime.now(), ... 'last_logged_in': datetime.datetime.now() ... } >>> >>> >>> f = AuthorForm(data) >>> >>> >>> f.is_bound True >>> >>> >>> f.is_valid() True >>> >>> >>> f.cleaned_data {'name': 'tim', 'created_on': datetime.datetime(2017, 4, 29, 14, 11, 59, 433661, tzinfo=<UTC>), 'last_logged_in': datetime.datetime(2017, 4, 29, 14, 11, 59, 433 661, tzinfo=<UTC>), 'email': 'tim@mail.com', 'active': True} >>> >>> >>> f.errors {} >>>
Digging deep into Form Validation #
When
is_valid() method is called Django does the following things behind the scenes:
The first step is to call Field's
clean()method. Every form field has a
clean()method, which does the following two things:
Convert the field data (recall that the data is sent by the browser as a string to the server) to the appropriate Python type. For example, if the field is defined as
IntegerFieldthen the
clean()method will convert the data to Python
int, if it fails to do so, it raises a
ValidationErrorexception.
Validate the converted data received from the step 1. If validation succeeds, cleaned and validated form fields.
Finally, Form's class
clean() method is called. If you want to perform validation which requires access to multiple fields override this method in your form class.
Note: This is an oversimplified view of Django Validation Process. The reality is much more involved but that's enough, to begin with.
Let's take an example to understand how Django performs cleaning and validation when
is_valid() method is called on
AuthorForm class.
clean_email() is a method called (assuming this method is defined in the form class and
ValidationError is not raised in step 3) to perform some additional validation. At this point, it is guaranteed that
clean_email() method.
However, there is no guarantee that the data from other fields, for example, the name field is available inside
clean_email() method. So, you should not attempt to access
name field inside
clean_email()
AuthorForm class. Here are things we want to achieve.
- Prevent users to create Author named
"admin"and
"author".
- Save the email in lowercase only. At this point, nothing is stopping us to save the email in uppercase.
Open
forms.py and modify the code as follows:
TGDB/django_project/blog/forms.py
from django import forms from django.core.exceptions import ValidationError class AuthorForm(forms.Form): #... last_logged_in = forms.DateTimeField() def clean_name(self): name = self.cleaned_data['name'] name_l = name.lower() if name_l == "admin" or name_l == "author": raise ValidationError("Author name can't be 'admin/author'") return name def clean_email(self): return self.cleaned_data['email'].lower()
Restart the Django shell for the changes to take effect and then enter the following code. Here we are trying to validate a form where author name is
"author".
>>> >>> from blog.forms import AuthorForm >>> >>> import datetime >>> >>> >>> data = { ... 'name': 'author', ... 'email': 'TIM@MAIL.COM', ... 'active': True, ... 'created_on': datetime.datetime.now(), ... 'last_logged_in': datetime.datetime.now() ... } >>> >>> >>> f = AuthorForm(data) >>> >>> >>> f.is_bound True >>> >>> >>> f.is_valid() False >>> >>> >>> f.cleaned_data {'last_logged_in': datetime.datetime(2017, 9, 12, 22, 17, 26, 441359, tzinfo=<UT C>), 'created_on': datetime.datetime(2017, 9, 12, 22, 17, 26, 441359, tzinfo=<UT C>), 'active': True, 'email': 'tim@mail.com'} >>> >>> >>> f.errors {'name': ["Author name can't be 'admin/author'"]} >>> >>>
As expected, form validation failed because
"author" is not a valid author name. In addition to that
cleaned_data contains
clean_email() method.
Notice that form's
errors attribute returns the same error message we specified in the
clean_name() method. Let's try validating form data once more, this time we will provide valid data in every field.
>>> >>> >>> data = { ... 'name': 'Mike', ... 'email': 'mike@mail.com', ... 'active': True, ... 'created_on': datetime.datetime.now(), ... 'last_logged_in': datetime.datetime.now() ... } >>> >>> >>> f = AuthorForm(data) >>> >>> >>> f.is_bound True >>> >>> >>> f.is_valid() True >>> >>> >>> f.cleaned_data {'last_logged_in': datetime.datetime(2017, 9, 12, 22, 20, 25, 935625, tzinfo=<UT C>), 'name': 'Mike', 'created_on': datetime.datetime(2017, 9, 12, 22, 20, 25, 93 5625, tzinfo=<UTC>), 'active': True, 'email': 'mike@mail.com'} >>> >>> >>> f.errors {} >>> >>>
This time validation succeeds because data in every field is correct.
AuthorForm class:
TGDB/django_project/blog/forms.py
from django import forms from django.core.exceptions import ValidationError from .models import Author, Tag, Category, Post class AuthorForm(forms.Form): #... def clean_email(self): return self.cleaned_data['email'].lower() def save(self): new_author = Author.objects.create( name = self.cleaned_data['name'], email = self.cleaned_data['email'], active = self.cleaned_data['active'], created_on = self.cleaned_data['created_on'], last_logged_in = self.cleaned_data['last_logged_in'], ) return new_author
Nothing new here, in line 3, we are importing models from the blog app. In lines 12-20, we are defining the
save() method which uses form data to create a new
Author object. Notice that while creating new
Author object we are accessing form data via
cleaned_data dictionary.
Restart the Django shell again and Let's try creating a new Author via
AuthorForm.
>>> >>> from blog.forms import AuthorForm >>> >>> import datetime >>> >>> data = { ... 'name': 'jetson', ... 'email': 'jetson@mail.com', ... 'active': True, ... 'created_on': datetime.datetime.now(), ... 'last_logged_in': datetime.datetime.now() ... } >>> >>> f = AuthorForm(data) >>> >>> f.is_bound True >>> >>> f.is_valid() True >>> >>> f.save() <Author: jetson : jetson@mail.com> >>> >>> >>> from blog.models import Author >>> >>> a = Author.objects.get(name='jetson') >>> >>> a.pk 11 >>> >>> a <Author: jetson : jetson@mail.com> >>> >>>
Sure enough, our newly created category object is now saved in the database.
Our form is fully functional. At this point, we could move on to create form classes for the rest of the objects like Post, Tag etc; but there is a big problem.
The problem with this approach is that fields in the
AuthorForm class map closely to that of
Author models. As a result, redefining them in the
AuthorForm is redundant. If we add or modify any field in the
Author model then we would have to update our
AuthorForm class accordingly.
Further, as you might have noticed there are few differences in the way we have defined model fields and form fields. For example:
The
Author model is defined like this:
On the other hand, the same field in
AuthorForm is defined like this:
Notice that the
AuthorForm doesn't have
unique=True attribute because
unique=True attribute is only defined for the model fields, not for the form fields. One way to solve this problem is to create a custom validator by implementing
clean_email() method like this:
from django import forms from .models import Author, Tag, Category, Post class AuthorForm(forms.Form): #... def clean_email(self): email = self.cleaned_data['email'].lower() r = Author.objects.filter(email=email) if r.count: raise ValidationError("{0} already exists".format(email)) return email.lower()
Similarly, Form fields don't provide default,
auto_add_now and
auto_now parameters. If you want to implement functionalities provided by these attributes then you would need the write custom validation method for each of these fields.
As you can see, for each functionality provided by the Django models, we would have to add various cleaning methods as well as custom validators. Certainly, this involves a lot of work. We can avoid all these issues by using
ModelForm.
Removing redundancy using ModelForm #
The
ModelForm class allows us to connect a
Form class to the
Model
Author.
TGDB/django_project/blog/forms.py
from django import forms from django.core.exceptions import ValidationError from .models import Author, Tag, Category, Post class AuthorForm(forms.ModelForm): class Meta: model = Author def clean_name(self): name = self.cleaned_data['name'] name_l = name.lower() if name_l == "admin" or name_l == "author": raise ValidationError("Author name can't be 'admin/author'") return name def clean_email(self): return self.cleaned_data['email'].lower()
There is still one thing missing in our
AuthorForm class. We have to tell
Author double underscore).
fields = '__all__' # display all the fields in the form fields = ['title', 'content'] # display only title and content field in the form
Similarly, there exists a complementary attribute called
exclude which accepts a list of field names which you don't want to show in the form.
exclude = ['slug', 'pub_date'] # show all the fields except slug and pub_date
Let's update our code to use the
fields attribute.
TGDB/django_project/blog/forms.py
from django import forms from django.core.exceptions import ValidationError from .models import Author, Tag, Category, Post class AuthorForm(forms.Form): class Meta: model = Author fields = '__all__' def clean_name(self): name = self.cleaned_data['name'] name_l = name.lower() if name_l == "admin" or name_l == "author": raise ValidationError("Author name can't be 'admin/author'") return name def clean_email(self): return self.cleaned_data['email'].lower()
Notice that we haven't changed
clean_name() and
clean_email() method because they work with
ModelForm too.
Additional Validation in ModelForm #
In addition to Form validation,
ModelForm also performs its own validation. What is meant by that? It simply means that ModelForm performs validation at the database level.
ModelForm (using unique parameter).
Which validation occurs first Form validation or Model validation?
Form Validation occurs first.
How do I trigger this Model validation?
Just call
is_valid() method as usual and Django will run Form validation followed by
ModelForm validation.
Creating Form classes for the remaning objects #
Before we move ahead, let's create
PostForm,
CategoryForm and
TagForm class in the
forms.py file.
TGDB/django_project/blog/forms.py
#... from .models import Author, Tag, Category, Post from django.template.defaultfilters import slugify #... class AuthorForm(forms.ModelForm): #... class TagForm(forms.ModelForm): class Meta: model = Tag fields = '__all__' def clean_name(self): n = self.cleaned_data['name'] if n.lower() == "tag" or n.lower() == "add" or n.lower() == "update": raise ValidationError("Tag name can't be '{}'".format(n)) return n def clean_slug(self): return self.cleaned_data['slug'].lower() class CategoryForm(forms.ModelForm): class Meta: model = Category fields = '__all__' def clean_name(self): n = self.cleaned_data['name'] if n.lower() == "tag" or n.lower() == "add" or n.lower() == "update": raise ValidationError("Category name can't be '{}'".format(n)) return n def clean_slug(self): return self.cleaned_data['slug'].lower() class PostForm(forms.ModelForm): class Meta: model = Post fields = ('title', 'content', 'author', 'category', 'tags',) def clean_name(self): n = self.cleaned_data['title'] if n.lower() == "post" or n.lower() == "add" or n.lower() == "update": raise ValidationError("Post name can't be '{}'".format(n)) return n def clean(self): cleaned_data = super(PostForm, self).clean() # call the parent clean method title = cleaned_data.get('title') # if title exists create slug from title if title: cleaned_data['slug'] = slugify(title) return cleaned_data
The
TagForm and
CategoryForm are very similar to
AuthorForm class but
PostForm is a little different. In
PostForm, we are overriding Form's
clean() method for the first time. Recall that we commonly use Form's
clean() method when we want to perform some validation which requires access to two or more fields at the same time.
In line 56, we are calling Form's parent
clean() method which by itself does nothing, except returning
cleaned_data dictionary.
Next, we are using dictionary object's
get() method to access the
title field of the
PostForm, recall that in the Form's
clean() method none of the fields is guaranteed to exist.
Then we test the value of the
title field. If the
title field is not empty, then we use
slugify() method to create slug from the
title field and assign the result to
cleaned_data['slug']. At last, we return
cleaned_data from the
clean() method.
It is important to note that by the time Form's
clean() method is called,
clean() methods of the individual field would have already been executed.
That'all for now. This chapter was quite long. Nonetheless, we have learned a lot about Django forms. In the next chapter, we will learn how to render forms in templates. | https://overiq.com/django/1.10/django-form-basics/ | CC-MAIN-2018-09 | refinedweb | 3,162 | 60.41 |
Library implementing DevAssistant PingPong protocol
Library implementing protocol used for communication between DevAssistant and PingPong scripts (a.k.a. executable assistants). The protocol specification can be found at TODO:link.
Note that this library implements both “server” and “client” side. The “server” side is only used by DevAssistant itself. If you’re considering implementing a DevAssistant PingPong library for another language, you just need to implement the “client” side.
Usage
To write a simple PingPong script, you need to create a minimal Yaml assistant, that specifies metadata, dependencies needed to run the PingPong script (which is a Python 3 script in this case):
fullname: PingPong script example description: A simple PingPong script using DevAssistant PingPong protocol dependencies: # TODO: once dapp library is on PyPI/packaged in Fedora, it should also be added to list of deps - rpm: [python3] args: name: flags: [-n, --name] help: Please provide your name. files: script: &script source: script.py run: - pingpong: python3 *script
Let’s assume that the above assistant is ~/.devassistant/assistants/crt/test.yaml. The corresponding PingPong script has to be ~/.devassistant/files/crt/test/script.py and can look like this:
#!/usr/bin/python3 import dapp class MyScript(dapp.DAPPClient): def run(self, ctxt): if 'name' in ctxt: name = ctxt['name'].capitalize() else: name = 'Stranger' self.call_command(ctxt, 'log_i', 'Hello {n}!'.format(n=name)) return (True, 'I greeted him!') if __name__ == '__main__': MyScript().pingpong()
Things to Note
- The PingPong script class has to subclass dapp.DAPPClient.
- The run method has to accept two arguments, self (Python specific argument pointing to the object) and ctxt. The ctxt is a dict (Python mapping type) that holds the global context of the Yaml DSL (e.g. it contains the name argument, if it was specified by user on command line/in GUI).
- You can utilize DevAssistant commands [1] by calling call_command method. This takes three arguments - global context, command type and command input. The first is (possibly modified) context that was passed to the run method and the other two are the same as explained at [1].
- The ctxt dict can possibly get modified by running the command, check documentation of every specific command to see what it does and whether it modifies anything in the global context.
- The call_command method returns a 2-tuple - logical result and result of the command. Again, these are documented for all commands at [1].
- The run method has to return a 2-tuple - a logical result (e.g. True/False) and a result, just as any other command would.
- If you want the assistant to modify the global context, just modify the ctxt variable. All commands that you possibly run after pingpong in the Yaml file will then see all the modifications that you did.
- Note, that to actually start the PingPong script, you have to call pingpong() method of the script class, not the run() method.
[1]
Download Files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/dapp/ | CC-MAIN-2017-51 | refinedweb | 501 | 57.16 |
The term broadcasting refers to the ability of NumPy to treat arrays of different shapes during arithmetic operations. Arithmetic operations on arrays are usually done on corresponding elements. If two arrays are of exactly the same shape, then these operations are smoothly performed.
If the dimensions of two arrays are dissimilar, element-to-element operations are not possible. However, operations on arrays of non-similar shapes is still possible in NumPy, because of the broadcasting capability. The smaller array is broadcast to the size of the larger array so that they have compatible shapes.
Broadcasting is possible if the following rules are satisfied −
- Array with smaller ndim than the other is prepended with '1' in its shape.
- Size in each dimension of the output shape is maximum of the input sizes in that dimension.
- An input can be used in calculation, if its size in a particular dimension matches the output size or its value is exactly 1.
- If an input has a dimension size of 1, the first data entry in that dimension is used for all calculations along that dimension.
A set of arrays is said to be broadcastable if the above rules produce a valid result and one of the following is true −
- Arrays have exactly the same shape.
- Arrays have the same number of dimensions and the length of each dimension is either a common length or 1.
- Array having too few dimensions can have its shape prepended with a dimension of length 1, so that the above stated property is true.
The following program shows an example of broadcasting.
import numpy as np a = np.array([[0.0,0.0,0.0],[10.0,10.0,10.0],[20.0,20.0,20.0],[30.0,30.0,30.0]]) b = np.array([1.0,2.0,3.0]) print('First array:') print(a) print('\n') print('Second array:') print(b) print('\n') print('First Array + Second Array') print(a + b)
The Output is as follow
First array: [[ 0. 0. 0.] [10. 10. 10.] [20. 20. 20.] [30. 30. 30.]] Second array: [1. 2. 3.] First Array + Second Array [[ 1. 2. 3.] [11. 12. 13.] [21. 22. 23.] [31. 32. 33.]]
The following figure demonstrates how array b is broadcast to become compatible with a.
Reference:
Note: only a member of this blog may post a comment. | http://www.tutorialtpoint.net/2021/12/broadcasting-example-program-in-python.html | CC-MAIN-2022-05 | refinedweb | 392 | 63.09 |
REST and the Real World
February 20, 2002
In the last article I described a new model for web services construction. It is called Representational State Transfer (REST), and it applies the principles of the Web to transaction-oriented services, rather than publishing-oriented sites. When we apply the strategy in the real world, we do so using web technologies such as URIs, HTTP, and XML. Unlike the current generation of web services technologies, however, we make those three technologies central rather than peripheral -- we rethink our web service interface in terms of URIs, HTTP, and XML. It is this rethinking that takes our web services beyond the capabilities of the first generation technologies based on Remote Procedure Call APIs like SOAP-RPC.
In this article I discuss the applicability to REST of several industry buzzwords such as reliability, orchestration, security, asynchrony, and auditing. Intuitively, it seems that the Web technologies are not sophisticated enough to handle the requirements for large-scale inter-business commerce. Those who think of HTTP as a simple, unidirectional GET and POST protocol will be especially surprised to learn how sophisticated it can be.
Quick Review
REST is a model for distributed computing. It is the one used by the world's biggest distributed computing application, the Web. When applied to web services technologies, it usually depends on a trio of technologies designed to be extremely extensible: XML, URIs, and HTTP. XML's extensibility should be obvious to most, but the other two may not be.
URIs are also extensible: there are an infinite number of possible URIs. More importantly, they can apply to an infinite number of logical entities called "resources." URIs are just the names and addresses of resources. Some REST advocates call the process of bringing your applications into this model "resource modeling." This process is not yet as formal as object oriented modeling or entity-relation modeling, but it is related..
HTTP's extensibility stems primarily from the ability to distribute any payload with headers, using predefined or (occasionally) new methods. What makes HTTP really special among all protocols, however, is its built-in support for URIs and resources. URIs are the defining characteristic of the Web: the mojo that makes it work and scale. HTTP as a protocol keeps them front and center by defining all methods as operations on URI-addressed resources.
Auditing and Securing REST Web Services
The most decisive difference between web services and previous distributed computing problems is that web services must be designed to work across organizational boundaries. Of course, this is also one of the defining characteristics of the Web. This constraint has serious implications with respect to security, auditing, and performance.
REST first benefits security in a sort of sociological manner.).
In the previous article I discussed how it is possible to use Access Control Lists (ACLs) to secure services which use URIs as their organizational model. It is much harder to secure an RPC-based system where the addressing model is proprietary and expressed in arbitrary parameters, rather than being group together in a single URI.
With URI-based (REST) web services, administrators can apply ACLs to the service itself and to every document that passes through the service, because each of them would have a URI. As two business partners work their way through a process, each step could be represented by a new document with an attached ACL. Other partners (or auditors) could be given access to this document later merely by manipulating the ACLs.
In the REST model, both business partners would have a shared view of the URI-space representing the process. Rather than sending each other business documents through an email-style pipe, they would PUT them on each other's web sites with shared URIs and passwords. These could be easily checked for discrepancies. Third parties can be brought into the process (perhaps for auditing) merely by pointing them at one or both of the URIs. Standard HTTPS and HTTP authentication and authorization would be sufficient to keep intruders from also being able to look at the documents.
Of course, HTTP-based Web Services can go through firewalls easily, but that is the only point of similarity with RPC tunneling through HTTP such as XML-RPC or SOAP-RPC over HTTP. When you use HTTP over a firewall, you are being very explicit about what is going on. Your system administrator can look at the logs to determine what services are running and who is accessing them. She can disable PUT or POST to make certain parts of the service read-only. She can use standard filtering and hacker detection tools. When you use HTTP as HTTP, you and the system administrator are on the same team.
Conversely, when you tunnel XML-RPC or SOAP on top of HTTP, just to get through a firewall, you are deliberately subverting her work and reducing the efficacy of her tools. Because you are hiding your real actions in the XML body, you make it much harder for her to use standard filtering tools. This greatly expands the opportunity for new security holes. Famed security expert Bruce Schneier has spoken out against SOAP over HTTP for exactly this reason.
Service Context
People building web services often complain that the "tricky bit" of web services is maintainingshared context
- What organization does the client program represent?
- Where are we in this business process?
- What transactions have we done in the past?
- Are there any resources that I promise to hold for you?
- Are there any notifications I promise to deliver to you later?
- What permissions do you have?
There are three main ways that two partners can share context. One is to send the entire context with every message. This is obviously not very scalable: as the relationship deepens the context will grow larger and larger. Another option is to merely require each partner to keep context privately and presume that the other partner has the same idea of context. As you can imagine, this is quite unreliable: a network hiccup or programming bug could make the contexts diverge. The mechanism used on the Web today is to assign URIs to the context. For instance, on Expedia there is a "My Itinerary" URI for each individual. Within that, every purchase you have recently made has its own URI. While you are purchasing a new ticket, each step in the process is represented by another URI. The client may keep copies of resources for its own protection, but the context is always mutually available as a series of linked documents at a URI.
Orchestration
Every method that can be invoked on a resource or service is a possible connector between the client and the service. If every service has a hundred different methods then your connections become very complex -- essentially point-to-point integrations rather than reusable patterns.
There are various systems in the computing world that have proven the power of having just a few methods rather than many. For instance, every true Unix hacker knows that the command line is incredibly powerful because it is possible to pipe data from one process to another using the redirection "methods", ">", ">>", "<". The other Unix command line tools act as standardized filters and transformers connected by these methods.
Similarly, if you think of a SQL table as a resource, the methods SQL makes available are only SELECT, UPDATE, INSERT and DELETE. The rest of SQL is a set of transformers and filters that allow you to combine these methods into services. .NET My Services has Query, Insert, Replace, Update and Delete. As I showed in the last article, UDDI has get_*, delete_* and save_*. This pattern is ubiquitous: a small number of methods applied to diverse kinds of data.
HTTP has GET, PUT, POST, and DELETE. Anything that can be done with SOAP RPC or any other RPC can be done with those four methods. In fact, it is precisely because HTTP has few methods that HTTP clients and servers can grow and be extended independently without confusing each other. Rather than invent new methods they find ways to represent new concepts in data structures (increasingly XML data structures) and headers.
Now that we've boiled down our system to these basic methods, it turns out that we have the beginnings of a web service coordination, orchestration, and assembly language. I could imagine defining a new web service as easily as:
i = GET m = GET if i > m: WITH AUTHENTICATION $myuserid $mypassword POST else: WITH AUTHENTICATION $myuserid $mypassword POST
Or maybe we don't need a new language: we could incorporate these principles into existing scripting languages. The point is that unifying the method vocabulary of the Web provides tremendous opportunities for simplifying interactions. Nobody learning a new web service would ever have to learn the semantics of the various methods again. Web services can be combined through simple Unix-style pipes: GET this, GET that, transform, PUT there.
Of course there is no free lunch. Using someone else's web service requires you to understand their data structures (XML vocabulary and links between documents). But this is true whether we use REST or SOAP RPC..
UDDI has an implicit relational data model. .NET My Services has a concept of a "virtual document". The Web already has a data model of URI-addressed resources connected by hyperlinks. This shared data model has already been implemented in thousands of products, and already allows for startling levels of coordination. Consider a BabelFish translation of a Google result page aggregating multiple otherwise unrelated resources.
URI-centric web services are inherently easier to integrate because the output from one service can be easily used as the input to another. You do this merely by supplying its URI. There is nothing magical or special about this. It is how programs on your desktop computer share information today. The amazing thing is that specifications like SOAP and WSDL have no real support for this simple but powerful data sharing mechanism.
Once you begin to orchestrate multiple web services, transaction processing becomes much harder. It is very difficult for the client to get the various services to have a common view of a "transaction" so that a failure on any service causes a complete rollback on all of them. HTTP does not have a magical solution to this problem but neither do specifications such as SOAP or ebXML. The solutions proposed by the OASIS Business Transactions working group are currently protocol agnostic and should work fine with HTTP.
Asynchrony
People often complain that HTTP is not asynchronous. Unfortunately they often mean different things by that term. Most often they compare it to SMTP as an example of an asynchronous protocol. However, you could make the case that both HTTP and SMTP are synchronous or asynchronous depending on how you define those terms.
Email is asynchronous not primarily because of SMTP itself, but because of the collection of software that does store-and-forward, failure notification and replies. This software can be reused in an HTTP-services world through gateways. HTTP->mail gateways will be necessary for these packages in order to support SOAP-over-HTTP to SOAP-over-SMTP gateways regardless.
Still, HTTP does not have perfect support for asynchrony. It primarily lacks a concept of "callback" (also called a "notification" or "reply address"). Although this has been done many times in many places, there is no single standardized way to do this. Software dealing with notifications is not as reusable as other HTTP-based software modules. There is work under progress to correct this situation, under the nameHTTPEvents
HTTP needs concepts of explicit and implicit store and forward relays, transformational intermediaries, and return paths. These may be borrowed from SOAP headers and SOAP routing. In fact, some REST proponents believe that this is the only part of SOAP that is strongly compatible with the Web and HTTP.
Many commercial applications have been built using HTTP in a peer-to-peer, asynchronous fashion, by companies as large as Microsoft to as small as KnowNow. Now there is active effort standardizing best practices.
Reliability
Networks are inherently unreliable and the Internet is an extreme case of this. You must build network-based software to cope with failure, no matter what protocol you are using. Nevertheless, the right combination of software and protocols can make message delivery more reliable than it would otherwise be. What most people ask for is that a message be delivered if at all possible, and be delivered at most once. If it is not possible to deliver it, then it should be reported to the application.
Writing reliable-delivery software with HTTP is relatively easy. Although it is not rocket science, a full description does take more than a couple of paragraphs.
The bottom line is that software written on top of HTTP can make all of the same guarantees that expensive message queuing software can -- but on top of a protocol that you can send across the Internet to standards-compliant business partners. HTTP applications have traditionally not gone to this level of engineering effort but the underlying protocol does not prevent it. If you are not bothered by the fact that most reliable messaging software uses proprietary protocols then it is easy to tunnel HTTP on top of that proprietary protocol for use within a single organization.
Understanding Addressing
Hopefully I have demonstrated that REST can address all of the industry buzzwords successfully. Now let me ask for your help in promoting addressing to the same level. We as an industry need to understand that this is not just an issue, it is the issue. Until we get it right, we cannot expect to make progress on other issues. As I have shown, it is the basis of security, web service orchestration, and combination. It is the difference between a unified web services Web and a balkanized one.
For instance, a balkanized way to reference about a particular stock value is as
WebService.stockQuote("KREM"). This syntax is particular to some programming
language and is not available outside of it. It can only be used by some other service
through some form of glue. A universally addressable way is
"". This can be accessed by any application
anywhere in the world (with proper security credentials) whether or not it was written
to
understand stock quotes.
A balkanized way of submitting a purchase order is to call an RPC end-point which returns a corporation-specific purchase order identifier (even a UUID). A universally addressable way is to ask the server to generate a new location (using POST) where the purchase order can be uploaded (using PUT). Thereafter it can be referenced in legal materials, annotated with RDF or XLink, secured with ACLs and used as the input to other services. Once the purchase order is given a URI it becomes part of a World Wide Web of documents and services.
Since the dawn of the Web, proprietary addressing schemes have been swept away or rendered irrelevant in the face of the URI onslaught. Examples include Hyper-G, Archie and Microsoft Blackbird. Competing with URIs is neither healthy nor productive. Other addressing schemes survive on closed networks with a sort of second class status: AOL keywords, Windows "Universal Naming Convention" (UNC) names, "Windows Internet Naming" and so forth.
Now consider that every single RPC service by definition sets up its own addressing scheme and data model. If history repeats itself we can expect each RPC-based service to be relegated to second class syntax in comparison to their competitors that embrace the universal addressing scheme provided by URIs. More likely, all services will provide both interfaces (as, for example, better UDDI implementations do) and the RPC interface will just fall into disuse. "Full of Sound and Fury. Signifying Nothing."
Case Studies
Despite its advantages, HTTP-based, URI-centric resource modeling is not a common way of thinking about networking issues and REST-based web services (other than web sites) are not very common. On the other hand, useful, scalable, public RPC-based web services are also quite difficult to find.
The most obvious examples of HTTP-based web services are regular web sites. Any site that presents a purchasing process as a series of web pages can trivially be changed to do the same thing with XML. People who go through this process get all of the benefits of REST web services and none of the expense of re-implementing their business logic around a SOAP-RPC model.
Two businesses that have created (admittedly simple) REST web services are Google and O'Reilly. Google offers to its paid subscribers the ability to have search results published as XML rather than HTML. This makes it easy to build various sorts of sophisticated programs on top of Google without worrying about shifting HTML formats.
The Meerkat Example
O'Reilly's Meerkat is one of a very few useful, public web services. Unlike the majority of the services described on XMethods, Meerkat is used by thousands of sites every single day.
Meerkat uses the three foundation technologies of second-generation web services. It uses a standardized XML vocabulary: RSS. Meerkat would never have become as powerful and scalable if it had invented its own vocabulary. It absolutely depends on the fact that it can integrate information from hundreds of sites that use the RSS vocabulary and the HTTP protocol.
In addition to using HTTP and RSS, Meerkat uses URIs as its addressing scheme. It has a very sophisticated URI-based "API".
Meerkat's content is also available through an XML-RPC API. Before the REST philosophy was popularized it was not clear that Meerkat's HTTP/XML-based interface was already a complete web service. It would be an interesting project to compare and contrast these two interfaces formally.
One interesting point, however, is that all of Meerkat's content aggregation is done through HTTP, not XML-RPC or SOAP. It would be ludicrous to suggest that every content publisher in the world should not only support XML-RPC and SOAP but also some particular set of methods. This would be the situation if instead of inventing the RSS vocabulary the world had standardized the "RSS API."
To be fair, HTTP's advantages would have been less pronounced if Meerkat's interaction with these sites had required two-way communication instead of a simple one-way information fetch. At that point you do need some glue code or at least an orchestration language.
Meerkat shows that when many sites share an XML vocabulary, a protocol and a URI namespace, new services can arise organically. It is arguably the first equivalent in the Web Services world to a large-scale, distributed service like Yahoo. Meerkat's success suggests strongly that the most important enabler of large-scale, distributed web services will be common XML vocabularies.
REST limitations
There is no free lunch. REST is not a panacea. The biggest problem most will have with REST is that it requires you to rethink your problem in terms of manipulations of addressable resources instead of method calls to a component. Of course you may actually implement it on the server side however you want. But the API you communicate to your clients should be in terms of HTTP manipulations on XML documents addressed by URIs, not in terms of method calls with parameters.
Your customers may well prefer a component-based interface to a REST interface..
HTTP is also not appropriate in some circumstances. Because HTTP runs on top of TCP, it can have high connection times compared to protocols intended first and foremost for efficiency. HTTP is designed primarily for the kind of coarse-grained interactions that are used on the public internet, not the kind of fine-grained ones that might be appropriate on a single desktop, within a department or even in certain enterprise situations.
Once again, if DCOM or CORBA solves your fine-grained problem then there is no reason to move to REST. In my opinion, REST will first dominate primarily in the world of partner-facing, external Web Services. Once this happens, it will start to migrate to the Intranet, just as the Web did.
The Best Part
The best part about REST is that it frees you from waiting for standards like SOAP and WSDL to mature. You do not need them. You can do REST today, using W3C and IETF standards that range in age from 10 years (URIs) to 3 years (HTTP 1.1).
Whether you start working on partner-facing web services now or in two years, the difficult part will be aligning your business documents and business processes with your partners'. The technology you use to move bits from place to place is not important. The business-specific document and process modeling is.
There is no doubt that we need more standards to make partner-facing web services into a commodity instead of an engineering project. But what we need are electronic business standards, not more RPC plumbing. Expect the relevant standards not to come out of mammoth software vendors, but out of industrial consortia staffed by people who understand your industry and your business problems -- and not networking protocol wonks like Don Box and myself.
REST does not offer a magic bullet for business process integration either. What REST brings to the table is merely freedom from the tyranny of proprietary addressing models buried in RPC parameters. Do not be afraid to use hyperlinks and URI addresses in your business documents and processes. Specifications like SOAP and WSDL may make that near impossible, but that is a problem with those specifications, not with your understanding of your problem. If you use hyperlinks in your business document and process modeling (as you should) then there is a protocol that does not get in your way: HTTP.
REST: Everything Old is New Again
The rhetoric around web services describes them as "like the web, but for machine to machine communications." They are said to be a mechanism for publishing processes as the Web published data. REST turns the rhetoric into reality. With REST you really do think of web services as a means of publishing information, components and processes. And you really do use the technologies and architecture that make the Web so effective, in particular URIs, HTTP and now XML.
Thanks to Jeff Bone and Mark Baker for some core ideas. | https://www.xml.com/pub/a/ws/2002/02/20/rest.html | CC-MAIN-2022-05 | refinedweb | 3,748 | 53.31 |
The Atlassian Community can help you and your team get more value out of Atlassian products and practices.
I am new to Scriptrunner so bear with me... I am trying to set the value of a custom field based on the value of another custom field. When I make the change to the issue I can see that the secondary field is updating, however I don't see this change for a while in my filters. They still display the original field values. Eventually they catch up. Is there some sort of time delay in propagating these changes to the filters? I don't really understand what is going on. Here is my code:
def ClosureCode = getFieldByName("Closure Code") def piReview = getFieldByName("PI Review") def ClosureCodeValue = ClosureCode.getFormValue(); def piReviewValue = piReview.getFormValue(); log.error("TEST LOG BEHAVIOR") log.error("Closure Code is "+ClosureCodeValue) log.error("PI Review is "+piReviewValue) log.error("TEST LOG BEHAVIOR") if (ClosureCodeValue=="11314") //Successful set to N/A { log.error("Successful closure code") (piReview=="11328") piReview.setFormValue("11328") } if (ClosureCodeValue=="11316") // Unsuccessful set to pending { log.error("Unsuccessful closure code") (piReview=="11341") piReview.setFormValue("11341") }
I am not sure which of the two methods is correct to set the value so i am using both....
I disagree with Juan ... with Behaviours, there is no indexing impact. The form fields are updated live in your browser just like a human would using keyboard and mouse.
When you submit the form, the changes are stored in the DB and the index is refreshed.
Do you see similar delays for changes you make manually appearing in the filter? If so, I'd investigate for something wrong with the automatic indexing processes.
Btw.. this does nothing:
(piReview=="11328")
Thanks for tips @Peter-Dave Sheehan , I removed the code that does nothing. I will add some images that show what is going on... very strange. When closure code is successful I am setting PI Review to "N/A" in Scriptrunner since we don't need to review successful changes in our CAB meetings.
I just noticed if I open the issue to edit and press "Update" without changing ANYTHING, the filter will display properly. It is as if Scriptrunner is not getting the tables to update properly - I really haven't a clue...
Can you view the PI Review value after the change is applied by behaviour and you submit the form in the issue view? Does it show the new value?
What about in the change history tab?
I can't see why it would make a difference, but can you try to change
piReview.setFormValue("11328")
to
piReview.setFormValue(11328L)
@Peter-Dave Sheehan , I only see the change in the history tab after I press the update button, not after the value is changed by the behaviour.
I also changed form value to 11328L - no difference
My original observation "Eventually they catch up." was not accurate, it was probably just that I had updated the form and didn't attribute that to the catch up
Is there a way to force the update after changing the value in Scriptrunner? That may do the trick
What you are experiencing just makes no sense to me ... Behaviors work by modifying the form value on the browser side. Just like a person would with keyboard and mouse.
The edit/create/transition screen submit should take care of updating the database and index.
If you want to have changes take effect AFTER the update, you need to use a script listener instead of a behaviour. But behavious is designed to do exactly what you are describing.
Oh wait I now see this comment from you
I only see the change in the history tab after I press the update button, not after the value is changed by the behaviour.
That is completely expected. That's what behaviour does. Changes the values on the screen and lets the user submit the change. List any other changes on the screen, nothing is reflected in the DB or index until the user actually saves the change.
@Peter-Dave Sheehan okay, that makes sense. So I need a script listener... can you point me in the right direction as to a good example to do what I need done?
You will have a similar issue though.
You won't see the change in PI Review until you save the change to Closure Code.
What changes the closure code? Is this via some automation or some other external source? Or is that a user-triggered change?
If user-triggered, that I don't see why you can't keep using the current behaviour configuration.
But if you think listener is the way to do, you can find some reference in the adaptavist documentation:
Specifically, you will want a custom listener:
And using the "Issue Updated" event, look in the changeLog for a change in the Closure Code field, and if found then make the corresponding change to the PI Review. Then save the change and force a re-index of the issue.
@Peter-Dave Sheehan the closure code is changed by the user. It is a form field that appears when user closes the issue. I'll review the documentation, thanks for your support!
@Juan Manuel Ibarra , thanks I tried to re-index the project and it did not solve the problem. Thanks for pointing this out however. | https://community.atlassian.com/t5/Jira-Core-Server-questions/Scriptrunner-Behaviour-updates-issue-immediately-but-time-delay/qaq-p/1687920 | CC-MAIN-2022-40 | refinedweb | 900 | 66.54 |
Data::Direct - Perl module to emulate seqeuntial access to SQL tables.
use Data::Direct;
$dd = new Data::Direct("dbi:Informix:FBI", "bill_c", "M0n|c4", "porn_suppliers", "PRICE < 99.99", "ORDER BY PUBLICATION_DATE" || die "Failed to connect";
Last two arguments can be ommitted.
while (!$dd->eof) { # Iterate over all records if ($dd{'LAST_MODIFIED'}) { $dd->delete; # Mark RIP flag next; } # Change fields $dd->{'KILL'}++ if ($dd->{'REVENUE'} > 199.99); $dd->update; # Update record in memory $dd->next; # Goto next record }
$dd->addnew; # Add a new record $dd->{'PRICE'} = 999.99; $dd->{'KILL'} = 0; $dd->{'REVENUE'} = 199.99; $dd->update; # Update new record in memory
$dd->flush; # Rewrite table
Data::Direct selects rows from a table and lets you updated them in a memory array. Upon calling the flush method, it erases the records from the table and inserts them from the array. You can supply a WHERE filter to be applied both on query and on deletion, and additional SQL code for sorting the records.
Fetches the next record. Returns undef if gone past end.
Fetches the previous record. Returns undef if gone past beginning.
Returns true if cursor is after all the records.
Simillar, checks beginning of table.
Returns the number of records in the buffer
Returns the number of records in the buffer which are not deleted. recs and rows are not the same!
Sets a named bookmark, to be used for gotobookmark.
Takes the cursor to the specific bookmark.
Retrieve a numbered record.
Returns the row number the cursor is at.
Binds a column to a scalar, using a scalar reference.
Binds each column to a variable with the same name, under the package given. Use bindsimple with no parameters to bind to the main namespace.
Update record after fields have been changed by accessing the members of the object or the bound variables.
Add a new record and point the cursor on it.
Mark a record for deletion.
Unmark a record for deletion.
Check if a record is marked for deletion.
Writes a text file where every line represents a record, launch the process $editor, then update the table with the saved file. Records are serialized and deserialized by the code references in the last parameters.
$dd->spawn("grep <-v> <-i> Bill", sub {join(":", @_);}, sub {my $l = <$_>; chop $l; split(/:/, $l);});
Uses the string as a delimiter to serialize and deserialize records.
Uses CSV format to serialize and deserialize records.
Launches vi or whatever $ENV{'EDITOR'} points to as an editor.
Ariel Brosh, schop@cpan.org | http://search.cpan.org/dist/Data-Direct/Direct.pm | CC-MAIN-2016-40 | refinedweb | 419 | 68.67 |
Breakup of 'Hello World' Program in Java
This example explains each and every symbol & keyword used in 'Hello World' program in Java.
Breakup of 'Hello World' Program (very useful to know for Java beginners)
- { }
- ( )
- public
- class
- public Class Main
- public static void main()
- String[] args
- System.out.println()
- " "
- ;
Example:
public class Main { public static void main(String[] args) { System.out.println("Hello World"); } }
- { }
- ( )
- public
- class
- public class Main
- public static void main()
- String[] args
- System.out.println()
- " "
- ;
- these are called opening and closing braces. some of us also called it curely braces but ideally in programming context these are Braces only.
- these are called opening and closing parentheis. some of us also called it bracket but ideally in programming context these are Parenthesis only.
- public is basically keyword (predefined/reserved word by java) but known as access specifier. fresher should just remember that it is used classes, variables to make them accessible from outside world. You will get deeper knowledge on this in next lessons.
- class is also a keyword (predefined/reserved word by java). This keyword is used create our own data type which can have Data Member(variables) and Member Functions.
- Here we are creating 'Main' class since it is preceded by 'class' keyword. Public means this class is accessbile by othe programs.
- Main() is predfined method which is must to define in the project. Every project has atleast one Main() method.
- void means the Main() methods would not return any value (detail on this you will find in next chapters)
- static is again a keyword. Main() method should preceds with static keword to run the code
- args is array of string. this is known syntax which is called command line argument if used with Main(String[] args)
- System is a class
- out is PrintStream object
- println() method to print text on screen
- This is know as double quotes in java. Usually strings are put inside double quotes.
- this is called semicolumn, it is also known as statement terminator in Java
Note: Java is case sensitive langauge. Class and class are two different word for Java
CppBuzz Forum☆
Now you can ask programming questions on our forum [continue..] | https://www.cppbuzz.com/java/tutorial/breakup-of-hello-world-program-in-java | CC-MAIN-2019-09 | refinedweb | 360 | 64.61 |
design keyboard and how to move hands on it
this program is not running...how we have to do that...
can you please elaborate...
Post your Comment
Handling Key Press Event in Java
Handling Key Press Event in Java
... the handling key press event in java. Key
Press is the event is generated when you press any key to the specific
component. This event is performed by the KeyListener
Automatic key press
Automatic key press Hi
Please any one help me
is it possible automatic key press in j2me
Java Swing Key Event
Java Swing Key Event
In this tutorial, you will learn how to perform key event in java swing. Here
is an example that change the case of characters to uppercase as the user enters
any character. The given code accepts the string from
Java window state event
Java window state event Two java windows, one on top of another... Window at the back is deactivated when the other is on the top. Can iconify when there is only one window using win+d key.
In windows, press win+d, window
SWT "Enter" key event - Swing AWT
SWT "Enter" key event Can any one post me the sample code to get the enter key event?
My requirement is , I want some SWT button action..., listener);
shell.setSize(200, 200);
shell.setText("Press Key
KEY EVENT HANDLING
KEY EVENT HANDLING I am trying to write a program that receives every key stroke even when the window is not active or it's minimized.
Is there anyway to do
JavaScript Key Event
by using the onkeyup event. It occurs when a keyboard
key is released.
Following...
JavaScript onkeyup Event...;
This section illustrates you the use of onkeyup event in JavaScript.
In the given
Press Releases writing
Press release writing
Why press releases are important... used by companies to gain publicity has always
been the press release... or any entity that needs
public attention a press release plays a vital
Event management
Event management Hi,
I want event management application like maintaining email notifications while task creation and update in broader way using spring java
Different types of event in Java AWT
Different types of event in Java AWT
... the indication related to the key operation in
the application if you press any key for any purposes of the object then the generated event gives
Java event handling
Java event handling What event results from the clicking of a button
Java event-listener
Java event-listener What is the relationship between an event-listener interface and an event-adapter class
Java event delegation model
Java event delegation model What is the highest-level event class of the event-delegation model
Flex event
Flex event Hi.....
How many events are fired when your focus goes in one text box, you enter some text and then press tab?
Please give the name of all events which is used in this.....
Thanks
Java JButton Key Binding Example
Java JButton Key Binding Example
In this section we will discuss about how to bind a specific key to the
button.
Key binding feature is supported... the value into the textfield and press the Enter key then
the value
Click event
Click event hi............
how to put a click event on a particular image, so that it can allow to open another form or allow to display result in tabular form????????/
can u tell me how to do that????????/
using java swings event hierarchy
Java AWT event hierarchy What class is the top of the AWT event hierarchy? The java.awt.AWTEvent class is the highest-level class in the AWT event-class hierarchy
Display set of names in array when we press the first letter
Display set of names in array when we press the first letter Please... click the starting letter (like in gmail if we press the letter it will show the available names)using java
Have a look at the following link
registration key - Java Beginners
registration key I want the serial key for JCreator Pro 4.50. I lost it. If any body has it please send
Need an Example of calendar event in java
Need an Example of calendar event in java can somebody give me an example of calendar event of java
display a list of names(when we press first letter)
display a list of names(when we press first letter) If i gave 1character the name start wth that characteer has to be displayed... using java
import javax.swing.*;
class Name
{
public static void main (String
mouse event - Java Beginners
=getContentPane();
setLayout(new FlowLayout());
setTitle("Mouse Event... Event");
f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
f.add(btnClose
How to handle the text using Key Listener Interface
to handle the text using the key events
on the Java Awt component. All the key events... create two text fields text1 and text2. When you
press the key to write the text...
How to handle the text using Key Listener Interface
Tyoing hanan February 18, 2012 at 10:29 PM
design keyboard and how to move hands on it
queryGaurav Sharma May 17, 2012 at 1:41 PM
this program is not running...how we have to do that... can you please elaborate...
Post your Comment | http://roseindia.net/discussion/18430-Handling-Key-Press-Event-in-Java.html | CC-MAIN-2014-15 | refinedweb | 873 | 60.95 |
Definition of a goal region that can be sampled, but the sampling process can be slow. This class allows sampling the happen in a separate thread, and the number of goals may increase, as the planner is running, in a thread-safe manner. More...
#include <ompl/base/goals/GoalLazySamples.h>
Detailed Description
Definition of a goal region that can be sampled, but the sampling process can be slow. This class allows sampling the happen in a separate thread, and the number of goals may increase, as the planner is running, in a thread-safe manner.
- Todo:
- The Python bindings for GoalLazySamples class are still broken. The OMPL C++ code creates a new thread from which you should be able to call a python Goal sampling function. Acquiring the right threads and locks and messing around with the Python Global Interpreter Lock (GIL) is very tricky. See ompl/py-bindings/generate_bindings.py for an initial attempt to make this work.
Definition at line 71 of file GoalLazySamples.h.
Constructor & Destructor Documentation
◆ GoalLazySamples()
Create a goal region that can be sampled in a lazy fashion. A function (samplerFunc) that produces samples from that region needs to be passed to this constructor. The sampling thread is automatically started if autoStart is true. The sampling function is not called in parallel by OMPL. Hence, the function is not required to be thread safe, unless the user issues additional calls in parallel. The instance of GoalLazySamples remains thread safe however.
The function samplerFunc returns a truth value. If the return value is true, further calls to the function can be made. If the return is false, no more calls should be made. The function takes two arguments: the instance of GoalLazySamples making the call and the state to fill with a goal state. For every state filled in by samplerFunc, addStateIfDifferent() is called. A state computed by the sampling thread is added if it is "sufficiently
different" from previously added states. A state is considered "sufficiently different" if it is at least minDist away from previously added states.
Definition at line 43 of file GoalLazySamples.cpp.
The documentation for this class was generated from the following files:
- ompl/base/goals/GoalLazySamples.h
- ompl/base/goals/src/GoalLazySamples.cpp | http://ompl.kavrakilab.org/core/classompl_1_1base_1_1GoalLazySamples.html | CC-MAIN-2021-10 | refinedweb | 373 | 65.01 |
UINT64_Cmacro in C?
I saw this code:
bool contains_zero_byte(uint64 v) { return (((v)-UINT64_C(0x0101010101010101)) & ~(v)&UINT64_C(0x8080808080808080)); }
What is
UINT64_C(0x0101010101010101) doing?
UINT64_C is a macro defined as:
// Appends the correct suffix to a 64-bit unsigned integer literal. #define UINT64_C(c) c ## ULL
The
## token instructs the preprocessor to “paste together” the tokens on either side of it. So
UINT64_C(0x0101010101010101) results in the output
0x0101010101010101ULL.
But what is
ULL in
0x0101010101010101ULL? I’ll write more about these suffixes in the next post.
I wrote this because I felt like it. This post is my own, and not associated with my employer.Jim. Public speaking. Friends. Vidrio. | https://jameshfisher.github.io/2017/01/20/uint64_c_macro.html | CC-MAIN-2019-18 | refinedweb | 109 | 68.87 |
Silverlight and WPF both are User Interface technologies that allow developers and designers to create amazing user interface experiences.
This apparently leads a lot of people to the conclusion that UI creation in these environments must be a complex and time consuming undertaking. I often hear, “I can do things so much faster in WinForms.” Yet nothing could be further from the truth! Using appropriate techniques, UI development should be much faster in WPF and Silverlight. Unfortunately, many such techniques often are overlooked as developers have settled into old UI paradigms, and new paradigms are misunderstood. This article discusses one such technique that can be used with equal success both in WPF and Silverlight. When applied properly, this technique allows for user interface development not just much faster, but also in a much more reusable and maintainable and yet less error prone way. I challenge anyone to come up with a faster UI development approach than outlined here, regardless of the utilized technology or platform!
WPF and Silverlight (which I will use interchangeably, as the techniques discussed in this article apply to both) introduce some very intriguing user interface paradigms. Unfortunately, many of these paradigms often go dismissed as “not applicable to my development effort,” or “not applicable to business applications.” Styling comes to mind as a primary example. Developers generally tend to think of styling as a way to create a graphically designed consumer experience. In reality, however, styling is a technique that allows for generic and interchangeable external definition of property settings. Of course, developers set a lot of properties, so why would styling, which is all about setting properties productively, not be applicable to development and only be thought of as a designer task? I suppose the word “style” is a poor choice of terms in this sense, as it carries a graphical connotation, but whoever is “in the know” about styles understands that this is a very useful feature for business application development.
Another example of a misunderstood WPF/SL paradigm is “layout elements.” Both WPF and Silverlight support an interesting list of layout options (with “layout” being the task of positioning UI elements on a window or other surface). Layout elements allow the developer to position controls inside of a “layout container” (such as a Grid, among many others) and have the system perform automatic arrangements of elements. Concepts such as “fixed positioning of controls” or “flow layout” or “stacks of elements” and much more are supported natively, and new layout elements can be created with ease. However, most developers stick to one or two of those elements (commonly the Grid and StackPanel elements) and ignore the vast potential custom layout can provide.
A Naïve Example
Let’s start with some layout basics. Let’s say we want to create a typical data entry form. The second most naïve approach (I am deliberately ignoring the most naïve approach of putting things into a Canvas, in the hope the world has moved beyond that pitfall) is to put elements inside a Grid and position the elements using margins and alignment. Listing 1 shows a simple user control (or Window) for a data interface. This code, in fact, is not entirely unlike the paradigm used in WinForms and many other rich-client UI environments: We put controls into some sort of UI container and position them by defining left and top positions as well as control height and width. (Left and top positions are expressed as margins in the case of a Grid layout element, but the idea remains the same.) Note that sometimes, default values work and do not have to be explicitly specified, as in the case of the height and width of text blocks (which simply means that these values are still there but the system handled them for us).
You may have never really thought about what happens here, but let’s take a minute to reflect on how this code results in the UI shown in Figure 1. There really are at least three completely different types of information carried in this interface definition. First, XAML is used to define which controls we desire (text blocks and text boxes in this simple example). This provides quite a bit of business value and is part of the core problem we are trying to solve when we define an interface (“which controls should be used to show different elements of information?”). The second part of interest is that we bind elements to data. This is also a core business value we provide (“which data should be displayed and manipulated?”). Third, we are defining the details of the layout of the UI. In this example, I do this by setting margins as well as alignments. The alignments ensure that WPF understands that all control positions are to be relative to the top left edge, and no automatic resource logic of any kind is desired. (I took a shortcut of putting the alignment information into local resources as styles, since I didn’t feel like typing it for every control.)
This, of course, begs the question “What business value was delivered by defining layout information (the third aspect)?” And frankly, the answer is “very little.” This simply is an annoying detail we must deal with in order to create any kind of useful UI. It is time consuming (setting the right positions was by far the part that took me the longest in this example) and it is annoying (most developers hate it, and are not good at creating good looking UI layouts). It is also error prone. Furthermore, the UI we created so far is not very “WPF-ish” in the sense that it can’t do any resizing or any other fancy WPF stuff. Change the style of text elements to use a larger font throughout the app, and all of a sudden, our labels overlap the textboxes! It is also not very reusable. Try moving this UI to Windows Phone 7 and you will probably notice that although it works, it isn’t very useful and probably doesn’t fit on the screen at all.
So that’s a bit of a bummer, isn’t it? The majority of the time I spent creating this example went towards an annoying task that provided practically no business value, and created a result that is not very reusable and not very functional. Furthermore, I created this entire UI entirely by hand. Most people would probably want to use some sort of visual designer. If you do that, your XAML will probably contain a lot more code than shown in Listing 1. Most of it probably not being what you really want (changes are the controls towards the bottom of the UI are aligned relative to the bottom - a choice automatically made by the design tool - creating very odd results when the user resizes the screen). I often have people tell me that this is one of the greatest frustrations they face when working with WPF. Clearly, this needs to be fixed!
How Layout Works
Another aspect that is probably worth reflecting on for a moment is how the layout actually worked. Once again, this is probably something you normally don’t care about very much, but humor me and follow along.
As you can see in Listing 1, I define a list of text blocks and text boxes inside a Grid element. Each of these controls has properties set (either to a default or a hardcoded value) that convey layout information, such as height and width, alignment, or margin of the control. This simply is data associated with the control. There is nothing magical about a text box having a width of 200. This doesn’t automatically make the textbox 200 pixels wide. It is simply a small tidbit of information stored with the control instance.
A key aspect is that all these controls exist inside a Grid layout element. When the Grid is asked to render itself, it looks at all its child elements and various properties it cares about. In particular, the Grid looks at the margin, alignment, and dimension properties to determine what to do with the child element. For instance, it finds the first textbox, sees that it has a margin of 100 pixels on the left and 3 pixels at the top, and thus decides to use this information to position the element according to these settings. The grid also looks at the alignment information. The defined style sets the alignment to top and left, thus telling the Grid to ignore bottom and right margins and instead calculate the total space given to the textbox based on its height (which is automatically determined, since I didn’t set it explicitly) and width.
What is important to realize is that it is totally up to the Grid to look at this information and decide what to do with it, or whether to respect it at all. The Grid can also choose to add other information. For instance, Grids can arrange elements in rows and columns, and in that case, row and column information is stored with the element. To do so, we could simply at Grid.Column="10" indicate to a Grid that the element is meant to be put into the eleventh column of the Grid. This information can be stored with any element, regardless of whether it exists inside a Grid or not. However, chances are this type of setting would only be respected by Grids and no other layout element. Furthermore, this type of a setting can impact other things. If an element inside a Grid has a column set, then the Grid will look at the margin of the control and position it with that margin inside the column, rather than just within the Grid. A textbox with a left margin of 10 pixels with a column assignment of 2 will appear at a very different position than an element with the same margin but a different column setting.
All of this may appear very obvious in its use (it “just works” as expected), but there are some very interesting aspects here when you really think about it: Layout-related properties on elements are really just simple pieces of data that are only given meaning by the layout container those elements live in, and each container can choose to interpret this information differently.
A Slightly Better Approach
So how can we improve on our first example? One way would be to offload as much layout responsibility to the layout container as possible. A simple way to do this would be to remove as much position information as possible from each element, and instead configure the layout container in a more sophisticated way. Listing 2 shows such an example. You can see the result in Figure 2, which is practically indistinguishable from Figure 1.
Listing 2 is longer than Listing 1, but it took less time to write. The idea is to not put any layout information without business value into each element. Therefore, it becomes trivial to define the elements with their binding information. (I do set the width of the textboxes, as I have a business need to show text input areas that indicate support for a certain amount of text.) I then define three columns within the Grid (one for all the labels, one for the textboxes, and a small spacer in between… which could have alternatively been handled by a styled textbox margin) as well as seven rows (one for each label and textbox, plus a spacer row between “Position” and “Phone”). Most of these rows and columns have their height/width set to “Auto” so they can automatically adjust to the size required by the element within. All that remains now is to assign each control its own row and column, and voila, we have our final form.
There are several advantages here. For one, the tedious and error-prone task of calculating element positions (margins) is now gone. This is the main reason Listing 2 is quicker to produce than Listing 1, even though it is longer. This approach is also more powerful. For instance, we could change the font size of all our text elements, and the layout would automatically adjust appropriately. If you look closely, you may even notice that the layout in Figure 2 is a bit better than Figure 1 in how the labels are positioned. In the naïve attempt, I hand-coded the position of the labels to line up with the textboxes vertically in a way that seemed to be reasonably pleasing to the eye (for the current font size). In the second approach, the labels are centered vertically within each row, thus being always perfectly aligned, no matter what.
Unfortunately, this approach also has some downsides. For one, there is a lot of typing. For every UI of this kind that I have to create (imagine a business app with 500 data entry forms), I also have to create a grid with the appropriate number of grid rows and columns. I also have to set the row and column setting for each element (where is the business value of that task?). And if I ever want to add another control (say a fax number) in between existing controls, I have to change my grid row definition and the row assignment of all the controls below the inserted element. Not good. Also, this approach still isn’t nearly as flexible as we’d want it to be. Run this on Windows Phone 7 and the result is only slightly better than before. Yes, the second approach could adapt well to fonts available on the phone, but it can do little about the size problem. We can do a lot better than that!
Code Redux!
There are other layout elements we could try out. One approach is a StackPanel as shown in Listing 3. The result (shown in Figure 3) is clearly not as nice as the previous layouts, but there are some intriguing aspects here. As you can see, the amount of code required is drastically reduced. All the “non-business aspects” are now gone. Compared to the first naïve attempt, this new approach has now completely eliminated the annoying and benefit-free third aspect of this UI’s definition! This is fast to create and not error-prone at all! We probably have little need for a visual design tool to create this user interface. Heck, we could use a code generator to create this UI, or even create it dynamically on the fly! We can also easily add more elements with little effort. We can use this layout in practically any environment. This would work much better on Windows Phone 7 than any of our other approaches.
Unfortunately, there are also downsides. The main problem being that the UI looks so simplistic. Frankly, it is useless in the real world. Especially if you take this beyond a simple example and add a lot more elements as you would likely have it in a real UI. So clearly, we could not use this in a real-world app. Nevertheless, the potential offered by eliminating all non-essential aspects should tickle your curiosity. We now have an interface approach that allows for very productive development. Ugly or not, you can’t create a UI faster than this in WinForms! Perhaps we can travel further down this path?
A “Layoutless” Panel
What if we could take some of this “layout stuff” out of our UIs (again, consider a business application with hundreds of different data UIs) and create a single style that handles layout? Unfortunately, with the approaches we have taken so far, that isn’t possible. If we code a Grid element into our UserControl, than that will always remain a Grid, no matter what styles you create. If you create a StackPanel, it will also always remain a StackPanel. You can’t just re-style that. And you have to put one of these elements into the UserControl, because a UserControl requires a single layout panel as its only child.
But what if you had some sort of container element that didn’t have a hardcoded layout associated with itself? What if the desired layout element for such a container could be entirely styled?
I call this approach a “layoutless panel.” WPF and Silverlight already have the concept of “lookless controls,” which are controls that have only behavior and all the visual aspects are provided by styles. So if you drop a button on a form, the button doesn’t have an appearance. Instead, you can provide that appearance by defining a style. Oh, and if you don’t define your own style, then the system brings in an appropriate default style. So why not do the same thing for layout? Create a panel that does not have a default layout behavior, and instead define the layout through a style. And of course, we can bring in a default style in case no explicit style is defined.
WPF and Silverlight already have a panel called ItemsControl that provides almost this very behavior. I like to derive my own panel from it so I can give it a meaningful name and set appropriate defaults. Since I mostly work with MVVM frameworks, I like to call my “layoutless panel” a “View.” And yes, I define all my Views with this panel. Here’s what it takes to code such a View panel:
public class View : ItemsControl { public View() { var defaultTemplate = new ItemsPanelTemplate( new FrameworkElementFactory( typeof(StackPanel))); defaultTemplate.Seal(); ItemsPanelProperty. OverrideMetadata( typeof(View), new FrameworkPropertyMetadata( defaultTemplate)); } }
The ItemsControl class is a panel that can contain a number of children. It already comes with an ItemsPanel property that can be used to define the layout template that is to be applied to the children. Really, I could have gotten away with using the ItemsControl directly, but I like the ability to define a default layout strategy (a StackPanel in this case). Also, in real-world implementations, you’ll probably end up adding more features to this object, as you’ll see below.
Now that we have this control, we can redefine our interface as shown in Listing 4. As you can see, this code segment is even smaller than Listing 3, as we are getting closer to boiling our UI down to the bare business essentials. The only layout information left in this example is the margin definition in the “Phone” textblock, which creates a visual separator between blocks of elements. Note that I was able to completely get rid of the UserControl element, because the new View panel eliminates the UserControl container entirely (which surely can’t be bad for performance and also creates a cleaner definition of the UI essentials).
If you run this, you end up with a user interface that’s identical to the version shown in Figure 3, since we use a StackPanel as our default layout style. However, we could now create a completely different style to change the layout. Try putting this style definition into your app.xaml (or some other resource dictionary that’s included in your project):
<Style TargetType="l:View"> <Setter Property="ItemsPanel"> <Setter.Value> <ItemsPanelTemplate> <WrapPanel IsItemsHost="True" /> </ItemsPanelTemplate> </Setter.Value> </Setter> </Style>
This uses a wrap panel as the layout strategy and thus positions one element after the other, left to right, until there is no more horizontal space, at which point the layout continues at the next line. (This is basically how document layout works, as in systems such as HTML pages). Not very beautiful, but it works.
You can also go back to a Grid layout using this approach:
<Style TargetType="l:View"> <Setter Property="ItemsPanel"> <Setter.Value> <ItemsPanelTemplate> <Grid IsItemsHost="True" /> </ItemsPanelTemplate> </Setter.Value> </Setter> </Style>
Since there is no Grid-specific information in our view, we’d have to add Grid.Column and Grid.Row information to every element again (as well as add grid rows and columns to the style definition). Note that it would be perfectly fine to add Grid-specific properties to the view definition. As discussed above, it is up to each layout element to decide which properties to consider and which ones to ignore. Therefore, if our layout strategy is a StackPanel, it will ignore Grid.Column and Grid.Row setting entirely. It does not hurt to have these settings there (except perhaps for a tiny bit of memory consumption).
Of course, putting Grid-specific information into the view definition is not exactly “clean UI code” in my opinion. Neither is the definition of margins. Using wrap panels or stack panels looks bad. So clearly we have not arrived at our goal in terms of the resulting style. But we are already getting very close with the UI definition. What we are lacking is a more sophisticated layout strategy.
Data Edit Layout Panel
So let’s create our own layout container! This is actually much easier than you might think. Each layout container has to perform two fundamental tasks: First, the layout container has to iterate over all its children and give them a chance to measure themselves. This gives all the child elements, no matter what they are, and no matter whether they have more child elements themselves, an opportunity to indicate how much space they would like to occupy (with a little bit of optional guidance from the container as to how much space might actually be available). For instance, a text element with no height or width set, will try to lay out all its text in one line and will thus determine how much horizontal and vertical space it needs (unless the parent control indicates a horizontal or vertical space constraint, in which case the element could potentially choose to determine how much space would be needed if word wrapping was used). Similarly, a button may measure the space it wants to take up based on its contents or its height and width settings. Using this approach, we arrive at a fundamental idea as to how much space each child element would like to take up for itself. The layout panel can thus take this information and calculate the total space it would like to occupy itself.
The second step is to perform the actual layout by positioning the controls as needed. This is done by iterating over the child controls and setting their position as well as size. The size here may or may not be the size the control desires in the measurement step. For instance, a label may want to take up a lot of horizontal space, but it may be forced into a smaller space, resulting in text being cut off.
For the first example, I will create a simple data edit form layout container. You can see the code for this container in Listing 5. This panel follows the process outlined above. First, we override the MeasureOverride() method to measure all the child elements and determine the size the panel itself needs. To do so, we have to make sense of the child elements within the panel. Since this is a special data layout panel, we can make the assumption that the elements inside the panel are pairs, where one element is always some sort of label, and the next is some sort of edit control (as is the case in all our examples so far). Of course this is a bit of a simplistic approach as real-world UIs are generally a bit more complex. However, it would be relatively simple to construct a more sophisticated approach to derive meaning from the child element arrangements. For this example, our current approach suffices. Therefore, I created a method called GetColumns(), which iterates over all the child elements and sticks them into a simple ControlPair class, which holds references to a pair of elements that belong together.
Note that I snuck in a bit of extra functionality in the View class as well. I added two attached properties that can be used to indicate grouping of elements as well as column breaks. Previously, I created a sort of visual grouping by adding a margin between controls (or a spacer row in the Grid). However, this approach was not elegant. After all, there was no real meaning in the margin beyond the creation of whitespace. It is much more desirable to express concepts such as groupings explicitly, which can then be handled by different styles and layout strategies in different ways. Perhaps a style chooses to implement a group distinction by adding whitespace, or perhaps the style creates a different indicator, such as a horizontal line. This can only be achieved by embedding real meaning into the UI definition, rather than plump spacing information.
The GetColumns() method uses this information to set a flag in the ControlPair class indicating a new group. The GetColumns() method sticks all these controls into a List<ControlPair>. In fact, it even goes a step further and also checks for a column break property. If a column break is indicated, a new List<ControlPair> is started entirely. Thus we end up with multiple lists, each representing a column of control pairs. All these columns are then returned as a list of columns, thus ending up as a List<List<ControlPair>>. This provides a very handy logical arrangement of all our controls.
Now that we have this arrangement, we can iterate over all columns and all control pairs within columns. We then trigger measuring for each label and each control (with an indication that all these elements are free to use up as much room as they desire). As we look at each of these measurements, we memorize the widest label and the widest control in each column. We also add up the total height for each column by summing up the heights of all elements within a column (plus some spacing). Whenever we encounter a control that indicates a group break, we add some extra vertical height. Once we are done with a column, we memorize its total height and the maximum width taken up, and we then move on to the next column, where the whole process repeats. Once we render that next column, we add the width of the widest elements, and we check whether the total column height was greater than the previous greatest column height, and so forth. This process repeats until we have handled all columns and pairs, and thus know how much room we desire for our layout. This is the value which we return to the WPF/SL infrastructure from the MeasureOverride() method.
Now, all that’s left to do is the actual arrangement of the columns and controls. We do this in the ArrangeOverride() method. Once again, we get our list of columns and control pairs, and process column by column. In each column, we first look at all the labels and controls to find the widest of each. Once we have this information, we iterate over all pairs a second time and arrange all the labels at the same left position as well as all the controls at the same left position in a way that accommodates the widest label with all controls being left aligned. We stack each control pair vertically, and we add a bit of extra space before the pairs with a “new group” indicator. Once we are done, the same process is repeated for the next column. In the end, we return the actual total height and width we used up using this layout approach.
Voila! Our data edit layout panel is complete! To use it, we create a style that uses this panel as the layout element:
<Style x: <Setter Property="ItemsPanel"> <Setter.Value> <ItemsPanelTemplate> <l:SimpleEditLayout </ItemsPanelTemplate> </Setter.Value> </Setter> </Style>
To make the view a bit more interesting, I added a few more elements to the view definition, as shown in Listing 6. You can see the resulting interface in Figure 4. Not bad at all, considering how simple the view definition now is, and how quickly it can be produced, even by entry-level developers!
What is nice about this approach is that we can now stop worrying about UI layout. We set this up once and it gets used in every single data edit view we ever create. In real-world scenarios, the layout strategy may be more complex. After all, real-world UIs are more sophisticated than my example here. But that’s OK. Whoever creates the layout panel simply adds more features to add more meaning to the view definition. It would be easy, for instance, to add the ability to add secondary controls (such as a “…” button after a file name textbox). But most of the developers on your team do not have to worry about the implementation of such concepts, as only the style and the layout panel it uses have to be concerned with such details. The layout panel becomes a very small part of the overall project.
Sometimes people ask me how far one can take this layout concept. “Can you really build real-world UIs beyond simple examples?” they wonder. And the answer is: Absolutely! After all, you can simply add more and more features to your layout logic. Think of it this way: If push came to shove, you could always add top, left, height, and width properties and thus have exactly the same features a Canvas or Grid would provide. In fact, you could use a Canvas or Grid as your layout strategy in those cases where you really can’t figure out how to do it otherwise. Alas, it stands to reason that absolutely any kind of layout is achievable with this approach, with less effort (and potential sources of errors) than other UIs.
Note that although you always have the option to add positional information or use Grids, it is more desirable to stick to a more abstract approach. Once you start to hard-code positional or spacing information, you are back in a territory that provides little or no business value and serves a plain mechanical purpose. For instance, it is much better to indicate that “this „…’ button goes with the file name textbox” than it is to say “I want to position a „…’ button on the right edge of the window and I want to make the file name textbox 30 pixels narrower.” The first approach simply allows for much more flexible styling and reuse.
Adding Actions
At this point, we are missing the ability to trigger actions such as “Save” or “New” in our edit form. Of course, we could add buttons to our screen layout and trigger appropriate code in their event handlers. However, that approach would be somewhat inflexible and not nearly as reusable as we would like it to be. Also, a professional UI may want to use advanced concepts such as Ribbons or a fancy toolbar of some sort. New form factors, such as slate PCs with NUIs (Natural User Interfaces) may want completely different UI elements to trigger such actions. Phones may need an entirely different approach again. So clearly, if we want to create truly reusable UIs, we need a more flexible approach. Furthermore, we want to make sure we can define these elements as quickly as possible to retain our productive development approach.
The fastest approach is to not create any such UI elements at all. Instead, define the available abstractions more generic, and create a style that detects this definition and shows appropriate UI elements.
Here at EPS/CODE Consulting, we are generally doing this by defining a collection of what we call, for want of a better term, “Actions”. An “Action” is simply a specialized Command object. There are tons of different implementations of commands in WPF and Silverlight, and I am not going to engage in a discussion of which command approach is best. Instead, I am going to introduce you to a simple implementation of our action pattern and you can then apply it to your preferred command setup (or simply go with the simple setup presented here).
Our approach starts with two interfaces: An IAction interface as well as a collection of action objects called IHaveActions:
public interface IHaveActions { IEnumerable<IAction> Actions { get; } } public interface IAction : ICommand { string Caption { get; set; } bool BeginGroup { get; set; } }
As you can see, IHaveActions is a trivial interface with only an enumerable list of actions. The IAction interface itself is a bit more interesting. It implements ICommand and adds more features to it. In this case, all I have added is a Caption and an indicator that tells us whether this action represents the start of a new group. In real-world implementations, we often add a lot more, such as visuals (icons), descriptions (often used in tooltips), and even hierarchies of actions. You can make this setup as complex and powerful as you want. The basic idea always remains the same.
The actual implementation of IAction is pretty simple, although a bit longer. You can see it in Listing 7. It simply implements the interfaces by creating the required properties and delegates and it also provides a convenient way to create a new action by passing parameters to the constructor, all of which have default values for ease of use.
We can now use this setup in our customer view model by implementing IHaveActions and creating a list of whatever actions we desire. You can see a simple example in Listing 8. Note how this view model simply defines actions by creating instances of the Action object and passing in parameters, including lambda expressions as execution delegates. This enables for extremely straightforward and productive definition of behavior that can be triggered from the user interface. (On a side note: I am using message boxes in the lambda expressions. This is an absolute no-no in the real world, as it completely ties our UI to the WPF/SL message box classes, thus killing reuse. It is only used here to present a simple example.)
Now that we have our view model configured to have associated actions all that is left to do is creating a style for our View object that shows appropriate UI elements in case the view is bound to a model that has actions. Listing 9 shows a modified style that includes a Template definition. The template defines the overall arrangement of a view. By default, our view object simply shows all the items within itself, based on whatever layout strategy we styled. We can modify this template at will. In this simple example, I change the view’s template to itself be a Grid with two rows. One for a toolbar-like element and the second row (which takes up the remainder of the view) is used to show the actual items in the view (which are then laid out by our layout style).
The toolbar row of the grid is set to auto size to accommodate whatever elements are within it. In this example, I made the content of the first row an “Action Grid.” This is a special subclass of the default Grid, which has one additional property called “Model.” Whenever this property is set (or bound) to an object that implements IHaveActions, the action grid sets itself visible, otherwise it collapses. Otherwise, an ActionGrid works just like any other Grid in WPF. (Listing 10 shows the implementation of the ActionGrid class.) Using this class bound to the current data context provides me an element that is automatically visible or invisible based on the bound view model. So everything I put inside the grid will only show up when there are actions. So I can now simply put an items control inside this grid and bind it to the list of actions. All that’s left to do now is create a data template for each item, which I set to a simple button that is bound to the appropriate caption and command to automatically trigger our execute delegate. There also is a simple trigger that makes sure there is a gap before the button if the button starts a new group, by increasing the left margin. Figure 5 shows the result.
Of course, this is a pretty simple style since my space here is limited. But you can improve this style any way you want. You could use a real toolbar, or a Ribbon. You can even use multiple elements to do the same thing. For instance, you can use a toolbar as well as a menu, plus a right-click menu. The sky’s the limit, once you grasp the power of styling. Figure 6 and Figure 7 show two more example styles. I do not have enough room in this article to explain the details that go with the creation of these styles, but you can take a look at the sample code that comes with this article to see how it is done. (Note that both screens utilize a slightly enhanced version of the Actions infrastructure to support styleable icons. The implementation of this can also be seen in the downloadable source code.)
Now that we have this part of our little framework completed as well, we can be pretty happy with what we have achieved. We can now create data edit UIs extremely quickly with our layout element, and we can add actions even faster since we now don’t have to worry about that part of the interface at all anymore. Our style simply is reused in all UIs we create. I encourage you to download the code sample associated with this article and try to add more interfaces by defining additional views. It should take no time at all and chances of introducing bugs should be minimal, even if you are not an experienced WPF or Silverlight developer.
Reusing the View
We have now achieved our task of creating WPF and Silverlight UIs in an extremely productive fashion. We can now have developers churn out UIs at a very rapid pace and we have eliminated many sources of errors. Developers now simply do not have to worry about creating good looking UIs, and designers can now tweak the visuals in just a handful of relatively simple styles. I simply cannot imagine a more productive way of creating user interfaces.
However, it gets better!
Not only can we now create professional UIs extremely fast, but the UIs we create are now also highly reusable. Look at the definition of our single example view in Listing 4 one more time. (We only have one view in this article, but real-world projects would have hundreds of such views, while there still is just one style.) The view definition is now extremely simple and there is nothing in it that would tie it to a specific screen size or layout, or even a specific device. As a result, the view can be used in many ways.
For instance, we can reuse the view as is in a Windows Phone 7 application. All we have to do to make that work is define a new set of styles and a different layout strategy to lay out the UI in a way that is appropriate for the smaller screen (most likely a relatively simple vertical stack or something similar). One of the issues I encounter when I take apps to the mobile space is that one often needs to scale down functionality and limit the number of fields shown. We could easily accommodate that need by adding a property to each element, such as View.Significance="Low", which can then be used by the layout engine to filter out less important elements on smaller resolutions.
In similar fashion, we could adapt our UI for different WPF or Silverlight setups. In particular, we can easily support Natural User Interfaces (NUIs) and other multi-touch and slate environments.
One of the things that might surprise you is that our view definition is now not confined to XAML scenarios anymore. Due to the extremely generic nature of our view definition, we can now simply use our view definition as XML. With that, a whole new set of options becomes available to us. For instance, we can use our view definition in ASP.NET MVC by adding a relatively simple view engine. Now think about this last statement again! We can use our XAML UI definition to create ASP.NET MVC web applications! This isn’t something that is supported out of the box in MVC, but it is not rocket science to add that functionality.
In a similar fashion, we can use our XML/XAML in completely different environments such as iPhones, iPads, or Andriod devices. The idea here is similar to the MVC idea: Read the XAML and parse it as XML and dynamically create a UI in memory that contains everything the XAML view definition provides. This is possible because the view definition is now abstract enough to not tie us directly to WPF or Silverlight.
The actual implementation of an ASP.NET MVC view engine or an engine for iPhones or Andriod devices (and other platforms and devices) is beyond the scope of this article, but it is clearly possible and feasible. The amount of effort that goes into writing these algorithms is certainly going to be far less than the effort that would be required to re-write your UIs for those platforms.
Conclusion
There simply is no quicker and more productive way to create user interfaces! As I mention in the introduction, I challenge anyone to create WinForms or even web UIs faster than this and provide a similar level of flexibility. It can’t be done. This is the power of WPF and Silverlight at its best! Focus on the business benefits only by creating mainly view models and relatively simple views. People often claim WPF and Silverlight are hard to learn. Well, creating the styles and layout strategies is an advanced task, but with these pieces in place you can use developers that are much more junior than any WinForms or ASP.NET project would allow.
Yes, you have to create your layout strategy and your styles. But you do that once (or at least only once per platform you would like to support) and be done with it. It is a very small task in the big picture. (And you can use the code provided here as a starting point.) Using these techniques, you should be able to shorten your development cycle drastically and at the same time improve quality and reusability. You can move your UIs to new platforms. You even gain abilities I can’t discuss in detail in this article due to space constraints. For instance, it becomes very easy to create these views dynamically based on the underlying data models or customizable fields. It also gets much easier to create a UI editor for your user, in case you want to include such a feature in your application. And I would not be surprised if you could think of uses I have never thought of myself.
People often ask me why they should use WPF or Silverlight. “What is the business benefit?” they say. Or “Why would I put the extra effort into WPF when I am familiar with WinForms?” The simple answer is that you can build better and more professional UIs in less time and at a higher level of quality, and even reuse it widely and for a long time to come. I would be hard-pressed to name a lot of other technologies that provide such a wide range of business reasons for adoption at such a low entry barrier. There is a very large list of benefits and reasons that make WPF and Silverlight a great choice. But just the benefits demonstrated in this article alone should convince any CFO or CTO to take these technologies seriously. | https://www.codemag.com/article/1011071 | CC-MAIN-2019-13 | refinedweb | 7,354 | 60.85 |
I am trying to grasp the meaning of the different types of brackets/parentheses/braces used in C# and what the rules are or purpose of using different types in different situations.
Currently I have no trouble using the brackets/parentheses/braces but I feel as though I use them on a case-by-case basis without really grasping "why" I am using them and I would like to get an understanding of this.
Just for example, these are instances where I would use the brackets/parentheses/braces:
if(Row.Cells[0].Value != null)
{
listThings = new List<thing>();
//More code here
}
Curly brackets
{} are used to group statements. In your case, the
then clause of a standard
if - then statement is wrapped in
{} to group the statements together.
Square brackets
[] are used for arrays, indexers, and attributes.
cells[0] means "Cell with index of 0", which in a more practical sense would mean "first cell".
Parentheses
() are used to specify casts or type conversions:
double x = 1234.7; int a; a = (int)x; // Cast double to int
As well as invoking methods or delegates:
TestMethod();
Edit: As mentioned by itsme86 in the comments,
() are also used for iteration statements like
for(),
foreach(), etc, and namespace keywords like
using(), etc.
Angle Brackets
<> are used to specify a type argument.
listThings = new List<thing>(); specifies a list of type
thing | https://codedump.io/share/fxwcGc1v0Zju/1/what-is-the-meaning-of-the-different-types-of-bracketsparenthesesbraces-used-in-c | CC-MAIN-2018-05 | refinedweb | 228 | 60.45 |
DLOPEN(3) Linux Programmer's Manual DLOPEN(3)
dlclose, dlopen, dlmopen - open and close a shared object
#include <dlfcn.h> void *dlopen(const char *filename, int flags); int dlclose(void *handle); #define _GNU_SOURCE #include <dlfcn.h> void *dlmopen (Lmid_t lmid, const char *filename, int flags); Link with -ldl... Any initialization returns (see below) are called just once. However, a subsequent dlopen() call that loads the same shared object with RTLD_NOW may force symbol resolution for a shared object earlier loaded with RTLD_LAZY. reference count drops to zero, then the object is unloaded..
On success, dlopen() and dlmopen() return a non-NULL handle for the loaded library.).
dlopen() and dlclose() are present in glibc 2.0 and later. dlmopen() first appeared in glibc 2.3.4.dlopen(), dlmopen(), dlclose() │ Thread safety │ MT-Safe │ └───────────────────────────────┴───────────────┴─────────┘
This page is part of release 5.00 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. Linux 2017-09-15 DLOPEN(3)
Pages that refer to this page: pldd(1), pmcd(1), mmap(2), uselib(2), vfork(2), atexit(3), backtrace(3), dladdr(3), dlerror(3), dlinfo(3), dl_iterate_phdr(3), dlsym(3), lttng-ust(3), lttng-ust-dl(3), pmda(3), rtld-audit(7), ld.so(8) | http://man7.org/linux/man-pages/man3/dlopen.3.html | CC-MAIN-2019-13 | refinedweb | 217 | 67.15 |
Obtain the MAC address in C Code
Hi All,
As per the title can anyone assist with a method of obtaining the Onions MAC address using C code?
Kind Regards,
UFD
- Maximilian Gerhardt
@UFD
One of the first Google results for "C linux get MAC":
Either cross-compile the program or compile locally. No special flags needed
getmac.c
#include <sys/types.h> #include <sys/socket.h> #include <sys/ioctl.h> #include <net/if.h> #include <stdio.h> #include <string.h> void mac_eth0(unsigned char MAC_str[13]) { #define HWADDR_len 6 int s,i; struct ifreq ifr; s = socket(AF_INET, SOCK_DGRAM, 0); strcpy(ifr.ifr_name, "br-wlan"); ioctl(s, SIOCGIFHWADDR, &ifr); for (i=0; i<HWADDR_len; i++) sprintf(&MAC_str[i*2],"%02X",((unsigned char*)ifr.ifr_hwaddr.sa_data)[i]); MAC_str[12]='\0'; } int main(int argc, char *argv[]) { unsigned char mac[13]; mac_eth0(mac); puts(mac); return 0; }
root@Omega-17FD:~# gcc -o getmac getmac.c root@Omega-17FD:~# ./getmac 40A36BC117FF
Notice how the printed MAC on the Onion does not match the output of the program. That's because the sticker is actually wrong - although my sticker reads
MAC: 40A36BC117FD, the MAC returned by
ifconfigis actually
40:A3:6B:C1:17:FFfor
br-wlan0and
42:A3:6B:01:17:FDfor
apcli0.
- Chris Stratton
@Maximilian-Gerhardt said in Obtain the MAC address in C Code:
Notice how the printed MAC on the Onion does not match the output of the program. That's because the sticker is actually wrong
The sticker probably isn't wrong. The board has two network interfaces, and as a result it has two MAC addresses.
They're related by a simple numerical substitution rule in one of the startup scripts.
is actually 40:A3:6B:C1:17:FF for br-wlan0
The above is the formally assigned address
and 42:A3:6B:01:17:FD for apcli0
Whiel that is a self-assigned address derived from the above via a substitution rule. Note the "2" in the first octet, which indicates it is a "locally administered" (vs unique-in-the-world) MAC address.
As for why the lowest octet in br-wlan0 is FF and not FD as assigned, that might be some sort of bug or odd decision in the startup code - seems like it wouldn't be a sequencing mistake at the factory, or the locally administered address would show it too.
If you poke around in the first few MTD partitions you can probably find what is actually saved in the flash chip.
Thanks @Maximilian-Gerhardt
I just changed "br-wlan" to "ra0" as this interface has a MAC value consistent with the sticker. | http://community.onion.io/topic/2441/obtain-the-mac-address-in-c-code | CC-MAIN-2018-51 | refinedweb | 440 | 63.9 |
1.3 ! jdf 1: **Contents** ! 2: ! 3: [[!toc levels=3]] ! 4: 1.1 jdf 5: # The Domain Name System 6: 1.2 jdf 1.1 jdf 12: (DNSRD) at [](). 13: 14: ## DNS Background and Concepts 15: 1.2 jdf 1.1 jdf 22: number of additional implementations available for many platforms. 23: 24: ### Naming Services 25: 1.2 jdf 26: Naming services are used to provide a mapping between textual names and 27: configuration data of some form. A *nameserver* maintains this mapping, and 1.1 jdf 28: clients request the nameserver to *resolve* a name into its attached data. 29: 1.2 jdf 30: The reader should have a good understanding of basic hosts to IP address mapping 1.1 jdf 31: and IP address class specifications, see 32: [[Name Service Concepts|guide/net-intro#nsconcepts]]. 33: 1.2 jdf 34: In the case of the DNS, the configuration data bound to a name is in the form of 35: standard *Resource Records* (RRs). These textual names conform to certain 1.1 jdf 36: structural conventions. 37: 38: ### The DNS namespace 39: 1.2 jdf 40: The DNS presents a hierarchical name space, much like a UNIX filesystem, 1.1 jdf 41: pictured as an inverted tree with the *root* at the top. 42: 1.2 jdf 43: TOP-LEVEL .org 44: | 45: MID-LEVEL .diverge.org 46: ______________________|________________________ 47: | | | 1.1 jdf 48: BOTTOM-LEVEL strider.diverge.org samwise.diverge.org wormtongue.diverge.org 49: 1.2 jdf 1.1 jdf 55: "net2.diverge.org" and one in "net1.diverge.org". 56: 1.2 jdf 1.1 jdf 62: of subordinate Domain Names (or both, or something else). 63: 1.2 jdf 64: Unlike most filesystem naming schemes, however, Domain Names are written with 65: the innermost name on the left, and progressively higher-level domains to the 66: right, all the way up to the root directory if necessary. The separator used 1.1 jdf 67: when writing Domain Names is a period, ".". 68: 1.2 jdf 1.1 jdf 80: required as configuration parameters in some circumstances. 81: 1.2 jdf 82: On the Internet, there are some established conventions for the names of the 83: first few levels of the tree, at which point the hierarchy reaches the level of 84: an individual organisation. This organisation is responsible for establishing 1.1 jdf 85: and maintaining conventions further down the tree, within its own domain. 86: 87: ### Resource Records 88: 1.2 jdf 1.1 jdf 93: even multiple records of the same type. 94: 95: #### Common DNS Resource Records 96: 1.2 jdf 97: * *A: Address* -- This record contains the numerical IP address associated with 1.1 jdf 98: the name. 99: 1.2 jdf 1.1 jdf 105: to be bound to the same name. 106: 1.2 jdf 107: It is common for these records to be used to point to hosts providing a 108: particular service, such as an FTP or HTTP server. If the service must be 109: moved to another host, the alias can be changed, and the same name will reach 1.1 jdf 110: the new host. 111: 1.2 jdf 112: * *PTR: Pointer* -- This record contains a textual name. These records are 113: bound to names built in a special way from numerical IP addresses, and are 114: used to provide a reverse mapping from an IP address to a textual name. This 1.1 jdf 115: is described in more detail in [[Reverse Resolution|guide/dns#bg-reverse]]. 116: 1.2 jdf 1.1 jdf 122: [[Delegation|guide/dns#bg-delegation]]. 123: 1.2 jdf 1.1 jdf 130: workstation. 131: 1.2 jdf 132: * *HINFO: Host Information* -- Contains two strings, intended for use to 133: describe the host hardware and operating system platform. There are defined 134: strings to use for some systems, but their use is not enforced. Some sites, 1.1 jdf 135: because of security considerations, do not publicise this information. 136: 1.2 jdf 137: * *TXT: Text* -- A free-form text field, sometimes used as a comment field, 138: sometimes overlaid with site-specific additional meaning to be interpreted by 1.1 jdf 139: local conventions. 140: 1.2 jdf 1.1 jdf 145: caching of the results of DNS queries. 146: 147: ### Delegation 148: 1.2 jdf 1.1 jdf 156: below that point, excluding names below any subsequent delegations. 157: 1.2 jdf 1.1 jdf 164: to itself, splitting the domain into several zones kept on the same server. 165: 166: ### Delegation to multiple servers 167: 1.2 jdf 1.1 jdf 176: narrowing their search. This is occasionally called *walking the tree*. 177: 1.2 jdf 1.1 jdf 183: root nameservers. 184: 185: ### Secondaries, Caching, and the SOA record 186: 1.2 jdf 1.1 jdf 195: authoritative for several zones. 196: 1.2 jdf 197: When nameservers receive responses to queries, they can *cache* the results. 198: This has a significant beneficial impact on the speed of queries, the query load 199: on high-level nameservers, and network utilisation. It is also a major 1.1 jdf 200: contributor to the memory usage of the nameserver process. 201: 1.2 jdf 202: There are a number of parameters that are important to maintaining consistency 203: amongst the secondaries and caches. The values for these parameters for a 1.1 jdf 204: particular domain zone file are stored in the SOA record. These fields are: 205: 206: #### Fields of the SOA Record 207: 1.2 jdf 208: * *Serial* -- A serial number for the zone file. This should be incremented any 209: time the data in the domain is changed. When a secondary wants to check if 210: its data is up-to-date, it checks the serial number on the primary's SOA 1.1 jdf 211: record. 212: 1.2 jdf 213: * *Refresh* -- A time, in seconds, specifying how often the secondary should 214: check the serial number on the primary, and start a new transfer if the 1.1 jdf 215: primary has newer data. 216: 1.2 jdf 217: * *Retry* -- If a secondary fails to connect to the primary when the refresh 218: time has elapsed (for example, if the host is down), this value specifies, in 1.1 jdf 219: seconds, how often the connection should be retried. 220: 1.2 jdf 221: * *Expire* -- If the retries fail to reach the primary within this number of 222: seconds, the secondary destroys its copies of the zone data file(s), and 223: stops answering requests for the domain. This stops very old and potentially 1.1 jdf 224: inaccurate data from remaining in circulation. 225: 1.2 jdf 226: * *TTL* -- This field specifies a time, in seconds, that the resource records 227: in this zone should remain valid in the caches of other nameservers. If the 228: data is volatile, this value should be short. TTL is a commonly-used acronym, 1.1 jdf 229: that stands for "Time To Live". 230: 231: ### Name Resolution 232: 1.2 jdf 233: DNS clients are configured with the addresses of DNS servers. Usually, these are 234: servers which are authoritative for the domain of which they are a member. All 235: requests for name resolution start with a request to one of these local servers. 1.1 jdf 236: DNS queries can be of two forms: 237: 1.2 jdf 1.1 jdf 244: send its request to one of these servers. 245: 1.2 jdf 1.1 jdf 251: negative) and returned to the client. 252: 1.2 jdf 1.1 jdf 259: single query (which can then be cached), rather than reply with referrals. 260: 261: ### Reverse Resolution 262: 1.2 jdf 263: The DNS provides resolution from a textual name to a resource record, such as an 264: A record with an IP address. It does not provide a means, other than exhaustive 265: search, to match in the opposite direction; there is no mechanism to ask which 1.1 jdf 266: name is bound to a particular RR. 267: 1.2 jdf*, 1.1 jdf 274: despite the inaccurate implications of the term. 275: 276: The manner in which this is achieved is as follows: 277: 1.2 jdf 278: * A normal domain name is reserved and defined to be for the purpose of mapping 279: IP addresses. The domain name used is `in-addr.arpa.` which shows the 280: historical origins of the Internet in the US Government's Defence Advanced 1.1 jdf 281: Research Projects Agency's funding program. 282: 1.2 jdf 1.1 jdf 290: places higher level domains on the right of the name. 291: 1.2 jdf 1.1 jdf 298: to the nameserver asking for a PTR record bound to the generated name. 1.2 jdf 299: 1.1 jdf 300: * The PTR record, if found, will contain the FQDN of a host. 301: 1.2 jdf 1.1 jdf 308: record bound to it which contains the same IP address. 309: 1.2 jdf 310: While there is no such restriction within the DNS, some application server 311: programs or network libraries will reject connections from hosts that do not 1.1 jdf 312: satisfy the following test: 313: 1.2 jdf 314: * the state information included with an incoming connection includes the IP 1.1 jdf 315: address of the source of the request. 316: 317: * a PTR lookup is done to obtain an FQDN of the host making the connection 318: 1.2 jdf 319: * an A lookup is then done on the returned name, and the connection rejected if 1.1 jdf 320: the source IP address is not listed amongst the A records that get returned. 321: 1.2 jdf 322: This is done as a security precaution, to help detect and prevent malicious 323: sites impersonating other sites by configuring their own PTR records to return 1.1 jdf 324: the names of hosts belonging to another organisation. 325: 326: ## The DNS Files 327: 1.2 jdf 328: Now let's look at actually setting up a small DNS enabled network. We will 329: continue to use the examples mentioned in [Chapter 24, *Setting up TCP/IP on 330: NetBSD in practice*](chap-net-practice.html "Chapter 24. Setting up TCP/IP on 1.1 jdf 331: NetBSD in practice"), i.e. we assume that: 332: 333: * Our IP networking is working correctly 334: * We have IPNAT working correctly 335: * Currently all hosts use the ISP for DNS 336: 1.2 jdf 1.1 jdf: 1.2 jdf 352: This is not exactly a huge network, but it is worth noting that the same rules 1.1 jdf 353: apply for larger networks as we discuss in the context of this section. 354: 1.2 jdf 1.1 jdf 359: issues which are left out here. 360: 1.2 jdf 361: The NetBSD operating system provides a set of config files for you to use for 362: setting up DNS. Along with a default `/etc/named.conf`, the following files are 1.1 jdf 363: stored in the `/etc/namedb` directory: 364: 365: * `localhost` 366: * `127` 367: * `loopback.v6` 368: * `root.cache` 369: 1.2 jdf 370: You will see modified versions of these files in this section, and I strongly 1.1 jdf 371: suggest making a backup copy of the original files for reference purposes. 372: 1.2 jdf 373: *Note*: The examples in this chapter refer to BIND major version 8, however, it 374: should be noted that format of the name database and other config files are 375: almost 100% compatible between version. The only difference I noticed was that 1.1 jdf 376: the `$TTL` information was not required. 377: 378: ### /etc/named.conf 379: 1.2 jdf 380: The first file we want to look at is `/etc/named.conf`. This file is the config 381: file for bind (hence the catchy name). Setting up system like the one we are 1.1 jdf: 1.2 jdf 1.1 jdf 428: the case on most systems. 429: 1.2 jdf, 1.1 jdf 434: which our server couldn't resolve itself. 435: 1.2 jdf 436: Looks like a pretty big mess, upon closer examination it is revealed that many 437: of the lines in each section are somewhat redundant. So we should only have to 1.1 jdf 438: explain them a few times. 439: 440: Lets go through the sections of `named.conf`: 441: 442: #### options 443: 1.2 jdf 444: This section defines some global parameters, most noticeable is the location of 445: the DNS tables, on this particular system, they will be put in `/etc/namedb` as 1.1 jdf 446: indicated by the "directory" option. 447: 448: Following are the rest of the params: 449: 1.2 jdf 450: * `allow-transfer` -- This option lists which remote DNS servers acting as 451: secondaries are allowed to do zone transfers, i.e. are allowed to read all 452: DNS data at once. For privacy reasons, this should be restricted to secondary 1.1 jdf 453: DNS servers only. 454: 1.2 jdf 455: * `allow-query` -- This option defines hosts from what network may query this 456: name server at all. Restricting queries only to the local network 457: (192.168.1.0/24) prevents queries arriving on the DNS server's external 1.1 jdf 458: interface, and prevent possible privacy issues. 459: 1.2 jdf 460: * `listen-on port` -- This option defined the port and associated IP addresses 461: this server will run 462: [named(8)]() 463: on. Again, the "external" interface is not listened here, to prevent queries 1.1 jdf 464: getting received from "outside". 465: 1.2 jdf 1.1 jdf 472: highlight just one of their records: 473: 474: #### zone diverge.org 475: 1.2 jdf 476: * `type` -- The type of a zone is usually of type "master" in all cases except 477: for the root zone `.` and for zones that a secondary (backup) service is 1.1 jdf 478: provided - the type obviously is "secondary" in the latter case. 479: 1.2 jdf 480: * `notify` -- Do you want to send out notifications to secondaries when your 1.1 jdf 481: zone changes? Obviously not in this setup, so this is set to "no". 482: 1.2 jdf 483: * `file` -- This option sets the filename in our `/etc/namedb` directory where 484: records about this particular zone may be found. For the "diverge.org" zone, 1.1 jdf 485: the file `/etc/namedb/diverge.org` is used. 486: 487: ### /etc/namedb/localhost 488: 1.2 jdf 489: For the most part, the zone files look quite similar, however, each one does 1.1 jdf: 1.2 jdf 505: * *Line 1*: This is the Time To Live for lookups, which defines how long other 506: DNS servers will cache that value before discarding it. This value is 1.1 jdf 507: generally the same in all the files. 508: 1.2 jdf 1.1 jdf 517: "jrf.diverge.org."). 518: 1.2 jdf 1.1 jdf 524: the serial number. 525: 1.2 jdf 526: * *Line 4*: This is the refresh rate of the server, in this file it is set to 1.1 jdf 527: once every 8 hours. 528: 529: * *Line 5*: The retry rate. 530: 531: * *Line 6*: Lookup expiry. 532: 533: * *Line 7*: The minimum Time To Live. 534: 1.2 jdf, 1.1 jdf 538: i.e. "diverge.org") is, well, "localhost". 539: 1.2 jdf 540: * *Line 9*: This is the localhost entry, which uses an "A" resource record to 541: indicate that the name "localhost" should be resolved into the IP-address 1.1 jdf 542: 127.0.0.1 for IPv4 queries (which specifically ask for the "A" record). 543: 1.2 jdf 544: * *Line 10*: This line is the IPv6 entry, which returns ::1 when someone asks 545: for an IPv6-address (by specifically asking for the AAAA record) of 1.1 jdf 546: "localhost.". 547: 548: ### /etc/namedb/zone.127.0.0 549: 1.2 jdf 550: This is the reverse lookup file (or zone) to resolve the special IP address 1.1 jdf: 1.2 jdf 1.1 jdf 570: "1.0.0.127.in-addr.arpa" is queried, which is what is defined in that line. 571: 572: ### /etc/namedb/diverge.org 573: 1.2 jdf 574: This zone file is populated by records for all of our hosts. Here is what it 1.1 jdf: 1.2 jdf 592: There is a lot of new stuff here, so lets just look over each line that is new 1.1 jdf 593: here: 594: 1.2 jdf 1.1 jdf 598: here is if "strider" cannot handle the mail, then "samwise" will. 599: 1.2 jdf 600: * *Line 11*: CNAME stands for canonical name, or an alias for an existing 601: hostname, which must have an A record. So we have aliased `` 1.1 jdf 602: to `samwise.diverge.org`. 603: 1.2 jdf 604: The rest of the records are simply mappings of IP address to a full name (A 1.1 jdf 605: records). 606: 607: ### /etc/namedb/1.168.192 608: 1.2 jdf, 1.1 jdf: 1.2 jdf 629: This file contains a list of root name servers for your server to query when it 630: gets requests outside of its own domain that it cannot answer itself. Here are 1.1 jdf: 1.2 jdf 666: This file can be obtained from ISC at <> and usually comes 667: with a distribution of BIND. A `root.cache` file is included in the NetBSD 1.1 jdf 668: operating system's "etc" set. 669: 1.2 jdf 670: This section has described the most important files and settings for a DNS 671: server. Please see the BIND documentation in `/usr/src/dist/bind/doc/bog` and 672: [named.conf(5)]() 1.1 jdf 673: for more information. 674: 675: ## Using DNS 676: 1.2 jdf 677: In this section we will look at how to get DNS going and setup "strider" to use 1.1 jdf 678: its own DNS services. 679: 1.2 jdf 680: Setting up named to start automatically is quite simple. In `/etc/rc.conf` 681: simply set `named=yes`. Additional options can be specified in `named_flags`, 682: for example, I like to use `-g nogroup -u nobody`, so a non-root account runs 1.1 jdf 683: the "named" process. 684: 1.2 jdf [ndc(8)]() 690: and 691: [named.conf(5)]() 692: man pages for more details on setting up communication channels between "ndc" 1.1 jdf 693: and the "named" process. 694: 1.2 jdf 1.1 jdf 702: `nsswitch.conf`: 703: 704: ... 705: group_compat: nis 706: hosts: files dns 707: netgroup: files [notfound=return] nis 708: ... 709: 1.2 jdf 710: The line we are interested in is the "hosts" line. "files" means the system uses 711: the `/etc/hosts` file first to determine ip to name translation, and if it can't 1.1 jdf 712: find an entry, it will try DNS. 713: 1.2 jdf 714: The next file to look at is `/etc/resolv.conf`, which is used to configure DNS 715: lookups ("resolution") on the client side. The format is pretty self explanatory 1.1 jdf 716: but we will go over it anyway: 717: 718: domain diverge.org 719: search diverge.org 720: nameserver 192.168.1.1 721: 1.2 jdf 1.1 jdf 727: be used to resolve DNS queries. 728: 729: To test our nameserver we can use several commands, for example: 730: 731: # host sam 732: sam.diverge.org has address 192.168.1.2 733: 1.2 jdf 734: As can be seen, the domain was appended automatically here, using the value from 1.1 jdf: 1.2 jdf 749: Other commands for debugging DNS besides 750: [host(1)]() are 751: [nslookup(8)]() 1.1 jdf 752: and 1.2 jdf 753: [dig(1)](). Note 1.1 jdf 754: that 755: [ping(8)]() 1.2 jdf 756: is *not* useful for debugging DNS, as it will use whatever is configured in 1.1 jdf 757: `/etc/nsswitch.conf` to do the name-lookup. 758: 1.2 jdf 759: At this point the server is configured properly. The procedure for setting up 760: the client hosts are easier, you only need to setup `/etc/nsswitch.conf` and 1.1 jdf 761: `/etc/resolv.conf` to the same values as on the server. 762: 763: ## Setting up a caching only name server 764: 1.2 jdf 1.1 jdf 770: necessary to use the already known `/etc/hosts` file. 771: 1.2 jdf 1.1 jdf: 1.2 jdf 786: Now that the server is running we can test it using the 787: [nslookup(8)]() 1.1 jdf: 1.2 jdf 815: As you've probably noticed, the address is the same, but the message 816: `Non-authoritative answer` has appeared. This message indicates that the answer 817: is not coming from an authoritative server for the domain NetBSD.org but from 1.1 jdf 818: the cache of our own server. 819: 820: The results of this first test confirm that the server is working correctly. 821: 1.2 jdf 822: We can also try the 823: [host(1)]() and 824: [dig(1)]() commands, 1.1 jdf 1.2 jdf 853: ;; MSG SIZE sent: 32 rcvd: 175 1.1 jdf 854: 1.2 jdf 855: As you can see 856: [dig(1)]() gives 857: quite a bit of output, the expected answer can be found in the "ANSWER SECTION". 1.1 jdf 858: The other data given may be of interest when debugging DNS problems. 859: | https://wiki.netbsd.org/cgi-bin/cvsweb/wikisrc/guide/dns.mdwn?annotate=1.3 | CC-MAIN-2017-17 | refinedweb | 3,668 | 78.04 |
How to sum all the values in a dictionary?
In Python 2 you can avoid making a temporary copy of all the values by using the
itervalues() dictionary method, which returns an iterator of the dictionary's keys:
sum(d.itervalues())
In Python 3 you can just use
d.values() because that method was changed to do that (and
itervalues() was removed since it was no longer needed).
To make it easier to write version independent code which always iterates over the values of the dictionary's keys, a utility function can be helpful:
import sysdef itervalues(d): return iter(getattr(d, ('itervalues', 'values')[sys.version_info[0]>2])())sum(itervalues(d))
This is essentially what Benjamin Peterson's
six module does.
Sure there is. Here is a way to sum the values of a dictionary.
'key1':1,'key2':14,'key3':47}sum(d.values())62d = { | https://codehunter.cc/a/python/how-to-sum-all-the-values-in-a-dictionary | CC-MAIN-2022-21 | refinedweb | 145 | 56.25 |
Getting started
With Microsoft Expression Encoder, you can work with audio and video files by using the Expression Encoder object model (OM), which is based on the Microsoft .NET Object Model Framework. To work with the Expression Encoder OM, you must have Expression Encoder, and we recommend that you use Microsoft Visual Studio 2010 for coding.
You can access most Expression Encoder features through the OM. The OM is designed to make the functionality of Expression Encoder available without having to write much code. Using the OM, you can use different job types to encode your media: Transcoding, Live Broadcasting, and Screen Capture.
In Visual Studio, click Project, and then click Add Reference.
In the Add Reference dialog box, click the .NET tab at the top.
Press and hold the CTRL key, and then click Microsoft.Expression.Encoder, Microsoft.Expression.Encoder.Api2, Microsoft.Expression.Encoder.Types, and Microsoft.Expression.Encoder.Utilities.
Click OK.
Now that you have added the references to the Expression Encoder assemblies, you are ready to start coding. The following C# code sample uses the Expression Encoder OM to create a job, add a video file to the job, and then encode the video.
Running the Simple example
This example creates a job, imports a media item, encodes that item with default presets, and saves the job to a local folder. The application displays its progress on the screen as it encodes.
If you follow the comments in the code, you can see the outline of the steps for encoding a video by using C# and the Expression Encoder OM. The following code has six steps:
Identify the media sources that you want to process.
Create a job to process the media sources, and then add the media sources.
Identify the location for the output.
Optionally, add a progress callback function to view the encoding progress.
Execute the project.
Clean up the job.
using Microsoft.Expression.Encoder; static void Main(string[] args) { MediaItem mediaItem = new MediaItem(@"C:\videoInput\video.wmv"); //Creates job and media item for the video to be encoded Job job = new Job(); job.MediaItems.Add(mediaItem); //Sets output directory job.OutputDirectory = @"C:\videoOutput"; //Sets up progress callback function job.EncodeProgress += new EventHandler<EncodeProgressEventArgs>(OnProgress); //Encodes Console.WriteLine("Encoding…"); job.Encode(); Console.WriteLine("Finished encoding."); job.Dispose(); } static void OnProgress(object sender, EncodeProgressEventArgs e) { Console.Write("\b\b\b\b\b\b\b"); Console.Write("{0:F2}%", e.Progress); }
Explaining the Simple example you can now use to locate a video or audio file, extract relevant information about the file, and encode it.
Locate the following code.
After the last curly brace, add the following declaration, replacing the generic path with the path of the video that you want to encode.
Now that you have declared the MediaItem class, your next steps are to create a job and then to add the media file to the job.
After the declaration, add the following code.
If you want to encode multiple items, you must declare multiple MediaItem classes and then repeat
job.MediaItems.Add(mediaItem); until you have added all the files that you want to encode. Each MediaItem class must have a unique name, such as mediaItem1 or mediaItem2. Before you begin encoding the files, set the directory in which you the following:). After the file is encoded, clean up the job object to release resources.
Running the Live Broadcasting example
This example sets up a LiveJob, adds a media source to it, and streams that source through a broadcast port until the user stops the broadcast.
If you follow the comments in the code, you can see the outline of the steps for encoding a video by using C# and the Expression Encoder OM. The following code has seven steps:
Create a job to process the media sources.
Create and add a source for encoding.
Set the playback mode.
Activate the source.
Set the publishing format.
Encode.
Stop encoding on user prompt.
using Microsoft.Expression.Encoder; using Microsoft.Expression.Encoder.Live; namespace Live { class Program { static void Main(string[] args) { // Creates a new LiveJob. LiveJobs are IDisposable objects. With the using statement, // the clean-up is handled automatically. using (LiveJob job = new LiveJob()) { // Creates file source for encoding LiveFileSource fileSource = job.AddFileSource(@"C:\videoInput\video.wmv"); // Sets playback to loop on reaching the end of the file fileSource.PlaybackMode = FileSourcePlaybackMode.Loop; // Sets this source as the current active one job.ActivateSource(fileSource); // Creates the publishing format for the job PullBroadcastPublishFormat format = new PullBroadcastPublishFormat(); format.BroadcastPort = 8080; // Adds the publishing format to the job job.PublishFormats.Add(format); // Starts encoding job.StartEncoding(); Console.Write("Press 'x' to stop streaming…"); while (Console.ReadKey(true).Key != ConsoleKey.X) ; Console.WriteLine("Streaming stopped."); job.StopEncoding(); } } } }
Explaining the Live Broadcasting example
Just as in the last example, you have to add using statements to declare that the Expression Encoder and Live namespaces are being used. At the top of the file that you just created, locate the other using statements, and then add using Microsoft.Expression.Encoder and using Microsoft.Expression.Encoder.Live. The code at the top of the page should now resemble the following.
Next, you have to create a new LiveJob instance. In this case, you are enclosing this in the using statement, which will handle cleaning up the job afterward. You could instead have declared a job and disposed of it as in the first example. In a LiveJob, you must create the job before you can create the source. Unlike with Transcoding jobs, all sources must be registered through the job, in part because the job determines the publishing format for all items in that job.
Add these lines inside the curly braces under the
Main(string[] args) method.
You have now created a job and added a source to the job. The LiveFileSource variable fileSource gives you access to information about that media item in addition to setting some actions. Just as with a Transcoding job, all LiveFileSource variables should have unique names.
Now that you have added a source, you can set what action to take after the media file completes playback.
The default action that a source takes when the playback of the file finishes is to pause on the last frame. The other options are to have the source continuously loop or jump to another file or device. In this example, you chose to have the source loop the item until you end your broadcast. However, if you want to display additional file-based or device-based sources during your broadcast, you could also set them to begin playing when another file finishes playing, as shown in the following example.
In the next step, you set the active source, which is the source that the job encodes first.
The last thing to do before you start encoding is to determine what type of output you want. There are three different types of publishing formats available in Live: Pull, FileArchive, and Push.
Broadcast (Pull)
This is the publishing type selected for the sample. Set the port that you are broadcasting from and the maximum number of other computers that can connect. The default maximum number of connections is ten.
Archive (FileArchive)
Archiving saves the encoded media to a physical disk on your computer or network. Remember to include the output extension, and make sure that it matches the type of encoding that you are performing. If you are encoding using a VC-1 codec, use the file name extension .wmv. If you are encoding using MP4, use the file name extension .mp4. If you are using Smooth Streaming, use the file name extension .ismv.
Publish
Publishing requires having a server set up with a publishing point already established. You must type the address provided by the server as the publishing point. If a user name and password are required, the format supports those also.
For more information about setting up publishing points, see the Expression Encoder User Guide.
In each of these cases, you add the publishing format to the job after setting the format's required properties. In this manner, you can add multiple publishing formats to a job. Note that using multiple formats can require more resources.
Finally, you start encoding. Although there is no graphic representation of the encoding progress, you can choose to halt encoding at any time by pressing the key indicated on a screen that displays during your broadcast session. Encoding occurs on a separate logic thread, so the displaying of the screen will not interrupt the encoding process.
Running the ScreenCapture example
This example records all the actions that occur within a specified rectangular section of the screen until the user prompts Expression Encoder to stop capturing. The sample saves this recording to a local folder. The following code has five steps:
Create a job to capture the action on the screen.
Create and set the size and coordinates of the capturing rectangle.
Set the output directory for the capture.
Start the capture.
Stop the capture at the user's prompt.
using Microsoft.Expression.Encoder.ScreenCapture; namespace MyEncoderApplication { class Program { static void Main(string[] args) { // Creates new job using (ScreenCaptureJob job = new ScreenCaptureJob()) { // Sets the top right coordinates of the capture rectangle int topRightX = 200; int topRightY = 200; // Sets the bottom left coordinates of the capture rectangle int BottomLeftX = topRightX + 300; int BottomLeftY = topRightY + 150; job.CaptureRectangle = new Rectangle(topRightX, topRightY, BottomLeftX, BottomLeftY); job.ShowFlashingBoundary = true; job.OutputPath = @"c:\output"; job.Start(); Console.WriteLine("Press 'x' to stop recording."); while (Console.ReadKey(true).Key != ConsoleKey.X) ; Console.WriteLine("Recording stopped."); job.Stop(); } } } }
Explaining the ScreenCapture example
As with the first two examples, you have to add using statements to declare that the Expression Encoder ScreenCapture namespace is being used. In addition, if you want to define a custom-capture rectangle, you have to use the Drawing namespace. At the top of the file that you just created, locate the other using statements, and then add using Microsoft.Expression.Encoder.ScreenCapture and System.Drawing, as shown in the following example.
Next, you create the job. Just as in the Live example, use a using statement.
Optionally, you can create a set of coordinates to capture a certain area of the screen. The default setting is to capture the full screen. In this case, you set the x-coordinate and the y-coordinate of the upper right and the coordinates of the lower left to create the rectangular capture range. This example uses the starting position in the second set of coordinates so that if you want to move the rectangle without changing the size, only one set of coordinates would have to be changed.
At this point, you can choose to display the boundary for the capture area. The only remaining step before capturing is to set the output path for the capture to be stored. | http://msdn.microsoft.com/en-us/library/cc761462(v=expression.40).aspx | CC-MAIN-2014-52 | refinedweb | 1,811 | 57.27 |
Solution architect: We should start concentrating more on frameworks!!
Dumb developer: Well, I think I know about the .NET Framework. You mean, something like that?
After reading this article, you'll be able to: may do this process based on a scheduler - because you need to send card data to banks a few times a day (once or twice) and pull the results back.
And to make matters more interesting, more banks will be added to your system in future.
Well, now you need to propose a solution. Interesting.. huh?
Let us see what all solutions are not good enough.
Let us see what all decisions are good design decisions.
So finally, let us decide to:
Now, you speak with some of your friends and they ask you.. "Oh dude! Why can't you make it a generic interfacing Framework so that we can re-use it?". "Quite a good idea" you feel .. "But man, how exactly will I make it a Framework?"
There we are . Let us see.
Basically, a framework is:
Solution architect 1: "The day we complete building frameworks for all human work flows happening in this world, solution architects may go out of job".
Solution architect 2: "No, we can still do we create a framework for solving the above design issues? Here are some final thoughts regarding the framework we are going to build:,
Well, this may ring the bell. You suddenly think.. "separating functionality using a plug-in based model is nothing new - and an existing Design Pattern should be in existence for this". Well, that is what (actually, some what ) the.
Altogether, the idea about the 'Provider pattern' is quite simple. There are various ways of implementation; here is a quite simple one:
public interface IMyInterface
System.Reflection
Much like this:
IPublisher
Init
Get
Put
PublisherFactory
FTPPublisher and SOAPPublisher are two plug-in classes which implement the IPublisher interface. KeyValueCollection is a set of name-value pairs, to pass some settings (like Host=myhost, Port=30).
FTPPublisher
SOAPPublisher
KeyValueCollection
Now the easiest part - implementing the framework (unfortunately, most people start thinking from this phase only).
Credits to Sandeep (sandeepkumar@sumerusolutions.com) for implementing the actual code, based on the design.
As we discussed, the interface definition is pretty simple. The type SettingCollection is synonymous to the KeyValue collection we discussed above.
SettingCollection
KeyValue
/// );
}
The publisher factory has a very simple static function, CreatePublisher.
CreatePublisher
The Publisher factory simply loads the type (i.e., our plug-in class) from the assembly, creates an object of it, and returns it. Looks so simple, huh? If the type we loaded is not an IPublisher, an error may occur at the line ipub=(IPublisher) pub because explicit casting may fail. This way, we make sure that all plug-ins we load should obey the IPublisher interface. It will also call the Init function to pass the parameters to the plug-in after loading it.
ipub=(IPublisher) pub
/// ;
}
}
}
If you remember, one of our requirements was to load the plug-in information from an XML file. The PublisherLoader class is exactly for that. This class is not there in the above UML diagram - pardon me, but it is still a part of the framework. It will load an XML configuration file, load each plug-in specified, instantiate it by calling PublisherFactory, and call other functions in the plug-in like Get() and Put(). Before we go to the class definition, here is some more meat regarding how to create your own XML configuration files and how to actually read them.
PublisherLoader
Get()
Put()
For example, does this work? Read on. How do we read a configuration file like the one above? Fortunately, it is very simple in .NET. The steps are:
AssemblyName
ClassName
Host
Port, but you will see, the Publishers element can contain more than one Publisher element. A Publisher element has four attributes - Name, AssemblyName, ClassName, and Version - and a Publisher element can contain more than one Setting element.
Publishers
Publisher
Setting
Name
Version
A Setting element has two attributes- a Key and a Value. This schema exactly describes the configuration document structure we need. Save the schema to a file named Publisher.xsd.
Key
Value
Now, to create a data structure out of our schema, I've several options. My favorite option is to use XsdObjectGenerator. It will generate a set of classes which are capable of representing my the attributes) to describe the elements in Publisher.xsd. The /n switch specifies the namespace for the newly created classes. And this creates a file Sumeru.Publisher.Framework.Data.cs with all the classes - please see the attached source code if required. There will be a Sumeru.Publisher.Framework.Data.Publishers class (if you remember, Publishers is the root element according to our schema), and we can use an object of this class to de-serialize an XML document.
Sumeru.Publisher.Framework.Data.Publishers
This part is pretty simple. We just create an object of, the Publishers element has a collection of Publisher elements, and each Publisher element has can see, we load the configuration, iterate each Publisher element, and pass the class name and assembly name to PublisherFactory for creating an IPublisher object. Then we call the functions Get() and Put(). If you notice, we are also passing the Setting collection of a Publisher element to PublisherFactory, and PublisherFactory will in-turn pass this Setting collection by calling the Init(..) function in a Publisher object during instantiation (see the code of PublisherFactory). We are almost done. Let us write a simple plug-in for our framework now.
Init(..)
To develop a plug in, I've done the following steps:
FTPPublisherPlugin
Here is the FTPPublisherPlugin class. It just implements all the functions defined in IPublisher. You may need to add your own code for putting and getting files to/from);
}
}
}
Now, let us develop a very simple Loader application which consumes our framework. It basically invokes the LoadPublisher function in the PublisherLoader class we created earlier. I created a form, with a text box which has the path to the XML configuration file.
LoadPublisher
The XML configuration file has information to load our newly written plug-in. I am can see, we are using the PublisherLoader class to read the configuration file and to load the plug-ins, as explained earlier. Now, if you click the Load button, the plug-ins will be loaded, and you may see the message boxes from the plug-in we developed. (Let us keep it simple
Here are some interesting design variations so that you can implement for practice, and for some additional brainstorming:
Hope you enjoyed it. I tried to keep it very simple. Some more information if you are health conscious::
You can visit my website at for more articles, projects, and source code. Also, you can view my tech-bits blog, and download my Open Source projects. If you have some time to read some insights, see my intuitions blog. Thanks a lot
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
if(pub is IPublisher)
{
ipub=(IPublisher) pub;
}
else
{
throw new Exception("This assembly class does not implement IPublishes interface.");
}
IPublisher ipub = pub as IPublisher;
if(ipub == null)
{
throw new ApplicationException("This assembly class does not implement IPublisher interface.");
}
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/17062/Learn-How-to-Build-a-Provider-Framework-With-an-Ea?fid=373671&df=90&mpp=25&noise=3&prof=False&sort=Position&view=Quick&spc=Relaxed | CC-MAIN-2016-18 | refinedweb | 1,253 | 56.25 |
Opened 7 years ago
Closed 6 years ago
#2377 closed enhancement (fixed)
Separate CSS stylesheet
Description
Actually, ProgressMeterMacro puts the required CSS styles into a wiki page every time the macro is called. Solve this by importing separate stylesheet for CSS styles used by macro.
Attachments (0)
Change History (4)
comment:1 follow-up: ↓ 2 Changed 6 years ago by qwp0
- Resolution set to wontfix
- Status changed from new to closed
comment:2 in reply to: ↑ 1 Changed 6 years ago by osimons
If you know any other way how to solve this, let me know ;-)
I know nothing about this plugin, just wanting to mention that adding a stylesheet from a macro is doable and supported - Trac does this itself for many purposes. Like if you add a {{{#!diff}}} processor (=macro), it will also add the stylesheet needed to support inline viewing of patches regardless of what handler it is in.
from trac.web.chrome import add_stylesheet add_stylesheet(formatter.req, 'myplugin/css/mysheet.css')
You also need to extend the ITemplateProvider interface to make a named htdocs location available so your sheet can be located (if not done already).
As an example, I also add a stylesheet in a macro that is part of the fullblogplugin.
comment:3 Changed 6 years ago by qwp0
- Resolution wontfix deleted
- Status changed from closed to reopened
Thanks for information and pointing at your plugin -- I have looked for a macro which includes separate stylesheets! I will look at this problem again, trying to get it done using your solution. Thanks!
comment:4 Changed 6 years ago by qwp0
- Resolution set to fixed
- Status changed from reopened to closed
(In [3298]) ProgressMeterMacro: Using separate CSS, fixes #2377 (v0.1)
This enhancement can't be done, AFAIK, because importing of separate CSS styles is possible just for plugins which define new URL handlers. And because this is _macro_ plugin, it is used within wiki pages -- but they (the wiki pages) have predefined handler (the wiki handler handle all '/wiki/*' requests) so we don't define new handler and because of that we can't import the separate CSS stylesheets.
If you know any other way how to solve this, let me know ;-) | http://trac-hacks.org/ticket/2377 | CC-MAIN-2014-35 | refinedweb | 368 | 54.46 |
Use external libraries
Where can I find documentation for using/importing other libraries on my LoPy.
I want to use the Paho MQTT Python library. Should I put it somewhere in the /flash over ftp?
I'm new to Pycom and the Python programming language but I'm an experienced developer in Java, C++, Arduino,... So if you can point me to any other valuable info to get me started please share!
Thx in advance!
Note that
/flashand
/flash/libare in
sys.pathby default. If you want to put your libraries somewhere else, don't for get to add that directory to your path in boot.py.
import sys sys.path.append('/flash/custom-dir')
I think that /flash/lib folder is also right and logical place for external libraries. ;o)
Library is just another python file. If you put library.py into the same folder as the main.py you can than import it in the main.py file like this:
import library
and use objects in the library:
library.ExampleClass() | https://forum.pycom.io/topic/247/use-external-libraries | CC-MAIN-2019-09 | refinedweb | 172 | 69.48 |
BUILD Day 3: My Summary
BUILD Day 3: My Summary
Join the DZone community and get the full member experience.Join For Free
No keynote today, but Scott Hanselman’s 8:30AM talk was a great replacement – he himself called it an “unkeynote”. Also, before the talk Scott looped some hilarious videos, such as MacBook Wheel from The Onion.
Scott Hanselman – One ASP.NET and the Cloud
This was really keynote material in that Scott highlighted some of the recent developments in what the cloud means for developers. He started by showing the Azure PowerShell cmdlets, which give you provisioning of websites or virtual machines by virtue of a few keystrokes. These are the “five computers” somewhere in the world – Azure, Amazon, and other cloud providers – which provide infinite compute capacity at your fingertips.
The next point Scott made is that the browser itself is a virtual machine – running JavaScript – which provides a plugin-less experience for very complex software: Commodore 64 and x86 emulators being a notable example of how the browser becomes a virtual machine.
Screenshot of the JSLinux effort, which runs a Linux system on top of an x86 emulator implemented entirely in JavaScript.
Instead of treating browsers like dumb terminals that wait for HTML to pour down from a powerful server, we can leverage the power of the user’s multiprocessor device with hardware-accelerated graphics if we run code inside the browser’s virtual machine. Some pretty cool demos with the D3 visualization and graphics library really drove the point home.
Scott couldn’t not mention the VanillaJS effort, which you should definitely check out. This is a 0 bytes download, giving you exceptional performance and a slew of great features ;-)
Some notable quotes from the talk:
“Furiously Googling with Bing”
“JavaScript is the assembly language of the web”
“JavaScript is an operating system”
Anders Hejlsberg – TypeScript
This talk was very educational although I already had a chance to fool around with TypeScript a little bit. The big problem TypeScript is designed to address is that of developing large-scale JavaScript applications. With the lack of modules, classes, clean inheritance and strong static typing, JavaScript makes large-scale development exceptionally hard, and tools can’t help with auto-completion, refactoring, compile-time errors and other goodies.
Unlike other efforts (such as Google’s Dart), TypeScript starts from pure JavaScript and adds support for arrow functions (lambdas), static typing, interfaces, classes, and modules. The whole thing is compiled down to readable JavaScript code that you can debug and inspect using the standard developer tools in any browser.
TypeScript source:
class Player { constructor(public name: string, public id: number = 0) { } play(game: string) { document.write(this.name + " is playing " + game); } } var p = new Player("Sasha"); p.play("Diablo");
JavaScript produced by the TypeScript compiler:
var Player = (function () { function Player(name, id) { if (typeof id === "undefined") { id = 0; } this.name = name; this.id = id; } Player.prototype.play = function (game) { document.write(this.name + " is playing " + game); }; return Player; })(); var p = new Player("Sasha"); p.play("Diablo");
With this static typing comes the power of the tools – in-browser playground or Visual Studio, you get auto-completion, error squigglies at compile-time, and refactoring support letting you keep your sanity when working on large JavaScript code bases (to quote: “at some point, JavaScript turns into write-only code”). Anders drove this point home by demonstrating a refactoring of the TypeScript compiler, which is itself written in TypeScript!
Anders then showed a bunch of TypeScript features, including classes, modules, interfaces, type inference, optional properties, accessor functions, static methods, default parameter values, inheritance, and many others. This is really not the same JavaScript you learned to hate!
import connect = module('connect'); import express = module('express'); var app = express.createServer(); app.get('/echo/:message', (req, res) => { res.end('Your message: ' + req.params.message); }); var port = process.env.port || 8080; app.listen(port);Using modules and arrow functions in Node
Finally, you don’t have to write TypeScript declarations for standard libraries. The TypeScript installation ships with type declarations for libraries like jQuery and jQuery UI, whereas WinRT type declarations are automatically generated from WinMD files.
If you haven’t tried TypeScript yet, there’s nothing to download to get started: head on to and start fooling around in the online playground.
Nathan Totten – JavaScript from Client to Cloud This was a Node talk, and I didn’t expect to learn anything new but mostly to see how Microsoft perceives Node and its accompanying Azure support. Nathan provided a quick overview of Node, the single-threaded event loop model, and discussed some scenarios in which to use Node, such as lightweight server-side applications and web services. Perhaps more importantly, Nathan discussed when to not use Node – for CRUD solutions over data (Rails or ASP.NET are better), and for CPU-bound processing (because of the single-threaded model). To conclude the overview part of the talk, Nathan mentioned iisnode, which is what the Azure Web Sites infrastructure uses to host Node.js apps on Azure. iisnode can scale node.exe processors to multicore machines, and now has also WebSockets support with Windows Server 2012.Then, Nathan turned to the demo. In the demo, he started with a local website that uses socket.io for WebSockets, mentioned some developer tools like node-supervisor and node-inspector, and then uploaded the Node app to Azure and developed a simple Windows 8 JavaScript app connected to the same server. At the very end of the talk, he even managed to squeeze in a quick demo of the azure Node module, which he used to access Azure Storage from the on-premises Node app.
All in all, this has been a nice overview of Node and Windows Azure in just sixty minutes. I will definitely be applying some lessons from this talk to my own presentations ;-)
I am posting short updates and links on Twitter as well as on this blog. You can follow me: @goldshtn
Opinions expressed by DZone contributors are their own.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/build-day-3-my-summary | CC-MAIN-2018-30 | refinedweb | 1,020 | 52.7 |
I would like the preprocessor to run through my files (not to be confused with program flow) and to substitute all occurences of some #define with the next in sequence, starting from initial value.
So the question has nothing to do with CPU resources - just purely dumb precompiler's txt job.
Example 1 (avr asm, works):
.set my_stamp =6 ; initial value .macro increment_stamp .if(my_stamp>0x5A) .error "Too many stamps" .else .set my_stamp (my_stamp+1) .endif .endmacro ;increment_stamp
Whenever I need a stamp, I just insert it into the code and increment it later if necessary:
ldi temp,my_stamp ; ldi temp,6 increment_stamp ; .set my_stamp = 7 ... subi pointer,my_stamp ;subi pointer,7 .org my_stamp*7 ; .org 49 increment_stamp ; .set my_stamp =8 ...
Example 2 pseudocode in C:
#define OCDR_STAMP 1 //Initial value #define DEBUG_PRINT(report_string) \ #ifdef DEBUG #if(OCDR_STAMP > 0xFF) \ #error "Too many OCDR stamps" \ #else \ { \ //__attribute(section((.debug_str_section)) char my_string[]=report_string; \ OCDR=OCDR_STAMP; \ #redefine OCDR_STAMP (OCDR_STAMP+1) \ } \ #endif \ #endif
So that such code worked as I expect it to:
... void Init_hardware(void){ Enable_LDO(); DEBUG_PRINT("LDO running"); //pushes 1 to OCDR Enable_PLL(); DEBUG_PRINT("PLL running"); //pushes 2 to OCDR USB_Init(); } int main(void){ DEBUG_PRINT("Entering main()"); //pushes 3 to OCDR Init_hardware(); DEBUG_PRINT("USB Initializatin completed"); //pushes 4 to OCDR PORTB=LED_ON; ...
Occurences of DEBUG_PRINT should also place some strings into some elf section to be read out by debugger later but never mind that now - just how to make a define increment/redefine inside of a macro in C?
Does this help:
But this is just an incrementing variable each time it is used and is not available for all C compilers (but as you see it is in GCC).
Cliff
PS I thought:
might allow you to initialise the counter but this just resulted in all 4 uses being 6.
Top
- Log in or register to post comments
(untested, possibly an intermediate step is needed for the concatenation)
Stefan Ernst
Top
- Log in or register to post comments
BTW as well as __COUNTER__ some Google results show __LINE__ being used on the basis that each line where it's used is going to have a unique number.
Top
- Log in or register to post comments
The __COUNTER__ seems to be working!
It always starts from 0, so the offset must be added but I will try sternst's trick later if I go that far..
I thought of some general idea of #define "variables", but for now __COUNTER__ helps in what I need actually.
As you can see each time I must check if __COUNTER__ didn't hit 0x100 which would mean an overflow on OCDR.
I cannot test the value of __COUNTER__ in the code, nor use its value more than once as this increments __COUNTER__.. But even when I use(wasting last write):
how to insert:
in a macro function (same as in a pseudocode example)? That # generates a compile time error..
OCDR accepts values from 0x00 to 0xFF so __func__, __FILE__, __LINE__ is good with assert() but not with OCDR/DWDR.
No RSTDISBL, no fun!
Top
- Log in or register to post comments
foo and bar will have consecutive values known at compiler-time.
In C++, they can even be used in constant expressions.
They cannot be used in preprocessor expressions.
Is it racist to discriminate against someone who changes species?
Top
- Log in or register to post comments
Some related ideas:
I have problems with several things in here:
A. I do not know how to pass a warning string from asm("") macro into .warning in case the count of debug strings is exceeded
B. How to send only low(debug_message_index)? LOW() macro does not work..
C. I do not want additional "ldi r16,0" in the code if a compiler already has some register==0 with appropriate value.. How to tell it to the compiler?
D. How to create new section in elf and put string there? Obviously I do not want to place those strings in .progmem
and other
No RSTDISBL, no fun!
Top
- Log in or register to post comments
Try lo8() for low()
See:
The compiler keeps 0 in R1.
Do it in C then see the .s file:
yields:
Top
- Log in or register to post comments
Ad. B: lo8 does the job, problem solved.
Ad. C: But I do not know which values it has in which registers - I would like to tell it:
"If you have 0x74 in r7 currently, be so kind and write it to OCDR".
What I did right now is asm volatile so even when it has 0 in r0, it is still pushing ldi temp,0 as I told it..
Ad. D:
But foo:
cannot be local. Unfortunately the debug macro must be local..
No RSTDISBL, no fun!
Top
- Log in or register to post comments
I'll defer to others but I don't know how (short of compile time simulation of the running code) the assembler could possibly know what values may be in any registers at any given time.
Clearly a local (unless 'static') cannot be located at a given address (or within a given section placed by the linker) because it's dynamically created and destroyed on the stack. So I'm afraid that's never going to be possible.
Top
- Log in or register to post comments
I am sure it is aware of r0. If so, it can also be aware of other values. Anyway, that "C" problem is a minor problem.
Now coming back to "D" - foo section. It must be local as all of those strings have the same name(which I do not need at all). But when I define those as static, "string" macro argument does not seem to be appended correctly:
and there is no .foo section in .s file (not mentioning it warns about unused report_str)
The strings are global and are not to be placed on the stack. Actually these are in the section which is neither allocable nor loadable.
The linker script is a problem close to "K" in here :)
No RSTDISBL, no fun!
Top
- Log in or register to post comments
Pages | http://www.avrfreaks.net/forum/redefine-stamp-stamp1?name=PNphpBB2&file=viewtopic&t=110301 | CC-MAIN-2014-42 | refinedweb | 1,023 | 72.76 |
I have the "feeling" that shrink-to-fit is somehopw broken. Printing URLs like or always draws over the right border...
I forgot the build info: 2002-06-13-08-trunk on Solaris and Linux. Trying to reduce the scale to 50% does not work either... ;-(
Created attachment 88459 [details] testcase The problem in the testcase is the "font-size" style set to 10pt. If that is removed it does STF fine. I think the issue here is that the style is cached or it font size isn't being recalculated.
Created attachment 89258 [details] [diff] [review] patch The FontSize needs to be scaled or zoom depending on whether we are printing/PP or in Galley mode. At this time they are mutually exclusive. I changed the helper function to be able to do zoom or scale depending on the above. The DocumentViewer calculates the scale and sets it into the PresContext if scaling is needed. The RuleNode then retreives the scale from the PresContext to scale the font size accordingly. The helper function also now supports un-doing the scale/zoom for where it is needed.
Comment on attachment 89258 [details] [diff] [review] patch r=dcone
At a quick glance (I haven't tested yet), I'm guessing that you're now going to be double-scaling any font sizes specified in px units. I think the correct solution to this problem is a little more complicated, and it will also serve as a usable solution for a "full zoom" feature. In particular, I think what you really want to do is give the document viewer (or pres context or whatever) getters and setters for both canonical-pixel-scale and zoom (or scaling or whatever). Then you want to give it a getter that returns the product of the two, which is what most callers of GetCanonicalPixelScale should use, except for those that are doing unit conversions from pixels to twips rather than those doing final rendering. Or something like that. I need to think it through in a little more detail, and look at the patch more closely.
Created attachment 89415 [details] better testcase
Created attachment 89416 [details] [diff] [review] better patch Yes, all I need to use is the CanonicalPixelScale from the DC, then I just need to know when NOT to do the scaling. No changes are needed in the PresCOntext or DV.
Created attachment 89699 [details] screenshot of attachment 89415 [details], printed, both before (above) and with (below) patchCreated attachment 89699 [details] screenshot of attachment 89415 [details], printed, both before (above) and with (below) patch
OK, a full review this time: At a syntactic complexity level the patch would be greatly simplified if you changed the the signature to: static float FontZoomFor(nsIPresContext* aPresContext) and then made changes to the callers that were like: fontData->mSize = fontData->mFont.size = - ZoomFont(mPresContext, fontData->mFont.size); + nscoord(fontData->mFont.size * FontZoomFor(mPresContext)); - nsCOMPtr<nsIDeviceContext> dc; - aPresContext->GetDeviceContext(getter_AddRefs(dc)); - float textZoom; - dc->GetTextZoom(textZoom); - nscoord parentSize = nscoord(aParentFont->mSize / textZoom); + nscoord parentSize = nscoord(aParentFont->mSize / GetZoomFor(aPresContext)); However, I'm still a bit worried about correctness. Using the canonical pixel scale seems like it could do weird things to printing, as opposed to print preview. I tested printing your testcase with and without the patch, and both sets of output are incorrect, but in different ways. I've attached, as attachment 89699 [details], a screenshot of attachment 89415 [details], printed, both before and after the patch. A more thorough testcase might be , and it's certainly important to test printing in addition to print preview since the canonical pixel scale becomes something other than 1. So what's wrong? The basic problem is that you want to do scaling while printing. We already have a bunch of things involved in scaling: 1. We have text zoom, which only scales text, so it's not sufficient or really even related. 2. We have the pixels to twips value. This defines the conversion between physical lengths (used internally) and a reference pixel, as described in section 4.3.2 of CSS2, which describes the meaning of a pixel unit in CSS or pixel-sizes in HTML or the intrinsic sizes of images. This prevents sizes specified in pixels from "shrinking" by a factor of 5 when going from a 120ppi screen display to a 600ppi printer. nsDeviceContextPS sets p2t so that a reference pixel is 1/72 of a point. On some interfaces pixels are called dev units. 3. We have the app units to dev units value. This is the same as the pixels to twips value, except that for printing, we use the value from the non-printing context. 4. We have the canonical pixel scale. This is typically the ratio of the previous two values. In other words, if the display is 96ppi and the reference pixel on the printer is 72ppi, then this value is ??? (which way?). [Before today, I seriously misunderstood what the canonical pixel scale value was. I'm still only basing my understanding on a quick reading of the code and not any testing to look at the actual values when printing.] Based on these descriptions, we have three values representing only two independent pieces of information: 1. the ratio of reference pixels and physical lengths, where the native units of the display are either one or the other 2. the ratio of those two things to twips. [Note that physical pixels aren't involved here at all. I used to think the canonical pixel scale involved physical pixels] These three values could represent a third thing, an overall scale, and there does need to be a third value for overall scale, since documents have lengths map to both reference pixels (px units, intrinsic sizes of images, HTML unitless numbers meaning pixels, default font sizes in prefs) and to physical lengths (CSS units for physical lengths, default font sizes for printing (I hope). However, this would require clear definitions of when all these values should be used, and I don't see those in nsIDeviceContext.h, and I somehow doubt they're used consistently across the app. So, that said, I'd like to see clear comments in nsIDeviceContext.h defining the model through which we can use these three values for scaling so that we define the "right" way to fix scaling bugs. (It would also be nice if we, at some point, renamed some of the functions to use consistent terminology (pixels and dev units; twips and app units).) Without such rules, it's a little hard to know which changes are correct and which aren't, since the values don't seem to be the logical ones to pick if one were designing a system that allowed for such conversions. (For example, in an ideal system, I might want to have three constants for: 1. converting reference pixels to app units (which would be points for printing or fractions of pixels for display) 2. converting physical lengths to app units, and 3. an overall scale.) The screenshot I attached also seems to show bugs in scaling of font sizes even without any special shrink-to-fit. It might be useful to fix those bugs first before attempting to fix this one, since those are problems that show up even when these three values represent only two independent concepts and the third can be computed from the other two. Or am I misunderstanding something here?
What you said is correct.. but a few clarifications. 1.) DevUnitsToAppUnits.. .. it is PixelsToTwips.. Dev (Device) units are Pixels and App (Application) units are twips. These should be renamed and/or we should have just one set of methods to convert these. Right now we have a variety of methods that do the same thing. 2.) The canonical pixel scale maps a DevUnit from one DeviceContext to another DeviceContext I think the original author made it for converting from pixels to pixels depending for things like scroll bars.. or things that were based on a pixel scale not on a twips scale. For example.. windows have scroll bars and can return the width of scroll bars. Printer have no such thing, how do I get easily get this measurment to the printer? DO not use this value for converting twips based on one device to twips for another device. 3.) Name needs to be changed on the canonical.. this is so misleading.. or confusing. No one gets it right or can even pronounce this.
So.. for a start we should 1.) consolidate the mTwipsToPixels and mAppUnitsToDevUnits. 2.) consolidate the mPixelsToTwips and mDevUnitsToAppUnits. From what I can see.. these are always the same values. 3.) rename above to something like CoreUnitsToDevUnits and DevUnitsToCoreUnits. 4.) rename the Canonical to DeviveToDeviceRatio. To follow.. my ideas on what should be done to fix the scaling problem..
Created attachment 91153 [details] [diff] [review] full patch This patch has the DV set the current scale into the PresContext. When the font is Points it uses only the scale to change its size, otherwise it uses the ZoomScaleFont method that uses the CanonicalPixelScale after converting the twips to Pixels.
Comment on attachment 91153 [details] [diff] [review] full patch r=dcone
Comment on attachment 91153 [details] [diff] [review] full patch This patch: 1. doesn't address my comments on the ZoomFont function in comment 9. 2. Doesn't explain why you're doing what you're doing, which isn't obvious at all and which needs to be explained. 3. Makes the |aDoZoom| cases asymmetric for the printing case, which seems incorrect. 4. Ignores the text zoom parameter for printing, which someone may want to use at some point. We have no reason to want to break it.
Overview: First off, the "text zoom" is not used when printing or in print preview. I deliberately turn all text zoom (set it 1.0). In order to a "real" zoom for PP we need to step back an look at the entire problem of zoom the entire presentation of the page and may be a transformation matrix. The way text zoom works is to actually reflow the document. One might want to argue that we should somehow convert the text zoom value into a meaningful 'scale" value for Printing and PP. As it stands, the size of the font has already been converted to twips and we need to convert it back to pixels and use the canaonical pixel scale to get the right value. That is unless the twips values for the font size came from points and then we want to use the current "scale" to directly convert the value. The canonical pixel scale is a scaling value between two devices, the screen and the printer. For scaling in Printing and PP we take that value and multiply it times the user's desired scale value (in a way it is kinda of a hybred value). So at this point in the execution the only way to know what the true scale percentage is, is to add a new method to set/get it from the PresContext, which is what I have done. The aDoZoom don't mean mean do zoom or do-scale it means zoom or unzoom, scale or unscale. Look at the various ZoomScaleFont calls and one of them needs to undo the zoom/scale. The only "special case" I have added in all this code is when we discover that the font is a "point" value which is always converted exactly to the same twips value no matter what the canonical pixel scale is or what device we are going to. Therefore we want to use only the true scale for it. I also don't want to use the suggestion of ZoomFontFor because that just involves more code everywhere else where we woul have to do all the converting to and from pixels. I should add more comments and will put up another patch with those, but this is the best approach for now until we want to address a full zoom feature which would have more to do with "magnification" than text zooming.
Created attachment 92430 [details] An even more complete test
Created attachment 92776 [details] [diff] [review] patch The nsRuleNode is responsible for zooming and scaling fonts. Right now in Printing/PP we cache the current Zoom value and then set it to 1.0 and then allow for the scaling of the presentation. The RuleNode has all Zooming and Scaling on font size computed in ZoomScaleFont. It checks to see if we are in Paginated mode and then does scaling. ZoomScaleFont is also capable of un-zooming or un-scaling. The trick is that all fonts with sizes defined in pixels already have the scale built into their value. This is because, before doing reflow the DocumentViewer (DV) calculates the scale (or it is set by the user) and then multiples it by the CanonicalPixelScale and resets the CanonicalPixelScale's value. So from the statr when everything is created they use the CanonicalPixelScale with the scale already built into it. So this means as pixel sized fonts are created and they convert their size to twips they are doing the scaling at that time. Point size fonts would typically get directly converted over to twips via a constant, but now they are multiplied by the scale. ------- Summary --------- Zooming: Zooming only effects the size of fonts and nothing else, that's why it is referred to as "text zooming". It never takes into account the CanonicalPixelScale. Scaling: Scaling scales everything. It does this by setting the CanonicalPixelScale that most all calculates use in one form or another for calculating there size for the dislpay. The bug is really that the Point size fonts don't get there size correctly scaled. This is because of the direct conversion from Points to twips. So what this patch does is the following: It scales all Point size font when Printing/PP. It changes the function ZoomFont to ZoomScaleFont. It makes sure that when Printing/PP pixel sized fonts are not scale (because they already have been) and that point sized fonts are. The changes to the DocumentViewer and PresContext are for getting the correct scale value and storing it in the PresContext so it can be used by the RuleNode. Note: I have tested Printing, Print Preview and Text Zooming. This patch actually fixes a couple of problems with text zooming also.
Comment on attachment 92776 [details] [diff] [review] patch r=dcone.
Comment on attachment 92776 [details] [diff] [review] patch The regression of this patch with the PostScript printing module (on Linux) seems unacceptable. I presume you've tested this a good bit, so I'm guessing you're doing your testing on Windows. Based on my memory of looking at some of the printing code a few months ago and based on the symptoms I'm seeing in the screenshot, I suspect that the problem here is that the Windows printing code and the PostScript printing code are doing different things -- i.e., they're reporting different numbers for these various values. I suspect this comes back to the question of defining what the model is (see comment 9). Have we documented the current model of what these three numbers mean well enough to know which one is right? Which one is right? Why? Can we fix the other one easily?
Looking at the screenshot, I don't see a regression. I see that "medium" and "font-size: 12pt" still aren't right on Linux, but overall it is an improvement. Please point out the regression.
12pt is right in the upper screenshot, but way too small in the lower. Still, why isn't this working cross-platform? If the model were well-defined, it would just work.
David Baron wrote: > 12pt is right in the upper screenshot, but way too small in the lower. Still, > why isn't this working cross-platform? If the model were well-defined, it > would just work. 1. The results using attachment 92776 [details] [diff] [review] do not differ much from the output when the original shrink-to-fit code was working properly. I don't see a reason why this patch should be blocked. 2. The PostScript module has more than one bug related to font size calculations. I would not wonder when we hit one or the other issue there...
*Why* does the postscript module have these issues?
David Baron wrote: > *Why* does the postscript module have these issues? What do you want to hear ? I am not sure whether the authors did any tests with small font sizes and/or a scaling factor != 1.0 when the code was written in ~1999.
Yes, I think the patch should go in also. Think of this patch as fixing the XP portion of the font sizing/scaling/zooming bug. Then we can open a new bug for the Linux specific issues (or maybe there are bugs already filed). If the font sizing is already wrong it can't really be a regression....Also, once this is in it can help us better understand what is wrong with Linux. Plus, this patch fixes some problems with text zooming as it is.
Is there a slight chanche that this patch gets it's way into the 1.0.1-branch or should we add a comment to the "1.0.1 release notes" that the "shrink-to-fit functionality has been regressed since 1.0 and that people have to use either 1.0FCS or wait for fixed trunk builds to get usefull printouts on paper" ?
nsbeta1+. This is a very visible bug, since fit to page is the default setting.
Created attachment 97953 [details] [diff] [review] Updated patch The patch need to be re-merged with the tree.
Created attachment 97997 [details] [diff] [review] re-merged patch
Created attachment 98067 [details] [diff] [review] screen shot of print output with re-merged patch This is the output from the "re-merged" patch. It now looks correct. I can't explain why the previous screen shots were not correct. This patch is ready to be checked in.
Created attachment 98069 [details] screen shot of print output with re-merged patch Try this againCreated attachment 98069 [details] screen shot of print output with re-merged patch Try this again
Comment on attachment 97997 [details] [diff] [review] re-merged patch r=dcone
I don't know what kind of bugs drivers is targeting for the 1.0 branch, but is there any chance that this type of fix would be wanted for 1.0.2? This is a bug that has not only made Mozilla evangelism harder, but has forced me to use IE on a couple of occasions. And I might as well ask, is there any document that defines what kind of patches will be considered on the 1.0 branch?
Comment on attachment 97997 [details] [diff] [review] re-merged patch >Index: content/base/src/nsRuleNode.cpp >=================================================================== >RCS file: /cvsroot/mozilla/content/base/src/nsRuleNode.cpp,v >retrieving revision 1.43 >diff -u -r1.43 nsRuleNode.cpp >--- content/base/src/nsRuleNode.cpp 4 Sep 2002 02:31:46 -0000 1.43 >+++ content/base/src/nsRuleNode.cpp 5 Sep 2002 19:50:34 -0000 >@@ -204,7 +204,19 @@ > { > NS_ASSERTION(aValue.IsLengthUnit(), "not a length unit"); > if (aValue.IsFixedLengthUnit()) { >- return aValue.GetLengthTwips(); >+ // When Printing/PP scale the Point size here when it is initially calculated >+ // NOTE: Point size fonts have nothing to do with CanonicalPixelScale >+ // and all fonts that have pixel derrived sizes have already been scaled >+ // by the CannonicalPIxelScale >+ PRBool isPaginated = PR_FALSE; >+ aPresContext->IsPaginated(&isPaginated); >+ if (isPaginated) { >+ float scale; >+ aPresContext->GetPrintScale(&scale); >+ return NSToIntRound(float(aValue.GetLengthTwips()) * scale); >+ } else { >+ return aValue.GetLengthTwips(); >+ } This comment runs past the 80-character limit. Also, I don't see why, to implement a full zoom, you have to be mucking around at this level of unit calculation. It seems like this change is at the wrong point. > } > nsCSSUnit unit = aValue.GetUnit(); > switch (unit) { >@@ -1621,14 +1633,44 @@ > return res; > } > >+/** >+ * This helper function will zoomScale OR scale the font depending on whether we are in >+ * Galley mode or Printing (Print Preview). >+ * >+ * aDoZoomScale - indicates whether we should zoom or unzoom, scale or unscale >+ * >+ * NOTE: We only want to scale fonts using the CananonicalPixelScale where the twips values >+ * came from a Pixel type value. For example, Pixel, EM, or XHeight >+ * For font sized difine by Points, inches, millimeters, centimeters etc. >+ * we need to multiple them directly by the scale factor >+ */ > static nscoord >-ZoomFont(nsIPresContext* aPresContext, nscoord aInSize) >+ZoomScaleFont(nsIPresContext* aPresContext, >+ nscoord aInSize, >+ PRBool aDoZoomScale = PR_TRUE) > { >- nsCOMPtr<nsIDeviceContext> dc; >- aPresContext->GetDeviceContext(getter_AddRefs(dc)); >- float textZoom; >- dc->GetTextZoom(textZoom); >- return nscoord(aInSize * textZoom); >+ >+ PRBool isPaginated = PR_FALSE; >+ aPresContext->IsPaginated(&isPaginated); >+ if (isPaginated) { >+ float scale; >+ aPresContext->GetPrintScale(&scale); >+ if (aDoZoomScale) { >+ return nscoord(float(aInSize) * scale); >+ } else { >+ return nscoord(float(aInSize) / scale); >+ } >+ } else { >+ nsCOMPtr<nsIDeviceContext> dc; >+ aPresContext->GetDeviceContext(getter_AddRefs(dc)); >+ float textZoom; >+ dc->GetTextZoom(textZoom); >+ if (aDoZoomScale) { >+ return nscoord(aInSize * textZoom); >+ } else { >+ return nscoord(aInSize / textZoom); >+ } >+ } > } I don't see any reason to disable text zoom for printing, other than the fact that you're adding this code at the same level as text zoom (which seems like the wrong level) so that it makes text zoom harder to understand. More on this below. I don't like boolean parameters with confusing meanings like this. I much prefer the syntax I used in my patch on bug 154751 (not checked in yet). > aPresContext->GetFontScaler(&scaler); > float scaleFactor = nsStyleUtil::GetScalingFactor(scaler); > >- zoom = PR_TRUE; >+ zoomScale = !isPaginated; > if ((NS_STYLE_FONT_SIZE_XXSMALL <= value) && > (value <= NS_STYLE_FONT_SIZE_XXLARGE)) { Changes **like these** make text zoom (which is a rather complicated operation since it has to scale all text exactly once, which requires not doing multiple-scaling on text sizes that are inherited from or relative to an already scaled number, but which also requires knowing the scaled size at this level of the system in order to scale line heights) even harder to understand. I don't see why you need changes in this area of code at all if you're implementing a "full zoom" -- that should be able to live entirely towards the gfx end of the system (perhaps with a few changes to layout/style consumers, but not of this scale or complexity). >- // We want to zoom the cascaded size so that em-based measurements, >+ // We want to zoomScale the cascaded size so that em-based measurements, > // line-heights, etc., work. No, that comment makes sense for text zoom. It makes no sense at all for full zoom. Furthermore, "zoomScale" isn't an English word that I'm familiar with. >- if (zoom) >- aFont->mSize = ZoomFont(aPresContext, aFont->mSize); >+#ifdef DEBUG_rods >+ printf("Before: %d", aFont->mSize); >+ PrintCSSValueType("***** %-25s ", sizeUnit); >+ >+ if (zoomScale) >+ aFont->mSize = ZoomScaleFont(aPresContext, aFont->mSize); >+ >+ printf(" After: %d\n", aFont->mSize); >+#else >+ if (zoomScale) >+ aFont->mSize = ZoomScaleFont(aPresContext, aFont->mSize); >+#endif #else is very dangerous -- you could just use two |#ifdef DEBUG_rods| regions with the non-debug code between them, if you even feel this needs to be checked in at all. >@@ -2331,10 +2464,10 @@ > SETCOORD_LH | SETCOORD_NORMAL, aContext, mPresContext, inherited); > > // line-height: normal, number, length, percent, inherit >+ const nsStyleFont* font = NS_STATIC_CAST(const nsStyleFont*, >+ aContext->GetStyleData(eStyleStruct_Font)); > if (eCSSUnit_Percent == textData.mLineHeight.GetUnit()) { > aInherited = PR_TRUE; >- const nsStyleFont* font = NS_STATIC_CAST(const nsStyleFont*, >- aContext->GetStyleData(eStyleStruct_Font)); > text->mLineHeight.SetCoordValue((nscoord)((float)(font->mSize) * > textData.mLineHeight.GetPercentValue())); > } else { Any reason you chose to potentially slow this code down a bit?
There are really three things that keep getting talked about with this code: 1) Text Zoom (requires reflow) 2) Scaling (require reflow) 3) Zooming/Magnification (gfx only no reflow needed) #3 is completely unrelated to any of these changes or this problem. Comments running past 80 chars, you have to be kidding me? At least there are comments, so little code is checked in with detailed comments. Currently Tex Zoom is turned off for printing, I have never once seen a requirement for TextZoom to work printing. It makes more sense to have a full on scale for printing were everything gets scaled. At the moment, print scaling and Text Zooming do not behave well with each other when printing (or PP) and since there is no requirement for those to work together at this time, it is easier and saner to turn off TextZooming and just obey scaling. I think it would be very confusing for a user to have TextZooming effect how a "scaled" document is Print Previewed or printed out. I don't think they want to be adjusting two things at once. You wrote: "I don't see why you need changes in this area of code at all if you're implementing a "full zoom" I AM NOT IMPLEMENTING FULL ZOOM! I am not sure how many times I need to explain this, I am implementing scaling. You must still not understand the problem that is being solved here. This checkin has absolutely NOTHING to do with Full Zoom (magnification), this checkin deals with how font sizes get scaled for printing and in PP. It is complicated by the fact that Pixel-based fonts are already scaled because the CanonicalPixelScale and Point-based sized fonts are not scaled. So I have to figure out what to scale and what not to scale. What is up with this comment: "Furthermore, "zoomScale" isn't an English word that I'm familiar with." Are you implying that every variable and function name must be found in the English dictionary? I can change it to "isZoomOrScale If you don't understand the problem being solved just say so, I can explain it in more detail. But nit picking the patch that works and fixes issues with Text Zooming in Galley mode and fixes several problems with Scaling for Printing and Print Preview is just not right. The patch is well documented even if it does exceed 80 chars, it is clean, well thought out, it solves ONLY the problem with scaling fonts. I can't see where it adds an undue amount of complication or where it would even make the code noticably slower. It does check to see if it paginated but that check isn't a high cost operation.
> But nit picking the patch that works and fixes issues with Text > Zooming in Galley mode and fixes several problems with Scaling for Printing and > Print Preview is just not right. I already tried general comments earlier, but you weren't interested in them, and kept repeatedly asking for review of a detailed patch that I said was wrong, so I gave it. What issues does it fix with text zooming in galley mode? How is scaling different from full zoom? What does it not zoom?
(Note that what I refer to as "full zoom" does require reflow -- you do magnify everything at a gfx level, but the size of the page / browser is the same, so you have to reflow.)
Let me summarize the issue as I understand it: What is being implemented here is what you call scaling, but what I call full zoom. It is a type of zoom that scales everything, but that requires reflow since the size of the page and/or window is not scaled. This patch is an attempt to compensate for bugs that I described in comment 9 by doing reverse and/or missed corrections at a level of the code that should not know anything about a type of zoom that scales all objects, uncessarily complicating (i.e., making unmaintainable) the code that does a different type of zooming (text zoom).
> What is up with this comment: > "Furthermore, "zoomScale" isn't an English word that I'm familiar with." No, I'm implying that the comment that you were modifying was describing the issue in complete English sentences. The variable name |zoomScale| represents something that would be described in English as, perhaps, "zoom or scale", so the comment should reflect You're clearly moving a function call that's only needed in some cases so that it's made in more cases than it's needed. That does, unquestionably, slow things down, although probably not measurably. There's also no clear reason why you made the change.
I apologize for my last comment in the bug. Please ignore it, I will have an updated patch coming soon.
This is seriously messed up. We should not be special-casing print contexts in the style system. We need to fix our units system along the lines dbaron suggested. Apart from anything else I've been thinking about what we have for an hour and I still don't understand it, and I'm not feeling particularly stupid today. I'd vote for a system where we keep everything in coordinates which are 1/256 of the "CSS pixels" for a device, and where the device context tells us the number of CSS pixels it gets per one physical inch. Isn't that all we need? GFX/widget can take care of mapping those coordinates to whatever the device or toolkit coordinate system is. If we had that then we wouldn't need all this special case code, right? The style system would just look up the CSSpixels/inch value to convert every fixed length and font size. GFX would be responsible for doing print scaling.
Robert, I agree with you that this patch is not optimal and doesn't fix the real problem. The fix for the real issue is Bug 167162. To fix the real problem we will end up touching gfx (especially nsFont) and all of layout and the ultimately it will be rather time consuming and risky and that is after we can agree on an approach. The reality is that "Shrink To Fit" is turned on as the default, and every page with a CSS defined Point sized font will not scale correctly, look bad and get cropped (see original testcase). The code that is there is already wrong. This patch doesn't violate anything that hasn't already been violated, in fact, it actually documents some of the issues that are causing the problems. I consider this to be a temporary patch until the real issue can be fix (Bug 167162). Again I request that this patch be allowed in so pages don't get cropped. I really don't want to see the next review of Mozilla say something like "Shrink To Fit works great except that it doesn't shrink all the text, some still gets cropped." (A final patch with dbaron's suggestions will be attached soon)
I can accept this temporary fix if there's a real commitment from you and other interested parties to work toward a real fix. A real fix should carefully define, and simplify, the systems of units we use, and the way we do scaling, across the entire system, as dbaron has talked about above. Bug 167162 may be a step in the right direction but my gut feeling is that the presence of GetCanonicalPixelScale/GetDeviceRatio means we're doing something fundamentally wrong.
Created attachment 100403 [details] [diff] [review] patch with dbaron's suggestion
Comment on attachment 100403 [details] [diff] [review] patch with dbaron's suggestion bringing don's r= forward
Can we can get a sr on the patch for this bug soon?. We need to get some testing on this before Mozilla1.2 goes out. I can set up a teleconference so we can start discussing a long term solution.
Comment on attachment 100403 [details] [diff] [review] patch with dbaron's suggestion If I have the power to say no to this approach, then I've said no, and asking for another review of the same or a similar patch won't change that answer. See comment 39 for a summary of my objections. See also comment 20, comment 9, and comment 5.
The target milestone for this bug was recently changed to Mozilla 1.2 final, which doesn't seem realistic. As far as I can tell from the comments, this bug should be marked "WONTFIX", or should be marked as a duplicate of bug 167162.
Because the fix for 167162 will probably be quite involved I am still hoping to get this in as interim fix.
I'm afraid I don't understand comment #49: "(From update of attachment 100403 [details] [diff] [review]) ...I've said no [to this approach]" Which lines in the patch are the approach that dbaron has said no to? He can't be referring to this entire bug, as he has already added 14 comments with constructive critism about this bug in general and rods@netscape.com's patches in particular, most of which rods@netscape.com has addressed. That this is a temporary fix is clear; work is being done on bug 167162, with a rough patch submitted and comments being solicited. Shouldn't a working albeit temporary patch get sr now, since it is not clear how long it will take to bang a patch for the larger bug 167162 into shape?
> Shouldn't a working albeit temporary > patch get sr now, since it is not clear how long it will take to bang a patch > for the larger bug 167162 into shape? That's precisely the problem. The code will become permanent and make fixing other bugs a lot harder, and nobody will have an incentive to fix this one correctly.
It's 1.3alpha now. Please convene the right people, come up with a plan in the 1.3 cycle to fix the underlying bug(s) correctly. Do what you need to do, take "risks" (all changes have risks; risk is p * cost; the risk of patching around underlying bugs is already exacting a high cost in Gecko, because the probability that the temporary patching will be permanent gunk complicating the wrong layeres of code is nearly 1), don't perpetuate the underlying problems. /be
Here are our options: 1) Check in the patch, it doesn't make the code significantly less maintainable or more complicated. In fact, as I have said before, it actually documents (in several) areas the hidden issues that already exist today. 2) Not check this patch until either david fixes it or I can get to it. The problem with "doing it right", is that the correct fix will touch a lot of areas: gfx (fonts, cannoical pixel scale, etc) layout (several frame classes), content (the printing code). It will be a significant change that will require a lot of thought, design, investigation, etc. The correct fix isn't just risky, it has a high level of risk associated with it, and because of the breadth of the changes the impact to the various areas and the amount of risk (which implies a lot and lot of testing). This bug will require a signicant amount of time. Time which I certainly do not have in the 1.3 milestone and possibly even 1.4. So, I am all for fixing things right given the time and the resources, neither of which we have right now. Since I don't feel this fix overly complicates the code or makes it less maintainable I think the fix should go. That or it stays broken for at least one or two more milestones (unless David signs up to fix it) This is a very visible bug, fonts defined in point sizes do not scale. I think it is a shame it wasn't fixed 1.2.
I think we should spend whatever time and resources we have on fixing "deep" bugs like bug 177805 rather than symptoms, like here. Otherwise we won't make real progress. So far, my proposal in bug 177805 seems to be withstanding scrutiny. The next step is to figure out how to get there from here and how to break up the work into manageable pieces.
here is a proposal. since we expect distribution of 1.2 to be low let's hold off on this implementation in favor of doing work on bug 177805 and addressing the problem there. We do expect higher distribuition off of 1.3 or 1.4 so if the development work for 177805 is looking to land in that time frame many could benefit by the the fix in that milestone. If 177805 is not done by then we could reconsider adding the patch here to the 1.3 or 1.4 branch as a temporary fix that gets users the benefits of this fix. No need for more discussion here or any more decision making on this bug. We can revisit the status of work on 177805 in a month or two and make decisions then...
*** Bug 161080 has been marked as a duplicate of this bug. ***
-> jkeiser
*** Bug 166388 has been marked as a duplicate of this bug. ***
Four months later... no progress. Printing major pages is still defunct due this bug. I'll vote to get the current patch "in" - just preventing people from getting usefull printouts because we wait for stuff which is not going to be implemented in the near future is just silly.
ADT: Nominating topembed
[Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.4b) Gecko/20030507] Got this bug today: [ <link href="impression.txt" rel="stylesheet"> ] with [ font-size : 16pt; ] (different sizes are used) inside. The page scaled horizontally, but not vertically: allways got 2 pages instead of 1. I had to use I.E. (v6.0sp1) :-( I read all the comments; I support the idea of an temporary fix: this bug has been opened for almost a year, with a patch standing by for 9 months.
Addition to comment 63: Afterward, I tried to print the page on paper: got 2 pages, exactly as expected from Print Preview display :-| In this case, Mozilla is consistent :-) but verified unusable to print that page :-( NB: By the way, the page I am talking of is a 1 page (otherwise quite simple) electronic (HTML) document generated by an Intranet application developped by my company for its employees :-(( {I could attach it to this bug if needed.}
jkeiser, has further dev work been doen on this? If not can we get it into 1.4.
[Mozilla/5.0 (Windows; U; Win95; en-US; rv:1.4b) Gecko/20030507] Addition to comment 63: Same bug with v1.4b/W95. ***** Quoting <>, [ Fix crucial Gecko layout architecture bugs, paving the way for a more maintainable, performant, and extensible future. ] I believe that: the patch should go in for v1.4, _which will replace v1.0_; and the underlying issue planned for v1.5/v1.6/etc cycle.
Renominating for adt consideration. Is this an adt1?
never worked, probably requires some re-architecting in layout. not gonna block 1.4 for this.
adt: nsbeta1-
This is just highly unlikely to happen for 1.4.
*** Bug 210411 has been marked as a duplicate of this bug. ***
are these duplicates? bug 207083 bug 208000 bug 212303 bug 186918 bug 193680
For what it's worth... As a user, I'd think of these various zooming functionalities as follows: 1. Text size adjustment: The font size specified by the webpage's designer is adjusted to suit the user's needs. Note that nothing is zoomed in the same sense as a camera's zoom lens zooming into a subject. ---layout happens after text size adjustment--- 2. Content scaling: The visual content, whose various elements are already laid out with respect to one another, is scaled altogether as a whole before it gets rendered onto a stroke-level canvas. This canvas represents the medium where the final rendering will take place, e.g., paper. But this canvas can be previewed onscreen first. In the Print Preview feature, the user sees this canvas as it will be rendered. 3. Magnification/zooming: This is the real zooming feature where the user can zoom in and out of the canvas as it is projected onto the screen. The attachment "print preview.pdf" might illustrate what I meant. Note that the ability to adjust the text wrapping margin (not only in the print preview mode) is especially useful in a tabbed browser in which several tabs might have different appropriate wrapping margins. I hate those pages that let their text run across the width of my 18" monitor... so hard to keep track of the lines. Thanks. David
Created attachment 161043 [details] Ideas for consolidating various "zooming" functionalities
Bug 205001 has a nice simple patch that fixes this bug without adding huge amounts of additional complexity. Apparently somebody did work out which numbers in our unit system represent the third dimension -- although it could certainly be clearer. *** This bug has been marked as a duplicate of 205001 ***
Actually, it didn't actually work. So I think the fix for this should look a lot like a combination of comment 5 and the patch in bug 205001.
David Baron or Somebody: What's status in this bug? If nobody works on this, I want to work on this. # Or should we wait bug 177805? But that is not active in this 9 months...
Bug 177805 is waiting until we can remove support for non-cairo graphics. I recommend waiting until then before working on this. If you run out of other things to do, I can make some suggestions :-)
Thank you for your reply. I wait bug 177805, because I see by your reply why it is stopped. thanks.
I'm unsure what people need: The bug was opened 2002. It's 2006 now, and "fit to page" doesn't work in Firefox 1.5.0.1, Mozilla 1.7.12, nor Microsoft IE 6. All in Windows/XP. When printing the rightmost part of a table is simply cut off. Would an example file help?
Please read the bug before commenting.
(In reply to comment #81) > Please read the bug before commenting. Actually I did (read the comments), and I was wondering why it couldn't be fixed within 4 years. (Konqueror of KDE can do the scaling correctly). The gecko bug is just as old. Do you need another test case? I guess not.
Bug 177805 (Fix the use of units in Gecko) was fixed by "2007-02-06 23:46 Eli Friedman". What would be the next step for this print/scale bug ?
Maybe we should set the "helpwanted" keyword. This bug seems to be abandoned, even though we need it for full page zoom.
Is this not already fixed on trunk by the changes in bug 177805 (units patch)? If not, it should be easy to fix now.
Already fixed, as far as I can tell. | https://bugzilla.mozilla.org/show_bug.cgi?id=153080 | CC-MAIN-2017-13 | refinedweb | 7,129 | 71.44 |
However, for this problem, I found that in order to apply the algorithm accurately, more than the standard amounts of Double and Int precision were necessary. So, I had to use a FixedPoint package to give me access to 128-bit Integers and 256-bit fixed fractions. The use of these custom data types increased the runtime to about 2 minutes on my machine, but it's still fairly reasonable.
And now that I have "freed up Python", a solution I wrote a while ago will be able to cause the first forward momentum on problems in a long time.
import Data.List import Data.FixedPoint import Debug.Trace -- Replace Double with Rational or with FixedPoint cfr :: FixedPoint256256 -> (Int -> Int128) cfr = repindex . cfr_helper repindex :: ([Int128], Int) -> Int -> Int128 repindex (coeffs, rep) index | index < (length coeffs) = coeffs !! index | otherwise = coeffs !! (rep + ((index - (rep)) `mod` ((length coeffs) - rep))) cfr_helper :: FixedPoint256256 -> ([Int128], Int) cfr_helper alpha = let a0 = floor alpha in let alpha1 = alpha - (fromIntegral a0) in cfr_rec alpha1 [a0, floor (1.0/alpha1)] [alpha1] cfr_rec :: FixedPoint256256 -> [Int128] -> [FixedPoint256256] -> ([Int128], Int) gauss_map :: FixedPoint256256 -> FixedPoint256256 gauss_map alpha = (1.0 / alpha) - (fromIntegral (floor(1.0 / alpha))) fuzzyIndex :: FixedPoint256256 -> [FixedPoint256256] -> Maybe Int fuzzyIndex _ [] = Nothing fuzzyIndex x (y:ys) | abs(x - y) < 1e-5 = Just 0 | otherwise = case (fuzzyIndex x ys) of Just i -> Just (i + 1) Nothing -> Nothing cfr_rec alpha0 as alphas = let alpha1 = gauss_map(alpha0) in case fuzzyIndex alpha1 alphas of Just i -> (as,i+1) Nothing -> cfr_rec alpha1 (as ++ [floor(1.0 / alpha1)]) (alphas ++ [alpha1]) min_soln :: Int -> (Int128,Int) min_soln d = let coeffs = ((cfr . sqrt' . fromIntegral) d) in min_soln_rec d (coeffs) (coeffs 0) 1 1 0 1 min_soln_rec :: Int -> (Int -> Int128) -> Int128 -> Int128 -> Int128 -> Int128 -> Int -> (Int128,Int) min_soln_rec d coeffs p1 q1 p0 q0 n = let an = coeffs n in let (pn,qn) = (p1 * an + p0, q1 * an + q0) in if (pn^2 - (fromIntegral d)*qn^2) == 1 then (pn,d) else min_soln_rec d coeffs pn qn p1 q1 (n+1) maxSoln :: (Int128,Int) -> (Int128,Int) -> (Int128,Int) maxSoln (x1,d1) (x2,d2) | x1 > x2 = (x1,d1) | otherwise = (x2,d2) ans :: Int -> Int isNotSquare :: Int -> Bool isNotSquare x = let xx = fromIntegral x in (floor (sqrt xx))^2 /= (floor xx) ans b = snd . foldr maxSoln (0,0) $ map min_soln $ filter isNotSquare [1..b] main = putStrLn $ show $ ans 1000 | http://pelangchallenge.blogspot.com/2016/02/problem-66-haskell-continued-fractions.html | CC-MAIN-2020-29 | refinedweb | 384 | 59.23 |
Dave, The following testcase is an example of code used in a glibc testcase. I'm trying hard to shake out the bugs in the glibc testsuite for debian, and one testsuite failure looks like a compiler issue. The expected behaviour is for the testcase to print the raw IEEE754 value of -NAN. The observed behaviour, when -DALT is on the command line, is that the testcase prints the incorrect raw value e.g. NAN. GCC 4.4.3 in debian doesn't compile this code correctly. Could you have a loot at my analysis and tell me if you have seen this before? cat >> tst-mul-nan.c <<EOF #include <stdio.h> #include <math.h> #ifdef ALT volatile double nanval; #else #define nanval NAN #endif int main () { #ifdef ALT nanval = NAN; #endif printf ("0x%llx\n", -nanval); return 0; } EOF gcc -g3 -O0 -save-temps -o test-mul-nan-OK test-mul-nan.c; ./test-mul-nan-OK 0xfff7ffffffffffff This is the correct result e.g. -NAN. In the correct case the compiler has already computer -NAN and it's loaded directly from the local symbol e.g. .LC1: .word -524289 .word -1 This is the case that is not working correctly: gcc -g3 -O0 -save-temps -DALT -o test-mul-nan-NG test-mul-nan.c; ./test-mul-nan-NG 0x7ff7ffffffffffff That result is not -NAN, it is NAN. This is incorrect. In the incorrect compilation the compiler loads NAN from a local constant: .LC0: .word 2146959359 .word -1 This is 0x7ff7ffffffffffff e.g. NAN.? In the incorrect case the compiler tries to multiply this value by NAN to get a result of -NAN. addil LR'nanval-$global$,%r27 copy %r1,%r19 ldo RR'nanval-$global$(%r19),%r19 fldds 0(%r19),%fr23 ldil LR'.LC2,%r19 ldo RR'.LC2(%r19),%r19 fldds 0(%r19),%fr22 fmpy,dbl %fr23,%fr22,%fr22 It seems like it should work (even if fr22 is -1.875), since the sign of the output NAN is the XOR of the signs of the inputs, therefore "- XOR + = -" and the the result should be -NAN, but it's not, it's NAN? Why? PA-RISC 2.0 Architecture, Floating Coprocessor 8-23 "Operations With NaNs", and 8-24 "Sign Bit" can be referenced for information on NANs. After the multiplication fr22 still contains NAN, and that is what is printed instead of the expected result of -NAN. Any idea what is going on here? Thanks for your help. Cheers, Carlos. | https://lists.debian.org/debian-hppa/2010/05/msg00014.html | CC-MAIN-2014-10 | refinedweb | 414 | 74.19 |
The official blog of the Microsoft SharePoint Product Group
While the Program Managers are busy trying to get Beta 2 done, I thought I would post a blog entry about one of my favorite new features of Office SharePoint Server 2007: the Business Data Catalog (BDC), which enables integration with LOB and other applications by connecting via Web services or ADO.NET interfaces and displaying business data on the portal without any coding..
Business Data Web Parts.
Table 1 provides a brief description of the Business Data Web Parts.
Web Part
Description.
Web Part
Description
Business Data List
Displays a list of entity instances from a business application registered in the Business Data Catalog. For example, you can use a Business Data List Web Part to display all the customers or orders from the AdventureWorksDW database.
Business Data Items
Displays the details of an entity instance from a business application. For example, you can use a Business Data Items Web Part to display the details of a particular customer or order from the AdventureWorksDW database.
Business Data Related List
Displays a list of related entity instances from a business application. For example, you can use a Business Data Related List Web Part to display all the orders for a particular customer from the AdventureWorksDW database.
Business Data Actions..
Lawrence Liu - Senior Technical Product Manager and Community Lead
If you would like to receive an email when updates are made to this post, please register here
RSS
PingBack from
PingBack from
I did get this to work on a clean install of MOSS B2TR, but when I try to connect it to a BDC web part, I get errors like:
(Column titles DO display)
The Business Data Catalog is configured incorrectly. Administrators, see the server log for more information.
In the server app error log:
A Metadata Exception was constructed in App Domain '/LM/W3SVC/669577904/Root-1-128069273844023446'. The full exception text is: The Property with name 'SsoApplicationId' is missing on the LobSystemInstance.
I'd like to know what type of connections can/should be made using BDC and user profiles. I'd like to allow updates to AD or ADAM, and wonder if BDC could facilitate this.
Is there an easy way for SharePoint Search to crawl and index contents stored in OpenText LiveLink system? Maybe by using BDC or using custom code integrated with SharePoint. And upholding permission settings set by LiveLink would be nice, but not sure if that is possible...
Thanks to those of you who partipated in my SharePoint Connection basic and advanced Deployment talks
Hola, tienes algun link de BDC pero en el RTM?, puesto que ya instalar el RTM de SPS2007 y tanto las pantallas de configuración como el proceso de registro del xml de configuracion cambia y no he encontrado informacion de como utilizarlo...
gracias
I have the saem question as Kris (26/8/206).
The tool on Codeplex doesn't seem to help me. Any pointers on how to update the BDC-Columns on a programmatically updated Item. Just assigning a value doesn't seem to do anything.
We have thousands of documents to migrate and this is the only thing we can't solve at the moment.
TIA,
Ed
i got below error while loading application definition file.
Application definition import failed. The following error occurred: The root element of a valid Metadata package must be 'LobSystem' in namespace ''. The root in the given package is 'LobSystem' in namespace ''. Error was encountered at or just before Line: '1' and Position: '2'
Regards,
Arulraja.S
Hi,
I've tried setting a BusinessData type column for a file using the 'Lists.asmx' UpdateListItems webservice method, but can't get it to work. Though I can set the primary BDC column (for example 'Person'), the webservice won't let me set the 'RelatedField' (People_ID) with the long ID number (eg '__ck74754356465.....').
I am high experienced with setting all other SharePoint types (choice, datetime, lookupmulti etc) but haven't been able to find any information on this yet.
Thanks heaps in advance
Sam
PingBack from
Has anyone found a solution to this error. I have been getting it as well and have found no answers. I am running SQL 2005 on Windows 2003 Enterprise Server and RTM MOSS. Any help would be great. My email is ccalcut_at_gmail_dot_com if it doesn't show up on my MSDN profile.
****I did get this to work on a clean install of MOSS B2TR, but when I try to connect it to a BDC web part, I get errors like:
A Metadata Exception was constructed in App Domain '/LM/W3SVC/669577904/Root-1-128069273844023446'. The full exception text is: The Property with name 'SsoApplicationId' is missing on the LobSystemInstance.******
Sam, check the EntityDataTableAdapter for the __serializedId property when generating a datatable.
I am working with a web service, i have managed to handcraft a BDC that correctly searchs my ws, i am returning an user defined data object from the ws. However, i can't manage to replicate this in code in order to create the BDC automatically.
Any ideas?
does anyone know how to add a bdc column as a lookup field in a regular list?
Does anyone know how to go about connecting a BDC Data List WP to a BDC Related List WP? Sounds pretty straight forward at first glance, I know, but here's the problem though, the BDC List WP and the BDC Related List WP are connected to two different data sources which have different primary keys. Can an association still be created in this scenario? Perhaps the using the Business Data Item Builder for this? Thoughts?
If anyone is interested in writing a application definition file to connect to oracle, here is a blog from me:
Nidhi Seth
whenever you try to use the type System.Guid as an identifier in your BDC application definition the following exception is thrown:
System.Collections.Generic.KeyNotFoundException: The given key was not present in the dictionary.
Any way to define a Content Type in SharePoint 2007 that references Business Data? My end solution would look something like...using a content type template, a user would select business data from our LOB system. Using fields in word, map that business data directly into the Word fields. Viola, you end up with a document populated with data directly from you LOB system. Any thoughts on how to do this out of the box in SharePoint 2007?
I'd like to hear a response to the question above about Business Data in Content Types posed above...
This seems to be the whole point of having BDC columns, and using them in a Document Library...
This capability was available (Content Types with BDC Colums) prior to Beta2TR, and was removed.....
Thanks
I don't know if I have posted this question in the correct place, but when I import an application definition file on MOSS2007 it gives error
Could not create profile page for entity <MyEntityName>. The error is: Cannot create a new connection to closed web part "g_813e1ec9_c1ce_4d69_ab7d_3ef226d74bb8"
It does add the entity to the application but when I try to place this entity in a Business Data List webpart, it generates the same error on the server log file and does not allow me to use this entity.
Hi there
i am getting similar error as poseted by Rahim above. could someone reply if they know how to fix this error please?
thanks
ananth
Rahim / Ananth
Can you post/send your Applicatin Definiton File.
Nidhi
Nidhi, whats your email alias please?
In my Business Data Catalog/ Application dropdownlist the Change Settings not visible on dropdown list.
How can I enable it for my Business Data Catalog Applications ?
Is it possible to connect to PeopleSoft.
I attended a great webcast by Mike Fitz a few weeks ago. Included was a compelling demo showing SAP information displayed in MOSS 2007 through BDC.
Is Microsoft planning to share the metadata used for that demo? I try not to reinvent the wheel if I dont have too, and obtaining that XML metadata will cut through hours of xml tweaking.
LOB integration is a strong selling point for migration to MOSS 2007, and my clients are eager to pull SAP information into their SharePoint portal. Not to mention that SAP integration is a strong push at Microsoft!
Nicolas L.
This question was already asked twice, but not answered yet.
Has anyone an idea what to do with this exception:
A Metadata Exception was constructed in App Domain '/LM/W3SVC/352480820/Root-1-128164404474871104'. The full exception text is: The Property with name 'SsoApplicationId' is missing on the LobSystemInstance.
I've created the BDC file with BDC Meta Man. The other tool, MOSS BDC Meta Data Manager, doesn't work either for me!?
Has anyone ideas?
@Rahim / Ananth and Nidhi
I have exactly the same Problem. For AuthenticationMode I use "WindowsCredentials" and have added a "enterprise application definition" with the same ID as the value from the Proporty "SsoApplicationId" in my BDC xml file.
I can't understand the error. I also deleted all my colsed Web Parts on Sharepoint.
@Mel
I can feel whit you. i have the same problem
I also had this same issue until I read somewhere that using certain types of AuthenticationMode require SSO, thus the error because I assume you aren't using SSO. I got it working by using the AuthenticationMode of RevertToSelf and adding my SharePoint Service Account read access to the DB.
Hi, How does the BDC 'crawl' a web service. Is there an ability to specify a range of values for some sort of index field?
You can view my blogs for configuration
Hi, I have uploaded the Sample Amazon Web Service BDC to my SSP, however, when I try to get it out and display data in BDC web part, I was being throw an error stating that there is no BDC file uploaded.
Any idea?
Thanks.
Hi
We have a custom ADO.NET provider written for our datasource and I want BDC to use our data provider. Is there a way to specify custom ADO.NEt providers in BDc metadata.
I have a BDC application that works for authenicated users. I have not been able to get it working for anonymous users. What are the methods and best practices for enabling anonymous access to BDC data and BDC web parts?
i installed MOSS 2007, and also create SSP. but in Shared Services Administration page i cannot see Business Data Catalog section. is i am missing any thing??
Rajeev, you've likely installed the standard verion of MOSS 2007. Use an Enterprise key to enable the Enterprise features and your BDC functionality will show on the SSP administration page.
As promised during today's webcast: ( overview w/screen shots ) 1) Sample ADF template 2) BDC Metadata
I enabled BDC search functionality . but only the person with permission on bdc can able to see result of search not another user who has access on sharepoint site. is there any way in which without specifying permission on bdc user can see search result by using site access permission only.
Thanks Greg, Its work now
Thanks again !!
Has anyone written BDC App Def file for a WCF service??
Any help appreciated...thanks
I have created a web service which exposes some data from our time attendance application and we need to display this on our sharepoint portal page. Using BDC i have read that its possible to connect to web service and metalogix also provides some tool to create the xml defs .... is there any other tool and which is free ? or open source .... which will help me achieve it ?
and if i create a web part in vs 2005 consuming this service ? which way is effecient .. a c# web part or a bdc consuming ws?
Regards
Arif Habib Shadan
Hi
Can any one please help me. I am trying to connect to commerce server sql database which is on a different server from share point using BDC. When I try to connect i get the following error. Does any one have any idea about the following error. I tried all the four authentication methods....
Could not open connection using 'data source=Ztcfv0n1\comprod;initial catalog=CSharpSite_productcatalog;integrated security=SSPI;pooling=false;persist security info=false' in App Domain '/LM/W3SVC/1623789249/Root-1-128297584703937482'. The full exception text is: Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.
for creating the bdc application definition metadata for Database, you can use the tool downloadable from. It will really save a lot of time for you.
It seems that Business Data Catalog does not work with forms authentication and SharePoint SSO. Is there any workaround to use Business Data Catalog on a SharePoint site with forms authentication? Thanks.
Is it my company only? Or it is normal that managers refuse to use BDC to pull data from SQL Server because they already have another reporting system that they use to charge their customers. BDC will expose the data for the customer with no data reporting fees!
I was importing BDC xml files. After importing couple of the files it start throughing me the following error:
The current operation could not be completed. The job ID is not valid.
Can anyone please tell me what is going wrong hear?
Thanks
Tanzil
Where to get the BDC definition file for adventureworks sample?
Where are the tools/gui to create new BDC objects? Where is the how to create a BDC object, from scratch, step by step (do not just point users to the Adventureworks samples)?
Nice stuff, but it's easy enough for decent programmers to write their own "business data web parts", I have, and we just run WSS 3.0 without paying for MOSS.
That it actually does NOT import everything but just pulls the data in on demand? You probably just answered
i have the prob when connect with oracle 8.x
with key defined as varchar2 which i define in xml is System.String the return result is ora-12704.
How to give diffrent icons for CRM searc results using BDC.. Eg. if the serach result is from Account entity I need to give Acoount entity image along with the result
Hi there,
Ik have installed a trial versiol of Sharepoint MOSS 2007 but the Business Data Catalog is missing. I also isn't listed in de site features so I'm trying to find de dwp-file but with no succes. Can anyone tell me how I can enable the BDC or where I can download this file so I can import and install it.
Hay KristoFM,
have you created SSP?
BDC is avialable in SSP.
When I import an application definition file on MOSS2007 it gives error
Could not create profile page for Entity dbo.MVP_CandidateReviewProcessView. The error is: Cannot create a new connection to closed Web Part "g_9e9db8f4_1ccb_48bd_9962_f49baa277a5c".)
"The BDC enables easy and rapid integration between SharePoint Server and business applications and databases by leveraging an application or database connection"
You have GOT TO BE KIDDING!
I have spent over six months trying to expose a SIMPLE CUSTOMER LIST FROM A SIMPLE SQL TABLE with BDC. It just point blank refuses to do it.
Half the time the ADF XML fails to import for absolutely no fathomable reason. On the rare occasion it does import (sometimes within five minutes of failing to import on the previous attempt, despite no change being made to the ADF in the meantime), the web parts refuse to render any data and I get dumb errors with spelling mistakes in trace - like
Could not run query/stored proceedure 'SELECT [Customer Name], Address, PostCode, Phone, Fax, [Last Updated], Created FROM [dbo].[Customer] WHERE [Customer Name] LIKE @CustomerName' ...... The full exception text is: Must declare the scalar variable "@CustomerName".
(And yes the scalar variable IS declared.
When it does give a reason, it's complete garbage, eg
Could not run query/stored proceedure 'SELECT [Customer Name], CustomerRef, [Found In], [Listed As], Address, PostCode, Phone, Fax, [Last Updated], Created FROM [dbo].[Customer] WHERE [Customer Name] LIKE @CustomerName' using **** The full exception text is: Must declare the scalar variable "@CustomerName".
SQL Server, Access, and even a very ancient version of Excel are all very happily doing what I want, when I want, with the exact same data, and all manage to do it without complaining.
I have even tried using those automation tools to generate the ADF. My best result so far imports OK, but do the web parts work with the data? No.
Did some numpty at Microsoft put in a statement in the code, to the effect of, "IF NOT USING ADVENTUREWORKS THEN THROW AN EXCEPTION"?
Come the revolution, whoever came up with this overcomplicated, flakey option which apparently can't even cope with a stored procedure that every other Microsoft product seems able to execute without any complaining, will be first against the wall.
Why BDC is a big fail:
1. If you have approvals on in a document library and refresh the BDC column by using the hurricane icon all documents are set back to pending.
2. If you have versioning turned on in doc lib doing refresh will increase version and affect last modified.
3. If you have on change workflows and you refresh - your items all just "changed" watch your workflows freak out.
This is dumb.
Very nice Resources!!
Thanks a lot!!
-Edison,NJ
Here is the step by step walkthrough of your first BDC example..
How do I call BDC actions from Infopath? | http://blogs.msdn.com/sharepoint/archive/2006/04/18/578194.aspx | crawl-002 | refinedweb | 2,940 | 63.8 |
By: Edwin Sarmiento | Updated: 2008-05-20 | Comments (4) | Language Integrated Query LINQ
Problem
Language-Integrated Query (LINQ) is a groundbreaking innovation in Visual Studio 2008 and the .NET Framework version 3.5 that bridges the gap between the world of objects and the world of data. As LINQ is part of the development enhancements in SQL Server 2008, how can I have an understanding of how it works and how I can use it in other areas of administration, not just SQL Server?
Solution
Let's start off by explaining LINQ. LINQ is a codename for a project which is a set of extensions to the .NET Framework that encompasses language-integrated query, set and transform operations. It extends C# and VB with native language syntax for queries and provides class libraries to take advantage of these capabilities, available only in .NET Framework 3.5. For developers who write code that regularly access a recordset, this means a lot. The fact that queries are usually expressed in a specialized query language for different data sources makes it difficult for developers to learn a query language for each data source or data format that they must access. This is what LINQ is all about. It simplifies data access by providing a consistent model for working with data across various kinds of sources and formats. In LINQ, data is translated into objects, something that developers are more comfortable with working . Understanding LINQ will give us an idea of its capabilities and its benefits
Create a simple LINQ project
Let's start by creating a simple console project using the C# language in Visual Studio 2008. You can also download the free Visual C# 2008 Express Edition from the MSDN Download Center. Make sure you select .NET Framework 3.5 from the target framework drop-down menu.
This will open up your Program.cs file. Notice that by simply creating a project that targets the .NET Framework 3.5 automatically adds a using directive for the System.Linq namespace as this is already a part of the System.Core assembly. The System.Linq namespace provides classes and interfaces that support queries that use LINQ. We will start with this to understand the basics of LINQ.
Let's start writing some code inside the static void Main(string[] args):
We'll examine the basic components of a LINQ query. Any LINQ query consists of three distinct actions. These are obtaining the data source, creating the query and executing the query. The first thing that we need to do is to have a data source. In this case, it's an array of strings which supports the generic IEnumerable(T) interface. This makes it available for LINQ to query. A queryable type does not require special modification to serve as a LINQ data source as long as it is already loaded in memory. If not, you would have to load it into memory so LINQ can query the objects. This is applicable to data sources like XML files. Next, is the query. A query specifies information to retrieve from the data source. This is similar to a SQL query which includes syntaxes like SELECT, FROM, WHERE, GROUP BY, etc. Looking at the code above, you'll notice that its not like your typical SQL statement as the FROM clause appeared before the SELECT clause. There are a couple of reasons for this. One, it adheres to the programming concept of declaring. This provides the appropriate properties and methods, making it easy for the developers to write their code.
Let's look at how the code was constructed. The from clause specifies the data source, in this case, the carNames collection. The where clause applies the filter, in this case, the list of all elements in the collection containing the letter 'e'.. As I mentioned, you would do it in previous versions of Visual Studio - specifying the DataSource property of the control to be the query variable and calling the DataBind() method.
Another area to highlight in the code is the use of the keyword var, which is a new keyword introduced in C# 3.0. What this does is it looks at the value assigned to the variable, then determines and sets the appropriate one. This concept is called type inference. From the code above, the query variable, query, appears to be an array of string. So the compiler will automatically assume that it is a variable of type IEnumerable. This is helpful if you do not know the variable type during runtime. But this does not mean that any type can be assigned to the variable after the initial assignment - something like a dynamic type - since .NET is a strongly typed language platform. This simply means that an object can take on a different type and the compiler can simply handle that. Assigning a different type to an already existing one violates the concept of polymorphism in object-oriented programming. Let's say you assign the value 12 to the query variable, query. This will throw a type conversion exception as the original type of the variable is a string collection.
Your output will look like this when you run your project in Visual Studio. You can press F5 or click on Debug - Start Debugging in Visual Studio
Next Steps
- You have seen how powerful LINQ queries are and how similar they are to SQL queries. There are other data access methods that LINQ implements including
- LINQ to SQL
- LINQ to XML
- LINQ to Objects
- LINQ to Entities
- LINQ to DataSets
- Give this example a try and change the query parameters so you can have a feel of how LINQ works.
- Have a look at the 101 LINQ Samples at the MSDN Visual C# Developer Center.
- For samples in Visual Basic, you can check the Getting Started with LINQ in Visual Basic site from MSDN
Last Updated: 2008-05-20
About the author
View all my tips | https://www.mssqltips.com/sqlservertip/1502/introduction-to-language-integrated-query-linq/ | CC-MAIN-2019-39 | refinedweb | 994 | 64.41 |
Kivy - Open source Python library for rapid development of applications
that make use of innovative user interfaces, such as multi-touch apps.
Cross platform.
Business Friendly.
GPU Accelerated
The graphics engine is built over OpenGL ES 2, using a modern and fast graphics pipeline.
The toolkit comes with more than 20 widgets, all highly extensible. Many parts are written in C using Cython, and tested with regression tests.
Be social !
Usage example
See how easy it is to create a simple Hello World application that just shows an actionable button:
from kivy.app import App from kivy.uix.button import Button class TestApp(App): def build(self): return Button(text='Hello World') TestApp().run()
Result
Download
The current version is 1.9.1, released on Jan 1st, 2016. Read the Changelog.
Installation instructions can be found here.
Android
Demo examples are published on the Android market:
Create you own APK by following the documentation about Packaging for Android
IOS
We haven't published our demos, but the winner of Kivy game contest #1 have been packaged for IOS: Deflectouch on iTunes.
Read the documentation about Packaging for IOS
Virtual Machine
A Virtual Machine with Android SDK and NDK and all other pre-requisites pre installed to ease apk generation:
- Kivy Buildozer VM
- Or select the Torrent
Source code
git clone
Take a look at our instructions on Installation of Development version.
Documentation
- Getting started into Kivy
- API Reference
- Or see the Wiki for a list of projects, snippets and more
Community Support
- Report a bug or request a feature in our issue tracker
- Ask your questions on the Kivy users forums
- Or send a mail at kivy-users@googlegroups.com
You can also try to contact us on IRC (online chat), but make sure to read the IRC rules before connecting. Connect to Webchat
Licenses
Kivy logo have been done by
Vincent Autin. The logo is placed under
All the screenshots on the website that came fromHe is an Information Systems engineer and working at Tangible Display, an NUI/innovative interactions company. He lives in France.
On IRC, he's tshirtman.
- He is freelance developer. He is from India.
On IRC, he's qua-non.
- He is a Python, Android, and Linux lover who thinks tablets will be everyone's computer in the future. He lives in Michigan.
On IRC, he's brousch.
- He is an independent game developer who is very interested in creating game development tools for Android. He lives in Utah.
On IRC, he's kovak.
- He is a postdoc in.
- Ryan PessaHe is a software developer in Kansas City. He enjoys being well-bearded. He has a cat.
On IRC, he's kived.
- Sebastian ArminHe is an independent developer from the Carpathian wilderness.
On IRC, he's dessant.
- Thomas-Karl PietrowskiPython developer and Debian/Ubuntu package creator, who publishes new, interessant projects or other software in his PPAs on launchpad.net
On IRC, he's thopiekar, but you should prefer contacting him by mail.
- was one of the initial authors of the framework. Besides his tremendous work on shaping Kivy, including his contributions to the graphics pipeline, he has also supported our annual contests.
- Christopher Denter (dennda) who was a core-developer in the first stage of the project. He contributed a lot by improving the documentation, implementing the Kivy extension system, pep8 hook and fixes, spelling provider. He also did 1 GSOC on the previous PyMT framework on implementing a virtual keyboard with a better navigation.
- Edwin Marshall (aspidites) helped with quality and documentation, as well as adding a few features.
- Jeff Pittman who helped with documentation and features, and have been a core-contributor for a long time, before moving to other adventures.
- Brian Knapp was a core-developer who created Kivy's interactive launcher and provided many valuable patches for the framework.
- Special thanks
- Mark Hembrow, who is one of our first sponsor, by giving us a Mac Mini. Which is currently | https://kivy.org/ | CC-MAIN-2016-44 | refinedweb | 659 | 57.98 |
I.
0:000> !dumpdomain
…
I:
- Process/Private Bytes
- Process/Virtual Bytes
- .NET CLR Memory/All counters – .NET heaps, time in GC etc.
- .NET CLR Loading/All counters – Loader Heap, App Domains, Assemblies etc..
Domain 2: 0xf4fa0
LowFrequencyHeap: 0x000f5004
HighFrequencyHeap: 0x000f505c
StubHeap: 0x000f50b4
Name: /LM/w3svc/1/root/MemoryIssues-1-127843748880710932
Assembly: 0x00100850 [System.Web, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a]
ClassLoader: 0x000f71d0
Module Name
0x0013b2b0 c:\windows\assembly\gac\system.web\1.0.5000.0__b03f5f7f11d50a3a\system.web.dll
Assembly: 0x0011d060 [System, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]
ClassLoader: 0x001012b8
Module Name
0x001496d0 c:\windows\assembly\gac\system\1.0.5000.0__b77a5c561934e089\system.dll
Assembly: 0x000f4638 [System.Xml]
ClassLoader: 0x0015efc8
Module Name
0x01f323b8 c:\windows\assembly\gac\system.xml\1.0.5000.0__b77a5c561934e089\system.xml.dll
Assembly: 0x01f336a8 [System.Web.RegularExpressions]
ClassLoader: 0x00165e60
Module Name
0x01f42120 c:\windows\assembly\gac\system.web.regularexpressions\1.0.5000.0__b03f5f7f11d50a3a\system.web.regularexpressions.dll
…
Assembly: 0x01f8e548 [tfwlio3y]
ClassLoader: 0x01f71c20
Module Name
0x01f8ce80 Unknown Module
Assembly: 0x01f3e338 [4fbkoonb]
ClassLoader: 0x01f73a08
Module Name
0x01f3d588 Unknown Module
Assembly: 0x01f31870 [d68fddbc]
ClassLoader: 0x01f8af48
Module Name
0x01f31068 Unknown Module
Assembly: 0x01f3d768 [ncmnosbh]
ClassLoader: 0x01f37600
Module Name
0x01f83f40 Unknown Module
Assembly: 0x01f30580 [5-we60ay]
ClassLoader: 0x01f84568
Module Name
0x01f84120 Unknown Module
Assembly: 0x01f30df8 [nymoqqrb]
ClassLoader: 0x01f30ec0
Module Name
0x01f9bc40 Unknown Module
Assembly: 0x01fb9be8 [ubg1lguq]
ClassLoader: 0x01fb9cb0
Module Name
0x01fb9f50 Unknown Module
Assembly: 0x01fbaaa8 [crn6r9yj]
ClassLoader: 0x01fbab70
Module Name
0x01fbae10 Unknown Module
Assembly: 0x01fbb988 [3ivrpfyn]
ClassLoader: 0x01fbba50
Module Name
0x01fbbcf0 Unknown Module
Assembly: 0x01f9c8a0 [ar9raomt]
ClassLoader: 0x01f9c968
Module Name
0x01f9cbe0 Unknown Module
Assembly: 0x01f9d748 [8po4sj1t]
ClassLoader: 0x01f9d810
Module Name
0x01f9dad8 Unknown Module
Assembly: 0x00170730 [k3hcfvrs]
ClassLoader: 0x001707f8
Module Name
0x01fa1578 Unknown Module
Assembly: 0x01f98628 [trggh3di]
ClassLoader: 0x01f986f0
Module Name
0x01f98990 Unknown Module
We can also get to these assemblies by running !DumpDynamicAssemblies (!dda) and find that we have 5315 dynamic assemblies in the /LM/w3svc/1/root/MemoryIssues-1-127843748880710932 domain.
0:000> !dda
Domain: ….
——————-
Domain:
——————-
Domain: DefaultDomain
——————-
Domain: /LM/w3svc/1/root/MemoryIssues-1-127843748880710932
——————-
Assembly: 0x1f8e548 [tfwlio3y] Dynamic Module: 0x1f8ce80 loaded at: 0xc391000 Size: 0x3200((null))
Assembly: 0x1f3e338 [4fbkoonb] Dynamic Module: 0x1f3d588 loaded at: 0xc551000 Size: 0x3200((null))
Assembly: 0x1f31870 [d68fddbc] Dynamic Module: 0x1f31068 loaded at: 0xc571000 Size: 0x3200((null))
…
Assembly: 0x1f27aba0 [6r53rdxs] Dynamic Module: 0x1f27af28 loaded at: 0x26501000 Size: 0x3200((null))
Assembly: 0x1f27ba28 [ijpwg6f0] Dynamic Module: 0x1f27be38 loaded at: 0x26511000 Size: 0x3200((null))
Assembly: 0x1f27c938 [c3rkovcm] Dynamic Module: 0x1f27cd48 loaded at: 0x26521000 Size: 0x3200((null))
Assembly: 0x1f280dc8 [dvctwism] Dynamic Module: 0x1f2811d8 loaded at: 0x26531000 Size: 0x3200((null))
Assembly: 0x1f281cd8 [xmeznvof] Dynamic Module: 0x1f2820e8 loaded at: 0x26551000 Size: 0x3200((null))
Assembly: 0x1f282870 [wdhhric4] Dynamic Module: 0x1f282c80 loaded at: 0x26541000 Size: 0x3200((null))
Assembly: 0x1f2727a0 [hbxmezcu] Dynamic Module: 0x1f272b90 loaded at: 0x26561000 Size: 0x3200((null))
Assembly: 0x1f273398 [euitep_p] Dynamic Module: 0x1f273bb0 loaded at: 0x26581000 Size: 0x3200((null))
Assembly: 0x1f2746d8 [uxqgnty0] Dynamic Module: 0x1f274b00 loaded at: 0x26591000 Size: 0x3200((null))
Assembly: 0x1f287320 [-oyygxql] Dynamic Module: 0x1f287730 loaded at: 0x265b1000 Size: 0x3200((null))
Assembly: 0x1f275040 [oibxqwve] Dynamic Module: 0x1f286b70 loaded at: 0x265a1000 Size: 0x3200((null))
Assembly: 0x1f287d70 [agqc38wk] Dynamic Module: 0x1f283650 loaded at: 0x265c1000 Size: 0x3200((null))
Assembly: 0x1f284128 [jbfxc_s7] Dynamic Module: 0x1f28b4d0 loaded at: 0x265d1000 Size: 0x3200((null))
Assembly: 0x1f284268 [qvqomxgp] Dynamic Module: 0x1f284640 loaded at: 0x265e1000 Size: 0x3200((null))
————————————–
Total 5,315 Dynamic Assemblies, Total size: 0x40e1600(68,032,000) bytes.
=======================================
Let’s take a closer look at one of them (Assembly address: 0x1f2727a0 Module address: 0x1f272b90).
We can dump the module with !dumpmodule
0:000> !dumpmodule 0x1f272b90
Name Unknown Module
dwFlags 0x00000080
Attribute PEFile
Assembly 0x1f2727a0
LoaderHeap* 0x000f5004
TypeDefToMethodTableMap* 0x26572038
TypeRefToMethodTableMap* 0x26572050
MethodDefToDescMap* 0x265720b4
FieldDefToDescMap* 0x2657212c
MemberRefToDescMap* 0x26572190
FileReferencesMap* 0x265722d8
AssemblyReferencesMap* 0x265722dc
MetaData starts at 0x265644d8 (0x142c bytes)
From the raw MetaData we can see that it is an XMLSerialization.GeneratedAssembly and it seems to have the types XmlSerializationReaderPurchaseOrder and XMLSerializationWriterPurchaseOrder defined.
0:000> dc 0x265644d8 0x265644d8+0x142c
265644d8 424a5342 00010001 00000000 0000000c BSJB…………
265644e8 312e3176 3233342e 00000032 00050000 v1.1.4322…….
…
26564bf8 00001388 00000000 03bc0001 00000000 …………….
26564c08 00000000 6f4d3c00 656c7564 6268003e …..
.hb
26564c18 7a656d78 642e7563 53006c6c 65747379 xmezcu.dll.Syste
26564c28 6d582e6d 7953006c 6d657473 6c6d582e m.Xml.System.Xml
26564c38 7265532e 696c6169 6974617a 58006e6f .Serialization.X
26564c48 65536c6d 6c616972 74617a69 576e6f69 mlSerializationW
26564c58 65746972 6d580072 7265536c 696c6169 riter.XmlSeriali
26564c68 6974617a 72576e6f 72657469 63727550 zationWriterPurc
26564c78 65736168 6564724f 694d0072 736f7263 haseOrder.Micros
26564c88 2e74666f 2e6c6d58 69726553 7a696c61 oft.Xml.Serializ
26564c98 6f697461 65472e6e 6172656e 41646574 ation.GeneratedA
26564ca8 6d657373 00796c62 536c6d58 61697265 ssembly.XmlSeria
26564cb8 617a696c 6e6f6974 64616552 58007265 lizationReader.X
26564cc8 65536c6d 6c616972 74617a69 526e6f69 mlSerializationR
26564cd8 65646165 72755072 73616863 64724f65 eaderPurchaseOrd
26564ce8 58007265 65536c6d 6c616972 72657a69 er.XmlSerializer
26564cf8 6c6d5800 69726553 7a696c61 61427265 .XmlSerializerBa
26564d08 00316573 63727550 65736168 6564724f se1.PurchaseOrde
26564d18 72655372 696c6169 0072657a 536c6d58 rSerializer.XmlS
26564d28 61697265 657a696c 706d4972 656d656c erializerImpleme
26564d38 7461746e 006e6f69 536c6d58 61697265 ntation.XmlSeria
26564d48 657a696c 6e6f4372 63617274 654d0074 lizerContract.Me
26564d58 79726f6d 75737349 50007365 68637275 moryIssues.Purch
26564d68 4f657361 72656472 69725700 5f316574 aseOrder.Write1_
26564d78 63727550 65736168 6564724f 64410072 PurchaseOrder.Ad
26564d88 73657264 72570073 32657469 6464415f dress.Write2_Add
26564d98 73736572 69725700 5f336574 656a624f ress.Write3_Obje
26564da8 4f007463 72656472 74496465 57006d65 ct.OrderedItem.W
26564db8 65746972 724f5f34 65726564 65744964 rite4_OrderedIte
26564dc8 6e49006d 61437469 61626c6c 00736b63 m.InitCallbacks.
26564dd8 74697257 505f3565 68637275 4f657361 Write5_PurchaseO
26564de8 72656472 74632e00 5200726f 31646165 rder..ctor.Read1
26564df8 7275505f 73616863 64724f65 52007265 _PurchaseOrder.R
26564e08 32646165 6464415f 73736572 61655200 ead2_Address.Rea
26564e18 4f5f3364 63656a62 65520074 5f346461 d3_Object.Read4_
26564e28 6564724f 49646572 006d6574 64616552 OrderedItem.Read
26564e38 75505f36 61686372 724f6573 00726564 6_PurchaseOrder.
26564e48 37316469 6574495f 6d614e6d 64690065 id17_ItemName.id
26564e58 4e5f3131 00656d61 5f386469 70696853 11_Name.id8_Ship
26564e68 74736f43 35646900 6574495f 6900736d Cost.id5_Items.i
26564e78 4f5f3664 72656472 74496465 69006d65 d6_OrderedItem.i
26564e88 5f363164 61727241 4f664f79 72656472 d16_ArrayOfOrder
26564e98 74496465 69006d65 545f3964 6c61746f edItem.id9_Total
26564ea8 74736f43 31646900 64415f30 73657264 Cost.id10_Addres
26564eb8 64690073 435f3331 00797469 31326469 s.id13_City.id21
26564ec8 6e694c5f 746f5465 69006c61 535f3364 _LineTotal.id3_S
26564ed8 54706968 6469006f 4c5f3231 31656e69 hipTo.id12_Line1
26564ee8 31646900 74535f34 00657461 5f346469 .id14_State.id4_
26564ef8 6564724f 74614472 64690065 5a5f3531 OrderDate.id15_Z
26564f08 69007069 535f3764 6f546275 006c6174 ip.id7_SubTotal.
26564f18 30326469 6175515f 7469746e 64690079 id20_Quantity.id
26564f28 555f3931 5074696e 65636972 32646900 19_UnitPrice.id2
26564f38 6574495f 6469006d 445f3831 72637365 _Item.id18_Descr
26564f48 69747069 69006e6f 505f3164 68637275 iption.id1_Purch
26564f58 4f657361 72656472 696e4900 73444974 aseOrder.InitIDs
26564f68 65724300 52657461 65646165 72430072 .CreateReader.Cr
26564f78 65746165 74697257 58007265 65526c6d eateWriter.XmlRe
26564f88 72656461 6e614300 65736544 6c616972 ader.CanDeserial
26564f98 00657a69 69726553 7a696c61 65440065 ize.Serialize.De
26564fa8 69726573 7a696c61 65670065 65525f74 serialize.get_Re
26564fb8 72656461 74656700 6972575f 00726574 ader.get_Writer.
26564fc8 6f63736d 62696c72 73795300 2e6d6574 mscorlib.System.
26564fd8 6c6c6f43 69746365 00736e6f 68736148 Collections.Hash
26564fe8 6c626174 65720065 654d6461 646f6874 table.readMethod
26564ff8 65670073 65525f74 654d6461 646f6874 s.get_ReadMethod
26565008 72770073 4d657469 6f687465 67007364 s.writeMethods.g
26565018 575f7465 65746972 6874654d 0073646f et_WriteMethods.
26565028 65707974 72655364 696c6169 7372657a typedSerializers
26565038 74656700 7079545f 65536465 6c616972 .get_TypedSerial
26565048 72657a69 79530073 6d657473 70795400 izers.System.Typ
26565058 61430065 7265536e 696c6169 5200657a e.CanSerialize.R
26565068 65646165 72570072 72657469 61655200 eader.Writer.Rea
26565078 74654d64 73646f68 69725700 654d6574 dMethods.WriteMe
26565088 646f6874 79540073 53646570 61697265 thods.TypedSeria
26565098 657a696c 53007372 65747379 65532e6d lizers.System.Se
265650a8 69727563 41007974 776f6c6c 74726150 curity.AllowPart
265650b8 6c6c6169 75725479 64657473 6c6c6143 iallyTrustedCall
265650c8 41737265 69727474 65747562 78626800 ersAttribute.hbx
265650d8 637a656d 006e0075 6f00736e 4e736900 mezcu.n.ns.o.isN
265650e8 616c6c75 00656c62 6465656e 65707954 ullable.needType
265650f8 69725700 754e6574 61546c6c 74694c67 .WriteNullTagLit
26565108 6c617265 6a624f00 00746365 54746547 eral.Object.GetT
26565118 00657079 746e7552 54656d69 48657079 ype.RuntimeTypeH
26565128 6c646e61 65470065 70795474 6f724665 andle.GetTypeFro
26565138 6e61486d 00656c64 65637845 6f697470 mHandle.Exceptio
26565148 7243006e 65746165 6e6b6e55 546e776f n.CreateUnknownT
26565158 45657079 70656378 6e6f6974 69725700 ypeException.Wri
26565168 74536574 45747261 656d656c 5700746e teStartElement.W
26565178 65746972 54697358 00657079 70696853 riteXsiType.Ship
26565188 4f006f54 72656472 65746144 69725700 To.OrderDate.Wri
26565198 6c456574 6e656d65 72745374 00676e69 teElementString.
265651a8 6564724f 49646572 736d6574 69725700 OrderedItems.Wri
265651b8 6e456574 656c4564 746e656d 63654400 teEndElement.Dec
265651c8 6c616d69 62755300 61746f54 6d58006c imal.SubTotal.Xm
265651d8 6e6f436c 74726576 536f5400 6e697274 lConvert.ToStrin
265651e8 72570067 45657469 656d656c 7453746e g.WriteElementSt
265651f8 676e6972 00776152 70696853 74736f43 ringRaw.ShipCost
26565208 746f5400 6f436c61 4e007473 00656d61 .TotalCost.Name.
26565218 74697257 74744165 75626972 4c006574 WriteAttribute.L
26565228 31656e69 74694300 74530079 00657461 ine1.City.State.
26565238 0070695a 576c6d58 65746972 72570072 Zip.XmlWriter.Wr
26565248 54657469 64657079 6d697250 76697469 iteTypedPrimitiv
26565258 74490065 614e6d65 4400656d 72637365 e.ItemName.Descr
26565268 69747069 55006e6f 5074696e 65636972 iption.UnitPrice
26565278 61755100 7469746e 694c0079 6f54656e .Quantity.LineTo
26565288 006c6174 74697257 61745365 6f447472 tal.WriteStartDo
26565298 656d7563 5400746e 654c706f 456c6576 cument.TopLevelE
265652a8 656d656c 6300746e 6b636568 65707954 lement.checkType
265652b8 61655200 6c754e64 6d58006c 6175516c .ReadNull.XmlQua
265652c8 6966696c 614e6465 4700656d 73587465 lifiedName.GetXs
265652d8 70795469 706f0065 7571455f 74696c61 iType.op_Equalit
265652e8 65670079 614e5f74 6700656d 4e5f7465 y.get_Name.get_N
265652f8 73656d61 65636170 6f6f4200 6e61656c amespace.Boolean
26565308 58734900 736e6c6d 72747441 74756269 .IsXmlnsAttribut
26565318 6e550065 776f6e6b 646f4e6e 6f4d0065 e.UnknownNode.Mo
26565328 6f546576 7478654e 72747441 74756269 veToNextAttribut
26565338 6f4d0065 6f546576 6d656c45 00746e65 e.MoveToElement.
26565348 5f746567 6d457349 45797470 656d656c get_IsEmptyEleme
26565358 5300746e 0070696b 64616552 72617453 nt.Skip.ReadStar
26565368 656c4574 746e656d 6c6d5800 65646f4e tElement.XmlNode
26565378 65707954 766f4d00 436f5465 65746e6f Type.MoveToConte
26565388 6700746e 4e5f7465 5465646f 00657079 nt.get_NodeType.
26565398 5f746567 61636f4c 6d614e6c 65670065 get_LocalName.ge
265653a8 614e5f74 7073656d 55656361 52004952 t_NamespaceURI.R
265653b8 45646165 656d656c 7453746e 676e6972 eadElementString
265653c8 72724100 45007961 7275736e 72724165 .Array.EnsureArr
265653d8 6e497961 00786564 64616552 45646e45 ayIndex.ReadEndE
265653e8 656d656c 5300746e 6e697268 7272416b lement.ShrinkArr
265653f8 54007961 6365446f 6c616d69 74656700 ay.ToDecimal.get
26565408 6c61565f 52006575 54646165 64657079 _Value.ReadTyped
26565418 6d697250 76697469 6f540065 33746e49 Primitive.ToInt3
26565428 72430032 65746165 6e6b6e55 4e6e776f 2.CreateUnknownN
26565438 4565646f 70656378 6e6f6974 6c6d5800 odeException.Xml
26565448 656d614e 6c626154 65670065 614e5f74 NameTable.get_Na
26565458 6154656d 00656c62 00646441 526c6d78 meTable.Add.xmlR
26565468 65646165 626f0072 7463656a 65536f54 eader.objectToSe
26565478 6c616972 00657a69 74697277 72007265 rialize.writer.r
26565488 65646165 6d580072 7265536c 696c6169 eader.XmlSeriali
26565498 6974617a 65476e6f 6172656e 43646574 zationGeneratedC
265654a8 0065646f 5f746567 65746e49 6c616e72 ode.get_Internal
265654b8 636e7953 656a624f 53007463 65747379 SyncObject.Syste
265654c8 68542e6d 64616572 00676e69 696e6f4d m.Threading.Moni
265654d8 00726f74 65746e45 65730072 74495f74 tor.Enter.set_It
265654e8 45006d65 00746978 65707974 00000000 em.Exit.type….
265654f8 00501b00 00720075 00680063 00730061 ..P.u.r.c.h.a.s.
26565508 004f0065 00640072 00720065 0d000100 e.O.r.d.e.r…..
26565518 00680053 00700069 006f0054 004f1300 S.h.i.p.T.o…O.
26565528 00640072 00720065 00610044 00650074 r.d.e.r.D.a.t.e.
26565538 00490b00 00650074 0073006d 004f1700 ..I.t.e.m.s…O.
26565548 00640072 00720065 00640065 00740049 r.d.e.r.e.d.I.t.
26565558 006d0065 00531100 00620075 006f0054 e.m…S.u.b.T.o.
26565568 00610074 1100006c 00680053 00700069 t.a.l…S.h.i.p.
26565578 006f0043 00740073 00541300 0074006f C.o.s.t…T.o.t.
26565588 006c0061 006f0043 00740073 00410f00 a.l.C.o.s.t…A.
26565598 00640064 00650072 00730073 004e0900 d.d.r.e.s.s…N.
265655a8 006d0061 0b000065 0069004c 0065006e a.m.e…L.i.n.e.
265655b8 09000031 00690043 00790074 00530b00 1…C.i.t.y…S.
265655c8 00610074 00650074 005a0700 00700069 t.a.t.e…Z.i.p.
265655d8 00412500 00720072 00790061 0066004f .%A.r.r.a.y.O.f.
265655e8 0072004f 00650064 00650072 00490064 O.r.d.e.r.e.d.I.
265655f8 00650074 1100006d 00740049 006d0065 t.e.m…I.t.e.m.
26565608 0061004e 0065006d 00441700 00730065 N.a.m.e…D.e.s.
26565618 00720063 00700069 00690074 006e006f c.r.i.p.t.i.o.n.
26565628 00551300 0069006e 00500074 00690072 ..U.n.i.t.P.r.i.
26565638 00650063 00511100 00610075 0074006e c.e…Q.u.a.n.t.
26565648 00740069 13000079 0069004c 0065006e i.t.y…L.i.n.e.
26565658 006f0054 00610074 0f00006c 006e0061 T.o.t.a.l…a.n.
26565668 00540079 00700079 41000065 00740068 y.T.y.p.e..Ah.t.
26565678 00700074 002f003a 0077002f 00770077 t.p.:././.w.w.w.
26565688 0077002e 002e0033 0072006f 002f0067 ..w.3…o.r.g./.
26565698 00300032 00310030 0058002f 004c004d 2.0.0.1./.X.M.L.
265656a8 00630053 00650068 0061006d 004d4700 S.c.h.e.m.a..GM.
265656b8 006d0065 0072006f 00490079 00730073 e.m.o.r.y.I.s.s.
265656c8 00650075 002e0073 00750050 00630072 u.e.s…P.u.r.c.
265656d8 00610068 00650073 0072004f 00650064 h.a.s.e.O.r.d.e.
265656e8 003a0072 003a003a 0054003a 00750072 r.:.:.:.:.T.r.u.
265656f8 003a0065 00522700 00610065 00360064 e.:..’R.e.a.d.6.
26565708 0050005f 00720075 00680063 00730061 _.P.u.r.c.h.a.s.
26565718 004f0065 00640072 00720065 00572900 e.O.r.d.e.r..)W.
26565728 00690072 00650074 005f0035 00750050 r.i.t.e.5._.P.u.
26565738 00630072 00610068 00650073 0072004f r.c.h.a.s.e.O.r.
26565748 00650064 00000072 b41e11ac 46920e8c d.e.r……….F
…
265658d8 01070339 0020041c 20045912 030e0e01 9….. ..Y. ….
265658e8 041c0000 1c010100 01022005 07051c1c ……… ……
265658f8 1c251202 00000104 00000000 0000492c ..%………,I..
… or we can save it to disk
0:000> !dda -save g:\blog 0x1f2727a0
Writing Dynamic modules to disk.
In which case it gets saved as hbxmezcu.dll (per above), and we can open it up in reflector for example and from there find out that it contains a class Microsoft.XML.Serialization.GeneratedAssembly/PurchaseOrderSerializer.
If we don’t pass a specific assembly address the –save switch will save all dynamic assemblies.
In fact, if we dump a few more of these we find that they look almost identical. They define the same classes…
So, what are these and why are there so many of them? It would seem that whatever they are we would at least only need one.
Bringing it all together
My code is a slight rewrite of an XmlSerializer sample from MSDN.
Searching my code for PurchaseOrder, I find this line of code in page_load of one of my pages
XmlSerializer serializer = new XmlSerializer(typeof(PurchaseOrder), new XmlRootAttribute(“”));
This would seem like a pretty innocent piece of code. We create an XMLSerializer for PurchaseOrder. But what happens under the covers?
If we take a look at the XmlSerializer constructor with Reflector we find that it calls
this.tempAssembly = XmlSerializer.GenerateTempAssembly(this.mapping, type, defaultNamespace, location, evidence);
which generates a temp (dynamic) assembly. So every time this code runs (i.e. every time the page is hit) it will generate a new assembly.
The reason it generates an assembly is that it needs to generate functions for serializing and deserializing and these need to reside somewhere.
Ok, fine… it creates an assembly, so what? When we’re done with it, it should just disappear right?
Well… an assembly is not an object on the GC Heap, the GC is really unaware of assemblies, so it won’t get garbage collected. The only way to get rid of assemblies in 1.0 and 1.1 is to unload the app domain in which it resides.
And therein lies the problem Dr Watson.
What is the solution?
The default constructors XmlSerializer(type) and XmlSerializer(type, defaultNameSpace) caches the dynamic assembly so if you use those constructors only one copy of the dynamic assembly needs to be created.
Seems pretty smart… why not do this in all constructors? Hmm… interesting idea, wonder why they didn’t think of that one:) Ok, the other constructors are used for special cases, and the assumption would be that you wouldn’t create a ton of the same XmlSerializers using those special cases, which would mean that we would cache a lot of items we later didn’t need and use up a lot of extra space. Sometimes you have to do what is good for the majority of the people.
So what do you do if you need to use one of the other constructors? My suggestion would be to cache the XmlSerializer if you need to use it often. Then it would only be created once.
For more info see
Some other constructs that exhibit the same problems
Temporary assemblies are also used for regular expressions, as well as script blocks in XSL Transforms. In the case of the script blocks I would suggest using XSLT extension objects or cache the transform.
Have fun, and don’t play with XmlSerialization at 2 am in the morning on a Saturday night:)
Is this issue the same or different in .NET 2.0?
Wow. Really frightening. I’m checking my code right now. Thanks.
I dont believe that this has changed in .NET 2.0, tracking assemblies with the gc and unloading them at garbage collection is a pretty large task given the implications it would have.
I believe the new way to do this type of work in .NET 2.0 is to use the SGEN utility to generate the necessary classes.
Then bring these classes into your solution so there is no need to generate a dynamic assembly.
Nice info about SGEN, definitely worth looking into
So why doesn’ t the documentation for the class explain this?
My guess is that it is because it is close to impossible to document everything, and this is an internal implementation detail. When it was developed I don’t believe that it was intended to be used this way (since we are talking about special cases in the constructor). This is why we create kb articles when we discover issues, and of course also why i blog about them:)
If you feel that it should be added to the original msdn content (which i think is a great idea) I would urge you to use the comment feature on the msdn topic so that the content developers can change it.
Oh, I see it is documented in the ‘about xmlserializer’ topic.
(Incidentally, we were burned by the xsl transform script variant of this issue)
Thank you for the great content you are sharing!
Very useful information. Thanks for writing this blog.
I suppose this is the same problem in BinarySerializer as well?
Awesome! Keep it coming.
In this post, Tess describes how using repeatedly calling:
XmlSerializer serializer = new XmlSerializer(typeof(PurchaseOrder),…
Very useful article. Can you tell me which forms of the Regex constructor cause this issue? Or does it just happen if you specify the "compiled" flavor of Regex?
I’ve found your posts about Memory debugging in .NET EXTREMELY useful! Thank you and keep posting away!
The following links to .NET resources have been collated over time with the assistance of colleagues. …
Awesome post, as usual! 🙂
One more trick that can help is to use -MT on the dumpmodule, like !dumpmodule -MT <da_addr> and look at the referenced types for that assembly. This can sometimes be easier to view than the dc output… 😉
A la première initialisation d’une instance de XmlSerializer pour un type, le constructeur déclenche
Igår var jag inne och debuggade en dump från ett projekt där man hade problem med en w3wp process som
Igår var jag inne och debuggade en dump från ett projekt där man hade problem med en w3wp process som
PingBack from
PingBack from
I would also be grateful for any additional detail on the Regex flavour of the problem.
Tess Ferrandez talks about a .NET memory leak , caused by using the default constructors other than,
Nice article. It ‘brings back good memories’ of using adplus!
I have not gone through rest of your blog.. but I would say.. the title needs to be changed…
If broken it is, fix it you ‘must’ … that sounds more yoda-ish 🙂
It was really exciting to see that so many people answered the .NET GC PopQuiz , especially seeing that
These are the articles (in no particular order) that I felt best showed a thorough use of the windbg
Great article!
We do a lot of webservice calls, and we have the same issues. Mostly shown as OutOfMem exceptions in the internal code of the XmlSerializer.Deserialize.
But since it is called by the framework code of the webservice classes, I cant really do any caching.
Do you have a suggestion for our situation?
Hi Nils,
I am not sure that you are looking at the same thing here as what is described in the article.
The article is about leaking dynamic assemblies when creating new xmlserializers, in the case of webservices you will only have one xmlserializer per type you are serializing so that should not amount to too many.
If you are seeing OOMs when you deserialize i see two reasons why this could happen
1. If each deserialization is only generating a small string/object your problem is that you have high overall memory consumption and you should investigate what is using up all the memory…
or
2. more likely, if it seems to happen frequently in Deserialize, you are sending a lot of data back and forth so that the data sent over or the deserialized data is huge and requires a new large object heap block that wont fit anywhere.
I would recommend that you run with GCFailFastOnOOM to see how much it is trying to allocate when it fails to see if this is some really big chunk, unless you fit into #1 where your memory consumption overall is big.
There are a couple of post on this blog on how to troubleshoot high memory issues that you might want to look into as well.
Thanks
Tess
一个.net开发的网站项目现在到了收尾阶段,但突然发现一个问题:网站访问量不高,占用内存却长地飞快,甚至2,3个小时内存使用就上涨到了1g多,非常郁闷。google了一下,查到了几篇相关文章。无法卸…
Hi Tess,
I’m debugging a slightly different beast that happens only on certain m/cs in our environment. First a quick background on the environments:
– all m/cs are running on Windows Server 2003, SP1 (Std Edition) IIS 6.0
– 4 GB memory
– Both .NET 1.1 & 2.0 installed
– number of hotfixes applied
Initial investigation lead me to believe that a possible memory leak in "private bytes" was specific to our web service implementation. So we eventually stubbed out all app specific code & still saw the private bytes increase steadily until the worker process recycled because it reached the "Maximum used memory" limit for the App Pool config.
To confirm the theory that app related stuff wasn’t the culprit, i then created a basic "HelloWorld" web service & set the app pool settings to recycle the worker process when the "Maximum used memory" reached 50MB. Ran the test… private bytes increased steadily and eventually the worker process recycled.
I re-ran this same test on a different m/c (same OS level, 2GB memory but a slightly different set of hotfixes applied to it). Here the HelloWorld app does NOT exhibit the leak (which is what i was expecting).
I’ve tried analysing by gathering minidumps, VA dumps, perf logs – the only thing that stands out between the leaking/non-leaking servers is the Heap size.
Here are some excerpts from the VADump for the leaking case (dumps were taken roughly 45secs apart):
Category Total Private Shareable Shared
Pages KBytes KBytes KBytes KBytes
Page Table Pages 84 336 336 0 0
Other System 39 156 156 0 0
Code/StaticData 3782 15128 1312 2736 11080
Heap 14184 56736 56736 0 0
Stack 79 316 316 0 0
Teb 33 132 132 0 0
Mapped Data 165 660 0 104 556
Other Data 3179 12716 12712 4 0
Total Modules 3782 15128 1312 2736 11080
Total Dynamic Data 17640 70560 69896 108 556
Total System 123 492 492 0 0
Grand Total Working Set 21545 86180 71700 2844 11636
Category Total Private Shareable Shared
Pages KBytes KBytes KBytes KBytes
Page Table Pages 97 388 388 0 0
Other System 51 204 204 0 0
Code/StaticData 3790 15160 1312 2756 11092
Heap 30298 121192 121192 0 0
Stack 79 316 316 0 0
Teb 33 132 132 0 0
Mapped Data 165 660 0 104 556
Other Data 3179 12716 12712 4 0
Total Modules 3790 15160 1312 2756 11092
Total Dynamic Data 33754 135016 134352 108 556
Total System 148 592 592 0 0
Grand Total Working Set 37692 150768 136256 2864 11648
Category Total Private Shareable Shared
Pages KBytes KBytes KBytes KBytes
Page Table Pages 106 424 424 0 0
Other System 60 240 240 0 0
Code/StaticData 3792 15168 1312 2756 11100
Heap 39342 157368 157368 0 0
Stack 80 320 320 0 0
Teb 34 136 136 0 0
Mapped Data 165 660 0 104 556
Other Data 3182 12728 12724 4 0
Total Modules 3792 15168 1312 2756 11100
Total Dynamic Data 42803 171212 170548 108 556
Total System 166 664 664 0 0
Grand Total Working Set 46761 187044 172524 2864 11656
The Heap pages show the increase while the GC Heap (Other Data) does not show any increase at all (this is what I see in the PerfMon logs as well). What/why is there a dramatic increase in the heap allocations (especially, private bytes)?
The other thing I see in the mini dumps is that the number of System.String objects & size is seemingly high in the m/c where the leak is happening whereas those values are much lower on the server where it isn’t leaking.
Can you shed some light on what else should I be looking at? Our production web servers are exhibiting this behavior and we’ve been unable to figure out why. Please help!
Thanks.
PS: btw, i’ve learned a lot from reading thru your blogs & without them, i wouldn’t have gotten the faintest clue on what to look for.
Not sure if this is relevant (perhaps, it is) – the Windows Server 2003 m/cs use the /3GB switch in the boot.ini file.
This is a problem with WCF using the XmlSerializer!!!!!! [System.ServiceModel.XmlSerializerFormatAttribute()]
Using Process Explorer, we see the temp assemblies growing unless we declare the client proxy as static. Unfortunately a lot of developers will be hit by this problem if Microsoft does not get a fix. The solution is not to use the DataContractSerializer (which we assume does not have the problem). We would have to re-tool all our classes.
This is a huge problem for our application which runs out of memory by the end of the day with 100 or so users. This bug does not make moving to WebServices easier – Microsoft should release a patch for this immediately!!!!
Hi Marty,
I haven’t really worked a whole lot with WCF so I don’t know the details regarding the problem you’re talking about but before going to deep into it I just want to see if it is a real problem or a percieved problem.
When you call a webservice from ASP.NET for example you will also create dynamic assemblies for xml serialization of the parameters but you will only create one per type/method so after initially you will create a few but once they are all initialized they will be reused since they are created with the default constructors. Are you saying that with this construct you keep creating new ones throughout the life of the process? I.e. if you call the same web service twice, will you create new assemblies?
If you do continually create new ones for each call to the ws, please post some sample code and I will take a look if time permits or start a case with support. If the 2nd call to a web service does not generate new assemblies your memory problem lies elsewhere. You will need to generate at least one new assembly per type you serialize otherwise serialization will not work…
Thanks
Tess
Hi Tess, thanks for the reply!!!
just a summary again of what we are experiencing… we are trying to get this to Microsoft.
We have experienced the documented XmlSerializer assembly leak problem reported by Microsoft as a problem with the .NET Framework 1.0, 1.1 and 2.0.
Specifically, we use WCF to generate client proxies to WebServices and we also instruct WCF to use the XmlSerializerFormatter and not the DataContractFormatter.
Because we rely on the XmlSerializerFormatter, we have no control over the WCF code that deserializes xml data streams into managed objects. The call to Deserialize from the Service Model exhibits the problem. The call to deserialize uses a constructor of the XmlSerializer that does not cache assemblies based on the type. And it is very easy to duplicate by using Process/App Domain Viewer – Current and Total Assemblies counts grow.
See stack trace below.
We have a workaround whereby we cache the client proxy classes but we feel that this is not an ideal solution long term.
Ideally, we would like a patch to the Service Model framework (WCF) that addresses the problem when invoking the XmlSerializer by caching the serializer in the assembly (as described) or better yet, a path to the XmlSerializer that implements caching in ALL constructors.
Stack Trace (Crude cut and paste – please forgive me).
qtnaxaft!Microsoft.Xml.Serialization.GeneratedAssembly.ArrayOfObjectSerializer1.Deserialize(System.Xml.Serialization.XmlSerializationReader reader = {Microsoft.Xml.Serialization.GeneratedAssembly.XmlSerializationReaderIPlanDesign}) + 0x40 bytes
System.Xml.dll!System.Xml.Serialization.XmlSerializer.Deserialize(System.Xml.XmlReader xmlReader = {Element, Name="a:OfferingCombination"}, string encodingStyle, System.Xml.Serialization.XmlDeserializationEvents events) + 0xa2 bytes
System.Xml.dll!System.Xml.Serialization.XmlSerializer.Deserialize(System.Xml.XmlReader xmlReader, string encodingStyle) + 0x21 bytes
System.ServiceModel.dll!System.ServiceModel.Dispatcher.XmlSerializerOperationFormatter.DeserializeBody(System.Xml.XmlDictionaryReader reader, System.ServiceModel.Channels.MessageVersion version, System.Xml.Serialization.XmlSerializer serializer, System.ServiceModel.Description.MessagePartDescription returnPart = Name={GetOfferingCombinationsResult}, Namespace="", Type={System.Void}, Index=0}, System.ServiceModel.Description.MessagePartDescriptionCollection bodyParts = Count = 1, object[] parameters = {Dimensions:[1]}, bool isRequest = false) + 0x63 bytes
System.ServiceModel.dll!System.ServiceModel.Dispatcher.XmlSerializerOperationFormatter.DeserializeBody(System.Xml.XmlDictionaryReader reader, System.ServiceModel.Channels.MessageVersion version, string action, System.ServiceModel.Description.MessageDescription messageDescription, object[] parameters, bool isRequest) + 0x137 bytes
System.ServiceModel.dll!System.ServiceModel.Dispatcher.OperationFormatter.DeserializeBodyContents(System.ServiceModel.Channels.Message message, object[] parameters, bool isRequest) + 0x95 bytes
System.ServiceModel.dll!System.ServiceModel.Dispatcher.OperationFormatter.DeserializeReply(System.ServiceModel.Channels.Message message, object[] parameters) + 0x198 bytes
System.ServiceModel.dll!System.ServiceModel.Dispatcher.ProxyOperationRuntime.AfterReply(ref System.ServiceModel.Dispatcher.ProxyRpc rpc = {System.ServiceModel.Dispatcher.ProxyRpc}) + 0x33 bytes
System.ServiceModel.dll!System.ServiceModel.Channels.ServiceChannel.HandleReply(System.ServiceModel.Dispatcher.ProxyOperationRuntime operation = {System.ServiceModel.Dispatcher.ProxyOperationRuntime}, ref System.ServiceModel.Dispatcher.ProxyRpc rpc = {System.ServiceModel.Dispatcher.ProxyRpc}) + 0xd8 bytes
System.ServiceModel.dll!System.ServiceModel.Channels.ServiceChannel.EndCall(string action, object[] outs, System.IAsyncResult result) + 0xde bytes
System.ServiceModel.dll!System.ServiceModel.Channels.ServiceChannelProxy.InvokeEndService(System.Runtime.Remoting.Messaging.IMethodCallMessage methodCall = {System.Runtime.Remoting.Messaging.Message}, System.ServiceModel.Dispatcher.ProxyOperationRuntime operation = {System.ServiceModel.Dispatcher.ProxyOperationRuntime}) + 0x45 bytes
System.ServiceModel.dll!System.ServiceModel.Channels.ServiceChannelProxy.Invoke(System.Runtime.Remoting.Messaging.IMessage message = {System.Runtime.Remoting.Messaging.Message}) + 0x81 bytes
mscorlib.dll!System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(ref System.Runtime.Remoting.Proxies.MessageData msgData, int type) + 0x273 bytes
Can/does the same problem occur if you implement iXmlSerializable in an ASP.Net application.
Hi Chad,
The only reason it would make the same problem occurr would be if someone called new XmlSerializer with one of the constructors listed above… just implementing iXmlSerializable would not cause an issue…
Hi Tess,
I’m currently working on a big project involving several WCF services. We have big performance and memory consumption problems for 3 of them.
These 3 services are actually designed to act as gateways for external partners Java webservices.
We used svcutil to generate the client proxy to call these services and we saw that WCF runtime is generating an on-the-fly XMLSerializer assembly *for each call* to the remote WS. As you described, we saw that unfortunately a new assembly is generated each time we call the same method.
We tried many things to solve this problem : SGEN generation, we also tried to used different option of the svcutil generation to change the type of Formatter(DataContractFormatter, XmlSerializerFormatter),…
But – as marty explainded in his comment- we’re also running out of memory during our load test with only 100 users.
We’re exploring this problem with the MS premium support because we think that our perf and memory leak problem is coming from this strange WCF behaviour but we’re running out of ideas and the problem is still there !
Do you have any news of Marty’s problem?
Do you know if a fix has been published?
Thanks a lot!
To be honest I haven’t worked with WCF enough to be able to give you any answers off-hand. For "normal" webservices you would only create one per type, but perhaps WCF uses one of the non-standard constructors… One idea, in order to figure out if this is the case, would be to run a test in a controlled environment where you set a breakpoint with !bpmd -md <methoddesc> on the different XmlSerializer..ctor (constructors)
you can get the method descriptors by doing !dumpheap to find the methodtable for XmlSerializer and then doing !dumpmt -md on that methodtable.
At least that could tell you why you create new assemblies all the time.
We have the exact same issue… new assemblies getting created on every call. We have contracts for every object but am still seeing many references to:
[System.Xml.Serialization.XmlElementAttribute(IsNullable=true)]
Is there any update on a solution?
For us it appears the issue is solved with .NET 2.0 SP1 that was released on 12/27. We are confirming with more tests but it appears fixed. And performance appears 100 times faster.
i’m not good at english 🙁
i happened Marty’s problem and solved it ! we use wcf to call services ,and so of the service are .asmx . And the proxy of the .asmx service is the problem . It serializ and deserialize every time that we call it. so the cup and memory are very high.
just do :
we use a object pool that cache the client proxy.
code:
public interface IWcfClientPoolElement<TClient> where TClient : IWcfClientPoolElement<TClient>, new()
{
WcfClientPool<TClient> Pool
{
get;
set;
}
}
public class WcfClientPool<TClient> where TClient : IWcfClientPoolElement<TClient>, new()
{
private Stack<TClient> m_container;
private int m_currentCount;
private int m_capacity;
private object m_mutex;
public WcfClientPool() : this(20) { }
public WcfClientPool(int capacity)
{
this.m_capacity = capacity;
this.m_container = new Stack<TClient>();
this.m_mutex = new object();
this.m_currentCount = 0;
}
public TClient GetClient()
{
while (true)
{
lock (this.m_mutex)
{
if (this.m_container.Count > 0)
{
return this.m_container.Pop();
}
else if (this.m_currentCount < this.m_capacity)
{
this.m_currentCount++;
TClient client = new TClient();
client.Pool = this;
return client;
}
else
{
Thread.SpinWait(100);
}
}
}
}
public void Restore(TClient client)
{
lock (m_mutex)
{
this.m_container.Push(client);
}
}
}
and
client code like this :
[System.Diagnostics.DebuggerStepThroughAttribute()]
[System.CodeDom.Compiler.GeneratedCodeAttribute("System.ServiceModel", "3.0.0.0")]
public partial class I_UserSoapClient : System.ServiceModel.ClientBase<I_UserSoap>, I_UserSoap,System.IDisposable,
Temporary.IWcfClientPoolElement<I_UserSoapClient>
{
private Temporary.WcfClientPool<I_UserSoapClient> pool = null;
void System.IDisposable.Dispose()
{
Pool.Restore(this);
}
…..
if you have any question ,please add me msn : flyingchen@live.com
we can use below way to use the proxy :
using (I_UserSoapClient objClient = pool.GetClient())
As you already know, i spend my days analyzing dumps for customers, and more often than not I don’t have
First off, I’m so glad that I found this post. It makes complete sense as to what is happening with my code.
I’ve also run into the same behavior that Marty noted about WCF services creating temporary serialization assemblies. I created a Windows Service that runs a polling thread to pickup messages from a WCF service. The polling thread uses a WCF client to invoke the remote service to get a list of available messages. The WCF client is then used to download each message. Everytime the polling thread executes, it creates a WCF client instance. This results in a temporary assembly being generated each time for the types used by the client.
We are using the .NET 3.0 Framework. I have not tried using .NET 3.5 to see if the issue has been fixed, as 3.5 is not available in our production environment.
It seems my only solution is to either cache WCF client instance, which won’t solve the problem if the client has to be closed. This will only minimize the problem, not solve it. The other solution is to create an AppDomain, load the WCF client into that and execute the client methods from there. The AppDomain can then be unloaded when necessary. This seems like a bit of a kludge to me.
I’m really glad I found this article though. I should be able to get some sleep now that I know what the problem is.
Hi Joel,
I have to say that i haven’t played at all with WCF services and usually WCF issues are handled by a different support team which is why I haven’t run into this before. Now you make me curious, I think I’ll have to look into this and see how it works.
Tess
The problem has me really confused because I’m not doing anything special with the WCF client instances.
The WCF client code was created with svcutil.exe and the resulting client code file (C#) imported into the project. I couldn’t use "Add Service Reference" from the IDE because the VS2005 install that I’m using on the R&D box does not have the Orcas extensions. The WCF service interface was annotated with the XmlSerializerFormatAttribute class.
Anyhow, while debugging the Windows Service, everytime I instantiate the client, a temporary serialization assembly is loaded. It’s as if the internal XmlSerializer is not caching the types as noted in your article. Eventually the memory footprint grows into the hundreds of megabytes and that is obviously not something the server administrators are going to like. 🙂
At first I thought this might be an issue with the way I had designed the program. Essentially, the Windows Service is downloading the XML documents from a WCF service that I created to interacts with an ebMS product called Hermes (which is an open-source Java-based ebMS application.) My Windows Service checks each configured partnership for messages by queuing up a command object using ThreadPool.QueueUserWorkItem. Therefore, the WCF client is instantiated and executed in thread-pool thread. I speculated that this might be the problem with the XmlSerializer not caching the types, but it turns out I was wrong. Even if I execute the command object on a non-threadpool thread, the program still exhibits the same behavior.
I’m in the process of recreating the code under VS2008 and .NET 3.0 on my own personal development machine, to see if, for whatever reason, there is an issue with the VS2005/.NET 2.0/3.0 install on the R&D box.
Thank you again for your interest.
Feel free to send me a repro offline if you want…
I can’t promise that I will look at it or that I will have any insightful information if I look at it because of time constraints and because, as I mentioned, I’ve haven’t played much with WCF.
Still I am very curious so feel free to send it and I’ll take a look if time permits.
public class serv : System.Web.Services.WebService
{
public struct ee
{
public string[] foo;
public int bar;
}
[WebMethod]
public ee HelloWorld()
{
ee eee = new ee();
eee.foo = new string[] { "test1", "test2" };
eee.bar = 434;
return eee;
}
}
If i invoke with HTTP POST (/service.asmx/HelloWorld) it returns a XML as it should BUT it adds 100k to memory everytime it’s called and in the dump there are eventually thousands of types of
xml.serialization.generatedassembly.arrayofstringserializer
which is adding up to 1.2gb of memory
How can i implement cache if i’m not actually calling XMLSerializer?
Thanks!
typically a webservice call would only generate one dynamic assembly since it would be using the new xmlserializer(type) but perhaps there is something going on here with the struct.
I am on easter vacation at the moment so I dont have access to all my tools but it sounds very interesting so I will definitely look into it. If I find something I will probably post it as a separate post.
Thanks
Tess
Would there be a way to configure the .NET sampling profiler to profile everytime an assembly is loaded? That way we could find out which calls are causing it to happen.
I dont know if you can configure the .net sampling profiler, not exactly sure which tool you are referring to, but you can either a) use debug diag with leaktracking turned on, which will give you stacks for when you’re allocating memory on the loaderheap, or b) you could potentially create an adplus script that broke on load module and capture the stacks…
Hi Tess
This is a really nice article. Thanks for putting it online. We have a similar memory leak issue for which I am currently working with Microsoft engineers to get it resolved. On intilal investigation, they say there are lots of dynamic assemblies of XmlSerializer. As per MS, we should use XmlSerializer(type) constructor to avoid loading new assemblies. I searched on whole solution and the only constructor I see being used is XmlSerializer(type) and .NET still seems to load a new assembly for every instantiation. BTW, my application is in .NET 2.0. The MS engineer wants me to run sgen.exe on the project that contains entities which are serialized and then just put the xyz.XmlSerializer.dll in the bin directory of the web projects which using the XmlSerializer object. I am not really convince that this will fix the problem. Please recommend. Thanks
It’s a bit hard to comment on that just with this information… if you are not doing anything with any other constructors than XmlSerializer(type) then the issue is probably the amount and complexity of the items being serialized and in that case using sgen is usually one of the recommended ways to reduce the memory used for the serialization assemblies
This article was very helpful in my recent memory leak investigation. Thanks Tess!
Just wanted to thank you for this article. I’ve just spent several days trying to chase down a memory leak in my VB.NET application, and thanks to your article I’ve managed to solve it.
Ben.
Tess you are beautiful as you are smart.
I have a similar problem and solved it with XmlSerializerCache class.
It can be found here:
An article can be found here:
weblogs.asp.net/…/353435.aspx
Just wonderful. Thanks to Tess, my application's memory usages has came down to half.
Good article.
Excellent Information Thanks, Good information Help a lot to understand Memory Leak and Dynamic Assembly Create on XML Serialization
Thanks for the info.
Is there any reason why these temporary assemblies are always loaded in the DefaultDomain? I tried working around the issue of having tons of Microsoft.GeneratedCode assemblies, causing a huge memory consumption by creating a separate AppDomain to do the serialization and assembly generation in a separate AppDomain (which can be unloaded).
But I've noticed that the generated assemblies are all loaded in the DefaultDomain, rather than the AppDomain.CurrentDomain.
Feel like I could kiss you right now….you solved the memory leak we have been having where our process which uses only 500 meg of Managed Memory quickly consumes 13 gig of total memory in one week ! | https://blogs.msdn.microsoft.com/tess/2006/02/15/net-memory-leak-xmlserializing-your-way-to-a-memory-leak/ | CC-MAIN-2016-40 | refinedweb | 7,213 | 58.08 |
Minimum Spanning Tree for Graph in C++
In this tutorial, we will learn about the Spanning Tree of the graph and its properties. We will also learn about the Minimum spanning tree for graph in C++ and its implementation using Prim’s algorithm and Kruskal’s algorithm.
We will take some examples to understand the concept in a better way.
Spanning Tree
Spanning tree is the subset of graph G which has covered all the vertices V of graph G with the minimum possible number of edges. Hence we say that a spanning tree doesn’t contain any loop or cycle and it cannot be disconnected.
For a disconnected graph, there will be no spanning tree possible because it is impossible to cover all the vertices for any disconnected graph.
So, for every connected and undirected graph has at least one spanning tree is possible.
Hence some properties of spanning tree:-
- Spanning tree has V-1 number of edges where V is the number of vertices.
- For a complete and undirected graph has maximum possible spanning tree for n number of vertices will be nn-2
- Spanning tree doesn’t have any loops and cycle.
Now see the diagram,
spanning tree
Weight of the spanning tree is the sum of all the weight of edges present in spanning tree.
Minimum spanning tree in C++
For weighted graph G=(V,E), where V={v1,v2,v3,…..} E={e1,e2,e3,e4………}
Minimum spanning tree is defined by a spanning tree which has minimum weight than all others spanning trees weight of the same graph.
Here we will learn about the two most important algorithms to find the minimum spanning the tree of graph G,
- PRIM’S algorithm.
- KRUSKAL’S algorithm.
Here we see the example of weighted Graph G and use the algorithm to find the MST.
Minimum Spanning tree.
First, we will focus on Prim’s algorithm.
Prim’s algorithm
Prim’s algorithm is a greedy approach method for minimum spanning tree which finds the local optimum path to obtain the global optimum solution.
The basic idea to implement the Prim’s algorithm for minimum spanning tree:-
- Initialise to choose a random vertex.
- Then select the shortest edges which connect new vertex and add it to MST(minimum spanning tree).
- Repeat step 2 until all vertex are not visited.
Let’s focus on pseudocode,
S={ }; //initialize spanning tree set with NULL P={starting vertex}; //contain the starting vertex Visit[ ] ; //initialize the visit array to false Visit[starting vertex]=true ; //make starting vertex visit true While(P!=V) //loop until all the vertex is not visited Select the least cost edge(u,v) where u belongs to P and v belongs to V-P ; Visit[v]=true ; S=S U {(u,v)} ; P=P U {v} ;
Now we will understand this algorithm through the example where we will see the each step to select edges to form the minimum spanning tree(MST) using prim’s algorithm.
Here we look that the cost of the minimum spanning tree is 99 and the number of edges in minimum spanning tree is 6.
In above diagram we take alphabet A,B,C,D,E,F,G for vertex which is similiar to 0,1,2,3,4,5,6 for vertex and we will see 0,1,2,3,4,5,6 in coding section.
Here we can see the code implementation of PRIM’S algorithm.
#include <iostream> #include<bits/stdc++.h> #include <cstring> using namespace std; // number of vertices in graph #define V 7 // create a 2d array of size 7x7 //for adjacency matrix to represent graph int main () { // create a 2d array of size 7x7 //for adjacency matrix to represent graph int G[V][V] = { {0,28,0,0,0,10,0}, {28,0,16,0,0,0,14}, {0,16,0,12,0,0,0}, {0,0,12,22,0,18}, {0,0,0,22,0,25,24}, {10,0,0,0,25,0,0}, {0,14,0,18,24,0,0} }; int edge; // number of edge // create an array to check visited vertex int visit[V]; //initialise the visit array to false for(int i=0;i<V;i++){ visit[i]=false; } // set number of edge to 0 edge = 0; // the number of edges in minimum spanning tree will be // always less than (V -1), where V is the number of vertices in //graph // choose 0th vertex and make it true visit[0] = true; int x; // row number int y; // col number // print for edge and weight cout << "Edge" << " : " << "Weight"; cout << endl; while (edge < V - 1) {//in spanning tree consist the V-1 number of edges //For every vertex in the set S, find the all adjacent vertices // , calculate the distance from the vertex selected. // if the vertex is already visited, discard it otherwise //choose another vertex nearest to selected vertex. int min = INT_MAX; x = 0; y = 0; for (int i = 0; i < V; i++) { if (visit[i]) { for (int j = 0; j < V; j++) { if (!visit[j] && G[i][j]) { // not in selected and there is an edge if (min > G[i][j]) { min = G[i][j]; x = i; y = j; } } } } } cout << x << " ---> " << y << " : " << G[x][y]; cout << endl; visit[y] = true; edge++; } return 0; }
OUTPUT
Edge : Weight 0 ---> 5 : 10 5 ---> 4 : 25 4 ---> 3 : 22 3 ---> 2 : 12 2 ---> 1 : 16 1 ---> 6 : 14
Now we will look at Kruskal’s algorithm.
Kruskal’s algorithm
Kruskal’s algorithm is also a greedy approach method for minimum spanning tree and similar to prim’s that which find the local optimum to find the global optimum.
There is little bit difference in Kruskal algorithm than prim’s algorithm is that in Kruskal’s algorithm we firstly sorted all edges based on weights of edges and further proceed for find MST.
The basic idea to implement the Kruskal’s algorithm for minimum spanning tree:-
- Firstly sorted the all the edges according to weigh of edges.
- Then pick the edges one by one in non-decreasing order and add selected edge in MST if it not produce the cycle.
- Repeat step 2 until all vetex not added in MST.
Let’s see the pseudo code,
S={ }; //contain the spanning tree P={ }; //contain the vertex E is set contain the edges in sorted order while(E is not empty) select edge(u,v) one by one check if it produces cycle or not //use the union mechanism to check if u and v have the same parent make cycle otherwise not make cycle if cycle does not produce then do S=S U {(u,v)}; P=P U {v};
Now we will understand this algorithm through the example where we will see each step to select edges to form the minimum spanning tree using Kruskal’s algorithm.
Here we look that the cost of the minimum spanning tree is 99 and the number of edges is 6.
Here we can see the code implementation for Kruskal’s algorithm.
#include <iostream> #include <vector> #include <utility> #include <algorithm> using namespace std; const int MAX = 1000; int id[MAX], nodes, edges; //array id is use for check the parent of vertex; pair <long long, pair<int, int> > p[MAX]; //initialise the parent array id[] void init() { for(int i = 0;i < MAX;++i) id[i] = i; } int root(int x) { while(id[x] != x) //if x is not itself parent then update its parent { id[x] = id[id[x]]; x = id[x]; } return x; //return the parent } //function for union void union1(int x, int y) { int p = root(x); int q = root(y); id[p] = id[q]; } //function to find out the edges in minimum spanning tree and its cost long long kruskal(pair<long long, pair<int, int> > p[]) { int x, y; long long cost, minimumCost = 0; for(int i = 0;i < edges;++i) { x = p[i].second.first; y = p[i].second.second; cost = p[i].first; if(root(x) != root(y)) { minimumCost += cost; cout<<x<<" ----> "<<y<<" :"<<p[i].first<<endl;//print the edges contain in spanning tree union1(x, y); } } return minimumCost; } int main() { int x, y; long long weight, cost, minimumCost; init(); cout <<"Enter Nodes and edges"<<endl; cin >> nodes >> edges; //enter the vertex and cost of edges for(int i = 0;i < edges;++i) { cout<<"Enter the value of X, Y and edges"<<endl; cin >> x >> y >> weight; p[i] = make_pair(weight, make_pair(x, y)); } //sort the edges according to their cost sort(p, p + edges); minimumCost = kruskal(p); cout <<"Minimum cost is "<< minimumCost << endl; return 0; }
OUTPUT
Enter Nodes and edges 7 9 Enter the value of X, Y and edges 0 5 10 Enter the value of X, Y and edges 5 4 25 Enter the value of X, Y and edges 4 3 22 Enter the value of X, Y and edges 3 2 12 Enter the value of X, Y and edges 2 1 16 Enter the value of X, Y and edges 1 0 28 Enter the value of X, Y and edges 1 6 14 Enter the value of X, Y and edges 6 4 24 Enter the value of X, Y and edges 3 6 18 0 ----> 5 :10 3 ----> 2 :12 1 ----> 6 :14 2 ----> 1 :16 4 ----> 3 :22 5 ----> 4 :25 Minimum cost is 99
Important points:
- Kruskal’s algorithm is prefered when the graph is sparse or when the graph contain the less number of edges .
- Prim’s algorithm is prefered when the graph is dense or when the graph contain the more numbers of edges.
Application of minimum spanning tree:-
- For network designing.
- To perform the clustering analysis.
- Help in traveling salesman problem to find the minimum path to cover all city.
You may also learn,
Solution of N-Queen problem in C++ using Backtracking | https://www.codespeedy.com/minimum-spanning-tree-for-graph-in-cpp/ | CC-MAIN-2019-43 | refinedweb | 1,638 | 58.55 |
This article walks you through how to create a simple React app using TypeScript and a Material Design framework.
Prerequisites
This article assumes the following:
- You are editing with Visual Code
- You have nodejs, npm and npx installed
- You are testing using the Chrome browser
- When I say "app" I am referring to your code running in a browser
This article was tested on a Mac and may or may not need a few steps adjusted for working on Windows or Linux.
Step 1. Install create-react-app
Make sure that you have the latest version of the create-react-app by uninstalling any previous version first.
Note that the $ indicates a terminal window prompt and should not be included in the command:
$ sudo npm uninstall -g create-react-app
Now install the latest version:
$ sudo npm install -g create-react-app
Step 2. Run create-react-app for TypeScript
Use the create-react-app to generate a project to start from.
- Open up a terminal window
- Change to a root project folder (example: cd ~/projects)
$ npx create-react-app material-101 --template typescript $ cd material-101 $ code . $ npm start
Step 3. Edit the title
To change the title that appears in the browser tab, do the following:
- Edit public/index.html
- Replace the content of the title tag with “Material 101”
- Save the file
- Toggle back to the browser
- Verify that the title in the tab has been updated (if not, refresh the browser)
Open up the terminal window panel inside of Visual Code:
- On a Mac press: Control ~
Step 4. Install a Material Design framework
React by itself does not have everything that you need to create apps using Material Design. Here are the steps for adding a framework to your project that we will use below:
In the terminal panel:
$ npm install --save @mui/material @mui/icons-material $ npm install --save @emotion/react @emotion/styled
Step 5. Add a TSX stylesheet
The create-react-app app installed a regular CSS stylesheet. But to work with the framework you need to create a second TSX (TypeScript + React) stylesheet. The styles are not defined using a CSS format, but instead a TSX format.
- Create a new file in the src folder: src/AppStyles.tsx
- Replace the contents with the code below and save the file:
export const AppStyles: any = { cardStyle: { width: '400px', margin: 'auto', marginTop: 20, textAlign: 'center', display: 'block', }, logoStyle: { marginTop: '20px', marginBottom: '20px', padding: '10px', border: '1px solid black', borderRadius: '50%' } }
Even though it looks a lot like CSS, notice the difference in how many values are contained within string quotes, etc. It is formatted using the JSON-like object syntax to compile as TypeScript code. This is for formatting the Material Design framework components.
Step 6. Create a card
In Material Design you can build your UI around the concept of cards. For this example the whole app will be within a card that is centered in the browser window.
- Edit src/App.tsx
- Replace the contents of the App function with the code below:
function App() { return ( <Card style={AppStyles.cardStyle}> <CardContent> <SmartToy style={AppStyles.logoStyle} <Typography gutterBottom Material 101 </Typography> </CardContent> </Card> ); } export default App;
- You should see a series of underlined errors
- Wave your mouse over each error and select the Quick Fix option
- Usually pick the first option
- You may find that not all errors can be corrected this way
- If your editor is not setup for this I will provide instructions for doing things manually below
The top of your file should now look like this. If it does not, manually edit it to look like this:
import React from 'react'; import logo from './logo.svg'; import './App.css'; import { Card, CardContent, Typography } from '@mui/material'; import { SmartToy } from '@mui/icons-material'; import {AppStyles} from './AppStyles';
You can remove the import logo line.
- Save the file
- Verify in the browser that you can see the start of your app and that there are no errors
- If you don't see the card, refresh your browser
Step 7. Add a variable using React Hooks
For this example you will need the latest version of React that supports React Hooks.
- Add this line above the return statement:
const [userName, setUserName] = useState('');
- The useState function should show an error
- Add a Quick Fix which will change the React import line to include it:
import React, { useState } from 'react';
Step 8. Add a change handler
I am going to show you how to create a text edit control to manage entering the user name. But for the form to work you are going to need to first define a function to handle changes as you edit.
- Add this code below the useState line:
const handleUserNameChange = useCallback((event) => { setUserName(event.target.value); }, []);
- Add a Quick Fix or manually add useCallback to the import React line at the top of the file which should now look like this:
import React, { useCallback, useState } from 'react';
Step 9. Add second CardContent block
To add a text field I am going to have you put it in another CardContent block. This is a way of organizing your card into smaller chunks and groups. I could also have made it another card which would be a child of the first card and have a different format. But for now I am keeping the design very simple.
- Right after the first CardContent block, insert this code with a second block into src/App.tsx:
<CardContent> <FormControl style={{ width: '50%' }} > <TextField id="userName" name='userName' label="Your name" value={userName} onChange={handleUserNameChange} </FormControl> </CardContent>
- Quick Fix the errors which should update the material import line to look like this:
import { Card, CardContent, FormControl, TextField, Typography } from '@mui/material';
- The id, name and value fields must all match the name of your variable (userName)
- The onChange handler should reference your handler function
- Save the file
- In the browser make sure that you can type in your name
Step 10. Create card actions
When you create buttons using the framework, the standard is to put them into a CardActions block.
Add the code below right after the last CardContent block:
<CardActions> <Tooltip title="Click to say hello"> <span> <Button color='primary' disabled={userName.length === 0} <span> <Button color='secondary' disabled={userName.length === 0} variant="outlined" onClick={handleClickYo} key={2}> Say Yo </Button> </span> </Tooltip> </CardActions>
As alway, fix any errors with Quick Fix.
The code wraps two buttons in the framework Tooltip tag. This is how to create a message that appears when the user waves the mouse over each button.
Each buttons disabled property checks the length of userName. If the length is zero, they will be disabled. To enable them, just start typing in a user name.
The reason the buttons are wrapped in spans is a quirk of the framework. The Tooltip requires at least one child component to be enabled (like span) or it logs an error to the browser console.
The variant defines the style of the button. Consult the framework documentation for other variants.
The onClick handlers will be defined below.
Step 11. Add the handler methods
Below the handler method for the name change, add these handlers for the buttons above:
const handleClickHello = useCallback((event) => { alert(`Hello ${userName}!`) }, [userName]); const handleClickYo = useCallback((event) => { alert(`Yo ${userName}!`) }, [userName]);
It is important that you put userName in the brackets at the end of each callback. Otherwise the username will be blank in the alert box.
When you are done, make sure that you have saved all files and test your work!
Step 12. Test your work!
- Refresh the browser
- Notice that the buttons are gray and disabled until you enter some text
- Enter some text and click a button
- Make sure that the name you entered comes up in the alert box
- Verify that the message matches the button you clicked
Step 13. Deploy a build
If you look at the package.json file you will see this script:
"build": "react-scripts build",
From a command line run it like this:
$ npm run build
That will generate a new folder called build in the root of your project.
If you are familiar with static hosting sites like surge.sh or Netlify you can use their services to publish your build folder to temporary test sites under their domains.
After you ran the build command it also provided instructions for test serving the files locally. I’ve modified the command to use sudo to avoid any rights issue installing an npm package globally:
$ sudo npm install -g serve $ serve -s build
When you run the serve command it should tell you what port it is using. In my case the port was 5000, so to test the served build I browsed to:
-
Where to find the source
You can find the source for the app featured in this article on GitHub:
Embedded Example
You can see the code in action using this embedded example:
Conclusion
In this article I walked you through how to get started using Material Design to build a React app using TypeScript.
You learned how to use a framework to build a simple card-based IU using React Hooks.
To learn more, read the documentation on the framework (link below) and experiment with the other components that it provides.
Be sure to also read the Material Design documentation to understand how to use the framework more effectively.
References
- MUI - Material UI framework
- Material Design
- Unveiling Material You - the next stage of Material Design
- Create React App (TypeScript) | https://scriptable.com/blog/react-material-design-tutorial-typescript | CC-MAIN-2022-40 | refinedweb | 1,590 | 58.21 |
Flappy bird
Hello everyone! I'm writing the game and at the beginning now
from scene import * import random import time A = Action() def Ground(parent): return SpriteNode('plf:Ground_Grass', parent=parent) def ColumnBlock(parent): return SpriteNode('plc:Brown_Block', parent=parent)))
''' I realized i didnt fully implemate Brushes and that might have been confusing. so ill post a new script using actual brushes. and as a bonus for showing you actually want to learn and not just copy/paste ill add some extra functionality for your 'cookbook'. i do apologise about the amount on notations lol there beingbalot of comments doesnt mean its a bd script. most is advice. i did change a few things mostly for your convenience and to smooth out animation. but first here is some notes on hat you have provided thus far.. ''' from scene import * import random import time A = Action() ''' i added these so you only have one place to change sizes instead of bouncing around sw ⇒ screen width sh ⇒ screen height bw ⇒ block width bh ⇒ block height so ⇒ screen orientation lw ⇒ level view port width lh ⇒ level view port height ''' def sw(): return get_screen_size()[0] def sh(): return get_screen_size()[1] def bw(): return 64 def bh(): return 96 def so(): return 1 if sw() < sh() else 2 def lw(): return 1024 if so() is 2 else 768 def lh(): return 1024 if so() is 1 else 768 def Ground(parent): return SpriteNode('plf:Ground_Grass', parent=parent, size=Size(bw(), bh())) def ColumnBlock(parent): return SpriteNode('plc:Brown_Block', parent=parent, size=Size(bw(), bh())) ''' not sure what happened here but as you probably know if called wil throw exceptions. im assuming this is why you dont actually use it. self.position = (self.size.w) first a regular Node object hase no size property. BUT it has a bbox wich is a scene.Rect with a width value. even with that said a poisition property much recieve a 2 Tuple (x, y) preferably a Point object but not neccesary. i.e: self.position = Point(x, y) let me know if your having a problem here it seem as your trying to use self to refer to your Scene class. and you can. but not through self. if your not sure how self work i can tell u jut say ''') #building the upper and lower ground ''' Generally a while loop is not a very big deal but in this case its not a great idea. i say this because what if for some unknown reason self.size.w is somthing crazy like 2389488754589433357. 🤓😅 unlikely i know but just for fun... now your game is hung up seeming like it is frozen to end user. now a for loop, at least i feel, is much safer in this. matter. but to be honest.. a premade pillar object would be best use. ill show this in next post. ''' while x < lw()+bw(): lower_tile = SpriteNode('plf:Ground_Grass', position=Point(x, 0), size=Size(bw(), bw()), anchor_point=(0.5, 0.0)) higher_tile = SpriteNode('plf:Ground_GrassCenter', position=Point(x, lh()), size=Size(bw(), bw())) x += bw() ground.add_child(lower_tile) ground.add_child(higher_tile) self.speed = 1 # Node.speed is defaulted to 1.0 ''' changed z_position to keep to and bottom over blocks ''' ground.z_position = 999 def update(self): self.time_passed += self.dt ''' good job with >= hen comparing floats in this manner never use == sence your calling self.column_moves() every frame we can manually move our blocks. (see method comment) # note: changing "5" to eithere a random int or an instance property will alow a more dynamin level generation. ''' if self.time_passed >= 5: self.add_column() self.time_passed = 0 self.column_moves() def add_column(self): lower = random.randint(1, 360) // bw() higher = 9 - int(lower) #building the lower part y = 35 ''' here you can get rid of the variable "lower" this should reduse memory use. for this game it doesnt matter but in future it could make a diference. use this instead.. for i in range(random.randint(0, 360) // 64): this insures the memory is released after forloop. also i would suggest moving block.anchor_point = (0.5, 0) and block.position = (self.size.w, y) to your ColumnBlock function like this def ColumnBlock(parent, x, y): return SpriteNode('plc:Brown_Block', anchor_point=(0.5, 0), position=Point(x, y), parent=parent) then in your for loop make your "i" var useful and get rid of "y" for i in range(lower): self.columns.append(ColumnBlock(self, self.size.w, i*64)) now we dont add a new block to memory then copy to list. we just add one to list and memory at same time. anchor_point is values 0 to 1 0 being botom left a 1 being top/right (1, 1) would be top right (0.5, 0.5) would be center. changed block.size.h to 1.0 ''' for i in range(1, lower+1): block = ColumnBlock(parent=self) block.anchor_point = (0.5, 0.5) block.size= Size(bw(), bh()) block.position = (lw(), bh()/3+i*bw()) block.z_position = -i self.columns.append(block) y += bw() #building the higher part y = lh() for i in range(1, higher+1): block = ColumnBlock(parent=self) block.anchor_point = (0.5, 0.5) block.position = (lw(), (lh()-i*bh()/2)) block.z_position = i self.columns.append(block) y -= bh() ''' great work on this part only sugestion would be to set your interp timing. probably TIMING_SINODIAL in this case cuz its mich smoother than linear. and this will go where you have "30/self.speed". and this oddly doesnt thow an excption and id avoid "0" for duration. 0.1 seems fast enough. or even 0.01 if needed. so somthing like ths. A.move_by(-self.size.w, 0.1, TIMING_SINODIAL) you also dont need remove. this is meant o remove objects not Actions. sinve you pass none or () its does nothing. one more thing the "self.speed is used to modify Action speed. but its automatically implemented. to if you want a 2x animation speed just set self.speed = 2 and everything else is done for you." you also should useba Node object to group the sprites this way theres no need for the for loop. you just move the parent and child nodes will follow. i changed the folowing method so that it stops the "jerking"bwhen it moves. they now move smoothly ::from update comment:: with that all said, lets be nice to our cpu and instead of runing an Action proccess lets just do some simple math. im not changing thisone but i would write the following: #note: i wouldnt do this for every block i would group the colums (◕_◕) #note: velocity would be set inside startup/__init__ and can be used # with powerups ☺️ for i in self.columns: i.position = Point( i.position[0] - self.velocity*self.dt, i.position[1]) ''' def column_moves(self): actions = [A.move_by(-self.size.w/2, 1, TIMING_SINODIAL)] for i in self.columns: i.run_action(A.sequence(actions)) ''' dont forget you can set some properties when calling the "run()" function scene.run() ## you know this one lol ## ⑴ scene ## alloud orientations. - FROM DOC: Important: The orientation - parameter has no effect on iPads starting with iOS 10 because - it is technically not possible to lock the orientation in an app - that supports split-screen multitasking. ## ⑵ orientation=DEFAULT_ORIENTATION ## frame rate controle FROM DOC: By default, the scene’s update() method is called 60 times per second. Set the frame_interval parameter to 2 for 30fps, 3 for 20, etc. ## ⑶ frame_interval=1 ## this game wont need but here you go FROM DOC: Set this to True to enable 4x multisampling. This has a (sometimes significant) performance cost, and is disabled by default. ## ⑷ anti_alias=False ## explains itself. i tent do make my own so i can control the color and position. ## ⑸ show_fps=False ## you shouldnt need this for this game if kept simple ⑹ multi_touch=True ) ''' run(Game()) # now to write that version two example for u. and again great work!)
1.
self.point=Point(w() + self.position.w, h() - self.size.h)If w() and self.size.w are the same here, then we got 2self.position.w? And why instead of sec arg don't just right 0??*
self.position.w should be self.position.x
translated
self.point=Point(w() + self.position.w, h() - self.size.h)means
Node's Position is a Point Object with x coordinate at screen width plus node's position x and y coordinate at screen height minus nodes hieght.
or in other words
x = screen_width + self.position.x
y = screen_hieght - self.size.hieght
self.position = Point(x, y)?
this was my fault lol i forgot you need to att a decorator from ui module
@ui.in_background def add_column(self): time.sleep(5)
this will run sleep on a different thread than your scene
@Karina now im off to write the game in the way i would. so you can compare. in no way is MY way the best or even the most correct. but it is the way i learned to implement different objects and functionality over the years.
@Karina just an update ive been a bit busy but i might have my project ready in the next 20 hours or so. it wont have a lot of comments but ill put a few. but im trying to use only descriptive naming so it should not beva problem.hope to see what you learned from this project 🤓🤓🤓?:
sw ⇒ screen width bw ⇒ block width lw ⇒ level view port width
top part and lower part , at least in this case, are the same size width. so both top and bottom parts get their size width from our
bw()Block Width.
Yes we do want to add AT LEAST block width past screen width.
Video Games are Feuled by immersion. You wand the End User to feel that our Level already exists.. so they must not see the objects pop into our level. so we do this just outside the view port. same for removinghe objects. we wait till they are at least Block Width negative position
if block.w < -block.w: del block.
Also in our position checks we want to compare
<for removing blocks and
>for adding blocks and not
<=and
>=. by doing this we get a 1 point buffer insuring our create and delete actions are not visible bo End User.:
“ 🤓😅 unlikely i know but just for fun... ”
the value
2389488754589433357this game is simple so this shouldnt happen. you coud see similar if you had bad execution order while using large open world type game environment...
since python has a
GILGlobal Interpreter Lock both the
forand the
whieloops are "blocking" and both can be used to achive hat you want.. The only prolem i can see making the
forloop better choice here is that a
whileloop has the possibility of being infinit if somthing doesnt go right where the
forloop has a predefined end point. insuring the game will not freeze forever do to your code withen.
did i cover what you were confused for these? im not sure i completey understook your questions. 🙃 | https://forum.omz-software.com/topic/6319/flappy-bird/33 | CC-MAIN-2022-33 | refinedweb | 1,855 | 76.01 |
Kenny Kerr is a software developer and entrepreneur. He is passionate about software and getting it done right. His new company recently launched Window Clippings 3, a screen capture tool designed to produce the highest quality screenshots.
This is the first article in a new series about C++ for Windows. I would have liked to start this series with a somewhat more fun topic but it needs to be said. There are certain prerequisites for starting a project on the right track and one of those is defining the approach for dealing with run-time errors. Many writers avoid talking about error handling when it comes to C++ because there are different approaches and differing views on how it should be done. I would like to say that you can use whatever approach suits you. I must however prepare the way for the remainder of this series and without a consistent way of dealing with errors subsequent articles won’t make sense.
As my approach relies on the Standard C++ Library and in particular the Standard Template Library the use of C++ exceptions is a given. The challenge then is to come up with a rational strategy for handling run-time errors. First I’ll describe what is to be done with exceptions and then how run-time errors in the Windows API are handled.
The first rule of exception handling is to do nothing. Exceptions are for unexpected run-time errors. Don’t throw exceptions you expect to catch. That also means you must not use exceptions for expected failures. If there are no exception handlers then Windows automatically generates an error report, including a minidump of the crash, as soon as an exception is thrown. This provides a perfect snapshot of the state of your application without unwinding the stack. This helps you to get as close to the source of the error as possible when you perform postmortem debugging. If you sign up with the Windows Quality Online Services (Winqual) then Microsoft will even collect these error reports, categorize them and provide them to you at no charge. This is tremendously valuable.
The next step is to clearly distinguish between fatal run-time errors, those that will crash your program, and run-time errors that are expected and that you will handle in your program so that it can continue to run. Some of these will be unique to your application but many will be common to most applications. Keep in mind that unexpected run-time errors that will be reported with exceptions indicate two things. Firstly they may indicate a bug in your application. You assumed the contents of a file is in a certain format when it is not. You expected a folder to exist when it’s actually missing. You sent a message to a window that’s already destroyed. Your algorithm dereferences an invalid iterator. And so on. Secondly they indicate problems outside of your control. Some other factor on the computer causes memory allocations in your application to fail. You fail to get the size of your window’s client area. You fail to write a value to the registry. These types of run-time errors typically point to a bigger problem. In both cases you don’t want your application to continue and an exception that results in an error report is the fastest way to bring your application down so that it doesn’t cause any further harm and lets you debug the problem when the error report arrives.
On the other hand many errors can and should be handled by your application. You may expect writing a value to the registry to succeed but you probably shouldn’t expect reading a value to succeed. Parsing text should be expected to fail. Creating a file may fail if the complete directory structure isn’t already present. And so on. In these cases using exceptions is not usually appropriate. It is usually simpler and more efficient to handle the error directly and as close to the source of the failure as possible.
Now let’s turn our attention to the many functions in the Windows API and the various ways they report run-time errors. The Windows API is unfortunately not very consistent when it comes to reporting run-time errors. You can think of it as having islands of consistency in a sea of inconsistency. There are four common types used for reporting errors explicitly using a return value.
BOOL
Many functions return a BOOL, a typedef of int, indicating either success or failure. It is best to compare the result against 0 rather than 1 since some functions only guarantee that the result will be nonzero upon successful completion. Some but certainly not all of these functions will provide a more descriptive error code that you can retrieve using the GetLastError function upon failure. The values returned by GetLastError can usually be found in the winerror.h header file excluding the HRESULT values.
LONG/DWORD
Different libraries use various typedefs of long and unsigned long including LONG, DWORD, NET_API_STATUS and others to return an error code directly. In most cases success is defined by a 0 return value. These functions typically directly return error codes defined in the winerror.h header file excluding the HRESULT values.
HRESULT/NTSTATUS
Many newer libraries as well as most member functions of COM interfaces use an HRESULT return value to report errors. Some functions that have roots in the Windows Driver Kit use an NTSTATUS return value. Both of these define identical packed error codes. It is not uncommon for values from winerror.h, possibly returned by GetLastError, to be packed inside an HRESULT before returning it to the caller. An HRESULT or NTSTATUS value can have multiple values indicating success and of course multiple values indicating failure. You need to check the documentation for any function that you’re using but in most cases a 0 return value indicates success with negative values indicating failure. Additional values greater than 0 may be defined to distinguish between different variations of success.
A small number of functions have a void return type. This either means that the function cannot fail, usually because whatever resources it relies on have already been allocated, or that any failure will be reported at a later stage. Other return values often imply an error given some sentinel value. This is common in functions that return a handle or pointer to a resource. You just need to read the documentation carefully to determine how to distinguish success from failure as it is not always true that 0 or nullptr alone indicate failure.
Finally for all those internal assumptions in your application there are assertions. Prefer to use static_assert for compile time validation. When that’s not possible use an ASSERT macro that is compiled away in release builds. I prefer to use the _ASSERTE macro from crtdbg.h as it stops the program in the debugger right on the offending line of code.
The listing at the end of this article includes error.h and error.cpp used for error handling in subsequent articles.
Although I avoid macros as much as possible, they remain the only solution for implementing debug assertions. I also define VERIFY and VERIFY_ mainly for checking the result of functions called within destructors where exceptions should not be used. It at least lets me assert the result of these functions in debug builds.
Namespaces are used to partition types and functions unless they’re specifically designed to work together. The error handling functions are however so fundamental that they reside in root kerr namespace. A few overloaded functions are provided for checking the return value of most functions in the Windows API. Argument matching and integral promotion rules help to funnel the various return types into the appropriate check functions. Specifically the int overload handles bool and BOOL return types, the long and unsigned long overloads take care of the rest. The check template function also comes in quite handy when you need to check for a specific value rather than the usual logical success or failure return values.
The check functions throw check_failed exceptions. The check_failed type includes a member that holds the error code passed to the check functions or returned by GetLastError. This comes in handy when you receive a minidump which contains the address of the exception and then allows you to easily find this error code. This can often be invaluable in determining the cause of the crash.
Why did I title this part “Putting bugs into buckets”? Well that’s because Windows Error Reporting categorizes error reports into what they call buckets. And that’s all for today and as always I’d love to hear what you think.
error.h
#pragma once
#ifdef _DEBUG #include <crtdbg.h> #define ASSERT(expression) _ASSERTE(expression) #define VERIFY(expression) ASSERT(expression) #define VERIFY_(expected, expression) ASSERT(expected == expression) #else #define ASSERT(expression) ((void)0) #define VERIFY(expression) (expression) #define VERIFY_(expected, expression) (expression) #endif
namespace kerr { struct check_failed { explicit check_failed(long result);
long error; };
void check(int); void check(long); void check(unsigned long);
template <typename T> void check(T expected, T actual) { if (expected != actual) { throw check_failed(0); } } }
error.cpp
#include "Precompiled.h" #include "error.h" #include <Windows.h> // for GetLastError
kerr::check_failed::check_failed(long result) : error(result) { // Do nothing }
void kerr::check(int result) { if (!result) { throw check_failed(GetLastError()); } }
void kerr::check(long result) { if (result) { throw check_failed(result); } }
void kerr::check(unsigned long result) { if (result) { throw check_failed(result); } }
(Republished here with permission from Kenny Kerr. The original post can be found here.) | http://blogs.msdn.com/b/cdndevs/archive/2010/12/09/the-new-c-for-the-new-windows-part-0-putting-bugs-into-buckets-by-kenny-kerr.aspx?Redirected=true | CC-MAIN-2015-35 | refinedweb | 1,613 | 56.15 |
Unity UI: Ease of Building User Interface Elements
Unity is a really great tool to build User Interfaces for games or for real-world applications. Everything you need you get out of the box in the Unity Editor. To create a UI element just add under Hierarchy window a new UI objects like text, image, button or other one depending on what you need.
Probably the most commonly used element of a user interface case is text. It is needed to display points earned during the game, instructions, or information about the end of the game. Just adding UI objects is easy, but to make them interactive you need to write scripts.
Let’s look at the below example of score system:
Remember to use namespace UnityEngine.UI, then connect the text object to _scoreText variable.
Assign a custom content to the _scroreText and updating is pretty simple like it is shown in above script.
Of course, the UI in Unity offers many more possibilities. To learn more, have a look at the Unity documentation at the following link:. | https://damiandabrowski.medium.com/unity-ui-ease-of-building-user-interface-elements-4b3ef5803c47?source=post_internal_links---------5---------------------------- | CC-MAIN-2021-43 | refinedweb | 179 | 63.7 |
news.digitalmars.com - c++.stlsoftDec 30 2007 Pantheios.COM 1.0.1 (beta 20) released (1)
Dec 29 2007 Pantheios: 1.0.1 (beta 74) Released (1)
Dec 29 2007 STLSoft 1.9.16 released (1)
Dec 28 2007 Pantheios: 1.0.1 (beta 73) Released (1)
Dec 27 2007 Request for pp feature (2)
Dec 27 2007 Pantheios: 1.0.1 (beta 72) Released: breaking changes! (1)
Dec 26 2007 Pantheios: 1.0.1 (beta 71) Released (1)
Dec 25 2007 Merry Christmas, and an STLSoft New Year! (6)
Dec 19 2007 STLSoft 1.9.13 released (1)
Dec 18 2007 STLSoft 1.9.12 released (1)
Dec 18 2007 Recls (2)
Dec 10 2007 Pantheios: 1.0.1 (beta 70) Released: breaking changes! (1)
Dec 10 2007 STLSoft 1.9.10 released (1)
Dec 09 2007 Pantheios: 1.0.1 (beta 69) Released (1)
Dec 07 2007 Pantheios: 1.0.1 (beta 68) Released (1)
Dec 06 2007 Question: Which versions of VC++ have merged MFC/ATL headers (3)
Dec 05 2007 Pantheios: 1.0.1 (beta 67) Released (1)
Dec 03 2007 Pantheios: 1.0.1 (beta 66) Released (1)
Dec 02 2007 Pantheios: 1.0.1 (beta 65) Released (1)
Nov 30 2007 [stlsoft][fixed_array_1d<>] size overhead (21)
Nov 27 2007 Pre-1.9.10 patch (stlsoft/filesystem/searchspec_sequence.hpp) (1)
Nov 27 2007 Pantheios: 1.0.1 (beta 62) Released (1)
Nov 26 2007 Pantheios: 1.0.1 (beta 61) Released (1)
Nov 26 2007 recls 1.8.11 on VC 7.1: makefile doesn't produce widechar binaries (+solution) (6)
Nov 25 2007 Pantheios: 1.0.1 (beta 60) Released (1)
Nov 23 2007 Pantheios: 1.0.1 (beta 59) Released (1)
Nov 23 2007 RFC: Pantheios roadmap: 1.0 -> 1.3 (1)
Nov 23 2007 Pantheios 1.0.1-beta58 and CComBSTR, comstl::bstr compilation failures on VC8 (with stlsoft 1.9.9) (2)
Nov 22 2007 Pantheios: 1.0.1 (beta 58) Released (1)
Nov 22 2007 STLSoft 1.10: Call for feature requests (1)
Nov 21 2007 Pantheios: 1.0.1 (beta 57) Released (1)
Nov 19 2007 Pantheios: 1.0.1 (beta 55) Released: breaking changes! (1)
Nov 18 2007 STLSoft 1.9.9 released (1)
Nov 18 2007 Pantheios: 1.0.1 (beta 54) Released: breaking changes! (1)
Nov 17 2007 VOLE 0.3.2 Released (1)
Nov 17 2007 Pantheios: Breaking change coming in 1.0.1 beta54! (1)
Nov 17 2007 STLSoft 1.9.8 Released: changes to support VOLE with Turbo C++ (1)
Nov 17 2007 Pantheios 1.0.1 beta 53 released (1)
Nov 16 2007 Pantheios 1.0.1 beta 51 released (2)
Nov 15 2007 STLSoft 1.9.7 released (1)
Nov 14 2007 Pantheios 1.0.1 beta 50 released (1)
Nov 13 2007 Pantheios 1.0.1 (beta 49) Released (1)
Nov 12 2007 Pantheios 1.0.1. beta 48 released (1)
Nov 11 2007 Pantheios 1.0.1 beta 47 released (1)
Nov 10 2007 shwild 0.9.6 released (1)
Nov 10 2007 Pantheios 1.0.1 beta 46 released (1)
Nov 09 2007 Pantheios 1.0.1 (beta 45) released (1)
Nov 02 2007 Pantheios 1.0.1 beta 44 released (1)
Nov 02 2007 Pantheios 1.0.1 beta 43 released (1)
Nov 01 2007 VOLE 0.3.1 released (1)
Oct 29 2007 Pantheios 1.0.1 beta 42 released (1)
Oct 28 2007 CodeGear (Borland) C++ 5.92 STLSoft Support (7)
Oct 28 2007 Pantheios 1.0.1 beta 41 released (1)
Oct 25 2007 Pantheios 1.0.1 (beta 40) released (1)
Oct 22 2007 Pantheios 1.0.1 (beta 38) released - New be.speech back-end! (1)
Oct 21 2007 Pantheios.COM 1.0.1 (beta 14) released (1)
Oct 20 2007 Pantheios 1.0.1 beta 37 released (1)
Oct 19 2007 Pantheios 1.0.1 beta 36 released (1)
Oct 18 2007 Pantheios 1.0.1 beta 35 released (1)
Oct 17 2007 Pantheios 1.0.1 beta 34 released (1)
Oct 16 2007 Pantheios 1.0 Roadmap (2)
Oct 13 2007 Need some help STLsoft + Turbo Explorer (11)
Oct 10 2007 Beta Documentation (7)
Oct 06 2007 Pantheios 1.0.1 beta 33 released (1)
Sep 27 2007 basic_static_string + some random questions (1)
Sep 27 2007 Pantheios.COM 1.0.1 beta 12 released (1)
Sep 25 2007 STLSoft 1.9.6 released (1)
Sep 22 2007 Pantheios.COM 1.0.1 beta 10 released; coming changes in Pantheios (1)
Sep 19 2007 shared_ptr<> (3)
Sep 04 2007 stlsoft::frequency_map: semantic change (2)
Sep 03 2007 Pantheios 1.0.1 (beta 32) Released (1)
Aug 18 2007 Pantheios 1.0.1 beta 31 released (1)
Aug 17 2007 Pantheios 1.0.1 beta 30 released (1)
Aug 07 2007 Boost (3)
Aug 05 2007 Pantheios 1.0.1 beta 29 released (1)
Aug 05 2007 VOLE 0.2.5 released (1)
Aug 03 2007 STLSoft 1.9.5 released (1)
Aug 02 2007 comstl::bstr attach method? (3)
Aug 02 2007 Pantheios 1.0.1 beta 28 released (1)
Aug 02 2007 STLSoft 1.9.4 released (1)
Jul 31 2007 [Pantheios] A change in the design philosophy? (2)
Jul 29 2007 Pantheios 1.0.1 beta 27 released (1)
Jul 29 2007 STLSoft 1.9.3 released (1)
Jul 25 2007 VC++ 8.0 x64 compilation errors for scoped_handle (15)
Jul 13 2007 WINSTL basic_findfile_sequence has no iterator typedef (3)
Jul 05 2007 Improvement suggestion to the CString_cadapter (2)
Jul 04 2007 Martin, we need the document for stlsoft, thx! (2)
Jun 28 2007 Extended STL, Volume 1: Collections and Iterators published at last (3)
Jun 02 2007 recls 1.8.10 released (1)
Jun 02 2007 STLSoft 1.9.2 released (1)
May 31 2007 Recls 1.8.5 Problems with Unicode (4)
May 30 2007 Creating registry keys (2)
May 23 2007 Sugestions: Compiler Warnings & Beta Documentation (1)
May 12 2007 Mattew, we still lack the Doc for winstl :( (2)
Apr 30 2007 Open-RJ 1.6.4 released (1)
Apr 30 2007 Pantheios 1.0.1-beta 26 released (1)
Apr 30 2007 STLSoft 1.9.1 (for Extended STL, vol 1: CD) released (1)
Apr 30 2007 STLSoft and Insure++ (7)
Apr 18 2007 fixed_array allocator support? (2)
Apr 18 2007 [COMSTL] interface_cast request (1)
Apr 12 2007 STLSoft 1.9.1 final release: final issues before we go ... (1)
Apr 12 2007 Code Project article on VOLE (1)
Apr 11 2007 STLSoft 1.9.1 beta 47 released (1)
Apr 11 2007 Patch for a bug in rangelib/algorithms.hpp (2)
Apr 10 2007 Pantheios 1.0.1 (beta 25) released (1)
Apr 09 2007 VOLE 0.2.3 released (1)
Apr 07 2007 Synesis/STLSoft website down. (2)
Apr 06 2007 recls 1.8.6 released (1)
Apr 06 2007 VOLE 0.2.2 released (1)
Apr 06 2007 Pantheios 1.0.1 (beta 24) released (1)
Apr 06 2007 STLSoft 1.9.1 beta 47 released (1)
Apr 02 2007 A vocabulary for "smart pointers" (4)
Mar 29 2007 Converting from winstl::a2w to 'const unsigned short *' on MSVC++8 (6)
Mar 21 2007 STLSoft Pending issues; real-world delays; Pantheios/FastFormat/recls; etc. (1)
Mar 20 2007 fixed_array, at, and NDEBUG (13)
Mar 20 2007 Add operator() to fixed_array (4)
Mar 12 2007 STLSoft 1.9.1 beta 46 released (6)
Mar 11 2007 STLSoft 1.9.1 beta 45 released (1)
Mar 10 2007 Pantheios Library Selector 1.4.3.37 released (1)
Mar 08 2007 winstl::findfile_sequence does not compile with Visual Studio 2005 (19)
Mar 07 2007 scope_hancle /CLR compilation problem (2)
Mar 06 2007 WINSTL reg_key and reg_value: Copy of empty objects fails (2)
Feb 27 2007 [recls] recls 1.8.5 released (1)
Feb 26 2007 COMSTL collection_sequence and enumerator_sequence have no const_iterator type (10)
Feb 25 2007 comstl exceptions (6)
Feb 22 2007 STLSoft 1.9.1 beta 44 released (1)
Feb 15 2007 Extended STL, volume 1 complete! Phew ... (7)
Feb 08 2007 Thief constructor for scoped_handle? (5)
Feb 07 2007 scoped_handle but for functions? (11)
Feb 03 2007 The New book... (2)
Feb 02 2007 98/msdos/dmc EOF behavior (4)
Feb 01 2007 FastFormat (4)
Jan 26 2007 Pantheios 1.0.1 beta 23 released (1)
Jan 22 2007 Problem with fixed_array copy constructor (gcc-3.4.6) (10)
Jan 21 2007 Announcing VOLE - A Neat C++ COM/Automation Driver, version 0.2.1 (1)
Jan 21 2007 STLSoft 1.9.1 beta 43 released (1)
Jan 14 2007 STLSoft 1.9.1 beta 42; Pantheios 1.0.1 beta 22 (10)
Jan 14 2007 STLSoft 1.9.1 beta 41 released: probably the final beta (5)
Jan 13 2007 [8.50.3] bug: CRTP + namespaces not correctly handled (1)
Jan 11 2007 Pantheios 1.0.1 (beta 21) released (8)
Jan 08 2007 STLSoft: unpack distribution in single new directory? (2)
Jan 06 2007 STLSoft 1.9.1 beta 40 released (1)
Jan 04 2007 Ambiguous Symbol size_t (3)
Jan 02 2007 STLSoft 1.9.1 beta 38 released (1)
Other years:
2016 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2004 2003 | http://www.digitalmars.com/d/archives/c++/stlsoft/index2007.html | CC-MAIN-2016-18 | refinedweb | 1,555 | 77.64 |
Edit Article
wikiHow to Code Your First Program in C++
Coding in C++ can be very simple once you get the basics of it down. When you begin to learn C++ you are taught to code one of many fundamental programs, the most popular being the "Hello World!" prompt.
Steps
- 1Make sure you have a C++ compiler or IDE that you are comfortable with and prefer using. (Ex. Microsoft Visual C++)
- 2Launch your C++ compiler and began a new project. In Visual C++ it's done by pressing File->New->Project->Empty Project.
- 3After having your empty project set up. Right click Source Files and add a new item, select cpp file.
- 4Begin your code with #include <iostream> this is to allow you to use the cout(C-out) and cin(C-in) commands which are used to output and input from within the prompt.
- 5Tell the program to use the "std" namespace by typing 'using namespace std;'. Always end a line of code with ';' to let the program know that the line is finished.
- 6Declare where your program codes by writing 'int main(){' and ending it with the closing brackets '}'. This is where your code is going to go inside.
- 7Inside your 'int main() {' you will tell the program to output 'Hello World!' by typing "cout << "Hello World!" << endl;" The 'cout' command tells the console to output something, in this case its the "Hello World!" text. The 'endl;' at the end tells the program to start whatever code you have next on a new line.
- 8After you have all that, you have to stop the program from automatically closing right after it opens. This is done by making the program close on user input. Type "cin.get();" and then "return 0". This asks for an unspecified user input to end the program. 'cin' is the command used for user input. 'return 0' is to end the program without any errors.
// the complete program follows: #include <iostream> using namespace std; int main ( int argc, char** argv ) // standard form ignore for now { cout << "Hello World!" << endl; cin.get(); return 0; }
Community Q&A
Search
- What is a C++ compiler?Sushil guptaA compiler is a special program that processes the statements written in programming languages, and converts them into the machine language that a computer's processor uses.
- How many header files are there in C++?Sushil guptaThere are a total of 49 header files in the Standard C++ library. These include 19 equivalent C library header files.
- How can I learn C++?wikiHow ContributorUse tutorials like those from tutorialspoint.com.
- How can this be adapted for kids?Bryan HadlandUse a easier language, like Python.
Ask a Question
If this question (or a similar one) is answered twice in this section, please click here to let us know.
Article Info
Categories: C Programming Languages
Thanks to all authors for creating a page that has been read 34,088 times.
Is this article up to date? | http://www.wikihow.com/Code-Your-First-Program-in-C%2B%2B | CC-MAIN-2017-30 | refinedweb | 493 | 74.59 |
Collision detection is an important part of most 3D games. Shooting enemies, avoiding (or failing to avoid) obstacles, even simply staying on the ground usually requires some form of collision-detection. It's a vast subject and there are many excellent articles written about it. However, most of them concern the more sophisticated algorithms, such as BSP and the simple bounding-sphere check receives no more than a passing mention. It is a quite simple procedure but it still could be very useful and there are a couple of subtle points involved. So I thought perhaps I should write something about it.
Straightforward version
The simplest possible way to do bounding-sphere test is to measure the (squared) distance between the two objects in question and to compare the result with the (squared again) sum of their radii. The reason we use the squared distances is to avoid the costly square-root calculation. The code will look something like this:
BOOL bSphereTest(CObject3D* obj1, CObject3D* obj2 ) { D3DVECTOR relPos = obj1->prPosition - obj2->prPosition; float dist = relPos.x * relPos.x + relPos.y * relPos.y + relPos.z * relPos.z; float minDist = obj1->fRadius + obj2->fRadius; return dist <= minDist * minDist; }Code Snippet 1
Note: This and the following code snippets use the D3DVECTOR type from Microsoft's Direct3D, if you are using some other 3D library, change the code accordingly.
This procedure is easy indeed and some cases may be even adequate but what if your objects are moving at a considerable speed? Then it could happen that at the beginning of a frame the bounding spheres do not intersect yet by the end of the frame they have passed through each other. If the distance your objects travel in one frame is greater than the size of the object this scenario is quite likely. Clearly we need something more advanced to handle this case.
Improved version
What we need is to somehow incorporate the object velocities into the calculation. That is quite simple if you do just a little bit of vector math (you can skip this section if you have a math phobia).
Let
Equation 1
and their relative position will be
The distance between the objects will be just the magnitude of the relative-position vector or, which is the same thing, the square of the distance will be equal to the square of the vector:
Note: I use * to denote the dot product between two vectors.
Now all we need is to find if
The way to do it is to check if the distance already less than the minimal (we might have somehow missed the collision on the previous frame, because of physics and AI corrections or whatever) and if not then we need to solve the equation
I have omitted [0] to make the formula more readable.
This is a simple quadratic equation and we solve it in the usual way, by finding the determinant:
Equation 2
If D<0 then the equation does not have a solution, that is the spheres do not collide at all. If D>0 then the spheres collide (if D=0 then the spheres merely touch each other, you can handle this case either way, depending on the specifics of the situation). The roots of the equation are:
Equation 3
Then we can take
Second, and more important we can test if the roots are between 0 and 1 even before calculating
Equation 4
This is a case of what is called Viete's theorem. Let's look closer at this, specifically let's look at the signs of these values. If
To make these cases even easier note that the denominators in both cases are greater than 0 - just like the square of any real number. (if they are equal to 0 then the relative speed of the spheres is 0, not moving at all. Then everything is just like in the previous section). Therefore the signs of these expressions are the same as the signs of their numerators. The numerator of the second expression is just the square of the distance between the spheres at t=0 minus the
Ok, so we can find the cases where solution is <0 that is when the collision has already happened. Now we need also to weed out the cases when the solution is >1. That is the collision might happen but not within the next frame, so we are not interested. (if you are interested in these situations as well just skip this check). To find out we just write the condition out:
Or, after some algebraic manipulations, we arrive at these 2 conditions
If these two inequalities hold true then
Now, If we passed all the tests then we have no choice but to calculate D and see if it is >0. If you wish to know the exact time (or position) of the collision then you can go on and calculate its square root, find
BOOL bSphereTest(CObject3D* obj1, CObject3D* obj; //Discriminant/4 float D = pv * pv - pp * vv; return ( D > 0 ); }Code Snippet 2
One more improvement
The code in snippet 2 works quite well in many circumstances but there's a little problem with it. It is not a mathematical problem but a computer-related one. Think about the very last calculation
float D = pv * pv - pp * vv;Say the distance between the objects is about 100 units and the objects themselves move about 100 units per frame (this is just an order-of-magnitude estimation, so the exact values are not important). Then pv, vv and pp are generally all about numbers around 100*100=10000. Then their products are about 10000*10000=100000000! And we subtract these values. What happens is a fantastic loss of precision amd the result of the computation can off by miles. What's worse the sign can turn out completely wrong as well, so we may flag non-existant collisions or even dismiss perfectly legitimate ones. Your bullets might fly right through your enemy's head and and his ones might hit even though you have ducked at the last moment. Not good. We need something to tame the numbers in our calculations. What you can do is to divide the formula above by vv, that will bring down the numbers to a more manageable size. Since by that time we know vv is greater than 0 (if it is 0 then so will be pv and our check 2 will fail) so the sign will be unaffected. We do have to spend more cycles on the division but it happens only if all other tests have failed. Here's the final version of the collision test that does calculate the time.
BOOL bSphereTest(CObject3D* obj1, CObject3D* obj2) { //Initialize the return value *t = 0.0; / ); }Code Snippet 3
Conclusion
So here it is, the bounding-sphere collision detection algorithm. It might be useful as is (say, for a pool game) or can be a quick-and-dirty test before doing a more sophisticated check - polygon-level, for instance. You may also want to try to improve the accuracy of the collision detection by using a hierarchy of bounding spheres, breaking your object into several parts and enclosing each of them in a bounding sphere of its own.
//Discriminant/(4*dV^2) = -(dp^2-r^2+dP*dV*tmin)
return ( pp + pv * tmin < 0 ); | http://www.gamedev.net/page/resources/_/technical/math-and-physics/simple-bounding-sphere-collision-detection-r1234?forceDownload=1&_k=880ea6a14ea49e853634fbdc5015a024 | CC-MAIN-2016-50 | refinedweb | 1,232 | 64.54 |
.flow; 20 21 import java.util.Map; 22 23 import javax.faces.component.UIViewRoot; 24 25 import org.apache.myfaces.orchestra.flow.config.FlowAccept; 26 import org.apache.myfaces.orchestra.flow.config.FlowCall; 27 28 /** 29 * Holds information about a specific call to a specific flow. 30 * <p> 31 * A new instance of this type is created for each call to a flow, 32 * and is stored in the ConversationContext created for that flow. 33 */ 34 public class FlowInfo 35 { 36 public static final int STATE_ACTIVE = 1; 37 public static final int STATE_COMMITTED = 2; 38 public static final int STATE_CANCELLED = 3; 39 40 private int state = STATE_ACTIVE; 41 private boolean isModalFlow; 42 43 // caller info 44 private String callerOutcome; 45 private String callerViewId; 46 private UIViewRoot callerViewRoot; 47 private FlowCall flowCall; 48 49 // values being passed from caller to callee 50 private Map<String,Object> argsIn; 51 52 // values being returned from callee to caller 53 private Map<String,Object> argsOut; 54 55 // Javascript to render when closing a flow that was run as 56 // a modal dialog. 57 private String onExitScript; 58 59 // callee info 60 private String flowViewId; 61 private String flowPath; 62 private FlowAccept flowAccept; 63 64 /** 65 * Constructor. 66 * <p> 67 * This creates just a partially-initialised FlowInfo object, as it defines only 68 * information about the caller. It is expected that very soon after this is 69 * created a call to setAcceptInfo will be made to add information about the 70 * called flow. 71 * 72 * @param callerViewId contains the viewId of the view that started the call. This 73 * value must not be null. 74 * 75 * @param callerViewRoot contains the view tree for the view that started the call. 76 * This is optional; when not null then this view state will be restored on return 77 * from the flow. In particular, this allows data entered by the user into input 78 * components to be restored when the flow returns (assumes that the flow call is 79 * triggered by an immediate command component). When this is null, then on return 80 * to the caller a new view tree will be created. The viewRoot should normally not 81 * be saved (this avoids wasting memory). It does need to be saved, however, when 82 * the component triggering the flow has immediate=true and the caller wants 83 * incomplete data to be preserved on return. It may also need to be preserved in 84 * certain scenarios such as when code stores stateful data into the view tree (eg 85 * when the t:saveState tag is used or when JSF2.0 view-scope beans are used). 86 * 87 * @param flowCall is the configuration object that describes the caller of a flow. 88 * 89 * @param argsIn is the set of parameter values being passed to the called flow. See 90 * FlowCall.readSendParams(). 91 */ 92 public FlowInfo( 93 String callerOutcome, 94 String callerViewId, 95 UIViewRoot callerViewRoot, 96 FlowCall flowCall, 97 Map<String, Object> argsIn) 98 { 99 this.callerOutcome = callerOutcome; 100 this.callerViewId = callerViewId; 101 this.callerViewRoot = callerViewRoot; 102 this.flowCall = flowCall; 103 this.argsIn = argsIn; 104 105 // TODO: save the serialized view tree, not just a ref to the viewroot 106 // TODO: determine whether to save the view tree or not (see comments on callerViewRoot property) 107 } 108 109 /** 110 * Return the outcome string that triggered this flowcall. 111 */ 112 public String getCallerOutcome() 113 { 114 return callerOutcome; 115 } 116 117 /** 118 * Return the viewId of the page that initiated the call to a flow (never null). 119 */ 120 public String getCallerViewId() 121 { 122 return callerViewId; 123 } 124 125 /** 126 * Return the view root of the page that initiated the call to a flow (optional, 127 * may be null). 128 */ 129 public UIViewRoot getCallerViewRoot() 130 { 131 return callerViewRoot; 132 } 133 134 /** 135 * Return metadata about the caller of the flow. 136 */ 137 public FlowCall getFlowCall() 138 { 139 return flowCall; 140 } 141 142 /** 143 * Get the (name,value) parameters that the caller is passing to the 144 * called flow. 145 * <p> 146 * These objects are then "accepted" into the called flow when the flow 147 * first starts. These objects are also used if the flow performs a 148 * "restart"; note however that if any of these objects are mutable then 149 * on restart the modified versions are passed to the called flow. 150 */ 151 public Map<String,Object> getArgsIn() 152 { 153 return argsIn; 154 } 155 156 /** 157 * Get the (name,value) parameters that the caller is returning to the 158 * called flow. 159 * <p> 160 * These objects are then "accepted" into the calling flow when the 161 * caller's view is restored. The return value is null until the 162 * flow has committed. 163 */ 164 public Map<String, Object> getArgsOut() 165 { 166 return argsOut; 167 } 168 169 /** Set to true to specify that the flow this object represents is "modal". */ 170 public void setModalFlow(boolean state) 171 { 172 this.isModalFlow = state; 173 } 174 175 /** 176 * Returns true when this flow represents a modal flow. 177 */ 178 public boolean isModalFlow() 179 { 180 return isModalFlow; 181 } 182 183 /** 184 * Finish initialising this object by adding information about the called flow. 185 * <p> 186 * It is assumed that the FlowAccept has been checked for compatibility against 187 * the FlowCall object. 188 * <p> 189 * @param entryViewId 190 * @param flowPath 191 * @param flowAccept 192 */ 193 public void setAcceptInfo(String flowViewId, String flowPath, FlowAccept flowAccept) 194 { 195 this.flowViewId = flowViewId; 196 this.flowPath = flowPath; 197 this.flowAccept = flowAccept; 198 } 199 200 /** 201 * Return the viewId of the entry page of the flow. 202 */ 203 public String getFlowViewId() 204 { 205 return flowViewId; 206 } 207 208 /** 209 * Return the path prefix which is expected to be present on the viewId of every 210 * page that "belongs" to this flow. 211 * <p> 212 * A viewId which does not start with the flowPath is not part of the current flow, 213 * and so immediately triggers cancellation of the flow. 214 */ 215 public String getFlowPath() 216 { 217 return flowPath; 218 } 219 220 /** 221 * Return metadata about the called flow. 222 */ 223 public FlowAccept getFlowAccept() 224 { 225 return flowAccept; 226 } 227 228 public String getOnExitScript() 229 { 230 return onExitScript; 231 } 232 233 public void setOnExitScript(String script) 234 { 235 onExitScript = script; 236 } 237 238 239 public int getState() 240 { 241 return state; 242 } 243 244 public void commit(Map<String, Object> argsOut) 245 { 246 this.argsOut = argsOut; 247 state = STATE_COMMITTED; 248 } 249 250 public void cancel() 251 { 252 state = STATE_CANCELLED; 253 } 254 255 public boolean isComplete() 256 { 257 return (state == STATE_CANCELLED || state == STATE_COMMITTED); 258 } 259 } | http://myfaces.apache.org/orchestra/myfaces-orchestra-flow/xref/org/apache/myfaces/orchestra/flow/FlowInfo.html | CC-MAIN-2016-36 | refinedweb | 1,089 | 59.64 |
public class Solution { public List<Integer> findMinHeightTrees(int n, int[][] edges) { Map<Integer, Node> graph = new HashMap<>(); for (int i = 0; i < n; i++) { graph.put(i, new Node(i)); } for (int[] edge : edges) { graph.get(edge[0]).neighbors.add(graph.get(edge[1])); graph.get(edge[1]).neighbors.add(graph.get(edge[0])); graph.get(edge[0]).degree++; graph.get(edge[1]).degree++; } Queue<Node> queue = new LinkedList<>(); for (int index : graph.keySet()) { if (graph.get(index).degree == 1) { queue.offer(graph.get(index)); } } while (n > 2) { int size = queue.size(); for (int i = 0; i < size; i++) { Node leaf = queue.poll(); Node neighbor = leaf.neighbors.iterator().next(); neighbor.neighbors.remove(leaf);//remove leaf from its neighbor's adj list graph.remove(leaf.label);//remove leaf self n--; if (--neighbor.degree == 1) { queue.offer(neighbor); } } } return new ArrayList<>(graph.keySet()); } } class Node { int label; int degree; Set<Node> neighbors; public Node(int index) { label = index; neighbors = new HashSet<>(); } }
Inside the bfs-process, when deleting the leaf in its neighbor's adjacent nodes' list, you can just check whether the neighbor's new degree is 1 or not. Instead of looping through the leaf neighbor to find the nodes with degree 1, | https://discuss.leetcode.com/topic/48519/java-bfs-like-ac-solution-using-user-defined-graph-node-class | CC-MAIN-2018-05 | refinedweb | 202 | 61.53 |
log() function in C++
In this tutorial, we will learn about log function in C++.
log() function in C++
log() is the natural logarithmic function. It is the inverse of natural exponential function (e). When we use log() function, it returns the value with respect to base e. Another function similar to this is log10(). This gives us value with respect to base 10.
The syntax for the functions are :
log(x); log10(x);
We use the following header file with the functions :
#include <math.h>
The parameter x can be of int, float, double or long double type. We get return value as float, double or long double respectively. The return types of the parameters are as follows :
Here is an example displaying the use of the log() and log10() functions in C++.
A C++ program using log() and log10() functions :
#include<iostream> #include<math.h> using namespace std; int main() { double k, r, r10; cout << "Enter value of k : "; cin >> k; r = log(k); r10 = log10(k); cout << "log(k) = " << r << endl; cout << "log10(k) = " << r10 << endl; return 0; }
Output 1 :
Enter value of k : 6.2 log(k) = 1.82455 log10(k) = 0.792392
Output 2 :
Enter value of k : 0.54 log(k) = -0.616186 log10(k) = -0.267606
Hope this was helpful. Enjoy Coding!
Also, learn :
Mathematical Constants in C++
Mathematical functions in CPP | https://www.codespeedy.com/log-function-in-cpp/ | CC-MAIN-2020-45 | refinedweb | 229 | 77.23 |
Nokia 5110 LCD is a cheap and simple to use component that you can use in almost all your Arduino projects. There are lots of examples about it, but most of them are a bit complicated because it's used with other components. All I need is a simple 'Hello World' application to start with. Then I can build my complicated applications later. So let's start with Hello World.
What we need?
- Arduino Nano
- Nokia 5110 LCD
- u8glib library. Download:
After downloading the library, follow
Sketch>Include Library>Add .zip Library menu to include the library. I've also written how to use page loop in u8glib to write more sophisticated applications:
But this article will focus on using Nokia 5110 rather than u8glib.
And below is a Nokia 5110 LCD example with u8glib in its simplest form.
#include "U8glib.h" U8GLIB_PCD8544 u8g(11, 10, 8, 9, 7); // Clk, Din, DC, CE, RST void draw() { u8g.setFont(u8g_font_unifont); u8g.drawStr(0, 20, "Merhaba "); u8g.drawStr(12, 36, "Dunya"); } void setup() { u8g.setColorIndex(1); u8g.setRot180(); // Ekranı 180 derece çevir } void loop() { u8g.firstPage(); do { draw(); } while(u8g.nextPage()); delay(1000); }
Now we're ready to add some sensors and output their values into Nokia 5110 LCD. | https://cuneyt.aliustaoglu.biz/en/nokia-5110-lcd-with-arduino-nano-and-u8glib/ | CC-MAIN-2019-09 | refinedweb | 207 | 77.33 |
Sep 02, 2018 01:13 PM|born2win|LINK
Hello,
I am designing API which will be consumed by mobile app and website. when i create the class with properties, should i use mixed data types else all as stings. below are the two example
public class PersonDto { public string Id { get; set; } public string Name { get; set; } public class PersonDto { public int Id { get; set; } public string Name { get; set; } }
which one is best. please guide me .
All-Star
38661 Points
Sep 02, 2018 03:54 PM|born2win|LINK
thank you for the reply and it's not about the class. it's about the properties inside the class. on the first one both are string and on the second one mixed data type. Id is int and name is string. so when we create DTO is it good having all the property a string? what is the universal standard. please suggest me
Contributor
3211 Points
Sep 02, 2018 03:55 PM|DA924|LINK
The DTO should have primitive data type properties that match the primitive data types they were derived from, such a Int to Int, Double to Double, String to String, etc. and etc. It makes no sense to make something Sting when it derived from Int only to have to convert it from String back to Int in order to use it.
public class DtoProject { public int ProjectId { get; set; } public string ClientName { get; set; } public string ProjectName { get; set; } public string Technology { get; set; } public string ProjectType { get; set; } public string UserId { get; set; } public DateTime StartDate { get; set; } public DateTime EndDate { get; set; } public decimal Cost { get; set; } }
All-Star
51204 Points
Sep 02, 2018 06:12 PM|bruce (sqlwork.com)|LINK
you should use the C# data types, so the class is easier to use from from C#. when converted to json, there are only
1) arrays: []
2) objects: {}
3) string: ""
4) Number: 124.1
5) null: null
note: you need to take care with dates. there is no JSON standard for dates. typically they are serialized to an ISO date string. how they are parsed depends on the library used.
4 replies
Last post Sep 02, 2018 06:12 PM by bruce (sqlwork.com) | https://forums.asp.net/t/2146335.aspx?Help+needed+in+Data+transfer+object+practice | CC-MAIN-2019-18 | refinedweb | 371 | 78.99 |
September updates for Microsoft Cognitive Services APIs.
Computer Vision API
On September 14th, the Computer Vision API will be available in China. Additions include the ability to transact in Chinese currency.
Microsoft’s Computer Vision API is able to extract rich information from images to categorize and process visual data and protect your users from unwanted content. No changes have been made to the Computer Vision API but it is now available on the Mooncake Sovereign Cloud in China and includes the ability to transact in Chinese currency.
Learn more about the Computer Vision API at Cognitive Services Computer Vision.
Face API
On September 14, the Face API will be available in China. Additions include the ability to transact in Chinese currency.
Microsoft’s FACE API can detect human faces and compare similar ones, organize people into groups according to visual similarity, and identify previously tagged people in images. No changes have been made to the Face API but it is now available on the Mooncake Sovereign Cloud in China and includes the ability to transact in Chinese currency.
Learn more about the Face API at Cognitive Services now supports German, Portuguese, and Japanese. Additionally, we created a C# SDK for LUIS endpoints. Entities have been updated for greater flexibility, hierarchies have been added, and we crated a new entity type: the composite entity. Reminder: the number of free LUIS transactions is changing from 100,000 to 10,000 a month for the trial offer.
LUIS understands language contextually, so your app communicates with people in the way they speak. New LUIS features make the service more powerful for you and your users.
- New languages – German, Portuguese and Japanese now supported in addition to the already supported English, French, Italian, Spanish, and Chinese.
- Entities have been updated to provide more flexibility.
- Composite Entity has been added.
- Hierarchical entities have been added to allow editing.
Learn more about the Language Understanding Intelligence Service API at Cognitive Services LUIS.
Emotion API
On September 14th, the Emotion API will be available in China. Additions include the ability to transact in Chinese currency.
Microsoft’s Emotion API analyzes faces to detect a range of feelings and personalize your app's responses. No changes have been made to the Emotion API but it is now available on the Mooncake Sovereign Cloud in China and includes the ability to transact in Chinese currency.
Learn more about the Emotion API at Cognitive Services Emotion.
Bing Speech API
The Bing Speech API’s authentication endpoint and SDK namespaces are changing as of September 21. If you download the new SDK and used the Azure portal to obtain keys, you will need a new subscription key. Old keys will still work for six months before deprecation. All users will need to change the reference to the SDK namespace.
The Bing Speech API converts audio to text, understands intent, and converts text back to speech for natural responsiveness.
Learn more about the Bing Speech API at Cognitive Services Bing Speech. | https://azure.microsoft.com/ja-jp/blog/cognitive-service-2016-09-15/ | CC-MAIN-2018-05 | refinedweb | 500 | 55.84 |
Wilfredo Casas6,174 Points
Why is my code not working?
I'm in the task 6 of 6, but my code is not passing this time, why is it?
from flask import Flask from flask import render_template app = Flask(__name__) @app.route('/') def index(): return render_template('index.html')
{% extends 'layout.html' %} {% block title %} {{ super() }} Homepage {% endblock %} {%block content %} <h1>Smells Like Bakin'!</h1> <p>Welcome to my bakery web site!</p> {% endblock %}
<!doctype html> <html> {% block title %}Smells Like Bakin'{% endblock%} <body> {% block content %} {% endblock%} </body> </html>
1 Answer
Chris FreemanTreehouse Moderator 53,520 Points
it seems the "<title>" tags fell out of
layout.html:
<!doctype html> <html> <head><title>{% block title %}Smells Like Bakin'{% endblock %}</title></head> <body> {% block content %}{% endblock %} </body> </html>
Chris FreemanTreehouse Moderator 53,520 Points
The block marks a region that can be replaced when another template extends this template.
The tags <head> and <title> need to be present somewhere. If placed inside a template block then every template that extended this one would have to include <head> and <title> in the replacement text. If left outside the block then they don't have to be included in every extending template.
Wilfredo Casas6,174 Points
Wilfredo Casas6,174 Points
Thanks, I think I misunderstood the block purpose, didn't it suppose to replace the <head><title>? | https://teamtreehouse.com/community/why-is-my-code-not-working-14 | CC-MAIN-2019-18 | refinedweb | 221 | 64.3 |
fchown - change owner and group of a file
#include <unistd.h>
int fchown(int fildes, uid_t owner, gid_t group);
The fchown() function shall be equivalent to chown() except that the file whose owner and group are changed is specified by the file descriptor fildes.
Upon successful completion, fchown() shall return 0. Otherwise, it shall return -1 and set errno to indicate the error. system.
The fchown() function may fail if:
- [EINVAL]
- The owner or group ID is not a value supported by the implementation. The fildes argument refers to a pipe or socket .);
None.
None.
None.
chown(),:
-
Clarification is added that a call to fchown() may not be allowed on a pipe.
The fchown() function is defined as mandatory. | http://pubs.opengroup.org/onlinepubs/009695399/functions/fchown.html | CC-MAIN-2018-05 | refinedweb | 119 | 73.58 |
This is much harder for computers than the output, but the .Net framework again provides ready-made functions of the class System.Speech.Recognition, over which speech recognition can be realized with little effort.
In general there are 2 modes in which speech recognition can be run: This post is about the Dictation Mode, the next about the Command Mode.
The Dictation Mode is, as the name already suggests, suited for dictating texts. The recorded sound is understood as a dictate and the program tries to understand the spoken words.
As for the speech output, first a reference to the class System.Speech has to be included. Now though the needed subclass is Recognition, so we first use the following using directive:
using System.Speech.Recognition;
For speech recognizing we use an instance of the class SpeechRecognitionEngine. This needs a grammar, which is kind of a command list, on how to interpret the language.
As a grammar we hand over an instance of the class DictationGrammar, to indicate, that we want to use the dictation mode.
The recognition of spoken words now works with the function Recognize(). This prepares for recognition and starts it, when the microphone records sound. Is the speaker takes a break (the needed duration can be set), the speech recognition is finished and the program now tries to interpret the sound as words (an asynchone recognition is also possible). Finally the dictation result is returned.
Now the code:
SpeechRecognitionEngine SRE = new SpeechRecognitionEngine();
SRE.LoadGrammar(new DictationGrammar()); // load dictation grammar
SRE.SetInputToDefaultAudioDevice(); // set recording souce to default
RecognitionResult Result = SRE.Recognize(); // record sound and recognize
string ResultString = "";
// add all recognized words to the result string
foreach (RecognizedWordUnit w in Result.Words)
{
ResultString += w.Text;
}
Thank you very much. Your code example helped me with my project.
I regular read your blog and it is very interesting for me.
Hello,
My code does not progress past SRE.recognize, what am I missing? | http://csharp-tricks-en.blogspot.de/2011/03/speech-recognition-part-1-dictation-mode.html | CC-MAIN-2018-09 | refinedweb | 324 | 59.3 |
Transform trie to regular expression
Project description
Efficient keyword extraction with regex
This package contains a function for efficiently representing a set of keywords as regex. This regex can be used to replace keywords in sentences or extract keywords from sentences
Why use trrex?
- Pure Python, no other dependencies
- trrex is fast, about 300 times faster than a regex union, and about 2.5 times faster than FlashText
- Plays well with others, can be integrated easily with pandas
Install trrex
Use pip,
pip install trrex
Usage
import trrex as tx pattern = tx.compile(['baby', 'bat', 'bad']) hits = pattern.findall('The baby was scared by the bad bat.') # hits = ['baby', 'bat', 'bad']
pandas
import trrex as tx import pandas as pd frame = pd.DataFrame({ "txt": ["The baby", "The bat"] }) pattern = tx.make(['baby', 'bat', 'bad'], prefix=r"\b(", suffix=r")\b") # need to specify capturing groups frame["match"] = frame["txt"].str.extract(pattern) hits = frame["match"].tolist() print(hits) # hits = ['baby', 'bad']
Why the name?
Naming is difficult, but as we had to call it something:
- trex: trie to regex
- trex: Tyrannosaurus rex, a large dinosaur species with small arms (rex meaning "king" in Latin)
Acknowledgments
This project is based on the following resources:
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
trrex-0.0.4.tar.gz (4.8 kB view hashes)
Built Distribution
trrex-0.0.4-py3-none-any.whl (6.8 kB view hashes) | https://pypi.org/project/trrex/ | CC-MAIN-2022-21 | refinedweb | 260 | 65.62 |
While there is much work yet to be done on Conference Buddy on the client side, it isn’t too early to start thinking about how we’ll store data on a server. One option that might be ideal would be to use ASP.NET Web API.
ASP.NET Web API is a framework for building HTTP services. ASP.NET Web API has been designed to work well with RESTful (where REST is an acronym for REpresentational State Transfer) design principles.
Fortunately, ASP.NET Web API guides you into creating services that follow a RESTful style. We will follow that style as long as it is useful. To see this at work, create a new application, and in the templates select ASP.NET MVC4 from the templates
After entering the name ContactManager, click OK. In the next dialog select the Web API Template.
ASP.NET MVC takes great advantage of “convention over configuration” meaning that if you place files in the expected folders, and use the expected naming conventions, a great deal of work is done for you.
Creating the Model
The first step is to create our Contact class. For this example, we’ll simplify Conference Buddy and build a simple contact manager, and the Contact class will be the fundamental data that we’ll use on both the server and the client.
Right click on the Models folder that was created for you and add a Contact.cs class
public class Contact
{
[ScaffoldColumn(false)]
public int ContactId { get; set; }
[Required]
public string Name { get; set; }
public string Address { get; set; }
public string City { get; set; }
public string State { get; set; }
public string Zip { get; set; }
public string Email { get; set; }
public string Twitter { get; set; }
}
The first data annotation, ScaffoldColumn, indicates that the ContactId is controlled by the application and should not be set externally. The Required attribute indicates that, at a minimum, a Contact must have a name.
Creating the Controller
We are ready to create a Controller, but to do so we must first Build the application so that the Contact Model class will be available.
After building, right-click on the Controllers folder and select Add / Controller. Set the name of the controller to ContactsController.
Drop down the Template list and select
API controller with read/write actions, using Entity Framework.
Drop down the Model class list. If it is empty, you neglected to build the application, so cancel this dialog and build. From the drop down, select Contact (ContactManager.Models).
Drop down the Data context class list and select New Data Context. Name the new context
ContactManagerContext,
Conventions in ASP.NET Web API
ASP.NET Web API (like ASP.NET MVC) uses conventions to simplify code while conforming to HTTP standards. As you can see, all of the controller action methods are named using HTTP verbs, and ASP.NET Web API will automatically route HTTP requests to the appropriate method based on name. Thus an HTTP GET request to /api/contacts will automatically route to the ContactsController’s GetContacts method, simply by following naming conventions.
Entity Framework and Sample Data
At this point, and with no further work, we have a functioning web service that uses Entity Framework to handle data persistence! Because this approach uses Entity Framework Code-First, it will automatically create our database – complete with tables based on our classes – when the service is first accessed.
Entity Framework also provides a method for inserting sample or initial data into our database when it’s created, known as a database initializer.
Creating the Database Initializer
Right click on the Models folder and create a new class named ContactManagerDatabaseInitializer.
This class has one purpose – provide data when the database is initialized. Here’s the code for that class:
public class ContactManagerDatabaseInitializer :
DropCreateDatabaseIfModelChanges<ContactManagerContext>
{
protected override void Seed( ContactManagerContext context )
{
base.Seed( context );
context.Contacts.Add(
new Contact
{
Name = "Jon Galloway",
Twitter = "jongalloway",
City = "San Diego",
State = "CA"
} );
context.Contacts.Add(
new Contact
{
Name = "Jesse Liberty",
Twitter = "jesseliberty",
City = "Acton",
State = "MA"
} );
}
}
The ContactManagerContext was created for you and is used when overriding the Seed method (as we do above). When you register the initializer (see below) the initializer will be called by Entity Framework when it creates the database, and it will call your overridden seed method.
The context Contacts collection that you are adding to above is of type DbSet<Contact> which was created for you when you selected to create a new Data Context
Registering the Database Initializer
We need to register this database initializer in Global.asax.cs. We do so in the Application_Start method
Database.SetInitializer(new ContactManagerDatabaseInitializer());
You now have a server ready to provide data to a Windows 8 client.
Creating the Windows 8 Client
Right click on the solution and choose Add New Project. Select Windows Store in the left column, and Grid App in the right column. Name your application ContactManager.WindowsStore (once this technique is fully understood we’ll use Conference Buddy as the client side application).
Adding the WebApi Client package from NuGet
We’ll need the WebApi client package to work with the WebAPI application we’ve built. Fortunately, this can be installed through NuGet. To do this, select View->Other Windows ->Package Manager Console. This window will open at the bottom of your screen. At the Package Manager prompt (PM>) enter the following command:
Install-Package Microsoft.AspNet.WebApi.Client
-ProjectName ContactManager.WindowsStore –IncludePrerelease
Hit Enter and the package will be installed,
Adding the Contact Class
We need to share the Contact class in the Server class as well as in the client. Copy and paste the class into the client project (yuck, but simpler for this example).
Right click on the DataModel folder and create a new class named Contact. Paste in the Contact class from ContactManager, eliminating the attributes,
public class Contact
{
public int ContactId { get; set; }
public string Name { get; set; }
public string Address { get; set; }
public string City { get; set; }
public string State { get; set; }
public string Zip { get; set; }
public string Email { get; set; }
public string Twitter { get; set; }
}
Editing The SampleDataSource class
Within the ContactManager.WindowsStore project, open the SampleDataSource.cs file in the DataModel folder. Add the following functions to the end of the SampleDataSource class:
public static async Task<SampleDataGroup>
LoadDataAsync( bool clear = true )
{
// Load this from configuration
const string serviceUrl = "";
if ( clear )
{
_sampleDataSource.AllGroups.Clear();
}
HttpClient client = new HttpClient();
client.BaseAddress = new Uri( serviceUrl );
HttpResponseMessage response = await client.GetAsync( "api/contacts" );
Contact[] contacts = await response.Content.ReadAsAsync<Contact[]>();;
}
static string GetContactInfo( Contact contact )
{
return string.Format(
"{0}, {1}\n{2}\n@{3}",
contact.City ?? "City?",
contact.State ?? "State?",
contact.Email ?? "Email?",
contact.Twitter ?? "Twitter?" );
}
Take a close look at the first function, LoadDataAsync:
const string serviceUrl = "";
This function will be calling into our service. In the current example application, this will need to match the port number used by the ASP.NET Web API application. To find this, scroll up to the ContactManager project and double click on the Properties
When the Properties display opens, click on Web (in the left column) to open the Web tab. About half way down the page you’ll find the Servers section. Within that, find the Project Url for the Local IIS Web Server and note the Port Number.
Change the port number in your code accordingly,
public static async Task<SampleDataGroup> LoadDataAsync( bool clear = true )
{
// Load this from configuration
const string serviceUrl = "";
If the Boolean value clear evaluates to true, we clear the data source before fetching data from the web service. In this case, we do want to have sample data available for design-time support, but we don’t want to have the sample data flash on the screen when the application runs, so we’ll leave the default to true.
We then instantiate an HttpClient object and set its BaseAddress. We use that to get the contacts asynchronously. They are returned as JSON data that we can then read into a collection of Contact objects.
HttpClient client = new HttpClient();
client.BaseAddress = new Uri(serviceUrl);
HttpResponseMessage response = await client.GetAsync("api/contacts");
Contact[] contacts = await response.Content.ReadAsAsync<Contact[]>();
Surprisingly, this is all you need to do to call a webservice which returns JSON data. The HttpClient object will make the request, wait asynchronously for the response, and deserialize the result to an array of Contact objects.;
The SampleDataSource can hold separate groups of data. We’re returning all contacts in an AllGroups grouping, but we could split these up if we had a meaningful way to do that – by geographical region, for instance. We’d add those groups to the _sampleDataSource.
Notice that the third parameter in the SampleDataItem constructor is GetContactInfo which takes the contact as a parameter. This returns a string based on what information is available for the Contact,
static string GetContactInfo( Contact contact )
{
return string.Format(
"{0}, {1}\n{2}\n@{3}",
contact.City ?? "City?",
contact.State ?? "State?",
contact.Email ?? "Email?",
contact.Twitter ?? "Twitter?" );
}
This is the data that will be displayed on the Group and on the Items page. That data is displayed automatically as we’ve passed that string in as the third parameter to the SampleDataItem, and that third parameter is the Sub-title.
Key to understanding how this works is that we are not changing any of the out of the box behavior of the Grid App template; we are simply providing alternative data to the built-in sample data, mapping our contact data to the expected strings.
Avatars
We’re using another web service for our avatars, avatars.io. This site takes a handle and tries to find the best avatar image from popular social networking sites like Twitter and Facebook.
new Uri( "" + contact.Twitter +
"?size=large", UriKind.Absolute ).AbsoluteUri
Calling LoadDataAsync
Finally, we’ll need to call the LoadDataAsync method from App.xaml.cs. Add the line
SampleDataSource.LoadDataAsync();
above the call to Window.Current.Content.RootFrame,
protected override async void OnLaunched(LaunchActivatedEventArgs args)
{
Frame rootFrame = Window.Current.Content as Frame;
// Do not repeat app initialization when the Window already has content,
// just ensure that the window is active
if (rootFrame == null)
{
//…
SampleDataSource.LoadDataAsync();
// Place the frame in the current Window
Window.Current.Content = rootFrame;
}
Run the application and the Grid App page is displayed using the default display characteristics, but using our data from our Web Service, rather than the sample data.
Download the Source Code for this demonstration
Unquestionably imagine that which you said. Your favourite reason appeared to be on the web the easiest factor to be aware of. I say to you, I definitely get irked even as people consider worries that they just don’t realize about. You managed to hit the nail upon the highest as smartly as defined out the whole thing with no need side-effects , folks can take a signal. Will probably be back to get more. Thank you
Thank you for all your valuable work on this web site. My daughter really likes going through internet research and it’s really obvious why. We learn all concerning the lively medium you render great techniques via the web blog and therefore strongly encourage participation from people about this theme and my simple princess is now understanding a great deal. Enjoy the rest of the new year. You are always conducting a useful job.
Pingback: Windows Store Developer Links – 2013-01-31 | Dan Rigby | http://jesseliberty.com/2013/01/30/windows-8-conference-buddy-and-remote-data/ | CC-MAIN-2014-35 | refinedweb | 1,901 | 55.34 |
1 <!doctype linuxdoc system> 2 3 <!-- This is the User's Guide for Grace --> 4 5 <article> 6 7 <title>Grace User's Guide (for Grace-5.1.25)</title> 8 <author>by the Grace Team</author> 9 <date>15.02.2015</date> 10 <abstract> 11 This document explains the usage of 12 <bf>Grace</bf>, a WYSIWYG 2D plotting tool for numerical data. 13 (A German translation of this document, made by Tobias Brinkert, is 14 available here: <url name="Grace Benutzerhandbuch" 15.) 16 </abstract> 17 18 <toc> 19 20 <!-- **************************************** --> 21 <sect>Introduction 22 <p> 23 24 <sect1>What is Grace? 25 <p> 26 Grace is a WYSIWYG tool to make two-dimensional plots of numerical 27 data. It runs under various (if not all) flavors of Unix with X11 28 and M*tif (LessTif or Motif). It also runs under VMS, OS/2, and 29 Windows (95/98/NT/2000/XP). Its capabilities are roughly similar to 30 GUI-based programs like Sigmaplot or Microcal Origin plus 31 script-based tools like Gnuplot or Genplot. Its strength lies in the 32 fact that it combines the convenience of a graphical user interface 33 with the power of a scripting language which enables it to do 34 sophisticated calculations or perform automated tasks. 35 36 Grace is derived from Xmgr (a.k.a. ACE/gr), originally written by 37 Paul Turner. 38 39 From version number 4.00, the development was taken over by a 40 team of volunteers under the coordination of Evgeny Stambulchik. 41 You can get the newest information about Grace and download the 42 latest version at the <url 43. 44 45 When its copyright was changed to GPL, the name was changed to Grace, 46 which stands for ``GRaphing, Advanced Computation and Exploration of 47 data'' or ``Grace Revamps ACE/gr''. The first version of Grace available 48 is named 5.0.0, while the last public version of Xmgr has the version 49 number 4.1.2. 50 51 Paul still maintains and develops a non-public version of Xmgr for 52 internal use. 53 54 <sect1>Copyright statement 55 <p> 56 57 <tscreen><verb> 58 Copyright (©) 1991-1995 Paul J Turner, Portland, OR 59 Copyright (©) 1996-2014 Grace Development Team 60 61 Maintained by Evgeny Stambulchik 62 63 64 All Rights Reserved 65 66 This program is free software; you can redistribute it and/or modify 67 it under the terms of the GNU General Public License as published by 68 the Free Software Foundation; either version 2 of the License, or 69 ., 675 Mass Ave, Cambridge, MA 02139, USA. 79 </verb></tscreen> 80 81 <p> 82 For certain libraries required to build Grace 83 (which are therefore even included in a suitable version) 84 there may be different Copyright/License statements. Though their 85 License may by chance match the one used for Grace, 86 the Grace Copyright holders can not influence or change them. 87 88 <p> 89 <table loc="htbp"> 90 <tabular ca="ll"> 91 <hline> 92 Package | License @ 93 <hline> 94 cephes library | Free @ 95 T1lib | LGPL @ 96 Xbae | BSD-like @ 97 Tab Widget | BSD-like @ 98 <hline> 99 </tabular> 100 <caption> 101 Licenses 102 </caption> 103 </table> 104 <P> 105 106 107 <!-- **************************************** --> 108 <sect>Installation guide 109 <p> 110 111 <sect1>Installing from sources 112 <p> 113 <enum> 114 <item> Configuration <label id="configuration"> 115 <itemize> 116 <item> Requirements. 117 Grace usually compiles out of the box in a regular Unix-like 118 environment. You need an ANSI C compiler (gcc is just fine), 119 the X11R5 or above libraries and headers, and an 120 implementaion of the M*tif API, version 1.2 or above. 121 If you want to compile your own changes to certain parts of 122 Grace, you will need a parser generator (<tt/yacc/ or, better, 123 <tt/bison/). 124 <item> Extra libraries. Some features will be available only if 125 additional libraries are installed. Those are: 126 <itemize> 127 <item> The JPEG backend needs the IJG's 128 (<url name="JPEG library" url="">), 129 version 6.x. 130 <item> The PNG backend needs 131 the (<url name="libpng" url="">) 132 library (version 0.96 or above). 133 <item> The PDF driver requires the PDFlib library of Thomas 134 Merz to be installed, which is available 135 <url name="here" url="">, version 136 4.0.3 or above. 137 <item> If your computer has the FFTW library installed when 138 Grace is compiled, Grace will link itself to this, and 139 drop all conventional FFT's and DFT's. All transforms 140 will be routed through this package. Note that there is 141 then no difference between pushing the "FFT" button and 142 the "DFT" button, except that FFT will complain if the 143 length isn't a power of 2, and DFT will not. 144 145 For more information on this package, see the 146 <url name="FFTW Home page" url="">. 147 In short, this package allows one to do non-power-of-2 148 length FFT's along with the normal ones. It seems to work 149 very efficiently for any set length which factors into 2^a 150 3^b 5^c 7^d for integer a, b, c, d. The great feature here 151 is that set lengths which are powers of 10 (e.g. 1000, 152 10000) and integer multiples of these (500, 2000, 2500, 153 5000, etc.) can be computed with no significant penalty 154 (maybe 20%) over power-of-2 transforms. Very often, real 155 datasets come in these sizes, and not in powers of 2. 156 <item> In order to read/write sets in the NetCDF data format, you 157 will also need the <url name="NetCDF libraries" 158. 159 </itemize> 160 <item> Decide whether you want to compile in a separate place (thus 161 leaving the source tree pristine). You most probably would 162 want it if compiling Grace for more than one OS and keeping 163 the sources in a central shared (e.g. via NFS) location. 164 If you don't need it, skip the rest of this paragraph and go 165 right to the next step. Otherwise, assuming the sources are 166 in <tt>/usr/local/src/grace-x.y.z</tt> and the compilation 167 will be performed in <tt>/tmp/grace-obj</tt>, do the following: 168 <verb> 169 % mkdir /tmp/grace-obj 170 % cd /tmp/grace-obj 171 % /usr/local/src/grace-x.y.z/ac-tools/shtool mkshadow \ 172 /usr/local/src/grace-x.y.z . 173 </verb> 174 <item> The <tt>configure</tt> shell script attempts to guess correct 175 values for various system-dependent variables used during 176 compilation. It uses those values to create <tt>Make.conf</tt> in the 177 top directory of the package. It also create <tt>config.h</tt> file 178 containing system-dependent definitions. Finally, it creates a shell 179 script <tt>config.status</tt> that you can run in the future to 180 recreate the current configuration, a file <tt>config.cache</tt> that 181 saves the results of its tests to speed up reconfiguring, and a file 182 <tt>config.log</tt> containing compiler output (useful mainly for 183 debugging <tt>configure</tt>). If at some point <tt>config.cache</tt> 184 contains results you don't want to keep, you may remove or edit it. 185 <item> Run <tt>./configure --help</tt> 186 to get list of additional switches specific to Grace 187 <item> Run <tt>./configure <options></tt>. Just an example: 188 <verb> 189 % ./configure --enable-grace-home=/opt/grace 190 --with-extra-incpath=/usr/local/include:/opt/include \ 191 --with-extra-ldpath=/usr/local/lib:/opt/lib --prefix=/usr 192 </verb> 193 would use <tt>/usr/local/include</tt> and 194 <tt>/opt/include</tt> in addition to the default include path 195 and <tt>/usr/local/lib</tt> and <tt>/opt/lib</tt> in addition 196 to the default ld path. As well, all stuff would be put under 197 the /opt/grace directory and soft links made to 198 <tt>/usr/bin</tt>, <tt>/usr/lib</tt> and <tt>/usr/include</tt>. 199 <p> 200 <bf>Note</bf>: If you change one of the 201 <tt>--with-extra-incpath</tt> or 202 <tt>--with-extra-ldpath</tt> options from one run of 203 configure to another, remember to delete the 204 <tt>config.cache</tt> file!!! 205 </itemize> 206 <item> Compilation 207 <itemize> 208 <item> Issue <tt>make</tt> 209 <p> 210 If something goes wrong, try to see if the problem has been 211 described already in the <bf>Grace FAQ</bf> (in the 212 <tt>doc</tt> directory). 213 </itemize> 214 <item> Testing 215 216 <itemize> 217 <item> <tt>make tests</tt> 218 <p> 219 This will give you a slide show demonstrating some nice 220 features of Grace. 221 </itemize> 222 <item> Installation 223 <itemize> 224 <item> <tt>make install</tt> 225 <item> <tt>make links</tt> 226 <p> 227 The later (optional) step will make soft links from some files 228 under the Grace home directory to the system-wide default 229 locations (can be changed by the <tt>--prefix</tt> option 230 during the configuration, see above). 231 </itemize> 232 </enum> 233 234 <sect1>Binary installation 235 <p> 236 <enum> 237 <item> Getting pre-built packages 238 <item> Installation 239 <item> Running tests 240 </enum> 241 242 <sect1>Alternative packaging schemes (RPM, ...) 243 <p> 244 245 Not written yet... 246 247 248 <!-- **************************************** --> 249 <sect>Getting started 250 <p> 251 252 For a jump-in start, you can browse the demos ("Help/Examples" menu tree). 253 These are ordinary Grace projects, so you can play with them and modify them. 254 Also, read the <url name="Tutorial" url="Tutorial.html">. 255 <p> 256 O.k. Here's a VERY quick introduction: 257 <enum> 258 <item> Start the GUI version: xmgrace (return). 259 <item> Select/check the output medium and canvas size in File/Device 260 Setup. 261 <item> If needed, set the graph size ('Viewport' in 262 Plot/Graph Appearance). 263 <item> Load your data with Data/Import/ASCII. 'Load as': 'Single set' for 264 two-column ASCII data, 'Block data' for multi-column ASCII data. 265 <item> Adjust the scales, axis labels and tick marks in Plot/Axis 266 properties. Acknowledge all changes with 'Apply'. 267 <item> Adjust lines, symbols, legends in Plot/Set appearance. 268 <item> Adjust titles, plot frame and legend display in Plot/Graph 269 Appearance. 270 <item> Data can be manipulated in Data/Transformations. To shift a 271 data set by 20 to the left, e.g., in 'Evaluate Expression' 272 select the same set on the left and the right, and say Formula: 273 y=y-20. 274 As you'll probably notice, Grace can do MUCH more than that. 275 Explore at your leisure. 276 <item> When you like your plot, select File/Print. That's it! 277 </enum> 278 279 <sect1>General concepts 280 <p> 281 282 <sect2>Project files <label id="project-file"> 283 <p> 284 285 A project file contains all information necessary to restore a plot 286 created by Grace, as well as some of preferences. Each plot is 287 represented on a single page, but may have an unlimited number of 288 graphs.You create a project file of your current graph with 289 File/Save,Save as. 290 291 <sect2>Parameter files <label id="parameter-file"> 292 <p> 293 294 A parameter file contains the detailed settings of your project. It 295 can be used to transfer these settings to a different plot/project. 296 You generate a parameter file with File/Save menu entry selected 297 from the "Plot/Graph appearance popup". You can load the settings 298 contained in a parameter file with File/Open. 299 300 <sect2>Input file formats <label id="files-formats"> 301 <p> 302 303 Grace understands several input files formats. The most basic one 304 is ASCII text files containing space and comma separated columns 305 of data. The data fields can be either numeric (Fortran 'd' and 306 'D' exponent markers are also supported) or alphanumeric (with or 307 without quotes). Several calendar date formats are recognized 308 automatically and you can specify your own reference for numeric 309 date formats. Lines beginnig with "#" are ignored. Blank lines 310 indicate new dataset. 311 Grace also has a command language (see <ref 312), you can 313 include commands in data files using lines having "@" as their 314 first non-blank character, though this is not recommended. 315 Depending on configuration, Grace can also read NetCDF files (see 316 <ref id="configuration" name="configuration">). 317 318 <sect2>Graphs <label id="graph"> 319 <p> 320 321 A graph consists of (every element is optional): a graph frame, axes, 322 a title and a subtitle, a number of sets and additional annotative 323 objects (time stamp string, text strings, lines, boxes and ellipses). 324 325 The graph type can be any of:<label id="graph-types"> 326 <itemize> 327 <item> XY Graph 328 <item> XY Chart 329 <item> Polar Graph 330 <item> Fixed Graph 331 <item> Pie chart 332 </itemize> 333 334 The idea of "XY Chart" is to plot bars (or symbols in general) of 335 several sets side by side, assuming the abscissas of all the sets are 336 the same (or subsets of the longest set). 337 338 <sect2>Datasets <label id="datasets"> 339 <p> 340 341 A dataset is a collection of points with x and y coordinates, up to 342 four optional data values (which, depending on the set type, can be 343 displayed as error bars or like) and one optional character string. 344 345 <sect2>Sets <label id="sets"> 346 <p> 347 348 A set is a way of representing datasets. It consists 349 of a pointer to a dataset plus a collection of parameters describing 350 the visual appearance of the data (like color, line dash pattern etc). 351 352 The set type can be any of the following: 353 354 <p> 355 356 <table loc="htbp"> 357 <tabular ca="lcp{9cm}"> 358 <hline> 359 Set type | # of num. cols | Description @ 360 <hline> 361 XY | 2 | An X-Y scatter and/or line plot, plus (optionally) an annotated value @ 362 XYDX | 3 | Same as XY, but with error bars (either one- or two-sided) along X axis @ 363 XYDY | 3 | Same as XYDX, but error bars are along Y axis @ 364 XYDXDX | 4 | Same as XYDX, but left and right error bars are defined separately @ 365 XYDYDY | 4 | Same as XYDXDX, but error bars are along Y axis @ 366 XYDXDY | 4 | Same as XY, but with X and Y error bars (either one- or two-sided) @ 367 XYDXDXDYDY | 6 | Same as XYDXDY, but left/right and upper/lower error bars are defined separately @ 368 BAR | 2 | Same as XY, but vertical bars are used instead of symbols @ 369 BARDY | 3 | Same as BAR, but with error bars (either one- or two-sided) along Y axis @ 370 BARDYDY | 4 | Same as BARDY, but lower and upper error bars are defined separately @ 371 XYHILO | 5 | Hi/Low/Open/Close plot @ 372 XYZ | 3 | Same as XY; makes no sense unless the annotated value is Z @ 373 XYR | 3 | X, Y, Radius. Only allowed in Fixed graphs @ 374 XYSIZE | 3 | Same as XY, but symbol size is variable @ 375 XYCOLOR | 3 | X, Y, color index (of the symbol fill)@ 376 XYCOLPAT | 4 | X, Y, color index, pattern index (currently used for Pie charts only) @ 377 XYVMAP | 4 | Vector map @ 378 XYBOXPLOT | 6 | Box plot (X, median, upper/lower limit, upper/lower whisker) @ 379 <hline> 380 </tabular> 381 <caption> 382 Set types 383 </caption> 384 </table> 385 386 <p> 387 388 Not all set types, however, can be plotted on any graph type. The 389 following table summarizes it: 390 391 <p> 392 393 <table loc="htbp"> 394 <tabular ca="lccccc"> 395 <hline> 396 Set type | XY Graph | XY Chart | Fixed | Polar | Pie @ 397 <hline> 398 XY | + | + | + | + | + @ 399 XYDX | + | - | + | - | - @ 400 XYDY | + | + | + | - | - @ 401 XYDXDX | + | - | + | - | - @ 402 XYDYDY | + | + | + | - | - @ 403 XYDXDY | + | - | + | - | - @ 404 XYDXDXDYDY | + | - | + | - | - @ 405 BAR | + | + | + | - | - @ 406 BARDY | + | + | - | - | - @ 407 BARDYDY | + | + | - | - | - @ 408 XYHILO | + | - | - | - | - @ 409 XYZ | + | - | + | + | - @ 410 XYR | - | - | + | - | - @ 411 XYSIZE | + | + | + | + | - @ 412 XYCOLOR | + | + | + | + | + @ 413 XYCOLPAT | - | - | - | - | + @ 414 XYVMAP | + | - | + | - | - @ 415 XYBOXPLOT | + | - | - | - | - @ 416 <hline> 417 </tabular> 418 <caption> 419 Graph/Set type connection 420 </caption> 421 </table> 422 423 424 <sect2>Regions <label id="regions"> 425 <p> 426 427 Regions are sections of the graph defined by the interior or exterior 428 of a polygon, or a half plane defined by a line. Regions are used to 429 restrict data transformations to a geometric area occupied by region. 430 431 <sect2>Real Time Input <label id="RTI"> 432 <p> 433 434 Real Time Input refers to the ability Grace has to be 435 fed in real time by an external program. The Grace 436 process spawned by the driver program is a full featured 437 Grace process: the user can interact using the GUI at the 438 same time the program sends data and commands. The process 439 will adapt itself to the incoming data rate. 440 441 <sect2>Hotlinks <label id="hotlinks"> 442 <p> 443 444 Hotlinks are sources containing varying data. Grace can be 445 instructed a file or a pipe is a hotlink in which case it 446 will provide specific commands to refresh the data on a 447 mouse click (a later version will probably allow automatic 448 refresh). 449 450 <sect2>Devices<label id="devices"> 451 <p> 452 Grace allows the user to choose between several output 453 devices to produce its graphics. The current list of 454 supported devices is: 455 456 <itemize> 457 <item> X11 458 <item> PostScript (level 1 and level 2) 459 <item> EPS (encapsulated PostScript) 460 <item> Metafile (which is Grace format, used at the moment mostly 461 for debugging purposes) 462 <item> MIF (Maker Interchange Format used by FrameMaker) 463 <item> SVG (Scalable Vector Graphics, a language for describing 464 two-dimensional vector and mixed vector/raster graphics 465 in XML) 466 <item> PDF (depends on extra libraries, 467 see <ref name="configuration" id="configuration">) 468 <item> PNM (portable anymap file format) 469 <item> JPEG (depends on extra libraries, 470 see <ref name="configuration" id="configuration">) 471 <item> PNG (depends on extra libraries, 472 see <ref name="configuration" id="configuration">) 473 </itemize> 474 475 <p> 476 Note that Grace no longer supports GIF due to 477 the copyright policy of Unisys. Grace can also be 478 instructed to launch conversion programs automatically 479 based on file name. As an example you can produce MIF 480 (FrameMaker Interchange Format) or Java applets using 481 pstoedit, or almost any image format using the netpbm suite 482 (see the <url name="FAQ" url="FAQ.html">). 483 484 <sect2>Magic path<label id="magic-path"> 485 <p> 486 In many cases, when Grace needs to access a file given with a 487 relative <tt>pathname</tt>, it searches for the file along the 488 following path: 489 <tt>./pathname:./.grace/pathname:~/.grace/pathname:$GRACE_HOME/pathname</tt> 490 491 <sect2>Dynamic modules<label id="dynamic-modules"> 492 <p> 493 Grace can access external functions present in either system 494 or third-party shared libraries or modules specially compiled 495 for use with it. The term dynamic refers to the possibility 496 Grace has to open the library at run time to find the code of 497 the external function, there is no need to recompile Grace 498 itself (the functions already compiled in Grace are 499 "statically linked"). 500 501 <sect2>Coordinate frames <label id="coordinates"> 502 503 <p> 504 There are two types of coordinates in Grace: the <bf>world 505 coordinates</bf> and the <bf>viewport coordinates</bf>. Points of 506 data sets are defined in the world coordinates. The viewport 507 coordinates correspond to the image of the plot drawn on the canvas 508 (or printed on, say, PS output page). The transformation converting 509 the world coordinates into the viewport ones is determined by both 510 the graph type and the axis scaling. 511 </p> 512 513 <p> 514 Actually, there is yet another level in the hierarchy of coordinates 515 - the <bf>device coordinates</bf>. However, you (as a user of Grace) 516 should not worry about the latter. The mapping between the viewport 517 coordinates and the device coordinates is always set in such a way 518 that the origin of the viewport corresponds to the left bottom corner 519 of the device page, the smallest of the device dimensions corresponds 520 to one unit in the viewport coordinates. Oh, and the most important 521 thing about the viewport → device transformation is that it is 522 homotetic, i.e. a square is guaranteed to remain a square, not a 523 rectangle, a circle remains a circle (not an ellipse) etc. 524 </p> 525 526 <sect1>Invocation 527 <p> 528 529 <sect2>Operational mode 530 <p> 531 532 With respect to the user interface, there are three modes of 533 operation that Grace can be invoked in. The full-featured GUI-based 534 version is called <tt>xmgrace</tt>. A batch-printing version is 535 called <tt>gracebat</tt>. A command-line interface mode is called 536 <tt>grace</tt>. Usually, a single executable is called in all cases, 537 with two of the three files being (symbolic) links to a "real" one. 538 539 <sect2>Command line options 540 <p> 541 <descrip> 542 <tag> -autoscale <it>x|y|xy</it> </tag> 543 Override any parameter file settings 544 545 <tag> -barebones </tag> 546 Turn off all toolbars 547 548 <tag> -batch <it>batch_file</it> </tag> 549 Execute batch_file on start up (i.e., after all other options 550 have been processed and the UI initialized) 551 552 <tag> -block <it>block_data</it> </tag> 553 Assume data file is block data 554 555 <tag> -bxy <it>x:y:etc.</it> </tag> 556 Form a set from the current block data set using the current set 557 type from columns given in the argument 558 559 <tag> -datehint <it>iso|european|us|days|seconds|nohint</it> </tag> 560 Set the hint for dates analysis 561 562 <tag> -dpipe <it>descriptor</it> </tag> 563 Read data from descriptor (anonymous pipe) on startup 564 565 <tag> -fixed <it>width</it> <it>height</it> </tag> 566 Set canvas size fixed to width*height 567 568 <tag> -free </tag> Use free page layout 569 570 <tag> -graph <it>graph_number</it> </tag> 571 Set the current graph number 572 573 <tag> -graphtype <it>xy|chart|fixed|polar|pie</it> </tag> 574 Set the type of the current graph 575 576 <tag> -hardcopy </tag> 577 No interactive session, just print and quit 578 579 <tag> -hdevice <it>hardcopy_device_name</it> </tag> 580 Set default hardcopy device 581 582 <tag> -install </tag> 583 Install private colormap 584 585 <tag> -legend <it>load</it> </tag> 586 Turn the graph legend on 587 588 <tag> -log <it>x|y|xy</it> </tag> 589 Set the axis scaling of the current graph to logarithmic 590 591 <tag> -maxpath <it>length</it> </tag> 592 Set the maximal drawing path length 593 594 <tag> -mono </tag> 595 Run Grace in monochrome mode (affects the display only) 596 597 <tag> -netcdf <it>file</it> </tag> 598 Assume data <it>file</it> is in netCDF format. This option is 599 present only if the netCDF support was compiled in 600 601 <tag> -netcdfxy <it>X_var</it> <it>Y_var</it> </tag> 602 If -netcdf was used previously, read from the netCDF file 603 <it>X_var</it> <it>Y_var</it> variables and create a set. If 604 <it>X_var</it> name is "null" then load the index of Y to X. 605 This option is present only if the netCDF support was compiled 606 in 607 608 <tag> -noask </tag> 609 Assume the answer is yes to all requests - if the operation would 610 overwrite a file, Grace will do so without prompting 611 612 <tag> -noinstall </tag> 613 Don't use private colormap 614 615 <tag> -noprint </tag> 616 In batch mode, do not print 617 618 <tag> -nosafe </tag> 619 Disable safe mode 620 621 <tag> -nosigcatch </tag> 622 Don't catch signals 623 624 <tag> -npipe <it>file</it> </tag> 625 Read data from named pipe on startup 626 627 <tag> -nxy <it>nxy_file</it> </tag> 628 Assume data file is in X Y1 Y2 Y3 ... format 629 630 <tag> -param <it>parameter_file</it> </tag> 631 Load parameters from parameter_file to the current graph 632 633 <tag> -pexec <it>parameter_string</it> </tag> 634 Interpret string as a parameter setting 635 636 <tag> -pipe </tag> 637 Read data from stdin on startup 638 639 <tag> -printfile </tag> <it>file</it> 640 Save print output to file 641 642 <tag> -remove </tag> 643 Remove data file after read 644 645 <tag> -results <it>results_file</it> </tag> 646 Write results of some data manipulations to results_file 647 648 <tag> -rvideo </tag> 649 Exchange the color indices for black and white 650 651 <tag> -safe </tag> 652 Run in the safe mode (default) - no file system modifications 653 are allowd through the batch language 654 655 <tag> -saveall <it>save_file</it> </tag> 656 Save all graphs to save_file 657 658 <tag> -seed <it>seed_value</it> </tag> 659 Integer seed for random number generator 660 661 <tag> -settype <it>xy|xydx|...</it> </tag> 662 Set the type of the next data file 663 664 <tag> -source <it>disk|pipe</it> </tag> 665 Source type of next data file 666 667 <tag> -timer <it>delay</it> </tag> 668 Set allowed time slice for real time inputs to delay ms 669 670 <tag> -timestamp </tag> Add timestamp to plot 671 672 <tag> -version </tag> 673 Show the program version 674 675 <tag> -viewport <it>xmin ymin xmax ymax</it> </tag> 676 Set the viewport for the current graph 677 678 <tag> -wd <it>directory</it> </tag> 679 Set the working directory 680 681 <tag> -world <it>xmin ymin xmax ymax</it> </tag> 682 Set the world coordinates for the current graph 683 684 <tag> -usage|-help </tag> 685 This message 686 687 </descrip> 688 689 <sect1>Customization 690 <p> 691 <sect2>Environment variables <label id="environment-variables"> 692 <p> 693 <itemize> 694 <item> GRACE_HOME 695 <p> 696 Set the location of Grace. This will be where help files, 697 auxiliary programs, and examples are located. If you are unable 698 to find the location of this directory, contact your system 699 administrator. 700 <p> 701 <item> GRACE_PRINT_CMD 702 <p> 703 Print command. If the variable is defined but is an empty 704 string, "Print to file" will be selected as default. 705 <p> 706 <item> GRACE_EDITOR 707 <p> 708 The editor used for manual editing of dataset values. 709 <p> 710 <item> GRACE_HELPVIEWER 711 <p> 712 The shell command to run an HTML viewer for on-line browsing of 713 the help documents. Must include at least one instance of "%s" 714 which will be replaced with the actual URL by Grace. 715 <p> 716 <item> GRACE_FFTW_WISDOM_FILE and 717 GRACE_FFTW_RAM_WISDOM 718 <p> 719 These flags control behavior of the FFTW planner (see 720 <ref id="fftw-tuning" name="FFTW tuning"> for detailed info) 721 <p> 722 </itemize> 723 724 <sect2>Init file<label id="gracerc"> 725 <p> 726 Upon start-up, Grace loads its init file, <tt>gracerc</tt>. The file 727 is searched for in the magic path (see 728 <ref id="magic-path" name="magic path">); once found, the rest of the 729 path is ignored. It's recommended that in the <tt>gracerc</tt> file, 730 one doesn't use statements which are part of a project file - such 731 defaults, if needed, should be set in the default template (see 732 <ref id="default-template" name="default template">). 733 734 <sect2>Default template<label id="default-template"> 735 <p> 736 Whenever a new project is started, Grace loads the default template, 737 <tt>templates/Default.agr</tt>. The file is searched for in the magic 738 path (see <ref id="magic-path" name="magic path">); once found, the 739 rest of the path is ignored. It's recommended that in the default 740 template, one doesn't use statements which are NOT part of a project 741 file - such defaults, if needed, should be set in the 742 <tt>gracerc</tt> (see <ref id="gracerc" name="init file">). 743 744 <sect2>X resources 745 <p> 746 747 The following Grace-specific X resource settings are supported: 748 749 <itemize> 750 <item> XMgrace.invertDraw 751 <newline> 752 Use GXinvert rather than GXxor for rubber-band lines. 753 If the rubber-banding for zooms and lines, etc. doesn't 754 appear on the canvas, set this resource to yes. 755 <newline> 756 757 <item> XMgrace.allowDoubleClick 758 <newline> 759 When Yes, allow double clicks on the canvas to bring up various 760 popups depending on the location of the pointer when the double 761 click occurs. 762 <newline> 763 764 <item> XMgrace.toolBar 765 <newline> 766 Enables button toolbar 767 <newline> 768 769 <item> XMgrace.statusBar 770 <newline> 771 Enables status bar 772 <newline> 773 774 <item> XMgrace.locatorBar 775 <newline> 776 Enables locator bar 777 <newline> 778 779 </itemize> 780 781 782 It is also possible to customize menus by assigning key accelerators to 783 any item. 784 785 You'll need to derive the item's X resource name from the respective 786 menu label, which is easily done following these rules: 787 <itemize> 788 <item> All non-alphanumeric characters are skipped 789 <item> Start with lower case; each new word (if any) continues from 790 the capital letter 791 <item> Add the item's type to the end - "Menu" for pulldown menus, 792 "Button" for menu buttons. 793 </itemize> 794 795 For example, in order to make Grace popup the Non-linear curve fitting 796 by pressing Control+F, you would add the following two lines 797 798 <tt> 799 XMgrace*transformationsMenu.nonLinearCurveFittingButton.acceleratorText: Ctrl+F 800 <newline> 801 XMgrace*transformationsMenu.nonLinearCurveFittingButton.accelerator: Ctrl<Key>f 802 </tt> 803 804 805 to your <tt>.Xresources</tt> file (the file which is read when an X 806 session starts; it could be <tt>.Xdefaults</tt>, <tt>.Xsession</tt> or 807 some other file - ask your system administrator when in doubt). 808 809 <p> 810 Similarly, it may be desirable to alter default filename patterns of 811 file selection dialogs. The recipe for the dialog's name is like for 812 menu buttons outlined above, with "Button" being replaced with "FSB". 813 E.g., to list all files in the "Open project" dialog ("File/Open..."), 814 set the following resource: 815 816 <tt> 817 XMgrace*openProjectFSB.pattern: * 818 </tt> 819 820 </p> 821 822 823 <!-- **************************************** --> 824 <sect>Guide to the graphical user interface<label id="GUI-guide"> 825 <p> 826 <sect1>GUI controls 827 <p> 828 This section describes interface controls - basic building blocks, used in 829 many popups. 830 </p> 831 <sect2>File selection dialogs <label id="FS-dialog"> 832 <p> 833 Whenever the user is expected to provide a filename, either for reading 834 in or writing some data, a file selection dialog is popped up. In 835 addition to the standard entries (the directory and file lists and the 836 filter entry), there is a pulldown menu for quick directory change to 837 predefined locations (the current working directory, user's home 838 directory and the file system root). Also, a "Set as cwd" button is 839 there which allows to set any directory as you navigate through the 840 directory tree as the current working directory (cwd). Once defined, it 841 can be used in any other file selection dialog to switch to that 842 directory quickly. 843 844 <sect2>List selectors <label id="list-selector"> 845 <p> 846 Various selectors are available in several popups. They all display 847 lists of objects (graphs, sets, ...) and can be used to perform 848 simple operations on these objects (copying, deleting, ...). The 849 operations are available from a popup menu that appears when pressing 850 mouse button 3 on them. Depending on the required functionality, they 851 may allow multiple choices or not. The following 852 shortcuts are enabled (if the result of an action would contradict the 853 list's selection policy, this would be ignored): 854 <itemize> 855 <item> Ctrl+a select all 856 <item> Ctrl+u unselect all 857 <item> Ctrl+i invert selection 858 </itemize> 859 860 <sect3> Graph selector <label id="graph-selector"> 861 <p> 862 The operations that can be performed on graphs through the graph 863 selector's popup menu are: 864 <itemize> 865 <item> focus to 866 <item> hide 867 <item> show 868 <item> duplicate 869 <item> kill 870 <item> swap 871 <item> create new 872 </itemize> 873 All this operations are not available in every instance of the 874 selector. For example in the "read sets" popup only one graph can 875 be selected at a time, and the swap operation is disabled. 876 877 Double-clicking on a list entry will switch the focus to that graph. 878 879 <sect3> Set selector <label id="set-selector"> 880 <p> 881 The operations that can be performed on sets through the set 882 selector's popup menu are: 883 <itemize> 884 <item> hide 885 <item> show 886 <item> bring to front 887 <item> send to back 888 <item> duplicate 889 <item> kill 890 <item> kill data 891 <item> swap 892 <item> edit 893 <itemize> 894 <item> in spreadsheet (see 895 <ref name="Spreadsheet data set editor" id="SSEditor">) 896 <item> in text editor 897 </itemize> 898 <item> create new 899 <itemize> 900 <item> by formula 901 <item> in spreadsheet (see 902 <ref name="Spreadsheet data set editor" id="SSEditor">) 903 <item> in text editor 904 <item> from block data 905 </itemize> 906 <item> pack all sets 907 <item> selector operations 908 <itemize> 909 <item> view set comments 910 <item> show data-less 911 <item> show hidden 912 <item> select all 913 <item> unselect all 914 <item> invert selection 915 <item> update 916 </itemize> 917 </itemize> 918 919 Double-clicking on a list entry will open the spreadsheet editor 920 (see <ref name="Spreadsheet data set editor" id="SSEditor">) on 921 the set data. 922 923 <sect1>The main window<label id="main-window"> 924 <p> 925 926 <sect2>The canvas<label id="canvas"> 927 <p> 928 929 <sect3>Canvas hotkeys 930 <p> 931 932 When the pointer focus is on the canvas (where the graph is drawn), 933 there are some shortcuts to activate several actions. They are: 934 935 <itemize> 936 <item> Ctrl <Key>A: Autoscale the current graph 937 <item> Ctrl <Key>D: Delete an object 938 <item> Ctrl <Key>L: Move current graph legend 939 <item> Ctrl <Key>M: Move an object 940 <item> Ctrl <Key>T: Place timestamp 941 <item> Ctrl <Key>U: Refresh hotlinks 942 <item> Ctrl <Key>V: Set the viewport with mouse 943 <item> Ctrl <Key>Z: Zoom 944 <item> Ctrl Alt <Key>L: Draw a line 945 <item> Ctrl Alt <Key>B: Draw a box 946 <item> Ctrl Alt <Key>E: Draw an ellipse 947 <item> Ctrl Alt <Key>T: Write a text string 948 </itemize> 949 950 <sect3>Clicks and double clicks<label id="clicks"> 951 <p> 952 A single click inside a graph switches focus to that graph. This is the 953 default policy, but it can be changed from the "Edit/Preferences" 954 popup. 955 </p> 956 957 <p> 958 Double clicking on parts of the canvas will invoke certain actions 959 or raise some popups: 960 961 <itemize> 962 <item> on a focus marker: move selected viewport corner 963 <item> on an axis: "Plot/Axis properties" popup 964 <item> on a set: "Plot/Set appearance" popup 965 <item> on a legend: "Plot/Graph appearance" popup 966 <item> on a (sub)title: "Plot/Graph appearance" popup 967 <item> on an object (box, line, ...): a popup for editing 968 properties of that object 969 </itemize> 970 971 The double clicking actions can be enabled/disabled from the 972 "Edit/Preferences" popup. 973 974 <sect2>Toolbar buttons<label id="toolbar"> 975 <p> 976 Along the left-hand side of the canvas (if shown) is the ToolBar. It 977 is armed with several buttons to provide quick and easy access to the 978 more commonly used Grace functions. 979 980 <itemize> 981 <item> <tt> Draw</tt>: This will redraw the canvas and sets. 982 Useful if "Auto Redraw" has been deselected in the Edit|Preferences 983 dialog or after executing commands directly from the Window|Commands 984 interpreter. 985 <p> 986 987 <item> <tt> Lens</tt>: A zoom lens. Click on the lens, then select the 988 area of interest on the graph with the "rubber band". The region 989 enclosed by the rubber band will fill the entire graph. 990 <item> <tt> AS</tt>: AutoScale. Autoscales the graph to contain all 991 data points of all visible (not hidden) sets. 992 <item> <tt> Z/z</tt>: Zoom in/out by 5%. The zoom percentage can be 993 set in the Edit/Preferences dialog. 994 <item> <tt>Arrows</tt>: Scroll active graph by 5% in the arrow's 995 direction. The scroll percentage can be set in the 996 Edit/Preferences dialog. 997 998 <p> 999 <item> <tt>AutoT</tt>: AutoTick Axes. This will find the optimum 1000 number of major and minor tick marks for both axes. 1001 <item> <tt>AutoO</tt>: Autoscale On set. Click the <tt>AutoO</tt> 1002 button, then click on the graph near the set you wish to use for 1003 determining the autoscale boundaries of the graph. 1004 <p> 1005 <item> <tt>ZX,ZY</tt>: Zoom along an axis. These buttons work like the 1006 zoom lens above but are restricted to a single axis. 1007 <item> <tt>AX,AY</tt>: Autoscale one axis only. 1008 <p> 1009 The following buttons deal with the graph stack and there is a good 1010 example under Help/Examples/General Intro/World Stack. 1011 <item> <tt>Pu/Po</tt>: Push and pop the current world settings to/from 1012 the graph stack. When popping, makes the new stack top current. 1013 <item> <tt>PZ</tt>: Push before Zooming. Functions as the zoom lens, 1014 but first pushes the current world settings to the stack. 1015 <item> <tt>Cy</tt>: Cycles through the stack settings of the active 1016 graph. Each graph may have up to twenty layers on the stack. 1017 <p> 1018 <item> <tt>Exit</tt>: Pretty obvious, eh? 1019 </itemize> 1020 1021 1022 <sect1> File menu <label id="file-menu"> 1023 <p> 1024 The file menu contains all entries related to the input/output features 1025 of Grace. 1026 1027 <sect2> New <label id="new"> 1028 <p> 1029 Reset the state of Grace as if it had just started (one empty 1030 graph ranging from 0 to 1 along both axes). If some work has 1031 been done and not yet saved, a warning popup is displayed to 1032 allow canceling the operation. 1033 1034 <sect2> Open <label id="open"> 1035 <p> 1036 Open an existing <ref id="project-file" name="project file">. A 1037 popup is displayed that allow to browse the file system. 1038 1039 <sect2> Save <label id="save"> 1040 <p> 1041 Save the current work in a project file, using the name that was 1042 used for the last open or save. If no name has been set (i.e., 1043 if the project has been created from scratch) act as <ref 1044. 1045 1046 <sect2> Save as <label id="save-as"> 1047 <p> 1048 Save the current work in a project file with a new name. A popup allows 1049 to browse the file system and set the name, the format to use for saving 1050 data points (the default value is "%16.8g"), and a textual description of 1051 the project. A warning is displayed if a file with the same name already 1052 exists. 1053 1054 <sect2> Revert to saved <label id="revert-to-saved"> 1055 <p> 1056 Abandon all modifications performed on the project since the 1057 last save. A confirmation popup is fired to allow the user 1058 canceling the operation. 1059 1060 <sect2> Print setup <label id="print-setup"> 1061 <p> 1062 Set the properties of the printing device. Each device has its 1063 own set of specific options (see <ref name="Device-specific 1064 settings" id="device-settings">). According to the device, the 1065 output can be sent either directly to a printer or directed to a 1066 file. The global settings available for all devices are the 1067 sizing parameters. The size of the graph is fixed. Changing the 'Page' 1068 settings changes the size of the canvas underneath the graph. 1069 Switching between portrait and landscape rotates the canvas. 1070 Make sure the canvas size is large enough to hold your graph. 1071 Otherwise you get a 'Printout truncated' warning. If your canvas 1072 size cannot easily be changed because, for example, you want to 1073 print on letter size paper, you need to adjust the size of 1074 your graph ('Viewport' in Plot/Graph Appearance). 1075 1076 <sect2> Print <label id="print"> 1077 <p> 1078 Print the project using the current printer settings 1079 1080 <sect2> Exit <label id="exit"> 1081 <p> 1082 Exit from Grace. If some work has been done and not saved, a 1083 warning popup will be displayed to allow the user to cancel the 1084 operation. 1085 1086 <sect1> Edit menu <label id="edit-menu"> 1087 <p> 1088 <sect2> Data sets <label id="data-sets"> 1089 <p> 1090 Using the data set popup, you can view the properties of 1091 datasets. This include its type, length, associated comment and 1092 some statistics (min, max, mean, standard deviation). A 1093 horizontal scrollbar at the bottom allows to get the two last 1094 properties, they are not displayed by default. Also note that if 1095 you find some columns are too narrow to show all significant 1096 digits, you can drag the vertical rules using Shift+Button 2. 1097 1098 Using the menu on the top of this dialog, you can manipulate existing 1099 sets or add new ones. Among the most important entries in the menu, 1100 are options to create or modify a set using the spreadsheet data set 1101 editor (see <ref name="Spreadsheet data set editor" id="SSEditor">). 1102 1103 <sect3>Spreadsheet data set editor<label id="SSEditor"> 1104 <p> 1105 The dialog presents an editable matrix of numbers, corresponding 1106 to the data set being edited. The set type (and hence, the number 1107 of data columns) can be changed using the "Type:" selector. 1108 Clicking on a column label pops up a dialog allowing to adjust 1109 the column formatting. Clicking on the row labels toggles the 1110 respective row state (selected/unselected). The selected rows can 1111 be deleted via the dialog's "Edit" menu. Another entry in this 1112 menu lets you add a row; the place of the new row is determined 1113 by the row containing a cell with the keyboard focus on. As well, 1114 just typing in an empty cell will add one or several rows 1115 (filling the intermediate rows with zeros). 1116 1117 To resize columns, drag the vertical rules using Shift+Button 2. 1118 </p> 1119 </sect3> <!-- Spreadsheet data set editor --> 1120 1121 1122 <sect2> Set operations <label id="set-operations"> 1123 <p> 1124 The set operations popup allows you to interact with sets as a 1125 whole. If you want to operate on the data ordering of the sets, 1126 you should use the <ref name="data set operations" 1127 popup from the Data menu. The popup 1128 allows you to select a source (one set within one graph) and a 1129 destination and perform some action upon them (copy, move, 1130 swap). This popup also give you a quick access to several graph 1131 and set selectors if you want to perform some other operation 1132 like hiding a graph or creating a new set from block data. 1133 1134 <sect2> Arrange graphs <label id="arrange-graphs"> 1135 <p> 1136 This entry fires up a popup to lay out several graphs in a 1137 regular grid given by <bf>M</bf> rows and <bf>N</bf> columns. 1138 1139 The graph selector at the top allows one to select a number of graphs 1140 the arrangement will operate on. If the number of selected graphs 1141 isn't equal to <bf>M</bf> times <bf>N</bf>, new graphs may be created 1142 or extra graphs killed if needed. These options are controlled by the 1143 respective checkboxes below the graph selector. 1144 1145 The order in which the matrix is filled in with the graphs can be 1146 selected (first horizontally then vertically or vise versa, with 1147 either of them inverted). Additionaly, one may choose to fill the matrix 1148 in the snake-like manner (adjacent "strokes" are anti-parallel). 1149 1150 The rest of the controls of the dialog window deal with the matrix 1151 spacing: left/right/top/bottom page offsets (in the viewport 1152 coordinates) and <it>relative</it> inter-cell distances, vertical 1153 and horizontal. Next to each of the vertical/horizontal spacing 1154 spinboxes, a "Pack" checkbox is found. Enabling it effectively sets 1155 the respective inter-cell distance to zero and alter axis tickmark 1156 settings such that only bottom/left-most tickmarks are visible. 1157 1158 If you don't want the regular layout this arrangement gives you, 1159 you can change it afterwards using the mouse (select a graph and 1160 double click on the focus marker, see <ref id="clicks" 1161). 1162 1163 <sect2> Overlay graphs <label id="overlay-graphs"> 1164 <p> 1165 You can overlay a graph on top of another one. The main use of 1166 this feature is to plot several curves using different scales on 1167 the same (apparently) graph. The main difficulty is to be sure 1168 you operate on the graph you want at all times (you can hide one 1169 for a moment if this becomes too difficult). 1170 1171 <sect2> Autoscale <label id="autoscale"> 1172 <p> 1173 Using this entry, you can autoscale one graph or all graphs 1174 according to the specified sets only. This is useful if you need 1175 either to have truly comparable graphs despite every one 1176 contains data of different ranges, or if you want to focus your 1177 attention on one set only while it is displayed with other data 1178 in a complex graph. 1179 1180 <sect2> Regions menu <label id="regions-menu"> 1181 <p> 1182 <sect3> Status <label id="status"> 1183 <p> 1184 This small popup only displays the current state (type and 1185 whether it is active or not) of the existing regions. 1186 1187 <sect3> Define <label id="define"> 1188 <p> 1189 You can define a new region (or redefine an existing one), 1190 the allowed region types are: 1191 1192 <itemize> 1193 <item> Inside polygon 1194 <item> Outside polygon 1195 <item> Above line 1196 <item> Below line 1197 <item> Left of line 1198 <item> Right of line 1199 <item> In horizontal range 1200 <item> In vertical range 1201 <item> Out of horizontal range 1202 <item> Out of vertical range 1203 </itemize> 1204 1205 A region can be either linked to the current graph only or to 1206 all graphs. 1207 1208 <sect3> Clear <label id="clear"> 1209 <p> 1210 This kills a region. 1211 1212 <sect3> Report on <label id="report-on"> 1213 <p> 1214 This popup reports you which sets or points are inside or 1215 outside of a region. 1216 1217 <sect2> Hot links <label id="hot-links"> 1218 <p> 1219 You can link a set to a file or a pipe using this feature. Once 1220 a link has been established, you can update it (i.e., read data 1221 again) by clicking on the update button. 1222 <p> 1223 Currently, only simple XY sets can be used for hotlinks. 1224 1225 <sect2> Set locator fixed point <label id="set-locator-fixed-point"> 1226 <p> 1227 After having selected this menu entry, you can select a point on 1228 a graph that will be used as the origin of the locator display 1229 (just below the menu bar). The fixed point is taken into account 1230 only when the display type of the locator is set to [DX,DY]. 1231 1232 <sect2> Clear locator fixed point <label id="clear-locator-fixed-point"> 1233 <p> 1234 This entry is provided to remove a fixed point set before and 1235 use the default again: point [0, 0]. 1236 1237 <sect2> Locator props <label id="locator-props"> 1238 <p> 1239 The locator props popup allows you to customize the display of 1240 the locator, mainly its type and the format and precision of the 1241 display. You can use all the formats that are allowed in the 1242 graphs scales. 1243 1244 <sect2> Preferences <label id="preferences"> 1245 <p> 1246 The preferences popup allows you to set miscellaneous properties 1247 of your Grace session, such as GUI behavior, cursor type, 1248 date reading hint and reference date used for calendar conversions. 1249 1250 <sect1> Data menu <label id="data-menu"> 1251 <p> 1252 <sect2> Data set operations <label id="data-set-operations"> 1253 <p> 1254 This popup gathers all operations that are related to the 1255 ordering of data points inside a set or between sets. If you 1256 want to operate on the sets as a whole, you should use the <ref 1257 popup from the Edit 1258 menu. You can sort according to any coordinate (X, Y, DX, ...) 1259 in ascending or descending order, reverse the order of the 1260 points, join several sets into one, split one set into several 1261 others of equal lengths, or drop a range of points from a 1262 set. The <ref name="set selector" id="set-selector"> of the 1263 popup shows the number of points in each set in square brackets 1264 like this: G0.S0[63], the points are numbered from 0 to n-1. 1265 1266 <sect2> Transformations menu <label id="transformations-menu"> 1267 <p> 1268 The transformations sub-menu gives you access to all data-mining 1269 features of Grace. 1270 1271 <sect3> Evaluate expression <label id="evaluate-expression"> 1272 <p> 1273 Using evaluate expression allows you to create a set by 1274 applying an explicit formula to another set, or to parts of 1275 another set if you use regions restrictions. 1276 1277 All the classical mathematical functions are available (cos, 1278 sin, but also lgamma, j1, erf, ...). As usual all 1279 trigonometric functions use radians by default but you can 1280 specify a unit if you prefer to say cos (x rad) or sin (3 * y 1281 deg). For the full list of available numerical functions and 1282 operators, see 1283 <ref name="Operators and functions" id="operators-and-functions">. 1284 1285 In the formula, you can use X, Y, Y1, ..., Y4 to denote any 1286 coordinate you like from the source set. An implicit loop 1287 will be used around your formula so if you say: 1288 1289 <tscreen><verb> 1290 x = x - 4966.5 1291 </verb></tscreen> 1292 1293 you will shift all points of your set 4966.5 units to the left. 1294 1295 You can use more than one set in the same formula, like this: 1296 1297 <tscreen><verb> 1298 y = y - 0.653 * sin (x deg) + s2.y 1299 </verb></tscreen> 1300 1301 which means you use both X and Y from the source set but also 1302 the Y coordinate of set 2. Beware that the loop is a simple 1303 loop over the indices, all the sets you use in such an hybrid 1304 expression should therefore have the same number of points 1305 and point i of one set should really be related to point i of 1306 the other set. If your sets do not follow these requirements, 1307 you should first homogenize them using 1308 <ref name="interpolation" id="interpolation">. 1309 1310 <sect3> Histograms <label id="histograms"> 1311 <p> 1312 The histograms popup allows you to compute either standard 1313 or cumulative histograms from the Y coordinates of 1314 your data. Optionally, the histograms can be normalized to 1 (hence 1315 producing a PDF (Probability Distribution Function). 1316 1317 The bins can be either a linear mesh defined by its min, max, and 1318 length values, or a mesh formed by abscissas of another set (in which 1319 case abscissas of the set must form a strictly monotonic array). 1320 1321 <sect3> Fourier transforms <label id="fourier-transforms"> 1322 <p> 1323 This popup is devoted to direct and inverse Fourier transforms 1324 (actually, what is computed is a power spectrum). The default is to 1325 perform a direct transform on unfiltered data and to produce a set 1326 with the index as abscissa and magnitude as ordinate. You can filter 1327 the input data window through triangular, Hanning, Welch, Hamming, 1328 Blackman and Parzen filters. You can load magnitude, phase or 1329 coefficients and use either index, frequency or period as abscissas. 1330 You can choose between direct and inverse Fourier transforms. If you 1331 specify real input data, X is assumed to be equally spaced and 1332 ignored; if you specify complex input data X is taken as the real part 1333 and Y as the imaginary part. 1334 1335 If Grace was configured with the FFTW library (see <ref 1336), then the DFT and 1337 FFT buttons really perform the same transform (so there is no 1338 speed-up in using FFT in this case). If you want Grace can to 1339 use FFTW <it>wisdom</it> files, you should set several <ref 1340 to 1341 name them. 1342 1343 <sect3> Running averages <label id="running-averages"> 1344 <p> 1345 The running average popup allows you to compute some values 1346 on a sliding window over your data. You choose both the value 1347 you need (average, median, minimum, maximum, standard 1348 deviation) and the length of the window and perform the 1349 operation. You can restrict the operation to the points 1350 belonging to (or outside of) a region. 1351 1352 <sect3> Differences <label id="differences"> 1353 <p> 1354 The differences popup is used to compute approximations of 1355 the first derivative of a function with finite 1356 differences. The only choice (apart from the source set of 1357 course) is the type of differences to use: forward, backward 1358 or centered. 1359 1360 <sect3> Seasonal differences <label id="seasonal-differences"> 1361 <p> 1362 The seasonal differences popup is used to subtract data from 1363 a period to data of the preceding period (namely y[i] - y[i + 1364 period]). Beware that the period is entered in terms of index 1365 in the set and not in terms of abscissa! 1366 1367 <sect3> Integration <label id="integration"> 1368 <p> 1369 The integration popup is used to compute the integral of a 1370 set and optionally to load it. The numerical value of the 1371 integral is shown in the text field after 1372 computation. Selecting "cumulative sum" in the choice item 1373 will create and load a new set with the integral and compute 1374 the end value, selecting "sum only" will only compute the end 1375 value. 1376 1377 <sect3> Interpolation/Splines <label id="interpolation"> 1378 <p> 1379 This popup is used to interpolate a set on an array of alternative X 1380 coordinates. This is mainly used before performing some complex 1381 operations between two sets with the <ref name="evaluate 1382 expression" id="evaluate-expression"> popup. 1383 1384 The sampling array can be either a linear mesh defined by its min, 1385 max, and length values, or a mesh formed by abscissas of another set. 1386 1387 Several interpolation methods can be used: linear, spline or Akima 1388 spline. 1389 1390 Note that if the sampling mesh is not entirely within the source set 1391 X bounds, evaluation at the points beyond the bounds will be performed 1392 using interpolation parameters from the first (or the last) segment 1393 of the source set, which can be considered a primitive extrapolation. 1394 This behaviour can be disabled by checking the "Strict" option on the 1395 popup. 1396 1397 The abscissas of the set being interpolated must form a strictly 1398 monotonic array. 1399 1400 <sect3> Regression <label id="regression"> 1401 <p> 1402 The regression popup can be used to fit a set against 1403 polynomials or some specific functions (y=A*x^B, 1404 y=A*exp(B*x), y=A+B*ln(x) and y=1/(A+Bx)) for which a simple 1405 transformation of input data can be used to apply linear 1406 regression formulas. 1407 1408 You can load either the fitted values, the residuals or the 1409 function itself. Choosing to load fitted values or residuals 1410 leads to a set of the same length and abscissas as the 1411 initial set. Choosing to load the function is almost similar 1412 to load the fitted values except that you choose yourself the 1413 boundaries and the number of points. This can be used for 1414 example to draw the curve outside of the data sample range or 1415 to produce an evenly spaced set from an irregular one. 1416 1417 <sect3> Non-linear fit <label id="non-linear-fit"> 1418 <p> 1419 The non linear fit popup can be used for functions outside of 1420 the simple regression methods scope. With this popup you 1421 provide the expression yourself using a0, a1, ..., a9 to 1422 denote the fit parameters (as an example you can say y = a0 * cos 1423 (a1 * x + a2)). You specify a tolerance, starting values and 1424 optional bounds and run several steps before loading the 1425 results. 1426 1427 The fit characteristics (number of parameters, formula, ...) 1428 can be saved in a file and retrieved as needed using the file 1429 menu of the popup. 1430 1431 In the "Advanced" tab, you can additionally apply a restriction to 1432 the set(s) to be fitted (thus ignoring points not satisfying the 1433 criteria), use one of preset weighting schemes or define your own 1434 (notice that "dY" in the preset "1/dY^2" one actually refers to the 1435 third column of the data set; use the "Custom" function if this 1436 doesn't make sense for your data set), and choose whether to load 1437 the fitted values, the residuals or the function itself. Choosing 1438 to load fitted values or residuals leads to a set of the same 1439 length and abscissas as the initial set. Choosing to load the 1440 function is almost similar to load the fitted values except that 1441 you choose yourself the boundaries and the number of points. This 1442 can be used for example to draw the curve outside of the data 1443 sample range or to produce an evenly spaced set from an irregular 1444 one. 1445 1446 <sect3> Correlation/covariance <label id="correlation/covariance"> 1447 <p> 1448 This popup can be used to compute autocorrelation 1449 of one set or cross correlation between two sets. You only 1450 select the set (or sets) and specify the maximum lag. A check 1451 box allows one to evaluate covariance instead of correlation. 1452 The result is normalized so that abs(C(0)) = 1. 1453 1454 <sect3> Digital filter <label id="digital-filter"> 1455 <p> 1456 You can use a set as a weight to filter another set. Only the 1457 Y part and the length of the weighting set are important, the 1458 X part is ignored. 1459 1460 <sect3> Linear convolution <label id="linear-convolution"> 1461 <p> 1462 The convolution popup is used to ... convolve two sets. You 1463 only select the sets and apply. 1464 1465 <sect3> Geometric transforms <label id="geometric-transforms"> 1466 <p> 1467 You can rotate, scale or translate sets using the geometric 1468 transformations popup. You specify the characteristics of 1469 each transform and the application order. 1470 1471 <sect3> Sample points <label id="sample-points"> 1472 <p> 1473 This popup provides two sampling methods. The first one is 1474 to choose a starting point and a step, the second one is to 1475 select only the points that satisfy a boolean expression you 1476 specify. 1477 1478 <sect3> Prune data <label id="prune-data"> 1479 <p> 1480 This popup is devoted to reducing huge sets (and then saving 1481 both computation time and disk space). 1482 1483 The interpolation method can be applied only to ordered sets: 1484 it is based on the assumption that if a real point and an 1485 interpolation based on neighboring points are closer than a 1486 specified threshold, then the point is redundant and can be 1487 eliminated. 1488 1489 The geometric methods (circle, ellipse, rectangle) can be 1490 applied to any set, they test each point in turn and keep 1491 only those that are not in the neighborhood of previous 1492 points. 1493 <p> 1494 1495 <sect2> Feature extraction <label id="feature-extraction"> 1496 <p> 1497 Given a set of curves in a graph, extract a feature from each 1498 curve and use the values of the feature to provide the Y values 1499 for a new curve. 1500 1501 <p> 1502 <table loc="htbp"> 1503 <tabular ca="lp{10cm}"> 1504 <hline> 1505 Feature | Description @ 1506 <hline> 1507 Y minimum | Minimum Y value of set @ 1508 Y maximum | Maximum Y value of set @ 1509 Y average | Average Y value of set @ 1510 Y std. dev. | Standard deviation of Y values @ 1511 Y median | Median Y value @ 1512 X minimum | Minimum X value of set @ 1513 X maximum | Maximum X value of set @ 1514 X average | Average X value of set @ 1515 X std. dev. | Standard deviation of X values @ 1516 X median | Median X value @ 1517 Frequency | Perform DFT (FFT if set length a power of 2) to find largest frequency component @ 1518 Period | Inverse of above @ 1519 Zero crossing | Time of the first zero crossing, + or - going @ 1520 Rise time | Assume curve starts at the minimum and rises to the maximum, get time to go from 10% to 90% of rise. For single exponential curves, this is 2.2*time constant @ 1521 Fall time | Assume curve starts at the maximum and drops to the minimum, get time to go from 90% to 10% of fall @ 1522 Slope | Perform linear regression to obtain slope @ 1523 Y intercept | Perform linear regression to obtain Y-intercept @ 1524 Set length | Number of data points in set @ 1525 Half maximal width | Assume curve starts from the minimum, rises to the maximum and drops to the minimum again. Determine the time for which the curve is elevated more than 50% of the maximum rise. @ 1526 Barycenter X | Barycenter along X axis @ 1527 Barycenter Y | Barycenter along Y axis @ 1528 X (Y max) | X of Maximum Y @ 1529 Y (X max) | Y of Maximum X @ 1530 integral | cumulative sum @ 1531 <hline> 1532 </tabular> 1533 <caption> 1534 Extractable features 1535 </caption> 1536 </table> 1537 1538 <sect2> Import menu <label id="read-menu"> 1539 <p> 1540 <sect3> ASCII <label id="read-sets"> 1541 <p> 1542 Read new sets of data in a graph. A <ref id="graph-selector" 1543 is used to specify the graph where the 1544 data should go (except when reading block data, which are 1545 copied to graphs later on). 1546 1547 Reading as "Single set" means that if the source contains 1548 only one column of numeric data, one set will be created 1549 using the indices (from 1 to the total number of points) as 1550 abscissas and read values as ordinates and that if the source 1551 contains more than one column of data, the first two numeric 1552 columns will be used. Reading as "NXY" means that the first 1553 numeric column will provide the abscissas and all remaining 1554 columns will provide the ordinates of several sets. Reading 1555 as "Block data" means all column will be read and stored and 1556 that another popup will allow to select the abscissas and 1557 ordinates at will. It should be noted that block data are 1558 stored as long as you do not override them by a new read. You 1559 can still retrieve data from a block long after having closed 1560 all popups, using the <ref id="set-selector" name="set 1561 selector">. 1562 1563 The set type can be one of the predefined set presentation types 1564 (see <ref id="sets" name="sets">). 1565 1566 The data source can be selected as "Disk" or "Pipe". In the 1567 first case the text in the "Selection" field is considered to 1568 be a file name (it can be automatically set by the file 1569 selector at the top of the popup). In the latter case the 1570 text is considered to be a command which is executed and 1571 should produce the data on its standard output. On systems 1572 that allows is, the command can be a complete sequence of 1573 programs glued together with pipes. 1574 1575 If the source contains date fields, they should be 1576 automatically detected. Several formats are recognized (see 1577 appendix <ref id="dates" name="dates in grace">). Calendar 1578 dates are converted to numerical dates upon reading. 1579 1580 The "Autoscale on read" menu controls whether, upon reading in new 1581 sets, which axes of the graph should be autoscaled. 1582 1583 <sect3> NetCDF <label id="read-netCDF"> 1584 <p> 1585 This entry exists only if Grace has been compiled with 1586 support for the NetCDF data format (see <ref 1587). 1588 1589 <sect2> Export menu <label id="write-menu"> 1590 <p> 1591 <sect3> ASCII <label id="write-sets"> 1592 <p> 1593 Save data sets in a file. A <ref id="set-selector" name="set 1594 selector"> is used to specify the set to be saved. The format 1595 to use for saving data points can be specified (the default 1596 value is "%16.8g"). A warning is displayed if a file with the 1597 same name already exists. 1598 1599 1600 <sect1> Plot menu <label id="plot-menu"> 1601 <p> 1602 <sect2> Plot appearance <label id="plot-appearance"> 1603 <p> 1604 The plot appearance popup let you set the time stamp properties 1605 and the background color of the page. The color is used outside 1606 of graphs and also on graphs were no specific background color 1607 is set. The time stamp is updated every time the project is modified. 1608 1609 <sect2> Graph appearance <label id="graph-appearance"> 1610 <p> 1611 The graph appearance popup can be displayed from both the plot menu 1612 and by double-clicking on a legend, title, or subtitle of a graph 1613 (see <ref name="Clicks and double clicks" id="clicks">). The graph 1614 selector at the top allows to choose the graph you want to operate 1615 on, it also allows certain common actions through its popup menu (see 1616 <ref name="graph selector" id="graph-selector">). Most of the actions 1617 can also be performed using the "Edit" menu available from the popup 1618 menubar. The main tab includes the properties you will need more 1619 often (title for example), and other tabs are used to fine tune some 1620 less frequently used options (fonts, sizes, colors, placements). 1621 1622 If you need special characters or special formatting in your 1623 title or subtitle, you can use Grace escape sequences (the 1624 sequence will appear verbatim in the text field but will be 1625 rendered on the graph), see <ref name="typesetting" 1626. If you don't remember the mapping between 1627 alphabetic characters and the glyph you need in some specific 1628 fonts (mainly symbol and zapfdingbats), you can invoke the font 1629 tool from the text field by hitting CTRL-e. You can change fonts 1630 and select characters from there, they will be copied back in 1631 the text field when you press the "Accept" button. Beware of 1632 the position of the cursor as you enter text or change font in 1633 the font tool, the character or command will be inserted at this 1634 position, not at the end of the string! 1635 1636 You can save graph appearance parameters or retrieve settings 1637 previously saved via the "File" menu of this popup. In the "Save 1638 parameters" dialog, you can choose to save settings either for 1639 the current graph only or for all graphs. 1640 1641 1642 <sect2> Set appearance <label id="set-appearance"> 1643 <p> 1644 The set appearance popup can be displayed from both the plot 1645 menu and by double-clicking anywhere in a graph (see <ref 1646). The set selector 1647 at the top allows to choose the set you want to operate on, it 1648 also allows certain common actions through its popup menu (see 1649 <ref name="set selector" id="set-selector">). The main tab 1650 gathers the properties you will need more often (line and 1651 symbol properties or legend string for example), and other tabs 1652 are used to fine tune some less frequently used options (drop 1653 lines, fill properties, annotated values and error bars 1654 properties for example). 1655 1656 You should note that despite the legend string related to 1657 <em>one</em> set is entered in the set appearance popup, this is not 1658 sufficient to display it. Displaying <em>all</em> legends is a graph 1659 level decision, so the toggle is in the main tab of the <ref 1660 popup. 1661 1662 If you need special characters or special formatting in your 1663 legend, you can use Grace escape sequences (the sequence will 1664 appear verbatim in the text field but will be rendered on the 1665 graph), see <ref name="typesetting" id="typesetting">. If you 1666 don't remember the mapping between alphabetic characters and the 1667 glyph you need in some specific fonts (mainly symbol and 1668 zapfdingbats), you can invoke the font tool from the text 1669 field by hitting CTRL-e. You can change fonts and select 1670 characters from there, they will be copied back in the text 1671 field when you press the "Accept" button. Beware of the 1672 position of the cursor as you enter text or change font in the 1673 font tool, the character or command will be inserted at this 1674 position, not at the end of the string! 1675 1676 <sect2> Axis properties <label id="axis-properties"> 1677 <p> 1678 The axis properties popup can be displayed from both the "Plot" 1679 menu and by double-clicking exactly on an axis (see <ref 1680). The pulldown menu 1681 at the top allows to select the axis you want to operate on. The 1682 "Active" toggle globally activates or deactivates the axis (all 1683 GUI elements are insensitive for deactivated axes). The start 1684 and stop fields depict the displayed range. Three types of 1685 scales are available: linear, logarithmic or reciprocal, and 1686 you can invert the axis (which normally increases from left to 1687 right and from bottom to top). The main tab includes the 1688 properties you will need more often (axis label, tick 1689 spacing and format for example), and other tabs are used to fine 1690 tune some less frequently used options (fonts, sizes, colors, 1691 placements, stagger, grid lines, special ticks, ...). 1692 1693 If you need special characters or special formatting in your 1694 label, you can use Grace escape sequences (the 1695 sequence will appear verbatim in the text field but will be 1696 rendered on the graph), see <ref name="typesetting" 1697. If you don't remember the mapping between 1698 alphabetic characters and the glyph you need in some specific 1699 fonts (mainly symbol and zapfdingbats), you can invoke the font 1700 tool from the text field by hitting CTRL-e. You can change fonts 1701 and select characters from there, they will be copied back in 1702 the text field when you press the "Accept" button. Beware of 1703 the position of the cursor as you enter text or change font in 1704 the font tool, the character or command will be inserted at this 1705 position, not at the end of the string! 1706 1707 Most of the controls in the dialog should be self-explanatory. One 1708 that is not (and frequently missed) is the "Axis transform" 1709 input field in the "Tick labels" tab. Entering there e.g. "-$t" 1710 will make the tick labels show negates of the real coordinates their 1711 ticks are placed at. You can use any expression understood by the 1712 interpreter (see <ref id="command-interpreter" 1713). 1714 1715 Once you have set the options as you want, you can apply 1716 them. One useful feature is that you can set several axes at 1717 once with the bottom pulldown menu (current axis, all axes 1718 current graph, current axis all graphs, all axes all 1719 graphs). Beware that you always apply the properties of all 1720 tabs, not only the selected one. 1721 1722 <sect1> View menu <label id="view-menu"> 1723 <p> 1724 <sect2> Show locator bar <label id="show-locator-bar"> 1725 <p> 1726 This toggle item shows or hides the locator below the menu bar. 1727 1728 <sect2> Show status bar <label id="show-status-bar"> 1729 <p> 1730 This toggle item shows or hides the status string below the 1731 canvas. 1732 1733 <sect2> Show tool bar <label id="show-tool-bar"> 1734 <p> 1735 This toggle item shows or hides the tool bar at the left of the 1736 canvas. 1737 1738 <sect2> Page setup <label id="page-setup"> 1739 <p> 1740 Set the properties of the display device. It is the same dialog as 1741 in <ref name="Print setup" id="print-setup">. 1742 1743 <sect2> Redraw <label id="redraw"> 1744 <p> 1745 This menu item triggers a redrawing of the canvas. 1746 1747 <sect2> Update all <label id="update-all"> 1748 <p> 1749 This menu item causes an update of all GUI controls. Usually, everything 1750 is updated automatically, unless one makes modifications by entering 1751 commands in the <ref name="Command" id="commands"> tool. 1752 1753 1754 <sect1> Window menu <label id="window-menu"> 1755 <p> 1756 <sect2> Commands <label id="commands"> 1757 <p> 1758 Command driven version of the interface to Grace. Here, commands 1759 are typed at the "Command:" text item and executed when 1760 <Return> is pressed. The command will be parsed and executed, 1761 and the command line is placed in the history list. Items in the 1762 history list can be recalled by simply clicking on them with the 1763 left mouse button. For a reference on the Grace command interpreter, 1764 see <ref id="command-interpreter" name="Command interpreter">. 1765 </p> 1766 <sect2> Point tracking <label id="point-tracking"> 1767 <p> 1768 Not written yet... 1769 </p> 1770 <sect2> Drawing objects <label id="drawing-objects"> 1771 <p> 1772 Not written yet... 1773 </p> 1774 <sect2> Font tool <label id="font-tool"> 1775 <p> 1776 Not written yet... 1777 </p> 1778 <sect2> Console <label id="console"> 1779 <p> 1780 The console window displays errors and results of some numerical 1781 operations, e.g. nonlinear fit (see <ref id="non-linear-fit" 1782). The window is popped up automatically 1783 whenever an error occurs or new result messages appear. This can 1784 be altered by checking the "Options/Popup only on errors" option. 1785 </p> 1786 1787 <sect1> Help menu <label id="help-menu"> 1788 <p> 1789 <sect2> On context <label id="on-context"> 1790 <p> 1791 Click on any element of the interface to get context-sensitive help 1792 on it. Only partially implemented at the moment. 1793 </p> 1794 <sect2> User's guide <label id="users-guide"> 1795 <p> 1796 Browse the Grace user's guide. 1797 </p> 1798 <sect2> Tutorial <label id="tutorial"> 1799 <p> 1800 Browse the Grace tutorial. 1801 </p> 1802 <sect2> FAQ <label id="faq"> 1803 <p> 1804 Frequently Asked Questions with answers. 1805 </p> 1806 <sect2> Changes <label id="changes"> 1807 <p> 1808 The list of changes during the Grace development. 1809 </p> 1810 <sect2> Examples <label id="examples"> 1811 <p> 1812 The whole tree of submenus each loading a sample plot. 1813 </p> 1814 <sect2> Comments <label id="comments"> 1815 <p> 1816 Use this to send your suggestions or bug reports. 1817 </p> 1818 <sect2> License terms <label id="license-terms"> 1819 <p> 1820 Grace licensing terms will be displayed (GPL version 2). 1821 </p> 1822 <sect2> About <label id="about"> 1823 <p> 1824 A popup with basic info on the software, including some 1825 configuration details. More details can be found when running Grace 1826 with the "-version" command line flag. 1827 </p> 1828 1829 <!-- **************************************** --> 1830 <sect>Command interpreter <label id="command-interpreter"> 1831 <p> 1832 1833 <sect1>General notes 1834 1835 <p> 1836 The interpreter parses its input in a line-by-line manner. There may 1837 be several statements per line, separated by semicolon (<tt>;</tt>). 1838 The maximal line length is 4 kbytes (hardcoded). The parser is 1839 case-insensitive and ignores lines beginning with the "<tt>#</tt>" sign. 1840 </p> 1841 1842 <sect1>Definitions 1843 1844 <p> 1845 <table loc="htbp"> 1846 <tabular ca="lp{5cm}l"> 1847 <hline> 1848 Name | Description | Examples @ 1849 <hline> 1850 expr | 1851 Any numeric expression | 1852 1.5 + sin(2) @ 1853 iexpr | 1854 Any expression that evaluates to an integer | 1855 25, 0.1 + 1.9, PI/asin(1) @ 1856 nexpr | 1857 Non-negative iexpr | 1858 2 - 1 @ 1859 indx | 1860 Non-negative iexpr | 1861 @ 1862 sexpr | 1863 String expression | 1864 "a string", "a " . "string", "square root of 1872 Basic types 1873 </caption> 1874 </table> 1875 </p> 1876 1877 <p> 1878 <table loc="htbp"> 1879 <tabular ca="llll"> 1880 <hline> 1881 Expression | Description | Types | Example @ 1882 <hline> 1883 GRAPH[<it>id</it>] | 1884 graph <it>id</it> | 1885 indx <it>id</it> | 1886 GRAPH[0] @ 1887 G<it>nn</it> | 1888 graph <it>nn</it> | 1889 <it>nn</it>: 0-99 | 1890 G0 @ 1891 <hline> 1892 </tabular> 1893 <caption> 1894 <label id="graph-sel"> 1895 Graph selections 1896 </caption> 1897 </table> 1898 </p> 1899 1900 <p> 1901 <table loc="htbp"> 1902 <tabular ca="lp{5cm}ll"> 1903 <hline> 1904 Expression | Description | Types | Example @ 1905 <hline> 1906 <it>graph</it>.SETS[<it>id</it>] | 1907 set <it>id</it> in graph <it>graph</it>| 1908 indx <it>id</it>, graphsel <it>graph</it> | 1909 GRAPH[0].SETS[1] @ 1910 <it>graph</it>.S<it>nn</it> | 1911 set <it>nn</it> in graph <it>graph</it>| 1912 <it>nn</it>: 0-99, graphsel <it>graph</it> | 1913 G0.S1 @ 1914 SET[<it>id</it>] | 1915 set <it>id</it> in the current graph| 1916 indx <it>id</it> | 1917 SET[1] @ 1918 S<it>nn</it> | 1919 set <it>nn</it> in the current graph| 1920 <it>nn</it>: 0-99 | 1921 S1 @ 1922 S_ | 1923 the last implicitly (i.e. as a result of a data transformation) allocated set in the current graph| 1924 - | 1925 S_ @ 1926 S$ | 1927 the active set in the current graph| 1928 - | 1929 S$ @ 1930 <hline> 1931 </tabular> 1932 <caption> 1933 <label id="set-sel"> 1934 Set selections 1935 </caption> 1936 </table> 1937 </p> 1938 1939 <p> 1940 <table loc="htbp"> 1941 <tabular ca="llll"> 1942 <hline> 1943 Expression | Description | Types | Example @ 1944 <hline> 1945 R<it>n</it> | 1946 region <it>n</it> | 1947 <it>n</it>: 0-4 | 1948 R0 @ 1949 <hline> 1950 </tabular> 1951 <caption> 1952 <label id="region-sel"> 1953 Region selections 1954 </caption> 1955 </table> 1956 </p> 1957 1958 1959 <p> 1960 <table loc="htbp"> 1961 <tabular ca="llll"> 1962 <hline> 1963 Expression | Description | Types | Example @ 1964 <hline> 1965 COLOR <it>"colorname"</it> | 1966 a mapped color <it>colorname</it> | 1967 - | 1968 COLOR "red" @ 1969 COLOR <it>id</it> | 1970 a mapped color with ID <it>id</it> | 1971 nexpr <it>id</it> | 1972 COLOR 2 @ 1973 <hline> 1974 </tabular> 1975 <caption> 1976 <label id="color-sel"> 1977 Color selections 1978 </caption> 1979 </table> 1980 </p> 1981 1982 <p> 1983 <table loc="htbp"> 1984 <tabular ca="llll"> 1985 <hline> 1986 Expression | Description | Types | Example @ 1987 <hline> 1988 PATTERN <it>id</it> | 1989 pattern with ID <it>id</it> | 1990 nexpr <it>id</it> | 1991 PATTERN 1 @ 1992 <hline> 1993 </tabular> 1994 <caption> 1995 <label id="pattern-sel"> 1996 Pattern selections 1997 </caption> 1998 </table> 1999 </p> 2000 2001 <p> 2002 <table loc="htbp"> 2003 <tabular ca="llll"> 2004 <hline> 2005 Expression | Description | Types | Example @ 2006 <hline> 2007 X | 2008 the first column | 2009 - | 2010 X @ 2011 Y | 2012 the second column | 2013 - | 2014 Y @ 2015 Y<it>n</it> | 2016 (<it>n</it> + 2)-th column | 2017 <it>n</it> = 0 - 4 | 2018 Y3 @ 2019 <hline> 2020 </tabular> 2021 <caption> 2022 <label id="datacol-sel"> 2023 Data column selections 2024 </caption> 2025 </table> 2026 </p> 2027 2028 <p> 2029 Not finished yet... 2030 </p> 2031 2032 <sect1>Variables 2033 <p> 2034 2035 <table loc="htbp"> 2036 <tabular ca="ll"> 2037 <hline> 2038 Variable | Description @ 2039 <hline> 2040 datacolumn | data column of current ("active") set @ 2041 set.datacolumn | data column of set @ 2042 <hline> 2043 vvar | user-defined array @ 2044 <hline> 2045 vvariable [i:j] | segment of a vector variable (elements from i-th to j-th inclusive, i <= j) @ 2046 <hline> 2047 </tabular> 2048 <caption> 2049 <label id="vvariables"> 2050 Vector variables 2051 </caption> 2052 </table> 2053 2054 <p> 2055 2056 <table loc="htbp"> 2057 <tabular ca="ll"> 2058 <hline> 2059 Variable | Description @ 2060 <hline> 2061 vvariable[i] | i-th element of a vector variable @ 2062 <hline> 2063 var | user-defined variable @ 2064 <hline> 2065 </tabular> 2066 <caption> 2067 <label id="svariables"> 2068 Scalar variables 2069 </caption> 2070 </table> 2071 2072 2073 <sect1>Numerical operators and functions<label id="operators-and-functions"> 2074 <p> 2075 2076 In numerical expressions, the infix format is used. Arguments of 2077 both operators and functions can be either scalars or vector arrays. 2078 Arithmetic, logical, and comparison operators are 2079 given in tables below. 2080 2081 <p> 2082 2083 <table loc="htbp"> 2084 <tabular ca="cl"> 2085 <hline> 2086 Operator | Description @ 2087 <hline> 2088 + | addition @ 2089 - | substraction @ 2090 * | multiplication @ 2091 / | division @ 2092 % | modulus @ 2093 ^ | raising to power @ 2094 <hline> 2095 </tabular> 2096 <caption> 2097 <label id="arithmetic-operators"> 2098 Arithmetic operators 2099 </caption> 2100 </table> 2101 2102 <p> 2103 2104 <table loc="htbp"> 2105 <tabular ca="cl"> 2106 <hline> 2107 Operator | Description @ 2108 <hline> 2109 AND or && | logical AND @ 2110 OR or || | logical OR @ 2111 NOT or ! | logical NOT @ 2112 <hline> 2113 </tabular> 2114 <caption> 2115 <label id="logical-operators"> 2116 Logical operators 2117 </caption> 2118 </table> 2119 2120 <p> 2121 2122 <table loc="htbp"> 2123 <tabular ca="cl"> 2124 <hline> 2125 Operator | Description @ 2126 <hline> 2127 EQ or == | equal @ 2128 NE or != | not equal @ 2129 LT or < | less than @ 2130 LE or <= | less than or equal @ 2131 GT or > | greater than @ 2132 GE or >= | greater than or equal @ 2133 <hline> 2134 </tabular> 2135 <caption> 2136 <label id="comparison-operators"> 2137 Comparison operators 2138 </caption> 2139 </table> 2140 2141 <p> 2142 2143 Another conditional operator is the "?:" (or ternary) operator, which 2144 operates as in C and many other languages. 2145 2146 (expr1) ? (expr2) : (expr3); 2147 2148 This expression evaluates to expr2 if expr1 evaluates to TRUE, and 2149 expr3 if expr1 evaluates to FALSE. 2150 <p> 2151 2152 <table loc="htbp"> 2153 <tabular ca="ll"> 2154 <hline> 2155 Function | Description @ 2156 <hline> 2157 abs(x) | absolute value @ 2158 acos(x) | arccosine @ 2159 acosh(x) | hyperbolic arccosine @ 2160 asin(x) | arcsine @ 2161 asinh(x) | hyperbolic arcsine @ 2162 atan(x) | arctangent @ 2163 atan2(y,x) | arc tangent of two variables @ 2164 atanh(x) | hyperbolic arctangent @ 2165 ceil(x) | greatest integer function @ 2166 cos(x) | cosine @ 2167 cosh(x) | hyperbolic cosine @ 2168 exp(x) | e^x @ 2169 fac(n) | factorial function, n! @ 2170 floor(x) | least integer function @ 2171 irand(n) | random integer less than n @ 2172 ln(x) | natural log @ 2173 log10(x) | log base 10 @ 2174 log2(x) | base 2 logarithm of x @ 2175 maxof(x,y) | returns greater of x and y @ 2176 mesh(n) | mesh array (0 ... n - 1) @ 2177 mesh(x1, x2, n) | mesh array of n equally spaced points between x1 and x2 inclusive @ 2178 minof(x,y) | returns lesser of x and y @ 2179 mod(x,y) | mod function (also x % y) @ 2180 pi | the PI constant @ 2181 rand | pseudo random number distributed uniformly on (0.0,1.0) @ 2182 rand(n) | array of n random numbers @ 2183 rint(x) | round to closest integer @ 2184 rsum(x) | running sum of x @ 2185 sgn(x) | signum function @ 2186 sin(x) | sine function @ 2187 sinh(x) | hyperbolic sine @ 2188 sqr(x) | x^2 @ 2189 sqrt(x) | x^0.5 @ 2190 tan(x) | tangent function @ 2191 tanh(x) | hyperbolic tangent @ 2192 <hline> 2193 </tabular> 2194 <caption> 2195 <label id="functions"> 2196 Functions 2197 </caption> 2198 </table> 2199 2200 <table loc="htbp"> 2201 <tabular ca="ll"> 2202 <hline> 2203 Function | Description @ 2204 <hline> 2205 chdtr(df, x) | chi-square distribution @ 2206 chdtrc(v, x) | complemented Chi-square distribution @ 2207 chdtri(df, y) | inverse of complemented Chi-square distribution @ 2208 erf(x) | error function @ 2209 erfc(x) | complement of error function @ 2210 fdtr(df1, df2, x) | F distribution function @ 2211 fdtrc(x) | complemented F distribution @ 2212 fdtri(x) | inverse of complemented F distribution @ 2213 gdtr(a, b, x) | gamma distribution function @ 2214 gdtrc(a, b, x) | complemented gamma distribution function @ 2215 ndtr(x) | Normal distribution function @ 2216 ndtri(x) | inverse of Normal distribution function @ 2217 norm(x) | gaussian density function @ 2218 pdtr(k, m) | Poisson distribution @ 2219 pdtrc(k, m) | complemented Poisson distribution @ 2220 pdtri(k, y) | inverse Poisson distribution @ 2221 rnorm(xbar,s) | pseudo random number distributed N(xbar,s) @ 2222 stdtr(k, t) | Student's t distribution @ 2223 stdtri(k, p) | functional inverse of Student's t distribution @ 2224 <hline> 2225 </tabular> 2226 <caption> 2227 <label id="stat-functions"> 2228 Statistical functions 2229 </caption> 2230 </table> 2231 2232 <table loc="htbp"> 2233 <tabular ca="ll"> 2234 <hline> 2235 Function | Description @ 2236 <hline> 2237 ai(x), bi(x) | Airy functions (two independent solutions of the differential equation <tt>y''(x) = xy</tt>) @ 2238 beta(x) | beta function @ 2239 chi(x) | hyperbolic cosine integral @ 2240 ci(x) | cosine integral @ 2241 dawsn(x) | Dawson's integral @ 2242 ellie(phi, m) | incomplete elliptic integral of the second kind @ 2243 ellik(phi, m) | incomplete elliptic integral of the first kind @ 2244 ellpe(m) | complete elliptic integral of the second kind @ 2245 ellpk(m) | complete elliptic integral of the first kind @ 2246 expn(n, x) | exponential integral @ 2247 fresnlc(x) | cosine Fresnel integral @ 2248 fresnls(x) | sine Fresnel integral @ 2249 gamma(x) | gamma function @ 2250 hyp2f1(a, b, c, x) | Gauss hyper-geometric function @ 2251 hyperg(a, b, x) | confluent hyper-geometric function @ 2252 i0e(x) | modified Bessel function of order zero, exponentially scaled @ 2253 i1e(x) | modified Bessel function of order one, exponentially scaled @ 2254 igam(a, x) | incomplete gamma integral @ 2255 igamc(a, x) | complemented incomplete gamma integral @ 2256 igami(a, p) | inverse of complemented incomplete gamma integral @ 2257 incbet(a, b, x) | incomplete beta integral @ 2258 incbi(a, b, y) | Inverse of incomplete beta integral @ 2259 iv(v, x) | modified Bessel function of order v @ 2260 jv(v, x) | Bessel function of order v @ 2261 k0e(x) | modified Bessel function, third kind, order zero, exponentially scaled @ 2262 k1e(x) | modified Bessel function, third kind, order one, exponentially scaled @ 2263 kn(n, x) | modified Bessel function, third kind, integer order @ 2264 lbeta(x) | natural log of |beta(x)| @ 2265 lgamma(x) | log of gamma function @ 2266 psi(x) | psi (digamma) function @ 2267 rgamma(x) | reciprocal gamma function @ 2268 shi(x) | hyperbolic sine integral @ 2269 si(x) | sine integral @ 2270 spence(x) | dilogarithm @ 2271 struve(v, x) | Struve function @ 2272 voigt(gamma, sigma, x) | Voigt function (convolution of Lorentzian and Gaussian) @ 2273 yv(v, x) | Bessel function of order v @ 2274 zeta(x, q) | Riemann zeta function of two arguments @ 2275 zetac(x) | Riemann zeta function @ 2276 <hline> 2277 </tabular> 2278 <caption> 2279 <label id="spec-functions"> 2280 Special math functions 2281 </caption> 2282 </table> 2283 2284 <p> 2285 2286 <table loc="htbp"> 2287 <tabular ca="cl"> 2288 <hline> 2289 Function | Description @ 2290 <hline> 2291 MIN(x) | min value of array x @ 2292 MAX(x) | max value of array x @ 2293 AVG(x) | average of array x @ 2294 SD(x) | standard deviation of array x @ 2295 SUM(x) | sum of all elements of array x @ 2296 INT(x,y) | integral of y dx @ 2297 IMIN(x) | index of min value of array x @ 2298 IMAX(x) | index of max value of array x @ 2299 <hline> 2300 </tabular> 2301 <caption> 2302 <label id="aggregate-functions"> 2303 Aggregate functions 2304 </caption> 2305 </table> 2306 2307 <sect1>Procedures 2308 <p> 2309 Methods of directly manipulating the data corresponding to the 2310 Data|Transformation menu are described in table <ref 2311. To evaluate expressions, you can directly submit 2312 them to the command interpreter like you would in the evaluate expression 2313 window. As an example, S1.X = S1.X * 0.000256 scales the x-axis 2314 coordinates. The functionality of the 'Sample points' menu entry can be 2315 obtained through RESTRICT. 2316 2317 <table loc="htbp"> 2318 <tabular ca="p{3cm}p{5cm}p{3.5cm}p{3cm}"> 2319 <hline> 2320 Statement | Description | Types | Example @ 2321 <hline> 2322 RUNAVG (set, npoints) | 2323 running average of <it>set</it> using <it>npoints</it> number of points | 2324 nexpr <it>npoints</it> | 2325 RUNAVG (S0, 100) @ 2326 RUNMED (set, npoints) | 2327 running median of <it>set</it> using <it>npoints</it> number of points | 2328 nexpr <it>npoints</it> | 2329 RUNMED (S0, 100) @ 2330 RUNMIN (set, npoints) | 2331 running minimum of <it>set</it> using <it>npoints</it> number of points | 2332 nexpr <it>npoints</it> | 2333 RUNMIN (S0, 100) @ 2334 RUNMAX (set, npoints) | 2335 running maximum of <it>set</it> using <it>npoints</it> number of points | 2336 nexpr <it>npoints</it> | 2337 RUNMAX (S0, 100) @ 2338 RUNSTD (set, npoints) | 2339 running standard deviation of <it>set</it> using <it>npoints</it> number of points | 2340 nexpr <it>points</it> | 2341 RUNSTD (S0, 100) @ 2342 INTERPOLATE (set, mesh, method, strict) | 2343 interpolate <it>set</it> on a sampling <it>mesh</it> using <it>method</it>. <it>strict</it> flag controls whether result should be bound within the source set | 2344 vexpr <it>mesh</it>, <it>method</it>: one of LINEAR, SPLINE, and ASPLINE, onoff <it>strict</it> | 2345 INTERPOLATE (S0, S1.X, ASPLINE, OFF) @ 2346 HISTOGRAM (set, bins, cumulative, normalize) | 2347 calculate histogram of <it>set</it> on defined <it>bins</it>. <it>cumulative</it> and <it>normalize</it> flags control whether to calculate cumulative and normalized (aka PDF) histograms, respectively. Data points are placed at upper limit of the bin | 2348 vexpr <it>bins</it>, onoff <it>cumulative</it>, onoff <it>normalize</it> | 2349 HISTOGRAM (S0, MESH(0, 1, 11), OFF, ON) @ 2350 INTEGRATE (set) | 2351 cumulative integral of <it>set</it> | 2352 | 2353 INTEGRATE (S0) @ 2354 XCOR (set1, set2, maxlag, covar) | 2355 calculate cross-correlation (or -covariance if the <it>covar</it> flag is set) of <it>set1</it> with <it>set2</it> with maximum lag <it>maxlag</it>. | 2356 nexpr <it>maxlag</it>, onoff <it>covar</it> | 2357 XCOR (S0, S0, 50, OFF) @ 2358 LINCONV (set1, set2) | 2359 calculate linear convolution of the array of ordinates of <it>set1</it> with that of <it>set2</it>. | 2360 | 2361 LINCONV (S0, S1) @ 2362 RESTRICT (set, restriction) | 2363 filter <it>set</it> according to logical <it>restriction</it>. The original set will be overwritten | 2364 vexpr <it>restriction</it> | 2365 RESTRICT (S0, S0.X < 0) @ 2366 RESTRICT (set, region, negate) | 2367 filter <it>set</it> by keeping only points lying inside/outside <it>region</it>. The original set will be overwritten | 2368 onoff <it>negate</it>| 2369 RESTRICT (S0, R1, OFF) @ 2370 <hline> 2371 </tabular> 2372 <caption> 2373 <label id="transformations"> 2374 Transformations 2375 </caption> 2376 </table> 2377 </p> 2378 2379 <p> 2380 Not finished yet... 2381 </p> 2382 2383 <sect1>Device parameters 2384 <p> 2385 For producing "hard copy", several parameters can be set via the command 2386 interpreter. They are summarized in table 2387 <ref id="device-parameters" name="Device parameters">. 2388 2389 <table loc="htbp"> 2390 <tabular ca="lp{7cm}"> 2391 <hline> 2392 Command | Description @ 2393 <hline> 2394 PAGE SIZE xdim, ydim | set page dimensions (in pp) of all devices @ 2395 PAGE RESIZE xdim, ydim | same as above plus rescale the current plot accordingly @ 2396 DEVICE <it>"devname"</it> PAGE SIZE xdim, ydim | set page dimensions (in pp) of device <it>devname</it> @ 2397 DEVICE <it>"devname"</it> DPI dpi | set device's dpi (dots per pixel) @ 2398 DEVICE <it>"devname"</it> FONT onoff | enable/disable usage of built-in fonts for device <it>devname</it> @ 2399 DEVICE <it>"devname"</it> FONT ANTIALIASING onoff | enable/disable font aliasing for device <it>devname</it> @ 2400 DEVICE <it>"devname"</it> OP <it>"options"</it> | set device specific options (see <ref id="device-settings" name="Device-specific settings">) @ 2401 HARDCOPY DEVICE <it>"devname"</it> | set device <it>devname</it> as current hardcopy device @ 2402 PRINT TO <it>"filename"</it> | set print output to <it>filename</it> (but do not print) @ 2403 PRINT TO DEVICE | set print output to hardcopy device (but do not print) @ 2404 <hline> 2405 </tabular> 2406 <caption> 2407 <label id="device-parameters"> 2408 Device parameters 2409 </caption> 2410 </table> 2411 2412 <sect1>Flow control 2413 2414 <p> 2415 <table loc="htbp"> 2416 <tabular ca="lp{5cm}ll"> 2417 <hline> 2418 Statement | Description | Types | Example @ 2419 <hline> 2420 EXIT(<it>status</it>) | 2421 cause normal program termination with exit status <it>status</it> | 2422 iexpr <it>status</it> | 2423 EXIT(0) @ 2424 EXIT | 2425 cause normal program termination; same as EXIT(0) | 2426 | 2427 EXIT @ 2428 HELP <it>url</it> | 2429 open a HTML document pointed to by <it>url</it> | 2430 sexpr <it>url</it> | 2431 HELP "doc/FAQ.html" @ 2432 HELP | 2433 open User's Guide | 2434 | 2435 HELP @ 2436 PRINT | 2437 execute print job | 2438 | 2439 PRINT @ 2440 AUTOSCALE | 2441 scale the graph | 2442 | 2443 AUTOSCALE @ 2444 AUTOSCALE XAXES | 2445 scale the graph in x only | 2446 | 2447 AUTOSCALE XAXES @ 2448 AUTOSCALE YAXES | 2449 scale the graph in y only | 2450 | 2451 AUTOSCALE YAXES @ 2452 AUTOSCALE set | 2453 scale to a specific set | 2454 | 2455 AUTOSCALE S0 @ 2456 AUTOTICKS | 2457 autotick all axes | 2458 | 2459 AUTOTICKS @ 2460 REDRAW | 2461 refresh the canvas to reflect the current project state | 2462 | 2463 REDRAW @ 2464 SLEEP <it>n</it> | 2465 sleep for <it>n</it> seconds | 2466 expr <it>n</it> | 2467 SLEEP(3) @ 2468 UPDATEALL | 2469 update the GUI (graph and set selectors etc) to reflect the current project state | 2470 | 2471 UPDATEALL @ 2472 SAVEALL <it>"file"</it> | 2473 save project to <it>file</it> | 2474 sexpr <it>file</it> | 2475 SAVEALL "foo.agr" @ 2476 LOAD <it>"file"</it> | 2477 load project <it>file</it> | 2478 sexpr <it>file</it> | 2479 LOAD "foo.agr" @ 2480 <hline> 2481 </tabular> 2482 <caption> 2483 <label id="flow-control"> 2484 Flow control 2485 </caption> 2486 </table> 2487 </p> 2488 2489 <sect1>Declarations 2490 <p> 2491 User-defined variables are set and used according to the syntax 2492 described in table <ref id="user-variables" name="User variables">. 2493 2494 <table loc="htbp"> 2495 <tabular ca="lp{7cm}ll"> 2496 <hline> 2497 Statement | Description | Types | Example @ 2498 <hline> 2499 DEFINE <it>var</it> | 2500 define new scalar variable <it>var</it> | 2501 | 2502 DEFINE myvar @ 2503 DEFINE <it>vvar</it>[] | 2504 define new vector variable <it>vvar</it> of zero length | 2505 | 2506 DEFINE myvvar[] @ 2507 DEFINE <it>vvar</it>[<it>n</it>] | 2508 define new vector variable <it>vvar</it> of length <it>n</it> | 2509 nexpr <it>n</it> | 2510 DEFINE myvvar[10] @ 2511 <hline> 2512 CLEAR <it>var</it> | 2513 undefine new variable <it>var</it> and deallocate associated storage | 2514 | 2515 CLEAR myvar @ 2516 <hline> 2517 <it>vvar</it> LENGTH <it>n</it> | 2518 reallocate vector variable <it>vvar</it> | 2519 nexpr <it>n</it> | 2520 myvvar LENGTH 25 @ 2521 <hline> 2522 </tabular> 2523 <caption> 2524 <label id="user-variables"> 2525 User variables 2526 </caption> 2527 </table> 2528 </p> 2529 2530 <p> 2531 Not finished yet... 2532 </p> 2533 2534 <sect1>Graph properties 2535 <p> 2536 We divide the commands pertaining to the properties and appearance of 2537 graphs into those which directly manipulate the graphs and those that 2538 affect the appearance of graph elements---the parameters that can appear 2539 in a Grace project file. 2540 </p> 2541 2542 <sect2>Command operations 2543 <p> 2544 General graph creation/annihilation and control commands appear in 2545 table <ref id="graph-ops" name="Graph operations">. 2546 2547 <table loc="htbp"> 2548 <tabular ca="p{3.5cm}p{4.5cm}p{3cm}p{3.5cm}"> 2549 <hline> 2550 Statement | Description | Types | Example @ 2551 <hline> 2552 FOCUS <it>graph</it> | Makes <it>graph</it> current and unhides it if necessary | 2553 graphsel <it>graph</it> | FOCUS G0 @ 2554 KILL <it>graph</it> | Kills <it>graph</it> | graphsel <it>graph</it> | KILL G0 @ 2555 ARRANGE(<it>nrows</it>, <it>ncols</it>, <it>offset</it>, <it>hgap</it>, <it>vgap</it>) | 2556 Arrange existing graphs (or add extra if needed) to form an <it>nrows</it> by <it>ncols</it> matrix, leaving <it>offset</it> at each page edge with <it>hgap</it> and <it>vgap</it> relative horizontal and vertical spacings | 2557 nexpr <it>nrows</it>, <it>ncols</it>, expr <it>offset</it>, <it>hgap</it>, <it>vgap</it> | 2558 ARRANGE(2, 2, 0.1, 0.15, 0.2) @ 2559 ARRANGE(<it>nrows</it>, <it>ncols</it>, <it>offset</it>, <it>hgap</it>, <it>vgap</it>, <it>hvinv</it>, <it>hinv</it>, <it>vinv</it>) | 2560 Same as above, plus additional <it>hvinv</it>, <it>hinv</it>, and <it>vinv</it> flags allowing to alter the order of the matrix filling | 2561 nexpr <it>nrows</it>, <it>ncols</it>, expr <it>offset</it>, <it>hgap</it>, <it>vgap</it>, onoff <it>hvinv</it>, <it>hinv</it>, <it>vinv</it> | 2562 ARRANGE(2, 2, 0.1, 0.15, 0.2, ON, OFF, ON) @ 2563 ARRANGE(<it>nrows</it>, <it>ncols</it>, <it>offset</it>, <it>hgap</it>, <it>vgap</it>, <it>hvinv</it>, <it>hinv</it>, <it>vinv</it>, <it>snake</it>) | 2564 Same as above, plus additional <it>snake</it> flag allowing to fill the matrix in a snake-like fashion | 2565 nexpr <it>nrows</it>, <it>ncols</it>, expr <it>offset</it>, <it>hgap</it>, <it>vgap</it>, onoff <it>hvinv</it>, <it>hinv</it>, <it>vinv</it>, <it>snake</it> | 2566 ARRANGE(2, 2, 0.1, 0.15, 0.2, ON, OFF, ON, ON) @ 2567 </tabular> 2568 <caption> 2569 <label id="graph-ops"> 2570 Graph operations 2571 </caption> 2572 </table> 2573 </p> 2574 <sect2>Parameter settings 2575 <p> 2576 Setting the active graph and its type is accomplished with the commands 2577 found in table <ref id="graphsel-pars" name="Graph selection parameters">. 2578 2579 <table loc="htbp"> 2580 <tabular ca="p{3.25cm}p{4.5cm}p{3.5cm}p{3.25cm}"> 2581 <hline> 2582 Statement | Description | Types | Example @ 2583 <hline> 2584 WITH <it>graph</it> | Makes <it>graph</it> current | 2585 graphsel <it>graph</it> | WITH G0 @ 2586 TYPE <it>type</it> | Sets <it>type</it> of current graph | 2587 graphtype <it>type</it> |TYPE XY @ 2588 <it>graph</it> onoff| (De)Activates selected <it>graph</it> | graphsel <it>graph</it>, onoff | G0 ON @ 2589 <it>graph</it> HIDDEN onoff | Hides selected <it>graph</it> | graphsel <it>graph</it>, onoff | G1 HIDDEN TRUE @ 2590 <it>graph</it> TYPE <it>type</it> | Sets <it>type</it> of <it>graph</it> | 2591 graphsel <it>graph</it>, graphtype <it>type</it> | G0 TYPE XYDY @ 2592 </tabular> 2593 <caption> 2594 <label id="graphsel-pars"> 2595 Graph selection parameters 2596 </caption> 2597 </table> 2598 2599 The axis range and scale of the current graph as well as its location 2600 on the plot viewport are set with the commands listed in table 2601 <ref id="graphaxis-pars" name="Axis parameters">. 2602 <table loc="htbp"> 2603 <tabular ca="p{3.25cm}p{4.5cm}p{3.5cm}p{3.25cm}"> 2604 <hline> 2605 Statement | Description | Types | Example @ 2606 <hline> 2607 WORLD XMIN <it>xmin</it> | Sets minimum value of current graph's x axis to <it>xmin</it> | 2608 expr <it>xmin</it> | WORLD XMIN -10 @ 2609 WORLD XMAX <it>xmax</it> | Sets maximum value of current graph's x axis to <it>xmin</it> | 2610 expr <it>xmax</it> | WORLD XMAX 22.5 @ 2611 WORLD YMIN <it>ymin</it> | Sets minimum value of current graph's y axis to <it>ymin</it> | 2612 expr <it>ymin</it> | WORLD YMIN 0 @ 2613 WORLD YMAX <it>ymax</it> | Sets maximum value of current graph's y axis to <it>ymax</it> | 2614 expr <it>ymax</it> | WORLD YMAX 1e4 @ 2615 VIEW XMIN <it>xmin</it> | Sets left edge of current graph at x=<it>xmin</it> in the viewport | 2616 expr <it>xmin</it>| VIEW XMIN .2 @ 2617 VIEW XMAX <it>xmax</it> | Sets right edge of current graph at x=<it>xmax</it> in the viewport | 2618 expr <it>xmax</it>| VIEW XMAX 1.0 @ 2619 VIEW YMIN <it>ymin</it> | Sets bottom edge of current graph at y=<it>ymin</it> in the viewport | 2620 expr <it>ymin</it>| VIEW YMIN .25 @ 2621 VIEW YMAX <it>ymax</it> | Sets top edge of current graph at y=<it>ymax</it> in the viewport | 2622 expr <it>ymax</it>| VIEW YMAX .75 @ 2623 VIEW <it>xmin</it>, <it>ymin</it>, <it>xmax</it>, <it>ymax</it>| Sets graph's viewport | 2624 expr <it>xmin</it>, <it>ymin</it>, <it>xmax</it>, <it>ymax</it>| VIEW 0.15, 0.15, 1.15, 0.85 @ 2625 XAXES SCALE <it>type</it> | Set scaling of the x axes to <it>type</it> | 2626 <it>type</it>: one of NORMAL, LOGARITHMIC, or RECIPROCAL | XAXES SCALE NORMAL @ 2627 YAXES SCALE <it>type</it> | Set scaling of the y axes to <it>type</it> | 2628 <it>type</it>: one of NORMAL, LOGARITHMIC, or RECIPROCAL | YAXES SCALE LOGARITHMIC @ 2629 XAXES INVERT onoff | If ON, draws xmin to xmax from right to left | 2630 onoff | XAXES INVERT OFF @ 2631 YAXES INVERT onoff | If ON, draws ymin to ymax from top to bottom | 2632 onoff | YAXES INVERT OFF @ 2633 AUTOSCALE ONREAD <it>type</it> | Set automatic scaling on read according to <it>type</it> | 2634 <it>type</it>: one of NONE, XAXES, YAXES, XYAXES | AUTOSCALE ONREAD NONE @ 2635 </tabular> 2636 <caption> 2637 <label id="graphaxis-pars"> 2638 Axis parameters 2639 </caption> 2640 </table> 2641 2642 The commands to set the appearance and textual content of titles and 2643 legends are given in table 2644 <ref id="graphlegend-pars" name="Titles and legends">. 2645 2646 <table loc="htbp"> 2647 <tabular ca="p{3.5cm}p{4.5cm}p{3.25cm}p{3.25cm}"> 2648 <hline> 2649 Statement | Description | Types | Example @ 2650 <hline> 2651 TITLE <it>title</it> | Sets the title of current graph | 2652 sexpr <it>title</it> | TITLE "Foo" @ 2653 TITLE FONT <it>font</it> | Selects font of title string | 2654 fontsel <it>font</it> | TITLE FONT 1 @ 2655 TITLE SIZE <it>size</it> | Sets size of title string | 2656 expr <it>size</it> | TITLE SIZE 1.5 @ 2657 TITLE COLOR <it>color</it> | Sets color of title string | 2658 colorsel <it>color</it> | TITLE COLOR 1 @ 2659 SUBTITLE <it>subtitle</it> | Sets the subtitle of current graph | 2660 sexpr <it>subtitle</it> | SUBTITLE "Bar" @ 2661 SUBTITLE FONT <it>font</it> | Selects font of subtitle string | 2662 fontsel <it>font</it> | SUBTITLE FONT "Times-Italic" @ 2663 SUBTITLE SIZE <it>size</it> | Sets size of subtitle string | 2664 expr <it>size</it> | SUBTITLE SIZE .60 @ 2665 SUBTITLE COLOR <it>color</it> | Sets color of subtitle string | 2666 colorsel <it>color</it> | SUBTITLE COLOR "blue" @ 2667 LEGEND onoff | Toggle legend display |onoff | LEGEND ON @ 2668 LEGEND LOCTYPE <it>type</it> | Posistion legend in <it>type</it> coordinates | 2669 <it>type</it>: either WORLD or VIEW | LEGEND LOCTYPE WORLD @ 2670 LEGEND <it>xloc, yloc</it> | Set location of legend box (upper left corner) | 2671 expr <it>xloc, yloc</it> | LEGEND .5,.75 @ 2672 LEGEND FONT <it>font</it>| Set legend font type | 2673 fontsel <it>font</it> | LEGEND FONT "Helvetica" @ 2674 LEGEND CHAR SIZE <it>size</it> | Sets size of legend label characters (1 is normal)| 2675 expr <it>size</it> | LEGEND CHAR SIZE .30 @ 2676 LEGEND <it>color</it>| Set color of legend text | 2677 colorsel <it>color</it> | LEGEND COLOR 1 @ 2678 LEGEND VGAP <it>gap</it> | Sets vertical gap between legend entries | 2679 nexpr <it>gap</it> | LEGEND VGAP 1 @ 2680 LEGEND HGAP <it>gap</it> | Sets horizontal gap between symbol and description | 2681 nexpr <it>gap</it> | LEGEND HGAP 4 @ 2682 LEGEND LENGTH <it>length</it>| Sets <it>length</it> of legend| 2683 nexpr <it>length</it> | LEGEND LENGTH 5 @ 2684 LEGEND INVERT onoff | Determines relationship between order of sets and order of legend labels | 2685 onoff | LEGEND INVERT true @ 2686 LEGEND BOX onoff | Determines if the legend bounding box is drawn | 2687 onoff | LEGEND BOX off @ 2688 LEGEND BOX COLOR <it>color</it> | Sets color of legend bounding box | colorsel <it>color</it> | 2689 LEGEND BOX COLOR 1@ 2690 LEGEND BOX PATTERN <it>pattern</it>| Sets pattern of legend bounding box | patternsel <it>pattern</it> | 2691 LEGEND BOX PATTERN 2@ 2692 LEGEND BOX LINESTYLE <it>style</it>| Sets line style of bounding box |nexpr <it>style</it> | 2693 LEGEND BOX LINESTYLE 1 @ 2694 LEGEND BOX LINEWIDTH <it>width</it>| Sets line width of bounding box|nexpr <it>width</it> | 2695 LEGEND BOX LINEWIDTH 2 @ 2696 LEGEND BOX FILL onoff | Determines if the legend bounding box is filled | 2697 onoff | LEGEND BOX FILL false @ 2698 LEGEND BOX FILL COLOR <it>color</it> | Sets color of legend box fill | colorsel <it>color</it> | 2699 LEGEND BOX COLOR 3 @ 2700 LEGEND BOX FILL <it>pattern</it>| Sets pattern of legend box fill |patternsel <it>pattern</it> | 2701 LEGEND BOX FILL PATTERN 1@ 2702 </tabular> 2703 <caption> 2704 <label id="graphlegend-pars"> 2705 Titles and legends 2706 </caption> 2707 </table> 2708 </p> 2709 <p> 2710 Not finished yet... 2711 </p> 2712 2713 <sect1>Set properties 2714 <p> 2715 Again, as with the graphs, we separate those parser commands that 2716 manipulate the data in a set from the commands that determine 2717 parameters---elements that are saved in a project file. 2718 2719 <sect2>Commands 2720 <p> 2721 Operations for set I/O are summarized in table 2722 <ref id="set-io" name="Set input, output, and creation">. (Note that 2723 this is incomplete and only lists <it>input</it> commands at the moment.) 2724 2725 <table loc="htbp"> 2726 <tabular ca="p{3.5cm}p{4.5cm}p{3cm}p{3.5cm}"> 2727 <hline> 2728 Statement | Description | Types | Example @ 2729 <hline> 2730 READ <it>"file"</it> | Reads <it>file</it> as a single set | 2731 sexpr <it>file</it> | READ "foo.dat" @ 2732 READ <it>settype</it> <it>"file"</it> | Reads <it>file</it> into a single set of type <it>settype</it> | 2733 xytype <it>settype</it>, sexpr <it>file</it> | READ xydy "bar.dat" @ 2734 READ NXY <it>"file"</it> | Reads <it>file</it> as NXY data | 2735 sexpr <it>file</it> | READ NXY "gad.dat" @ 2736 READ BLOCK <it>"file"</it> | Reads <it>file</it> as block data | 2737 sexpr <it>file</it> | READ BLOCK "zooks.dat" @ 2738 KILL BLOCK | Kills the current block data and frees the associated memory | 2739 | KILL BLOCK @ 2740 BLOCK <it>settype</it> <it>columns</it> | Forms a data set of type <it>settype</it> using <it>columns</it> from current block data file. | 2741 xytype <it>settype</it>, sexpr <it>columns</it> | BLOCK xydxdy "0:2:1:3" @ 2742 WRITE <it>set</it> | writes <it>set</it> to stdout| 2743 setsel <it>set</it> | WRITE G0.S1 @ 2744 WRITE <it>set</it> FORMAT <it>"formatstring"</it> | writes <it>set</it> to stdout using format specification <it>formatstring</it> | 2745 setsel <it>set</it> sexpr <it>formatstring</it> | WRITE G0.S1 FORMAT "%18.8g" @ 2746 WRITE <it>set</it> FILE <it>"file"</it> | writes <it>set</it> to <it>file</it> | 2747 setsel <it>set</it> sexpr <it>file</it> | WRITE G0.S1 FILE "data.dat" @ 2748 WRITE <it>set</it> FILE <it>"file"</it> FORMAT <it>"formatstring"</it> | writes <it>set</it> to <it>file</it> using format specification <it>formatstring</it> | 2749 setsel <it>set</it> sexpr <it>file</it> sexpr <it>formatstring</it> | WRITE G0.S1 FILE "data.dat" FORMAT "%18.8g" @ 2750 </tabular> 2751 <caption> 2752 <label id="set-io"> 2753 Set input, output, and creation 2754 </caption> 2755 </table> 2756 2757 The parser commands analogous to the Data|Data set operations dialogue 2758 can be found in table <ref id="set-ops" name="Set operations">. 2759 <table loc="htbp"> 2760 <tabular ca="p{3.5cm}p{4.5cm}p{3cm}p{3.5cm}"> 2761 <hline> 2762 Statement | Description | Types | Example @ 2763 <hline> 2764 COPY <it>src</it> TO <it>dest</it> | Copies <it>src</it> to <it>dest</it>| 2765 setsel <it>src,dest</it> | COPY S0 TO S1 @ 2766 MOVE <it>src</it> TO <it>dest</it> | Moves <it>src</it> to <it>dest</it> | 2767 setsel <it>src,dest</it> | MOVE G0.S0 TO G1.S0 @ 2768 SWAP <it>src</it> AND <it>dest</it> | Interchanges <it>src</it> and <it>dest</it> | 2769 setsel <it>src,dest</it> | SWAP G0.S0 AND G0.S1 @ 2770 KILL <it>set</it> | Kills <it>set</it> | setsel <it>set</it> | KILL G0.S0 @ 2771 </tabular> 2772 <caption> 2773 <label id="set-ops"> 2774 Set operations 2775 </caption> 2776 </table> 2777 </p> 2778 <p> 2779 Not Finished yet... 2780 2781 <sect2>Parameter settings 2782 <p> 2783 Not written yet... 2784 2785 <!-- **************************************** --> 2786 <sect>Advanced topics 2787 <p> 2788 2789 <sect1>Fonts<label id="fonts"> 2790 <p> 2791 For all devices, Grace uses Type1 fonts. Both PFA (ASCII) and PFB 2792 (binary) formats can be used. 2793 <p> 2794 <sect2>Font configuration 2795 <p> 2796 The file responsible for the font configurations of Grace is 2797 <tt>fonts/FontDataBase</tt>. The first line contains a positive integer 2798 specifying the number of fonts declared in that file. All remaining lines 2799 contain declarations of one font each, composed out of three fields: 2800 <enum> 2801 <item> Font name. The name will appear in the font selector controls. 2802 Also, backend devices that has built-in fonts, will be given the 2803 name as a font identifier. 2804 <item> Font fall-back. Grace will try to use this in case the real 2805 font is not found. 2806 <item> Font filename. The file with the font outline data. 2807 </enum> 2808 <p> 2809 Here is the default <tt>FontDataBase</tt> file: 2810 <tscreen><code> 2811 14 2812 Times-Roman Times-Roman n021003l.pfb 2813 Times-Italic Times-Italic n021023l.pfb 2814 Times-Bold Times-Bold n021004l.pfb 2815 Times-BoldItalic Times-BoldItalic n021024l.pfb 2816 Helvetica Helvetica n019003l.pfb 2817 Helvetica-Oblique Helvetica-Oblique n019023l.pfb 2818 Helvetica-Bold Helvetica-Bold n019004l.pfb 2819 Helvetica-BoldOblique Helvetica-BoldOblique n019024l.pfb 2820 Courier Courier n022003l.pfb 2821 Courier-Oblique Courier-Oblique n022023l.pfb 2822 Courier-Bold Courier-Bold n022004l.pfb 2823 Courier-BoldOblique Courier-BoldOblique n022024l.pfb 2824 Symbol Symbol s050000l.pfb 2825 ZapfDingbats ZapfDingbats d050000l.pfb 2826 </code></tscreen> 2827 <p> 2828 2829 <sect2>Font data files 2830 <p> 2831 For text rastering, three types of files are used. 2832 <enum> 2833 <item> <tt>.pfa</tt>-/<tt>.pfb</tt>-files: These contain the character 2834 outline descriptions. The files are assumed to be in the 2835 <tt>fonts/type1</tt> directory; these are the filenames 2836 specified in the <tt>FontDataBase</tt> configuration file. 2837 <item> <tt>.afm</tt>-files: These contain high-precision font metric 2838 descriptions as well as some extra information, such as kerning 2839 and ligature information for a particular font. It is assumed 2840 that the filename of a font metric file has same basename as the 2841 respective font outline file, but with the <tt>.afm</tt> 2842 extension; the metric files are expected to be found in the 2843 <tt>fonts/type1</tt> directory, too. 2844 <item> <tt>.enc</tt>-files: These contain encoding arrays in a special 2845 but simple form. They are only needed if someone wants to load 2846 a special encoding to re-encode a font. Their place is 2847 <tt>fonts/enc</tt> 2848 </enum> 2849 2850 <sect2>Custom fonts 2851 <p> 2852 It is possible to use custom fonts with Grace. One mostly needs to use 2853 extra fonts for the purpose of localization. For many European 2854 languages, the standard fonts supplied with Grace should contain all the 2855 characters needed, but encoding may have to be adjusted. This is done by 2856 putting a <tt>Default.enc</tt> file with proper encoding scheme into the 2857 <tt>fonts/enc</tt> directory. Grace comes with a few encoding files in 2858 the directory; more can be easily found on the Internet. (If the 2859 <tt>Default.enc</tt> file doesn't exist, the IsoLatin1 encoding will be 2860 used). Notice that for fonts having an encoding scheme in themselves 2861 (such as the Symbol font, and many nationalized fonts) the default 2862 encoding is ignored. 2863 <p> 2864 If you do need to use extra fonts, you should modify the 2865 <tt>FontDataBase</tt> file accordingly, obeying its format. However, 2866 if you are going to exchange Grace project files with other people who 2867 do not have the extra fonts configured, an important thing is to define 2868 reasonable fall-back font names. 2869 <p> 2870 For example, let us assume I use Hebrew fonts, and the configuration file 2871 has lines like these: 2872 <tscreen><code> 2873 ... 2874 Courier-Hebrew Courier courh___.pfa 2875 Courier-Hebrew-Oblique Courier-Oblique courho__.pfa 2876 ... 2877 </code></tscreen> 2878 My colleague, who lives in Russia, uses Cyrillic fonts with Grace 2879 configured like this: 2880 <tscreen><code> 2881 ... 2882 Cronix-Courier Courier croxc.pfb 2883 Cronix-Courier-Oblique Courier-Oblique croxco.pfb 2884 ... 2885 </code></tscreen> 2886 The font mapping information (Font name <-> Font fall-back) is 2887 stored in the Grace project files. Provided that all the localized fonts 2888 have English characters in the lower part of the ASCII table unmodified, 2889 I can send my friend files (with no Hebrew characters, of course) and be 2890 sure they render correctly on his computer. 2891 <p> 2892 Thus, with properly configured national fonts, you can make localized 2893 annotations for plots intended for internal use of your institution, 2894 while being able to exchange files with colleagues from abroad. People 2895 who ever tried to do this with MS Office applications should appreciate 2896 the flexibility :-). 2897 2898 <sect1>Interaction with other applications 2899 <p> 2900 2901 <sect2>Using pipes 2902 <p> 2903 2904 <sect2>Using grace_np library 2905 <p> 2906 The grace_np library is a set of compiled functions that 2907 allows you to launch and drive a Grace subprocess from your C or 2908 Fortran application. Functions are provided to start the 2909 subprocess, to send it commands or data, to stop it or detach 2910 from it. 2911 2912 <table loc="htbp"> 2913 <tabular ca="p{4.5cm}lp{5.5cm}"> 2914 <hline> 2915 Function | Arguments | Description @ 2916 <hline> 2917 int GraceOpenVA | (char *<it>exe</it>, int <it>buf_size</it>, ...) 2918 | launch a Grace executable <it>exe</it> and open a communication channel with it using <it>buf_size</it> bytes for data buffering. The remaining NULL-terminated list of options is command line arguments passed to the Grace process @ 2919 int GraceOpen | (int <it>buf_size</it>) 2920 | equivalent to GraceOpenVA("xmgrace", buf_size, "-nosafe", "-noask", NULL) @ 2921 int GraceIsOpen | (void) | test if a Grace subprocess is currently connected @ 2922 int GraceClose | (void) | close the communication channel and exit the Grace subprocess @ 2923 int GraceClosePipe | (void) | close the communication channel and leave the Grace subprocess alone @ 2924 <hline> 2925 int GraceFlush | (void) | flush all the data remaining in the buffer @ 2926 int GracePrintf | (const char* <it>format</it>, ...) 2927 | format a command and send it to the Grace subprocess @ 2928 int GraceCommand | (const char* <it>cmd</it>) 2929 | send an already formated command to the Grace subprocess @ 2930 <hline> 2931 GraceErrorFunctionType GraceRegisterErrorFunction 2932 | (GraceErrorFunctionType <it>f</it>) 2933 | register a user function <it>f</it> to display library errors @ 2934 <hline> 2935 </tabular> 2936 <caption> 2937 <label id="C functions"> grace_np library C functions. 2938 </caption> 2939 </table> 2940 2941 <table loc="htbp"> 2942 <tabular ca="p{5cm}lp{5cm}"> 2943 <hline> 2944 Function | Arguments | Description @ 2945 <hline> 2946 integer GraceOpenF | (integer <it>buf_size</it>) 2947 | launch a Grace subprocess and open a communication channel with it @ 2948 integer GraceIsOpenF | (void) | test if a Grace subprocess is currently connected @ 2949 integer GraceCloseF | (void) | close the communication channel and exit the Grace subprocess @ 2950 integer GraceClosePipeF | (void) | close the communication channel and leave the Grace subprocess alone @ 2951 <hline> 2952 integer GraceFlushF | (void) | flush all the data remaining in the buffer @ 2953 integer GraceCommandF | (character*(*) <it>cmd</it>) 2954 | send an already formatted command to the Grace subprocess @ 2955 <hline> 2956 GraceFortranFunctionType GraceRegisterErrorFunctionF 2957 | (GraceFortranFunctionType <it>f</it>) 2958 | register a user function <it>f</it> to display library errors @ 2959 <hline> 2960 </tabular> 2961 <caption> 2962 <label id="fortran functions"> grace_np library F77 functions. 2963 </caption> 2964 </table> 2965 2966 <p> There is no Fortran equivalent for the GracePrintf function, 2967 you should format all the data and commands yourself before 2968 sending them with GraceCommandF. 2969 2970 The Grace subprocess listens for the commands you send and 2971 interprets them as if they were given in a batch file. You can 2972 send any command you like (redraw, autoscale, ...). If you want 2973 to send data, you should include them in a command like "g0.s0 2974 point 3.5, 4.2". 2975 2976 Apart from the fact it monitors the data sent via an anonymous 2977 pipe, the Grace subprocess is a normal process. You can interact 2978 with it through the GUI. Note that no error can be sent back to 2979 the parent process. If your application send erroneous commands, 2980 an error popup will be displayed by the subprocess. 2981 2982 If you exit the subprocess while the parent process is still 2983 using it, the broken pipe will be detected. An error code will 2984 be returned to every further call to the library (but you can 2985 still start a new process if you want to manage this situation). 2986 2987 Here is an example use of the library, you will find this 2988 program in the distribution. 2989 2990 <tscreen><code> 2991 #include <stdlib.h> 2992 #include <stdio.h> 2993 #include <unistd.h> 2994 #include <grace_np.h> 2995 2996 #ifndef EXIT_SUCCESS 2997 # define EXIT_SUCCESS 0 2998 #endif 2999 3000 #ifndef EXIT_FAILURE 3001 # define EXIT_FAILURE -1 3002 #endif 3003 3004 void my_error_function(const char *msg) 3005 { 3006 fprintf(stderr, "library message: \"%s\"\n", msg); 3007 } 3008 3009 int 3010 main(int argc, char* argv[]) 3011 { 3012 int i; 3013 3014 GraceRegisterErrorFunction(my_error_function); 3015 3016 /* Start Grace with a buffer size of 2048 and open the pipe */ 3017 if (GraceOpen(2048) == -1) { 3018 fprintf(stderr, "Can't run Grace. \n"); 3019 exit(EXIT_FAILURE); 3020 } 3021 3022 /* Send some initialization commands to Grace */ 3023 GracePrintf("world xmax 100"); 3024 GracePrintf("world ymax 10000"); 3025 GracePrintf("xaxis tick major 20"); 3026 GracePrintf("xaxis tick minor 10"); 3027 GracePrintf("yaxis tick major 2000"); 3028 GracePrintf("yaxis tick minor 1000"); 3029 GracePrintf("s0 on"); 3030 GracePrintf("s0 symbol 1"); 3031 GracePrintf("s0 symbol size 0.3"); 3032 GracePrintf("s0 symbol fill pattern 1"); 3033 GracePrintf("s1 on"); 3034 GracePrintf("s1 symbol 1"); 3035 GracePrintf("s1 symbol size 0.3"); 3036 GracePrintf("s1 symbol fill pattern 1"); 3037 3038 /* Display sample data */ 3039 for (i = 1; i <= 100 && GraceIsOpen(); i++) { 3040 GracePrintf("g0.s0 point %d, %d", i, i); 3041 GracePrintf("g0.s1 point %d, %d", i, i * i); 3042 /* Update the Grace display after every ten steps */ 3043 if (i % 10 == 0) { 3044 GracePrintf("redraw"); 3045 /* Wait a second, just to simulate some time needed for 3046 calculations. Your real application shouldn't wait. */ 3047 sleep(1); 3048 } 3049 } 3050 3051 if (GraceIsOpen()) { 3052 /* Tell Grace to save the data */ 3053 GracePrintf("saveall \"sample.agr\""); 3054 3055 /* Flush the output buffer and close Grace */ 3056 GraceClose(); 3057 3058 /* We are done */ 3059 exit(EXIT_SUCCESS); 3060 } else { 3061 exit(EXIT_FAILURE); 3062 } 3063 } 3064 3065 </code></tscreen> 3066 <p> 3067 To compile this program, type 3068 <tscreen><code> 3069 cc example.c -lgrace_np 3070 </code></tscreen> 3071 If Grace wasn't properly installed, you may need to instruct the 3072 compiler about include and library paths explicitly, e.g. 3073 <tscreen><code> 3074 cc -I/usr/local/grace/include example.c -L/usr/local/grace/lib -lgrace_np 3075 </code></tscreen> 3076 3077 <sect1>FFTW tuning<label id="fftw-tuning"> 3078 <p> 3079 When the FFTW capabilities are compiled in, Grace looks at two environment 3080 variables to decide what to do with the FFTW 'wisdom' capabilities. 3081 First, a quick summary of what this is. The FFTW package is capable of 3082 adaptively determining the most efficient factorization of a set to give 3083 the fastest computation. It can store these factorizations as 'wisdom', 3084 so that if a transform of a given size is to be repeated, it is does not 3085 have to re-adapt. The good news is that this seems to work very well. 3086 The bad news is that, the first time a transform of a given size is 3087 computed, if it is not a sub-multiple of one already known, it takes a LONG 3088 time (seconds to minutes). 3089 <p> 3090 The first environment variable is GRACE_FFTW_WISDOM_FILE. If this is set 3091 to the name of a file which can be read and written (e.g., 3092 $HOME/.grace_fftw_wisdom) then Grace will automatically create this file 3093 (if needed) and maintain it. If the file is read-only, it will be read, 3094 but not updated with new wisdom. If the symbol GRACE_FFTW_WISDOM_FILE 3095 either doesn't exist, or evaluates to an empty string, Grace will drop the 3096 use of wisdom, and will use the fftw estimator (FFTW_ESTIMATE flag sent to 3097 the planner) to guess a good factorization, instead of adaptively 3098 determining it. 3099 <p> 3100 The second variable is GRACE_FFTW_RAM_WISDOM. If this variable is defined 3101 to be non-zero, and GRACE_FFTW_WISDOM_FILE variable is not defined (or is 3102 an empty string), Grace will use wisdom internally, but maintain no 3103 persistent cache of it. This will result in very slow execution times the 3104 first time a transform is executed after Grace is started, but very fast 3105 repeats. I am not sure why anyone would want to use wisdom without 3106 writing it to disk, but if you do, you can use this flag to enable it. 3107 <p> 3108 3109 <sect1>DL modules <label id="dl-modules"> 3110 <p> 3111 Grace can access external functions present 3112 in either system or third-party shared libraries or modules 3113 specially compiled for use with Grace. 3114 3115 <sect2>Function types 3116 <p> 3117 One must make sure, however, that the external function is of one 3118 of supported by Grace types: 3119 <table loc="htbp"> 3120 <tabular ca="ll"> 3121 <hline> 3122 Grace type | Description @ 3123 <hline> 3124 f_of_i | a function of 1 <tt/int/ variable @ 3125 f_of_d | a function of 1 <tt/double/ variable @ 3126 f_of_nn | a function of 2 <tt/int/ parameters @ 3127 f_of_nd | a function of 1 <tt/int/ parameter and 1 <tt/double/ variable @ 3128 f_of_dd | a function of 2 <tt/double/ variables @ 3129 f_of_nnd | a function of 2 <tt/int/ parameters and 1 <tt/double/ variable @ 3130 f_of_ppd | a function of 2 <tt/double/ parameters and 1 <tt/double/ variable @ 3131 f_of_pppd | a function of 3 <tt/double/ parameters and 1 <tt/double/ variable @ 3132 f_of_ppppd | a function of 4 <tt/double/ parameters and 1 <tt/double/ variable @ 3133 f_of_pppppd | a function of 5 <tt/double/ parameters and 1 <tt/double/ variable @ 3134 <hline> 3135 </tabular> 3136 <caption> 3137 <label id="grace-types"> 3138 Grace types for external functions 3139 </caption> 3140 </table> 3141 3142 The return values of functions are assumed to be of the 3143 <tt/double/ type. 3144 3145 Note, that there is no difference from the point of view of 3146 function prototype between parameters and variables; the 3147 difference is in the way Grace treats them - an attempt to use 3148 a vector expression as a parameter argument will result in a 3149 parse error. 3150 3151 Let us consider few examples. 3152 3153 <sect2>Examples 3154 <p> 3155 Caution: the examples provided below (paths and compiler flags) 3156 are valid for Linux/ELF with gcc. On other operating systems, 3157 you may need to refer to compiler/linker manuals or ask a guru. 3158 3159 <sect3>Example 1 3160 <p> 3161 Suppose I want to use function <tt/pow(x,y)/ from the Un*x math 3162 library (libm). Of course, you can use the "^" operator defined 3163 in the Grace language, but here, for the sake of example, we 3164 want to access the function directly. 3165 3166 The command to make it accessible by Grace is 3167 <tscreen> 3168 USE "pow" TYPE f_of_dd FROM "/usr/lib/libm.so" 3169 </tscreen> 3170 3171 Try to plot y = pow(x,2) and y = x^2 graphs (using, for 3172 example, "create new -> Formula" from any <ref name="set 3173 selector" id="set-selector">) and compare. 3174 3175 <sect3>Example 2 3176 <p> 3177 Now, let us try to write a function ourselves. We will define 3178 function <tt/my_function/ which simply returns its (second) 3179 argument multiplied by integer parameter transferred as the 3180 first argument. 3181 3182 In a text editor, type in the following C code and save it as 3183 "my_func.c": 3184 3185 <tscreen><code> 3186 double my_function (int n, double x) 3187 { 3188 double retval; 3189 retval = (double) n * x; 3190 return (retval); 3191 } 3192 </code></tscreen> 3193 3194 OK, now compile it: 3195 3196 <tscreen><code> 3197 $gcc -c -fPIC my_func.c 3198 $gcc -shared my_func.o -o /tmp/my_func.so 3199 </code></tscreen> 3200 3201 (You may strip it to save some disk space): 3202 3203 <tscreen><code> 3204 $strip /tmp/my_func.so 3205 </code></tscreen> 3206 3207 That's all! Ready to make it visible to Grace as "myf" - we are 3208 too lazy to type the very long string "my_function" many times. 3209 3210 <tscreen> 3211 USE "my_function" TYPE f_of_nd FROM "/tmp/my_func.so" ALIAS "myf" 3212 </tscreen> 3213 3214 3215 <sect3>Example 3 3216 <p> 3217 A more serious example. There is a special third-party library 3218 available on your system which includes a very important for 3219 you yet very difficult-to-program from the scratch function 3220 that you want to use with Grace. But, the function prototype 3221 is NOT one of any predefined <ref name="types" 3222. The solution is to write a simple function 3223 wrapper. Here is how: 3224 3225 Suppose, the name of the library is "special_lib" and the 3226 function you are interested in is called "special_func" and 3227 according to the library manual, should be accessed as <tt/void 3228 special_func(double *input, double *output, int parameter)/. 3229 The wrapper would look like this: 3230 3231 <tscreen><code> 3232 double my_wrapper(int n, double x) 3233 { 3234 extern void special_func(double *x, double *y, int n); 3235 double retval; 3236 (void) special_func(&x, &retval, n); 3237 return (retval); 3238 } 3239 </code></tscreen> 3240 3241 Compile it: 3242 3243 <tscreen><code> 3244 $gcc -c -fPIC my_wrap.c 3245 $gcc -shared my_wrap.o -o /tmp/my_wrap.so -lspecial_lib -lblas 3246 $strip /tmp/my_wrap.so 3247 </code></tscreen> 3248 3249 Note that I added <tt/-lblas/ assuming that the special_lib 3250 library uses some functions from the BLAS. Generally, you have 3251 to add <it>all</it> libraries which your module depends on (and 3252 all libraries those libraries rely upon etc.), as if you wanted 3253 to compile a plain executable. 3254 3255 Fine, make Grace aware of the new function 3256 3257 <tscreen> 3258 USE "my_wrapper" TYPE f_of_nd FROM "/tmp/my_wrap.so" ALIAS "special_func" 3259 </tscreen> 3260 3261 so we can use it with its original name. 3262 3263 <sect3>Example 4 3264 <p> 3265 An example of using Fortran modules. 3266 3267 Here we will try to achieve the same functionality as in 3268 Example 2, but with the help of F77. 3269 3270 <tscreen><code> 3271 DOUBLE PRECISION FUNCTION MYFUNC (N, X) 3272 IMPLICIT NONE 3273 INTEGER N 3274 DOUBLE PRECISION X 3275 C 3276 MYFUNC = N * X 3277 C 3278 RETURN 3279 END 3280 </code></tscreen> 3281 3282 As opposite to C, there is no way to call such a function from 3283 Grace directly - the problem is that in Fortran all arguments 3284 to a function (or subroutine) are passed by reference. So, we 3285 need a wrapper: 3286 3287 <tscreen><code> 3288 double myfunc_wrapper(int n, double x) 3289 { 3290 extern double myfunc_(int *, double *); 3291 double retval; 3292 retval = myfunc_(&n, &x); 3293 return (retval); 3294 } 3295 </code></tscreen> 3296 3297 Note that most of f77 compilers by default add underscore to 3298 the function names and convert all names to the lower case, 3299 hence I refer to the Fortran function <tt/MYFUNC/ from my C 3300 wrapper as <tt/myfunc_/, but in your case it can be different! 3301 3302 Let us compile the whole stuff: 3303 3304 <tscreen><code> 3305 $g77 -c -fPIC myfunc.f 3306 $gcc -c -fPIC myfunc_wrap.c 3307 $gcc -shared myfunc.o myfunc_wrap.o -o /tmp/myfunc.so -lf2c -lm 3308 $strip /tmp/myfunc.so 3309 </code></tscreen> 3310 3311 And finally, inform Grace about this new function: 3312 3313 <tscreen> 3314 USE "myfunc_wrapper" TYPE f_of_nd FROM "/tmp/myfunc.so" ALIAS "myfunc" 3315 </tscreen> 3316 3317 <sect2>Operating system issues 3318 <p> 3319 <sect3>OS/2 3320 <p> 3321 In general the method outlined in the examples above can be 3322 used on OS/2, too. However you have to create a DLL (Dynamic Link Library) 3323 which is a bit more tricky on OS/2 than on most Un*x systems. 3324 Since Grace was ported by using EMX we also use it to create 3325 the examples; however other development environments should work 3326 as well (ensure to use the _System calling convention!). 3327 We refer to Example 2 only. Example 1 might demonstrate 3328 that DLLs can have their entry points (i.e. exported functions) 3329 callable via ordinals only, so you might not know how to access a 3330 specific function without some research. 3331 First compile the source from Example 2 to "my_func.obj" 3332 3333 <tscreen> 3334 gcc -Zomf -Zmt -c my_func.c -o my_func.obj 3335 </tscreen> 3336 3337 Then you need to create a linker definition file "my_func.def" 3338 which contains some basic info about the DLL and declares 3339 the exported functions. 3340 3341 <tscreen><code> 3342 LIBRARY my_func INITINSTANCE TERMINSTANCE 3343 CODE LOADONCALL 3344 DATA LOADONCALL MULTIPLE NONSHARED 3345 DESCRIPTION 'This is a test DLL: my_func.dll' 3346 EXPORTS 3347 my_function 3348 </code></tscreen> 3349 3350 (don't forget about the 8 characters limit on the DLL name!). 3351 Finally link the DLL: 3352 3353 <tscreen> 3354 gcc my_func.obj my_func.def -o my_func.dll -Zdll -Zno-rte -Zmt -Zomf 3355 </tscreen> 3356 3357 (check out the EMX documentation about the compiler/linker flags 3358 used here!) 3359 To use this new library function within Grace you may either 3360 put the DLL in the LIBPATH and use the short form: 3361 3362 <tscreen> 3363 USE "my_function" TYPE f_of_nd FROM "my_func" ALIAS "myf" 3364 </tscreen> 3365 3366 or put it in an arbitrary path which you need to specify explicitly 3367 then: 3368 3369 <tscreen> 3370 USE "my_function" TYPE f_of_nd FROM "e:/foo/my_func.dll" ALIAS "myf" 3371 </tscreen> 3372 3373 (as for most system-APIs you may use the Un*x-like forward 3374 slashs within the path!) 3375 3376 <!-- ****** Appendices/references ************ --> 3377 <sect> References 3378 <p> 3379 3380 <sect1>Typesetting<label id="typesetting"> 3381 <p> 3382 Grace permits quite complex typesetting on a per string basis. 3383 Any string displayed (titles, legends, tick marks,...) may contain 3384 special control codes to display subscripts, change fonts within the 3385 string etc. 3386 <p> 3387 3388 <table loc="htbp"> 3389 <tabular ca="ll"> 3390 <hline> 3391 Control code | Description @ 3392 <hline> 3393 \f{x} | switch to font named "x" @ 3394 \f{n} | switch to font number n @ 3395 \f{} | return to original font @ 3396 \R{x} | switch to color named "x" @ 3397 \R{n} | switch to color number n @ 3398 \R{} | return to original color @ 3399 \#{x} | treat "x" (must be of even length) as list of hexadecimal char codes @ 3400 \t{xx xy yx yy} | apply transformation matrix @ 3401 \t{} | reset transformation matrix @ 3402 \z{x} | zoom x times @ 3403 \z{} | return to original zoom @ 3404 \r{x} | rotate by x degrees @ 3405 \l{x} | slant by factor x @ 3406 \v{x} | shift vertically by x @ 3407 \v{} | return to unshifted baseline @ 3408 \V{x} | shift baseline by x @ 3409 \V{} | reset baseline @ 3410 \h{x} | horizontal shift by x @ 3411 \n | new line @ 3412 \u | begin underline @ 3413 \U | stop underline @ 3414 \o | begin overline @ 3415 \O | stop overline @ 3416 \Fk | enable kerning @ 3417 \FK | disable kerning @ 3418 \Fl | enable ligatures @ 3419 \FL | disable ligatures @ 3420 \m{n} | mark current position as n @ 3421 \M{n} | return to saved position n @ 3422 \dl | LtoR substring direction @ 3423 \dr | RtoL substring direction @ 3424 \dL | LtoR text advancing @ 3425 \dR | RtoL text advancing @ 3426 <hline> 3427 \x | switch to Symbol font (same as \f{Symbol}) @ 3428 \+ | increase size (same as \z{1.19} ; 1.19 = sqrt(sqrt(2))) @ 3429 \- | decrease size (same as \z{0.84} ; 0.84 = 1/sqrt(sqrt(2))) @ 3430 \s | begin subscripting (same as \v{-0.4}\z{0.71}) @ 3431 \S | begin superscripting (same as \v{0.6}\z{0.71}) @ 3432 \T{xx xy yx yy} | same as \t{}\t{xx xy yx yy} @ 3433 \Z{x} | absolute zoom x times (same as \z{}\z{x}) @ 3434 \q | make font oblique (same as \l{0.25}) @ 3435 \Q | undo oblique (same as \l{-0.25}) @ 3436 \N | return to normal style (same as \v{}\t{}) @ 3437 <hline> 3438 \\ | print \ @ 3439 <hline> 3440 \n | switch to font number n (0-9) (deprecated) @ 3441 \c | begin using upper 128 characters of set (deprecated) @ 3442 \C | stop using upper 128 characters of set (deprecated) @ 3443 <hline> 3444 </tabular> 3445 <caption> 3446 <label id="control-codes"> 3447 Control codes. 3448 </caption> 3449 </table> 3450 3451 <p> 3452 Example: 3453 <p> 3454 F\sX\N(\xe\f{}) = 3455 sin(\xe\f{})\#{b7}e\S-X\N\#{b7}cos(\xe\f{}) 3456 <p> 3457 prints roughly 3458 <tscreen><verb> 3459 -x 3460 F (e) = sin(e)·e ·cos(e) 3461 x 3462 </verb></tscreen> 3463 <p> 3464 using string's initial font and e prints as epsilon from the Symbol font. 3465 <p> 3466 NOTE: 3467 Characters from the upper half of the char table can be entered directly 3468 from the keyboard, using appropriate <tt>xmodmap(1)</tt> settings, or 3469 with the help of the font tool ("Window/Font tool"). 3470 <p> 3471 3472 <sect1>Device-specific limitations<label id="device-limitations"> 3473 <p> 3474 3475 Grace can output plots using several device backends. The list of 3476 available devices can be seen (among other stuff) by specifying the 3477 "-version" command line switch. 3478 <itemize> 3479 <item> X11, PostScript and EPS are full-featured devices 3480 <item> Raster drivers (PNM/JPEG/PNG): 3481 <itemize> 3482 <item> only even-odd fill rule is supported 3483 <item> patterned lines are not implemented 3484 </itemize> 3485 <item> PDF driver: 3486 <itemize> 3487 <item> bitmapped text strings are not transparent 3488 </itemize> 3489 <item> MIF driver: 3490 <itemize> 3491 <item> some of patterned fills not implemented 3492 <item> bitmapped text strings not implemented 3493 </itemize> 3494 <item> SVG driver: 3495 <itemize> 3496 <item> bitmapped text strings not implemented 3497 </itemize> 3498 </itemize> 3499 3500 <p> 3501 3502 <sect1>Device-specific settings<label id="device-settings"> 3503 <p> 3504 3505 Some of the output devices accept several configuration options. You can 3506 set the options by passing a respective string to the interpreter 3507 using the "DEVICE <it>"devname"</it> OP <it>"options"</it>" command (see 3508 <ref id="device-parameters" name="Device parameters">). A few options 3509 can be passed in one command, separated by commas. 3510 3511 <p> 3512 3513 <table loc="htbp"> 3514 <tabular ca="ll"> 3515 <hline> 3516 Command | Description @ 3517 <hline> 3518 grayscale | set grayscale output @ 3519 color | set color output @ 3520 level1 | use only PS Level 1 subset of commands @ 3521 level2 | use also PS Level 2 commands if needed @ 3522 docdata:7bit | the document data is 7bit clean @ 3523 docdata:8bit | the document data is 8bit clean @ 3524 docdata:binary | the document data may be binary @ 3525 xoffset:<it>x</it> | set page offset in X direction <it>x</it> pp @ 3526 yoffset:<it>y</it> | set page offset in Y direction <it>y</it> pp @ 3527 mediafeed:auto | default input tray @ 3528 mediafeed:match | select input with media matching page dimensions @ 3529 mediafeed:manual | manual media feed @ 3530 hwresolution:on | set hardware resolution @ 3531 hwresolution:off | do not set hardware resolution @ 3532 <hline> 3533 </tabular> 3534 <caption> 3535 PostScript driver options 3536 </caption> 3537 </table> 3538 3539 <p> 3540 3541 <table loc="htbp"> 3542 <tabular ca="ll"> 3543 <hline> 3544 Command | Description @ 3545 <hline> 3546 grayscale | set grayscale output @ 3547 color | set color output @ 3548 level1 | use only PS Level 1 subset of commands @ 3549 level2 | use also PS Level 2 commands if needed @ 3550 bbox:tight | enable "tight" bounding box @ 3551 bbox:page | bounding box coincides with page dimensions @ 3552 <hline> 3553 </tabular> 3554 <caption> 3555 EPS driver options 3556 </caption> 3557 </table> 3558 3559 <p> 3560 3561 <table loc="htbp"> 3562 <tabular ca="ll"> 3563 <hline> 3564 Command | Description @ 3565 <hline> 3566 PDF1.3 | set compatibility mode to PDF-1.3 @ 3567 PDF1.4 | set compatibility mode to PDF-1.4 @ 3568 compression:value | set compression level (0 - 9) @ 3569 patterns:on | enable use of patterns @ 3570 patterns:off | disable use of patterns @ 3571 <hline> 3572 </tabular> 3573 <caption> 3574 PDF driver options 3575 </caption> 3576 </table> 3577 3578 <p> 3579 3580 <table loc="htbp"> 3581 <tabular ca="ll"> 3582 <hline> 3583 Command | Description @ 3584 <hline> 3585 format:pbm | output in PBM format @ 3586 format:pgm | output in PGM format @ 3587 format:ppm | output in PPM format @ 3588 rawbits:on | "rawbits" (binary) output @ 3589 rawbits:off | ASCII output @ 3590 <hline> 3591 </tabular> 3592 <caption> 3593 PNM driver options 3594 </caption> 3595 </table> 3596 3597 <p> 3598 3599 <table loc="htbp"> 3600 <tabular ca="ll"> 3601 <hline> 3602 Command | Description @ 3603 <hline> 3604 grayscale | set grayscale output @ 3605 color | set color output @ 3606 optimize:on/off | enable/disable optimization @ 3607 quality:value | set compression quality (0 - 100) @ 3608 smoothing:value | set smoothing (0 - 100) @ 3609 baseline:on/off | do/don't force baseline output @ 3610 progressive:on/off | do/don't output in progressive format @ 3611 dct:ifast | use fast integer DCT method @ 3612 dct:islow | use slow integer DCT method @ 3613 dct:float | use floating-point DCT method @ 3614 <hline> 3615 </tabular> 3616 <caption> 3617 JPEG driver options 3618 </caption> 3619 </table> 3620 3621 <p> 3622 3623 <table loc="htbp"> 3624 <tabular ca="ll"> 3625 <hline> 3626 Command | Description @ 3627 <hline> 3628 interlaced:on | make interlaced image @ 3629 interlaced:off | don't make interlaced image @ 3630 transparent:on | produce transparent image @ 3631 transparent:off | don't produce transparent image @ 3632 compression:value | set compression level (0 - 9) @ 3633 <hline> 3634 </tabular> 3635 <caption> 3636 PNG driver options 3637 </caption> 3638 </table> 3639 3640 <p> 3641 3642 3643 <sect1>Dates in Grace <label id="dates"> 3644 3645 <p> 3646 We use two calendars in Grace: the one that was established in 3647 532 by Denys and lasted until 1582, and the one that was created 3648 by Luigi Lilio (Alyosius Lilius) and Christoph Klau 3649 (Christophorus Clavius) for pope Gregorius XIII. Both use the 3650 same months (they were introduced under emperor Augustus, a few 3651 years after Julian calendar introduction, both Julius and 3652 Augustus were honored by a month being named after each one). 3653 3654 The leap years occurred regularly in Denys's calendar: once 3655 every four years, there is no year 0 in this calendar (the leap 3656 year -1 was just before year 1). This calendar was not compliant 3657 with earth motion and the dates were slowly shifting with regard 3658 to astronomical events. 3659 3660 This was corrected in 1582 by introducing Gregorian 3661 calendar. First a ten days shift was introduced to reset correct 3662 dates (Thursday October the 4th was followed by Friday October 3663 the 15th). The rules for leap years were also changed: three 3664 leap years are removed every four centuries. These years are 3665 those that are multiple of 100 but not multiple of 400: 1700, 3666 1800, and 1900 were not leap years, but 1600 and 2000 were (will 3667 be) leap years. 3668 3669 We still use Gregorian calendar today, but we now have several 3670 time scales for increased accuracy. The International Atomic 3671 Time (TAI) is a linear scale: the best scale to use for 3672 scientific reference. The Coordinated Universal Time (UTC, often 3673 confused with Greenwich Mean Time) is a legal time that is 3674 almost synchronized with earth motion. However, since the earth 3675 is slightly slowing down, leap seconds are introduced from time 3676 to time in UTC (about one second every 18 months). UTC is not a 3677 continuous scale ! When a leap second is introduced by 3678 International Earth Rotation Service, this is published in 3679 advance and the legal time sequence is as follows: 23:59:59 3680 followed one second later by 23:59:60 followed one second later 3681 by 00:00:00. At the time of this writing (1999-01-05) the 3682 difference between TAI and UTC was 32 seconds, and the last leap 3683 second was introduced in 1998-12-31. 3684 3685 These calendars allow to represent any date from the mist of the 3686 past to the fog of the future, but they are not convenient for 3687 computation. Another time scale is possible: counting only the 3688 days from a reference. Such a time scale was introduced by 3689 Joseph-Juste Scaliger (Josephus Justus Scaliger) in 1583. He 3690 decided to use "-4713-01-01T12:00:00" as a reference date 3691 because it was at the same time a Monday, first of January of a 3692 leap year, there was an exact number of 19 years Meton cycle 3693 between this date and year 1 (for Easter computation), and it 3694 was at the beginning of a 15 years <it>Roman indiction</it> 3695 cycle. The day number counted from this reference is 3696 traditionally called <it>Julian day</it>, but it has really 3697 nothing to do with the Julian calendar. 3698 3699 Grace stores dates internally as reals numbers counted from a 3700 reference date. The default reference date is the one chosen by 3701 Scaliger, it is a classical reference for astronomical 3702 events. It can modified for a single session using the <ref 3703 popup of the GUI. If 3704 you often work with a specific reference date you can set it for 3705 every sessions with a REFERENCE DATE command in your 3706 configuration file (see <ref name="Default template" 3707). 3708 3709 The following date formats are supported (hour, minutes and 3710 seconds are always optional): 3711 3712 <enum> 3713 <item>iso8601 : 1999-12-31T23:59:59.999 3714 <item>european : 31/12/1999 23:59:59.999 or 31/12/99 23:59:59.999 3715 <item>us : 12/31/1999 23:59:59.999 or 12/31/99 23:59:59.999 3716 <item>Julian : 123456.789 3717 </enum> 3718 3719 One should be aware that Grace does not allow to put a space in 3720 one data column as spaces are used to separate fields. You 3721 should always use another separator (:/.- or better T) between 3722 date and time in data files. The GUI, the batch language and the 3723 command line flags do not have this limitation, you can use 3724 spaces there without any problem. The T separator comes from the 3725 ISO8601 standard. Grace support its use also in european and us 3726 formats. 3727 3728 You can also provide a hint about the format ("ISO8601", 3729 "european", "us") using the -datehint command line flag or the 3730 ref popup of the GUI. 3731 The formats are tried in the following order: first the hint 3732 given by the user, then iso, european and us (there is no 3733 ambiguity between calendar formats and numerical formats and 3734 therefore no order is specified for them). The separators 3735 between various fields can be any characters in the set: " :/.-T" 3736 (one or more spaces act as one separator, other characters can 3737 not be repeated, the T separator is allowed only between date and time, 3738 mainly for iso8601), so the string "1999-12 31:23/59" is allowed 3739 (but not recommended). The '-' character is used both as a 3740 separator (it is traditionally used in iso8601 format) and as 3741 the unary minus (for dates in the far past or for numerical 3742 dates). By default years are left untouched, so 99 is a date far 3743 away in the past. This behavior can be changed with the <ref 3744 popup, or with the 3745 <tt>DATE WRAP on</tt> and <tt>DATE WRAP YEAR year</tt> 3746 commands. Suppose for example that the wrap year is chosen as 3747 1950, if the year is between 0 and 99 and is written with two or 3748 less digits, it is mapped to the present era as follows: 3749 3750 range [00 ; 49] is mapped to [2000 ; 2049] 3751 3752 range [50 ; 99] is mapped to [1950 ; 1999] 3753 3754 with a wrap year set to 1970, the mapping would have been: 3755 3756 range [00 ; 69] is mapped to [2000 ; 2069] 3757 3758 range [70 ; 99] is mapped to [1970 ; 1999] 3759 3760 this is reasonably Y2K compliant and is consistent with current 3761 use. Specifying year 1 is still possible using more than two 3762 digits as follows: "0001-03-04" is unambiguously March the 4th, 3763 year 1. The inverse transform is applied for dates written by 3764 Grace, for example as tick labels. Using two digits only for 3765 years is not recommended, we introduce a <it>wrap year + 3766 100</it> bug here so this feature should be removed at some 3767 point in the future ... 3768 3769 The date scanner can be used either for Denys's and Gregorian 3770 calendars. Inexistent dates are detected, they include year 0, 3771 dates between 1582-10-05 and 1582-10-14, February 29th of non 3772 leap years, months below 1 or above 12, ... the scanner does 3773 not take into account leap seconds: you can think it works only 3774 in International Atomic Time (TAI) and not in Coordinated 3775 Unified Time (UTC). If you find yourself in a situation were you 3776 need UTC, a very precise scale, and should take into account 3777 leap seconds ... you should convert your data yourself (for 3778 example using International Atomic Time). But if you bother with 3779 that you probably already know what to do. 3780 3781 3782 <sect1>Xmgr to Grace migration guide 3783 3784 <p> 3785 3786 This is a very brief guide describing problems and workarounds for 3787 reading in project files saved with Xmgr. You should read the docs or 3788 just play with Grace to test new features and controls. 3789 3790 <enum> 3791 <item> Grace must be explicitly told the version number of the software 3792 used to create a file. You can manually put "@version VERSIONID" 3793 string at the beginning of the file. The VERSIONID is built as 3794 MAJOR_REV*10000 + MINOR_REV*100 + PATCHLEVEL; so 40101 corresponds 3795 to xmgr-4.1.1. Projects saved with Xmgr-4.1.2 do NOT need the above, 3796 since they already have the version string in them. If you have no 3797 idea what version of Xmgr your file was created with, try some. 3798 In most cases, 40102 would do the trick. 3799 3800 <item> The above relates to the ASCII projects only. The old binary 3801 projects (saved with xmgr-4.0.*) are not automatically converted 3802 anymore. An input filter must be defined to make the conversion 3803 work on-the-fly. Add the following line to ~/.gracerc or the 3804 system-wide $GRACE_HOME/gracerc resource file: DEFINE IFILTER 3805 "grconvert %s -" MAGIC "00000031" See docs for more info on the 3806 I/O filters. 3807 3808 <item> Documentation on the script language is severely lacking still. 3809 3810 <item> Grace is WYSIWYG. Xmgr was not. Many changes required to achieve the 3811 WYSIWYG'ness led to the situation when graphs with objects carefully 3812 aligned under Xmgr may not look so under Grace. Grace tries its best 3813 to compensate for the differences, but sometimes you may have to 3814 adjust such graphs manually. 3815 3816 <item> A lot of symbol types (all except *real* symbols) are removed. 3817 "Location *" types can be replaced (with much higher comfort) by 3818 A(nnotating)values. "Impulse *", "Histogram *" and "Stair steps *" 3819 effects can be achieved using the connecting line parameters (Type, 3820 Drop lines). "Dot" symbol is removed as well; use the filled circle 3821 symbol of the zero size with no outline to get the same effect. 3822 3823 <item> Default page layout switched from free (allowing to resize canvas 3824 with mouse) to fixed. For the old behavior, put "PAGE LAYOUT FREE" 3825 in the Grace resource file or use the "-free" command line switch. 3826 <bf>The use of the "free" page layout is in general deprecated, 3827 though.</bf> 3828 3829 <item> System (shell) variables GR_* renamed to GRACE_* 3830 3831 <item> Smith plots don't work now. They'll be put back soon. 3832 3833 </enum> 3834 3835 3836 </article> 3837 3838 <!-- End of UsersGuide.sgml --> | https://fossies.org/linux/misc/grace-5.1.25.tar.gz/grace-5.1.25/doc/UsersGuide.sgml | CC-MAIN-2020-16 | refinedweb | 25,407 | 53.34 |
sourcecode Adam <code> # Change the base representation of a non-negative integer use strict; sub GenerateBase { my $base = shift; $base = 62 if $base > 62; my @nums = (0..9,'a'..'z','A'..'Z')[0..$base-1]; my $index = 0; my %nums = map {$_,$index++} @nums; my $To = sub { my $number = shift; return $nums[0] if $number == 0; my $rep = ""; # this will be the end value. while( $number > 0 ) { $rep = $nums[$number % $base] . $rep; $number = int( $number / $base ); } return $rep; }; my $From = sub { my $rep = shift; my $number = 0; for( split //, $rep ) { $number *= $base; $number += $nums{$_}; } return $number; }; return ( $To, $From ); } =Example usage: my( $ToBase62, $FromBase62 ) = GenerateBase( 62 ); my $UniqueID = $ToBase62->( $$ ) . $ToBase62->( time ); my $hex = (GenerateBase(16))[0]; print $hex->( '28' ); =cut </code> This was spawned from the code in [id://27127] and is a subroutine that you are welcome to steal. It allows base conversions to and from base 10 to any base from 2 to 62. (less then 2 would be pointless.) It only handles non-negative integers, as I didn't feel like exploring the realm of sign bits, twos-complement, and mantissas. Enjoy. Adam | http://www.perlmonks.org/index.pl?displaytype=xml;node_id=27148 | CC-MAIN-2016-50 | refinedweb | 187 | 67.08 |
Some years ago (back in the early 1990's !) I bought a serial to digital input/output converter module to experiment with connecting things to my computer(s). This module was sold by Maplin, made by R.M.Electronics and was named the RM9011. As with many things I've tried in the past, I can't recall having much success using it with a range of computers I owned, these being the Dragon 32, Atari ST, finally migrating to the PC running Windows 95. So it was put away for a future project...
With the arrival of the Raspberry Pi my interest in interfacing with technology and sensors has been rekindled. So digging out the RM9011 I have spent a happy few hours figuring out how to interface this with the Raspberry Pi using some discreet logic circuitry and using a serial console (minicom) to configure and test different scenarios with sensors and LEDs. Using Python I've also coded some scripts to run various monitoring and output scenarios. As this module is way out of date, considered legacy and I've not been able to find spares or originals I have decided not to use it for any serious projects. However, for completeness for this article I've included a description of the module, an image of the board together with the circuit diagram (courtesy of Maplin magazine - January 1992).
RM9011 - description
RS232 to 8-bit digital I/O converter module - introduced in 1992 by Maplin Electronics, made by R.M.Electronics.
Features:
Each input/output line individually configurable as input or output.
Bit or byte read and write.
Configuration changes made via RS232 / Serial interface maximum speed 1200bps.
5V DC supply.
I/O Lines TTL / CMOS compatible.
On-board CMOS controller (pre-programmed).
An image of the board and circuit diagram is below:
While searching on-line for information on serial to digital converters to take the place of the RM9011 I found this device - the DACIO300.
The DACIO300 by Tronisoft in many ways has some similarities to the RM9011, though much more advanced and more importantly still available to purchase. The way it is controlled is also similar to the RM9011, in that ASCII characters are sent to the interface board via RS232/Serial. This method of operating lends itself to being controlled directly from a serial console, such as minicom on the Raspberry Pi, or from the Raspberry Pi using Python serial commands or Arduino using serial print commands.
This board is fully populated but it is possible to buy this unpopulated and exclude many of the components from the board depending on how it might be used. It is also possible to buy just the pre-programmed chip to build your own project(s).
DACIO300 - Specification.
The DACIO 300 features 8 10-bit A/D and a high speed 115.2kbps serial interface.
Easy to use communication and control open protocol.
Details about the RS-232 standard are available and provides useful information about remote operating distances.
Raspberry Pi - DACIO300 interface
Connecting the Raspberry Pi to the DACIO300 is relatively straightforward. The serial transmit output from the RPi is connected to the serial receive input on the DACIO300, and the serial transmit from the DACIO300 is connected to the serial receive input on the RPi. There should be no direct connections between the two devices as the RPi operates at 3.3v while the DACIO300 operates at 5v. It is necessary to provide an interfacing circuit between the two as shown below.
This simple interface allows the Raspberry Pi serial transmit port operating at 3.3v to connect via one hex buffer of the HCF4050 IC and connect to the DACIO300 serial receive port operating at 5v. In the reverse direction, the 5v serial transmit from the DACIO300 is stepped down to 3.3v (though not absolutely necessary) via R1 & R2 to drive another hex buffer whose output connects to the Raspberry Pi serial receive port. The HCF4050 is powered at 3.3v with pin 1 +Vdd and pin 8 Ov or Vss. I have also used an Arduino clone (RasPi.TV Duino) in the same way to successfully interface with the DACIO300.
Example input / outputs
Here are a few simple examples of how the DACIO300 inputs and outputs could be used.
Port A comprises 8 bits, channels 0 - 7 and is by default an analogue input port.
Ports B and C are both 8 bits, channels 0 - 7 digital ports, Port B by default is set as inputs and Port C as outputs. Both Ports B and C are configurable at bit/channel level and can be either inputs or outputs.
The commands to interact with the DACIO300 are straightforward and comprise a string of ascii characters forming a command.
Each command has a start character either ! or #
Characters to issue a command; typically comprising the Port name, an operator (e.g. '=' to write, '?' to read), a character if needed to provide an input parameter and an end character ';'
If a valid command is received, the module returns a '!' followed by any data requested.
Examples:
!C=255;
This command is instructing the module that Port C has a decimal byte value of 255 written to it.
!Bx?;
This command reads bit x (0-7) from Port B, where the reply is !x<0D> where x is 1 or 0 and 0D is a carriage return character.
!B?;
This command reads the whole of Port B and returns a decimal byte value.
!Ax?;
This command reads Port A, bit x and returns a reply !xxxx<0D> where xxxx is 0000-1023
Further details are provided in the DACIO300 manual to translate the numeric reading to a voltage level.
Many other commands can be sent to the module to set Ports as inputs or outputs, to read ports at a bit or byte level, to write out data to the digital ports at bit or byte levels, and to read the analogue port values.
Full details and instructions are detailed in the DACIO300 manual which is a very clear useful guide.
I have experimented with this module running various scenarios of reading digital values at both bit and byte levels and setting digital values again both at bit and byte levels. Using a minicom terminal running at 9600 enables some simple experimentation to confirm the principles of what I've thought is possible. I've then followed this with using Python3 scripts to send and receive serial messages to configure, read and write date to and from the module. I can see this module has many practical uses where a simple and reliable serial connection is preferred over alternative I2C and SPI interfaces.
A Python3 script is listed below that queries the byte level values present on Port B and sets Port C outputs to match them. This script could be used so that Port B is monitoring various input lines and Port C writes out the 'state' of the input lines to light LEDs to provide a visual indication. A download of this script is available: DACIO300-auto-portb-byte-read-portc-byte-set.py
# DACIO300-auto-portb-byte-read-portc-byte-set.py
# Python3 script to run on RaspberryPi to continuously send serial messages to the DACIO 300
# only when GPIO port 17 is set high.
# RS232 interface board to query the logic levels on the digital Port B at 'byte' level
# and set Port C to the same value at byte level in order to mirror the values at Port B
# These levels can be used to drive LEDs, relays etc to reflect Port B status.
# Hardware interfacing with RPi:
# GPIO serial o/p connected to hex i/p of IC4050 running at 3.3v with
# hex output directly connected to TTL interface pin 4 (tx i/p) on the DACIO 300
# the DACIO300 TTL interface pin 5 (rx o/p) is connected to a potential divider
# to drop the TTL 5v to 3.3v, which is then connected to hex input on IC4050.
# the corresponding hex o/p at 3.3v is connected to RPi GPIO serial input.
# GPIO port 17 connected to a switch or set/reset latch circuit to control
# when the script sends/receives serial commands/data
import serial
import time
import RPi.GPIO as GPIO
GPIO.setmode(GPIO.BCM)
GPIO.setup(17, GPIO.IN)
x=1
latch_state=0
# set serial parameters
ser = serial.Serial("/dev/ttyAMA0",
baudrate=9600,
bytesize=8,
parity='N',
stopbits=1,
timeout=1,
xonxoff=False,
rtscts=False,
dsrdtr=False)
ser.close()
def queryByte():
ser.open()
ser.write(bytes('!'+port+'?;','utf_8'))
global byteQuery # make variable available outside function
byteQuery = ser.readline(4).decode("utf_8","ignore").strip()[-3:]
print ('\tPort ', port, '\tstate = ', byteQuery)
ser.close()
def setByte(PortC):
ser.open()
ser.write(bytes('!C='+PortC+';','utf_8'))
print ('\tPort C', '\tstate = ', PortC)
bitQuery = ser.readline(1).decode("utf_8","ignore").strip()[-1:]
if bitQuery == '!':
print ("Command received by DACIO300 ok:")
else:
print ("Error:")
ser.close()
try:
while x:
#print reminder of DACIO 300 default conditions
print ("Waiting to activate..")
if latch_state == 0:
if GPIO.input(17):
print ("Monitoring starts..")
print ("This script queries DACIO300 Port B logic levels on each input channel")
print ("The result displayed is in decimal byte value")
print ("Port C 'bits' are set to the same values as read on Port B")
port = 'B'
latch_state = 1
queryByte()
setByte(byteQuery) # pass value obtained from queryByte function to setByte function
time.sleep(5)
if latch_state == 1:
if GPIO.input(17):
port = 'B'
queryByte()
setByte(byteQuery)
time.sleep(5)
if latch_state == 1:
if not GPIO.input(17):
latch_state = 0
print ("Monitoring ends..")
time.sleep(5)
time.sleep(5)
except KeyboardInterrupt:
GPIO.cleanup()
Hi,
i dug out my rm9011 today to teach my 10 year old how to interface with the real world using python. I can't however find and documentation on the protcol and wondered if you could post your python examples or any hints please. I had it working back in the day
thanks
If you take a look at the old Maplin Magazine here:
There is a full article on hardware, software, protocol and interfacing. Hope this helps. | http://electronicsadventures.blogspot.com/2017/01/raspberry-pi-serial-interfacing-with.html | CC-MAIN-2019-35 | refinedweb | 1,694 | 63.39 |
Making Music with MIDI and C#
WEBINAR: On-demand webcast
How to Boost Database Development Productivity on Linux, Docker, and Kubernetes with Microsoft SQL Server 2017 REGISTER >
In a previous post, I showed you how to construct a wave file manually by hand, using nothing but program code. In this post, I want to show you how to use the other sound generation device that most Windows machines have, the MIDI Sequencer.
Most Windows-based sound cards these days have some kind of wavetable synth on them. This means that, when you play back a MIDI music file, you're actually just taking some high quality instrument samples that exist in a hidden part of your operating system, and then stringing them together in the right order to represent the notes you want to play, just before sending that large wave onto your normal sound card audio output.
There's actually a bit more to the process than my description, but for the purposes of this article, it'll suffice. Some people, however, may have full-size MIDI keyboards or external synthesizers attached to their PC, myself for example (I have a Yamaha PSR270 and a Yamaha MU10 connected to mine).
Once you start connecting external MIDI equipment, you can start doing some rather more interesting things. For example, it's possible to connect your MIDI keyboard up, then by responding to the key presses on it, make those keys perform functions on your PC.
All of that is quite a complex task. So, for now at least, I'm just going to show you how to make your device play music.
First, however....
Just What Is MIDI?
MIDI stands for "Musical Instrument Digital Interface" and it's exactly as the name implies; it's an electronic interface for connecting musical instruments together. As way of an example, I don't have to include my PC if I don't want to; I could simply just connect my keyboard to my synth unit and then use the note keys on my keyboard to play notes on the synth unit.
MIDI is a serial interface, and quite a slow one at that (approx. 3000 bytes per second, or 3k). It doesn't need to be fast, however, because most MIDI messages can easily fit into 3 bytes. The bigger messages that you might send are normally only sent once at start up, so that there's no slowing down or delays due to large amounts of data.
For you running on Windows, however, you may only ever have been familiar with "MIDI Files." "MIDI Files" are a file format that holds collections of these MIDI messages, which are then sent to and processed by the instrument in question. What often confuses many developers is just what the difference is between them and, say, a wave or MP3 file.
The best way to describe the difference is to imagine the difference between a sheet of music and a recording of music.
Sheet music (those books with the straight lines and funny circles with sticks them on that musicians are so fond of) are a bit like an instruction manual on how to play that piece of music. It contains no information on what each instrument should sound like, only what note it should play at any given time.
Recorded music, such as you might see in a wave file, is exactly that. It's a snapshot of someone's interpretation of the instructions to play a piece of music at the time they played it.
A "MIDI File," therefore, is a set of instructions, sent to a musical instrument, instructing it as to what the resulting music should sound like.
I could go into a full blown description at this point, but I'm not going to because that's not what this article is about. If you're curious, however, the Wikipedia article at
is a good place to start.
Okay, I Understand That, but Why Would I Want to Do This? What's Wrong with Waves?
There's nothing wrong with waves, if all you want to do is to play back some pre-recorded audio. With MIDI, however, there's more to it than just sending music commands.
There's a large variety of control and display hardware. The MU10 synthesizer I have, for example, has two audio in ports on it. Via these ports, I can attach external sound sources, such as another PC laying audio, microphone, or a CD player. I then can use MIDI commands to control the volume and mixing of these signals. Many rides and attractions in theme parks use MIDI data to control them, and stage shows often use laser and lighting setups that are controlled in the same manner. However, most folks wouldn't think that things like the instruments that come with games like "Rockband" are also all controlled using MIDI data.
Okay, so enough with the theory. Let's take a look at some code...
Start a simple command line project in Visual Studio as a code base to work from.
Now comes the hard part.
Under .NET, there are no official assemblies built into the run time that allow you to access the MIDI hardware on your computer. This means that you must use P/Invoke to call through to the original native operating system calls to use them.
Unfortunately, that does mean that it will be harder to port your code to other .NET platforms, and before we can do anything useful we'll need to create some P/Invoke definitions. I covered P/Invoke in a previous post in this column, so I'm not going to go into any detail about how it all works. To get the definitions you need, add the following to your "Program class" just after the start of the class, but before any other definitions:
[DllImport("winmm.dll")] private static extern long mciSendString(string command, StringBuilder returnValue, int returnLength, IntPtr winHandle); ")] protected static extern int midiOutShortMsg(int handle, int message); [DllImport("winmm.dll")] protected static extern int midiOutClose(int handle);
You'll also need to add a "struct" to your project so that MIDI device information can be obtained.
A struct is very similar to a class that only contains properties, but derives from the native C and C++ languages and so is specially suited to transferring data from managed code to unmanaged code. Don't worry, though; you don't have to change much. Create a new class in your project called "MidiOutCaps.cs" and add the following code to it:
using System; using System.Runtime.InteropServices; namespace MidiSample { [StructLayout(LayoutKind.Sequential)] public struct MidiOutCaps { public UInt16 wMid; public UInt16 wPid; public UInt32 vDriverVersion; [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 32)] public String szPname; public UInt16 wTechnology; public UInt16 wVoices; public UInt16 wNotes; public UInt16 wChannelMask; public UInt32 dwSupport; } }
Remember to change the namespace to match that of your own project's namespace.
Finally, make sure you have the following using declarations in your main "Program.cs" file:
using System; using System.Runtime.InteropServices; using System.Text;
You also might want to add the following just after the P/Invoke definitions:
private delegate void MidiCallBack(int handle, int msg, int instance, int param1, int param2);
We won't be using it in this post, but if you decide to do MIDI input or need the MIDI system to call back to your application with messages and events, you'll need to define this to allow you to set a callback handler.
At this point, we should now be able to start using the API.
We'll start with a simple example. If all you want to do is play back a pre-recorded MIDI music file, the simplest thing to do is to use the Windows multimedia control interface, otherwise known as the "MCI."
You can see in the preceding definitions a definition for "mciSendString." This is used to send MCI command strings to the MCI subsystem, and is used by sending simple command strings.
You can use "mciSendString" to control many different things, not just playback of MIDI. For our post, though, make sure you have a MIDI file on hand and use the following code to have your PC play this file back, via the default MIDI sequencer device:
var res = String.Empty; res = Mci("open \"M:\\anger.mid\" alias music"); res = Mci("play music"); Console.ReadLine(); // Pause until return is pressed res = Mci("close music");
I've wrapped each of the above calls in the following function, just to make things easier:
static string Mci(string command) { int returnLength = 256; StringBuilder reply = new StringBuilder(returnLength); mciSendString(command, reply, returnLength, IntPtr.Zero); return reply.ToString(); }
A couple of things you MUST take note of when using the MCI: First off, filenames that include spaces MUST be quoted; if you don't, the MCI will refuse to load the file; secondly, path names must not have the period character in them.
"c:\my.files\music.midi"
will fail to play, even though it's quoted, due to the period between "my" and "files". The golden rule is to use as short and space-free file name as possible and to make sure it's quoted.
If everything has worked, your program should start playing your chosen MIDI file, and pause for you to press return before stopping and exiting.
There may be times, however, when you want to send individual commands directly to your chosen MIDI device. To start with, let's find out how many output devices are present in the system:
var numDevs = midiOutGetNumDevs(); Console.WriteLine("You have {0} midi output devices", numDevs);
MIDI devices are numbered starting at 0 (zero), so if the above tells you that you have two devices, your MIDI device IDs will be from 0 to 1. If it reports 3, your MIDI devices will be numbered 0, 1, and 2.
To find out the details of a numbered device, you need to use "midiOutGetDevCaps", as follows:
MidiOutCaps myCaps = new MidiOutCaps(); var res = midiOutGetDevCaps(0, ref myCaps, (UInt32)Marshal.SizeOf(myCaps));
The most interesting part of the information returned is the device name available in the "szPname" field in the struct. The other fields can be looked up on MSDN (just search for midioutgetdevcaps); many of the fields have set constants that allow you to determine the type of MIDI technology, supported channels, and other useful information. However, not all drivers and technology support all the different flags.
Once you know the ID of the device you want to open, you then can open it and obtain a handle to it as follows:
int handle = 0; int deviceNumber = 0; res = midiOutOpen(ref handle, deviceNumber, null, 0, 0);
Upon success, the handle variable will be filled with a large integer number representing the handle allocated. When you're finished, you must ensure that you close this to free up the MIDI resource, especially if you've allocated exclusive use of the device.
At this point, you now can send short MIDI commands to your device. For example, to play a note you would send a note on command:
0x90 = Note on command on channel 0. (The upper 4 bits are the command number, and the lower 4 are the channel; in this case, 0.)
0x3C = The note to play (60 in decimal or a 'C' in octave 4 as the following table shows:
Table 1: Decimal values of musical notes
0x7F = The velocity with which to hit the note (127 = maximum volume and force). The lower the number, the softer the sound.
Because the entire message fits into 3 bytes, we can send the entire message in a standard 32-bit integer. However, because of byte order or "Endiness" on a standard PC architecture, we need to reverse the order of the bytes, so in hexadecimal we have to send the following to the MIDI device to send this note on command.
0x007F3C90 (Decimal - 8338576)
As you can see, the command is at the end, with our 3 bytes in reverse order and a 00 to pad things out.
You can combine the values you need by doing the following
byte command = 0x90; byte note = 0x3C; byte velocity = 0x7F;int message = (velocity << 16) + (note << 8) + command;
Which you then can easily send to your MIDI device by using "midiOutShortMessage":
var res = midiOutShortMsg(handle, 0x007F3C90);
If everything worked, you should hear your device play a note. If you have no external devices, then because we've used device 0, your default will likely be the built-in Windows wavetable synth. Either way, you should hear a note play.
Once you play a note, you should also send a note stop command once it's played for the length of time you wanted it to. You do this the same way as for playing a note, but change the command byte from 0x90 to 0x80.
You can find a full list of the basic MIDI command messages on the MIDI manufacturers association web page at
If you want to select different instrument sounds on your MIDI device, you'll need to use the "Program Change" message with the appropriate instrument number. That information can be found here:
To make your program play your note on an electric guitar, for example, send
0x000019C0
to your MIDI device just before you send the note.
Finally, don't forget to close the device when you're finished. The full code for "Program.cs" should look something like this:
using System; using System.Runtime.InteropServices; using System.Text; namespace MidiSample { class Program { // MCI INterface [DllImport("winmm.dll")] private static extern long mciSendString(string command, StringBuilder returnValue, int returnLength, IntPtr winHandle); // Midi API ")] private static extern int midiOutShortMsg(int handle, int message); [DllImport("winmm.dll")] private static extern int midiOutClose(int handle); private delegate void MidiCallBack(int handle, int msg, int instance, int param1, int param2); static string Mci(string command) { StringBuilder reply = new StringBuilder(256); mciSendString(command, reply, 256, IntPtr.Zero); return reply.ToString(); } static void MciMidiTest() { var res = String.Empty; res = Mci("open \"M:\\anger.mid\" alias music"); res = Mci("play music"); Console.ReadLine(); res = Mci("close crooner"); } static void Main() { int handle = 0; var numDevs = midiOutGetNumDevs(); MidiOutCaps myCaps = new MidiOutCaps(); var res = midiOutGetDevCaps(0, ref myCaps, (UInt32)Marshal.SizeOf(myCaps)); res = midiOutOpen(ref handle, 0, null, 0, 0); res = midiOutShortMsg(handle, 0x000019C0); res = midiOutShortMsg(handle, 0x007F3C90); res = midiOutClose(handle); } } }
As you can see, direct MIDI programming is not for the faint hearted, and it requires a LOT of work to implement correctly. You'll need to write file parsing routines, you'll need to study the MIDI implementation charts for the device you're working with, you'll need to build timing routines and command sequencing code, and all that's before you even get anywhere near building a user interface.
It's for this reason I suggest you invest some time in looking at the excellent "NAudio" (naudio.codeplex.com) sound library that contains many common MIDI classes and managed APIs to make your job easier. With NAudio you can, if you want, still get the low-level individual messages control as seen in this post if you want, or you can just use its higher level functionality to set up and run your own MIDI data sequencers and file parsers.
Got a burning question about .NET? or just want to know how to make A do B? Come and hunt me down on the interweb; you can normally find me in the Lidnug .NET user group on the Linked-in platform, or you can find me on Twitter as @shawty_ds. Let me know your thoughts or simply just leave a comment below.
How to create a VST dllPosted by Ron Garza on 05/30/2017 08:37pm
Great article, Peter Shaw. I am not familiar with the coding language but I can get around with plain C code. I'm looking for a way to "encapsulate" a sound wave (.wav or .mp3) into a .dll so that I can play that sound (sample) at different frequencies. I've heard the term VST (Virtual Studio Technology) and have seen Cantabile software use it. So ... How can I create a VST (dll?) with my sampled sound?Reply
worksPosted by chris clement on 03/08/2017 07:19pm
This played middle C! I had been looking for a simple console example that would work in the latest 2015 .net Thanks!!Reply
eventsPosted by omar on 12/26/2016 12:53am
hi there... do you happen to have somewhere any example or something about sending events... a program change ?? thanksReply
Midi channelsPosted by uCkuper on 07/28/2016 05:20pm
@Ranandar: The MIDI spec numbers the MIDI channels 1 through 16. Windows uses zero-based indexes for the MIDI channels, so yes, drums play on channel 10 (MIDI spec), which is channel 9 in Winmm.dll.Reply
Good tutorialPosted by Ranandar on 05/04/2016 07:40am
Thanks! This was very helpful with getting my C# drum machine project started. The only snag was finding the right channel for the percussion. While midi.org says channel 10, you actually use channel 9 in the code.Reply
i am trying to make a midi piezo drum in arduinoPosted by asish on 02/23/2016 03:41am
if i want to make my piezo play as a snare in any other music software....how to code my sketch to make my computer understand what i want to do.Reply | http://www.codeguru.com/columns/dotnet/making-music-with-midi-and-c.html | CC-MAIN-2017-43 | refinedweb | 2,926 | 60.55 |
- buster 4.20.0-2
- buster-backports 5.4.0-1~bpo10+1
- testing 5.4.0-1
- unstable 5.5.0-1
NAME¶ip-netns - process network namespace management
SYNOPSIS¶¶A network namespace is logically another copy of the network stack, with its own routes, firewall rules, and network devices.
By default a process inherits its network namespace from its parent. Initially all the processes share the same default network namespace from the init process. [ set NAME NETNSID - assign an id to a peer network namespace
This command assigns a id to a peer network namespace. This id is valid only in the current network namespace. If the keyword "auto" is specified an available nsid will be chosen.¶ip netns list
ip netns add vpn
ip netns exec vpn ip link set lo up | https://manpages.debian.org/buster/iproute2/ip-netns.8.en.html | CC-MAIN-2020-10 | refinedweb | 135 | 68.16 |
Pipes.Group.Tutorial
Description
pipes-group builds upon
pipes to establish idioms for grouping streams
into sub-streams without collecting elements into memory. This tutorial
assumes familiarity with
pipes and
pipes-parse.
Synopsis
Motivation
Dividing a stream into sub-streams is non-trivial. To illustrate the problem, consider the following task: limit a stream to the first three groups of elements (a group means consecutive equal elements).
The wrong way to do it is to read each group into memory like this:
import Lens.Family.State.Strict (zoom) import Pipes import Pipes.Parse import qualified Pipes.Prelude as P threeGroups :: (Monad m, Eq a) => Producer a m () -> Producer a m () threeGroups p0 = loop 3 p0 where loop 0 _ = return () loop n p = do (as, p') <- lift $ runStateT (zoom group drawAll) p each as loop (n - 1) p'
The first problem is that this approach does not output any elements from each group until after parsing the entire group:
>>>
runEffect $ threeGroups P.stdinLn >-> P.stdoutLn1<Enter> 1<Enter> 2<Enter> 1 1 2<Enter> 2<Enter> 3<Enter> 2 2 2 4<Enter> 3
>>>
Worse, this program will crash without outputting a single value if fed an infinitely long group of identical elements:
>>>
runEffect $ threeGroups (each (repeat 1)) >-> P.print<Consumes all memory and crashes>
A better approach is to just stream directly from the first three groups instead of storing the groups in intermediate lists:
import Lens.Family ((^.)) import Pipes import Pipes.Parse import qualified Pipes.Prelude as P threeGroups :: (Monad m, Eq a) => Producer a m () -> Producer a m () threeGroups p0 = loop 3 p0 where loop 0 _ = return () loop n p = do p' <- p ^. group loop (n - 1) p'
This will run in constant memory and stream values immediately:
>>>
runEffect $ threeGroups P.stdinLn >-> P.stdoutLn1<Enter> 1 1<Enter> 1 2<Enter> 2 2<Enter> 2 2<Enter> 2 3<Enter> 3 4<Enter>
However, this code is not very modular: we have to integrate our group
creation logic with our group consumption logic. This conflicts with the
pipes philosophy of decoupling streaming programs into modular components.
An more modular approach would be to split our logic into three steps:
- Split our
Producerinto groups
- Take the first three groups
- Join these three groups back into a
Producer
But how do we split our
Producer into groups without loading an entire
group into memory? We want to avoid solutions like the following code:
import Control.Monad (when, liftM2) import Lens.Family.State.Strict (zoom) import Pipes.Parse split :: (Monad m, Eq a) => Producer a m () -> Producer [a] m () split p = do ((as, eof), p') <- lift (runStateT parser p) yield as when (not eof) (split p') where parser = liftM2 (,) (zoom group drawAll) isEndOfInput
... because then we're back where we started, loading entire groups into memory.
FreeT
Fortunately, you can group elements while still streaming individual
elements at a time. The
FreeT type from the
free package solves this
problem by allowing us to build "linked lists" of
Producers. This lets
you work with streams in a list-like manner.
The key idea is that:
-- '~' means "is analogous to" -- If a Producer is like a list Producer a m () ~ [a] -- ... then a 'FreeT'-delimited 'Producer' is like a list of lists FreeT (Producer a m) m () ~ [[a]]
Think of
(FreeT (Producer a m) m ()) as a "list of
Producers".
FreeT nests each subsequent
Producer within the return value of the
Producer so that you cannot access the next
Producer until you
completely drain the current
Producer. However, you rarely need to work
with
FreeT directly. Instead, you can structure most things using
"splitters", "transformations" and "joiners":
-- A "splitter" Producer a m () -> FreeT (Producer a m) m () ~ [a] -> [[a]] -- A "transformation" FreeT (Producer a m) m () -> FreeT (Producer a m) m () ~ [[a]] -> [[a]] -- A "joiner" FreeT (Producer a m) m () -> Producer a m () ~ [[a]] -> [a]
An example splitter is
(view groups), which splits a
Producer into
FreeT-delimited
Producers, one for each group of consecutive equal
elements:
view groups :: (Eq a, Monad m) => Producer a m x -> FreeT (Producer a m) m x
An example transformation is
(takes 3), which takes the first three
Producers from a
FreeT and drops the rest:
takes 3 :: Monad m => FreeT (Producer a m) m () -> FreeT (Producer a m) m ()
An example joiner is
concats, which collapses a
FreeT of
Producers
back down into a single
Producer:
concats :: Monad m => FreeT (Producer a m) m x -> Producer a m x
If you compose these three functions together, you will create a function
that transforms a
Producer to keep only the first three groups of
consecutive equal elements:
import Lens.Family import Pipes import Pipes.Group import qualified Pipes.Prelude as P threeGroups :: (Monad m, Eq a) => Producer a m () -> Producer a m () threeGroups = concats . takes 3 . view groups
Both splitting and joining preserve the streaming nature of
Producers and
do not collect or buffer any values. The transformed
Producer still
outputs values immediately and does not wait for groups to complete before
producing results.
>>>
runEffect $ threeGroups P.stdinLn >-> P.stdoutLn1<Enter> 1 1<Enter> 1 2<Enter> 2 2<Enter> 2 2<Enter> 2 3<Enter> 3 4<Enter>
>>>
Also, lenses simplify things even further. The reason that
groups is a
lens is because it actually combines both a splitter and joiner into a
single package. We can then use
over to handle both the splitting and
joining for us:
>>>
runEffect $ over groups (takes 3) P.stdinLn >-> P.stdoutLn<Exact same behavior>
This behaves the same because
over takes care of calling the splitter
before applying the transformation, then calling the inverse joiner
afterward.
Another useful lens is
individually, which lets you apply transformations
to each
Producer layer of a
FreeT. For example, if we wanted to
add an extra
"!" line to the end of every group, we would write:
>>>
import Control.Applicative ((<*))
>>>
runEffect $ over (groups . individually) (<* yield "!") P.stdinLn >-> P.stdoutLn1<Enter> 1 1<Enter> 1 2<Enter> ! 2 2<Enter> 2 2<Enter> 2 3<Enter> ! 3 4<Enter> !
>>>
Note that
individually is only compatible with the
lens package. You
can alternatively use
maps if you are using
lens-family-core:
>>>
runEffect $ over groups (maps (<* yield "!")) P.stdinLn >-> P.stdoutLn<Exact same behavior>
How FreeT Works
You don't necessarily have to restrict yourself to predefined
FreeT
functions. You can also manually build or recurse over
FreeTs of
Producers.
For example, here is how
concats is implemented, which collapses all the
Producers within a
FreeT into a single
Producer:
concats :: Monad m => FreeT (Producer a m) m x -> Producer a m x concats = go where go f = do x <- lift (runFreeT f) -- Match against the "head" of the "list" case x of Pure r -> return r -- The "list" is empty Free p -> do -- The "list" is non-empty f' <- p -- The return value of the 'Producer' is go f' -- the "tail" of the "list"
Many patterns for
FreeTs have equivalent analogs for lists.
runFreeT
behaves like pattern matching on the list, except that you have to bind the
result.
Pure is analogous to
[] and
Free is analogous to
(:).
When you receive a
Free constructor that means you have a
Producer whose
return value is the rest of the list (i.e. another
FreeT). You cannot
access the rest of the list without running the
Producer to completion to
retrieve this return value. The above example just runs the entire
Producer, binds the remainder of the list to
f' and then recurses on
that value.
You can also build
FreeTs in a manner similar to lists. For example, the
chunksOf lens uses the following splitter function internally:
_chunksOf :: Monad m => Producer a m x -> FreeT (Producer a m) m x _chunksOf p = FreeT $ do x <- next p -- Pattern match on the 'Producer' return $ case x of Left r -> Pure r -- Build an empty "list" Right (a, p') -> Free $ do -- Build a non-empty "list" p'' <- (yield a >> p')^.splitAt n0 -- Emit the "head" return (_chunksOf p'') -- Return the "tail"
Pure signifies an empty
FreeT (one with no
Producer layers), just like
[] signifies an empty list (one with no elements). We return
Pure
whenever we cannot emit any more
Producers.
Free indicates that we wish to emit a
Producer followed by another
"list". The
Producer we run directly within the body of the
Free.
However, we store the remainder of the "list" within the return value of
the
Producer. This is where
_chunksOf recurses to build the rest of the
"list".
To gain a better understanding for how
FreeT works, consult the definition
of the type, which you can find in Control.Monad.Trans.Free:
newtype FreeT f m a = FreeT { runFreeT :: m (FreeF f a (FreeT f m a)) } data FreeF f a b = Pure a | Free (f b)
... and just replace all occurrences of
f with
(Producer e m):
-- This is pseudocode newtype FreeT' m a = FreeT { runFreeT :: m (FreeF' a (FreeT' m a)) } data FreeF' a b = Pure a | Free (Producer e m b)
... which you can further think of as:
-- More pseudocode newtype FreeT' m a = FreeT { runFreeT :: m (Pure a | Producer e m (FreeT' m a)) }
In other words,
runFreeT unwraps a
FreeT to produce an action in the
base monad which either finishes with a value of type
a or continues with
a
Producer which returns a new
FreeT. Vice versa, if you want to build
a
FreeT, you must create an action in the base monad which returns either
a
Pure or a
Producer wrapping another
FreeT.
Conclusion
This library is very small since it only contains element-agnostic grouping
utilities. Downstream libraries that provide richer grouping utilities
include
pipes-bytestring and
pipes-text.
To learn more about
pipes-group, ask questions, or follow development, you
can subscribe to the
haskell-pipes mailing list at:
... or you can mail the list directly at:
mailto:haskell-pipes@googlegroups.com | http://hackage.haskell.org/package/pipes-group-1.0.12/docs/Pipes-Group-Tutorial.html | CC-MAIN-2021-49 | refinedweb | 1,651 | 58.21 |
:
Supplies:
-Light weight, transparent container. example
-Mesh Spackle tape. example
-Pollyfill stuffing
-Fishing line or thread to hang
-Arduino
-LEDs, Wires, Resistors, Infrared sensor and Remote
-Battery or power source for arduino
-Breadboard or blank circuit board, I used this one from Radio Shack.
Tools:
-Hot Glue
-Soldering iron (optional)
-Wire cutter, strippers.
Step 3: Hang and Enjoy!
In the future, there are some changes I would like the make.
- Replace power cord with a battery
- Build my own circuit board to replace the arduino
- Sound effects? perhaps recorded, or for example a small shaker for rain, etc
- Add fading. For some reason, as soon as I added fading the lights flashed instead (even when using a ~ pin). I would have liked "sun" to be fading between yellow and white, same for blue and white with "rain." I would have also liked to fade between functions.
Questions, Comments, Feedback welcome :]
Have fun with it!
is it possible to change colors without remote control?
I had a lot of fun making this! Thanks for the inspiration!
Sorry, leds without proper resistor will burn very soon
The updated version has proper resistors
I have allways loved this idea. He was my inspiration to do a remix
That is really beautiful! I am just mesmerized by the different colors.
Thank you in advance
Hi, Im sorry I missed this, Im sure it would be possable. I considered using them, but was afraid it wouldnt be reliable due to the light coming from the cloud itself.
NEC protocol (results.value):
1=50167935
2=50151615
3=50184255
4=50143455
5=50176095
6=50159775
7=50192415
8=50139375
9=50172015
0=50135295
:-)
#include
int RECV_PIN = 11;
IRrecv irrecv(RECV_PIN);
decode_results results;
void setup()
{
Serial.begin(9600);
irrecv.enableIRIn(); // Start the receiver
}
void loop() {
if (irrecv.decode(&results)) {
Serial.println(results.value, DEC);
irrecv.resume(); // Receive the next value
}
switch(results.value){
case 50151615:
digitalWrite (13, HIGH);
break;
//red off
case 50184255:
digitalWrite (13, LOW);
break;
}
}
@dablondeemu this is great instructables for basic IR remote bro!good work
sorry for the new reply, couldn't answer your last comment..
Thanks
Just the reciever and the remote are pretty cheap by them selves
But with a IR reciever any remote will work. I was using my TV remote to test it,. I bet you could find a cheap remote used or at a dollar store even.
#include <IRremote.h>
#include <IRremoteInt.h>
int RECV_PIN = 11;
etc...
If it still doesnt work, make sure the library is installed correctly. Let me know if that fixes it! :]
Now can you figure out how to make Lighting Bolts shoot out of that too :)
and get a lamp kit from a craft store.
However - you'd lose the infra-red remote control aspect, so (assuming your lights have a flashing feature - or three) you'd need a way to access the controls. If you can/need to extend the lead between the control box and 1st LED somewhat, you should be able to have your cloud but keep the conrols at a more accessible point. So long as you don't cross-connect the wires, that shouldn't be an issue for the vast majority of people. And you wouldn't necessarily even need to solder the connections - a pair of terminal blocks ( for example) at each junction would work (they'd look ugly, but they'd work), but I don't know that they'd take the strain of hanging your likely-to-sway cloud.
A second issue could be the number and brightness of your lights - you'd have to experiment to find an acceptable compromise. | http://www.instructables.com/id/IR-Remote-Controlled-Color-Changing-Cloud-Arduino/?ALLSTEPS | CC-MAIN-2015-22 | refinedweb | 606 | 74.08 |
Excess current draw after deepsleep
I'm running ver 1.20.1.r1 on a gpy with adhoc wifi disabled (pycom.wifi_on_boot(0)) which gives me a standyby current (no program running) of about 50mA. However after I run a program that terminates with a machine.deepsleep(20000) via Atom the standby current is 50mA for ~3s after wakeup then jumps to ~204mA where it stays. If I press the rest button it repowers @ 204mA for about 5s then drops back to 50mA where it stays. A soft reset (ctrlD) has no effect (current draw still 204mA).
I can't figure out what is drawing the extra 154mA, why deepsleep causes it or why a hard reset fixes it but a soft reset doesn't. Anybody got some thoughts? This is really bugging me because I thought I had deepsleep sorted like a year ago & now it's doing weird stuff again.
@dgerman If I press reset then run
import time, network, machine; print() #lte=network.LTE(); lte.deinit() causes={0:'pwrup', 1:'hardreset', 2:'wdt', 3:'wakeup', 4:'softreset', 5:'brownout'}; cause=causes[machine.reset_cause()] print(' powerup cause is', cause, ' check current', end=' ') for i in range(10): print(i, end=' '), time .sleep(1) t=10; print(' deepsleep(s)', t); machine.deepsleep(t*1000)
the program deepsleeps at ~50uA after the first pass
powerup cause is pwrup check current 0 1 2 3 4 5 6 7 8 9 deepsleep(s) 10
but ~170mA subsequently.
powerup cause is wakeup check current 0 1 2 3 4 5 6 7 8 9 deepsleep(s) 10
To get the 50uA on all deepsleeps I have to uncomment line 2. My take on this is that a pwrup prevents 'smart provisioning' powering up the lte modem but wakeup from deepsleep does not. I'm unfamiliar with '.enable_pullups' & I'm not using it.
@kjm What do you mean "first cycle of the program"? Is the deepsleep expiring? This should cause the Gpy to reset (i.e. start a "first cycle").
Did you .enable_pullups?
What does deepsleep.get_wake_status() return?
The 154mA seems suspiciously like the LTE radio is back on.
Can you post more(all) code?
I figured it out! So called 'Smart provisioning' was turning the lte modem on after a deepsleep. A hard reset disables 'smart provisioning' apparently. So even if you're not using the gpy lte modem you need to switch it off (lte=LTE(); lte.deinit()) manually if you want low current deepsleep.
I feel that this is contrary to which says of deepsleep, 'Execution is resumed from the main script, just like when pressing the reset button'. Fact is 'smart provisioning' powers up the lte modem after every deepsleep.
Further to my deepsleep woes on the gpy. I got another gpy, reflashed it with 1.20.1.r1 then ran
import machine t=15; print(t, 's sleep'); machine.deepsleep(t*1000)
What I'm finding is that deepsleep only drops the gpy to low current (~50uA) on the first cycle of the program. On subsequent cycles the program 'sleeps' but the current is ~204mA. Low current deepsleep works only if I press the reset button after each cycle of the program. Any thoughts on why low current deepsleep is a 'one shot' only?
My feeling is that some process that draws an extra (204-50=154mA) kicks in after the first low current deepsleep & stays till I press the reset button. Any suggestions what it might be? | https://forum.pycom.io/topic/6327/excess-current-draw-after-deepsleep | CC-MAIN-2022-33 | refinedweb | 580 | 75.71 |
Current Version:
Linux Kernel - 3.80
Synopsis
#include <sys/mman.h> void *mmap(void *addr, size_t length, int prot, int flags, int fd, off_t offset); int munmap(void *addr, size_t length);
See NOTES for information on feature test macro requirements.
Description
If addr is NULL, then the kernel chooses the address at which to create the mapping; this is the most portable method of creating a new mapping. of these flags are described in POSIX.1-2001.GB. Deprecated.
- MAP_ANONYMOUS
- The mapping is not backed by any file; its contents are initialized discouraged.
- information.
- MAP_LOCKED (since Linux 2.5.37)
- Lock the pages of the mapped region into memory in the manner of mlock(2). This flag is ignored in older kernels.
- MAP_NONBLOCK (since Linux 2.5.46)
- Only meaningful. However, address addr must be a multiple of the page size. All pages containing a part of the indicated range are unmapped, and subsequent references to these pages will generate SIGSEGV. It is not an error if the indicated range does not contain any mapped pages.
Timestamps changes for file-backed mappings
The st_ctime and st_mtime field for a file mapped with PROT_WRITE and MAP_SHARED will be updated after a write to the mapped region, and before a subsequent msync(2) with the MS_SYNC or MS_ASYNC flag, if one occurs.
Return Value mappings would have been exceeded.
-).
Conforming To
Availability
Notes.
C library/kernel ABI differences
Bugs (for example, when using POSIX shared memory interface documented in shm_overview(7)).
Exampleaqt); } exit(EXIT_SUCCESS); }
See Also
The descriptions of the following files in proc(5): /proc/[pid]/maps, /proc/[pid]/map_files, and /proc/[pid]/smaps.
B.O. Gallmeister, POSIX.4, O'Reilly, pp. 128-129 and 389-391.
Colophon
License & Copyright
and Copyright (C) 2006, 1997-01-31 by Eric S. Raymond Modified 2000-03-25 by Jim Van Zandt Modified 2001-10-04 by John Levon Modified 2003-02-02 by Andi Kleen Modified 2003-05-21 by Michael Kerrisk MAP_LOCKED works from 2.5.37 Modified 2004-06-17 by Michael Kerrisk Modified 2004-09-11 by aeb Modified 2004-12-08, from Eric Estievenart Modified 2004-12-08, mtk, formatting tidy-ups Modified 2006-12-04, mtk, various parts rewritten 2007-07-10, mtk, Added an example program. 2008-11-18, mtk, document MAP_STACK | https://community.spiceworks.com/linux/man/2/munmap | CC-MAIN-2019-35 | refinedweb | 385 | 66.64 |
Hi there,
I want some help on decreasing the database size.
My SYSTEM Table Space size is too large. 4.09GB.
How can I reduce it?
Kind Regards.
Printable View
Hi there,
I want some help on decreasing the database size.
My SYSTEM Table Space size is too large. 4.09GB.
How can I reduce it?
Kind Regards.
Hi,
Please check the free space for system ts.
SQL> SELECT TABLESPACE_NAME, SUM(BYTES) FROM DAB_FREE_SPACE
GROUP BY TABLESPACE_NAME;
and try to find out how much of contiguous free space is available.
use that size to minimize your TS size by finding related datafile
EX :
SQL> alter database datafile '/disk1/oradata/XXXXX/system01.dbf' resize 2gb;
Regards,
Singh.
Hi there,
I have checked the free space it's only 2.47MB. The DataFile is 99% Full, but I know that my whole data is not more than 700-800MB.
Actually I imported this whole data (I mean for other Table Spaces) from UNIX.
In this situation what you will suggest.
Kind Regards,
Shani
Follow these steps only if U have not used these tables for any DML perpose.
When you imported tables Did u had Export log associated with the Dump file ?
1. If Yes then drop all those table from System tablespace by connecting to system users. (if No then create index file using Import utility specifying index file name )
2. Create diffent tablespace say "TBS1" , create differnt user say "ABC" and give this tablespace name as default tablespace name.
grant privilegs to ABC
3. Locate the script of tables if possible and change tablespace name to "TBS1" for each table.
(If u dont have tables script create Index file using Import utility Copy contents and make script for creating tables )
4. Run this script in newly created user.
5. Import the dump saying
Imp system/systempassword file=*****.dmp rows=y ignore=y fromuser=system touser=ABC
6. Then go for resizing datafiles assoicited with System tablespace
Regards
Viraj
----------
OCP 9i DBA
Hi,
Check for segments that does not belong to SYS/SYSTEM schema but reside in SYSTEM tablespace and move them to designated tablespaces:
select owner, segment_type, segment_name
from dba_segments
where owner not in ('SYS','SYSTEM')
and tablespace_name = 'SYSTEM';
For each table, simply move to other tablespace using
alter table tab_name move tablespace ts_name storage (...
for each index, you can rebuild it using:
alter index ind_name rebuild tablespace ts_name storage (...
Cheers,
R.
Hi shaniahmad
Before you do anything, first check what are default and temporary tablespaces for the users. NEVER leave them default. Make sure they are not SYSTEM.
select username, default_tablespace, temporary_tablespace from dba_users where username<>'SYSTEM';
If you find SYSTEM tablespace here change them using ALTER USER command. Then you can continue with what rotem_fo has said.
After you transfer all the objects to different tablespaces, you can use the following command:
alter database datafile 'xxxx.dbf' resize 100M;
If this command fails, which is possible, then you will have to export and import the entire database with possibly COMPRESS=Y option.
Results From DBA_DATA_FILES
===========================
FILE_NAME = E:\ORACLE\ORADATA\SFPL\SYSTEM01.DBF
TABLESPACE_NAME = SYSTEM
BYTES = 4293787648
BLOCKS = 524144
STATUS = AVAILABLE
RELATIVE_FNO = 1
AUTOEXTENSIBLE = YES
MAXBYTES = 3.4360E+10
MAXBLOCKS = 4194302
INCREMENT_BY = 80
USER_BYTES = 4293779456
USER_BLOCKS = 524143
Results From DBA_FREE_SPACE
===========================
TABLESPACE_NAME = SYSTEM
SUM(BYTES/1048576) = 2699.28906 (It's in MB)
when I tried to resize the data file it gives the following error.
ORA-03297: file contains used data beyond requested Resize Value.
Hi
quick n dirty answer to you...on how to resize :-D
export the database with compress=y
import the database ..
then you can resize your datafile..i suspect you have non system database objects in the system tablespace.
ahh..i hope your database is small enough to do export import..
regards
Hrishy | http://www.dbasupport.com/forums/printthread.php?t=33311&pp=10&page=1 | CC-MAIN-2018-22 | refinedweb | 628 | 66.33 |
def find_best(key): ... return new_key, value key, value = find_best(key) a, b = b, a # swapInterval test
if 2 < x < 4: print "x is between 2 and 4."Set membership test
if val in ('foo','bar'): # or {'foo','bar'}: a setFor performance, store the tuple/set in a variable (whose name could be useful documentation), or just do
if val=='foo' or val=='bar':Conditional expression What in CeeLanguage is written
c ? x : yis in Python (see for history)
x if c else y # note orderIn Python <2.5, use (if you must) one of
c and x or y # incorrectly returns y if x is (any kind of) false (c and [x] or [y])[0] # reliable, but ugly and churns objects (x, y)[not c] # always evaluates both (y, x)[c] # only if c is really a bool (or otherwise 0 or 1)
for f in foo: print f * barnot
for i in range(len(foo)): print foo[i] * barIterating over the lines in a file
with open('foo') as f: # automatically close f even on exception for line in f: ...not
f=file('foo') # synonym for open; see for line in f.readlines(): # store whole file in memory ...In CPython, the "with" is not so necessary, as dropping the last reference to f will close the file (but that can happen later than you think in case of exceptions). Repeat-until loop
while True: ... if test: break ...
class MyObj(object): def __init__(self): self.foo=0 obj=MyObj() obj.foo=4You can later make foo a property if logic is necessary:
class MyObj(object): def __init__(self): self._foo=1 # oops, it has to be odd now @property # this is a "decorator" def foo(self): return self._foo @foo.setter def foo(self,x): self._foo=x-x%2+1 # round up to next odd number obj=MyObj() obj.foo=4 # now assigns 5Dispatch tables
disp = { 0: f0, 1: f1, 2: f2, 3: f3 } # dictionary of functions x=disp[n](...) # choose and callnot
if n==0: x=f0(...) elif n==1: x=f1(...) elif n==2: x=f2(...) elif n==3: x=f3(...) else: raise KeyError(n) # for symmetry with the above(Python has no SwitchStatement.)
class A(object): def f(self,l): self.list=l[:]If it's common for the call to look like
a=A() a.f([1,2,3]) # no other reference to this list anyway(but with a large list that shouldn't be needlessly copied) you can (in CPython) optimize away the copy:
import sys class A(object): def f(self,l): if sys.getrefcount(l)>3: l=l[:] self.list=lThe three allowed references are l itself, and its presence in the argument lists of f() and of getrefcount(). (This occurred to me just before writing this on June 20 2013; thoughts welcome. -- DavisHerring?) | http://c2.com/cgi/wiki?PythonIdioms | CC-MAIN-2015-35 | refinedweb | 472 | 73.68 |
Version 7 does offer some performance gains (especially in regards to optimized code for Pentium4 compared to GCC), but the debugger is less than stellar, LinuxJournal says.
Testing the Intel C++ Compiler
2003-09-12 Intel 10 Comments
Version 7 does offer some performance gains (especially in regards to optimized code for Pentium4 compared to GCC), but the debugger is less than stellar, LinuxJournal says.
If your’re using VS.Net 7.x with this, just forget using it with the bundled STL in MSVC.Net. You’ll have to use STLport to get any decent STL support in ICL.
It is available on Windoze, for those who not know it.
-magg
I can’t get to the article for some reason but i have heard that the intel compiler for linux is tight. As it should be what better compilier to use then the one offered by the CPU manufacturer. This may be the primer that sets off a new level of quality software for linux. Where do i get my intel optimized and compiled linux kernel? GNU G++ does a pretty good job but still alot of it is guess work.
Dinkumware, P.J. Plauger’s company, makes an excellent C++ standard libary (incl STL) with support for VC++ and Intel C++ ::
Dinkum Unabridged Library for VC++
The Dinkum Unabridged Library V4.02 for VC++ is the successor to our highly popular V3.08 upgrade library for VC++ V6.0 and our libraries for Windows CE. In one package, you get support for VC++ V6.0, V7.0 (.NET), and V7.1 (Everett or .NET 2003), as well as the Intel C++ compiler for Windows. You get our industry-leading Standard C++, Abridged, and EC++ libraries — with optional iterator debugging, multithreading, exception handling, and namespaces. You can use them with the existing VC++ C library, our C95 library or our C99 library. It’s a natural companion to our highly portable Dinkum Unabridged Library, which works with most other popular compilers.
The thing is, I don’t do heavy development, so I am certainly not willing to pay 400 bucks for a compiler. Once the evaluation period ends, you’ll be screwed because of the incompatibility.
As I installed gentoo I saw that it can be configured to compile its packages using icc. There was however a note stating tha not all of them are suitable for this trick.
As I installed gentoo I saw that it can be configured to compile its packages using icc. There was however a note stating tha not all of them are suitable for this trick.
To take only one example, the Linux Kernel isn’t supposed to compile with any kernel but gcc (ie: it has a lot of gcc-only extensions (__asm__(), …)
Perhaps Intel implement some of theses, but I don’t really want to take the risk.
i think icc 7.1 is more favourable than the reviewed icc 7, in regards to performance and debugging ability and ability to link to gcc objects.
> The thing is, I don’t do heavy development, so I am certainly not willing to pay 400 bucks for a compiler. Once the evaluation period ends, you’ll be screwed because of the incompatibility.
Then if you use Linux, you can go and download the “Free Unsupported version” intended for noncommerical purposes.
ICC’s main strength lies in Itanium, and the main reason of icc’s development was to support Itanium’s EPIC architecture. Because of the FUD beiung spread against it claiming “there’s no good compiler for EPIC -> Itanium sucks”.
In this respect, gcc is far behind…
Just a little word, ecc/efc is for the Itanium, not icc/ifc. That said, I hate efc with a passion. I have never been able to use it properly. Now, granted, part of that hatred was due to GNU binutils (ld), but I still don’t like it.
On the other hand, I’ve had pretty good success with ifc on Xeons. Quite a nice compiler. I still think that ifc doesn’t interact with C code as well as Intel seems to believe. Of course, the annoying thing about ifc is that it is continuously behind Red Hat. Remember, IFC 7.1 does not work on RH 9 and even NPTL 8.0 systems (well, without *very* ugly hacks). That is causing more problems for people now than anything.
The requirements are not as harsh as they state in the article. I’ve installed all of the icc-versions since 6.0 on my Debian unstable. It works fine with glibc 2.3.1, but glibc 2.3.0 had a few bugs that made the headers inparsable by compilers that didnt define _GCC (icc 7.2 has “fixed” this by defining it and thus claiming to be gcc).
To use the rpm’s in alien you first has to remove the uninstall-scripts since debian wont accept conflicting files. | https://www.osnews.com/story/4524/testing-the-intel-c-compiler/ | CC-MAIN-2019-51 | refinedweb | 827 | 74.29 |
09 November 2007 16:14 [Source: ICIS news]
By Nigel Davis
LONDON (ICIS news)--?xml:namespace>
The Henry Hub spot price on 7 November was up about 2% on the week at $7.34/m Btu (million British thermal units). All important gas inventories are high.
Working gas stocks hit a record high as of 2 November with 3,545bn cubic feet in storage, 8.9 percent above the five-year average.
But gas prices in the
And the way prices in continental
High gas prices have been an issue in the
Infrastructure investment, however, including the laying of two new gas pipelines under the
Prices this week, though, have been around the equivalent of $10/m Btu.
Producers do not so much fear high gas costs but sharply rising prices they can do without.
Add the industry’s concerns over the weaker dollar and you have the makings of an onslaught on European chemical industry competitiveness.
The sector currently appears to be taking higher costs in its stride but concerns have been raised about a possible demand slowdown which, coupled with higher input costs, could hit companies’ pricing power hard and subsequently dent margins.
Pricing power has slipped away downstream in the sector as companies such as Clariant have shown.
Some analysts believe it may not be long before producers in other segments lose influence in the marketplace.
A straw poll of producers and traders this week, when it looked as though oil would breach the $100/bbl mark, produced a range of comment.
One large ethylene buyer suggested that cracker operators would cut operating rates rather than sell marginal tonnes at even closer to break-even values.
Producers of polycarbonate and acrylic acid were insistent that higher prices would be needed before high oil prices filtered down into feedstock benzene, phenol and propylene.
Short-term production issues continue to have an impact on some markets while others, such as benzene, are feeling downward pressure from weak downstream demand.
Styrene traders said some products were still cheap in relation to crude.
The price of natural gas will certainly move up on the back of further crude gains, but with a delayed effect, one methanol trader said.
“Today’s oil prices only feature in a small way in today’s gas prices. But they increasingly get factored in as the weeks go by. So at this stage methanol producers are running on borrowed time,” he added.
Methanol producers would have to hedge production to avoid shutting down or losing money, another trader suggested.
“Winter gas is really strong for this time of year but cracks are not outside any of the historical ranges,” another said.
All is not rosy for US Gulf petrochemicals makers.
The natural gas cost differential is working in their favour; and at those relative natural gas prices, there are probably export opportunities to
But ethane feedstock prices have marched upwards with oil in recent months, so the input cost differential across a range of products is not as great as it first looks.
High priced oil and spiking gas costs hit the sector hard. Given the continued weakness of the US dollar, European bulk chemicals makers are in for a tough period.
ICIS pricing reporters conributed | http://www.icis.com/Articles/2007/11/09/9077689/INSIGHT-High-gas-prices-add-to-Europes-oil-woes.html | CC-MAIN-2014-42 | refinedweb | 540 | 60.85 |
Go
Unanswered
|
Answered
Screensavers
~300
answered questions
Parent Category:
Software and Applications (non-game)
Screensavers are applications that run when a computer is idle. Their purpose is to prevent damage to computer monitors and/or provide entertainment.
1
2
3
>
How can you convert a flash file into a Windows screensaver?
You can convert Shockwave Flash (*.swf) files into Screen Saver files (*.scr) using a program like InstantStorm, see the related link for their homepage. With a FlV converter, convert FLV or .swf files to gif or aiv.…
Popularity: 85
Your screensaver comes up nothing else?
go to google images type what u want click see full size image .right click on piture ,click set as background and hey presto
Popularity: 2
How can you keep your screensaver when music is playing on the computer?
Right click on your desktop and click personalize after that personalization window will appear, click on screen saver and set the minutes into a large number. Also, in the screen saver settings click change power settings and click change when the computer sleep and set the display settings.
Popularity: 1
How do you create screensavers?
MS PowerPoint
Popularity: 11
Why are screensavers necessary?
According to my knowledge this is because screen savers increases the life of your monitorUpdated 6/7/10your knowledge is correct
Popularity: 13
How do you apply the digital time on your screensaver?
computer screensaver configuring SCREENSAVER To get a screensaver to show digital time on your computer do the following as below: - - Take your mouse cursor to an empty area on your desktop and right click on the empty area. - The display properties dialog will come up on which you click screen…
Popularity: 6
Does a LCD monitor need a screensaver?
NOT REALLY, Older displays used technology that used Phosphors that were charged to make colors display. When an image was on the screen too long the phosphors would sometimes "burn out" with the constant energy and leave that ghastly image on the screen called "BURN IN". A LCD screen uses a complet…
Popularity: 31
Whats wrong with your computer screen if whenever it goes in the black screen mode after the screensaver and you move your mouse and your screen go to desktop then black over and over again?
Usually try a friends computer if your computer is experiencing difficulties. Otherwise make the year you were born in lower than 1900 because the site might think you are not old enough if you have a 1997 date.There are 3 step to repair blue screen error If you got blue screen error then there is …
Popularity: 25
Where can you get a screensaver for WWE wrestler edge?
Answer i love edge unchained wrestling wallpapers no 1 source for wwe tna and ecw wrestling wallpapers
Popularity: 19
Where can you download Daniel Craig screensaver as James Bond in swimming trunks?
type in "Daniel Craig + James bond swim suit" on Google images.
Popularity: 2
What is a screensaver?
A screensaver is the image, or video etc. that your computer puts on the screen when it goes into a lower power system (similar to "sleep" mode).
Popularity: 1
How do you get rid of crawlercomI was trying to get a free screensaver and I got Crawlercom as my home page for Mozilla Firefox I have tried all that I know to delete it Please advise Thank you?
just do a system restore.It worked for me
Popularity: 1
Where does Windows save screensavers?
I found this info On the net for Screan savers: SCR-files If the downloaded file is of 'scr' type, you need to move it manually to c:\windows unless author's instructions tell you otherwise (sometimes needs to go to c:\windows\system or c:\windows\system32). *Go to 'my computer', open c:\, then 'wi…
Popularity: 8
How do you convert C source code into a screensaver for Linux?
Most screensavers on Linux are modules run by Xscreensaver: I don't know exactly how to convert a program to a module, check the Xscreensaver documentation.
Popularity: 2
What is Freeze Screen Saver exe Is it good or bad. Should I remove it from PC?
freezescreensaver.exe is adware and should be removed immediately.
Popularity: 2
What website does the best Pokemon Charmander screensavers?
I would suggest using image sites. There is a good one in the related links below.
Popularity: 5
In Windows XP whenever the screensaver comes on it goes back to the user login screen and how do you stop it from doing that?
Right click on desktop/Properties On label "Screen saver" (the 3rd label, don't know how it's called exactly, as i don't use English XP) you can find a checkbox called password security or something similar. Check it out. NB: If you use a remote desktop connection while the screen saver turns on, …
Popularity: 1
Where can you find the old screensaver where Barney The Dinosaur dies in various ways?
I found only a link that does not work. But I also found what is supposedly the email address of the person who made it: Karl_A._Bunker@bcsmac.bcs.org I haven't written to it so I don't know if it will work or not.
Popularity: 1
Delete pictures off screensaver and desktop?
If you have a windows system you can do it two ways. One way is to go into the picture files and delete all pics or you can go into appearance and hit screen saver tab. When that comes up you will see what is programmed as your screen saver and you can change that with the drop down menu there. Same…
Popularity: 2
Why does your computer need a Screen-saver?
Computers these days do not need a screen saver at all, the monitors and screens have developed since the start. Originally the screens used to be CRT green screens which displayed the characters and information by sending a beam of light against a florescent background. When the computers were lef…
Popularity: 4
What is the windows version of apple leopard mosaic screensaver?
2012 05 29 I found that in my search for windows version of this mosaic leopard screesaver: and 'version beta Regards, Myke974
Popularity: 2
How do you make screensaver for mobile phone?
go to RED DOO
Popularity: 9
Where is the screensaver island with three palm trees?
roatan, honduras
Popularity: 3
How can you get a spyro dawn of dragon screensaver on YouTube?
You don't. But there's a cool free one that I got from spyroslair.com!
Popularity: 1
A screensaver is best described as an?
as an image or animation used to prevent desktop images from burning into the monitor. Back when CRT monitors were still used, any image left on the screen for an extended period of time would 'burn' into the monitor, and when you changed the image you could still see outlines of the image that w…
Popularity: 2
What is the purpose of a screen saver on a computer?
To save the screen.
Popularity: 6
How do you put a screensaver on my computer screen?
For Windows Users: In order to switch on the screen saver, you can right click on the desktop and select 'properties'. Then select the tab marked 'Screensaver'. It is turned on or off here. If you want to put on a screen saver than left click on your desktop. If you have Vista click personalise. Xp…
Popularity: 5
How do you make a screensaver?
You can use adobe flash player. Once you have downloaded it you should go to the start button and click on multimedia then on the flash player with the red icon.Hope this helps!
Popularity: 3
Where can you find twilight screensavers?
You can download Twilight screen savers from the link in related links.
Popularity: 4
In your desktop properties screen saver and desktop is not showing i think some files are missing or get corrupted but how it will be resolved?
This may be due to some spyware or adwares try to download the anti spywares.
Popularity: 1
Where you can find Morning Glory Korean stationery company screensavers?
Hey there. You can go in lots of places . There's one in bankstown, campsie,cabramatta and lots more Jenniferrr
Popularity: 1
Make a picture a screensaver?
if you have a PC put your mouse pointer over the picture you want as your screensaver and click the right side button of your mouse then somewhere in that list of words it says (set as backround) select that and you should have a sceen saver on your desk top.
Popularity: 3.
Popularity: 3
Where can you get a Yankees screensaver?
Yankees.com has some cool ones. Otherwise just google image search Yankees and see what comes up.
Popularity: 2
How do you change your screensaver?
It Depends On Which Computer You Use. If You Have Windows Vista: 3: Find "Personalize" 4: Find "Screen Savers" 5: Select The Screen Saver and click OK. ------------------------------------------------------------------------------ If You Have Windows XP: 1:Right Click Desktop Note: You Mus…
Popularity: 3
Where can you find screensavers for free?
There are a few on yoyogames.com.
Popularity: 3
Where can you get a WWE raw screensaver?
On wwe.com/screen savers
Popularity: 1
How do you make a slide show screensaver on a hp?
on your screen have to left click then click properties on it says themes, desktop,screen saver, apperance,settings you wanna click screensaversthen click the down arrow button and pick which screen saver you want and that's how you get a screensaver
Popularity: 2
Have the samsung tocco have moving images as screensavers and how can you do it if so?
When You Open Images I Wont Move But When You Apply It As A Walpaper It Moves
Popularity: 1
How do you make a screen saver?
If you have a PC, you can go to the (paint) aplication on your start menu. You can there make a design or put a picture on the page. After you are completely done, Click the (File) icon, then click Save As. Name it anything you like. (DO NOT CLICK SAVE. MAKE SURE YOU CLICK SAVE AS!) From there you m…
Popularity: 1
Are screensavers free?
Most are free unless it says you have to pay. If it says you have to pay you have to pay.
Popularity: 3
How do you change the screensaver on the xbox 360?
Very carefully
Popularity: 0
What is the optimal wait time before a screensaver kicks in?
10 minute delay before the screen saver kicks. my answer is (d) 10 minutes.
Popularity: 2
What is the best wait time before a screensaver kick in?
I put mine at 5 min.
Popularity: 4
What the best time to put on you screensaver?
When you ain't using the computer.
Popularity: 6
A screen saver is best described as a?
B. moving full-screen graphic
Popularity: 3
Source code in C for moving circle screensaver?
#include <stdio.h>#include <conio.h>#include <math.h>#include <graphics.h>#include<stdlib.h>void main(){ int dr,md,midx,i, midy,dx,dy,mx, my,k,x,y,xdir,ydir,oldx,oldy; double pi,tpi,a,t; void *ball; unsigned imgsize; dr = DETECT; initgraph(&dr, &md, "c:\\tc\\bgi…
Popularity: 1
How do you retrieve a screensaver?
Either right clik on your mouse or go to start and hit paint you can create your own screensavers there that's what i always do!
Popularity: 3
Is bad dog screensaver still available?
You really shouldn't want that screen Saver anyway... But in anycase I really don't know
Popularity: 1
Need a screen saver?
There are many places online where you can search for a screensaver, including screensavers.com.
Popularity: 3
Where can you find a twilight screensaver?
You can find a Twilight screen saver on Google Images. Just type in what image you want in the box then press search and it will give you all different picture that you can turn into screensavers. When you find the one that you want double click on it and then at the top of the screen press full siz…
Popularity: 1
How do you set your screensaver?
Right click on your desktop, then you click on "properties", then go to the "screensaver" tab and pick the screen saver that you want.
Popularity: 3
Can you put music on your screensaver?
Yes,we can put music on our screensaver .
Popularity: 1
How do you set up a screensaver?
To set up a screensaver, simply go to the desktop (which is the first screen you see after you log in or with the icons.) Right click on the background then go to propterties. There should be a tab at the top of the screen that opened, called screensavers. There you can change your screen saver. To…
Popularity: 2
Why are screensavers used on computers?
It keeps the computer screen from being burned of a still image on the glass because of the radiation of the catharay tubes.
Popularity: 1
What is the name of the painting featured on the laptop screen-saver used by the art restorer in Palermo Shooting?
Still Life with Oysters, Lemons and Grapes by Cornelis de Heem
Popularity: 1
Why doesn't my screensaver doesn't come on?
Well if you have set your screensaver to come on after the PC or laptop has not been used for 30 minutes and you have been sat there for ONLY 10 minutes then keep sitting there for another 20 minutes then you might see your screensaver?
Popularity: 5
Screensavers blue beach sand with DELL?
do you mean where can you get beach screen savers from dell ??
Popularity: 1
Rugby League State of Origin screensavers?
You can check this site out ;) Regards, Literati
Popularity: 15
How do you change the screensaver on an ASUS Eee PC running Linux?
See:
Popularity: 1
Can you change the screensaver on the samsung impression?
Yes
Popularity: 1
How do you make a video as a screensaver?
If it's an existing video on YouTube, download the free "Video Saver" from . Enter the URL to the video on YouTube in the settings and click on "Save Settings" - done. If the video is not on YouTube, you could just upload it and set then e…
Popularity: 4
Can you make a PowerPoint your screensaver?
not since 2006
Popularity: 2
How does a screensaver function?
Right click your mouse in the wallpaper click properties select screensaver then choose one and type what minute will it appear then click OK.Screensaver is a anti virus protection when you use it
Popularity: 1
Can you change the screensaver on a samsung tocco?
No.
Popularity: 5
How can you make a YouTube video into a screensaver on a computer with optional sound?
It's easy. just download the free "Video Saver" from . In the settings, enter the URL to the video on YouTube and hit "Save Settings" - you now have a new screensaver using a YouTube video.
Popularity: 1
Where can you get a screensaver with the theme of gangster paradise?
you can install it on one of those old dell computers they proboly don't sell them any more so i sugust u give up
Popularity: 3
Does nokia 5800 have a clock screensaver?
Not now but yes in a few years
Popularity: 11
Are free screensavers safe to download?
That is what I asked you.. I wouldn't have asked if I knew the answer.
Popularity: 1
What is the difference between a screensaver and wallpaper?
Wallpaper (also known as a desktop background) is a usually static image placed on the desktop area of many personal computers that use a graphical interface. Screen savers are programs that display moving / rotating images or animations after a certain amount of time with no user input has occurred…
Popularity: 5
Where can you get 3d screensavers that move?
Try doing a google search.
Popularity: 2
How do you set screensaver in nokia 5800 express music mobile phone?
download
Popularity: 2
You want to set a screensaver on your laptop which consumes least energy?
In Control Panel, power settings, select maximum battery life.
Popularity: 1
A screensaver is best described as a?
desktop background pattern or wallpaper
Popularity: 3
Sai baba wallpapers and screensavers free download?
Is chaitanya going to marry me
Popularity: 1
If you unplug one monitor and hook up another monitor will you lose all of your screen savers?
No, The screensavers are stored in the computer, not the monitor.
Popularity: 1
You lost your screensaver for Mac pro?
i have an mac pro, when I go into system preferences my screensaver is not there when i go into system preferences the screensaver is not there
Popularity: 5
What do you call screensavers that move. Not the 3d ones but the ones where the characters in the screensavers move?
Animated?
Popularity: 3
How do you make gifs work as desktop wallpaper?
Just download a working or moving gif and set that as desktop background and yes they do work that way.
Popularity: 1
Where are the best Britney Spears screensavers?
who gives a $hit.
Popularity: 1
How do you put the screensaver on an iBook G4?
Screen savers are set up in the DeskTop and Screen saver section of the System Preferences. After selecting which screen saver you wan to use you can either set a time delay so the screen saver starts after the computer has not been used for a few minutes or set a Hot Corner which will activate the…
Popularity: 3
What is the benefit of using a screensaver?
Benefit of a screen saver: Protects monitor screen from burning during idle monitors by screen saver utility program. This prolongs the life of the monitor.The Advantages of a Screen SaverIn past decades, computer monitors comprised of cathode-ray tubes often suffered "screen burn." A static image …
Popularity: 0
What is the most popular screensaver for people's cellphone?
a multi-colored zebra background, or a peace sign
Popularity: 3
Where can you find good free moving screensavers?
just go to your search bar and type in "free moving screen savers" youll be surprised at how many they are-its safe for ur computer and can be fun as well.i wish i could remember a couple for you,its been a while.i went back to the regular s.savers.lol good luck
Popularity: 1
Does a computer virus effect the screensaver?
Yes, Normally it will change the screen to blue.
Popularity: 1
What are the uses of screen saver?
They have no actual use nowadays, but back when computers with CRT screens first came out, they had 'burn in' problems. What would happen is the screens back then used a more powerful ion beam, and would burn holes in the screen if left too long. The purpose of the screen saver was to quite literall…
Popularity: 2
Does screensavers give viruses?
a lot of the "free" ones do!
Popularity: 4
How can you use more than one screensaver?
You will have to be more specific: Do you have two or more monitors, or just one? What operating system do you have? (e.g. Ubuntu, MS-DOS, Mac OS 9, Windows XP, etc.) Only after these questions are answered can this question be answered.
Popularity: 4
Who invented screensavers?
Adam Megilgot has a creeper mustach
Popularity: 2
How do you make your screensaver your desk top image?
for a mac go to terminal and paste this after it says your computer info (you have to set up a screen saver first) what to paste /System/Library/Frameworks/ScreenSaver.framework/Resources/ScreenSaverEngine.app/Contents/MacOS/ScreenSaverEngine -background
Popularity: 1
Where can you find your name in cool bubble letters for your screensaver?
im not sure maby they might make a website bubbles letters
Popularity: 1
Is screensaver saves computer power?
yes,because it put the monitor into very low power consumption state,then you will be able to save lot of power
Popularity: 1
Is it truth that screensaver can slow the PC?
Uh, No
Popularity: 1
In gta 4 theres a mission where you sneak into a guys house and then you go on his laptop and his screensaver is liesdamnliescom and i cant find that that house is it near the carnival?
In the mission Right Is Wrong, Oleg Minkov's apartment is in Iroquois Avenue, Hove Beach, which is the street behind Niko's original safehouse in Broker.
Popularity: 1
How do you add a new screensaver to your Control-Panel-Display drop-down choices list if you have already added the SCR file to your windows-System32 folder?
Right click the scr file, click install.
Popularity: 2
Where can you find free Barbie Doll screensavers to download?
just research in the google free barbie screensavers
Popularity: 3
Does Nexon have a Maple Story screensaver?
Yes, the Zakum are available at their website.
Popularity: 4
What is justin bieber's computer screensaver?
That is a very strange question, as no-one could possibly know, and I don't see the point in knowing... but my answer is that there are so many screensavers out there that it could be anything!
Popularity: 2
Can you put screensaver on Samsung C5220?
Yes
Popularity: 4
Does a screensaver conserve battery life?
No, it doesn't. When your phone/laptop/tablet is idling and a screensaver is running, it uses CPU and it doesn't conserve battery life.
Popularity: 1
1
2
3
> | http://www.answers.com/Q/FAQ/17177 | CC-MAIN-2017-30 | refinedweb | 3,604 | 72.16 |
Authenticating API calls
In this example, we're showing how service accounts can be used to call the AdSense Platforms API to create and manage sub-accounts.
Step 1: Create a new Google Cloud project (or use an existing one)
If you have an existing Google Cloud project, feel free to use that. Otherwise, follow the guide below on setting up a new project:
Step 2: Create a service account
Using service accounts is the best way to create sub-accounts. Follow these steps to create your service account:
- Visit the service accounts page in Google Cloud
- You can either use an existing service account, or create a new one:
- Click on "+ Create service account"
- Fill in the "Service account details" form
- Steps 2 and 3 on the page (granting access to projects and users) are optional
Learn more about creating and managing service accounts.
Once the service account has been created, you need to send it to Google to get it added to your AdSense account. This is essential, as the service account needs to be allowed to access your AdSense account. You can either send it via your account manager, or email afp-support@google.com.
Step 3: Enable the AdSense Platform API for your Google Cloud project
The AdSense Platform API isn't discoverable, meaning you have to visit the following link to enable the it for your project:
Step 4: Create a service key
In order to generate access tokensfor use in the API calls, you need to create a service key. Follow these steps:
- Visit the service accounts page in Google Cloud
- In the actions column, for the service account you want to use to create sub-accounts, click
then click "Manage keys"
- Click on "Add key", then select "Create new key"
- Keep JSON selected as the key type, and click on "Create"
- A json file will be created and downloaded onto your computer. Keep this safe as it will be needed to authenticate the API calls
Learn more about creating and manaigng service account keys.
Step 5: Use Google's OAuth libraries to generate an access token
Google provides libraries to help generate access tokens, which can be used to make the API calls. Learn about how to generate credentials for service accounts here:
The scope for the AdSense Platforms API is as follows:
Python example
from google.auth.transport import requests from google.oauth2 import service_account CREDENTIAL_SCOPES = [""] CREDENTIALS_KEY_PATH = 'service.json' def get_service_account_token(): credentials = service_account.Credentials.from_service_account_file( CREDENTIALS_KEY_PATH, scopes=CREDENTIAL_SCOPES) credentials.refresh(requests.Request()) return credentials.token
At this stage, you're ready to start calling the APIs. As client libraries are not supported for the AdSense Platform API yet, direct HTTP requests have to be made instead. The access token should be included as a header in the HTTP request. The header should look like this:
Authorization: OAuth <credentials>
Examples are included in the API pages. | https://developers.google.com/adsense/platforms/reference/authenticating-api-calls | CC-MAIN-2022-05 | refinedweb | 483 | 57.61 |
I'm working with someone on an app and I cloned their branch from our git repository. They added a NuGet Package called Xam.Plugin.Connectivity.
However, it's not recognizing this in our utils page. It just says that the type or namespace "Plugin" could not be found. I made sure to get the most up to date version and I made sure that I added it to the entire solution. It works for him, but we can't seem to figure out why it is not working for me.
Does anyone know why this is the case?
This is the plugin
You need to add the nuget just search Plugin.Connectivity
Answers
To be more specific, I just mean it says type or namespace not found
This is the plugin
You need to add the nuget just search Plugin.Connectivity
Sorry, I was in a rush and wow I didn't realize how terrible of a question I asked.
I have added that package to my project, but it's still giving me the error. It's not recognizing it when I put:
Using Plugin.Connectivity;
OKAY, finally got it. I just kept uninstalling and installing it until it finally recognized it. | https://forums.xamarin.com/discussion/137229/vs-not-recognizing-plugin-connectivity | CC-MAIN-2020-45 | refinedweb | 204 | 74.39 |
OK. First of all, this will be my first post on this forum, I just found it and I'm pretty happy about it.OK. First of all, this will be my first post on this forum, I just found it and I'm pretty happy about it.
Short story of why I'm posting: I study information science at a University in Sweden and recently started a course in Java-programming(OOP).
We recieved an assignment bout a week ago and I've been trying to finish this project all week. Every time I start trying my brain freezes and I just dont know how to fix this problem.
The problem: We're making a highscore-list that sends strings to a external .txt-file. While doing so I need to sort the current data (in the textfile) with the new highscore I'm adding. Also, I need to erase anything that's not in the top 5 on the highscorelist.
Not sure if I've made myself clear, my english is not perfect so please ask if you dont understand what I need help with.
What I'm asking for: I really need some sort of guidance to finish this project, so help me please!
(I seem to have problems with indenting the code on the forums, that's why I've added pastie-links to all classes)
Just click the bold text.
The code:
MAIN:
MENU:MENU:Code:public class main { public static void main(String[] args) { menu menu = new menu(); menu.display(); } }
HIGHSCORELIST:HIGHSCORELIST:Code:import java.util.*; public class menu { highscores highscores = new highscores(); private Scanner input = new Scanner(System.in); public void display() { System.out.println("Make your selection!"); System.out.println("Select an option: \n" + " 1) Insert new score\n" + " 2) Print list\n" + " 3) Reset list \n" + " 4) Quit\n "); int selection = input.nextInt(); input.nextLine(); switch (selection) { case 1: highscores.enterScore(); break; case 2: highscores.printList(); break; case 3: highscores.resetList(); break; case 4: System.out.println("Exiting program..."); System.exit(1); default: System.out.println("Try Again!"); break; } } }
Code:import java.io.*; import java.util.*; public class highscores { public void enterScore() { Scanner scan = new Scanner(System.in); System.out.println("Enter the players name!: "); String name = scan.nextLine(); System.out.println("Enter the players score!: "); String score = scan.nextLine(); System.out.println("Player " + name + " got: " + score + " points. Great job!"); try { File file = new File("HighScores.txt"); PrintWriter writer; writer = new PrintWriter(file); writer.println("Player name: " +name +" - " +"Player score:" +score); writer.close(); } catch (Exception e) { System.out.println("Error #1"); } } private void sortScore() { } public void printList() { try { File file = new File("Highscores.txt"); Scanner scanner; if (file.exists()) { scanner = new Scanner(file); for (int i = 0; i < 5; ++i) { String line = scanner.nextLine(); System.out.println(line); } } else { System.out.println("Error #2"); } } catch (Exception e) { System.out.println("Error #1"); } } public void resetList() { try { File file = new File("Highscores.txt"); PrintWriter writer; writer = new PrintWriter(file); for (int i = 0; i < 5; ++i) { writer.println("Player name: x - Player score: x "); } writer.close(); } catch (Exception e) { System.out.println("Error #1"); } } } | http://www.codingforums.com/java-and-jsp/286986-sort-existing-data-new-data-external-txt-file.html?s=42d6187531d94d673e8c4e13493ce581 | CC-MAIN-2016-22 | refinedweb | 522 | 61.22 |
Scalaz Features for Everyday Usage Part 3: State Monad, Writer Monad, and Lenses
Scalaz Features for Everyday Usage Part 3: State Monad, Writer Monad, and Lenses
In Part 3 of this series Monads and Lenses are introduced with code samples, with a focus on stuff that is practical to use.
Join the DZone community and get the full member experience.Join For Free a set of operations
- State monad: Have an easy way of tracking state across a set of computations
- Lenses: Easily access deeply nested attributes and make copying case classes more convenient
We'll start with one of the additional monads provided by Scalaz.
Writer Monad
Basically each writer has a log and a return value. This way you can just write your clean code, and at a later point determine what you want to do with the logging (e.g validate it in a test, output it to the console, or to some log file). So for instance, we could use a writer to keep track of the operations we've executed to get to some specific value.
So let's look at the code and see how this thing works:
import scalaz._ import Scalaz._ object WriterSample extends App { // the left side can be any monoid. E.g something which support // concatenation and has an empty function: e.g. String, List, Set etc. type Result[T] = Writer[List[String], T] def doSomeAction() : Result[Int] = { // do the calculation to get a specific result val res = 10 // create a writer by using set res.set(List(s"Doing some action and returning res")) } def doingAnotherAction(b: Int) : Result[Int] = { // do the calculation to get a specific result val res = b * 2 // create a writer by using set res.set(List(s"Doing another action and multiplying $b with 2")) } def andTheFinalAction(b: Int) : Result[String] = { val res = s"bb:$b:bb" // create a writer by using set res.set(List(s"Final action is setting $b to a string")) } // returns a tuple (List, Int) println(doSomeAction().run) val combined = for { a <- doSomeAction() b <- doingAnotherAction(a) c <- andTheFinalAction(b) } yield c // Returns a tuple: (List, String) println(combined.run) }
In this sample we've got three operations that do something. In this case, they don't really do that much, but that doesn't matter. The main thing is that instead of returning a value, we return a Writer (note that we could have also created the writer in the for comprehension), by using the set function. When we call run on a Writer, we don't just get the result of the operation, but also the aggregated values collected by the Writer. So when we write
type Result[T] = Writer[List[String], T] def doSomeAction() : Result[Int] = { // do the calculation to get a specific result val res = 10 // create a writer by using set res.set(List(s"Doing some action and returning res")) } println(doSomeAction().run)
The result looks like this: (List(Doing some action and returning res),10). Not that exciting, but it becomes more interesting when we start using the writers in a for-comprehension.
val combined = for { a <- doSomeAction() b <- doingAnotherAction(a) c <- andTheFinalAction(b) } yield c // Returns a tuple: (List, String) println(combined.run)
When you look at the output from this you'll see something like
(List(Doing some action and returning res, Doing another action and multiplying 10 with 2, Final action is setting 20 to a string) ,bb:20:bb)
As you can see we've gathered up all the different log messages in a List[String] and the resulting tuple also contains the final calculated value.
When you don't want to add the Writer instantiation in your functions you can also just create the writers in a for-comprehension like so:
val combined2 = for { a <- doSomeAction1() set(" Executing Action 1 ") // A String is a monoid too b <- doSomeAction2(a) set(" Executing Action 2 ") c <- doSomeAction2(b) set(" Executing Action 3 ") // c <- WriterT.writer("bla", doSomeAction2(b)) // alternative construction } yield c println(combined2.run)
The result of this sample is this:
( Executing Action 1 Executing Action 2 Executing Action 3 ,5)
Cool right? For this sample we've only shown the basic Writer stuff, where the type is just a simple type. You can of course also create Writer instances from more complex types. An example of this can be found here.
State Monad
Another interesting monad is the State monad, which provides a convenient way to handle state that needs to be passed through a set of functions. You might need to keep track of results, need to pass some context around a set of functions, or require some (im)mutable context for another reason. With the (Reader monad) we already saw how you could inject some context into a function. That context, however, wasn't changeable. With the state monad, we're provided with a nice pattern we can use to pass a mutable context around in a safe and pure manner.
Let's look at some examples:
case class LeftOver(size: Int) /** A state transition, representing a function `S => (S, A)`. */ type Result[A] = State[LeftOver, A] def getFromState(a: Int): Result[Int] = { // do all kinds of computations State[LeftOver, Int] { // just return the amount of stuff we got from the state // and return the new state case x => (LeftOver(x.size - a), a) } } def addToState(a: Int): Result[Int] = { // do all kinds of computations State[LeftOver, Int] { // just return the amount of stuff we added to the state // and return the new state case x => (LeftOver(x.size + a), a) } } val res: Result[Int] = for { _ <- addToState(20) _ <- getFromState(5) _ <- getFromState(5) a <- getFromState(5) currentState <- get[LeftOver] // get the state at this moment manualState <- put[LeftOver](LeftOver(9000)) // set the state to some new value b <- getFromState(10) // and continue with the new state } yield { println(s"currenState: $currentState") a } // we start with state 10, and after processing we're left with 5 // without having to pass state around using implicits or something else println(res(LeftOver(10)))
As you can see, in each function we get the current context, make some changes to it, and return a tuple consisting of the new state and the value of the function. This way each function has access to the State, can return a new one, and returns this new state together with the function's value as a Tuple. When we run the above code we see the following
currenState: LeftOver(15) (LeftOver(8990),5)
As you can see each of the functions does something with the state. With the get[S] function we can get the value of the state at the current moment, and in this example we print that out. Besides using the get function, we can also set the state directly using the put function.
As you can see, a very nice and simple to use pattern and great when you need to pass some state around a set of functions.
Lenses
So enough with the monads for now, let's look at Lenses. With Lenses it is possible to easily (well easier than just copying case classes by hand) change values in nested object hierarchies. Lenses can do many things, but in this article I'll introduce just some basic features. First, the code:
import scalaz._ import Scalaz._ object LensesSample extends App { // crappy case model, lack of creativity case class Account(userName: String, person: Person) case class Person(firstName: String, lastName: String, address: List[Address], gender: Gender) case class Gender(gender: String) case class Address(street: String, number: Int, postalCode: PostalCode) case class PostalCode(numberPart: Int, textPart: String) val acc1 = Account("user123", Person("Jos", "Dirksen", List(Address("Street", 1, PostalCode(12,"ABC")), Address("Another", 2, PostalCode(21,"CDE"))), Gender("male"))) val acc2 = Account("user345", Person("Brigitte", "Rampelt", List(Address("Blaat", 31, PostalCode(67,"DEF")), Address("Foo", 12, PostalCode(45,"GHI"))), Gender("female"))) // when you now want to change something, say change the gender (just because we can) we need to start copying stuff val acc1Copy = acc1.copy( person = acc1.person.copy( gender = Gender("something") ) )
In this sample we defined a couple of case classes, and want to change a single value. For case classes this means that we have to start nesting a set of copy operations to correctly change one of the nested values. While this can be done for simple hierarchies, it quickly becomes cumbersome. With lenses you're offered a mechanism to do this in a composable way:
val genderLens = Lens.lensu[Account, Gender]( (account, gender) => account.copy(person = account.person.copy(gender = gender)), (account) => account.person.gender ) // and with a lens we can now directly get the gender val updated = genderLens.set(acc1, Gender("Blaat")) println(updated) #Output: Account(user123,Person(Jos,Dirksen,List(Address(Street,1,PostalCode(12,ABC)), Address(Another,2,PostalCode(21,CDE))),Gender(Blaat)))
So we define a Lens, which can change a specific value in the hierarchy. With this lens we can now directly get or set a value in a nested hierarchy. We can also create a lens which modifies a value and returns the modified object in one go by using the =>= operator.
// we can use our base lens to create a modify lens val toBlaBlaLens = genderLens =>= (_ => Gender("blabla")) println(toBlaBlaLens(acc1)) # Output: Account(user123,Person(Jos,Dirksen,List(Address(Street,1,PostalCode(12,ABC)), Address(Another,2,PostalCode(21,CDE))),Gender(blabla))) val existingGender = genderLens.get(acc1) println(existingGender) # Output: Gender(male)
And we can use the >=> and the <=< operators to combine lenses together. For example in the following code sample, we create to separate lenses which are then combined and executed:
// First create a lens that returns a person val personLens = Lens.lensu[Account, Person]( (account, person) => account.copy(person = person), (account) => account.person ) // get the person lastname val lastNameLens = Lens.lensu[Person, String]( (person, lastName) => person.copy(lastName = lastName), (person) => person.lastName ) // Get the person, then get the lastname, and then set the lastname to // new lastname val combined = (personLens >=> lastNameLens) =>= (_ => "New LastName") println(combined(acc1)) # Output: Account(user123,Person(Jos,New LastName,List(Address(Street,1,PostalCode(12,ABC)), Address(Another,2,PostalCode(21,CDE))),Gender(male)))
Conclusion
There are still two subjects I want to write about, and those are Validations and Free monads. In the next article in this series I'll show how you can use ValidationNEL for validations. Free Monads, however, don't really fall in the category of everyday usage, so I'll spend a couple of other articles on that in the future.
Published at DZone with permission of Jos Dirksen , DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/scalaz-features-for-everyday-usage-part-3-state-mo?fromrel=true | CC-MAIN-2019-35 | refinedweb | 1,806 | 51.18 |
MS Dynamics CRM 3.0
and probably there are other obscure ways, but the intended way is obviously f.write
nsz
Diez
> You should look at the mmap-module.
However, numpy has a properly working memory mapped array class, numpy.memmap. It can be used for fast file access. Numpy also has a wide range of datatypes that are efficient for working with binary data (e.g. an uint8 type for bytes), and a record array for working with structured binary data. This makes numpy very attractive when working with binary data files.
Get the latest numpy here:.
Let us say you want to memory map an 23 bit RGB image of 640 x 480 pixels, located at an offset of 4096 bytes into the file 'myfile.dat'. Here is how numpy could do it:
import numpy
byte = numpy.uint8 desc = numpy.dtype({'names':['r','g','b'],'formats':[byte,byte,byte]}) mm = numpy.memmap('myfile.dat', dtype=desc, offset=4096, shape=(480,640), order='C') red = mm['r'] green = mm['g'] blue = mm['b']
Now you can access the RGB values simply by slicing the arrays red, green, and blue. To set the R value of every other horizontal line to 0, you could simply write
red[::2,:] = 0
As always when working with memory mapped files, the changes are not committed before the memory mapping is synchronized with the file system. Thus, call
mm.sync()
when you want the actual write process to start.
The memory mapping will be closed when it is garbage collected (typically when the reference count falls to zero) or when you call mm.close().
> byte = numpy.uint8 > desc = numpy.dtype({'names':['r','g','b'],'formats':[byte,byte,byte]}) > mm = numpy.memmap('myfile.dat', dtype=desc, offset=4096, > shape=(480,640), order='C') > red = mm['r'] > green = mm['g'] > blue = mm['b']
# reading from file to raw string rstr = mm.tostring()
# writing raw string to file mm[:] = numpy.fromstring(rstr, dtype=numpy.uint8) mm.sync()
mmap must be fixed. | http://www.megasolutions.net/python/writing-to-a-file-78682.aspx | CC-MAIN-2014-52 | refinedweb | 334 | 67.76 |
Barcode Software
generate qr code using c#
Contents in visual C#.net
Maker QR Code JIS X 0510 in visual C#.net Contents
5 Solutions Overview and Concepts
using barcode integrating for .net control to generate, create barcode image in .net applications. syntax
BusinessRefinery.com/barcode
use rdlc reports net barcodes implementation to connect bar code on .net c# webpage
BusinessRefinery.com/ barcodes
Feature-Oriented Integration
generate barcode in asp.net using c#
use visual .net barcodes creation to encode barcodes on c# step
BusinessRefinery.com/ bar code
generate, create barcode additional none on visual c# projects
BusinessRefinery.com/ barcodes
Figure 2-8 select it.
generate, create barcode special none in java projects
BusinessRefinery.com/ bar code
use microsoft word barcode integrating to attach bar code in microsoft word way
BusinessRefinery.com/barcode
Optional Boolean. Indicates whether, to be embedded or published correctly, the le depends on another le. Optional string. The name of the le that was created by any le generator that was run on this item. Optional string. The namespace in which any le generator that runs on this item should create code.
rdlc qr code
using browser rdlc report to generate quick response code in asp.net web,windows application
BusinessRefinery.com/QR Code 2d barcode
qr-codes size tool in excel microsoft
BusinessRefinery.com/qr codes
ASP.NET provides a set of controls, classes, and management tools for authenticating users with web forms and storing user information in a database. These controls allow you to track, manage, and authenticate users without creating your own schema, relying on the Active Directory user database, or managing users by other means. Prior to version 2.0 of the Microsoft .NET Framework, custom user authentication required creation from scratch of many complex components, such as user database schemas, logon pages, password management pages, and user administration. Creating these components yourself is timeconsuming and risky to your application s security. ASP.NET helps you minimize this risk.
crystal reports 2011 qr code
using api visual .net crystal report to build denso qr bar code with asp.net web,windows application
BusinessRefinery.com/QR Code
to build quick response code and qr codes data, size, image with visual basic.net barcode sdk character
BusinessRefinery.com/qr bidimensional barcode
Figure 14-5: The sample page now shows filtered data from the Customers table. The XML data has been carried using a hidden field. Note Another key technique you can use to refresh the page using clientside data leverages DHTML. Although this approach can be effective and powerful, it doesn't combine well with managed code. DHTML refers to the page object model and is designed for scripting. The page object model is exposed as a suite of COM objects, and driving it from within managed code is certainly possible but not particularly easy.
qr code jis x 0510 data resolution on c#.net
BusinessRefinery.com/qr bidimensional barcode
to add qr barcode and qr-code data, size, image with java barcode sdk tiff
BusinessRefinery.com/QR Code 2d barcode
Producing an Online Presentation
generate, create pdf-417 2d barcode text none in .net projects
BusinessRefinery.com/PDF 417
crystal reports data matrix
using preview visual .net to use barcode data matrix on asp.net web,windows application
BusinessRefinery.com/DataMatrix
Threading
.net code 39 reader
Using Barcode recognizer for padding .net vs 2010 Control to read, scan read, scan image in .net vs 2010 applications.
BusinessRefinery.com/39 barcode
.net code 128 reader
Using Barcode recognizer for use VS .NET Control to read, scan read, scan image in VS .NET applications.
BusinessRefinery.com/Code 128 Code Set A
Adding Graphics to the Call to Action and Key Point Slides
winforms code 39
generate, create code 39 extended allocate none in .net projects
BusinessRefinery.com/3 of 9
crystal reports pdf 417
use .net vs 2010 pdf417 generating to access pdf 417 for .net details
BusinessRefinery.com/pdf417 2d barcode
D 1000 A 2000 C 3000 index order scan
ssrs code 39
generate, create ansi/aim code 39 result none for .net projects
BusinessRefinery.com/Code 39 Full ASCII
using barcode implement for microsoft excel control to generate, create data matrix barcode image in microsoft excel applications. digit
BusinessRefinery.com/data matrix barcodes
Visual Basic Example of a Complicated Test Moved Into a Boolean Function, With New Intermediate Variables To Make the Test Clearer
C11620245.fm Page 381 Wednesday, June 9, 2004 4:49 PM
Qt Quick takes a radically different approach to user interface development than you ve seen previously in C++ with Qt. More like HTML than C++, Qt Quick uses QML, a JavaScript-like language to define your user interface. QML is a declarative language instead of writing imperative statements that do things, you write declarations of your user interface objects. While at the top level both environments are inherently objectoriented, how you work at the level of individual statements is very different. In C++ with Qt, we might draw a new rectangle using pseudocode like this:
Working with the SharePoint data in Excel is similar to working with an Excel list that s been previously published in SharePoint. You don t have to be online to edit the SharePoint list data, even while online users are working with the SharePoint list. You can edit the list and add rows (records), then save the workbook locally until you re back on the network and ready to synchronize your offline data with the SharePoint list. If you have a group of users who need to update the same data set, a SharePoint list allows each user to export and work with the data independently and simultaneously. As you ll see, SharePoint manages any conflicts that occur when two or more users make different changes to the same data element. Many users prefer to work with their SharePoint lists offline in Excel: Managers: During budget crunch time when they re finalizing line items and many users need to update different rows in the same list Sales people: Logging sales leads and customer information on the road instead of waiting until they return to the office Developers: Recording actions taken to resolve items in an Issues log Information workers: Editing lists during peak hours on a slow network Frequent flyers and other travelers: Working with list data when they don t have access to their corporate SharePoint site
Shares and Permissions
3
The NavigationService Object
More QR Code on C#
how to generate barcode in asp.net c#: xxii in .net C# Printer qr bidimensional barcode in .net C# xxii
qr code c#.net generator sdk: More INfo in C# Receive QR Code JIS X 0510 in C# More INfo
qr code c#.net generator sdk: POST in visual C# Writer qr-codes in visual C# POST
c# qr code generator: Lesson 1: Understanding Web Communications ChAPTER 1 13 in C#.net Encoder qr codes in C#.net Lesson 1: Understanding Web Communications ChAPTER 1 13
c# qr code generator: Website Solution Files in visual C# Use qr codes in visual C# Website Solution Files
c# qr code generator library: Website Compilation in visual C# Printer QR-Code in visual C# Website Compilation
c# qr code generator library: Global machine %SystemRoot%\Microsoft.NET\Framework\ versionNumber \CONFIG\Machine.config in .net C# Attach QRCode in .net C# Global machine %SystemRoot%\Microsoft.NET\Framework\ versionNumber \CONFIG\Machine.config
c# qr code generator library: real World in visual C# Generate QR Code in visual C# real World
c# qr code generator code project: pages masterPageFile="MySite.Master" / in visual C#.net Draw qr codes in visual C#.net pages masterPageFile="MySite.Master" /
c# qr code generator code project: FigURE 2-3 Creating the Professional.master page. in .net C# Include QRCode in .net C# FigURE 2-3 Creating the Professional.master page.
c# itextsharp create barcode: Lesson 2: Using Themes in c sharp Generating qr bidimensional barcode in c sharp Lesson 2: Using Themes
c# qr code generator code project: Lesson 2: Using Themes in visual C#.net Integration qr codes in visual C#.net Lesson 2: Using Themes
c# qr code generator code project: Using Master Pages, Themes, and Caching in C# Generating QR Code in C# Using Master Pages, Themes, and Caching
c# qr code generator code project: Using Master Pages, Themes, and Caching in visual C# Display QR Code JIS X 0510 in visual C# Using Master Pages, Themes, and Caching
c# qr code generator code project: Case Scenario 2: Improving the Performance of a Public Website in visual C#.net Drawer qr codes in visual C#.net Case Scenario 2: Improving the Performance of a Public Website
asp.net barcode: Finally, ASP.NET processes requests through the HttpApplication pipeline. This pipeline in visual C#.net Generate qr barcode in visual C#.net Finally, ASP.NET processes requests through the HttpApplication pipeline. This pipeline
c# qr code generator code project: Handling Events and Managing State in C#.net Generator QRCode in C#.net Handling Events and Managing State
c# qr code generator code project: Handling Events and Managing State in C#.net Creator qr-codes in C#.net Handling Events and Managing State
c# create and print barcode: Lesson 2: Using Client-Side State Management ChAPTER 3 123 in C#.net Printing QR in C#.net Lesson 2: Using Client-Side State Management ChAPTER 3 123
c# qr code generator code project: Initial page request Respond with cookie 3 Client 4 Pass cookie Pass cookie Web server 2 in C# Display qrcode in C# Initial page request Respond with cookie 3 Client 4 Pass cookie Pass cookie Web server 2
Articles you may be interested
c# ean 13 check: de Complete in visual C# Develop EAN-13 in visual C# de Complete
visual basic 6.0 barcode generator: Current, Next Pairs in .NET Integration QR Code ISO/IEC18004 in .NET Current, Next Pairs
vb.net print barcode: The DefaultParameterValue and Optional Attributes in .NET Access QR Code in .NET The DefaultParameterValue and Optional Attributes
c# qr code reader pdf: Structured Exception Handling in C# Make datamatrix 2d barcode in C# Structured Exception Handling
vb.net create barcode image: Applying Transformations in VB.NET Produce bar code 39 in VB.NET Applying Transformations
create barcode in asp.net c#: Restrict Project in visual C#.net Include Quick Response Code in visual C#.net Restrict Project
vb.net barcode scanner programming: DATA BINDING AND SILVERLIGHT LIST CONTROLS in C#.net Deploy PDF 417 in C#.net DATA BINDING AND SILVERLIGHT LIST CONTROLS
barcode generator in vb.net: TESTING in Java Generate barcode pdf417 in Java TESTING
zxing barcode reader c#: Try It Out: Declaring and Working with Variables in visual C#.net Deploy Data Matrix barcode in visual C#.net Try It Out: Declaring and Working with Variables
c# barcode scanner tutorial: TRIGGERS in .net C# Creator Data Matrix ECC200 in .net C# TRIGGERS
c# print barcode: ch a pter t wo in vb Build 39 barcode in vb ch a pter t wo
visual basic barcode program: Figure 2-4. Compilation in .NET Implementation QRCode in .NET Figure 2-4. Compilation
.net qr code reader: Thread Basics in .NET Draw QR Code JIS X 0510 in .NET Thread Basics
barcode visual basic: Inside Microsoft SQL Server 2008: T-SQL Programming in .NET Development QR-Code in .NET Inside Microsoft SQL Server 2008: T-SQL Programming
create barcode in asp.net c#: Inside Microsoft SQL Server 2008: T-SQL Querying in c sharp Receive Quick Response Code in c sharp Inside Microsoft SQL Server 2008: T-SQL Querying
visual basic barcode program: Deleting Data in .NET Creation QR Code JIS X 0510 in .NET Deleting Data
barcode generator github c#: INTRODUCTION TO SQL, SQL*PLUS, AND SQL DEVELOPER in Java Generate ECC200 in Java INTRODUCTION TO SQL, SQL*PLUS, AND SQL DEVELOPER
qr code generator c# code project: RDoc Techniques in visual C#.net Assign QRCode in visual C#.net RDoc Techniques
c# zxing qr code generator: Forcing a Join Strategy in .net C# Creator qrcode in .net C# Forcing a Join Strategy
code 39 generator c#: CREATING THE PRODUCT CATALOG: PART II in visual C# Get USS Code 39 in visual C# CREATING THE PRODUCT CATALOG: PART II | http://www.businessrefinery.com/yc1/254/8/ | CC-MAIN-2022-40 | refinedweb | 2,044 | 57.67 |
NAME
g_provider_by_name - find GEOM provider with given name
SYNOPSIS
#include <geom/geom.h> struct g_provider * g_provider_by_name(const char *name);
DESCRIPTION
The g_provider_by_name() function searches for a provider called name and returns the structure g_provider bound to it. Argument name should be a name, not a full path (i.e., “da0”, instead of “/dev/da0”).
RESTRICTIONS/CONDITIONS
The topology lock has to be held.
RETURN VALUES
The g_provider_by_name() function returns a pointer to the provider called name or NULL if there is no such provider.
SEE ALSO
geom(4), DECLARE_GEOM_CLASS(9), g_access(9), g_attach(9), g_bio(9), g_consumer(9), g_data(9), g_event(9), g_geom(9), g_provider(9), g_wither_geom(9)
AUTHORS
This manual page was written by Pawel Jakub Dawidek 〈pjd@FreeBSD.org〉. | http://manpages.ubuntu.com/manpages/lucid/en/man9/g_provider_by_name.9freebsd.html | CC-MAIN-2014-15 | refinedweb | 121 | 56.66 |
Source code: Lib/bisect.py
This insertion point for x in a to maintain sorted order. The parameters lo and hi may be used to specify a subset of the list which should be considered; by default the entire list is used. If x is already present in a, the insertion point will be before (to the left of) any existing entries. The return value is suitable for use as the first parameter to list.insert() assuming that a is already sorted.
The returned insertion point i partitions the array a into two halves so that all(val < x for val in a[lo:i]) for the left side and all(val >= x for val in a[i:hi]) for the right side.
Similar to bisect_left(), but returns an insertion point which comes after (to the right of) any existing entries of x in a.
The returned insertion point i partitions the array a into two halves so that all(val <= x for val in a[lo:i]) for the left side and all(val > x for val in a[i:hi]) for the right side.
Insert x in a in sorted order. This is equivalent to a.insert(bisect.bisect_left(a, x, lo, hi), x) assuming that a is already sorted. Keep in mind that the O(log n) search is dominated by the slow O(n) insertion step.
Similar to insort_left(), but inserting x in a after any existing entries of x.
See also
SortedCollection recipe that uses bisect to build a full-featured collection class with straight-forward search methods and support for a key-function. The keys are precomputed to save unnecessary calls to the key function during searches.
The above bisect() functions are useful for finding insertion points but can be tricky or awkward to use for common searching tasks. The following five functions show how to transform them into the standard lookups for sorted lists:
def index(a, x): 'Locate the leftmost value exactly equal to x' i = bisect_left(a, x) if i != len(a) and a[i] == x: return i raise ValueError def find_lt(a, x): 'Find rightmost value less than x' i = bisect_left(a, x) if i: return a[i-1] raise ValueError def find_le(a, x): 'Find rightmost value less than or equal to x' i = bisect_right(a, x) if i: return a[i-1] raise ValueError def find_gt(a, x): 'Find leftmost value greater than x' i = bisect_right(a, x) if i != len(a): return a[i] raise ValueError def find_ge(a, x): 'Find leftmost item greater than or equal to x' i = bisect_left(a, x) if i != len(a): return a[i] raise ValueError
The bisect() function can be useful for numeric table lookups. This example uses bisect() to look up a letter grade for an exam score (say) based on a set of ordered numeric breakpoints: 90 and up is an ‘A’, 80 to 89 is a ‘B’, and so on:
>>> def grade(score, breakpoints=[60, 70, 80, 90], grades='FDCBA'): ... i = bisect(breakpoints, score) ... return grades[i] ... >>> [grade(score) for score in [33, 99, 77, 70, 89, 90, 100]] ['F', 'A', 'C', 'C', 'B', 'A', 'A']
Unlike the sorted() function, it does not make sense for the bisect() functions to have key or reversed arguments because that would lead to an inefficient) | http://docs.python.org/dev/library/bisect.html | CC-MAIN-2013-48 | refinedweb | 555 | 58.62 |
List performance consolidation thread
I'm building a music control application, consisting of a dedicated LAN-only HTTP server, and Sencha-based clients. The nature of this application means that I must deal gracefully with lists of data potentially in the 10,000 to 100,000 item range, with a high degree of interactivity. There have been several threads commenting that list performance, especially scrolling performance, degrades badly as the number of items goes up. So I'm starting this thread as a way of consolidating tips and strategies to dealing with large lists.
Two general approaches to take are 1) minimize the number of items displayed, and/or 2) investigate alternative scrolling algorithms.
The first approach is to minimize the number of items actually displayed in the list at any one time. My experiments show that scrolling performance is almost entirely a function of the number of nodes actually displayed in the scrolling div. So if we hide the list elements that are well off the top or bottom of the list viewport, and then replace the space those items take with a blank spacer div, we'll maintain scrolling performance for the items actually in front of the user. Of course, this means we have to update the items that are hidden/displayed, and the size of those top and bottom spacers, when the list gets scrolled.
So the markup structure inside the list component will look like:
Code:
<div class="top-list-spacer" height="(number of hidden items at top) * (height of list item)"></div> <div class="x-list-item" style="display:none">/*hidden list item*/</div> /*repeat hidden list items at top of list*/ <div class="x-list-item" style="display:block">/*visible list item*/</div> /*repeat visible list items*/ <div class="x-list-item" style="display:none">/*hidden list item*/</div> /*repeat hidden list items at top of list*/ <div class="bottom-list-spacer" height="(number of hidden items at bottom) * (height of list item)"></div>
Code:
var myList = new Ext.List({...}); myList.mon(myList.scroller, { scrollend: handleScrollEnd, scope:myList } ); var handleScrollEnd = function() { // Note: "this" inside this function refers to my list component // Compute how many list items are past the top of the list viewport ................. // Hide items that are too far past top of list viewport ................. // Set height of top spacer div to match height of top list items just hidden ................. // Compute how many list items are past the bottom of the list viewport ................. // Hide items that are too far past the bottom of list viewport ................. // Set height of bottom spacer div to match height of bottom list items just hidden ................. }
If people are interested, and willing to contribute their own ideas and feedback, I'm willing to spend a couple of days creating more generic working versions of this kind of list extension.
The second strategy, investigating other kinds of scrolling algorithms, is not something I've done yet. I know the Sencha team has already put in quite a bit of time here. But I'd also say, check out the "List Performance" example on jquerymobile.com:
The rendering is very slow. The scrolling is very responsive, but very unnatural. There are obviously some tradeoffs around this, and some more investigation might lead to a set of tradeoffs that work better for large lists.
EDIT: I screwed up. The above jquery mobile example does not do local scrolling at all - the "scrolling" is simply native scrolling of the entire document, not an element within the document. So there is very likely no magic to be found in changing the scrolling algorithm.
Very interesting. Today, while waiting at the train station, I've been chewing this very issue, and I had the same idea (no tests yet, just been thinking). Do you have some working code?
One other approach to reduce load time is dynamically rendering items. It's helping a lot, but not enough for my taste. See. What I'm doing there is render empty items of a fixed height, but render every item. Then render the full content in a sliding window of items while scrolling. It greatly helped performance, but still won't scale to > 1000 items, as even with a simplified item structure the DOM becomes too large.
Maybe some of my code might become useful to your approach too.HTC Desire - Android: 2.2 - Kernel: 2.6.32.15-gf5a401c - Build: 2.29.405.2 - WebKit: 3.1
mherger,
Yes, I'm working right now on a custom component grouped list that dynamically renders. If everything goes well, it should be ready in a couple of hours. We can exchange code when I post it up
As you have found, I've found that simply rendering empty items doesn't help the performance as much as I'd like. Instead, I'm using a single "proxy" item to represent a group that isn't rendered - you'll see what I mean when I post my extension later.
Scott
Hi MahlerFreak,
I have a list component with about 1000 items . Can you help me to implement the above things ?
iGlossary.views.AlphabeticList = Ext.extend(Ext.List, {
singleSelect: true,
grouped: true,
indexBar: true,
onItemDisclosure:true,
initComponent: function() {
this.store = iGlossary.stores.AlphabeticItems;
this.itemTpl = '{term}';
iGlossary.views.AlphabeticList.superclass.initComponent.call(this);
},
show: function(){
this.scroller.on('scrollend', function (comp, offset) {
});
},
hide: function(){
this.scroller.un('scrollend', function (comp, offset) {
});
}
});
Shiju
I am very much curious on MahlerFreak's solution and hope that it can be applied to SenchaTouch-Lists as well... Good luck with that!
It is great, that you will share the fruits of your brain with the community!
OK, here is a custom grouped list that manages scroll performance by rendering only one group of items at a time. The other groups are represented by a single proxy item. If you scroll or use the index bar to go to a new group, that group becomes the rendered group. If you're scrolling and tap on one of the proxy groups to stop the scroll, it becomes the rendered group. You can obviously scrub up and down on the index bar to get to any one particular group.
The performance with the provided example is quite good on iPad, OK on iPod touch, not too great on Samsung Galaxy S, brilliant on desktop Chrome
The code below is quite functional, but still has some things to be finished (I think item selection probably won't work correctly as is - it will give the wrong record/index) and some other bugs to work out. But I'm not going to spend much more time on this until I get some feedback from folks - does this get enough performance? Is the user interaction acceptable, or not? It certainly is a non-standard way to interact with lists on these devices, and may be too confusing. Be completely honest; I don't want to put more work into something people won't use.
The code for the custom list component is below. It is quite copiously commented, so you can see what's
going on.
EDIT: updated the code to reflect bug fixes in rev 2
Code:
Ext.namespace('Ext.ux.touch'); Ext.ux.touch.GroupRevealList = Ext.extend(Ext.List,{ initComponent: function() { // This is a grouped list only, regardless of config options this.grouped = true; // We're going to replace the default group template with our own special version. // It's not much more complicated than the original. Only the list items inside // the group specified by the "activeRenderGroup" member of the template will be // actually be rendered; the rest of the groups are each represented by a single "proxy" // item. this.groupTpl = [ '<tpl for=".">', '<tpl if="this.shouldRenderItems(group)">', '{[this.incRenderCount(values.nitems)]}', '<div class="x-list-group x-group-{id}">', '<h3 class="x-list-header">{group}</h3>', '<div class="x-list-group-items">', '{items}', '</div>', '</div>', '</tpl>', '<tpl if="!this.itemsRendered">', '{[this.incStartCount(values.group,values.nitems)]}', '<div class="x-list-group x-group-{id}">', '<h3 class="x-list-header">{group}</h3>', '<div class="x-list-group-items x-list-group-proxy" groupProxyId="{group}">', '{[values.group + "..."]}', '</div>', '</div>', '</tpl>', '</tpl>' ]; // Save the super class in an object variable. This saves both typing, and // two levels of object dereferencing when calling superclass functions. this.mySuper = Ext.ux.touch.GroupRevealList.superclass; // superclass constructor will now set up the render template including our // new group template. It will also create our index bar, if any. this.mySuper.initComponent.call(this); // add a member to this object to save the number of items not rendered // before the start of rendered items. This offset is needed to make // selection work - see below. this.renderIndexOffset = 0; // Create the active group member in our render template // Add some new data members and functions to our template, // to be used by our new group template above. It is simpler // to add them here, with the template object otherwise complete, than // to figure out how to add them to the complicated process of // building this template in DataView and List. this.tpl.activeRenderGroup = "A"; this.tpl.nRendered = 0; this.tpl.startRenderIndex = 0; this.tpl.itemsRendered = false; this.tpl.shouldRenderItems = function(group) { this.itemsRendered = (group >= this.activeRenderGroup && this.nRendered < 20); return this.itemsRendered; } this.tpl.incRenderCount = function(nitems) { this.nRendered += nitems; return ""; } this.tpl.incStartCount = function(group,nitems) { if ( group < this.activeRenderGroup ) { this.startRenderIndex += nitems; } return ""; } this.tpl.prepareNewRender = function(group) { this.activeRenderGroup = group; this.nRendered = 0; this.startRenderIndex = 0; this.itemsRendered = false; } }, // we override this purely to be able to collect the number of children in a group as part of // the data passed to our template. Otherwise, this is an exact copy of List:collectData collectData : function(records, startIndex) { if (!this.grouped) { return this.mySuper.collectData.call(this, records, startIndex); } var results = [], groups = this.store.getGroups(), ln = groups.length, children, cln, c, group, i; for (i = 0, ln = groups.length; i < ln; i++) { group = groups[i]; children = group.children; for (c = 0, cln = children.length; c < cln; c++) { children[c] = children[c].data; } results.push({ group: group.name, id: this.getGroupId(group), // This is our mod nitems: cln, items: this.listItemTpl.apply(children) }); } return results; }, // override of inherited list function - need to monitor scrollend // we need to update our list rendering any time scrolling ends, or // any time the user has tapped or scrubbed our index bar. initEvents: function() { // call base class method this.mySuper.initEvents.call(this); // monitor scroll end this.mon(this.scroller, { scrollend: this.onScrollEnd, scope: this } ); // monitoring the index bar is a bit tricky. We don't want to re-render // every time the index changes, only when the listener ends his touch // on the desired index. What makes this tricky is that the index bar // itself listens for touchend, and stops the event from reaching us. // So we create a sequence instead, and tell the index bar to call that // sequence on touchend. if ( this.indexBar ) { var bar = this.indexBar; // remove existing touchend listener on index bar bar.mun(bar.el,'touchend',bar.onTouchEnd); // create function sequence bar.onTouchEnd = Ext.createSequence(bar.onTouchEnd,this.onLastIndex,this); // and re-establish the touchend listener on the index bar bar.mon(bar.el, { touchend: bar.onTouchEnd, scope: bar } ); } }, // overriden method - if tap was on a group proxy element, make that the // current rendered group. onTap: function(e) { var proxyEl, groupId; // clear any pending updates if ( this.scrollEndTimer ) { clearTimeout(this.scrollEndTimer); this.scrollEndTimer = null; } // check if this is tap on proxy group element. If so, make that // group the current rendered group. proxyEl = e.getTarget('.x-list-group-proxy', this.getTargetEl()); if ( proxyEl ) { groupId = proxyEl.getAttribute('groupProxyId'); if ( groupId ) { this.setRenderGroup(groupId); } } else { this.mySuper.onTap.apply(this,arguments); } }, // Override of inherited Ext:List function. If the user has suddenly flicked the scroller // again, cancel any pending render of the new group. onScrollStart: function() { if ( this.scrollEndTimer ) { clearTimeout(this.scrollEndTimer); this.scrollEndTimer = null; } this.mySuper.onScrollStart.call(this); }, // This gets called when the scroll ends. Must check if there is a new group // visible, and render it if so. We delay execution of this, so that the // user can flick the list some more, or change scroll direction. onScrollEnd: function() { if ( this.scrollEndTimer ) { clearTimeout(this.scrollEndTimer); this.scrollEndTimer = null; } this.scrollEndTimer = Ext.defer(this.deferredScrollEnd,300,this); }, // do the acutal work of rendering a new group on scrollend. deferredScrollEnd: function() { // null out the scroll timer - it is no longer valid since we have already gotten here. if ( this.scrollEndTimer ) { this.scrollEndTimer = null; } // try/catch/finally block to make sure any errors don't kill our scroll events // permanently. try { // ignore scrolling events while rendering is happening this.scroller.suspendEvents(); // scoller position var scrollPos = this.scroller.getOffset(); // height of list viewport var vpHeight = this.getTargetEl().getHeight(); // what are the group(s) in view? var closest = this.getClosestGroups(scrollPos); // which group should we render? The one that takes up more of the // screen ... var group; if ( closest.current.offset >= scrollPos.y ) { group = closest.current; } else if ( closest.next && (closest.next.offset - scrollPos.y) < (vpHeight/2) ) { group = closest.next; } else { group = closest.current; } // get the id of the group to update var groupId = group.header.getHTML(); // render the group to update (does nothing if group is the same as previous) this.setRenderGroup(groupId); } catch(e) { this.scroller.resumeEvents(); throw(e); } finally { this.scroller.resumeEvents(); } }, // override of inherited Ext:List function onIndex: function(record,target,index) { // save the last index record, to use when the user ends touch // on the index bar this.lastIndexRecord = record; // call overridden base class method this.mySuper.onIndex.apply(this,arguments); }, // this gets called after the user has finished tapping or scrubbing on the index bar onLastIndex: function() { if ( this.lastIndexRecord ) { // see which group we've just indexed to var groupId = this.lastIndexRecord.get('key').toUpperCase(); this.lastIndexRecord = null; // render the group (does nothing if group is the same as previous) this.setRenderGroup(groupId); } }, // We need to create our own, overridden version of updateIndexes (from DataView) // Our version corrects the index for all of the items not rendered. updateIndexes : function(startIndex, endIndex){ var indexOffset = this.tpl.startRenderIndex || 0; var ns = this.all.elements; startIndex = startIndex || 0; endIndex = endIndex || ((endIndex === 0) ? 0 : (ns.length - 1)); for(var i = startIndex; i <= endIndex; i++){ ns[i].viewIndex = i + indexOffset; } }, // Unfortunately, we also need to override getNode, which needs to account // for the offset in render index also getNode : function(nodeInfo) { if ( nodeInfo instanceof Ext.data.Model ) { var idx = this.store.indexOf(nodeInfo); return this.all.elements[idx - this.renderIndexOffset]; } return this.mySuper.getNode.call(this,nodeInfo); }, // As the name says, set the active group using the group string as identifier setActiveGroupById: function(groupId) { var groups = this.groupOffsets; var ngrp = groups.length; var i, group; for ( i = 0; i < ngrp; i++ ) { group = groups[i]; if ( group.header.getHTML() == groupId ) break; } if ( group ) this.setActiveGroup(group); }, // Do the actual work of re-rendering a new active group. Trivially simple at // this point. Returns true if new rendering was done, false otherwise. setRenderGroup: function(/*string*/groupId) { if ( groupId != this.tpl.activeRenderGroup ) { this.tpl.prepareNewRender(groupId); // calling refresh() causes the list to be re-rendered, with the new // group set. refresh() also does some extra work we don't need, but // it will do for now ... this.refresh(); // call updateOffsets and updateBoundary to compensate for the new dimensions // of the group div, and the overall list div this.updateOffsets(); this.scroller.updateBoundary(); // scroll to the top of the newly rendered group this.scrollToGroup(groupId); // set the active group this.setActiveGroupById(groupId); // save the render index offset this.renderIndexOffset = this.tpl.startRenderIndex; return true; } return false; }, // Scroll to the top of the specified group. Code stolen from List:onIndex function. scrollToGroup: function(/*string*/groupId) { var closest = this.getTargetEl().down('.x-group-' + groupId.toLowerCase()); if (closest) { this.scroller.scrollTo({x: 0, y: closest.getOffsetsTo(this.scrollEl)[1]}, false, null, true); } } });
as follows:
Code:
.x-list-group-proxy { height: 500px; text-align:center; line-height:500px; font-size:5em; }
Hi MahlerFreak,
First of all , i want to thank you for your great effort.
I have just integrated this with my project.it is working good with high performance.
There are some bugs we to fix,
As you told when i click an item it fetches a wrong item
When we scroll to another group we have seen the detailed group but in the bottom we again see the same group
Shiju
Thanks a lot MahlerFreak! You've taken a slightly different approach than I would have, but your code reveals some great ideas. And I learned about sequences :-).
I don't want to restrict myself to lists with index bars, as I don't always have them. Maybe I'll just add a numerical index bar at some point :-). But for now my simplification is a unique item height, which allows to do the buffer calculation as you suggested it in your initial posting. And I'm hiding the group headings, as otherwise the calculation becomes a bit more complex too.
I'll spend some time trying to figure out what exactly you're doing and how I could re-use some of this in my own solution. I'll let you know. Thanks a lot!HTC Desire - Android: 2.2 - Kernel: 2.6.32.15-gf5a401c - Build: 2.29.405.2 - WebKit: 3.1
I tested your zip on my HTC Desire HD - performance is amazing - no screen-freezing anymore...
On the iPad it works good as well, but on an iPod (2nd Generation, iOS 4) normal scrolling (without using the index) does not work good at all: the list does not move when you move your finger away from the screen...
I'll try to take a look at the code within the next days - right know it seems to be magic mysteries to me... Do you think it can be adapted for list without index bar and with scroll-bar instead (the problem here probably is that you do not know the length of the total list, which is crucial for the length of the scroll-bar).
Yes, of course it is possible to not show the index bar: 'indexBar: false'
Why does every group, that is rendered with content ('this.shouldRenderItems(group)') appear underneath the rendered group again ('!this.shouldRenderItems(group)')? - Is there a simple way to fix this 'bug'? (Perhaps I'll find out by myself, when I try to implement your version into my application...)
Similar Threads
Sencha Touch and long list/store performanceBy bklaas in forum Sencha Touch 1.x: DiscussionReplies: 3Last Post: 20 Dec 2010, 10:41 AM
Massive Performance Issue with Custom List (XTemplate) on AndroidBy konki_vienna in forum Sencha Touch 1.x: DiscussionReplies: 1Last Post: 20 Dec 2010, 10:40 AM
where's my thread?By innivodave in forum Ext 3.x: Help & DiscussionReplies: 4Last Post: 8 Dec 2009, 11:43 AM
List of performance strategies for extjsBy mavenn in forum Community DiscussionReplies: 9Last Post: 5 Mar 2009, 8:54 AM | https://www.sencha.com/forum/showthread.php?119185-List-performance-consolidation-thread/ | CC-MAIN-2016-22 | refinedweb | 3,168 | 58.89 |
Intro: How to Control TV Functions Using Analog Input and Arduino
Have you ever wanted to use a good old knob to control your TV volume instead of repeated button pushing? Or make it controlled by light? Do you want your remote to be replaced by an awesome arduino and let it do the hard work of using a remote for you? Well this instructable is for you!
Also if you just want to learn how to use IR to control your TV with your arduino this will help you gain some understanding. Or maybe inspire some awesome project and instructable ideas!
In this instructable we will be using a 10k potentiometer, IR LED, and an Arduino to control the volume (or anything else you want) on a TV.
Here is a materials list
NOTE: "*" means optional
Arduino (or clone, but the arduino Leonardo has not worked for me)
10k potentiometer
IR LED
100 ohm resistor
IR receiver
*NPN transistor and 1k resistor (if you want to amplify power to IR LED)
*pushbutton
*regular LED and 470 ohm resistor
You will also need to download the IRremote library from this site:
and make shure to make it arduino 1.0/1.0.1 compatible by changing
#include
to
#include
in IRRemoteInt.h.
Now Lets Begin!
Step 1: Get the IR Remote Library, Learn About It, and Get Some Remote Codes.
To get the really handy IR remote library that Ken Shirriff made
go to
and download and install the library
make sure to change
#include <WProgram.h>
to
#include <Arduino.h>
in IRRemoteInt.h. (to open IRRemoteInt.h and edit it, use Notepad on Windows, or Text Editor on Mac, but don't open with arduino IDE because it won't open with it.)
to make it arduino 1.0/1.0.1 compatable
On the page make sure to read about how to use the library and find the correct protocol for your device
check out the sending example and test it to make sure it works on your arduino
NOTE: picture 1 shows the sending code, and picture 2 shows the hardware setup for it.
then do the receiving test on your arduino to find out your remote hexadecimal codes by pointing your remote at the receiver and pressing the buttons you want the codes for, the see the codes in the Serial monitor.
make sure to write down the codes or store them somewhere for future use.
NOTE: picture 3 shows the receiving code, and picture 4 shows the hardware setup for the receiving code.
If you get all these to work then you are set to continue!
Step 2: Set Up Your Hardware!
Here are the steps to building the arduino remote:
hookup a pushbutton to pin 4 on your arduino, 5V, and ground with a 10k resistor
connect the center pin of your potentiometer to analog pin 3 on your arduino, and the side pins to 5V and ground
connect the long lead ("+") of the indicator LED to pin 2 of your arduino with a 470 ohm (or whatever resistor works) resistor, and the short ("-") lead to ground
connect the long lead ("+") of the IR LED to pin 3 of your arduino with a 100 ohm resistor, and the short lead ("-") to ground
NOTE: the first picture shows this basic circuit, and the ones after it show the steps listed above.
(OPTIONAL) if you want to use a transistor to amplify the IR LED's power do this:
connect the long lead ("+") of the IR LED to the emitter of the NPN transistor you have, the short lead ("-") to ground, a 100 ohm resistor from the collector pin of the NPN transistor to 5V, and connect the base pin of the NPN transistor to pin 3 of your arduino with a 1k resistor.
NOTE: the last three pictures show this circuit with transistor and how to set it up
Step 3: Program It!
Now we will program this remote killer!!!!
Here is the program I wrote. It sends a volume up signal whenever the knob is turned up, a volume down when the knob is turned down, a volume down signal every few milliseconds as long as the volume down button is pressed, and lights the indicator LED when it is sending signals.
I am still a beginner at arduino programming, so any improvements would be nice.
Remember to change the codes according to your remote control codes!!!
Here is a arduino file and a .txt file that you can copy and paste.
CODE (it is best to not copy it from here, copy from .txt or download the arduino code instead):
#include <IRremote.h>
#include <IRremoteInt.h>
//enable IR signal sending ability (only works on digital pin 3!!!)
IRsend irsend;
//these pins can be changed if you like
//pin from center of potentiometer
int potpin = 3;
int val = 0;
int old_val = 0;
int level = 0;
int old_level = 0;
//pin from volume down pushbutton
int downVolpin = 4;
int downVolVal = 0;
//led to indicate changes in volume (for debugging) you can change pin
int indicatorLED = 2;
//volume up 490
//volume down c90
void setup()
{
Serial.begin(9600);
pinMode(downVolpin, INPUT);
pinMode(indicatorLED, OUTPUT);
}
void loop() {
downVolVal = digitalRead(downVolpin); //state of volume down button
val = analogRead(potpin); // analog value of pot, between 0-1023
level = map(val, 0, 1023, 0, 100); //changes values from 0-1023 to 0-100
delay(10);
if (downVolVal == HIGH) { // if down volume button is pressed
digitalWrite(indicatorLED, HIGH);
for (int i = 0; i < 3; i++) {
irsend.sendSony(0xc90, 12); // Sony TV down volume (change for your device)
delay(100);
}
}
else if (level > old_level) { //if knob is turned up
digitalWrite(indicatorLED, HIGH);
for (int i = 0; i < 3; i++) {
irsend.sendSony(0x490, 12); // Sony TV up volume (change for your device)
delay(100);
}
}
else if (level < old_level) { //if knob is turned down
digitalWrite(indicatorLED, HIGH);
for (int i = 0; i < 3; i++) {
irsend.sendSony(0xc90, 12); // Sony TV down volume (change for your device)
delay(100);
}
} else { //if neither action is done
digitalWrite(indicatorLED, LOW);
}
old_val = val; //the value is now old
old_level = level; //the mapped value is now old
}
Step 4: Test It Out and Think of More Ways to Use Your IR Remote Powers!
Make sure all the codes are correct and that you made all your connections secure.
Turn on your arduino and make sure the IR LED is pointing at the TV IR receiver (make sure it is in line of sight)
Press or hold down the pushbutton to turn down your volume to the lowest level, and make sure the potentiometer is turned all the way down to sync it with the TV volume.
Now gently turn the potentiometer up and see if the volume on your TV goes up with it, and then turn it the other way and see if the volume goes down.
Also see if the indicator LED is lighting up whenever a change in volume is made.
If everything works, GREAT!
if not:
TROUBLESHOOTING:
check to see if you connected everything to the correct pins on your arduino
make sure connections are correct and secure
make sure the remote codes in the sketch are the ones that are for your TV and use the right protocol.
make sure the TV is in the line of sight of the IR LED
try putting the IR right in front of the TV receiver to see if it works, if it does, it means that you were to far away for the IR LED to reach the TV receiver. To improve range use a NPN transistor to amplify power to the IR LED as shown in the extra part of step #2.
Also look at the troubleshooting area of the IR Library page
5 Discussions
2 years ago
my led broke off my tv remote I cannot change the picture size, have time Warner remote.will not change pic size,, using air TV only.pic size on time Warner works through cable box . advice ideas ? TOM
5 years ago on Introduction
I too have had issues getting the IRremote library working on an Arduino Leonardo or clone.. I was glad to find you had issues too and that i'm not crazy. It led me to look further and find that Leonard has a timer4 which operates on pin13. The library supports this and it now works for me.
thanks!
--matt
via
6 years ago on Introduction
Great job! I am looking forward to seeing more of your inventions and instructibles!
6 years ago on Introduction
Nice first Instructable!
Reply 6 years ago on Introduction
Thanks! | https://www.instructables.com/id/How-to-control-TV-functions-using-Analog-input-and/ | CC-MAIN-2018-43 | refinedweb | 1,440 | 65.15 |
Several weeks ago, I tweeted that a lot of people appear to be making their software far more complex than it needs to be. I was asked to elaborate and share details. The comment was prompted by reading dozens of forum posts by desperate developers in over their heads trying to apply enormous and complex frameworks to applications that really could use simple, straightforward solutions. I've witnessed this in projects I've taken over and worked with other developers and of course am guilty of making these same mistakes myself in the past. After years of working with line of business Silverlight applications, and speaking with several of my colleagues, what I thought might be a "Top 5" turned out to be a "Top 10" list. This is not necessarily in a particular order, but here are the ten most common mistakes I see developers make when they tackle enterprise Silverlight applications (and many of these apply to applications in general, regardless of the framework).
1. YAGNI
YAGNI is a fun acronym that stands for, "You aren't going to need it." You've probably suffered a bit from not following YAGNI and that is one mistake I've definitely been guilty of in the past. YAGNI is why my Jounce framework is so lightweight, because I don't want to have another bloated framework with a dozen features when most people will only ever use one or two. YAGNI is violated by the framework you build that has this awesome policy-based logging engine that allows dynamic configuration and multiple types of logs ... even though in production you always dump it to the same rolling text file. YAGNI is violated by the complex workflow you wrote using the Windows Workflow Engine ... when you could have accomplished the same thing with a dozen lines of C# code without having to spin up a workflow process. Many of the other items here evolve around YAGNI.
Learn how to avoid these mistakes in my new book, Designing Silverlight Business Applications: Best Practices for Using Silverlight Effectively in the Enterprise (Microsoft .NET Development Series)
A good sign that YAGNI is being violated is when the team spends three months building the "application framework" without showing a single screen. It has to be built just right with a validation engine, a configurable business rules engine, an XSLT engine and your own data access adapter. The problem is that trying to reach too far ahead into the project is a recipe for disaster. Not only does it introduce a bulk of code that may not be used and adds unnecessary complexity, but many times as the project progresses you'll find you guessed wrong and now have to go back and rewrite the code. I had a boss once who would throw up his hands in exasperation and say, "No, don't ask me for another refactoring." Often users only think they know what they want, and building software too far into the future means disappointment when they figure out they want something different after testing the earlier versions of the code.
Well-written software follows the SOLID principles. Your software should be made of small, compact building blocks. You don't have to boil the ocean. Instead, assume you aren't going to need it. That doesn't mean the logger or rules engine will never come into play, it just means you defer that piece of the software puzzle until it is relevant and comes into focus. Often you will find the users weren't really sure of what they really wanted, and waiting to place the feature until the software has been proven will save you lots of cycles and unnecessary overhead. Instead of pulling in the Enterprise Library, just implement an
ILogger interface. You can always put the Enterprise Library behind it or refactor it later, but you'll often find that simple implementation that writes to the debug window is all you'll ever need.
2. Sledgehammer Framework Syndrome
This syndrome is something I see quite a bit with the Prism framework. I see a lot of questions about how to scratch your left ear by reaching around your back with your right arm using five layers of indirection. Contrary to what some people have suggested, I am a huge fan of Prism and in fact give quite a bit of credit to Prism concepts in Jounce. The problem is that while Prism provides guidance for a lot of features, few people learn to choose what features are relevant and most simply suck in the entire framework and then go looking for excuses to use the features. You don't have to replace all of your method calls with an event aggregator message and an application with five menu items doesn't always have to be chopped into five separate modules. I can't tell you how many projects I've seen pull in the Enterprise Library to use the exception handling block only to find there are only two classes that actually implement it and the rest do the same old try ... catch ... throw routine. Understand the framework you are using, and use only the parts that are relevant and make sense. If the source is available, don't even compile the features you aren't going to need ... there it is, YAGNI again.
The problem with pulling in the whole framework is what many of you have experienced. You jump into a new code base and find out there are one million lines of code but it just seems weird when the application only has a dozen pages. You started to work on a feature and find entire project libraries that don't appear to be used. You ask someone on the team about it, and they shrug and say, "It's working now ... we're afraid if we pull out that project, something might be broken that we won't learn about until after it's been in production for 6 months." So the code stays ... which is Not A Good Thing™.
3. Everything is Dynamic
The first question I often get about Jounce is "how do I load a dynamic XAP" and then there is that stare like I've grown a third eye when I ask "Do you really need to load the XAP dynamically?" There are a few good reasons to load a XAP dynamically in a Silverlight application. One is plug-in extensibility — when you don't know what add-ons may be created in the future, handling the model through dynamic XAP files is a great way to code for what you don't know. Unfortunately, many developers know exactly what their system will need and still try to code everything dynamic. "Why?" I ask. "Because it is decoupled." "But why is that good?" "Because you told me to follow the SOLID principles." The SOLID principles say a lot about clean separation of concerns, but they certainly don't dictate the need to decouple an application so much that you can't even tell what it is supposed to load.
Following SOLID means you can build the application as a set of compiled projects first, and then refactor modules into separate XAP files if and when they are needed. I mentioned one reason being extensibility. The other is managing the memory footprint. Why have the claims module referenced in your XAP file if you aren't going to use it? The thing is, there isn't much of a difference between delaying the creation of the claim view and the claim view model versus adding the complexity of loading it from a separate XAP file. If you profile the memory and resources, you'll find that most satellite XAP files end up being about 1% of the total size of the application. The main application is loaded with resources like images and fonts and brushes and controls, while the satellite XAP files are lightweight views composed of the existing controls and view models. Instead of making something dynamic just because it's cool, why not build the application as an integrated piece and then tackle the dynamic XAP loading only if and when it's needed?
4. Must ... Have ... Cache
Caches are great, aren't they? They just automatically speed everything up and make the world a better place by reducing the load on the network. That sounds good, but it's a tough sell for someone who actually profiles their applications and is looking to improve performance. Many operations, even with seemingly large amounts of data, end up having a negligible impact on network traffic. What's worse, a cache layer adds a layer of complexity to the application and another level of indirection. In Silverlight, the main option for a cache is isolated storage. Writes to isolated storage are slower than slugs on ice due to the layer between the isolated storage abstraction and the local file system.. Often you will find that your application is taking more time to compute whether or not a cached item has expired and de-serializing it from isolated storage than it would have taken to simply request the object from the database over the network. Obviously, there are times when a cache is required such as when you want the application to work in offline mode. The key is to build the cache based on need, and sometimes you may find that you aren't going to need it. As always, run a performance analysis and measure a baseline with and without the cache and decide based on those results whether or not the cache is necessary — don't just add one because you assume it will speed things up.
5. Optimistic Pessimistic Bipolar Synchronization
Synchronization is a problem that has been solved. It's not rocket science and there are great examples of different scenarios that deal with concurrency. Many applications store data at a user scope, so the "concurrency" really happens between the user and, well, the user. If you are writing an application that works offline and synchronizes to the server when it comes online, be practical about the scenarios you address. I've seen models that tried to address "What if the user went offline on their phone and updated the record, then updated the same record on their other offline phone then they went to their desktop in offline mode and updated the same record but the time stamp on the machine is off, and now both go online - what do we do?!" The reality is that scenario has about a 1 in 1,000,000 likelihood. Most users simply aren't offline that much and when they are, it's an intermittent exception case. Field agents who work in rural areas will be offline more often, but chances are they are using your application on one offline device, not multiples. It simply doesn't make sense to create extremely complex code to solve the least likely problem in the system, especially when it's something that can be solved with some simple user interaction. Sometimes it makes more sense to simply ask the user, "You have multiple offline updates. Synch with your phone or your desktop?" rather than trying to produce a complex algorithm that analyzes all of the changes and magically constructs the target record.
6. 500 Projects
I'm a big fan of planning your projects carefully. For example, it often does not make sense to include your interfaces in the same project as your implementations. Why? Because it forces a dependency. If you keep your interfaces (contracts) in a separate project, it is possible to reference them across application layers, between test and production systems, and even experiment with different implementations. I've seen this taken to the extreme, however, with applications that contain hundreds of projects and every little item is separated out. This creates a convoluted mass of dependencies and building the project can take ages. Often the separation isn't even needed because groups of classes are often going to be updated and shipped together.
A better strategy is to keep a solid namespace convention in place. Make sure that your folder structure matches your namespaces and create a folder for models, contracts, data access, etc. Using this approach enables you to keep your types in separate containers based on namespaces, which in turn makes it easy to refactor them if you decide that you do need a project. If you have a project called
MyProject with a folder called
Model and a class called
Widget, the class should live in the
MyProject.Model namespace. If you find you need to move it to a separate project, you can create a project called
MyProject.Model, move the class to it and update references, and you're done - just recompile the application and it will work just fine.
Designing Silverlight Business Applications: Best Practices for Using Silverlight Effectively in the Enterprise (Microsoft .NET Development Series)
7. No Code Behind!
This is one that amazes me sometimes. Developers will swear MVVM means "no code behind" and then go to elaborate lengths to avoid any code-behind at all. Let's break this down for a minute. XAML is simply declarative markup for object graphs - it allows you to instantiate types and classes, set properties and inject behaviors. Code-behind is simply an extension of those types and the host class that contains them. The idea of MVVM is to separate concerns - keep your presentation logic and view-specific behaviors separate from your application model so you can test components in isolation and reuse components across platforms (for example, Windows Phone 7 vs. the new WinRT on Windows 8). Having business logic in your code-behind is probably NOT the right idea because then you have to spin up a view just to engage that logic. What about something completely view-specific, however? For example, if you want to kick off a storyboard after a component in the view is loaded, does that really have to end up in a view model somewhere? Why? It's view-only logic and doesn't impact the rest of the application. I think it's fine to keep concerns separated, but if you find you are spending 2 hours scouring forums, writing code and adding odd behaviors just so you can take some UI code and shove it into a view model, you're probably doing it wrong. Code-behind is perfectly fine when it makes sense and contains code that is specific to the view and not business logic. A great example of this is navigation on Windows Phone. Because of the navigation hooks, some actions simply make sense to write in the code-behind for the view.
8. Coat of Many Colors
Have you ever worked on a system where your classes wear coats of many colors? For example, you have a data base table that is mapped to an Entity Framework class, which then gets shoved inside a "business class" with additional behaviors, that is then moved into a lightweight Data Transfer Object (DTO) to send over the wire, is received using the proxy version of the DTO generated by the service client, and then pushed into yet another Silverlight class? This is way too much work just to move bits over the wire. Modern versions of the Entity Framework allow you to create true POCO classes for your entities and simply map them to the underlying data model. Silverlight produces portable code that you can share between the client and server projects, so when you define a service you can specify that the service reuses the type and de-serializes to the original class instead of a proxy object. WCF RIA Services will handle all of the plumbing for sharing entities between the client and the server for you. You know you are a victim of this problem if you ask someone to add a new entity into the mix and they moan for 10 minutes because they know it's going to take forever to build the various incarnations of the object. When there is too much ritual and ceremony involved with moving a simple entity from the server to the Silverlight client, it's time to step back and re-evaluate. In some cases it might make sense to keep the entities but use code-generation techniques like T4 templates to simplify the repetive tasks, but in many cases you can probably get away with reusing the same class across your entire stack by separating your models into a lightweight project that you reference from both sides of the network pond.
9. Navigation Schizophrenia
Are you building a web site in Silverlight, or an application? The presence of the navigation framework has led many projects down the path of using URL-driven navigation for line of business applications., To me, this is a complete disconnect. Do I use URLs in Excel? What does the "back" button mean in Word? The point is that some applications are well-suited to a navigation paradigm similar to what exists on the web. There is a concept of moving forward and "going back." Many line of business applications are framed differently with nested menus, multiple areas to dock panels and complex graphs, grids, and other drill-downs. It just doesn't make sense to try to force a web-browser paradigm on an application just because it is delivered over the web. Sometimes you have no choice - for example, navigation is an intrinsic part of the Windows Phone experience, and that's fine. Just make sure you are writing navigation based on what your application needs, rather than forcing a style of navigation on your application simply because there is a template for it.
10. Everything is Aggregated
The final issue is one that I've seen a few times and is quite disturbing. If you are publishing an event using the event aggregator pattern, and receiving the same event on the same class that published it, there's something wrong. That's a lot of effort to talk to yourself. The event aggregator is a great pattern that solves a lot of problems, but it shouldn't be forced to solve every problem. I've always been a fan of allowing classes communicate with peers through interfaces. I don't see an issue with understanding there is a view model that handles the details for a query, so it's OK to expose an interface to that view model and send it information instead of using the event aggregator. I still expose events on objects as well. For example, if I have a repository that is going to raise a notification when the collection changes, I'll likely expose that as an event and not as a published message. Why? Because for that change to be interesting, the consumer needs to explicitly understand the repository and have an established relationship. The event aggregator pattern works great when you have messages to publish that may have multiple subscribers, impact parts of the system that may not be explicitly aware of the class publishing the message, and when you have a plug-in model that requires messages to cross application boundaries. Specific messages that are typically shared between two entities should be written with that explicit conversation in mind. In some cases you want the coupling to show the dependency because it is important enough that the application won't work well without it. There is nothing wrong with using the event aggregator, just understand the implications of the indirection you are introducing and determine when a message is really an API call, a local notification, or a global broadcast.
Conclusion
I love writing line of business software. I've been doing it for well over a decade across a variety of languages ranging from C++, Java, VB6, JavaScript and XSLT (yes, I called XSLT a language, if you've worked with systems driven by XSLT you know what I mean) ... and I've been guilty of most of the items I listed here. One thing I learned quickly was that most people equate "enterprise software" to "large, clumsy, complex and difficult to maintain software" and that doesn't have to be the case. The real breakthrough for me happened when I started to focus on the tenants of SOLID software design as well as DRY (don't repeat yourself) and YAGNI. I learned to focus on simple building-block elements and working with what I know and not spending too much time worrying about what I don't know. I think you'll find that keeping the solution simple and straightforward creates higher quality software in a shorter amount of time than over-engineering it and going with all of the "cool features" that might not really be needed. If there is nothing else you take away from this article, I hope you learn two things: first, don't code it unless you know you need it, and second, don't assume - measure, spike, and analyze, but never build a feature because you THINK it will benefit the system, only build it when you can PROVE that it will.
Want to avoid these mistakes? Read about lessons learned from over a decade of enterprise application experience coupled with hands-on development of dozens of line of business Silverlight applications in my new book, Designing Silverlight Business Applications: Best Practices for Using Silverlight Effectively in the Enterprise (Microsoft .NET Development Series). | http://csharperimage.jeremylikness.com/2011_09_01_archive.html | CC-MAIN-2015-06 | refinedweb | 3,606 | 57.61 |
Automating Penetration Testing in a CI/CD Pipeline: Part 2
Automating Penetration Testing in a CI/CD Pipeline: Part 2
How to use OWASP ZAP API and Python scripts to automatically start penetration testing your web applications. first post, we discussed what OWASP ZAP is, how it’s installed and automating that installation process with Ansible. This second article of three will drill down into how to use the ZAP server, created in Part 1 for penetration testing your web-based application.
Penetration Test Script
If you recall the flow diagram (below) from the first post, we will need a way to talk to ZAP so that it can trigger a test against our application. To do this we’ll use the available ZAP API and wrap up the API in a Python script. The script will allow us to specify our ZAP server, target application server, trigger each phase of the penetration test and report our results.
The core of the ZAP API is to open our proxy, access the target application, spider the application, run an automated scan against it and fetch the results. This can be accomplished with just a handful of commands; however, our goal is to eventually get this bound into a CI/CD environment, so the script will have to be more versatile than a handful of commands.
The Python ZAP API can be easily installed via pip:
pip install python-owasp-zap-v2.4
We’ll start by breaking down what was outlined in the above paragraph. For learning purposes, these can be easily ran from the Python command line.
from zapv2 import ZAPv2 target = "http://" % target_application_url zap = ZAPv2(proxies={'http': "" %zap_hostname_or_ip, 'https': "" %zap_hostname_or_ip} zap.urlopen(target) zap.spider.scan(target) zap.spider.static() # when status is >= 100, the spider has completed and we can run our scan zap.ascan.scan(target) zap.ascan.status() # when status is >= 100, the scan has completed and we can fetch results print zap.core.alerts()
This snippet will print our results straight to STDOUT in a mostly human readable format. To wrap all this up so that we can easily integrate this into an automated environment we can easily change our output to JSON, accept incoming parameters for our ZAP host names and target url. The following script takes the above commands and adds the features just mentioned.
The script can be called as follows:
./pen-test-app.py --zap-host zap_host.example.com:8080 --target app.example.com
Take note, the server that is launching our penetration test does not need to run ZAP itself, nor does it need to run the application we wish to run our pen test against.
Lets set up a very simple web-based application that we can use to test against. This isn’t a real-world example but it works well for the scope of this article. We’ll utilize Flask, a simple Python-based http server and allow it run a basic application that will simply display what was typed into the form field once submitted. The script can be downloaded here.
First Flask needs to be installed and the server started with the following:
pip install flask python simple_server.py
The server will run on port 5000 over http. Using the example command above, we’ll run our ZAP penetration test against it as so:
/pen-test-app.py --zap-host 192.168.1.5:8080 --target Accessing Spidering Spider completed Scanning Info: Scan completed; writing results.
Please note that the ZAP host is simply a url and a port, while the target must specify the protocol, either ‘http’ or ‘https’.
The ‘pen-test-app.py’ script is just an example of one of the many ways OWASP ZAP can be used in an automated manner. Tests can also be written to integrate FireFox (with ZAP as its proxy) and Selenium to mimic user interaction with your application. This could also be ran from the same script in addition to the existing tests.
Scan and Report the Results
The ZAP API will return results to the ‘pen-test-app.py’ script which in turns will write them to a JSON file, ‘results.json’. These results could be easily scanned for risk severities such as “grep -ie ‘high’ -e ‘medium’ results.json”. This does not give us much granularity in determining which tests are reporting errors nor if they critical enough to fail an entire build pipeline.
This is where a tool called Behave comes into play. Behave is a Gerkin-based language that allows the user to write test scenarios in a very human readable format.
Behave can be easily installed with pip:
pip install behave
Once installed our test scenarios are placed into a feature file. For this example we can create a file called ‘pen_test.feature’ and create a scenario.
Feature: Pen test the Application Scenario: The application should not contain Cross Domain Scripting vulnerabilities Given we have valid json alert output When there is a cross domain source inclusion vulnerability Then none of these risk levels should be present | risk | | Medium | | High |
The above scenario gets broken down into steps. The ‘Given’, ‘When’ and ‘Then’ will each correlate to a portion of Python code that will test each statement. The ‘risk’ portion is a table, that will be passed to our ‘Then’ statement. This can be read as “If the scanner produced valid JSON, succeed if there are no CSX vulnerabilities or only ones with ‘Low’ severity.
With the feature file in place, each step must now be written. A directory must be created called ‘steps’. Inside the ‘steps’ directory we create a file with the same name as the feature file but with a ‘.py’ extension instead of a ‘.feature’ extension. The following example contains the code for each step above to produce a valid test scenario.
import json import re import sys from behave import * results_file = 'results.json' @given('we have valid json alert output') def step_impl(context): with open(results_file, 'r') as f: try: context.alerts = json.load(f) except Exception as e: sys.stdout.write('Error: Invalid JSON in %s: %s\n' % (results_file, e)) assert False @when('there is a cross domain source inclusion vulnerability') def step_impl(context): pattern = re.compile(r'cross(?:-|\s+)(?:domain|site)', re.IGNORECASE) matches = list() for alert in context.alerts: if pattern.match(alert['alert']) is not None: matches.append(alert) context.matches = matches assert True @then('none of these risk levels should be present') def step_impl(context): high_risks = list() risk_list = list() for row in context.table: risk_list.append(row['risk']) for alert in context.matches: if alert['risk'] in risk_list: if not any(n['alert'] == alert['alert'] for n in high_risks): high_risks.append(dict({'alert': alert['alert'], 'risk': alert['risk']})) if len(high_risks) > 0: sys.stderr.write("The following alerts failed:\n") for risk in high_risks: sys.stderr.write("\t%-5s: %s\n" % (risk['alert'], risk['risk'])) assert False assert True
To run the above test simply type ‘behave’ from the command line.
behave Feature: Pen test the Application # pen_test.feature:1 Scenario: The application should not contain Cross Domain Scripting vulnerabilities # pen_test.feature:7 Given we have valid json alert output # steps/pen_test.py:14 0.001s When there is a cross domain source inclusion vulnerability # steps/pen_test.py:25 0.000s Then none of these risk levels should be present # steps/pen_test.py:67 0.000s | risk | | Medium | | High | 1 feature passed, 0 failed, 0 skipped 1 scenario passed, 0 failed, 0 skipped 3 steps passed, 0 failed, 0 skipped, 0 undefined Took 0m0.001s
We can clearly see what was ran and each result. If this was ran from a Jenkins server, the return code will be read and the job will succeed. If a step fails, behave will return non-zero, triggering Jenkins to fail the job. If the job fails, it’s up to the developer to investigate the pipeline, find the point it failed, login to the Jenkins server and view the console output to see which test failed. This may not be the most ideal method. We can tell behave that we want our output in JSON so that another script can consume the JSON, reformat it into something an existing reporting mechanism could use and upload it to a central location.
To change behave’s behavior to dump JSON:
behave --no-summary --format json.pretty > behave_results.json
A reporting script can either read the behave_results, json file or read the STDIN pipe directly from behave. We’ll discuss more regarding this in the followup post.
Summary
If you’ve been following along since the first post, we have learned how to set up our own ZAP service, have the ZAP service penetration test a target web application and examine the results. This may be a suitable scenario for many systems. However, integrating this into a full CI/CD pipeline would be the optimal and most efficient use of this.
In part three we will delve into how to fully integrate ZAP so that not only will your application involve user, acceptance and capacity testing, it will now pass through security testing before reaching your end Nick DeClario , DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/automating-penetration-testing-in-a-cicd-pipeline-1 | CC-MAIN-2018-47 | refinedweb | 1,559 | 63.29 |
The MinnowBoard packs a SATA, gigabit ethernet port, and PCI Express connectivity with the HDMI and USB trimmings one expects from a modern Single Board Computer (SBC). The MinnowBoard also brings an Atom CPU with Intel GMA 600 graphics, 1Gb of RAM, 4Gb of flash storage, a handful of GPIO ports to tinker with and the beginnings of a daughterboard community.
The first duaghterboard is the BoB which offers more GPIO, SPI, I2C, and UART headers. The daughterboards are referred to as Lures in the MinnowBoard community, much like Ardunio Shields and Beagle Capes. The white header shown in the top left of the below picture contains many goodies such as PCIE, SATA, USB, UART, I2C and SPI.
Booting the board
After booting the MinnowBoard for the first time I was automatically logged into a GNOME desktop running at 1600x900. It took a moment for the bad font rendering on my LCD to lead me notice this and change to 1080p resolution. The Angstrom Linux Distribution that is the default choice for the MinnowBoard shows signs of its embedded heritage. For example, the default shell is /bin/sh and the dropbear ssh daemon is running instead of openssh. I discovered the latter because dropbear wasn't allowing connections. It wasn't denying them, but was failing with an error about buffers.
A bit of shuffling with the opkg package manager that Angstrom uses and I had a working openssh server up and running. I later found that I had to install nfs-utils-client in order to mount an NFS share from the MinnowBoard. A final hiccup was having to download and install Firefox manually because it didn't appear in the opkg listing.
Graphics Performance Tests
I then turned to seeing how the GMA 600 worked on the board. First, mplayer on Big Buck Bunny, I lost the desktop and could only see the terminal text before killing mplayer over ssh. Then attention was turned to the cairo demos to see how 2D graphics performance was. Flowers got 0.5 fps and gears 6.6 fps. The gears number is directly in line with what theGK802 could get, but well below the 25+ FPS that the Beagle Bone Black gave when it was running at 720p. For reference, a desktop machine running an Intel 2600K with an NVidia 570 graphics card got 140 FPS in gears.
After digging in a bit, the GMA 600 is based on a PowerVR chip and it seems Linux support may require some tinkering to get up and running. I then contacted Scott Garman from the Intel Open Source Technology Center who responded that "Patrik Jakobsson is the maintainer of the open source GMA500 kernel driver. He currently has a MinnowBoard and is working on including some 2D acceleration in it. With any luck he hopes to get that included in the upcoming 3.12 kernel".
Programming on the Board
My first thoughts for programming on the MinnowBoard were Toggling an LED and then some light GPIO programming to do a similar thing. Both of those having been done and well documented for me I had to aim a little higher. The 8 GPIO pins on the J9 header block can be read and written through the /sys/class/gpio filesystem just as with the Beagle Bone Black. This is wonderful for code portability, you may need to change the path to which GPIO file to use, but code written for Linux GPIO should work across a range of hardware. For the MinnowBoard there is a warning about overloading the GPIO pins by going over 3.3V/10mA which may cause permanent damage.
Shown in the image below is the work in progress with the ultimate aim of having the MinnowBoard drive some 595 shift registers powering a bunch of LEDs using an external power source. As at the diagram stage, the transistor circuit on the top left of the larger breadboard is being passed by leaving the LEDs quite dimly lit. The two ICs in the bottom breadboard of the figure are 595 shift registers. These work with a latch, clock, and data line, turning those three lines into any multiple of 8 output lines. This is because each 595 has 8 outputs and can chain to a subsequent 595 shift register which itself can chain to another and so on. Using 595s can quickly give you many output lines from only 3 GPIO headers on the Minnowboard.
To program the 595 you hold the latch line low, write a bit of data (high or low) to the data line and pulse the clock line to have the 595 take whatever the value on the data line is currently as the next bit of input. When you are done you release the hold on the latch line (set it high again) which instructs the 595 to output your data. The 595 makes no changes to its output while you are shifting your data in using the data and clock lines. The changes to the output of the 595 happen at once when you release the latch line.
Running Arduino Functions
In the code I've reimplemented some Arduino functions on top of the Linux kernel /sys/class/gpio filesystem. Although these functions are similar to the Arduino ones, the program and memory size restrictions of Arduino programming don't apply and you get to choose which language you like, for me this time around it is C++. First the Arduino like functions, pinMode() sets a GPIO pin to read or write, digitalWrite() puts a single boolean value to a GPIO pin, and shiftOut() sends an octet of bits to a data line pulsing the clock as it goes.
#include <string> #include <fstream> #include <iostream> #include <bitset> using namespace std; #include <unistd.h> enum pinmode { INPUT = 0, OUTPUT = 1 }; static void pinMode( std::string pin, pinmode mode ) { ofstream oss(( pin + "direction" ).c_str()); if( mode ) oss << "out" << flush; else oss << "in" << flush; } enum writemode { LOW = 0, HIGH }; static void digitalWrite( string fname, int state ) { ofstream oss( (fname + "/value").c_str() ); oss << state << flush; } enum shiftOutMode { MSBFIRST = 1 }; static void shiftOut( const string& data, const string& clock, enum shiftOutMode, char userdata ) { for( int i = 7; i>=0; --i ) { digitalWrite( clock, 0 ); int v = !!(userdata & (1<<i)); digitalWrite( data, v ); digitalWrite( clock, 1 ); usleep( 20 ); digitalWrite( clock, 0 ); usleep( 20 ); } }
The main program is shown below, first the three lines are set for output and the two octets of data are set to an initial value. Each iteration the latch is held while the data is shifted into the 595 ICs and the latch released. The next 6 lines just shift the two octets by one bit as a circular buffer. For simplicity a bitset<16> could be used which would reduce those 6 lines right down. This code starting as an Arduino sketch still leaves refactoring to be done.
int data[ 4 ]; int main( int argc, char** argv ) { std::string DATA("/sys/class/gpio/gpio251/"); // PIN 10 std::string CLOCK("/sys/class/gpio/gpio249/"); // PIN 8 std::string LATCH("/sys/class/gpio/gpio247/"); // PIN 6 data[0] = 0xE2; data[1] = 0xAD; pinMode(LATCH, OUTPUT); pinMode(DATA, OUTPUT); pinMode(CLOCK, OUTPUT); while( true ) { digitalWrite(LATCH, LOW); shiftOut(DATA, CLOCK, MSBFIRST, data[0] ); shiftOut(DATA, CLOCK, MSBFIRST, data[1] ); digitalWrite( CLOCK, 0 ); digitalWrite(LATCH, HIGH); int b0 = data[0] & 0x1; int b1 = data[1] & 0x1; data[0] >>= 1; data[1] >>= 1; data[0] |= (b1 << 7); data[1] |= (b0 << 7); cerr << "data[0]:" << (bitset<8>)data[0] << " data[1]:" << (bitset<8>)data[1] << endl; usleep( 1000 * 1000 ); } return 0; }
Speed Performance Tests
As for performance, the MinnowBoard got 1164 overall in Octane. For comparison TI ARM OMAP5432 (Dual core A15) at 800Mhz got 1914, the IFC6410 quad ARM A15 Snapdragon obtained 1439, and the ODroid-U2 quad core ARM got 1411. Openssl 1.0.1e "speed" performance for the MinnowBoard for 1024 bit RSA got 89 signs/s and 1562 verify/s. This puts the MinnowBoard at a little over a third the number RSA operations/sec that the IFC6410 quad ARM A15 Snapdragon can perform.
The MinnowBoard is in the ball park of around half the RSA performance of the Beagle Bone Black. As the Octane benchmark can take advantage of multiple threads of execution the MinnowBoard performed reasonably closely to the ARM machines. It would seem that the code for the openssl that came with the MinnowBoard may not have been optimized as best as it could for the Atom CPU it was running on.
Power wise the MinnowBoard took 9.2 Watts at an idle desktop, up to 10.5 when a keyboard and mouse where connected using a passive hub. Still with the hub connected for the rest of the figures, power usage moved up to 10.8 during an openssl speed test. While running the Octane benchmark peaks up to 11.5 Watts where seen. As with all the articles in this series, I am using a Belkin at the wall meter to measure power, so these numbers all also include the inefficiency of the power supply.
Bringing PCIE to the maker market is a wonderful thing. The MinnowBoard has 1Gb of RAM and a single (Hyper threaded) core. There are high end ARM boards coming with 2Gb of RAM, for example the ODroid-U2. There are also those with only 512Mb of RAM such as the Raspberry Pi Model B and Beagle Bone Black ($45).
It will be interesting to see what variants of the MinnowBoard rise up, taking advantage of the open design with "Customizations possible without signing NDAs" and available information about the board itself (bottom of linked page). We would like to thank the Intel Open Source Technology Center for providing a review sample.
Jayne.
Jayneil Dal.
David Anders Said:
Ben, just fyi, there is the Userspace-Arduino project that creates a platform for using generic arduino functions under linux userspace. significant portions of this project are used on the new Intel Galileo and the newly announced Arduino TRE. you can easily tweak it for any system running linux: | https://www.linux.com/learn/tutorials/745700-minnowboard-the-200-atom-based-maker-board | CC-MAIN-2014-15 | refinedweb | 1,688 | 67.18 |
Plz provide me all the material for Java
Plz provide me all the material for Java Plz provide me all the material for Java
Please go through the following link:
Java Tutorials
java material
java material Please provide option for full materials download ... i dont have internet connection in my home. plz provide offline tutorial option
Triggers - JSP-Servlet
how can i use update triggers. That is, how can i get the result of a trigger... should be increased automatically.
Thanks.. Hi friend,
Plz give
Accessing your site
Accessing your site I cant acess your site.I am getting good grip on java through roseindia.kindly help me so that I can acess roseindia
We have organized our site map for easy access.
You can browser though Site Map to reach the tutorials and information
pages. We will be adding the links to our site map as and when new pages
are added
plz check my codings are correct or not...There is an error..i cant find it..
plz check my codings are correct or not...There is an error..i cant find... Scanner(System.in);
System.out.print("Enter your index No: ");
indexNo=text.nextLine();
System.out.print("Enter your Gender
Check PHP MySQL Connectivity
you to check whether your PHP web page has
been connected with MySQL server...Check PHP MySQL Connectivity:
The first step of any kind of connectivity is to check whether the
connectivity has been established or not. To check
Mysql Check
Mysql Check
Mysql Check is used to check the existence of table in the database.
Understand with Example
The Tutorial illustrate an example from 'Mysql Check
WEB SITE
WEB SITE can any one plzz give me some suggestions that i can implement in my site..(Some latest technology)
like theme selection in orkut
like... Technical Subject if u have knowledge about PHP, MySQL, JavaScript, Jquery, and CSS
Hi da SAKTHI ..check thiz - Java Beginners
Hi da SAKTHI ..check thiz package bio;//LEAVE IT
import java.lang.... YOUR CHOICE FROM THE LIST BELOW");
p1.add(lb1);p1.add(lb2);p1.add(lb3);p1.add.......CHECK THIZ
String url = "jdbc:odbc:bioDB";
Class.forName
Types of Triggers
Types of Triggers hii,
How many types of Triggers in sql?
hello,
there are three type of trigger
DML triggers
Instead of triggers
System triggers
*>... Source Name and click ok button.
5)Your DSN will get created.
6) Restart your
java & mysql
like
"You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '' at line 1"
my...("com.mysql.jdbc.Driver");
Connection con=DriverManager.getConnection("jdbc:mysql
Hi ...CHECK - Java Beginners
JLabel("SELECT YOUR CHOICE FROM THE LIST BELOW");
p1.add(lb1);p1.add(lb2... frend,
Plz explain about the two classes which not exist in this code
How to promote your writing through your website
.
A User friendly Home Page
A user will come to your site but if it’s... site then why lose out on them? Make them your permanent visitors? Now you... updating of your site is a must that you need to do in order to keep up the user
Professional Web Design Services For You Web Site
Professional Web Design Services For You Web Site
... designer
If the content of your website is such that it requires art work then it becomes essential for you to consult a professional designer and tell him your
The Scannability of your website
of your site needs to be such that it can be easily scanned. Search results... textual material then this will definitely increase your scannability.
If you have... the contents of your site pretty easily.
Also the page titles that you create
Jobs & Triggers
Jobs & Triggers
... as a JobDataMap, which can be used to store state information for a given instance of your job... to provide the scheduling according to your need. There are currently
two types
Submit your site to 100 top search engines
where you can
submit your site. Top search engines are Yahoo, Google, MSN, AOL, Dmoz, etc.,
but there are also many more high quality sites to list your
MySQL Front
web site management.
This all-inclusive web based mysql front end provids...
MySQL Front
In this page you will find many tools that you can use as MySQL Front to work
with the MySQL
Index | About-us | Contact Us
|
Advertisement |
Ask
Questions | Site
Map | Business Software
Services India
Tutorial Section ... FrameWork
Tutorial | MySQL Tutorials
|
XML
Tutorial |
VB
India website.
Index |
Ask
Questions | Site
Map
Web Services... for Beginners
| PHP Examples |
Date Functions |
PHP
MySQL | Questions? ... Services
Tutorials |
Bioinformatics Tutorials
| MySQL Tutorials
Confirm problem
Confirm problem Sir
i have used following code in button delete
onclick="if (confirm('sure?')) submit();"
and if i choose cancel it still submits the form however if choose cancel i want to remain on same jsp page
please help
MySQL Books
source database server. Whether you are a seasoned MySQL user looking to take your... material and code on MySQL 5, PHP 5 and on PHPs object model and validation... site for the first edition of the book PHP & MySQL for Dynamic Web Sites
confirm delete
confirm delete Sir,
I want to ask confirmation before deletion, i m using submit button then the form is passed to servlet for deletion,Please help
Thanks in advance
Get an Editor to polish your pages
will improve the credibility of your site as bad language makes the users lose confidence in the material that they will find on your site. Editors can also make... as stale information is not welcome anywhere.
For instance your site is based upon
Plz help me with this Question - Java Beginners
; Hi Friend,
Your code:
class Check
{
public static void main(String...Plz help me with this Question this is java code
-------------------------
Consider the following code:
int [ ] a = [1, 2, 3];
Object o ="123
TRIGGERS
TRIGGERS
... take when some databases related event occurs. Triggers are executed when you... and drop the triggers and describe you some examples of how to use them.
CREATE
Generate a documentation site for Maven project
In this section, you will learn to auto generate a documentation site with default design and pages for your Maven based project
MySQL Client
your ecommerce site.
The MySQL Client... 4.02 to 5.12 and all of the latest MySQL features including views, triggers, stored...
MySQL Client
My Sql Clint
MySQL Client
from your ecommerce site.
The MySQL Client... commands against a given MySQL database table, all from your own browser.
Now... MySQL Client
My
javascript confirm box
javascript confirm box javascript confirm box
Submitting Web site to search engine
Registering Your Web Site
To Search Engines... Web Sites
Once your web site is running, the next job for you is to attract the
visitors to your websites. You can advertise your site using your
Please check it and let me know - Java Beginners
Please check it and let me know Hi friends,
your las suggession... code and requirement please check it and solve again thanks
Requirements:- I...");
con=DriverManager.getConnection("jdbc:mysql://localhost:3306/student","root
What are the types of triggers?
What are the types of triggers? What are the types of triggers?
Hi,
There are 12 types of triggers in PL/SQL that consist of combinations of the BEFORE, AFTER, ROW, TABLE, INSERT, UPDATE, DELETE and ALL key words
material management system
material management system hi frnds i need a mini project based on material management system if u have the project please post to me on java beginners
backup your MySQL data using mysqldump:cPanel MYSQL 5.1 to 5.5
backup your MySQL data using mysqldump:cPanel MYSQL 5.1 to 5.5 backup your MySQL data using mysqldump:cPanel MYSQL 5.1 to 5.5
SAP MM Study Material
SAP MM Study Material Please let me know where can i get the SAP MM study material an ask queries as well
mysql
you need to download the mysql-connector jar file for connecting java program from mysql database.......
Hi friend,
MySQL is open source database and you can download and install it on your own local system also. Here
Java application PLZ ?
Java application PLZ ? Write an application that demonstrates each of the following methods, based on the statement String dedicate = ?Dedicated to making your event a most memorable one:; indexOf(?D?)
charAt(15)
endsWith(one
Running First Hibernate 3.0 Example
of database.
After running the code example you may check your MySQL database... and configured on your computer
Eclipse IDE installed
MySQL... be download from.
Visit the site and download
mysql-java
" com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use...mysql-java i try to execute this code
stmt1.executeUpdate("insert
javascript confirm yes no
javascript confirm yes no javascript confirm yes no - how to create confirm dialog box yes or no in JavaScript?
if (confirm('Are You... are not redirected")
}
How to confirm yes or no in JavaScript
Hibernate Tools Update Site
Site. The anytime you can user Hibernate
Tools Update Manager from your eclipse...
Hibernate Tools Update Site
Hibernate Tools Update Site
In this section we
MySQL Training ? Online Free MySQL Training
MySQL Training ? Online Free MySQL Training
... want to know about MySQL, we are starting from
installation to the development of enterprise application using MySQL. In this
online training/tutorial we are using
mysql - WebSevices
check your code of PHP that when you insert the first record then it should...mysql Hello,
mysql storing values in column with zero... when I want to store this value in mysql it store 0 first and than 60
Servlets and confirm box
Servlets and confirm box Sir...! I want to insert or delete records form oracle based on the value of confirm box can you please give me the idea.... thanks
Hi Check this.... - Java Beginners
Hi Check this.... Hi Sakthi here..
Run This Code..
Hi sakthi
Your code is not visible here, can u send again please..
Thanks
How to make your web writing persuasive?
is, but wait is the user who surfs your site satisfied? Persuasion begins...; this is because the interested party will be attracted towards your site... a customer surfing your site will be feeling.
The emotion of the customer needs
Plz give java coding for this
Plz give java coding for this ****** *
* *
***********
* *
* ******
Hello Friend;
below is the code... thing this will print your desired pattern. :)
thnk u boss
Gui plz help
Gui plz help Create a Java application that would allow a person to practice math (either addition, subtraction, multiplication or division...(java.awt.event.ActionEvent evt) {
// TODO add your
plz help - Java Beginners
plz help i have to programs and it takes too long to run them so i...("Your file has been written");
}
}
---------------
Second code:
import...();
System.out.println("Your file has been written");
}
catch
plz - Java Interview Questions
.your class must contain the method rollDie that returns a number between 1... the expression
(int)(Math.random() * 6) + 1.
wrie aprogram to test your class(Hw9_2) ... your help.
Thanks a lot
Dynamic check box problem
Dynamic check box problem In my project i have used a dynamic table, in the each row of that table there is one check box [that i have created... check boxes ... pleas help me as soon as possible...
1)application.jsp
Web Site promotion services at roseindia.net
Web site services
will help you get listed in major search engines and directory of the
world. Our own site traffic comes from the major search....
to get your web sites submitted by us
How to Upload Site Online
The first step in making your website is to create the graphics and export... the
website on your server.
To upload the website on your server, you should have the IP address, user
name and password of your hosting server. Once all
best Struts material - Struts
Welcome to the MySQL Tutorials
to quickly
migrate your proprietary databases to MySQL.
...
MySQL Tutorial - SQL Tutorials
The MySQL database server is most popular database
Cron Expression for Cron Triggers
ide php mysql
ide php mysql what is ide for php and mysql
Hi,
You can use Eclipse to develop your PHP project. The easist way is to download the Eclipse having PHP support from eclipse download site.
The Eclipse PHP plugin
More About Triggers
More About Triggers
... of
triggers. As we know that the trigger objects are used for executing the jobs... of triggers. Here we are going to provide two types of triggers like:
Simple
check
updated");
will the above code check if the user has entered value for empcode
Need help on mySQL 5 - SQL
Need help on mySQL 5 Dear Sir,
I'm a beginner of mySQL 5 . I need help on mySQL5 command.
This is the table which i created is called... |
| | |
+---------+------------------+
Thanks for your help
maven mysql jdbc driver
maven mysql jdbc driver How to add the maven mysql jdbc driver... simple!! Add the following into your pom.xml file:
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java<
MySQL Download
Access databases to MySQL server. Depending on your privileges on the target MySQL... on your privileges on the target MySQL server you can export MS Excel data into new...
MySQL Download
PHP User Authentication Login and password check Tutorial
depending upon your already installed script is up to
you. Count is where...;;
//Now to connect to mySQL.
$connect = mysql_connect("$dbhost", "$dbname", "$dbpass");
mysql_select_db($dbbase, $connect
help plz - Java Interview Questions
/ output the number of milk cartons needed to hold milk (round your answer... , if the input string is abcd, the output is : edcba
plz plz plz plz plz help
MySql Open Source
MySql Open Source
MySQL Open Source License
MySQL is free use for those who are 100% GPL. If your application is licensed under GPL or compatible OSI license approved by MySQL AB, you are free to ship any GPL software of MySQL AB
confirm message box - Java Beginners
confirm message box How can I create a confirm message with Yes...
function showConfirm() {
myBooleanVariable = newConfirm(
'My Confirm Title',
'My Confirm Message',
2,0,0)
if (retVal==true) return true;
else
MySql function - SQL
correspinding to your MySQL version for right systax'
I am using MySQL5.0 version.
Please check my code and let me inform what mistake i did.
Your help...MySql function Hi sir/madam,
I am creating a function in mysql5.0
How to check no records - JSP-Servlet
to check and display your cart is empty....
PLs help me.........
Reply...How to check no records Hi all,
I am developing an onlineshopping project, in that how i have 2 check no records.
I saw an answer
MySQL How to Topic
the Password for Your Own User Account?...
MySQL How to Topic
How to installation mysql window 2000 and window XP
http
help plz - Java Interview Questions
help plz 1 )write a program that does the following :
a. prompts the user to input five decimal numbers representing the scores?
b. prints.../ output the number of milk cartons needed to hold milk (round your answer
Show message and confirm dialog box
Show Message and Confirm Dialog Box - Swing Dialogs... use in
your swing applications, example of each type of dialog boxes are provided here.
When your run the program, it will display a frame with three buttons
JobStores
give to the scheduler: jobs, triggers, calendars, etc. The
important step... JobStore your scheduler should use (and it's configuration settings) in the properties... to produce your scheduler instance.
The JobStore is for behind-the-scenes use of Quartz
Download.jsp Urgent plz - JSP-Servlet
Download.jsp Urgent plz Respected Sir/Madam,
I am R.Ragavendran.. I got your coding for download.jsp.. A lot of Thanks... inserted after every upload very successfully..
I am sending the code for your kind
Connect JSP with mysql
you how to connect to
MySQL database from your JSP code. First, you need...
Connect JSP with mysql
... a database:
First create a database named usermaster in
mysql. Before running
Intranet Website creation - plz help me
Intranet Website creation - plz help me hi..
I have assign with the intranet website creation work..right from scratch to end..And honestly I know... for it ...its a corporate site and deadline is May end
Create your Email Server by PHP
as follows and it will ask
for your User name, real name, password, confirm...
Create your Email Server by PHP
Create your Email Server by PHP
(Learn to Create your Email Server.)
Create
PHP MySql Server Version
PHP MYSQL server version
Sometimes we need to check the version of MySql, PHP provides mysql_get_server_info() function to check the MySql version.
To check... these values with your MySql's username & respective password
if (!$link
JBBC ,MYSQL ND JAVA
saved in a mysql table... plz if u've any such programmes kindly help me...JBBC ,MYSQL ND JAVA import java.sql.*;
import...:mysql://localhost/ACCOUNT","root","ADMIN");
st=con.createStatement
Where is the top site to buy swtor credits in 2014 new year?
Where is the top site to buy swtor credits in 2014 new year? In 2014... is here to tell you that a 8% discount code is granted to support you to have your...-credits-were-not-your-obstruction-what-would-be-next-for-swtor-class-changes
check box realtive information in page store in database
check box realtive information in page store in database check box... create a checkbox in html and when i click on my check box realtive information in page store in database
plz help me | http://www.roseindia.net/tutorialhelp/comment/94568 | CC-MAIN-2014-49 | refinedweb | 2,957 | 65.62 |
Reading a text file
main.py -- put your code here!
print ("In main...")
import setup_wifi
print ("WiFi setup...")
import ReadFile
ReadFile.py is:
print ("In ReadFile...")
file = open('02_250.txt', 'r')
file.close()
Getting OSError Errno 2 ENOENT on file = open('02_250.txt', 'r')
I am returning to the SiPy after 6 months and I'm sure this worked back then.
@pmulvey said in Reading a text file:
02_250.txt
Got myself in a muddle having the file in the PC window of WinSCP and not on the device! Thanks for the reply. Oh, what are the special characters that signify a code block (black background) on the forum?
@pmulvey The error code just tells that the file does not exist. I made such a file as "/flash/02_250.txt", and successfully tried the three lines:
file = open('02_250.txt', 'r') content=file.readlines() file.close() | https://forum.pycom.io/topic/1879/reading-a-text-file | CC-MAIN-2019-13 | refinedweb | 146 | 89.14 |
> m trying to export a namespace from my file server (that is a
> namespace's bootes) to my terminal (logging as a client), i tried to
> do:
> servname% exportfs -a -r /tmp (from file server)
> but i have this error:
> exportfs: auth_proxy: auth_proxy write fd: inappropriate use of fd
> before doing this, i opened the listeners by doing:
> aux/listen tcp
why not use cpu to do this?
> Furthermore,from fileserver, i tried to export a namespace by doing:
> srvfs -d spy /tmp
> so it display the issue only on it; from terminal i can import
> services from filesever by doing:
> import -a servername '#s' /username/tmp
> but i can't see /srv/spy, i see all other things but not the last
> created.
namespaces are not lexically bound. i think you'll
find that srvfs is in a namespace that can't see the result
of the import. if instead you do the import before the
srvfs, i think you'll see the expected result.
- erik | http://fixunix.com/plan9/553612-exporting-namespace.html | CC-MAIN-2016-30 | refinedweb | 167 | 56.32 |
.
Yama: not so fast
Posted Aug 5, 2010 5:21 UTC (Thu) by thedevil (subscriber, #32913)
[Link]
What would you do in your day job, if after completing your task using approach A1, your boss told you that wouldn't do, use approach A2 instead; then, after some time passed, you completed approach A2, and the same boss suggested approach A1? I know what I would do: start sending out resumes that evening.
Dignity matters. Even to some hackers.
Posted Aug 5, 2010 8:51 UTC (Thu) by sgros (subscriber, #36440)
[Link]
Of course, it easier to do that with someone's else time, but in the same time it is better for everyone else and, in this case, for kernel.
Now, before someone starts flame war, my comment only presents one view and it's certainly not a general view applicable to all situations.
Posted Aug 5, 2010 9:27 UTC (Thu) by epa (subscriber, #39769)
[Link]
Do you really expect a typical LWN hacker to accept what he considers a technically wrong solution because of 'social' reasons, to keep a contributor happy? Perhaps they should; perhaps it would be better for the kernel in the long run; but there's no way it will happen.
Posted Aug 5, 2010 12:59 UTC (Thu) by NAR (subscriber, #1313)
[Link]
On the other hand the line "If you think the objection is about having things in fs/ you're smoking some really bad stuff." is really something that could leave bad taste in developers' mouth. Probably this is why it's meaning was missed/ignored.
Posted Aug 5, 2010 12:21 UTC (Thu) by Cyberax (✭ supporter ✭, #52523)
[Link]
The LSM folks need to stop inventing excuses and create a stackable LSM.
Posted Aug 5, 2010 15:34 UTC (Thu) by dpquigl (subscriber, #52852)
[Link]
As a side note I'm pretty sure stackable LSMs are going to be an administrative nightmare. People have a hard time as it is telling where the errors are when an LSM is involved and you want to add several ones that can potentially fail out at some point in the chain. Plus once you start having to allocate data structures you need to worry about unwinding the stack of allocations on failure. What happens if your free conditions can fail? What happens when you get past two LSMs which allow something to be freed but the 3rd doesn't?
So once again I invite you to take part in creating the framework you're looking for.
Posted Aug 5, 2010 16:10 UTC (Thu) by Cyberax (✭ supporter ✭, #52523)
[Link]
And their obsession with safety is laughable, frankly. Because it leads to the use of zero LSMs instead of 2. Sure, NSA might be smart enough to know how SELinux policies work. I certainly am not and so I want to combine several LSMs that I can actually understand. But I'd like to be able to run Yama and AppArmor, for example.
I don't quite understand the problem with your unwinding example. Certainly, if you're doing unwinding you should not be doing additional access checks. Think about PAM, for an example of stackable security modules. It works just fine.
As for doing this job myself - I don't have time to do it. But I'm really starting to think about sponsoring it.
Posted Aug 5, 2010 18:37 UTC (Thu) by dpquigl (subscriber, #52852)
[Link]
The problem with unwinding the allocation and deallocation stacks is mostly with deallocation. It means that you need to ensure that the entire stack can dealloc something before you start going down the stack otherwise you may have two layers removing their security information from an inode for example before it hits a layer that fails. Now you need to repopulate everything at that point above the stack. I'm not saying its not doable but that's just one tricky case out of many you'll run across.
Also another example would be what do we do with labeled protocols now like Labeled-IPSec and Labeled-NFSv4? What do I send across the wire when I support both smack and selinux labels? What if tomoyo wants to send the label on their process as well? Do we make the hook to get all relevant security information return this massive blob of data for a number of LSMs that might not be needed?
There are a lot of things that need to be thought about rather than just saying hey this should exist. I've payed attention to the stackable LSM conversations and no one has actually proposed a solution to this yet. Everyone just seems to say it needs to be done and walks away.
Posted Aug 5, 2010 18:53 UTC (Thu) by Cyberax (✭ supporter ✭, #52523)
[Link]
Ahh, sweet memories:
"The Tomoyo guys have said it would be nice to run Tomoyo and SELinux at the same time but I can't really see a good use case for doing that."
Look at Yama - it has some nice security features, I'd like to use them on my systems. However, I'd also like to use AppArmor.
Also, some time ago we used a small patch for Linux security subsystem to grant/revoke some privileges for some logged on users. If I rewrite this patch as a LSM, then I'd lose all abilities to use other LSMs!
I still don't understand why you need to do something LSM-related during deallocation of something. You're not performing a security-related action when you deallocate something on unwinding, so no need to use any LSM actions here.
Label interoperability is really a non-issue. If you're using ipsec-labels then most likely you are the NSA or the CIA. Because you need to have the same labels used EVERYWHERE on your network. So no AppArmor on one host, SMACK on another host and SELinux on the server. Or you need to have some sort of label translation layer, which can be integrated with stacked LSMs.
Posted Aug 5, 2010 19:20 UTC (Thu) by dpquigl (subscriber, #52852)
[Link]
I didn't realize one person consisted of "Most". You'll see I also referenced that exact email in a response to brad later. I said in that response that the kernel removes dead code all of the time. You had an entire framework for one in kernel user. The kernel doesn't cater to out of tree modules so LSM should be no different. This is no longer the case as we have multiple LSMs in tree now and honestly James's attempt to remove LSM is what accelerated Smack getting into the kernel. I believe Linus took Smack directly without going through the security subsystem maintainer.
"I still don't understand why you need to do something LSM-related during deallocation of something. You're not performing a security-related action when you deallocate something on unwinding, so no need to use any LSM actions here.". How does module A and B handle this? Do you just leave the resource unprotected under A and B and just remove it when C is finally able to free its security information?
Posted Aug 5, 2010 19:29 UTC (Thu) by Cyberax (✭ supporter ✭, #52523)
[Link]
."
Use 'two-phase commit'. First, ask each module if it vetoes the decision to delete the resource. If everything is OK, then call the actual 'free_resource' functions.
Posted Aug 5, 2010 16:18 UTC (Thu) by spender (subscriber, #23067)
[Link] ;)
Posted Aug 5, 2010 15:46 UTC (Thu) by spender (subscriber, #23067)
[Link]
Posted Aug 5, 2010 20:09 UTC (Thu) by Cyberax (✭ supporter ✭, #52523)
[Link]
For example, Linux containers with PID and network namespaces are a superior alternative to gresecurity 'anti-chroot-jailbreak' features and simple restriction of netstat to root user.
Though I agree, the kernel needs one coherent set of hook points that can be used to implement different kinds of security (MAC, RBAC).
Posted Aug 5, 2010 21:45 UTC (Thu) by spender (subscriber, #23067)
[Link]
Posted Aug 5, 2010 22:09 UTC (Thu) by Cyberax (✭ supporter ✭, #52523)
[Link]
My systems run a fair amount of legacy code, but I was able to migrate all of it to LXC. It's even feasible to isolate it even further using Xen/KVM now, because hardware is so damn cheap. So adding protections to 'chroot' just isn't worth it, IMO. Actually, it should be possible to reimplement chroot on top of containers.
Some features of grsecurity would better be generalized. For example, IP address tracking should be generalized to other types of metadata (what if I use IP-less protocols?).
Personally, I'd like to see some of features of grsecurity in the mainline kernel. Though I don't really care about a lot of other features...
Posted Aug 6, 2010 11:56 UTC (Fri) by nix (subscriber, #2304)
[Link]
So even when we provide patches to allow existing mechanisms to be used in pervasive things like glibc *and prove that they are useful*, they are still rejected. *sigh*
(That was the last time I was tempted to contribute to glibc at all. Life is too short to work with maintainers with attitudes like that. The stack-protection patch still works, if anyone wants a version against eglibc head.)
Posted Aug 9, 2010 21:09 UTC (Mon) by ilmari (subscriber, #14175)
[Link]
Posted Aug 10, 2010 5:58 UTC (Tue) by rahulsundaram (subscriber, #21946)
[Link]
Posted Aug 11, 2010 23:46 UTC (Wed) by nix (subscriber, #2304)
[Link]
Posted Aug 5, 2010 20:57 UTC (Thu) by MattPerry (guest, #46341)
[Link]
I think what needs to happen here is the people who advocate it being a LSM and the people who think it should be in individual subsystems should come to an agreement on what to do before Kees wastes any more time rewriting code.
Posted Aug 5, 2010 21:19 UTC (Thu) by kees (subscriber, #27264)
[Link]
All my time will be wasted trying to convince people that the changes have value. I'm still stunned that it's not obvious based on all the evidence.
Posted Aug 6, 2010 11:59 UTC (Fri) by nix (subscriber, #2304)
[Link]
Or something.
(No, I can't figure out what their rationale could be, either. I note that nobody has come up with a single case, even an academic one, which your /tmp-race-fixing restrictions would break. But it's apparently unacceptable anyway.)
Posted Aug 13, 2010 15:01 UTC (Fri) by renox (subscriber, #23785)
[Link]
I hope that you'll keep applying pressure and be able to participate to the right 'real life' kernel meetings to resolve things.
*: provided there are not too many people in the meeting and that the 'right' people are here.
Linux is a registered trademark of Linus Torvalds | http://lwn.net/Articles/398607/ | CC-MAIN-2013-20 | refinedweb | 1,817 | 68.81 |
:)
Log.h
#ifndef LOG_H #define LOG_H #include <fstream> #include <stdarg.h> #include <iostream> #include <pthread.h> #define INFO "Info :" #define WARNING "Warn :" #define ERROR "Error:" #define DEBUG "Debug:" using namespace std; class Log { public: Log (char* logFilename, bool debugOn = false); virtual ~Log (); void writeInfo (const char* logline, ...); void writeWarn (const char* logline, ...); void writeError (const char* logline, ...); void writeDebug (const char* logline, ...); private: pthread_mutex_t mLock; char* getTimeStamp (); ofstream mStream; bool mDebugOn; }; #endif
Log.cpp
#include "Log.h" #include <stdarg.h> Log::Log (char* logFilename, bool debugOn) : mDebugOn (debugOn) { pthread_mutex_init (&mLock, NULL); mStream.open (logFilename, fstream::app); } Log::~Log () { mStream.close (); pthread_mutex_destroy (&mLock); } void Log::writeInfo (const char* logline, ...) { pthread_mutex_lock (&mLock); va_list argList; char buffer [1024]; va_start (argList, logline); vsnprintf (buffer, 1024, logline, argList); va_end (argList); mStream << getTimeStamp () << " Info : " << buffer << endl; cout << getTimeStamp () << " Info : " << buffer << endl; pthread_mutex_unlock (&mLock); } void Log::writeError (const char* logline, ...) { pthread_mutex_lock (&mLock); va_list argList; char buffer [1024]; va_start (argList, logline); vsnprintf (buffer, 1024, logline, argList); va_end (argList); mStream << getTimeStamp() << " Error : " << buffer << endl; cout << getTimeStamp() << " Error : " << buffer << endl; pthread_mutex_unlock (&mLock); } char* Log::getTimeStamp () { char* tString = new char [80]; time_t t = time (0); struct tm* today = localtime (&t); strftime (tString, 80, "%d/%m/%Y %H:%M:%S", today); return tString; }
Using a mutex to make things “thread safe” is actually pretty easy. That is what they are designed to do. However, keep in mind, the purpose of a mutex is to protect a Resource, generally some sort of data storage element. You protect Operations (that is sections of executable code) by creating multiple instances of the code, not with a mutex. Usually, you want the Lock and Unlock as close together as possible, ideally just around the actual access to the resource. And, it is bad karma to call other functions within a locked block (be careful even calls to system functions…). For example, in your code, I would move the calls to getTimeStamp outside of the loecked region. Also, just make a single call. You don’t really want to have different times in the log and on the output. Be careful that you don’t use nested mutexes unless you know EXACTLY what you are doing. For example, this can happen when you put a lock around a call to an access function to get a value. If the access function also has a lock around the value (not necessarily the same mutex), you have created a nested lock. This will fail sometimes (or every time if you are lucky. Intermittent lock problems are a pain).
Remember, being inside of a locked region does not guarantee the code will not be interrupted! It just means that no other task using the same mutex can execute past where it tries to get the mutex. If low priority task “A” is in a locked region, and higher priority task “B” starts, A will block and B can run until it tries to get the mutex locked by A. Does A restart then? Maybe – but it is possible that task “C” may start first. Hopefully, C does not need a resource locked by B…
Hi,
Thanks for the reply - I guess I was expecting it too be alot more complicated! I see what you mean about the getTimeStamp, I thought since it was called at the same as the write* function it would need to be included in it, but I see now I was probably making pointless calls to it when I could get it in one go lol :P
I have also got some thing same in c++ i did a sample project using critical section but have never used mutexes can you give some help if i post code snippet here.
Thanks | http://www.daniweb.com/software-development/cpp/threads/410861/creating-thread-safe-logger | CC-MAIN-2014-15 | refinedweb | 615 | 62.17 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.