Constructing a User-Friendly JTree from a DOM
Now that you know what a DOM looks like internally, you'll be better prepared to modify a DOM or construct one from scratch. Before we go on to that, though, this section presents some modifications to the JTreeModel that let you produce a more user-friendly version of the JTree suitable for use in a GUI.
Note: In this section, we modify the Swing GUI to improve the display, culminating in DomEcho04.java. If you have no interest in the Swing details, you can skip ahead to Creating and Manipulating a DOM and use DomEcho04.java to proceed from there.
Compressing the Tree View
Displaying the DOM in tree form is all very well for experimenting and for learning how a DOM works. But it's not the kind of friendly display that most users want to see in a JTree. However, it turns out that very few modifications are needed to turn the TreeModel adapter into something that presents a user-friendly display. In this section, you'll make those modifications.
Note: The code discussed in this section is in DomEcho03.java. The file the program operates on is slideSample01.xml. (The browsable version is slideSample01-xml.html.)
Make the Operation Selectable
When you modify the adapter, you're going to compress the view of the DOM, eliminating all but the nodes you really want to display. Start by defining a boolean variable that controls whether you want the compressed or the uncompressed view of the DOM:

    public class DomEcho extends JPanel {
        static Document document;
        boolean compress = true;
        static final int windowHeight = 460;
        ...
Identify Tree Nodes
The next step is to identify the nodes you want to show up in the tree. To do that, add the following highlighted code:

    ...
    import org.w3c.dom.Document;
    import org.w3c.dom.DOMException;
    import org.w3c.dom.Node;

    public class DomEcho extends JPanel {
        ...
        public static void makeFrame() {
            ...
        }

        // An array of names for DOM node types
        static final String[] typeName = { ... };
        static final int ELEMENT_TYPE = Node.ELEMENT_NODE;

        // The list of elements to display in the tree
        static String[] treeElementNames = {
            "slideshow",
            "slide",
            "title",        // For slide show #1
            "slide-title",  // For slide show #10
            "item",
        };

        boolean treeElement(String elementName) {
            for (int i=0; i<treeElementNames.length; i++) {
                if ( elementName.equals(treeElementNames[i]) ) return true;
            }
            return false;
        }
This code sets up a constant you can use to identify the ELEMENT node type, declares the names of the elements you want in the tree, and creates a method that tells whether or not a given element name is a tree element. Because slideSample01.xml has title elements and because slideSample10.xml has slide-title elements, you set up the contents of this array so that it will work with either data file.
Note: The mechanism you are creating here depends on the fact that structure nodes like slideshow and slide never contain text, whereas text usually does appear in content nodes like item. Although those "content" nodes may contain subelements in slideSample10.xml, the DTD constrains those subelements to be XHTML nodes. Because they are XHTML nodes (an XML version of HTML that is constrained to be well formed), the entire substructure under an item node can be combined into a single string and displayed in the htmlPane that makes up the other half of the application window. In the second part of this section, you'll do that concatenation, displaying the text and XHTML as content in the htmlPane.
Although you could simply reference the node types defined in the class org.w3c.dom.Node, defining the ELEMENT_TYPE constant keeps the code a little more readable. Each node in the DOM has a name, a type, and (potentially) a list of subnodes. The functions that return these values are getNodeName(), getNodeType(), and getChildNodes(). Defining our own constants lets us write code like this:

    if (node.getNodeType() == ELEMENT_TYPE) ...

As a stylistic choice, the extra constants help us keep the reader (and ourselves!) clear about what we're doing. Here, it is fairly clear when we are dealing with a node object and when we are dealing with a type constant. Otherwise, it would be tempting to code something like
if (node == ELEMENT_NODE), which of course would not work at all.
Control Node Visibility
The next step is to modify the AdapterNode's childCount function so that it counts only tree element nodes--nodes that are designated as displayable in the JTree. Make the following highlighted modifications to do that:

    public class DomEcho extends JPanel {
        ...
        public class AdapterNode {
            ...
            public AdapterNode child(int searchIndex) {
                ...
            }
            public int childCount() {
                if (!compress) {
                    return domNode.getChildNodes().getLength();
                }
                int count = 0;
                for (int i=0; i<domNode.getChildNodes().getLength(); i++) {
                    org.w3c.dom.Node node = domNode.getChildNodes().item(i);
                    if (node.getNodeType() == ELEMENT_TYPE
                    &&  treeElement( node.getNodeName() )) {
                        ++count;
                    }
                }
                return count;
            }
        } // AdapterNode
The only tricky part about this code is checking to make sure that the node is an element node before comparing the node name. The DocType node makes that necessary, because it has the same name (slideshow) as the slideshow element.
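The collision is easy to see with a small standalone JAXP sketch (the class name and the inline DTD below are ours, purely for illustration): both direct children of the Document report the name "slideshow", and only the node type distinguishes the DocType node (type 10) from the root element (type 1).

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;
import org.xml.sax.InputSource;

public class DocTypeNameDemo {

    // Returns "name:type" pairs for the direct children of the Document node.
    static String describeChildren(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new InputSource(new StringReader(xml)));
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < doc.getChildNodes().getLength(); i++) {
                Node node = doc.getChildNodes().item(i);
                if (sb.length() > 0) sb.append(' ');
                sb.append(node.getNodeName()).append(':')
                  .append(node.getNodeType());
            }
            return sb.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // The DocType node and the root element both report "slideshow";
        // only the type field (DOCUMENT_TYPE_NODE vs. ELEMENT_NODE) differs.
        System.out.println(describeChildren(
            "<!DOCTYPE slideshow [<!ELEMENT slideshow ANY>]><slideshow/>"));
    }
}
```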
Control Child Access
Finally, you need to modify the AdapterNode's child function to return the Nth item from the list of displayable nodes, rather than the Nth item from the list of all nodes. Add the following highlighted code to do that:

    public class DomEcho extends JPanel {
        ...
        public class AdapterNode {
            ...
            public int index(AdapterNode child) {
                ...
            }
            public AdapterNode child(int searchIndex) {
                // Note: JTree index is zero-based.
                org.w3c.dom.Node node =
                    domNode.getChildNodes().item(searchIndex);
                if (compress) {
                    // Return Nth displayable node
                    int elementNodeIndex = 0;
                    for (int i=0; i<domNode.getChildNodes().getLength(); i++) {
                        node = domNode.getChildNodes().item(i);
                        if (node.getNodeType() == ELEMENT_TYPE
                        &&  treeElement( node.getNodeName() )
                        &&  elementNodeIndex++ == searchIndex) {
                            break;
                        }
                    }
                }
                return new AdapterNode(node);
            } // child
        } // AdapterNode
There's nothing special going on here. It's a slightly modified version of the same logic you used when returning the child count.
Check the Results
When you compile and run this version of the application on slideSample01.xml and then expand the nodes in the tree, you see the results shown in Figure 6-8. The only nodes remaining in the tree are the high-level "structure" nodes.
Figure 6-8 Tree View with a Collapsed Hierarchy
Extra Credit
The way the application stands now, the information that tells the application how to compress the tree for display is hardcoded. Here are some ways you can consider extending the application:
- Use a command-line argument: Whether you compress or don't compress the tree could be determined by a command-line argument rather than being a hardcoded Boolean variable. On the other hand, the list of elements that goes into the tree is still hardcoded, so maybe that option doesn't make much sense, unless...
- Read the treeElement list from a file: If you read the list of elements to include in the tree from an external file, that would make the whole application command-driven. That would be good. But wouldn't it be really nice to derive that information from the DTD or schema instead? So you might want to consider...
- Automatically build the list: Watch out, though! As things stand right now, there are no standard DTD parsers! If you use a DTD, then, you'll need to write your own parser to make sense out of its somewhat arcane syntax. You'll probably have better luck if you use a schema instead of a DTD. The nice thing about schemas is that they use XML syntax, so you can use an XML parser to read the schema in the same way you use it to read any other XML file.
As you analyze the schema, note that the
JTree-displayable structure nodes are those that have no text, whereas the content nodes may contain text and, optionally, XHTML subnodes. That distinction works for this example and will likely work for a large body of real world applications. It's easy to construct cases that will create a problem, though, so you'll have to be on the lookout for schema/DTD specifications that embed non-XHTML elements in text-capable nodes, and take the appropriate action.
Acting on Tree Selections
Now that the tree is being displayed properly, the next step is to concatenate the subtrees under selected nodes to display them in the
htmlPane. While you're at it, you'll use the concatenated text to put node-identifying information back in the
JTree.
Note: The code discussed in this section is in
DomEcho04.java.
Identify Node Types
When you concatenate the subnodes under an element, the processing you do depends on the type of node. So the first thing to do is to define constants for the remaining node types. Add the following highlighted code:

    public class DomEcho extends JPanel {
        ...
        // An array of names for DOM node types
        static final String[] typeName = { ... };

        static final int ELEMENT_TYPE =   Node.ELEMENT_NODE;
        static final int ATTR_TYPE =      Node.ATTRIBUTE_NODE;
        static final int TEXT_TYPE =      Node.TEXT_NODE;
        static final int CDATA_TYPE =     Node.CDATA_SECTION_NODE;
        static final int ENTITYREF_TYPE = Node.ENTITY_REFERENCE_NODE;
        static final int ENTITY_TYPE =    Node.ENTITY_NODE;
        static final int PROCINSTR_TYPE = Node.PROCESSING_INSTRUCTION_NODE;
        static final int COMMENT_TYPE =   Node.COMMENT_NODE;
        static final int DOCUMENT_TYPE =  Node.DOCUMENT_NODE;
        static final int DOCTYPE_TYPE =   Node.DOCUMENT_TYPE_NODE;
        static final int DOCFRAG_TYPE =   Node.DOCUMENT_FRAGMENT_NODE;
        static final int NOTATION_TYPE =  Node.NOTATION_NODE;
Concatenate Subnodes to Define Element Content
Next, you define the method that concatenates the text and subnodes for an element and returns it as the element's content. To define the content method, you'll add the following big chunk of highlighted code. (It is the last big chunk of code in the DOM tutorial.)

    public class DomEcho extends JPanel {
        ...
        public class AdapterNode {
            ...
            public String toString() {
                ...
            }
            public String content() {
                String s = "";
                org.w3c.dom.NodeList nodeList = domNode.getChildNodes();
                for (int i=0; i<nodeList.getLength(); i++) {
                    org.w3c.dom.Node node = nodeList.item(i);
                    int type = node.getNodeType();
                    AdapterNode adpNode = new AdapterNode(node);
                    if (type == ELEMENT_TYPE) {
                        // Skip subelements that are displayed in the tree
                        if ( treeElement(node.getNodeName()) ) continue;
                        s += "<" + node.getNodeName() + ">";
                        s += adpNode.content();
                        s += "</" + node.getNodeName() + ">";
                    } else if (type == TEXT_TYPE) {
                        s += node.getNodeValue();
                    } else if (type == ENTITYREF_TYPE) {
                        // The content is in the TEXT node under it
                        s += adpNode.content();
                    } else if (type == CDATA_TYPE) {
                        // Convert angle brackets and ampersands for display
                        StringBuffer sb = new StringBuffer( node.getNodeValue() );
                        for (int j=0; j<sb.length(); j++) {
                            if (sb.charAt(j) == '<') {
                                sb.setCharAt(j, '&');
                                sb.insert(j+1, "lt;");
                                j += 3;
                            } else if (sb.charAt(j) == '&') {
                                sb.setCharAt(j, '&');
                                sb.insert(j+1, "amp;");
                                j += 4;
                            }
                        }
                        s += "<pre>" + sb + "</pre>";
                    }
                }
                return s;
            }
            ...
        } // AdapterNode
Note: This code collapses EntityRef nodes, as inserted by the JAXP 1.1 parser that is included in the Java 1.4 platform. With JAXP 1.2, that portion of the code is not necessary, because entity references are converted to text nodes by the parser. Other parsers may insert such nodes, however, so including this code future-proofs your application, should you use a different parser in the future.
Although this code is not the most efficient that anyone ever wrote, it works and will do fine for our purposes. In this code, you are recognizing and dealing with the following data types:
Element
For elements with names such as the XHTML em node, you return the node's content sandwiched between the appropriate <em> and </em> tags. However, when processing the content for the slideshow element, for example, you don't include tags for the slide elements it contains; so when returning a node's content, you skip any subelements that are themselves displayed in the tree.
Text
No surprise here. For a text node, you simply return the node's
value.
Entity Reference
Unlike CDATA nodes, entity references can contain multiple subelements. So the strategy here is to return the concatenation of those subelements.
CDATA
As with a text node, you return the node's value. However, because the text in this case may contain angle brackets and ampersands, you need to convert them to a form that displays properly in an HTML pane. Unlike the XML CDATA tag, the HTML <pre> tag does not prevent the parsing of character-format tags, break tags, and the like. So you must convert left angle brackets (<) and ampersands (&) to get them to display properly.
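That conversion can be sketched as a standalone helper (the class and method names below are ours, not part of DomEcho); it mirrors the in-place StringBuffer edits performed inside the content method:

```java
public class EscapeDemo {

    // Convert '<' and '&' so the text displays literally in an HTML pane.
    static String escapeForHtml(String text) {
        StringBuffer sb = new StringBuffer(text);
        for (int j = 0; j < sb.length(); j++) {
            if (sb.charAt(j) == '<') {
                sb.setCharAt(j, '&');      // '<' becomes the start of "&lt;"
                sb.insert(j + 1, "lt;");
                j += 3;                    // skip past the inserted entity
            } else if (sb.charAt(j) == '&') {
                sb.setCharAt(j, '&');      // already '&'; keep it
                sb.insert(j + 1, "amp;");
                j += 4;
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(escapeForHtml("if (a < b && c < d)"));
        // -> if (a &lt; b &amp;&amp; c &lt; d)
    }
}
```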
On the other hand, there are quite a few node types you are not processing with the preceding code. It's worth a moment to examine them and understand why:
Attribute
These nodes do not appear in the DOM but are obtained by invoking getAttributes on element nodes.
Entity
These nodes also do not appear in the DOM. They are obtained by invoking getEntities on DocType nodes.
Processing Instruction
These nodes don't contain displayable data.
Comment
Ditto. Nothing you want to display here.
Document
This is the root node for the DOM. There's no data to display for that.
DocType
The DocType node contains the DTD specification, with or without external pointers. It appears only under the root node and has no data to display in the tree.
Document Fragment
This node is equivalent to a document node. It's a root node that the DOM specification intends for holding intermediate results during operations such as cut-and-paste. As with a document node, there's no data to display.
Notation
We're just ignoring this one. These nodes are used to include binary data in the DOM. As discussed earlier in Choosing Your Parser Implementation and Using the DTDHandler and EntityResolver, the MIME types (in conjunction with namespaces) make a better mechanism for that.
Display the Content in the JTree
With the content concatenation out of the way, only a few small programming steps remain. The first is to modify toString so that it uses the first line of the node's content for identifying information. Add the following highlighted code:

    public class DomEcho extends JPanel {
        ...
        public class AdapterNode {
            ...
            public String toString() {
                ...
                if (! nodeName.startsWith("#")) {
                    s += ": " + nodeName;
                }
                if (compress) {
                    String t = content().trim();
                    int x = t.indexOf("\n");
                    if (x >= 0) t = t.substring(0, x);
                    s += " " + t;
                    return s;
                }
                if (domNode.getNodeValue() != null) {
                    ...
                }
                return s;
            }
Wire the JTree to the JEditorPane
Returning now to the application's constructor, create a tree selection listener and use it to wire the JTree to the JEditorPane:

    public class DomEcho extends JPanel {
        ...
        public DomEcho() {
            ...
            // Build right-side view
            JEditorPane htmlPane = new JEditorPane("text/html","");
            htmlPane.setEditable(false);
            JScrollPane htmlView = new JScrollPane(htmlPane);
            htmlView.setPreferredSize(
                new Dimension( rightWidth, windowHeight ));

            tree.addTreeSelectionListener(
                new TreeSelectionListener() {
                    public void valueChanged(TreeSelectionEvent e) {
                        TreePath p = e.getNewLeadSelectionPath();
                        if (p != null) {
                            AdapterNode adpNode =
                                (AdapterNode) p.getLastPathComponent();
                            htmlPane.setText(adpNode.content());
                        }
                    }
                }
            );
Now, when a JTree node is selected, its contents are delivered to the htmlPane.
Note: The TreeSelectionListener in this example is created using an anonymous inner-class adapter. If you are programming for the 1.1 version of the platform, you'll need to define an external class for this purpose.
If you compile this version of the application, you'll discover immediately that htmlPane needs to be declared final to be referenced in an inner class, so add the following highlighted keyword:

    public DomEcho04() {
        ...
        // Build right-side view
        final JEditorPane htmlPane = new JEditorPane("text/html","");
        htmlPane.setEditable(false);
        JScrollPane htmlView = new JScrollPane(htmlPane);
        htmlView.setPreferredSize(
            new Dimension( rightWidth, windowHeight ));
Run the Application
When you compile the application and run it on slideSample10.xml (the browsable version is slideSample10-xml.html), you get a display like that shown in Figure 6-9. Expanding the hierarchy shows that the JTree now includes identifying text for a node whenever possible.
Figure 6-9 Collapsed Hierarchy Showing Text in Nodes
Selecting an item that includes XHTML subelements produces a display like that shown in Figure 6-10:
Figure 6-10 Node with <em> Tag Selected
Selecting a node that contains an entity reference causes the entity text to be included, as shown in Figure 6-11:
Figure 6-11 Node with Entity Reference Selected
Finally, selecting a node that includes a CDATA section produces results like those shown in Figure 6-12:
Figure 6-12 Node with CDATA Component Selected
Extra Credit
Now that you have the application working, here are some ways you might think about extending it in the future:
- Use title text to identify slides: Special-case the slide element so that the contents of the title node are used as the identifying text. When a slide is selected, convert the title node's contents to a centered H1 tag, and ignore the title element when constructing the tree.
- Convert item elements to lists: Remove item elements from the JTree and convert them to HTML lists using <ul>, <li>, and </ul> tags, including them in the slide's content when the slide is selected.
Handling Modifications
A full discussion of the mechanisms for modifying the
JTree's underlying data model is beyond the scope of this tutorial. However, a few words on the subject are in order.
Most importantly, note that if you allow the user to modify the structure by manipulating the
JTree, you must take the compression into account when you figure out where to apply the change. For example, if you are displaying text in the tree and the user modifies that, the changes would have to be applied to text subelements and perhaps would require a rearrangement of the XHTML subtree.
When you make those changes, you'll need to understand more about the interactions between a JTree, its TreeModel, and an underlying data model. That subject is covered in depth in the Swing Connection article "Understanding the TreeModel".
Finishing Up
You now understand what there is to know about the structure of a DOM, and you know how to adapt a DOM to create a user-friendly display in a JTree. It has taken quite a bit of coding, but in return you have obtained valuable tools for exposing a DOM's structure and a template for GUI applications. In the next section, you'll make a couple of minor modifications to the code that turn the application into a vehicle for experimentation, and then you'll experiment with building and manipulating a DOM.
public class CompeteLatch extends Object

A latch used to start a competition among threads. The thread that wins the competition (by a successful call to compete) does the actual job, while all loser threads block on the latch by calling await. After the winner thread finishes its job, it should call done, which will open the latch. All blocking loser threads can pass the latch at the same time.

See LPS-3744 for a sample use case.
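The intended usage pattern looks roughly like the following. Because the Liferay class itself may not be on the classpath, this sketch substitutes a minimal stand-in of our own, built on java.util.concurrent primitives, with the same compete/await/done surface:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicBoolean;

public class CompeteLatchSketch {

    // Minimal stand-in for CompeteLatch: one winner, everyone else waits.
    static class MiniCompeteLatch {
        private final AtomicBoolean taken = new AtomicBoolean();
        private final CountDownLatch latch = new CountDownLatch(1);

        boolean compete() { return taken.compareAndSet(false, true); }
        void await() throws InterruptedException { latch.await(); }
        boolean done() {
            if (latch.getCount() == 0) return false;  // already open
            latch.countDown();
            return true;
        }
        boolean isLocked() { return taken.get() && latch.getCount() > 0; }
    }

    public static void main(String[] args) throws Exception {
        MiniCompeteLatch latch = new MiniCompeteLatch();
        Runnable worker = () -> {
            if (latch.compete()) {
                // Winner: do the expensive job exactly once, then open the latch.
                System.out.println("winner does the job");
                latch.done();
            } else {
                // Loser: block until the winner finishes, then reuse the result.
                try {
                    latch.await();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                System.out.println("loser proceeds");
            }
        };
        Thread t1 = new Thread(worker), t2 = new Thread(worker);
        t1.start(); t2.start();
        t1.join(); t2.join();
    }
}
```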
Methods inherited from class java.lang.Object:
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
public CompeteLatch()
public void await() throws InterruptedException

Causes the current thread to wait until the latch has been opened, unless the thread is interrupted.

Throws:
InterruptedException - if the current thread is interrupted
public boolean await(long timeout, TimeUnit timeUnit) throws InterruptedException

Causes the current thread to wait until the latch has been opened, unless the thread is interrupted or the specified waiting time elapses.

Parameters:
timeout - the timeout value
timeUnit - the time unit

Returns:
true if the latch was open, false if the waiting time elapsed before the latch was opened

Throws:
InterruptedException - if the current thread is interrupted
public boolean compete()

Returns:
true if the current thread is the winner thread
public boolean done()

Called by the winner thread to indicate that its job is finished; this opens the latch, releasing all loser threads blocked on the await method. If a loser thread does call this method when a winner thread has locked the latch, the latch will break and the winner thread may be put into a non-thread-safe state. You should never have to do this except to get out of a deadlock. If no thread has locked the latch, then calling this method has no effect. This method will return immediately.

Returns:
true if this call opens the latch, false if the latch is already open
public boolean isLocked()

Returns true if the latch is locked. This method should not be used to test the latch before joining a competition, because it is not thread safe. The only purpose of this method is to give external systems a way to monitor the latch, usually for deadlock detection.

Returns:
true if the latch is locked; false otherwise
We would especially like to thank Rob Martell (Digital Renaissance) for his review and contributions to this specification.
This document is a submission to the World Wide Web Consortium (see Submission Request, W3C Staff Comment). It is intended for review and comment by W3C members.
This document is a NOTE made available by the W3 Consortium for discussion only. This indicates no endorsement of its content, nor that the Consortium has, is, or will be allocating any resources to the issues addressed by the NOTE.
This document presents Timed Interactive Multimedia Extensions for HTML (HTML+TIME). This is a proposal for adding timing and synchronization support to HTML. HTML+TIME builds upon the SMIL recommendation to extend SMIL concepts into HTML and web browsers. The current version is a result of collaboration and review among Microsoft, Macromedia, Compaq/Digital and Digital Renaissance. It is currently only a proposal and subject to change. It assumed that the reader is familiar with the ideas expressed in the W3C Recommendation: SMIL [SMIL].
The W3C has recently approved SMIL as a recommendation. SMIL introduces many valuable ideas, but has some limitations. In particular, SMIL is a data interchange format for media authoring tools and players - it does not include a means to apply the ideas to HTML and web browsers. This document describes a means of extending SMIL functionality into HTML and Web browsers. The proposal includes timing and interactivity extensions for HTML, as well as the addition of several new tags to support specific features described in the SMIL 1.0 spec. HTML+TIME also introduces a number of extensions to SMIL that are required for a reasonable level of flexibility and control. These extensions could easily be worked into the SMIL specification as well (indeed, some of the ideas have been discussed by the SYMM WG in the context of SMIL).
The layout capabilities described in the SMIL specification are subsumed by the CSS functionality standard in current browsers. Several other minor features are also standard in HTML, and are not duplicated here (but are documented in Appendix B).
Finally, an Object Model is described for HTML+TIME. The SMIL 1.0 specification did not include this, but given the tradition of HTML and the DOM, we feel this is a critical aspect of the specification.
A set of extensions are described to add additional timing, interaction and media delivery capabilities to HTML. These are modeled closely along the lines of SMIL, and attempt to reuse terminology wherever feasible. The timing and interaction support augment current script support for timers and DHTML.
Using the timing extensions, any HTML element can be set to appear at a given time, to last for a specified duration, and to repeat (i.e. loop). Simple timing is supported with a very simple syntax, but more complex timing constructs can also be described. Interactive timing is supported. The first section of this document describes how the timing support is designed, and how script writers use the timing extensions.
In order to easily integrate time-based media (movies, audio and animation content), a set of new media tags are introduced (again, based upon the SMIL 1.0 specification), and the associated integration with the timing model is documented.
Additional tags are described to support fine-grained control of synchronization and media-loading behavior. Also, the notion of temporal hyperlinks presented in SMIL is generalized to apply to HTML in general.
SMIL introduces some very powerful elements that support conditional delivery of content, specifically to support differing client platform multimedia capabilities and preference settings. These are included with minor changes in this proposal as well.
Finally, the object model for the time support is documented. Included is a discussion of support for media extensions and for extension behaviors that will take advantage of the timing support.
With the advent of CSS, the DOM and dynamic properties, it is possible for HTML to be a much more powerful medium. Designers can now begin to think of the web page not just as a static page of information, but as a dynamic, interactive presentation. Various tools exist to author animation based upon custom runtimes, and at least one (Macromedia Dreamweaver) supports animation based upon script and timers. Finally, the W3C Recommendation: SMIL [SMIL] specifies a means of describing media-rich timelines in a separate XML-based file.
None of these solutions provide a simple, standard means for HTML authors to easily add timing and interaction relationships to arbitrary HTML elements, and to coordinate these with time-based media. HTML+TIME will fill this need. It defines a simple and powerful standard for time containment within the document.
Timing support is based upon a simple model of adding attributes to HTML elements. HTML elements can be set to have a begin time and a duration. Additionally, elements can be made to repeat. We describe this as "decorating" the HTML with additional attributes. The important point is that HTML authors need not learn an entirely new syntax or document structure to add timing to pages. They simply add attributes to the elements that they want to be dynamic.
For more complex scenarios, authors can group timing into timelines. These timelines can then be controlled and timed as well. The structure is in some ways analogous to the DOM structure (cf. the <div> tag in particular), in that it defines a local time region. Two means of defining local timelines are provided: a new tag, and an attribute that can be applied to HTML container elements.
Note that the timing support augments the behavior of the elements only with respect to the time during which the element is active or visible. The rules for applying the timing are fairly simple:
For time-based media, a media player is controlled to start and stop the media when appropriate.
See the discussion of current Syntax Issues.
All HTML elements that are legal within the BODY and that represent content or style can support timing. Appendix A presents the list of HTML elements, and the classification of elements for the purposes of timing support.
For all HTML elements that support timing, the following simple timing attributes are supported:
The current element will begin when the referenced event is raised (plus any begin
value). If the referenced event is never raised, the current element may never be
active/displayed. If a negative begin (delay) value is used with this attribute, the
element will start when the event is raised, but will start the local timeline at an
offset from 0. See also the section on negative offsets in Usage
notes below.
If the named event is "none", this element will simply wait to be turned on (e.g. by script).
No more than one of beginWith, beginAfter or beginEvent should be specified.
Legal values include:
If the value of the "skip-content" attribute is "true", and one of
the cases above apply, the content of the element is ignored. If the value is
"false", the content of the element is processed.
The default value for "skip-content" is "true".
Reviewers - is this really necessary in the HTML/CSS context?
Clock values have the following syntax:
Signed-Clock-value ::= ("+" | "-" )? Clock-value ; default is "+" Clock-value ::= HMS-value | Timecount-value HMS-value ::= (Hours ":")? Minutes ":" Seconds ("." Fraction)? Timecount-value ::= Timecount ("." Fraction)? ("h" |"min" |"s" |"ms")? ; default is "s" Hours ::= DIGIT+ Minutes ::= 2DIGIT ; range from 00 to 59 Seconds ::= 2DIGIT ; range from 00 to 59 Fraction ::= DIGIT+ Timecount ::= DIGIT+ 2DIGIT ::= DIGIT DIGIT DIGIT ::= [0-9]
The minutes and seconds fields in an HMS-value are constrained to the range 00 to 59; leading zeros must be specified for terms between ":" and "." delimiters. Hours can be any integer value, and need not have leading zeros. The fractional seconds in both HMS-values and Seconds-values can have arbitrary precision, but nothing greater than millisecond accuracy is guaranteed.
The following are examples of legal clock values:
- Clock values:
02:30:03 = 2 hours, 30 minutes and 3 seconds
2:33 = 2 minutes and 33 seconds
- Timecount values:
3h = 3 hours
45min = 45 minutes
30s = 30 seconds
5ms = 5 milliseconds
- Signed clock values:
+2:10 = plus 2 minutes and 10 seconds
+300ms = plus 300 milliseconds
-10.3 = minus 10.3 seconds (10,300 milliseconds)
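A sketch of how a processor might convert these values to signed milliseconds (this parser is illustrative only, not part of the proposal, and skips error handling):

```java
public class ClockValue {

    // Parse a SMIL-style clock value ("02:30:03", "2:33", "45min", "-10.3",
    // "+300ms") into signed milliseconds.
    static long toMillis(String value) {
        value = value.trim();
        long sign = 1;
        if (value.startsWith("+")) {
            value = value.substring(1);
        } else if (value.startsWith("-")) {
            sign = -1;
            value = value.substring(1);
        }

        if (value.indexOf(':') >= 0) {                // HMS-value
            String[] parts = value.split(":");
            double seconds = Double.parseDouble(parts[parts.length - 1]);
            long minutes = Long.parseLong(parts[parts.length - 2]);
            long hours = parts.length == 3 ? Long.parseLong(parts[0]) : 0;
            return sign * Math.round((hours * 3600 + minutes * 60 + seconds) * 1000);
        }

        // Timecount-value with optional unit; the default unit is seconds.
        // Check "ms" before "s", since "300ms" also ends in "s".
        double scale = 1000;
        if (value.endsWith("ms")) {
            scale = 1;         value = value.substring(0, value.length() - 2);
        } else if (value.endsWith("min")) {
            scale = 60_000;    value = value.substring(0, value.length() - 3);
        } else if (value.endsWith("h")) {
            scale = 3_600_000; value = value.substring(0, value.length() - 1);
        } else if (value.endsWith("s")) {
            value = value.substring(0, value.length() - 1);
        }
        return sign * Math.round(Double.parseDouble(value) * scale);
    }

    public static void main(String[] args) {
        System.out.println(toMillis("02:30:03")); // 9003000
        System.out.println(toMillis("2:33"));     // 153000
        System.out.println(toMillis("-10.3"));    // -10300
        System.out.println(toMillis("+300ms"));   // 300
    }
}
```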
We can add attributes to any object in HTML to add timing. By default, all timed elements are relative to a document root context, and so exist in a single time scope for the page. I.e. no nested timing is required/defined in the examples cited. Advanced timing support allows for more powerful constructs when the authors needs them. Nevertheless, scripters just beginning to use TIME do not have to understand anything about a timing structure or hierarchy to do simple things. Also, all the offsets are in the same timespace (in the simple, default case), making it easy to align elements in time that are laid out all over the page.
Example: Making paragraphs appear over time:
...
<p t:begin="1">
    This is a paragraph of text
    that appears after one second
</p>
<p t:begin="2">
    This is a paragraph of text
    that appears after two seconds
</p>
<p t:begin="3">
    This is a paragraph of text
    that appears after three seconds
</p>
...
In order to support greater flexibility in terms of relative timing, we make use of the duration and time base parameters.
Note to reviewers:
Many multimedia runtimes support complete hierarchical relative timing, tied to a scene graph or some equivalent. This makes sense, as the containment/lexical hierarchy is explicit with a scene graph. However, most HTML authors do not work with a user model of HTML that includes a containment/lexical hierarchy based upon the actual DOM. Tying time containment to DOM containment would be confusing at best. As such, we will support default timing as described above, plus explicit time containers and referential time bases (i.e. timing relative to another named element). HTML authors should easily accept referential timing, as IDs are used very commonly in scripting.
Example: Defining relative timing among elements:
...
<p id="P1" t:dur="5">
    This is some text that appears immediately
    and remains visible for 5 seconds.
</p>
<img src="image.gif" t:beginWith="P1" t:begin="0.5">
<!-- This is an image that appears just after the
     first paragraph appears (i.e. at 0.5 seconds)
     and remains visible indefinitely.
     Note the support for fractional seconds. -->
<p t:beginAfter="P1" t:begin="-0.5">
    This is some text that appears just before the
    first paragraph goes away (i.e. at 4.5 seconds)
    and remains visible indefinitely.
    Note the support for negative offsets.
</p>
...
Users can make any given element or timeline play repeatedly (i.e. loop), using the repeat attributes. Authors can specify either a number of times to play the simple duration, or a total time for which to repeat the element timeline.
...
<t:audio src="..." t:repeat="indefinite" />
<t:animation id="intro" src="..." t:repeatDur="2min" />
<t:animation src="..." t:beginAfter="intro" t:repeat="indefinite" />
...

The audio is set to repeat indefinitely. The intro animation will repeat for 2 minutes, and then stop. The second animation will then begin, and will repeat indefinitely. Note that the repeat controls can also be combined with the sequence element and timelines, described below.
The timeAction attribute provides a flexible means of defining what it means for an element to be active and not on a timeline. For the purposes of control in HTML+TIME, all HTML elements are grouped into categories (see also Appendix A). By default, all elements categorized as content will be controlled with the visibility property. That is, before the element is active on the timeline (before its defined begin time), the element visibility will be set to "hidden". While the element is active on the timeline (from the defined begin until the defined end), the visibility property will be set to "visible". Again, after the end time, the visibility property will be set to "hidden". All style elements will be controlled by removing the effect of the element intrinsic behavior. It would be nice if all style elements supported an "on" property to control easily and in a well-document manner.
Examples of default timeAction usage:
...
<span t:begin="10" t:dur="20">
   This is some text that appears after ten seconds and remains visible
   for 20 seconds.
</span>
...
<b t:begin="..." t:dur="10">
   This is some text that will appear normally at first, then be displayed
   bold for 10 seconds, and then revert to normal display again.
</b>
...
<v:oval t:begin="..." t:dur="..."/>
...
The above example shows the default behavior for timeActions. The span will be hidden when inactive. The bold element will be visible, but not bold, when inactive. The VML extension element for an oval will be hidden, as it supports a visibility property.
Example using display value:
...
<span t:begin="10" t:dur="20" t:timeAction="display">
   This is some text that appears after ten seconds, and remains visible
   for 20 seconds. When it becomes visible and again when it is hidden,
   the document will reflow.
</span>
...
The display value is useful when the author wants the document to reflow over time. This is useful, e.g. for image sequences, where only the active image should affect the layout of the document.
Example using style value:
...
<span style="text-decoration:line-through; color:red"
      t:begin="..." t:dur="10" t:timeAction="style">
   This is some text that will appear normally at first, then be displayed
   in red strikethrough for 10 seconds, and then revert to normal display
   again.
</span>
...
The style value makes it easier to control a complex set of styles over time. Any style control that can be defined using the inline style attribute can be animated over time using this timeAction setting.
Example using onOff value:
...
<v:oval t:begin="0" t:dur="10">
   <v:fill t:begin="3" t:timeAction="onOff"/>
   ...
</v:oval>
...
This example shows the use of the onOff value to control the intrinsic behavior of an extension style - in this case a fill element for an oval in VML. The oval will appear unfilled for 3 seconds and then the fill will be applied until the oval is hidden at 10 seconds.
The simple time attributes provide a very easy-to-use mechanism to add simple timing to a page. A good deal of animation is supported just via this simple syntax. Nevertheless, there will be cases in which an author wants to build up more complex timing structures, and to manipulate them easily. HTML+TIME provides a new par attribute to structure timed elements. This introduces a local, nested timeline that can be manipulated independently of the document (or parent) timing. The naming comes from SMIL, and is short for "parallel". There is a potential naming conflict with the notion of "paragraphs". If need be, the token can be renamed "parallel" or perhaps "timeline".
An additional <par> tag could also be introduced, but this is syntactic sugar for <span t:timeline="par">. Reviewers?
An example use might be to set up a block of paragraphs with declared timing, which the author wishes to manipulate as an independent segment of animation (i.e. a relative timeline) within the document. The par attribute defines a relative timeline which can be manipulated as a unit, moved in time, looped, cut and pasted, etc.
Timeline Attribute Syntax
An alternate syntax may be used, which is equivalent in terms of the time behavior. This uses a new element to define a local timeline. Reviewers: is this necessary? It more closely follows SMIL syntax, but does not fit into HTML as cleanly.
Timeline Element Attributes
Timeline containers can be timed in the same manner as any other element. An offset value and/or a time base will offset the entire local context, and shift the time of everything within the timeline scope (except elements timed to events out of the timeline scope).
Examples
<span t:timeline="par" t:dur="10">
   <!-- This begins right away, and lasts for 10 seconds -->
   <p> This is some text that appears immediately </p>
   <p t:begin="2"> This is some text that appears after two seconds </p>
   <p t:begin="3"> This is some text that appears after three seconds </p>
</span>
<div t:timeline="par" t:begin="10.2">
   <!-- This begins slightly after the first chunk is done, at 10.2 seconds -->
   <p> This is some much more exciting text that appears right away on the
       new timeline, which does not begin until the first big timeline is
       done. It should be 10.2 seconds into the page display before you see
       this... </p>
   <p t:begin="1"> This is some more exciting text that appears one second
       into the second timeline, which should be 11.2 seconds after the page
       shows up. Just imagine doing a slide show with this stuff... </p>
</div>
The case sometimes arises that authors want to have a series of (e.g.) images appear. This can be accomplished with the time attributes described above, but a very simple declarative syntax is indicated to support this specific case. The <t:seq> tag is provided for this purpose. Note that this element can be used for other cases as well as the simple sequence, but this is not recommended. A sequence is not a good general purpose means of declaring timing structure when the document is being edited - changing from a sequence declared with the <t:seq> tag to another timing relationship requires much more work than changing timing attributes associated with individual elements.
Note to reviewers: Should this instead be presented as an attribute to containers, a la "timeline"? If the goal is to support novices, an explicit element may be easier for them to use.
Sequence Element Syntax
Sequence Element Attributes
The sequence element can be timed in the same manner as any other element. An offset value and/or a time base will offset the entire local context, and shift the time of everything within the local timeline scope.
Contained (child) tag Attributes
Children of a sequence element can take most of the time attributes, excepting the time base. Reviewers: it appears that it is legal in SMIL to allow children of a seq element to repeat, and potentially to have indefinite duration. How much flexibility should we allow? E.g. should we allow endEvent specification? In addition, it may make sense for the default syncBehavior for all sequence children to be locked, so that sequences hold together.
Example: A slide show of images:
...
<div width="200" height="200">
   <t:seq t:repeat="indefinite">
      <!-- This will sequence the three images, repeating indefinitely. Any
           timebase parameters will be ignored, as the time base is implicit
           via the sequence block. Offsets are legal. Durations are
           recommended, as the default is to remain visible indefinitely,
           which means that nothing after that will ever show up. -->
      <img src="image1.gif" t:dur="..."/>
      <img src="image2.gif" t:dur="..."/>
      <img src="image3.gif" t:dur="..."/>
   </t:seq>
</div>
...
Note that this ignores all aspects of layout. It is up to the HTML author to describe the desired layout (e.g. laid out left to right, stacked with absolute positions, etc.). With the timeAction attribute set to "display", the document will reflow over time.
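The begin-time resolution performed by a sequence container can be sketched in script: each child starts when the previous one ends, plus any begin offset of its own. The `resolveSequence` helper below is hypothetical, not part of any specified API:

```javascript
// Hypothetical sketch of <t:seq> timing resolution: children play back to
// back, with an optional begin offset added on top of the accumulated time.
function resolveSequence(children) {
  let t = 0;
  return children.map(({ dur, begin = 0 }) => {
    const start = t + begin;
    t = start + dur; // the next child begins when this one ends
    return start;
  });
}

// Three images shown for 5 seconds each, back to back:
console.log(resolveSequence([{ dur: 5 }, { dur: 5 }, { dur: 5 }])); // [0, 5, 10]
// An offset on the second child delays it (and everything after it):
console.log(resolveSequence([{ dur: 2 }, { dur: 3, begin: 1 }]));   // [0, 3]
```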
Example: A sequence of styles:
...
<p>
   <t:seq t:repeat="indefinite">
      <!-- This will sequence the three styles, repeating indefinitely. -->
      <FONT color="red" t:dur="...">
      <FONT color="green" t:dur="...">
      <FONT color="blue" t:dur="...">
         Here is some text that will get a really gaudy color treatment.
      </FONT>
      </FONT>
      </FONT>
   </t:seq>
</p>
...
Multimedia without interaction is just a movie. It must be possible for the author to describe interactive responses to user actions, and to define timing variants that support interaction. In this timing model, interactive timing is just a variant in which the begin time is indeterminate. An element (or an entire timeline container) that should begin in response to some user input is simply defined with a beginEvent. When the element is not tied to a specific event (e.g. a particular button click or a stream trigger), but rather will be started by script on the page, the element can be defined with '...beginEvent="none"...' . Such an element can be started from script using a simple, familiar syntax.
Exposed action methods
In order to support interactive control of a timed element, the following methods are exposed to script:
Example script syntax
Authors can simply use the familiar script events like onclick, onmouseover, onmouseout, etc. to define actions on timed HTML elements. The event handler implementation simply references the timed element by id, and then calls one of the action methods exposed by the element. Note that several possible script solutions are described, just for documentation.
Note that the final image is set to begin when the slideshow is complete. It is possible to set up a timeline of actions that chain off an interactive begin. The timing for dependent elements is computed when the head of the timing dependency chain is turned on with a trigger.
<div height=200 width=300>
   <t:seq id="slideshow" t:beginEvent="none">
      <!-- This begins when a trigger is sent. This will sequence the three
           images. If the user clicks on an image before the assigned
           duration, it will advance to the next image. -->
      <img src="image1.gif" t:dur="5" t:endEvent="onclick"/>
      <img src="image2.gif" t:dur="5" t:endEvent="onclick"/>
      <!-- This uses endEvent syntax. Note that 'dur' will override if there
           is no click by 5 seconds -->
      <img src="image3.gif" t:dur="5" t:endEvent="onclick"/>
   </t:seq>
   <img src="showOver.gif" t:beginAfter="slideshow"/>
   <p align=center onclick="slideshow.beginElement()">
      Click here to begin the slideshow.
   </p>
   <p align=center>
      If it advances too slowly for you, just click on an image to advance
      it interactively.
   </p>
</div>
SMIL introduces the notion of temporal hyperlinks. Rather than create a new element to handle this, we add capabilities to the timing model to support the same functionality. The implementation must catch navigation events (i.e. docReady events as well as changes to the hash property), and then advance the root timeline to the start time of the hash element. If the hash element itself is not timed, the element parents are traversed up to the document body to find a timed element. If the hash element is not contained in any timed block, then the document timeline begins normally. If the hash element is set to begin interactively (with beginEvent="none"), the element is turned on as though it had been triggered and the document timeline plays normally.
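The ancestor traversal described above can be sketched as follows. The node objects and the `findSeekTarget` helper are hypothetical stand-ins for the real DOM; the sketch only illustrates the walk from the hash element up toward the body:

```javascript
// Hypothetical sketch: given the hash target, walk up the parent chain
// until a timed element is found. If none is found before the document
// body, the document timeline simply begins normally.
function findSeekTarget(node) {
  for (let n = node; n; n = n.parent) {
    if (n.timed) return n;       // seek the root timeline to this element's start
    if (n.tag === "body") break; // reached the body without finding timing
  }
  return null;                   // no timed block: play the document normally
}

const body = { tag: "body" };
const timedDiv = { tag: "div", timed: true, parent: body };
const span = { tag: "span", parent: timedDiv };

console.log(findSeekTarget(span) === timedDiv); // true: nearest timed ancestor
console.log(findSeekTarget(body));              // null
```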
Timed Hyperlink control Attribute Syntax
In some instances, authors will want to preclude jumping into the middle of a timeline. One example would be an advertisement before a presentation; the author may not want the end-user to be able to skip the ad. A new attribute is supported on the body tag to control this behavior, allowing the author to enable or disable the support:
The nice thing about this approach is that it requires no change to the link tag, and it can work cleanly even if the link is coming from a page that has no timing defined.
The timing syntax primarily addresses relationships among elements on a page. However, there is also a need to define, and to be able to control, the start of overall document time. In the simple case, it will be acceptable to start document time when the document is loaded. However, particularly for long HTML documents it may be unacceptable to defer the document timeline until the document has completely loaded. Rendering of the first screenful is performed as soon as possible in most browsers, and authors will require that time can be started to run animations, etc. near the top of the page.
The model presented here provides simple controls for authors to control the start of document time. By default, document time begins when the document is fully loaded. This covers many cases, and simplifies the model for novice authors. Additional settings cause document time to begin either immediately (as soon as possible), or when the document is complete (the document and all associated media and objects have been loaded). Finally, there is an advanced option for authors to specify the point in the document at which time should begin.
Need some examples of usage, and in particular some warnings and examples of the trouble authors can cause themselves by starting time before document.onLoad.
Add attribute to doc root (on body tag) to specify alternatives: onDocLoad (default), immediate, onDocComplete, onStartTag.
A new attribute is supported to specify the rule for when the document root timeline starts. This attribute is only legal on the body element.
Note that if document time begins before the document has fully loaded, the author must define all timing relationships such that the timing relationships are legal when document time begins, and as the rest of the document is parsed. This means that all timing references to other timed elements (e.g. using beginWith and beginAfter) must refer to elements that are defined earlier in the document. I.e. authors may not use forward references if document time begins before the document is loaded.
A new (XML) tag is supported to control when document time starts.
Time Scope
For time attributes that reference another timed element (e.g. beginWith, beginAfter), the referenced element must be timed (i.e. it must specify one of the TIME attributes or be a TIME element) and it must be in the current timing scope. That is, the referenced element must be within the HTML subtree defined by the closest parent time container of the current tag, and must have the same parent time container (i.e. it must be a time-sibling).
In the absence of timeline attributed containers (or timeline tags), this will be the document root and all references will be in the same scope. If a timed element using a reference is within the scope of a timeline container, the scope is the local timeline block. It is illegal to reference any timed element outside of this scope. This constraint is imposed to preclude ambiguous and potentially confusing time dependency graphs.
Negative Offsets
The model described by HTML+TIME explicitly allows negative offsets. The common use of these is in something like a sequence, where one object should appear just before another completes. In this common case, the final computed begin time is still positive. Nevertheless, there are situations in which a negative computed begin time can obtain; this is not considered illegal.
When the computed start time for an element is negative relative to the timeline container, the element is started with the parent (it can never appear or have influence before the time container does). However, the sync relationship of the local timeline to the parent is offset: the local timeline for the element is defined to begin before it actually appears, and so it effectively begins somewhere in the middle of its timeline. This can be useful in situations where an element is set to repeat, and the author wants the first repeat iteration to begin in the middle (repeating motion-paths, scrolling, etc. are sometimes authored this way).
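A sketch of this clamping rule, using a hypothetical `clampNegativeBegin` helper: a non-negative computed begin is used as-is, while a negative one starts with the parent but offsets the local timeline into its middle:

```javascript
// Hypothetical sketch of the negative-offset rule above: an element whose
// computed begin is negative relative to its time container starts with
// the container, but its local timeline is already partway through.
function clampNegativeBegin(computedBegin) {
  if (computedBegin >= 0) {
    return { start: computedBegin, localOffset: 0 };
  }
  // Starts with the parent, but effectively mid-way through its timeline.
  return { start: 0, localOffset: -computedBegin };
}

console.log(clampNegativeBegin(4.5)); // { start: 4.5, localOffset: 0 }
console.log(clampNegativeBegin(-2));  // { start: 0, localOffset: 2 }
```

The second case models a repeating element authored to begin "in the middle" of its first iteration, as described above.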
Invalid timing definition
It is possible to describe timing relationships between or among elements that are invalid. Typically, the timing is invalid because it creates circular time-dependency references. For example, if two elements are defined to begin with one another (using either beginWith or beginEvent syntax), this is invalid. Invalid timing can result through chained combinations of begin and end timing specification. Any combination that produces a circular reference is illegal.
When an invalid timing specification is detected by the implementation, an error will be generated, and one or all of the elements involved will revert to default timing.
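The circular-reference check can be sketched as cycle detection over the time-dependency graph. The `hasCircularTiming` helper below is illustrative only, and assumes for simplicity that each element's timing references at most one other element:

```javascript
// Hypothetical sketch of validating begin/end references: timing is
// invalid when the dependency graph contains a cycle (e.g. two elements
// each defined to begin with the other).
function hasCircularTiming(deps) {
  // deps maps an element id to the id of the element its timing references.
  const seen = new Set();
  for (const start of Object.keys(deps)) {
    const path = new Set();
    for (let id = start; id !== undefined; id = deps[id]) {
      if (path.has(id)) return true; // came back around: circular reference
      if (seen.has(id)) break;       // this chain was already validated
      path.add(id);
      seen.add(id);
    }
  }
  return false;
}

console.log(hasCircularTiming({ a: "b", b: "a" })); // true  (mutual beginWith)
console.log(hasCircularTiming({ a: "b", b: "c" })); // false (a simple chain)
```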
If both a duration and any end value (including clip-end) are specified for any element, the effective duration will be the minimum of the specified duration attribute and the computed duration for the specified end attribute.
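This minimum rule can be sketched with a hypothetical helper, assuming begin, dur and end have already been resolved to seconds:

```javascript
// Hypothetical sketch of the rule above: when both a duration and an end
// value are specified, the shorter of the two wins.
function effectiveDuration(begin, dur, end) {
  if (dur == null) return end - begin;   // only an end value given
  if (end == null) return dur;           // only a duration given
  return Math.min(dur, end - begin);     // both given: minimum applies
}

console.log(effectiveDuration(0, 10, 8)); // 8: the end attribute cuts the duration short
console.log(effectiveDuration(2, 3, 8));  // 3: the duration ends before the end time
```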
In order to coordinate HTML and time-based media elements on a common timeline, HTML+TIME introduces new tags to easily integrate time-based media. The new media tags will simplify the declaration of time-based media elements over the traditional methods of declaring various plug-ins or other embedded player objects. It will also manage the exchange of simple timing and control information between HTML+TIME timing implementation and the media players.
The tags are defined as XML extensions in the new TIME namespace. They will be ignored by down-level browsers.
Note that the definition of time-based media is not restricted to simple media like audio and video. Support is also intended for animation media, including extension players for existing popular animation formats. HTML+TIME would not render these, but would simply coordinate the associated players in the HTML time context.
SMIL introduced a set of new tags for a variety of media types. Currently, the individual tags have no semantic significance over the catch-all <ref> tag - the semantics are really tied to the MIME type from the server or specified as an attribute. However, the use of individual media type tags allows for future extensions such as specific attributes appropriate to individual media types (e.g. audio level). We would prefer to use "media" in place of "ref", but we can adopt the SMIL naming if desired.
The media tags take a src attribute to specify the source-media URL, and an optional specifier for the MIME/media type (the MIME type provided by the server is used by default). The implementation will associate the type with an appropriate player, and manage the instantiation of a player. Implementations may handle this e.g. by injecting HTML for an EMBED tag, or it may be handled more deeply.
There is also a player attribute that supports a reference to EMBED or OBJECT element from the media tags. This allows authors to use the traditional means of declaring the player, in particular to support all the specific attributes and controls of the respective players. The referenced element must support the media player interface described in the object model below. This also supports integration of third-party media players with the HTML+TIME model.
Note that the declaration of individual media elements with associated time syntax does not preclude the implementation from associating a group of media elements with a single player instance. It may be desirable from a performance standpoint to combine the media elements into a single grouped-element (e.g. creating a temporary playlist for the associated player). However, this can have other drawbacks, including a loss of interactive control over individual elements. The noCombine attribute provides author control over this.
Media players must support a basic set of controls to integrate with HTML+TIME. The implementation or the media tags must present an interface to the media players (e.g. with a wrapper). The details of the control are provided in the object model discussion below.
Note that if no dur attribute is specified, the media wrapper will set the duration property of the element (it is legal to set the duration to indefinite). This will have the side-effect that end-time-dependents (other elements defined with beginAfter referencing this node) will have a defined begin time.
Need description of integration of stream-based events with TIME model. Should require minimal Object Model support, as events fit in like all other events. Need to discuss recommended support. Reviewers?
Stream-based events should be raised by the player object, and can be referenced as "object_id.event_name". When the player object is implicit (e.g. using a video tag), the associated media element should raise the events. This allows script and HTML+TIME event specifications to respond to all media and server generated events.
Media Element Syntax
The tags described here parallel the tags defined in SMIL 1.0 [SMIL]. The only difference is the media tag which replaces the generic ref tag in SMIL 1.0, and the omission of the SMIL text tag, which is subsumed by simple HTML tags for text. Reviewers: what was the intended use of ref, and the naming used? Unlike SMIL, there is no need for the abstract region reference. There is a need for reference to a player object, but this is supported as an attribute on all types.
The media element takes the base HTML attributes appropriate to any div tag. The following are interpreted by the wrapper:
Clip-time-value ::= [Metric "="] ( Clock-val | Index-val | Smpte-val | timeID-val )
Metric          ::= Smpte-type | "clock" | "index"
Smpte-type      ::= "smpte" | "smpte-30-drop" | "smpte-25"
Smpte-val       ::= Hours ":" Minutes ":" Seconds [ ":" Frames [ "." Subframes ]]
Hours           ::= DIGIT+
Minutes         ::= 2DIGIT   ; range from 00 to 59
Seconds         ::= 2DIGIT   ; range from 00 to 59
Fraction        ::= DIGIT+
Frames          ::= 2DIGIT
Subframes       ::= 2DIGIT
Index-val       ::= DIGIT+
timeID-val      ::= [legal HTML id]
2DIGIT          ::= DIGIT DIGIT
DIGIT           ::= [0-9]
The value of this attribute consists of an optional metric specifier (defaulting to "clock"), followed by a time value in the format defined by that metric.
Unlike the strictest SMPTE format, TIME will allow for more than two digits in the hours field. This is specifically to accommodate long format material.
Examples:
clip-begin="smpte=10:12:33:20"
clip-begin="smpte=102:12:33"
clip-begin="123.45"
clip-begin="12:05:35.3"
clip-begin="142"
clip-begin="Bill talks about NT"
clip-end="Bill talks about simplicity"
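A sketch of parsing these values follows, covering only the clock and marker (timeID) cases and leaving SMPTE and index values unparsed. The `parseClipValue` helper is illustrative, not part of the specification:

```javascript
// Hypothetical parser for the clip-begin/clip-end value grammar above.
// It splits off the optional metric (defaulting to "clock") and, for
// clock values, converts [hh:[mm:]]ss into seconds.
function parseClipValue(value) {
  const eq = value.indexOf("=");
  const metric = eq >= 0 ? value.slice(0, eq) : "clock";
  const body = eq >= 0 ? value.slice(eq + 1) : value;
  if (metric === "clock") {
    if (!/^[\d:.]+$/.test(body)) {
      // No metric and not a clock value: treat as a timeID marker.
      return { metric: "timeID", value: body };
    }
    const parts = body.split(":").map(Number);
    return { metric, seconds: parts.reduce((acc, p) => acc * 60 + p, 0) };
  }
  return { metric, value: body }; // smpte / index values kept unparsed here
}

console.log(parseClipValue("123.45"));                     // { metric: 'clock', seconds: 123.45 }
console.log(parseClipValue("12:05:35").seconds);           // 43535
console.log(parseClipValue("smpte=10:12:33:20").metric);   // smpte
console.log(parseClipValue("Bill talks about NT").metric); // timeID
```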
SMIL references a textstream media element, but does not in any way define or describe it. Simple cases could be handled with HTML using timing markup. Nevertheless, having a separate media type has advantages when streaming a large amount of content - e.g. captions to a lengthy audio presentation.
The specification of a "textstream" must be formalized before it can be widely supported by media players.
To integrate timeline media and animation into the page, authors must have control over the synchronization behavior of the page. In addition, authors need to be able to define how time behaves relative to the initial loading or cueing of media. HTML+TIME unifies support for asynchronous media loading and dynamic resynchronization of players into a simple concept of sync rules and scope. HTML+TIME defines additional attributes for media elements, as well as a general mechanism for managing dynamic synchronization of timeline elements.
This level of support requires a means to specify which media elements must be ready (i.e. loaded or cued) in order to play a portion of an animation timeline. This allows the author to control when the page or any local timeline (e.g. a div) starts, relative to the media that is required. The author can force the page to wait for all media to be ready. Alternatively, the author can specify that a timeline (e.g. the main page timeline) can begin when certain specific media elements are ready, but before all media has been prepared. The author can define and control the end-user experience.
Dynamic synchronization support provides an author the means to define which elements must remain in tight synchronization, and which elements (or local timelines) can slip if the players cannot keep up (e.g. due to network congestion). This provides a balance between the requirements of coordinating an animation and the realities of network media delivery.
The author's model will be very simple: they can describe the synchronization behavior of each media element and of each time container. By definition, synchronization includes the startup sync relationship; allowing an element to slip sync provides control over media loading. By describing the sync behavior of time containers, authors can control the scope of a synchronization context. When the timing model must handle resync events (i.e. when a media player falls out of sync while playing), the sync rules of the time containers define how far-reaching the resync handling will be.
In addition, authors can describe how to fill in for elements that are not ready to play at the time they were originally authored to.
For a time container, syncBehavior="locked" means that the local timeline must remain locked to the parent timeline. If elements within the time container are defined with syncBehavior="canSlip", the time container setting does not overrule the contained element setting. This only defines the sync relationship of the time container to the parent.
Note that the most common approach to resolving sync problems will be to pause the parent timeline. If however the parent is also defined with locked sync, the resync must be propagated up the time tree until a parent is reached that has slip sync defined, or until the document body is reached. In the case of a fully locked timing definition, the entire page timeline will be paused if any element falls out of sync and raises a resync event.
Note that the behavior is only used when a direct exit-time dependent is not ready to play. If a dependent of a dependent is not ready, or if a dependent is defined relative to the start of the current element, no fill behavior is used.
Note that when the element has multiple time dependents, the fill behavior will be used if any one of the dependents cannot begin on-time (on-sync). There is no way to define the fill behavior for individual time-dependents. An example of a potential problem is an image that shows for a few seconds and is followed by a video and some audio. Assume the image is set to fill with the intention of covering for the video until it is ready. If both video and audio are defined to begin when the image ends (beginAfter="image_id"), and if the audio is not ready to start, the image will continue to show until the audio is ready. In this case, the author should define the audio as starting with the video, and make only the video directly dependent on the image.
Default settings Attribute syntax:
Example use cases:
Need examples of use of fill behavior.
By default, everything will have loose sync. This lets authors ignore the issues with maintaining sync among all the elements. This allows all the players the most leeway as well.
Example: syncing audio and video together, independent of the rest of the page:
...
<span t:timeline="par" t:syncBehavior="canSlip">
   <t:media src="..." t:syncBehavior="locked"/>
   <t:media src="..." t:syncBehavior="locked"/>
</span>
...
The sync of the video and audio is locked, meaning that the local timeline cannot start until the media for both elements is ready to roll. Also, if either player has problems during playback, the parent container must maintain sync.
The timeline sync is defined to slip, which means that the rest of the document timeline will not be held up when the video and audio media is loading/cueing. It also means that any resync event required by one of the video or audio players losing sync, will force the timeline to resolve the sync, but not to propagate the resync event to the parent timeline (which may be the document root timeline).
Example: streaming video relative to an animated page, and holding sync loosely:
...
<body t:syncBehavior="...">
   ...
   <t:media src="..." t:begin="5" t:syncTolerance="2"/>
   ...
The video is set to begin playing 5 seconds after the document timeline begins, and to hold sync +/- 2 seconds to the document timeline. This allows other animations on the page to stay in sync with the video. If the video falls out of sync by more than 2 seconds, the time implementation (at the document level) must resync the video (e.g. by pausing the root timeline or the video timeline until they are more closely aligned).
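The tolerance check implied here can be sketched with a hypothetical helper; the spec leaves the actual resync mechanism (pausing the root timeline, pausing the media, etc.) to the implementation:

```javascript
// Hypothetical sketch of the syncTolerance behavior above: a player's
// drift from document time is ignored until it exceeds the tolerance,
// at which point a resync is required.
function needsResync(documentTime, mediaTime, tolerance) {
  return Math.abs(documentTime - mediaTime) > tolerance;
}

console.log(needsResync(65, 64.2, 2)); // false: within the +/- 2 second window
console.log(needsResync(65, 61.5, 2)); // true: drifted more than 2 seconds
```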
If an element is defined to start interactively (e.g. specifying beginEvent), the syncBehavior has a slightly modified interpretation. In this case, there is no original sync relationship defined between the element and the time container. As such, when the element is started (e.g. via script on a button), the sync relationship will be propagated to the time dependents, but no resync event is propagated to the time container. However, once the element has been started and a sync relationship has been established, the syncBehavior of the element will determine how the object will maintain sync. Thus an element can be defined to start interactively (with indeterminate sync) but to maintain sync once started. This is probably not a common authoring scenario.
It is assumed that media players and extensions provide the basic level of control described below as the minimum for integration.
An emerging application of HTML combines the web browser with television, either as a traditional broadcast or in digital forms like DVD (e.g. see the ATVEF spec [ATVEF]). HTML+TIME is an ideal tool for these applications, providing a means of controlling time and synchronization in a web page that accompanies the television content. Several tools are provided for integrating HTML+TIME with applications like ATVEF:
SMIL introduces the notion of conditional attributes, and the switch construct built upon the conditional attributes. These are valuable constructs for many applications, and are included for support with HTML in web browsers.
The syntax described below is taken largely from the SMIL specification. Minor extensions to SMIL are described to generalize the support for HTML and the browser environment.
Test attributes provide a means of enabling or disabling an element based upon some predefined system parameters. If the expression testing a particular parameter evaluates true, the element is rendered normally. If the expression evaluates false, then the element is ignored (e.g. removed from the DOM tree), and will not be rendered.
In the context of SMIL, the use cases describe conditional delivery of various content forms, based upon built-in conditionals related to network bandwidth, screen size and depth, system language and various other user preferences. In the context of some dedicated media players, the user preferences are associated with the SMIL renderer. Certain attributes can be directly mapped to system settings or reasonable defaults on most platforms (e.g. language). Associating some user preferences (e.g. for typical speed of the network connection) is less direct for browsers.
The recommended solution is to support typical defaults for all the parameters, and allow specific preferences in the environment. The preferences should be made available in the DOM (e.g. as attributes of the window or document). Browser installers could set these values, and/or a simple form page or equivalent would allow the user to set the test-attribute values.
The attributes supported by SMIL include (taken from the SMIL spec, with the descriptions it provides, and additional notes in italics):
<t:audio t:src="foo.rm" t:system-bitrate="..."/>
The switch element allows an author to specify a set of alternative elements from which only one acceptable element should be chosen. An element is acceptable if it is an HTML element, if any associated media type can be decoded, and if all of the test attributes of the element evaluate to "true".
An element is selected as follows: the parser evaluates the child elements in the order in which they occur within the switch element, and selects the first acceptable element.
Element Content
The switch element should be able to contain any HTML content.
Examples
These examples are taken directly from the SMIL 1.0 specification.
1) Choosing between image resources with different bitrate requirements

...
<t:switch>
   <img src="img_hires.gif" t:system-bitrate="..."/>
   <img src="img_midres.gif" t:system-bitrate="..."/>
   <img src="img_lowres.gif" t:system-bitrate="..."/>
</t:switch>
...
2) Choosing between audio resources with different bitrate
The elements within the switch may be any combination of elements. For instance, one could merely be specifying an alternate audio track:
...
<t:switch>
   <t:audio src="..." t:system-bitrate="..."/>
   <t:audio src="..." t:system-bitrate="..."/>
</t:switch>
...
3) Choosing between audio resources in different languages
In the following example, an audio resource is available both in French and in English. Based on the user's preferred language, the player can choose one of these audio resources.
...
<t:switch>
   <t:audio src="..." t:system-language="fr"/>
   <t:audio src="..." t:system-language="en"/>
</t:switch>
...
4) Choosing between content written for different screens
In the following example, the presentation contains alternative parts designed for screens with different resolutions and bit-depths. Depending on the particular characteristics of the screen, the player can choose one of the alternatives.
...
<t:switch>
   <div t:system-screen-size="..." t:system-screen-depth="..."> ... </div>
   <div t:system-screen-size="..." t:system-screen-depth="..."> ... </div>
   <div t:system-screen-size="..." t:system-screen-depth="..."> ... </div>
</t:switch>
...
This is a list of miscellaneous SMIL elements which may need to be supported. The relevant portions of the SMIL spec are included here.
Note: for a list of SMIL elements that will not be supported, and for specific differences between SMIL and TIME extensions, see Appendix B.
The "meta" element can be used to define properties of a document (e.g., author, expiration date, a list of key words, etc.) and assign values to those properties. Each "meta" element specifies a single property/value pair.
Element Attributes
The "meta" element can have the following attributes:
If the value of the "skip-content" attribute is "true" and one of the cases above applies, the content of the element is ignored. If the value is "false", the content of the element is processed. The default value for "skip-content" is "true".
The list of properties is open-ended. This specification defines the following properties:
Element Content
"meta" is an empty element.
There was a general issue related to the parameter specification syntax in the HTML+TIME specification. This has been largely resolved, but the alternatives are described here for context.
The document currently presents one syntax to illustrate the model and provide simple examples in HTML. As the chosen syntax is based upon a draft specification, it may change. Nevertheless, the syntax changes, if any, will not materially affect the model of time containment in HTML documents.
In addition, this document describes the use of embedded XML elements in the HTML. This has been referred to as "XML sprinkles". While this is not part of a current standard, it has been discussed in a related note [XMLinHTML].
This version of the document presents a model of syntax that defines parameters using XML Namespace qualified attributes. The new XML Namespace proposal [XMLNS] allows for extension attributes that are qualified with a namespace id. This provides the cleanest syntax, and is used with this document. Nevertheless, at this point the XML Namespace proposal is only a draft.
It might be argued that simple html "expando" attributes would be easiest for authors to use. Such a syntax imposes minimal changes to the HTML that they author today. However, expandos are problematic in that they violate the HTML DTD (despite the fact that both major browsers parse them without problems).
An alternative syntax was proposed that moves the parameters to a STYLE string. These STYLE-expandos are HTML DTD compliant, and CSS specifies that unknown attributes be ignored, making this a reasonable syntax. This was rejected as less elegant than the XMLNS expando syntax.
E.g. where this document describes parameters as XMLNS expandos:
<p t:begin="1" t:dur="3">Some text...</p>
the syntax with STYLE-expandos would look like:
<p style="begin:1;dur:3">Some text...</p>
and the syntax with simple expando attributes would look like:
<p begin="1" dur="3">Some text...</p>
It should be possible to support SMIL 1.0 compliant documents using HTML+TIME. The layout mechanisms of SMIL can be translated to CSS2, and the timing constructs translate directly. A relatively simple extension could support such a translation mechanism. This could even be placed within an alternate clause in a switch statement, if the browser supports HTML+TIME extensions.
Because of the asynchronous nature of loading the SMIL file, there may be some issues related to synchronization with the rest of the document. In all likelihood, the start of the SMIL timeline would normally be deferred relative to the rest of the document, but this can be easily controlled with the synchronization control facilities in HTML+TIME.
Layout within the SMIL document will be achieved using standard CSS functionality. The SMIL syntax describes a subset of CSS, and so no extensions should be required. See also the related W3C note [SMIL-CSS]. The output HTML would likely be wrapped in a div (i.e. the SMIL:import tag should essentially subclass div). The SMIL specification provides for a declaration of the dimensions of the presentation; these will define the default dimensions of the wrapping div. How should the declared dimensions interact with the use of the screen-size test-attribute?
Finally, note that any SMIL renderer that supports the defined interface for media players can be hosted as an object (or embed) on the page. In this way, a pure SMIL 1.0 presentation can be placed in the web browser context for timing and synchronization with an HTML page.
Note that the translation is only for rendering, and not for editing. As such, there is no requirement for retaining the full fidelity of SMIL information, as long as the media is correctly rendered. Certain SMIL attributes have no presentation function and these need not be preserved in the translation.
Need to formally document the exposed methods, the events and the way the timing model will handle resync events. This is still rough, and needs further discussion and review. This is incomplete, but gives an idea of where we are going.
- beginElement()
- Turns on the behavior, starting the local timeline. Fires an onBegin event and propagates time dependencies.
- endElement()
- Turns off the behavior, stopping the local timeline. Fires an onEnd event. If the element is stopping prematurely (i.e. before the defined end time) this also raises an onResync event and propagates time dependencies. Note that endElement() is different from pause(). Unlike pause, endElement() is used to advance to the end of the local timeline, as though the element had played to the end.
- cue()
- Returns a boolean to indicate whether the element is ready to display (e.g. img media is ready). Time containers return an aggregate of all descendents. Todo: need to include notes on interaction of sync rules and cue state.
- pause()
- Pauses the local timeline, and raises an onResync event.
- run()
- Resumes a paused timeline. If the element was not paused, this has no effect.
- localToGlobalTime( int ltime, string elementid )
- Convert from a local time to a global time in the referenced time space. Returns an integer indicating the count of milliseconds that represents the local time ltime in the time space of the ancestor timeline specified by elementid. If elementid is null, this converts from the local timeline to the document global timeline. This is useful both for authoring applications as well as aligning or otherwise relating elements on disparate timelines. Note that ltime need not be within the constraints of the local duration. The conversion is always done relative to the simple local duration; to convert a simple local time in iteration n of a repeating timeline, the caller must account for the offset of n-1 simple durations.
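The repeat-offset bookkeeping described above can be sketched as arithmetic (a hypothetical helper; begin and simple_dur are assumed inputs in seconds, and iteration is 1-based):

```python
def local_to_parent_time(ltime, begin, simple_dur, iteration=1):
    """Map a simple local time onto the parent time space.

    As the note says, the conversion is relative to the simple duration,
    so the caller adds the (iteration - 1) * simple_dur offset for
    repeating timelines.
    """
    return begin + (iteration - 1) * simple_dur + ltime

# 2.5s into the third iteration of an element beginning at t=10 with a
# 4-second simple duration:
print(local_to_parent_time(2.5, begin=10.0, simple_dur=4.0, iteration=3))  # 20.5
```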
The following properties will be supported. Some platforms may fire property change events, as an alternative means of wiring time-based script or other functions to the local timeline. The list below does not indicate legal elements for the different properties.
The properties for defining the basic timing are read-only. The model supports run-time modification only through the methods defined and the currTime property.
For numeric timing properties, the values are the effective values. Thus, if no dur or end attributes are specified, the dur and end properties will be infinite, and if no begin attribute is specified, the value will be 0.
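The defaulting rule can be sketched as a small helper (a hypothetical illustration of the rule, not an API from the note):

```python
import math

def effective_timing(begin=None, dur=None, end=None):
    """Apply the effective-value rule described above: an unspecified
    begin defaults to 0, and unspecified dur/end default to infinity."""
    return {
        "begin": 0.0 if begin is None else begin,
        "dur": math.inf if dur is None else dur,
        "end": math.inf if end is None else end,
    }

print(effective_timing(begin=1.0))
# {'begin': 1.0, 'dur': inf, 'end': inf}
```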
- abstract
- string, read-only.
- author
- string, read-only.
- begin
- floating point number (seconds), read-only.
- beginWith
- string (element id), read-only.
- beginAfter
- string (element id), read-only.
- beginEvent
- string (element id "." event id), read-only.
- clip-begin
- string, read-only.
- clip-end
- string, read-only.
- clockSource
- boolean, read-only.
- copyright
- string, read-only.
- dur
- floating point number (seconds) or POSITIVE_INFINITY if indefinite, read-only.
- end
- floating point number (seconds) or POSITIVE_INFINITY if indefinite, read-only.
- endWith
- string (element id), read-only.
- endEvent
- string (element id "." event id), read-only.
- endSync
- string, read-only.
- fill
- string, read-only.
- longdesc
- string, read-only.
- noCombine
- boolean, read-only.
- player
- string (id), read-only.
- par
- boolean, read-only.
- repeat
- floating point number (seconds) or POSITIVE_INFINITY if indefinite, read-only.
- repeatDur
- floating point number (seconds) or POSITIVE_INFINITY if indefinite, read-only.
- skip-content
- boolean, read-only.
- syncBehavior
- string, read-only.
- syncBehaviorDefault
- string, read-only.
- syncTolerance
- floating point number (seconds), read-only. Only valid if syncBehavior set to "locked"
- syncToleranceDefault
- floating point number (seconds), read-only. Only valid if syncBehavior set to "locked"
- timeAction
- string, read-only.
- timeStartRule
- string, read-only.
- title
- string, read-only.
- type
- string, read-only.
- useTimedHyperlinks
- boolean, read-only.
- currTime
- This provides read/write access to the local time for the element. Reading this property provides the current time on the local timeline. Note that when a local timeline repeats, this property presents the simple time ranging from 0 to the repeat duration. Writing a value to this property will resync the element within the time container. If the sync rules specify hard sync, the new value will be ignored. The implementation could just raise a resync event and let the normal resync mechanisms deal with the new sync relationship, but this does not seem useful enough to justify the complexity or potential confusion in use.
- isPaused
- boolean that indicates whether the local element timeline is paused. The value is "true" if the element timeline is currently paused, and is read-only (it can be changed via the pause() and run() methods).
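The "simple time" behavior described for currTime can be sketched numerically (a hypothetical calculation; begin and simple_dur are assumed inputs in seconds):

```python
import math

def curr_time(global_t, begin, simple_dur):
    """Simple local time in [0, simple_dur) for a repeating element."""
    if global_t < begin:
        return 0.0
    if math.isinf(simple_dur):
        return global_t - begin  # non-repeating: plain offset from begin
    return (global_t - begin) % simple_dur

print(curr_time(11.0, begin=2.0, simple_dur=4.0))  # 1.0 (third iteration)
```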
Note that all properties/attributes associated with the test-attributes will be read-only. I.e. these values cannot be set other than in the original syntax. The test attributes and switch elements will only be evaluated once. Any changes to attributes (e.g. via script) will have no effect upon the evaluation of a switch element.
Nevertheless, there should be a mechanism for users to control the system settings for captioning and overdub support, in accordance with the WAI accessibility guidelines [WAI]. This should probably take the form of a system control, or additional user agent (browser) support for indicating elements with the associated test attributes and allowing dynamic control of these. Further work is required in this area.
- system-bitrate
- string, read-only.
- system-captions
- string ("on" or "off"), read-only
- system-language
- string, read-only.
- system-overdub-or-caption
- string ("caption" or "overdub"), read-only
- system-required
- string, read-only.
- system-screen-size
- string (widthXheight), read-only.
- system-window-size
- string (widthXheight), read-only.
- system-screen-depth
- integer, read-only.
This presents the basic set of events that are associated with HTML+TIME implementations. Further work may be done to identify additional support required or recommended. In particular, the issues associated with media loading and bandwidth management should be addressed, to help ensure that a presentation plays back similarly on different implementations (browsers).
- onBegin
- Raised when the element starts for any reason, whether because of the timing or because of a beginElement() call.
- onEnd
- Raised when the element stops for any reason, whether because of the timing or because of an endElement() call.
- onRepeat
- Raised when the local timeline repeats (i.e. on the first sample of each repeat iteration after the first one). This is not raised when the local timeline starts (i.e. on the first iteration). If the local timeline does not repeat, this event will never be raised.
The event should include the iteration count (0 based, so the first event raised will have the value 1).
- onResync
- Raised when a media player has broken sync for some internal reason, and when resync is called. The default action of the timing model is to attempt to reestablish sync, depending on the media load rules. As part of this, it will propagate any time dependencies. The onResync event will generally bubble up to the first enclosing element that is set to slip sync.
The event should include information on the reason for the resync (e.g. communications, pause-request, etc.). Not clear how to specify this without more formal specification of events in DOM.
- onMediaComplete
- Applies to media elements (especially streaming media). Raised when media is done loading. This should generally be before the media is done playing, and will support implementations in optimizing media loading/buffering. Intended use is that when, e.g. a video is done loading, the next video in a sequence can begin to cue or buffer.
It should be possible to integrate new media players into the HTML+TIME model, to support an open set of media types. Media players generally support varying levels of control, depending on the constraints of the underlying renderer as well as media delivery, streaming etc. HTML+TIME defines 4 levels of support, allowing for increasingly tight integration, and broader functionality. The details of the interface will be presented in a separate document.
HTML+TIME provides the underpinnings for time-based animation and interaction. HTML authors and animation tools vendors can build upon this basis to provide animation capabilities via simple script or DHTML behaviors [BEHAVIORS]. From the perspective of HTML+TIME, these additional behaviors are clients of the timing services.
Client behaviors must be able to leverage the timing and synchronization support provided by HTML+TIME. The behaviors are considerably simpler than extension media players, and have a simple interface to HTML+TIME. Time-varying behaviors should be modeled on a local timeline that can be arbitrarily sampled (i.e. no dependencies on the sample rate or the ordering of sample times). Note that the implementation is not constrained by this requirement, and may be based upon interpolation, closest keyframe fit or random numbers. The details of the interface will be described in a separate document, but the general mechanism is described here.
A client behavior will attach to the local timeline by calling an addClient method on the element. As the local timeline advances, the HTML+TIME behavior will call back to a client behavior method: update(). Parameters include the current local time, which the client behavior will use to sample its respective timeline. Reviewers: an alternative mechanism would simply present a tick event, simplifying the model somewhat. This could make it hard to handle resync situations, as the time-children are not registered. Opinions?
A set of basic queries on the element will support information about the simple timeline duration, etc.
Note: if behaviors are developed that require a broader interface to HTML+TIME, they can be modeled as media players.
The syntax for declaring client behaviors is currently open. XML and scene graph descriptions would describe them within the block (i.e. element) that they modify (act upon). This also makes sense from the TIME standpoint, as they appear underneath (i.e. within) the local timeline of the modified element. However, traditional HTML marks up content by wrapping it in a span. This conflicts with the user-models generally used in animation and time-based media authoring.
HTML elements are divided into two basic groups for the purposes of control by HTML+TIME behaviors:
The first group are controlled by manipulating the display or visibility properties. The second group must be controlled by removing the effect of the element when it is not active on the timeline. Thus the HTML+TIME behavior for a bold tag must remove the bold effect before the element is defined to begin and after it ends. It should not force the style off in a manner that would override an enclosing style, but rather should simply remove the effect of the style element outside the active duration.
There are some HTML tags that make no sense to integrate with a timeline, including those that occur within the HEAD, and some that affect neither display nor content (e.g. COMMENT).
The classification of HTML elements is presented in Appendix A.
Each HTML element is classified according to the type (manner) of HTML+TIME control.
It is important to note that certain elements can be considered either style or content. A judgement call is made to classify them according to the more logical type. Thus, BLOCKQUOTE, LISTING and PRE are classified as content (as they function more like a P element than a font style element), and CENTER (although a block element) is classified as a style.
Two elements are not accounted for in the sets below: COL and COLGROUP. Some documentation states that the visibility property applies to these elements. This could place them in the content group, although they do not really contain content the way most content elements do. This needs further attention.
Note that it would be really nice to support timed SCRIPT content. Need to consider the model for delayed evaluation, repeated evaluation, etc. Will it make sense to repeat script (with some declared pseudo-duration)? What constraints must be placed upon usage?
The defined types are:
HTML+TIME is based in large part upon SMIL, and so the core syntax is very similar. Because of the integration with HTML and CSS, the SMIL layout syntax and some SMIL boilerplate syntax is not needed. Some differences in the timing syntax are introduced, however. In some cases, these changes allow the syntax to match the HTML Document Object Model (DOM) in a more standard manner. In other cases, HTML+TIME extends the SMIL functionality to support more control and integration with a document time model. We have attempted to keep the extensions within the overall SMIL model, so that they could be worked into the SMIL specification as well.
Because layout is handled by existing HTML/CSS mechanisms, there is no layout section as there would be in SMIL documents. HTML+TIME marks up existing HTML with inline timing info, and so effectively integrates layout and timing. This keeps the information relating to an element all together, which can make editing simpler and more straightforward.
On a minor note, SMIL naming (using hyphens) seems to follow the CSS property naming practice, in conflict with the HTML attribute naming practice. HTML+TIME preserves the SMIL names with hyphens for compatibility, but uses the HTML naming standard for new attributes.
One significant difference between HTML+TIME and SMIL is the description of an Object Model. This is appropriate in the HTML environment, and should support platform implementers as well as authors.
This appendix details the differences between the two specifications.
These elements are not needed, as the equivalent functionality is already in HTML and/or CSS. There is no functional loss in not supporting these. The elements are presented in the order they appear in the SMIL 1.0 specification.
These elements are supported in HTML+TIME, but have additional support or modified syntax. The SMIL defined functionality is preserved by HTML+TIME. The elements are presented in the order they appear in the SMIL 1.0 specification.
This should present DTD for all new HTML+TIME elements. It will not provide a means of verifying an entire HTML document with TIME extensions, but will support verification of the new elements.
Consider accessibility issues. In particular, consider issues related to title, longdesc and controls for captioning. Also, recommendations describe the need for "a standard means of notifying third-party assistive technologies ... of the existence of audio description for a video object" (and related proxies). Should this be incorporated somehow into the system-required test attribute, or should a new test attribute be created for this kind of thing? Alternatively, should we have OM support for locating elements of interest to assistive technologies? These issues may have to wait for a later version of SMIL to resolve.
Consider adding support for acceleration/deceleration. This is very powerful for client scripts or behaviors, and hard for them to do otherwise. It should just be a hint (it will never affect duration anyway), so that if applied to something like a movie player, it can be ignored without affecting the time graph. Proposed support would be an attribute with a percentage value. The value is the portion of the local duration over which acceleration or deceleration should be applied. The effect will modify the rate at which the element is played such that the duration is preserved. The sum of any acceleration and deceleration values for an element must be no greater than 100.
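The time warping hinted at here can be sketched as a normalized easing function (a hypothetical model consistent only with the stated constraints: duration preserved, acceleration plus deceleration no greater than 100%; the exact curve, linear rate ramps with a compensating peak rate, is an assumption):

```python
def time_warp(t, accel=0.3, decel=0.3):
    """Map normalized presentation time t in [0, 1] to warped media time.

    accel and decel are fractions of the duration spent speeding up and
    slowing down.  The peak run rate r compensates so that t=1 still maps
    to 1, i.e. the overall duration is preserved, as the note requires.
    """
    assert 0.0 <= accel and 0.0 <= decel and accel + decel <= 1.0
    r = 1.0 / (1.0 - accel / 2.0 - decel / 2.0)  # compensating peak rate
    if t < accel:                                # rate ramps linearly 0 -> r
        return r * t * t / (2.0 * accel)
    if t > 1.0 - decel:                          # rate ramps linearly r -> 0
        remaining = 1.0 - t
        return 1.0 - r * remaining * remaining / (2.0 * decel)
    return r * (t - accel / 2.0)                 # constant-rate middle

print(time_warp(0.0), time_warp(1.0))  # 0.0 1.0
print(round(time_warp(0.5), 3))        # 0.5
```

Because the warp only reshapes the rate inside a fixed duration, a media player that cannot honor it can ignore it without perturbing the time graph, which is exactly the "hint" behavior proposed above.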
Consider adding support for transitions. We should not go very far with this, or HTML+TIME will start overlapping with animation and video authoring tools. Nevertheless, it would be nice to specify some behavior between videos in sequence, etc. Simple fades, cross-fades, etc. If modeled as a hint, rather than a requirement, may be easier to handle. Getting into full effects and wipes makes less sense. What about layering and opacity? For Enhanced TV, transparency would be important (content screened over the BG video).
Need to further document the Object Model, especially w.r.t. child behaviors and media player integration. Need to describe the properties associated with the attributes. Need to add more rigorous definition of what happens on repeat, and how this interacts with synchronization and resync. Need to resolve how time children attach to time parents (e.g. update() call versus tick event).
Opened 7 years ago
Closed 6 years ago
Last modified 4 years ago
#9286 closed (worksforme)
Starting other processes in a view gives me some weird results.
Description (last modified by kmtracey)
I cannot start a process as a daemon.
This error is similar to one described in another thread.
Django hangs when I execute this code:
def start(request):
    if request.user.is_authenticated():
        output = Popen(["/usr/local/tomcat/bin/jboss.sh"],
                       stdout=PIPE).communicate()[0]
        foo = "a"
        return render_to_response('cms/templates/list.html', {'logs': foo})
    else:
        return HttpResponseRedirect("/accounts/login")
Change History (7)
comment:1 Changed 7 years ago by namename12
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
comment:2 Changed 7 years ago by kmtracey
comment:3 Changed 6 years ago by jacob
- milestone set to 1.1
- Triage Stage changed from Unreviewed to Accepted
comment:4 Changed 6 years ago by jacob
- Resolution set to worksforme
- Status changed from new to closed
I'm not sure this is a bug at all. That is, Popen.communicate() is supposed to block until stdin reaches EOF. And that's exactly what happens when I spawn processes from a Django view: when the process completes, so does the view.
comment:5 Changed 6 years ago by stevecrozz@…
- Resolution worksforme deleted
- Status changed from closed to reopened
I'd call this a bug, and its affecting me.
The django development server is able to start a background process, but for some reason it must block until the sub process dies. Place something like this in a view and you'll get your response 5 seconds later, Popen.communicate() is not needed to reproduce it:
subprocess.Popen(['/bin/sleep', '5'])
return HttpResponse(u'That sure took a while!')
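If the goal is fire-and-forget rather than capturing output, one workaround (a sketch of my own, not the ticket's resolution) is to give the child no pipes to drain and detach it from the server's session:

```python
import subprocess

def spawn_detached(cmd):
    """Start cmd without blocking the request: no pipes for the child to
    fill (so there is nothing to drain), and a new session so the child
    is not tied to the web server process (POSIX only)."""
    return subprocess.Popen(
        cmd,
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
        start_new_session=True,
    )

proc = spawn_detached(["/bin/sleep", "5"])
print(proc.poll())  # None: still running, but we did not wait on it
```

The trade-off is that you cannot read the child's output this way; if you need the output, something has to wait for it, which is the blocking behavior the thread complains about.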
comment:6 Changed 6 years ago by Alex
- Resolution set to worksforme
- Status changed from reopened to closed
This bug was closed by a core developer; if you disagree, please bring this up for discussion on the django-developers mailing list.
comment:7 Changed 4 years ago by jacob
- milestone 1.1 deleted
Milestone 1.1 deleted
Created on 06-01-2017 02:05 PM
Small File Offenders
This perl script helps to inform you of the users that have the most "small files". If you are on HDP 2.5+, you do not need a script like this. Why? Because HDP 2.5 has a Zeppelin notebook that will help you identify what users are contributing to small file volume. This is part of SmartSense. Read more here on that. If you are on an older HDP version, you can take a look at this script...
Why Worry About Small Files?
The HDFS NameNode architecture, explained here, mentions that "the NameNode keeps an image of the entire file system namespace and file Blockmap in memory." What this means is that every file in HDFS adds some pressure to the memory capacity of the NameNode process. Therefore, a larger max heap for the NameNode Java process will be required as the file system grows.
How to use this script
Before beginning, process the image file into TSV format, as shown in this example command:
hadoop oiv -i /hadoop/hdfs/namesecondary/current/fsimage_0000000000003951761 -o fsimage-delimited.tsv -p Delimited
Then pipe the output file (fsimage-delimited.tsv) into this program, e.g.:
cat fsimage-delimited.tsv | fsimage_users.pl
Note: For large fsimage files, you'll probably need to have a larger heap for oiv to run. Set max heap like this (adjust the value to something that makes sense for the host where you run the command):
export HADOOP_OPTS="-Xmx4096m $HADOOP_OPTS"
Example
HW13177:~ clukasik$ ./fsimage_users.pl ./fsimage-delimited.tsv
Limiting output to top 10 items per list. A small file is considered anything less than 134217728. Edit the script to adjust these values.
Average File Size (bytes): 0; Users:
hive (total size: 0; number of files: 12)
yarn (total size: 0; number of files: 8)
mapred (total size: 0; number of files: 7)
hcat (total size: 0; number of files: 1)
anonymous (total size: 0; number of files: 1)
Average File Size (bytes): 219.65; Users:
ambari-qa (total size: 4393; number of files: 20)
Average File Size (bytes): 245.942307692308; Users:
hbase (total size: 12789; number of files: 52)
Average File Size (bytes): 1096.625; Users:
spark (total size: 8773; number of files: 8)
Average File Size (bytes): 34471873.6538462; Users:
hdfs (total size: 896268715; number of files: 26)
Average File Size (bytes): 46705038.25; Users:
zeppelin (total size: 186820153; number of files: 4)
Users with most small files:
hbase: 52 small files
hdfs: 23 small files
ambari-qa: 20 small files
hive: 12 small files
spark: 8 small files
yarn: 8 small files
mapred: 7 small files
zeppelin: 3 small files
anonymous: 1 small files
hcat: 1 small files
Once you identify top offenders, you will need to assess the root cause. It could be bad practices by applications. Engage Hortonworks Professional Services for help tackling the problem!
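For reference, the aggregation the perl script performs can be sketched in Python (a hypothetical re-implementation; the Delimited column positions are assumptions you should check against your TSV's header row):

```python
from collections import Counter, defaultdict

SMALL = 134217728  # 128 MB, the threshold the perl script uses

def small_file_counts(tsv_lines, size_col=7, user_col=9):
    """Tally per-user small-file counts and total bytes from the
    `hadoop oiv ... -p Delimited` TSV output.

    The column positions are assumptions: check the header row of your
    fsimage TSV and adjust size_col/user_col to match your oiv version.
    """
    counts = Counter()
    totals = defaultdict(int)
    for line in tsv_lines:
        fields = line.rstrip("\n").split("\t")
        try:
            size = int(fields[size_col])
        except (IndexError, ValueError):
            continue  # skip the header row and malformed lines
        user = fields[user_col]
        totals[user] += size
        if size < SMALL:  # like the example output, zero-byte files count too
            counts[user] += 1
    return counts, totals

rows = [
    "/f1\t3\tx\tx\tx\tx\tx\t100\tx\thbase",
    "/f2\t3\tx\tx\tx\tx\tx\t200\tx\thbase",
]
counts, totals = small_file_counts(rows)
print(counts["hbase"], totals["hbase"])  # 2 300
```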
Created on 06-01-2017 04:57 PM
Hi Craig, this is indeed a useful tool. Thanks!
AFAIK, HDFS snapshots could increase the small files. Have you taken care of snapshots in your script or have they already been ruled out during the FSImage->TSV phase?
user attributes in PortletRequest not set (Joerg Harm, Oct 14, 2010 10:46 AM)
Hi,
using GateIn 3.1.0.GA and the Portletbridge 2.0.0.FINAL, I tried to access the user attributes (PortletSpec 2.0, PLT.21) "user.name.given" and "user.name.family" from a JSF portlet. GateIn does not set these user attributes. Is there a way to get this to work?
Kind regards,
Joerg
1. Re: user attributes in PortletRequest not set (Trong Tran, Oct 15, 2010 1:10 AM, in response to Joerg Harm)
you have read PLT.21, so you could also read the "PLT.21.3 Important Note on User Information"
There are some earlier discussions related to this.
I hope it could help you
2. Re: user attributes in PortletRequest not set (Joerg Harm, Oct 15, 2010 10:16 AM, in response to Trong Tran)
Of course, I read PLT.21.3, too. As far as I know up to now there is no Java standard to access user information. Therefore, the recommendation of the PortletSpec should be supported by GateIn (q.v.).
As a workaround, I followed the example of UIUserInfoPortlet.java:
public User getUser() {
    ConversationState state = ConversationState.getCurrent();
    return (User) state.getAttribute(CacheUserProfileFilter.USER_PROFILE);
}
3. Re: user attributes in PortletRequest not set (Trong Tran, Oct 15, 2010 12:33 PM, in response to Joerg Harm)
Yes, I agree with you.
I think the workaround that you are following is a good way for now and it's acceptable in GateIn
4. Re: user attributes in PortletRequest not set (Chris Laprun, Oct 18, 2010 5:46 AM, in response to Joerg Harm)
Hi Joerg,
This is actually a problem in GateIn itself, not in the portlet container. I took the liberty of moving your issue over to GTNPORTAL instead as this is where it should be fixed. Thanks a lot for your involvement with GateIn!
5. Re: user attributes in PortletRequest not set (Joerg Harm, Oct 18, 2010 5:51 AM, in response to Chris Laprun)
Hi Chris,
thank you for moving the issue to the right place.
Kind regards,
Joerg
6. Re: user attributes in PortletRequest not set (Khoi Nguyen, Oct 19, 2010 1:17 AM, in response to Joerg Harm)
Hi Joerg
Please re-check your problem; I think we're confusing the 'user profile' and 'account info' concepts. In GateIn, we have user profile attributes matching the PLT spec, and account info consisting of user name, first name, last name, email, and password. You're confusing first name and last name with user.name.given and user.name.family from the PLT spec, so you couldn't get them from your portlet. For your requirement, you must fill in the given name and family name of the user in the user profile instead of the account info.
Anyway, we should also clarify whether to use 'first name', 'last name' and 'email' in account info or not.
7. Re: user attributes in PortletRequest not set (Joerg Harm, Oct 19, 2010 4:57 PM, in response to Khoi Nguyen)
Hi Khoi,
you are right! There are two different classes (org.exoplatform.services.organization.User for the account info and org.exoplatform.services.organization.UserProfile for the user profile). But the two classes intersect somewhat: "first name", "given name", "forename", and "Christian name" describe the same real-world concept (as do "last name", "family name", "second name", and "surname"). Is there any reason to manage the same information in two places (and to force the user to type it twice)?
Kind regards,
Joerg
8. Re: user attributes in PortletRequest not set (ming li, Nov 2, 2010 10:29 PM, in response to Joerg Harm)
oh thank you very much
9. Re: user attributes in PortletRequest not set (Joerg Harm, Jan 28, 2011 6:11 PM, in response to Khoi Nguyen)
The user profile can be used to store additional attributes beside those defined in PortletRequest.P3PUserInfos. Unfortunately, these attributes are not accessible the PLT way because the class org.exoplatform.portal.webui.application.ExoUserContext restricts the attributes to those from P3PUserInfos. PLT.21 does not say that the recommended attribute list is exclusive. It just says that attributes not declared in the deployment descriptor of the portlet application should not be exposed to the portlets. The filter in ExoUserContext seems to be unnecessary.
Kind regards,
Joerg
Since .NET Core was announced, I've really been loving the command line experience for .NET Core. Being able to use
dotnet new to start a new application regardless of the OS or what shell/terminal experience is on a machine is the way to happiness.
The Azure Functions team has put together some fancy new templates for creating serverless apps now.
Getting the new templates installed
.NET Core templates are all just NuGet packages and are installed using
dotnet new -i <package-path>. In this case, the templates are on a private feed, so you need to get the nupkg files from the feed.
The latest version right now is from the feed:
"2.3.3": { "Microsoft.NET.Sdk.Functions": "1.0.14", "cli": "", "nodeVersion": "8.11.1", "localEntryPoint": "func.exe", "itemTemplates": "", "projectTemplates": "", "templateApiZip": "", "sha2": "4A2B808E86AE4C4DEF38A2A14270D19EC384648AD1FDF635921A64F609D41098", "FUNCTIONS_EXTENSION_VERSION": "beta", "requiredRuntime": ".NET Core", "minimumRuntimeVersion": "2.1"
Download the projectTemplates and itemTemplates nupkg files and install them using the command:
dotnet new -i ~/Downloads/Azure.Functions.Templates.2.0.0-beta-10224.nupkg
dotnet new -i ~/Downloads/Microsoft.AzureFunctions.ProjectTemplates.2.0.0-beta-10224.nupkg
Now when
dotnet new is run, the Azure Function templates are available.
Templates                        Short Name                       Language  Tags
--------------------------------------------------------------------------------------------
QueueTrigger                     Queue                            [C#]      Azure Function
HttpTrigger                      Http                             [C#]      Azure Function
BlobTrigger                      Blob                             [C#]      Azure Function
TimerTrigger                     Timer                            [C#]      Azure Function
DurableFunctionsOrchestration    DurableFunctionsOrchestration    [C#]      Azure Function
SendGrid                         SendGrid                         [C#]      Azure Function
EventHubTrigger                  EventHub                         [C#]      Azure Function
ServiceBusQueueTrigger           SBQueue                          [C#]      Azure Function
ServiceBusTopicTrigger           SBTopic                          [C#]      Azure Function
EventGridTrigger                 EventGrid                        [C#]      Azure Function
Azure Functions                  azureFunctionsProjectTemplates   [C#]      AzureFunctions/ClassLib
Creating a quick function app
# initialize a new Azure Functions application
dotnet new azurefunction --output myfunctionapp
# change dir
cd myfunctionapp
# add a new HttpTrigger function
dotnet new http --name echo
# directory
.
├── echo.cs
├── host.json
├── local.settings.json
└── myfunctionapp.csproj
Now that I have initialized my app and first function, I could start the app using the Functions CLI with func host start, but I want to be able to debug the app and deploy it. I can open the project in VS Code using code . or, in my case, the Insiders build: code-insiders .
I have the Azure Extension Pack for VS Code installed, which includes the Azure Functions extension; it allows you to quickly browse, create, manage, deploy, and even debug functions locally.
I am prompted with a notification that this is an Azure Function created outside of VS Code, and it is asking to add the assets for setting up debugging. Click Yes.
Now I can hit F5, add breakpoints, and inspect my code just as I would expect from a great tooling experience, but also have the ability to add new functions. Using the integrated terminal in VS Code you can see that the endpoint is available.
If we wanted to add an additional HttpTrigger, use the following command:
# using -n as the short form for --name and -na as the short form for --namespace
dotnet new http -n echo2 -na myfunctionapp
Or if you prefer a UI experience in VS Code, there is the Functions Extension for adding a new one as well.
- Click on the "Add new function"
- Select the folder you want to work in
- Select the type of function
- Type the name of the function
- Type the name of the namespace
- ... potentially other options.
Both options are great depending on how you like to work. The templates are in line with how I work for all .NET Core applications now and I spend much of my time using command line tools like Docker, Kubernetes and the Azure CLI.
The Azure Functions templates are open source; please provide feedback. Azure Functions development, debugging, and deployment in Visual Studio are available as well.
public class Graph implements Entry {
    public String name;
    public List<Integer> nodeUIDs = new ArrayList<Integer>();
    public List<Integer> edgeUIDs = new ArrayList<Integer>();

    public Graph() {
    }

    public Graph(String name) {
        this.name = name;
    }
}
The graph may or may not be named. A non-distributed graph would contain direct object references to its contained nodes and edges, but a distributed version should not, based on the principle in SplitItUp?. Instead we store "keys" to the individual nodes and edges; in this case the keys are UIDs. More on how the UIDs are generated later. Here is Node.java:
public class Node implements Entry {
    public Integer uid;
    public List<Integer> edgeUIDs = new ArrayList<Integer>();

    public Node() {
    }

    public Node(Integer uid) {
        this.uid = uid;
    }

    public void addEdgeUID(Integer uid) {
        edgeUIDs.add(uid);   // List uses add(), not Vector's addElement()
    }
}
Each Node has a UID, and it has a list of the UIDs of the Edges that it is involved with. Here is Edge.java:
public class Edge implements Entry {
    public Integer uid;
    public Integer nodeUID1;
    public Integer nodeUID2;

    public Edge() {
    }

    public Edge(Integer uid, Integer n1, Integer n2) {
        this.uid = uid;
        this.nodeUID1 = n1;
        this.nodeUID2 = n2;
    }
}
Again, an Edge has a UID and the UIDs of the two Nodes it is involved with. How are the UIDs generated? We use the Shared Var pattern outlined in JavaSpacesPrinciplesPatternsAndPractice; I won't go into it here. So that's the data structure. Now, what is the protocol to access it? Create a Graph instance and write it to the JavaSpace. Create two Nodes and write them to the space. Add the Nodes' UIDs to graph.nodeUIDs. Create an Edge linking the two Nodes (via UIDs) and add it to the space. Update both Nodes' edgeUIDs lists with the UID of the new Edge. Now we have a simple graph. To navigate, retrieve the Graph object from the space using a template. From the Graph object we can get the UIDs of each Node and Edge and retrieve them using templates. So we can implement a distributed graph using this approach, but it has a number of drawbacks. Its biggest problem is that it scales very badly as the number of objects grows. Place your better ideas here. -- MikeHogan?
class Relation extends Entry {
    Entry one;
    Entry two;
}

class PhysicalConnector extends Relation {
    // has some more characteristics: distance, type of connector, etc.
}

class Access extends Relation {
    // in the SAN and NAS world, a logical relation tells which components
    // have access (which is completely different from a physical connection)
    // to which other components
    // its parameters
}
The retrieval of any relation works the same way. To get the switches physically connected to a host, I would query the configuration for relations that contain the given Entry (the host) and are of type physical connection. Similarly, I would ask the configuration for all the switches a host has access to. So this forms a uniform way of relating any two Entries when they don't fall under a parent-child relation. -- SeshKumar
Priority Queues with Binary Heaps
One important variation of the queue is the priority queue. A priority queue acts like a queue in that items remain in it for some time before being dequeued. However, in a priority queue the logical order of items inside a queue is determined by their “priority”. Specifically, the highest priority items are retrieved from the queue ahead of lower priority items.
We will see that the priority queue is a useful data structure for specific algorithms such as Dijkstra’s shortest path algorithm. More generally though, priority queues are useful enough that you may have encountered one already: message queues or tasks queues for instance typically prioritize some items over others.
You can probably think of a couple of easy ways to implement a priority queue using sorting functions and arrays or lists. However, sorting a list is O(n log n). We can do better.
The classic way to implement a priority queue is using a data structure called a binary heap. A binary heap will allow us to enqueue or dequeue items in O(log n).
The binary heap is interesting to study because when we diagram the heap it looks a lot like a tree, but when we implement it we use only a single dynamic array (such as a Python list) as its internal representation. The binary heap has two common variations: the min heap, in which the smallest key is always at the front, and the max heap, in which the largest key value is always at the front. In this section we will implement the min heap, but the max heap is implemented in the same way.
The basic operations we will implement for our binary heap are:
BinaryHeap()creates a new, empty, binary heap.
insert(k)adds a new item to the heap.
find_min()returns the item with the minimum key value, leaving item in the heap.
del_min()returns the item with the minimum key value, removing the item from the heap.
is_empty()returns true if the heap is empty, false otherwise.
size()returns the number of items in the heap.
build_heap(list)builds a new heap from a list of keys.
The Structure Property
In order for our heap to work efficiently, we will take advantage of the logarithmic nature of the binary tree to represent our heap. In order to guarantee logarithmic performance, we must keep our tree balanced. A balanced binary tree has roughly the same number of nodes in the left and right subtrees of the root. In our heap implementation we keep the tree balanced by creating a complete binary tree. A complete binary tree is a tree in which each level has all of its nodes. The exception to this is the bottom level of the tree, which we fill in from left to right. This diagram shows an example of a complete binary tree:
Another interesting property of a complete tree is that we can represent it using a single list. We do not need to use nodes and references or even lists of lists. Because the tree is complete, the left child of a parent (at position p) is the node that is found in position 2p in the list. Similarly, the right child of the parent is at position 2p + 1 in the list. To find the parent of any node in the tree, we can simply use integer division (like normal mathematical division except we discard the remainder). Given that a node is at position n in the list, the parent is at position n // 2.
The diagram below shows a complete binary tree and also gives the list representation of the tree. Note the 2p and 2p + 1 relationship between parent and children. The list representation of the tree, along with the full structure property, allows us to efficiently traverse a complete binary tree using only a few simple mathematical operations. We will see that this also leads to an efficient implementation of our binary heap.
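These index relationships are easy to check directly. The short script below uses a hypothetical heap list (the particular key values are illustrative, not taken from a specific figure) with the unused zero in position 0:

```python
# A complete binary tree stored in a single list; index 0 is unused
# so that the parent/child arithmetic stays simple.
heap_list = [0, 5, 9, 11, 14, 18, 19, 21, 33, 17, 27]

def parent(p):
    return p // 2          # integer division discards the remainder

def left_child(p):
    return 2 * p

def right_child(p):
    return 2 * p + 1

# The key 9 sits at position 2; its children sit at positions 4 and 5.
print(heap_list[left_child(2)], heap_list[right_child(2)])   # -> 14 18

# The key 17 sits at position 9; its parent sits at position 9 // 2 == 4.
print(heap_list[parent(9)])                                  # -> 14
```

No node objects or references are needed: all navigation is index arithmetic on the list.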
The Heap Order Property
The method that we will use to store items in a heap relies on maintaining the heap order property. The heap order property is as follows: in a heap, for every node x with parent p, the key in p is smaller than or equal to the key in x. The diagram below also illustrates a complete binary tree that has the heap order property.
Heap Operations
We will begin our implementation of a binary heap with the constructor.
Since the entire binary heap can be represented by a single list, all
the constructor will do is initialize the list and an attribute
current_size to keep track of the current size of the heap.
The code below shows the Python code for the constructor.
You will notice that an empty binary heap has a single zero as the first
element of
items and that this zero is not used, but is there so
that simple integer division can be used in later steps.
class BinaryHeap(object): def __init__(self): self.items = [0] def __len__(self): return len(self.items) - 1
The next method we will implement is
insert. The easiest, and most
efficient, way to add an item to a list is to simply append the item to
the end of the list. The good news about appending is that it guarantees
that we will maintain the complete tree property. The bad news about
appending is that we will very likely violate the heap structure
property. However, it is possible to write a method that will allow us
to regain the heap structure property by comparing the newly added item
with its parent. If the newly added item is less than its parent, then
we can swap the item with its parent. The diagram below shows
the series of swaps needed to percolate the newly added item up to its
proper position in the tree.
Notice that when we percolate an item up, we are restoring the heap
property between the newly added item and the parent. We are also
preserving the heap property for any siblings. Of course, if the newly
added item is very small, we may still need to swap it up another level.
In fact, we may need to keep swapping until we get to the top of the
tree. The code below shows the
percolate_up method, which
percolates a new item as far up in the tree as it needs to go to
maintain the heap property. Here is where our wasted element in
items is important. Notice that we can compute the parent of any
node by using simple integer division. The parent of the current node
can be computed by dividing the index of the current node by 2.
def percolate_up(self): i = len(self) while i // 2 > 0: if self.items[i] < self.items[i // 2]: self.items[i // 2], self.items[i] = \ self.items[i], self.items[i // 2] i = i // 2
We are now ready to write the
insert method (see below). Most of the
work in the
insert method
is really done by
percolate_up. Once a new item is appended to the tree,
percolate_up takes over and positions the new item properly.
def insert(self, k): self.items.append(k) self.percolate_up()
With the
insert method properly defined, we can now look at the
delete_min method. Since the heap property requires that the root of the
tree be the smallest item in the tree, finding the minimum item is easy.
The hard part of
delete_min is restoring full compliance with the heap
structure and heap order properties after the root has been removed. We
can restore our heap in two steps. First, we will restore the root item
by taking the last item in the list and moving it to the root position.
Moving the last item maintains our heap structure property. However, we
have probably destroyed the heap order property of our binary heap.
Second, we will restore the heap order property by pushing the new root
node down the tree to its proper position.
The diagram shows the series of swaps needed to move
the new root node to its proper position in the heap.
In order to maintain the heap order property, all we need to do is swap
the root with its smallest child less than the root. After the initial
swap, we may repeat the swapping process with a node and its children
until the node is swapped into a position on the tree where it is
already less than both children. The code for percolating a node down
the tree is found in the
percolate_down and
min_child methods below.
def percolate_down(self, i): while i * 2 <= len(self): mc = self.min_child(i) if self.items[i] > self.items[mc]: self.items[i], self.items[mc] = self.items[mc], self.items[i] i = mc def min_child(self, i): if i * 2 + 1 > len(self): return i * 2 if self.items[i * 2] < self.items[i * 2 + 1]: return i * 2 return i * 2 + 1
The code for the
delete_min operation is below.
Note that once again the hard work is handled by a helper function, in
this case
percolate_down.
def delete_min(self): return_value = self.items[1] self.items[1] = self.items[len(self)] self.items.pop() self.percolate_down(1) return return_value
To finish our discussion of binary heaps, we will look at a method to build an entire heap from a list of keys. The first method you might think of may be like the following. Given a list of keys, you could easily build a heap by inserting each key one at a time. Since you are starting with a list of one item, the list is sorted and you could use binary search to find the right position to insert the next key, at a cost of approximately O(log n) operations. However, remember that inserting an item in the middle of the list may require O(n) operations to shift the rest of the list over to make room for the new key. Therefore, to insert n keys into the heap would require a total of O(n log n) operations. However, if we start with an entire list then we can build the whole heap in O(n) operations. The code below shows the code to build the entire heap.
def build_heap(self, alist): i = len(alist) // 2 self.items = [0] + alist while i > 0: self.percolate_down(i) i = i - 1
Above we see the swaps that the
build_heap
method makes as it moves the nodes in an initial tree of
[9, 5, 6, 2, 3] into their proper positions.
the tree and work our way back toward the root, the
percolate_down method
ensures that the largest child is always moved down the tree. Because
the heap is a complete binary tree, any nodes past the halfway point
will be leaves and therefore have no children. Notice that when
i==1,
we are percolating down from the root of the tree, so this may require
multiple swaps. As you can see in the rightmost two trees of
above, first the 9 is moved out of the root
position, but after 9 is moved down one level in the tree,
percolate_down
ensures that we check the next set of children farther down in the tree
to ensure that it is pushed as low as it can go. In this case it results
in a second swap with 3. Now that 9 has been moved to the lowest level
of the tree, no further swapping can be done. It is useful to compare
the list representation of this series of swaps as shown in
above with the tree representation.
i = 2  [0, 9, 5, 6, 2, 3]
i = 1  [0, 9, 2, 6, 5, 3]
i = 0  [0, 2, 3, 6, 5, 9]
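For reference, the methods from this section assemble into a single runnable class. Building a heap from the same list and then draining it with delete_min returns the keys in sorted order, which is a quick sanity check of both the structure property and the heap order property:

```python
class BinaryHeap(object):
    def __init__(self):
        self.items = [0]                 # index 0 unused for simpler arithmetic

    def __len__(self):
        return len(self.items) - 1

    def percolate_up(self):
        i = len(self)
        while i // 2 > 0:
            if self.items[i] < self.items[i // 2]:
                self.items[i // 2], self.items[i] = \
                    self.items[i], self.items[i // 2]
            i = i // 2

    def insert(self, k):
        self.items.append(k)
        self.percolate_up()

    def percolate_down(self, i):
        while i * 2 <= len(self):
            mc = self.min_child(i)
            if self.items[i] > self.items[mc]:
                self.items[i], self.items[mc] = self.items[mc], self.items[i]
            i = mc

    def min_child(self, i):
        if i * 2 + 1 > len(self):
            return i * 2
        if self.items[i * 2] < self.items[i * 2 + 1]:
            return i * 2
        return i * 2 + 1

    def delete_min(self):
        return_value = self.items[1]
        self.items[1] = self.items[len(self)]
        self.items.pop()
        self.percolate_down(1)
        return return_value

    def build_heap(self, alist):
        i = len(alist) // 2
        self.items = [0] + alist
        while i > 0:
            self.percolate_down(i)
            i = i - 1


heap = BinaryHeap()
heap.build_heap([9, 5, 6, 2, 3])
print(heap.items)                                     # -> [0, 2, 3, 6, 5, 9]
print([heap.delete_min() for _ in range(len(heap))])  # -> [2, 3, 5, 6, 9]
```

The first print matches the final list in the trace above; repeatedly deleting the minimum yields the keys in ascending order.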
The assertion that we can build the heap in O(n) may seem a bit
mysterious at first, and a proof is beyond the scope of this book.
However, the key to understanding that you can build the heap in O(n)
is to remember that the log n factor is derived from the height of
the tree. For most of the work in
build_heap, the tree is shorter than
log n.
Implementing Database Migrations to Badgeyay
The Badgeyay project is divided into two parts: a front-end in Ember JS and a back-end REST API programmed in Python.
We have integrated PostgreSQL as the object-relational database in Badgeyay, and we are using the SQLAlchemy SQL toolkit and Object Relational Mapper to work with the database from Python. Since we use the Flask microframework for Python, we have Flask-SQLAlchemy, a Flask extension that adds support for SQLAlchemy, to work with the ORM.
One of the challenging jobs is to manage the changes we make to the models and propagate those changes to the database. For this purpose, I have added migrations to Flask-SQLAlchemy, handling database changes using the Flask-Migrate extension.
In this blog, I will discuss how I added migrations to Flask-SQLAlchemy to handle database changes using the Flask-Migrate extension in my Pull Request.
First, let’s understand database models, migrations, and the Flask-Migrate extension. Then we will move on to adding migrations using Flask-Migrate. Let’s get started and understand it step by step.
What are Database Models?
A database model defines the logical design and structure of a database, which includes the relationships and constraints that determine how data can be stored and accessed. Presently, we have User and File models in the project.
What are Migrations?
Database migration is a process that usually includes assessing the database and converting its schema. Migrations enable us to track the modifications we make to the models and propagate these adjustments to the database. For example, if later on we make a change to a field in one of the models, all we will need to do is create and run a migration, and the database will replicate the change.
What is Flask Migrate?
Flask-Migrate is an extension that handles SQLAlchemy database migrations for Flask applications using Alembic. The database operations are made available through the Flask command-line interface or through the Flask-Script extension.
Now let’s add support for migration in Badgeyay.
Step 1 :
pip install flask-migrate
Step 2 :
We will need to edit run.py and it will look like this :
import os
from flask import Flask
from flask_migrate import Migrate  # imported Flask-Migrate
from api.db import db
from api.config import config
......
db.init_app(app)
migrate = Migrate(app, db)  # allows us to run migrations
......
@app.before_first_request
def create_tables():
    db.create_all()

if __name__ == '__main__':
    app.run()
Step 3 :
Creation of Migration Directory.
export FLASK_APP=run.py
flask db init
This will create the migrations directory in the backend API folder.
└── migrations
    ├── README
    ├── alembic.ini
    ├── env.py
    ├── script.py.mako
    └── versions
Step 4 :
We will do our first Migration by the following command.
flask db migrate
Step 5 :
We will apply the migrations by the following command.
flask db upgrade
Now we are all done setting up migrations to Flask-SQLAlchemy for handling database changes in the Badgeyay repository. We can verify the migration by checking the tables in the database.
This is how I have added Migrations to Flask SQLAlchemy for handling Database changes using the Flask-Migrate extension in my Pull Request.
Resources: | https://blog.fossasia.org/tag/flask-migrate/ | CC-MAIN-2019-13 | refinedweb | 530 | 57.47 |
In the example, client1.c and client_fifo.c (inside directory fs): to run the program you need to associate both files, since there is no main function in client1.c; client_fifo.c uses client1.c to read and write. To run client_fifo.c you need two files, fifo.1 and fifo.2, so create these two files before running; they are later unlinked, so the two files are removed.
Run the server first: gcc -o server server_fifo.c server1.c err_sys.c, to create these two files.
(Unlink: call the unlink function to remove the special file.)
A file is deleted when its link count reaches 0 (rc = 0).
ls -l shows the hard link count only.
ln filename creates a hard link by default; to unlink is to remove a hard link.
ln -s creates a soft (symbolic) link.
Whether or not you close the file descriptor before unlink, ls -l will show no entry for the special files after unlink.
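The link-count behavior described above can be observed with a short script (a sketch; the file names are arbitrary):

```python
import os
import tempfile

# Create a file, add a hard link, and watch the inode's link count change.
with tempfile.TemporaryDirectory() as d:
    original = os.path.join(d, "demo.txt")
    hard = os.path.join(d, "demo-hard.txt")

    with open(original, "w") as f:
        f.write("hello\n")
    print(os.stat(original).st_nlink)   # -> 1 (one name for the inode)

    os.link(original, hard)             # equivalent of: ln demo.txt demo-hard.txt
    print(os.stat(original).st_nlink)   # -> 2 (two names, same inode)

    os.unlink(hard)                     # remove one name; the data is untouched
    print(os.stat(original).st_nlink)   # -> 1

    os.unlink(original)                 # link count drops to 0; the file is gone
    print(os.path.exists(original))     # -> False
```

This mirrors what ls -l reports: each ln adds one to the hard link count, each unlink subtracts one, and the data is only freed when the count reaches zero (and no process still holds the file open).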
How do I determine the block size of an ext3 partition on Linux?
# tune2fs -l /dev/sda1 | grep -i 'block size' Block size: 1024
Replace /dev/sda1 with the partition you want to check.
What is block size in Linux?
A block is a sequence of bits or bytes with a fixed length, e.g. 512 bytes, 4 kB, 8 kB, 16 kB, 32 kB, etc.
blockdev --getbsz partition
Example
# blockdev --getbsz /dev/sda1 4096
So the block size of this file system is 4kB.
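The same information is also available programmatically; for example, from Python (a sketch: statvfs reports on the filesystem containing the given path, so pass any path on the partition of interest):

```python
import os

# Query the filesystem that contains "/" (any mounted path works).
st = os.statvfs("/")
print("preferred I/O block size:", st.f_bsize)
print("fundamental block size:  ", st.f_frsize)
```

On typical ext3/ext4 filesystems both values agree with what tune2fs and blockdev report.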
how to sequentially read and process files/records in linux
ps -aef | while read f1 f2 f3; do echo $f1 $f2 $f3 ; done
disk inodes and in-core inodes
The internal representation of a file is called an "inode" (a contraction of "index node"), which contains all the required information describing the file's data and its layout on disk.
Inodes reside on the disk, and the kernel reads them into memory; these in-memory copies are called in-core inodes.
fseek
Header file: #include <stdio.h>
Prototype: int fseek(FILE *stream, long offset, int whence);
Description:
fseek() moves the read/write position of a file stream.
1. The stream parameter is a pointer to an already opened file.
2. The offset parameter is the displacement of the new read/write position, interpreted according to whence, which is one of the following:
SEEK_SET: the new position is offset bytes from the beginning of the file.
SEEK_CUR: the new position is the current position plus offset.
SEEK_END: the new position is the end of the file plus offset.
When whence is SEEK_CUR or SEEK_END, offset may be negative.
Two special usages:
1) To move the read/write position to the beginning of the file: fseek(FILE *stream, 0, SEEK_SET);
2) To move the read/write position to the end of the file: fseek(FILE *stream, 0, SEEK_END);
Return value: 0 on success; -1 on error, with the error code stored in errno.
Note: unlike lseek(), fseek() does not return the new read/write position, so use ftell() to obtain it.
Example
#include <stdio.h>
main()
{
FILE * stream;
long offset;
fpos_t pos;
stream = fopen("/etc/passwd", "r");
fseek(stream, 5, SEEK_SET);
printf("offset = %d\n", ftell(stream));
rewind(stream);
fgetpos(stream, &pos);
printf("offset = %d\n", pos);
pos = 10;
fsetpos(stream, &pos);
printf("offset = %d\n", ftell(stream));
fclose(stream);
}
执行
offset = 5
offset = 0
offset = 10
#define _POSIX_SOURCE
#include <fcntl.h>

int creat(const char *pathname, mode_t mode);
General description
The function call: creat(pathname,mode) is equivalent to the call:
open(pathname, O_CREAT|O_WRONLY|O_TRUNC, mode);
Thus the file named by pathname is created, unless it already exists. The file is then opened for writing only, and is truncated to zero length. See open() — Open a file for further information.
The mode argument specifies the file permission bits to be used in creating the file.
#include <fcntl.h> ... int fd; mode_t mode = S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH; char *pathname = "/tmp/file"; ... fd = creat(pathname, mode); ... 是将当前磁盘根路径(和当前进程和它们的子进程)更改到另一个根目录。当你更改根路径到另一个目录下时,你不能在那个目录外存取文件和使用命令。这个目录叫作 chroot jail。切换根目录通常为了系统维护,例如重装引导程序或者重置遗忘的密码。 detailed explanation:
Why can't I mount the same filesystem at multiple points, and why can't a mount-point inode reference count be > 1?
This isn't a direct answer, but you can get behavior similar to mounting in two places by using
mount --bind.
You get the first EBUSY when your question (1) applies because:
if the directory is already a mount point, you lose access to the previously mounted directory, which makes the prior mount irrelevant.
if the directory (say
/some/where) is some process's current directory, you have a process with a different view of the contents of
/some/where; newcomers see what's on the mounted file system, but the old processes see what was in the mounted-upon directory..
Hello All,
In the Management Portal I try to create a new IRIS database, but it fails in the SMP and also from the terminal.
I always get "Directory doesn't exist", but the directory is there and accessible.
I would like to Compact globals in a database to free up space.
I would begin the process on Saturday morning, but am concerned, due to the size, that it would not complete by Sunday evening. I understand that the process is setup so that it can run with users on the system, however, as the advice indicates, this would not be ideal.
Can the process be stopped if it does not complete by the time you want/need it to?
Do you know how to guestimate how long the process would take?
Hi there
I've noticed extreme slowness using the portal in my Healthconnect dev enviroment lately.
Any page in the Portal takes a long time to load.
Any ideas of what it could be?
Kind Regards,
Joao
Hi Guys,
In System Management Portal, I'm on UnknownUser (which I've accidentally removed the %All role from), so I log out of UnknownUser and try to log in as root or Admin, but only see the following screen:
Good day,
Is there a way to change the theme in management portal? or at least the color of the header.
The issue is that some users have access to the Development, Testing, and Production environments. I would like a way to color-differentiate the environments to reduce errors.
Hello,
When I click on the menu to run the Data Import Wizard from the MP, I receive the following CSP error:
It happens for all the namespaces and looks like a permission issue; the same issue occurs with the Data Export Wizard. Help resolving this would be appreciated.
I am using
Cache for Windows (x86-64) 2017.2.2 (Build 865_0_18763U)
Thanks,
Jimmy Christian.
Hi all,
I am trying to create multiple tasks all in a single task.
For instance MyApp has three tasks.
One to send a email if a limit is exceeded = MyApp-check-credits
One to purge files = MyApp-purge
One to auto delete files = MyApp-Del
I would love to get all tasks MyApp-check-credits, MyApp-purge, MyApp-Del into a single parent task called MyApp-AllTasks.
If anyone could give me guidance on how to accomplish this, it would be much appreciated.
So I'm having this problem with the Task Manager: the tasks simply stopped running. I had a problem with queued messages, and while trying to figure out what to do I'm afraid I may have messed up something else. Can someone help me?
I get the above error when purging record map batches and was wondering if anyone out there has ever experienced this; if so, any advice would be appreciated.
Failed to purge body for header 9747192, BodyClassname='******.Batch':ERROR #5823: Cannot delete object, referenced by '*****.Record.%ParentBatch'
On one of our servers, when I am in Mgmt Portal and click the link for Configure / CSP Gateway Management, I get this url:
but the page displays a 0, and nothing else. Literally, just a 0. This link works on our other servers, with the same URL. Any idea why?
Thanks,
Laura.
I'm able to log into my local instance of HealthShare through the Management Portal, but once I've done so, the screen is entirely blank. I'm still able to access Terminal and Studio without any issue, as well as a hosted instance's Management Portal. I've tried stopping and starting HealthShare, no luck. I've been working on this instance for the past several months and haven't experienced anything like this, and I don't know of anything that I was doing that would have broken the Management Portal. Anyone have a suggestion as to where to go from here?
Can I apply a custom resource to a Management Portal page through code, using the method or global? The documentation only shows the manual mode:
Or export the settings already saved.
Hi,
I have custom classes that I use with
Weirdly, the Management Portal is not drawing the lines between my process and operation when viewing my production on the 'Ensemble > Production Configuration' screen. Clicking the green dot flashes the 'computing connections' message and highlights my operation, but no lines get rendered:
Our PAS system supplies dates in a particular format (ISO 8601 compliant) that includes seconds and milliseconds. Because many downstream systems cannot handle milliseconds (and some don't even want seconds), many transformations are required to truncate the data.
Friends, can anyone help with how to kill instances of a business process? We have searched the documentation and looked in the Production, but we are not seeing how to perform this task.
Hi
I need to query my messages and filter by a XML node.
Time after time, on the CSP Sessions page of our Cache 2017.2.1 installation, I see that all licenses are consumed by CSP sessions of the /csp/sys, /csp/sys/op, and /csp/sys/mgr applications, which I assume are sessions of the Management Portal. The problem is that there are only a few of us accessing the Portal, and as we test by browsing the Portal, we can't reproduce the problem.
Is there any way to see client IP of CSP session? Any other way to approach the problem?
The problem looks very similar to the Firefox-related one, but we don't use Firefox.
I have a cache client with a list of several servers.
One of the server is working with an IIS server that is not the Cache DB server.
The connection to the IIS server is only through https (SSL)
I tried to define the Web Server IP Address to but it didn't let me to specify the https
So I tried to define Web Server Port to 443 but when I chose the SMP it's trying to open
Hi! I have a local project written on Cache and Atelier on my PC. I need to move it to notebook. Tried to export globals, classes, MAC-programms and csp with frontend stuff, but after I created my apps on notebook and imported my set, it just didn't work. I think it's because I have some settings on Management Portal, so how can I export portal settings and what I should export to have my working apps on another computer?
New Windows 10 Cache [TRYCACHE] successfully installed, but I am unable to log onto the Management Portal. What "User Name" and "Password" are being asked for? No opportunity to specify the sources. Thanks.
Hello : | https://community.intersystems.com/tags/management-portal | CC-MAIN-2020-05 | refinedweb | 1,112 | 71.24 |
When Project Jigsaw will finally be released in Java 9, it will be a little over eight years old.
In its first years it had to compete with two similar Java Specification Requests, namely JSR 277 Java Module System and JSR 294 Improved Modularity Support. It also caused a conflict with the OSGi community, which feared Project Jigsaw would be an unnecessary and inadequate duplicate of functionality that would force Java developers to use one of two incompatible module systems.
In its early years the project was not well staffed and even halted in 2010 during the merging of Sun into Oracle. It was not until 2011 that the dire need for a module system in Java was restated and work resumed with a full staff.
What followed was a three year exploratory phase, which ended in July 2014 when several Java Enhancement Proposals (JEP 200 Modular JDK, JEP 201 Modular Source Code, and JEP 220 Modular Run-Time Images) and ultimately JSR 376 Java Platform Module System were launched. The last-mentioned defines the actual Java module system that will be implemented in the JDK under a new JEP.
As of July 2015 the modules into which the JDK will be split are largely decided (see JEP 200), the JDK source code was restructured to accommodate them (see JEP 201) and the run-time images were prepared for modularization (see JEP 220). All of this is available in the current JDK 9 early access releases.
The code being developed as part of JSR 376 is expected to be deployed to the JDK repository soon, but as of yet there is unfortunately no way to experiment with the module system itself.
Motivation
The motivation for Project Jigsaw changed slightly over its history. It was initially only intended to modularize the JDK. This scope was extended when it became clear that libraries and applications would benefit considerably from using this tool on their own code.
Ever-growing and Indivisible Java Runtime
The Java runtime has always been growing in size. But before Java 8 there was no way to install a subset of the JRE; all Java installations were distributed with libraries for such API’s as XML, SQL and Swing, whether you needed them or not.
While this may not be terribly significant for medium sized computing devices (for example desktop PCs or laptops) it is very significant for small devices like routers, TV-boxes, cars and all the other tiny nooks and crannies where Java is used. With the current trend of containerization it also gains new relevance on servers, where reducing an image’s footprint will reduce costs.
Java 8 brought compact profiles, which define three subsets of Java SE. These alleviated the problem somewhat but only in restricted cases, and the profiles are too rigid to cover all current and future needs for partial JREs.
JAR/Classpath Hell
JAR Hell and Classpath Hell are endearing terms referring to the problems that arise from the deficiencies of Java's class loading mechanism. Especially for large applications these can cause lots of pain in many interesting ways. Some of the problems build on one another; others are independent.
Unexpressed Dependencies
A JAR cannot express which other JARs it depends on in a way that the JVM will understand. Users are hence left to identify and fulfill the dependencies manually, by reading the documentation, finding the correct projects, downloading the JARs and adding them to the project.
Then there are optional dependencies, where a JAR might only require another JAR if the user wants to use certain features. This complicates the process further.
The Java runtime will not detect an unfulfilled dependency until it is actually required. This will lead to a NoClassDefFoundError crashing the running application.
Build tools like Maven help solve this problem.
Transitive Dependencies
For an application to work it might only need a handful of libraries. Each of those in turn might need a handful of other libraries, and so on. As the problem is compounded it becomes exponentially more labor-intensive and error prone.
Again this is helped by build-tools.
Shadowing
Sometimes different JARs on the classpath contain classes with the same fully-qualified name, for example when they are two different versions of the same library. Since classes are loaded from the first JAR on the classpath that contains them, that copy "shadows" all other classes of the same name, so which copy wins depends on the order in which the JARs are listed on the classpath. This may well differ across different environments, for example between a developer's IDE and the production machine where the code will eventually run.
Version Collisions
This problem arises when two required libraries depend on different versions of a third library.
If both versions are added to the classpath, the behavior will be unpredictable. First, because of the shadowing problem, classes that exist in both versions will only be loaded from one of them. Worse, if a class that exists in one but not the other is accessed, that class will be loaded as well. Code calling into the library will hence find a mix of the two versions.
At best, the library code might fail loudly with a NoClassDefFoundError if it tries to access code that does not exist in a loaded class. In the worst case, where versions only differ semantically, actual behavior may be subtly changed, introducing hard-to-find bugs.
Identifying this as the source of unexpected behavior can be hard. Solving it directly is impossible.
Complex Class Loading
By default all classes are loaded by the same ClassLoader. In some circumstances it might be necessary to add additional loaders, for example to allow users to extend the application by loading new classes.
This can quickly lead to a complex class loading mechanism that creates unexpected and hard to understand behavior.
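As a tiny sketch of why such hierarchies get subtle (the "plugin" loader here is made up for illustration), a child class loader still delegates to its parent chain, so the same class may be served by a loader other than the one you asked:

```java
import java.net.URL;
import java.net.URLClassLoader;

public class LoaderDemo {
    public static void main(String[] args) throws Exception {
        // The application class loader loaded this class itself.
        ClassLoader app = LoaderDemo.class.getClassLoader();

        // A hypothetical "plugin" loader, isolated but parented to the app loader.
        URLClassLoader plugin = new URLClassLoader(new URL[0], app);

        // Parent delegation: the request bubbles up the chain, so core
        // classes come from the bootstrap loader (reported as null).
        Class<?> c = plugin.loadClass("java.lang.String");
        System.out.println(c.getClassLoader() == null);
        plugin.close();
    }
}
```

Once several such loaders exist, which loader defined which class becomes part of the application's observable behavior.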
Weak Encapsulation Across Packages
Java’s visibility modifiers are great to implement encapsulation between classes in the same package. But across package boundaries there is only one visibility: public.
Since a classloader folds all loaded packages into one big ball of mud, all public classes are visible to all other classes; there is no way to create functionality that is visible, for example, throughout a whole JAR but not outside of it.
Manual Security
An immediate consequence of weak encapsulation across package boundaries is that security relevant functionality will be exposed to all code running in the same environment. This means that malicious code can access critical functionality that may allow it to circumvent security measures.
Since Java 1.1 this was prevented by a hack: the SecurityManager is invoked on every code path into security relevant code and checks whether the access is allowed. Or more precisely: it should be invoked on every such path. The omission of these calls in some places led to some of the vulnerabilities that plagued Java in the past.
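A minimal sketch of that manual-check pattern (the permission name is illustrative): every security-relevant entry point must remember to consult the SecurityManager itself, which is exactly why omissions slip through.

```java
public class Guarded {
    static void sensitiveOperation() {
        SecurityManager sm = System.getSecurityManager();
        if (sm != null) {
            // Forgetting this call in even one entry point
            // silently disables the check there.
            sm.checkPermission(new RuntimePermission("illustrative.sensitiveOp"));
        }
        System.out.println("sensitive work done");
    }

    public static void main(String[] args) {
        // No SecurityManager installed, so the check is skipped entirely.
        sensitiveOperation();
    }
}
```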
Startup Performance
Finally, it currently takes a while before the Java runtime has loaded and JIT compiled all required classes. One reason is that class loading executes a linear scan of all JARs on the classpath. Similarly, identifying all occurrences of a specific annotation requires the inspection of all classes on the classpath.
Goals
Project Jigsaw aims to solve the problems discussed above by introducing a language level mechanism to modularize large systems. This mechanism will be used on the JDK itself and is also available to developers to use on their own projects.
It is important to note that not all goals are equally important to the JDK and to us developers. Many are more relevant for the JDK and most will not have a huge impact on day-to-day coding (in contrast to recent language modifications like lambda expressions and default methods). Nonetheless, they will still change the way big projects are developed and deployed.
Scalable Platform
With the JDK being modularized, users will have the possibility to cherry pick the functionality they need and create their own JRE consisting of only the modules they require. This will help scale the platform down to small computing devices and slim container images.
Reliable Configuration
The specification will endow individual modules with the ability to declare their dependencies on other modules. The module system will then analyze these declarations in every phase and can detect missing or conflicting dependencies early, replacing the brittle, error-prone classpath mechanism.
Improved Security And Maintainability
The strong encapsulation of module internal APIs will greatly improve security because critical code is now effectively hidden from code that does not need to use it. It also improves maintainability, since a smaller and well-defined API surface is easier to keep stable and evolve.
Core Concept
Since modularization is the goal, Project Jigsaw will introduce the concept of modules. To give modules some context, think of well-known libraries such as Google Guava or the ones in Apache Commons (e.g. Collections or IO) as modules. Depending on how granular their authors want to split them, each of those might themselves be divided into several modules.
The same is true of an application. It might be a single monolithic module but it might also be split up. A project's size and cohesion will be important factors for deciding on how to split it into modules.
Features
So how do modules work? Looking at the requirements of Project Jigsaw and JSR 376 will help us get a feeling for them.
Dependency Management
In order to solve “JAR/Classpath hell” one of the core features of Project Jigsaw is dependency management. Let’s look into the components.
Declaration And Resolution
A module will declare which other modules it requires to compile and run. This will be used by the module system to transitively identify all the modules required to compile or run the initial one.
It will also be possible to depend not on specific modules but on a set of interfaces. The module system will then try to identify modules that implement these interfaces and thus satisfy the dependency, binding them appropriately to the interface.
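This interface-based binding resembles today's service-provider lookup. As a rough sketch of the existing mechanism (not the Jigsaw API itself), the JDK already locates implementations of an interface at run time:

```java
import java.nio.file.spi.FileSystemProvider;

public class ServiceLookupDemo {
    public static void main(String[] args) {
        // The JDK ships implementations of this SPI; callers depend
        // only on the interface and discover providers at run time.
        for (FileSystemProvider p : FileSystemProvider.installedProviders()) {
            System.out.println(p.getScheme());
        }
    }
}
```

The module system is expected to make this kind of binding a first-class part of dependency resolution.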
Versioning
Modules will be versioned. But wait, then how does this solve JAR Hell? Good question!
Version selection - the act of selecting the appropriate version from a set of different versions of the same module - is not mandated by the specification. So when I wrote above that the module system will identify the modules required to compile or run another module, this was based on the assumption that there is only one version of each. In case there are several, an upstream step (e.g. the developer or, more likely, the build tool he uses) must make a selection, and the system will only validate that it satisfies all constraints.
Encapsulation
The module system will enforce strong encapsulation in all phases. This centers around an export mechanism where only a module's exported packages are accessible. Encapsulation is imposed independently of the security verification tasks performed by any SecurityManager that may be present.
The exact syntax for the proposal is not yet defined, but JEP 200 provides some XML renditions of the main semantics. As an example the following is the declaration of the java.sql module.
<module>
  <!-- The name of this module -->
  <name>java.sql</name>

  <!-- Every module depends upon java.base -->
  <depend>java.base</depend>

  <!-- This module depends upon the java.logging and java.xml
       modules, and re-exports their exported API packages -->
  <depend re-exports="true">java.logging</depend>
  <depend re-exports="true">java.xml</depend>

  <!-- This module exports the java.sql, javax.sql, and
       javax.transaction.xa packages to any other module -->
  <export><name>java.sql</name></export>
  <export><name>javax.sql</name></export>
  <export><name>javax.transaction.xa</name></export>
</module>
We can see from this snippet that java.sql depends on java.base, java.logging, and java.xml. After covering the different export mechanisms we will understand the rest of the declaration.
Export
A module will declare specific packages for export, and only the types contained in them will be exported. This means that only they will be visible and accessible to other modules. Even stricter, the types will only be exported to those modules which explicitly depend on the module containing them.
Interestingly enough, different modules will be able to contain packages with the same name, and they will even be allowed to export them.
In the example above, java.sql exports the packages java.sql, javax.sql, and javax.transaction.xa.
Re-export
It will also be possible for one module to re-export the API (or parts thereof) of any other module it depends upon. This will support refactoring by providing the ability to split and merge modules without breaking dependencies because the original ones can continue to exist. They will export the exact same packages as before even though they might not contain all the code. In the extreme case so-called aggregator modules could contain no code at all and act as a single abstraction of a set of modules. In fact, the compact profiles from Java 8 will be exactly that.
We can see from the example that java.sql re-exports the APIs of its dependencies java.logging and java.xml.
Qualified Export
To help developers (especially those modularizing the JDK) with keeping exported API surfaces small, an optional qualified export mechanism will allow a module to specify additional packages to be exported exclusively to a declared set of modules. So while with the “standard” mechanism the exporting module won’t know (nor care) who accesses the packages, using qualified exports will allow a module to limit the set of possible dependents.
Configuration, Phases, And Fidelity
As mentioned earlier, a goal of JEP 200 is that it must be possible to combine the JDK's modules into a variety of configurations (for example, ones equivalent to the full JRE, the full JDK, or each of the compact profiles). Similarly, developers can use the mechanism to compose different variants of their own modularized applications.
At compile-time, the code being compiled will only see types that are exported by a configured set of modules. At build-time, a new tool (presumably to be called JLink) will allow the creation of binary run-time images that contain specific modules and their dependencies. At launch-time, an image can be made to appear as if it only contains a subset of its modules.
It will also be possible to replace modules that implement an endorsed standard or a standalone technology with a newer version in each of the phases. This will replace the deprecated endorsed standards override mechanism and the extension mechanism (see below).
All aspects of the module system (like dependency management, encapsulation and so forth) will work in the same manner in all phases unless this is not possible for specific reasons.

Annotation Detection

Identifying all classes that bear a specific annotation (for example Spring annotated configuration classes) currently requires scanning all classes in some specified packages. This is usually done during a program's startup, and can slow it down considerably.
Modules will have an API allowing callers to identify all classes with a given annotation. One envisioned approach is to create an index of such classes that will be created when the module is compiled.
Integration With Existing Concepts And Tools
Diagnostic tools (e.g. stack traces) will be upgraded to convey information about modules. Furthermore, they will be fully integrated into the reflection API, which can be used to manipulate them in the same manner as classes. This will include the version information that can be reflected on and overridden at runtime.
The modules' design will allow build tools to be used with them "with a minimum of fuss". The compiled form of a module will be usable on the classpath or as a module so that library developers are not forced to create multiple artifacts for class-path and module-based applications.
Interoperability with other module systems, most notably OSGi, is also planned.
Even though modules can hide packages from other modules it will be possible to perform white box testing of the contained classes and interfaces.
Developers will also be able to package a set of modules which make up an application into an OS-specific package, “which can be installed and invoked by an end user in the manner that is customary for the target system”. Building on the above, only those modules that are not present on the target system must be packaged.
Dynamic Configuration
Running applications will have the possibility to create, run, and release multiple isolated module configurations. These configurations can contain developer and platform modules. This will be useful for container architectures like IDEs, application servers, or the Java EE platform.
Incompatibilities
As usual for Java these changes are implemented with a strong focus on backward compatibility; all standardized and non-deprecated APIs and mechanisms will continue to function. But projects might depend on other, undocumented constructs in which case their switch to Java 9 will require some work.
Internal APIs Become Unavailable
With strong encapsulation in place, a module's internals will no longer be accessible from the outside. For the JDK this means that code depending on its internal APIs will break.
So what are internal APIs? Definitely everything that lives in a sun.*-package. If it's in com.sun.* and annotated with @jdk.Exported, it will still be available on Oracle JDKs; if it has no annotation it will be unavailable.
One example that might prove especially problematic is sun.misc.Unsafe. It is used in quite a number of projects for mission- and performance-critical code and its pending inaccessibility has stirred up quite a discussion. During one such exchange it was pointed out, though, that it will still be available via a dedicated command-line flag. This might be a necessary evil, considering that not all functionality will find its way into a public API.
Another example is everything in com.sun.javafx.*. Those classes are a crucial ingredient to properly building JavaFX controls and are also needed to work around a number of bugs. Most functionality from these classes is targeted for publication.
Merge Of JDK And JRE
With a scalable Java runtime, which allows the flexible creation of runtime images, the JDK and JRE lose their distinct character and become just two possible points in a spectrum of module combinations.
This implies that both artifacts will have the same structure, including the folder structure, and any code that relies on the current layout (e.g. by utilizing the fact that a JDK folder contains a subfolder jre) will stop working correctly.
Internal JARs Become Unavailable
Internal JARs like lib/rt.jar and lib/tools.jar will no longer be accessible. Their content will be stored in implementation-specific files with a deliberately unspecified and possibly changing format.

Any code that assumes the existence of these files will stop working correctly. This might also lead to some transitional pains in IDEs or similar tools that rely on them. Code that merely reads the content of the runtime's class and resource files through the URLs handed out by the JDK (e.g. by calling URL.getContent) will continue to work as today. But if it depends on the structure of jar URLs (e.g. by constructing them manually or parsing them), it will fail.
Removal of the Endorsed Standards Override Mechanism
Some parts of the Java API are labeled “Standalone Technologies” and created outside of the Java Community Process (e.g. JAXB). It might be desirable to update those independently of the JDK or use alternative implementations. The endorsed standards override mechanism allows the installation of alternative versions of these standards into a JDK.
This mechanism is deprecated in Java 8 and will be removed in Java 9, to be replaced by the upgradeable modules mentioned above.
Removal of the Extension Mechanism
With the extension mechanism custom APIs can be made available to all applications running on the JDK without having to name them on the classpath.
This mechanism is deprecated in Java 8 and will be removed in Java 9. Some features that are useful on their own will be retained.
Next Steps
We have glanced at the history of Project Jigsaw, seen what motivated it, and discussed its goals as well as how they are going to be implemented by specific features. What else can we do besides wait for Java 9?
Prepare
We should prepare our projects and examine whether they rely on anything that will be unavailable or removed in Java 9.
At least dependencies on internal APIs don't have to be searched for manually. Since Java 8 the JDK contains the Java Dependency Analysis Tool JDeps (introduction with some internal packages, official documentation for Windows and Unix), which can list all packages upon which a project depends. If run with the parameter -jdkinternals, it will output almost all internal APIs a project uses.
“Almost all” because it does not yet recognize all packages that will be unavailable in Java 9. This affects at least those which belong to JavaFX, as can be seen in JDK-8077349. (Using this search I could not find other issues regarding missing functionality.)
There are also at least three JDeps plugins for Maven: by Apache, Philippe Marschall and myself. The latter is currently the only one which fails the build if jdeps -jdkinternals reports dependencies on internal APIs.
Discuss
The main source for up-to-date information on Project Jigsaw is the Jigsaw-Dev mailing list. I will also continue to discuss this topic on my blog.
If you are concerned about some specific API that will be unavailable in Java 9, you could check the mailing list of the corresponding OpenJDK project as these will be responsible for developing public versions of them.
Adopt
Java 9 early access builds are already available. While JSR 376 is still under development and the actual module system is not yet available in those builds, many preparatory changes are. In fact, anything besides strong encapsulation is already in place.
Information gathered this way can be returned to the project.
Unsafe and compiler (tools)...
by Richard Richter /
I'm curious what they'll do with these. I'm not using Unsafe personally, but I'm watching what is happening there. And tools.jar? There must be some good way how to call compiler without resorting to external process execution. Recently we developed some simple annotation processor and it is much easier to develop/test/debug in IDE when I can just call it on compiler from tools.jar from some simple bootstrap class with main.
Otherwise - of course - modularity is dearly missing. Package-level vs public access is a big gap that needs something in between. I can't wait for module encapsulation. :-) | https://www.infoq.com/articles/Project-Jigsaw-Coming-in-Java-9/ | CC-MAIN-2021-04 | refinedweb | 3,643 | 54.52 |
Hi All,
How to get the version from an issue, and also the release date from a version? The following code I am using,
but I am not getting it. Please, does anybody have ideas on this?
GenericValue issueGv = issue.getProject();
List versions = versionManager.getVersions(issueGv);
List<Version> versions = componentManager.getVersionManager().getVersions(issue.getProjectObject().getId())
versions.first().getReleaseDate()
I have used this, but I am getting an InvocationException and a NullPointerException while getting the versions.
What language are you using? I'm just trying to give you a pointer to the correct APIs. first() is not valid in java. Make sure you've checked for nulls etc and have initialised componentManager.
Java only. I am trying to get the earliest date from the versions. Suppose fix version is a multiple choice field,
so I need to get the earliest date among those versions (I am using a calculated custom field plug-in) and display the date.
OK, how do I initialise the componentManager? I am new to this JIRA API.
ComponentManager componentManager = ComponentManager.getInstance();
I am using ComponentManager componentManager = ComponentManager.getInstance();
Is it OK? And how to get the release date from the version list in Java?
suppose i have added version v1.1 and v1.2
v1.1 release date is 2/7/2010
v1.2 release date is 2/7/2011
So i need to get v1.2 release date(2/7/2011) from the issue
This date I need to show on the edit issue as the field "Solution Available Date",
every time the ticket is edited, based on the earliest date of the field "fix version".
Yes your component manager line is correct. Fix version(s) is a multiple choice field, so I'm assuming you're using fixversions as opposed to a custom field. Then you want the java moral equivalent of:
Date lowestRelDate = issue.getFixVersions().min {Version version -> version.releaseDate}?.releaseDate
That is, get all the fix versions from the issue. Find the Version with the lowest release date, and get its release date. I can't write that in java for you, too boring. Maybe someone else will.
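A plain-Java version of that Groovy one-liner might look like the sketch below. The Version class here is a minimal stand-in for JIRA's com.atlassian.jira.project.version.Version (only getReleaseDate() is assumed); in a real plug-in you would iterate over issue.getFixVersions() instead of a hand-built list.

```java
import java.util.*;

public class EarliestFixVersion {

    // Minimal stand-in for JIRA's Version object.
    static class Version {
        private final Date releaseDate;
        Version(Date releaseDate) { this.releaseDate = releaseDate; }
        Date getReleaseDate() { return releaseDate; }
    }

    // Find the earliest non-null release date among the fix versions.
    static Date earliestReleaseDate(Collection<Version> fixVersions) {
        Date lowest = null;
        for (Version v : fixVersions) {
            Date d = v.getReleaseDate();
            if (d != null && (lowest == null || d.before(lowest))) {
                lowest = d;
            }
        }
        return lowest;
    }

    public static void main(String[] args) {
        Date d2010 = new GregorianCalendar(2010, Calendar.JULY, 2).getTime();
        Date d2011 = new GregorianCalendar(2011, Calendar.JULY, 2).getTime();
        List<Version> fixVersions = Arrays.asList(new Version(d2011), new Version(d2010));
        // The earliest of 2/7/2010 and 2/7/2011 is the 2010 date.
        System.out.println(earliestReleaseDate(fixVersions).equals(d2010));
    }
}
```

Guarding against null release dates matters here, since versions without a release date would otherwise throw a NullPointerException.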
I get it already. Without writing it for you I don't see what else I can say.
ThankYou so much I am getting release date from version.
And in the getValueFromIssue method it returns the release date. Then how to set this date as a custom field
while editing the issue?
Oh you mean this question?
If it's a computed field then it does it for you when the issue is updated. To set the value on existing issues, re-index.
Jira 4.3, Python 2.7 environment.
Fetching the version (in which fixed) from an issue: use REST API.
import json
from restkit import Resource, BasicAuth, request
#... authentication
# Convert the text in the reply into a Python dictionary
issue = json.loads(response.body_string())
#...
fields = issue['fields']
for field_name in fields:
    field_object = fields[field_name]
    #...
    elif field_name in ["fixVersions"]:
        fDict = getValueDict(field_object['value'])
        try:
            for fkey,fval in fDict.items():
                if fkey in ["name"]:
                    iDict['targetSW'] = fval
        except:
            iDict['targetSW'] = 'unset'
Setting the version (in which found) when creating an issue: REST is not available until Jira 5.0, must use suds.
Using the Python suds jira-cli-4.4 (thanks to Matt Doar), I modified it to be able to set the Affects Version/s by version name, NOT id, so the flag used is -r "Affects Version/s:99.9.1" for the version named '99.9.1'.
Retrieves the version names for the project using getVersions.
Gets the id for the version name. Uses the existing API to set the version for creating the issue.
# Get the project versions
try:
    jira_env['versionnames'] = soap.service.getVersions(auth, options.project)
except Exception, e:
    jira_env['versionnames'] = [{'id':-1, 'name':'unavailable'}]
verf_choices = getChoicesStr(jira_env['versionnames'])

# Translate affectsVersions names to id
affectsVersions = []
if options.affectsversions:
    nameV, valV = options.affectsversions.split(':')
    avId = 'unknown'
    for i,v in enumerate(jira_env['versionnames']):
        if v['name'] == valV:
            avId = v['id']
    if avId == "unknown":
        logger.error("Field version '%s' not found in %s" % (valV, verf_choices))
        return 0
    else:
        version = {'id': avId.strip()}
        affectsVersions.append(version)
You could code for fixVersions in the same way to be able to specify the name instead of the id.
Math::RungeKutta.pm - Integrating Systems of Differential Equations
 use Math::RungeKutta;

 # When working on data in an array ...
 sub dydt { my ($t, @y) = @_;   # the derivative function
    my @dydt; ... ; return @dydt;
 }
 @y = @initial_y; $t=0; $dt=0.4;   # the initial conditions
 # For automatic timestep adjustment ...
 while ($t < $tfinal) {
    ($t, $dt, @y) = &rk4_auto(\@y, \&dydt, $t, $dt, 0.00001);
    &display($t, @y);
 }
 # Or, for fixed timesteps ...
 while ($t < $tfinal) {
    ($t, @y) = &rk4(\@y, \&dydt, $t, $dt);
    &display($t, @y);
 }

 # Or, working on data in a hash ...
 sub dydt { my ($t, %y) = @_;   # the derivative function
    my %dydt; ... ; return %dydt;
 }
 %y = %initial_y; $t=0; $dt=0.4;   # the initial conditions
 # For automatic timestep adjustment on hashes ...
 while ($t < $tfinal) {
    ($t, $dt, %y) = &rk4_auto(\%y, \&dydt, $t, $dt, 0.00001);
    &display($t, %y);
 }
 # Or, for fixed timesteps on hashes ...
 while ($t < $tfinal) {
    ($t, %y) = &rk4(\%y, \&dydt, $t, $dt);
    &display($t, %y);
 }

 # Also available, but not exported by default ...
 import Math::RungeKutta qw(:ALL);
 ($t, @y) = &rk4_classical(\@y, \&dydt, $t, $dt);   # Runge-Kutta 4th-order
 ($t, @y) = &rk4_ralston(\@y, \&dydt, $t, $dt);     # Ralston's 4th-order
 # or similarly for data in hashes.
Perl is not the right language for high-end numerical integration like global weather simulation, colliding galaxies and so on (if you need something like this you could check out xmds). But as Gear says, "Many equations that are solved on digital computers can be classified as trivial by the fact that even with an inefficient method of solution, little computer time is used. Economics then dictates that the best method is the one that minimises the human time of preparation of the program."
This module has been designed to be robust and easy to use, and should be helpful in solving systems of differential equations which arise within a Perl context, such as economic, financial, demographic or ecological modelling, mechanical or process dynamics, etc.
Version 1.07
where the arguments are: \@y a reference to the array of initial values of variables, or \%y a reference to the hash of initial values of variables; \&dydt a reference to the function calculating the derivatives; $t the initial time; and $dt the timestep. It returns ($t, @y), where $t and @y are now the new values at the completion of the timestep; or it returns ($t, %y) if called with the data in a hashref.
In the $epsilon form the arguments are: \@y a reference to the array of initial values of variables, or \%y a reference to the hash of initial values of variables; \&dydt a reference to the function calculating the derivatives; $t the initial time; $dt the initial timestep; and $epsilon the permissible error; the errors per step will be about $epsilon*$ymax.
In the errors form the last argument is: \@errors a reference to an array of maximum permissible errors, or \%errors a reference to a hash, accordingly. This lets the permissible error be scaled individually for each variable, e.g. for $y{'gross national product'} and $y{'interest rate'} accordingly.

It returns ($t, $dt, @y), where $t, $dt and @y are now the new values at the completion of the timestep, or ($t, $dt, %y) accordingly. For example:

 while ($t < $tfinal) {
    ($t, $dt, @y) = &rk4_auto(\@y, \&dydt, $t, $dt, $epsilon);
    ($t_midpoint, @y_midpoint) = &rk4_auto_midpoint();
    &update_display($t_midpoint, @y_midpoint);
    &update_display($t, @y);
 }
rk4_auto_midpoint returns ($t, @y) where $t and @y were the values at the midpoint of the previous call to rk4_auto; or ($t, %y) accordingly.
You will pass this subroutine by reference as the second argument to rk2, rk4 and rk4_auto. The name doesn't matter of course. It must expect the following arguments: $t the time (in case the equations are time-dependent), @y the array of values of variables or %y the hash of values of variables.
It must return an array (or hash, accordingly) of the derivatives of the variables with respect to time.
The following routines are not exported by default, but are exported under the ALL tag, so if you need them you should:
import Math::RungeKutta qw(:ALL);
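For reference, rk4_classical implements the classical fourth-order Runge-Kutta scheme, which for a step of size dt from t_n is usually written as:

```latex
\begin{aligned}
k_1 &= f(t_n,\; y_n) \\
k_2 &= f\!\left(t_n + \tfrac{dt}{2},\; y_n + \tfrac{dt}{2}\,k_1\right) \\
k_3 &= f\!\left(t_n + \tfrac{dt}{2},\; y_n + \tfrac{dt}{2}\,k_2\right) \\
k_4 &= f(t_n + dt,\; y_n + dt\,k_3) \\
y_{n+1} &= y_n + \tfrac{dt}{6}\left(k_1 + 2k_2 + 2k_3 + k_4\right)
\end{aligned}
```

Here f is the derivative function (dydt above) and y may be a whole vector of variables.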
There are a couple of example scripts in the examples/ subdirectory of the build directory. You can use their code to help you get your first application going.
This script uses Term::Clui (arrow keys and Return, or q to quit) to offer a selection of algorithms, timesteps and error criteria for the integration of a simple sine/cosine wave around one complete cycle. This was the script used as a testbed during development.
This script uses the vt100 or xterm 'moveto' and 'reverse' sequences to display a little simulation of three-body gravity. It uses rk4_auto because a shorter timestep is needed when two bodies are close to each other. It also uses rk4_auto_midpoint to smooth the display. By changing the initial conditions you can experience how sensitively the outcome depends on them.

When the integration reaches a discontinuity you can adjust the state variables directly, e.g.

 if ( ... ) { $y[17]*=-0.9; $y[20]*=-0.9; }

and thus, again, let the numerical integration solve just the smooth part of the problem.
In the js/ subdirectory of the install directory there is RungeKutta.js, which is an exact translation of this Perl code into JavaScript. The function names and arguments are unchanged. Brief Synopsis:
 <SCRIPT type="text/javascript" src="RungeKutta.js"> </SCRIPT>
 <SCRIPT type="text/javascript">
 var dydt = function (t, y) {   // the derivative function
    var dydt_array = new Array(y.length);
    ... ;
    return dydt_array;
 }
 var y = new Array();

 // For automatic timestep adjustment ...
 y = initial_y(); var t=0; var dt=0.4;   // the initial conditions
 // Arrays of return values:
 var tmp_end = new Array(3);
 var tmp_mid = new Array(2);
 while (t < tfinal) {
    tmp_end = rk4_auto(y, dydt, t, dt, 0.00001);
    tmp_mid = rk4_auto_midpoint();
    t=tmp_mid[0]; y=tmp_mid[1];
    display(t, y);   // e.g. could use wz_jsgraphics.js or SVG
    t=tmp_end[0]; dt=tmp_end[1]; y=tmp_end[2];
    display(t, y);
 }

 // Or, for fixed timesteps ...
 y = post_ww2_y(); var t=1945; var dt=1;   // start in 1945
 var tmp = new Array(2);   // Array of return values
 while (t <= 2100) {
    tmp = rk4(y, dydt, t, dt);   // Merson's 4th-order method
    t=tmp[0]; y=tmp[1];
    display(t, y);
 }
 </SCRIPT>
RungeKutta.js uses several global variables which all begin with the letters _rk_ so you should avoid introducing variables beginning with these characters.
In the lua/ subdirectory of the install directory there is RungeKutta.lua, which is an exact translation of this Perl code into Lua. The function names and arguments are unchanged. Brief Synopsis:
 local RK = require 'RungeKutta'
 function dydt(t, y)   -- the derivative function
    -- y is the table of the values, dydt the table of the derivatives
See also the scripts examples/sine-cosine and examples/three-body, and the modules Math::WalshTransform, Math::Evol, Term::Clui and Crypt::Tea_JS.
In this article, we’re going to learn about a concept that’s widely used nowadays in JavaScript applications: immutability.
We’re going to learn more about immutability in JavaScript, how this concept can help us to write better applications, and help us manage our data, so that when we use it on a daily basis it will improve our code.
The way we’re writing code is changing pretty fast — every day we have something new being released, a new concept created, a new framework or library to help us better do a specific task. With these daily changes, we must always be learning something new — it becomes part of our job. Especially in JavaScript development, a language that evolves and changes every day with new technologies, we must pay attention to what’s really important in our applications and what should be left out, finding the right thing for the right situation.
With the rising popularity of functional programming, one of the concepts that’s trending and being talked about a lot is immutability. This concept is not exclusive to functional programming languages — we can have it in any language we want, but the concept was really brought to light and widely spread by functional programming in the JavaScript development community.
So, let’s dive into immutability, especially in JavaScript, and understand how it can help us write better applications that keep our data safer and immutable.
The concept of immutability is pretty simple and powerful. Basically, an immutable value is something that cannot be changed. Especially when we’re developing our applications, we might end up in some situations where we want to create a new object in our code, containing a new property or value while also maintaining the original value. The concept of immutability can help us to create new objects, making sure that we’re not changing the original value.
In JavaScript, we have primitive types and reference types. Primitive types include numbers, strings, booleans, null and undefined. Reference types include objects, arrays and functions.
The difference between those types is that the primitive types are immutable (or unchangeable), and the reference types are mutable (changeable). For example, the string type is immutable:
let myAge = "22";
let myNewAge = myAge;

myAge = "23";
We just created two variables and assigned the
myAge to the
myNewAge variable. But after we changed the value of
myAge, we will see that they’re not the same.
console.log(myAge === myNewAge); // false
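Reference types behave differently: assigning one variable to another copies the reference, not the data, so a change made through either variable is visible through both. A quick sketch:

```javascript
// Arrays are reference types: both variables point at the same object.
const ages = [22];
const agesCopy = ages;   // copies the reference, not the data

ages.push(23);           // mutates the one shared array

console.log(agesCopy);          // [22, 23]: the "copy" changed too
console.log(ages === agesCopy); // true: same underlying object
```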
The ES6 version allowed us to replace variables in our code with constants by using the
const keyword. But a little detail that a lot of developers might not notice is that the
const keyword does not make the value immutable.
const myName = "Leonardo Maldonado";
The
const keyword only creates a read-only reference to a value, which means that the value cannot be reassigned. As the MDN reference says:
The const declaration creates a read-only reference to a value. It does not mean the value it holds is immutable, just that the variable identifier cannot be reassigned.
But if we try to reassign the constant, we receive an error.
const myName = "Leonardo Maldonado";

myName = "Leo"; // TypeError: Assignment to constant variable.
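The distinction matters most for reference types: a const binding cannot be reassigned, but the object it points to can still be mutated, which is exactly why const alone does not give us immutability:

```javascript
const person = { name: "Leonardo" };

person.name = "Leo";      // allowed: we mutate the object, not the binding
console.log(person.name); // "Leo"

// person = {};           // this reassignment would throw a TypeError
```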
The ES6 version also gave us a new way to declare variables, which we can understand as the opposite of the
const keyword. The
let keyword allows us to create variables that, unlike constants, can be reassigned to a new value.
let myName = "Leonardo Maldonado";

myName = "Leo";

console.log(myName) // Leo
By using the
let keyword, we’re able to assign a new value. In this example, we created a
let variable with the value of
Leonardo Maldonado; then we reassigned it with the value of
Leo. This is the difference between
let and
const.
We know that JavaScript is evolving pretty fast, and with each new version of the language we’re getting more amazing features, so the consequence is that, over the years, it’s getting easier to write better JavaScript and we can achieve more with less code.
Let’s take a look now at some methods that we can start to use in our applications to help us achieve a nice level of immutability.
One of the pillars of our applications is the object. We use objects in every piece of our applications, from the front end to the back end, from the most complex component to the simplest.
Let’s imagine that we have an object called
myCar, which has the following properties:
const myCar = {
  model: "Tesla",
  year: 2019,
  owner: "Leonardo"
};
For example, we could change a property directly if we wanted to, right? Let’s change the owner of
myCar.
const myCar = {
  model: "Tesla",
  year: 2019,
  owner: "Leonardo"
};

myCar.owner = "Lucas";
But this is a bad practice! We should not change the property of an object directly — this isn’t how immutability works. As the Redux documentation recommends, we should always create a modified copy of our object and set the
owner to
Lucas.
But how we could do that? Well, we could use the
Object.assign method.
The
Object.assign method allows us to copy or pass values from one object to another. It returns the target object. This is how it works:
Object.assign(target, source);
The first parameter is our target: the object that will receive the values.
The second parameter is our source: the object whose properties will be merged into the target.
Let’s have a look in this example:
const objectOne = { oneName: "OB1" };
const objectTwo = { twoName: "OB2" };

const objectThree = Object.assign(objectOne, objectTwo);

console.log(objectThree);
// Result -> { oneName: "OB1", twoName: "OB2" }
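A detail that is easy to miss: Object.assign writes into its first argument, so in the snippet above objectThree and objectOne end up being the very same (now modified) object. That is why, when we want immutability, we pass a fresh empty object as the target:

```javascript
const objectOne = { oneName: "OB1" };
const objectTwo = { twoName: "OB2" };

const objectThree = Object.assign(objectOne, objectTwo);
console.log(objectThree === objectOne); // true: same object, objectOne was mutated
console.log(objectOne);                 // { oneName: "OB1", twoName: "OB2" }

// To keep objectOne untouched, use a fresh target:
const merged = Object.assign({}, objectOne, objectTwo);
console.log(merged === objectOne);      // false: a brand new object
```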
Now, let’s imagine that we want to pass the values from a specific object to a new variable. This is how we would do it:
const myName = { name: "Leonardo" };

const myPerson = Object.assign({}, myName);

console.log(myPerson);
// Result -> { name: "Leonardo" }
By doing this, we’re copying the values and properties of the
myName object, and assigning it to our new variable
myPerson.
Let’s imagine that we wanted to copy all the values and properties of the
myName object, but we also wanted to add a new property to the
myPerson object. How would we do it? Simple: by passing a third argument that contains our new property, in our case the
age.
const myName = { name: "Leonardo" };

const myPerson = Object.assign({}, myName, { age: 23 });

console.log(myPerson);
// Result -> { name: "Leonardo", age: 23 }
Another way we can copy or pass values to another object is by using the
spread operator. This feature, that was released in the ES6 version, allows us to create a new object by copying the properties of an existing object. For example, if we wanted to copy the
myName object into a new one, this is how we would do it:
const myName = { name: "Leonardo" };

const myPerson = { ...myName }

console.log(myPerson);
// Result -> { name: "Leonardo" }
And if we wanted to copy the properties of
myName and add a new property to our new object:
const myName = { name: "Leonardo" };

const myPerson = { ...myName, age: 23 }

console.log(myPerson);
// Result -> { name: "Leonardo", age: 23 }
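One caveat worth keeping in mind: both Object.assign and the spread operator make shallow copies. Nested objects are still shared by reference, so mutating a nested property of the copy also changes the original:

```javascript
const original = { name: "Leonardo", address: { city: "São Paulo" } };
const copy = { ...original };          // shallow copy

copy.name = "Leo";                     // safe: top-level property was copied
copy.address.city = "Salvador";        // not safe: address is shared!

console.log(original.name);            // "Leonardo": unchanged
console.log(original.address.city);    // "Salvador": mutated through the copy

// For nested data, copy each level:
// { ...original, address: { ...original.address, city: "Salvador" } }
```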
The first principle of Redux is immutability, which is why Redux deserves a mention here: not only is it the most widely used state management library for React applications, it also has the concept of immutability at its core. The right way to use Redux is by having immutable reducers.
Redux didn’t invent the concept of immutability — it’s way older than this state management library — but we must recognize that with this library a lot of developers started to use and talk about immutability.
If you don’t know how Redux works, this is a pretty simplified explanation, just so you can understand why the immutability is important here:
In Redux, the whole state of our application lives in a single object called the store. This can help us to achieve a nice level of scalability and maintainability. So let’s imagine that we have our store, and inside that store, we have our initial state:
const initialState = {
  name: "Leonardo Maldonado",
  age: 22
}
If we want to change our state, we should dispatch an action. An action in Redux is an object with two properties:
type — which describes the type of our action: what exactly this action does.
payload — describes exactly what should change.
So, an action in Redux looks like this:
const changeAge = payload => ({ type: 'CHANGE_AGE', payload })
We have our initial state; we created the action that will be dispatched to change the state; now we’ll create our reducer and understand how the immutability concept is used in Redux and why it’s so important to have immutable data.
A reducer is basically a function that reads the type of action that was dispatched and, based on the action type, produces the next state, merging the action payload into the new state. In our case, we dispatched an action called
CHANGE_AGE, so in our reducer function, we should have a case to deal with when this action is dispatched.
const initialState = {
  name: "Leonardo Maldonado",
  age: 22
}

const reducer = (state = initialState, action) => {
  switch (action.type) {
    case 'CHANGE_AGE':
      return { ...state, age: action.payload }
    default:
      return state;
  }
}
This is where the magic happens: when our
CHANGE_AGE action is dispatched, our reducer has to perform a task based on the type of the action. Here it changes the age, but it also has to carry over the untouched parts of the state, in our case the name. It’s pretty important to preserve the previous state; otherwise, we would lose data very easily and it would be very hard to keep track of it. That’s why the first principle of Redux is immutability.
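We can watch this behavior by calling the reducer above by hand (in a real application the Redux store does the dispatching). The returned state is a brand new object and the previous state is untouched:

```javascript
// The reducer and action creator from above, called directly for illustration.
const initialState = { name: "Leonardo Maldonado", age: 22 };

const changeAge = payload => ({ type: 'CHANGE_AGE', payload });

const reducer = (state = initialState, action) => {
  switch (action.type) {
    case 'CHANGE_AGE':
      return { ...state, age: action.payload };
    default:
      return state;
  }
};

const nextState = reducer(initialState, changeAge(23));

console.log(nextState);                  // { name: "Leonardo Maldonado", age: 23 }
console.log(initialState.age);           // 22: the previous state is untouched
console.log(nextState !== initialState); // true: a brand new object
```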
If you’re into React development and are not using Redux right now but want to have an immutable state in your application, you can use the Immer library. Basically, this is how this library works:
You have your current state.
It lets you apply your changes to the
draftState, basically a copy of the
currentState.
After all your changes are completed, it’ll produce your
nextState based on the changes in the
draftState.
For example, let’s imagine that we have our current state and we wanted to add a new object to this array. We would use the
produce function.
import produce from "immer";

const state = [
  { name: "Leonardo", age: 23 },
  { name: "Lucas", age: 20 }
];

const nextState = produce(state, draftState => {
  draftState.push({ name: "Carlos", age: 18 })
});
Basically, the produce function receives two parameters: the currentState and a callback function, which we’ll use to modify our draftState. This function will then produce our nextState. Pretty simple, yet very powerful.
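To see the idea behind this flow, here is a toy stand-in for produce. It is only an illustration: it uses a naive deep copy, while Immer itself is far more efficient, using proxies and structural sharing:

```javascript
// Toy illustration only: real Immer does NOT deep-copy like this.
function toyProduce(currentState, recipe) {
  const draft = JSON.parse(JSON.stringify(currentState)); // naive deep copy
  recipe(draft);   // mutate the draft freely
  return draft;    // the finished draft becomes the next state
}

const state = [
  { name: "Leonardo", age: 23 },
  { name: "Lucas", age: 20 }
];

const nextState = toyProduce(state, draft => {
  draft.push({ name: "Carlos", age: 18 });
});

console.log(nextState.length); // 3
console.log(state.length);     // 2: the original is untouched
```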
If you’re working with React and are having problems with state management in your application, I’d really recommend you use this library. It might take some time to understand exactly how this library works, but it’ll save you a lot of time in the future as your application grows.
Conclusion
Immutability is not a JavaScript-specific topic: it can be applied in every language, and it’s highly recommended that you use it in any language. The point to pay attention to is how you’re managing your data, and whether you’re doing everything you can to ensure that your data is immutable and that you’re following a nice pattern of clean code.
In this article, we learned about immutability in JavaScript: a concept that has been widely discussed by functional programming developers and that is used in a lot of JavaScript applications nowadays. We also saw that JavaScript has several immutable ways to add, edit and delete data, and how we can use vanilla JavaScript to write an immutable piece of code. By using immutability in your application, you’ll see only positive points: it’ll improve the way you think about code and make your code cleaner and easier to understand. So, start to write more immutable code now, and see how it’ll help you improve your developer life!
Leonardo is a full-stack developer, working with everything React-related, and loves to write about React and GraphQL to help developers. He also created the 33 JavaScript Concepts. | https://www.telerik.com/blogs/immutability-in-javascript | CC-MAIN-2021-43 | refinedweb | 2,050 | 60.55 |
- Dan Williams authored
commit a95c90f1 upstream.

The last step before devm_memremap_pages() returns success is to allocate a release action, devm_memremap_pages_release(), to tear the entire setup down. However, the result from devm_add_action() is not checked.

Checking the error from devm_add_action() is not enough. The api currently relies on the fact that the percpu_ref it is using is killed by the time the devm_memremap_pages_release() is run. Rather than continue this awkward situation, offload the responsibility of killing the percpu_ref to devm_memremap_pages_release() directly. This allows devm_memremap_pages() to do the right thing relative to init failures and shutdown.

Without this change we could fail to register the teardown of devm_memremap_pages(). The likelihood of hitting this failure is tiny as small memory allocations almost always succeed. However, the impact of the failure is large given any future reconfiguration, or disable/enable, of an nvdimm namespace will fail forever as subsequent calls to devm_memremap_pages() will fail to setup the pgmap_radix since there will be stale entries for the physical address range.

An argument could be made to require that the ->kill() operation be set in the @pgmap arg rather than passed in separately. However, it helps code readability, tracking the lifetime of a given instance, to be able to grep the kill routine directly at the devm_memremap_pages() call site.

Link:
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Fixes: e8d51348 ("memremap: change devm_memremap_pages interface...")
Reviewed-by: "Jérôme Glisse" <jglisse@redhat.com>
Reported-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Balbir Singh <bsingharora@gmail.com>
Cc: Michal Hocko <mhocko@suse>
C++ inherits its data types for time from the C language. To use these data types in your program, you have to include the ctime header:
#include <ctime>
This header provides 4 data types used for time representation:
- clock_t – Clock type
- size_t – Unsigned integral type
- time_t – Time type
- struct tm – Time structure
The first 3 data types represent time as integers and you will need to convert these integers to get commonly used representation of time.
The most user-friendly representation of time is the struct tm. (What a structure is is discussed in the C++ Data Structures topic.) The
tm structure has the following fields that represent time:

- tm_sec – seconds after the minute (0-60)
- tm_min – minutes after the hour (0-59)
- tm_hour – hours since midnight (0-23)
- tm_mday – day of the month (1-31)
- tm_mon – months since January (0-11)
- tm_year – years since 1900
- tm_wday – days since Sunday (0-6)
- tm_yday – days since January 1 (0-365)
- tm_isdst – Daylight Saving Time flag
To use a variable of type tm you can declare it in the same way you declare any variable:
tm my_time;
The
ctime header provides a range of useful functions to work with these data types:
- char* asctime (const struct tm * timeptr); converts pointer to struct tm to an array of chars
- char* ctime (const time_t * timer); converts the value of a time_t to a char array in the format Www Mmm dd hh:mm:ss yyyy (Www – weekday, Mmm – month, dd – day of the month, hh – hours, mm – minutes, ss – seconds, yyyy – year).
- struct tm * gmtime (const time_t * timer); convert a time_t value to struct tm as UTC time.
- struct tm * localtime (const time_t * timer); convert a time_t value to struct tm in local time format.
- size_t strftime (char* ptr, size_t maxsize, const char* format, const struct tm* timeptr ); this functions copies the time value of timeptr according to the format into an array of char ptr of maximum size maxsize.
The main format specifiers for this function are:

- %H – hour in 24-hour format (00-23)
- %M – minute (00-59)
- %S – second (00-60)
- %d – day of the month (01-31)
- %m – month as a number (01-12)
- %Y – year (e.g. 2014)
- %A – full weekday name
- %B – full month name
- clock_t clock (void); – returns the time consumed by the program since its launch. The return value is the number of clock ticks. You can convert this value to seconds using the CLOCKS_PER_SEC constant.

- time_t mktime (struct tm * timeptr); – converts a tm structure to time_t.

- time_t time (time_t* timer); – gets the current time in format of time_t by using a timer. You can use NULL as the parameter for this function: time(NULL)
Using these functions with modern compilers can lead to an error message:
“error C4996: ‘ctime’: This function or variable may be unsafe. Consider using ctime_s instead. To disable deprecation, use _CRT_SECURE_NO_WARNINGS. ”
If you are sure that your program is safe, you can disable this error by using the following directive:
#pragma warning(disable : 4996)
This is a simple demo program that shows how you can work with time using described functions:
#include <iostream>
#include <ctime>
using namespace std;

int main()
{
    //get the starting value of clock
    clock_t start = clock();
    tm* my_time;
    //get current time in format of time_t
    time_t t = time(NULL);
    //show the value stored in t
    cout << "Value of t " << t << endl;
    //convert time_t to char*
    char* charTime = ctime(&t);
    //display current time
    cout << "Now is " << charTime << endl;
    //convert time_t to tm
    my_time = localtime(&t);
    //get only hours and minutes (note the %H and %M format specifiers)
    char* hhMM = new char[6];
    strftime(hhMM, 6, "%H:%M", my_time);
    //show a part of tm struct
    //the operator -> is used to access members of the tm struct. It's described in the data structures topic
    cout << "Year " << 1900 + my_time->tm_year << endl;
    //tm_mon counts months from 0, so December is 11
    cout << "Month " << my_time->tm_mon << endl;
    clock_t end = clock();
    clock_t exec = end - start;
    cout << "Program is executed in " << exec << " clocks or "
         << 1000 * exec / CLOCKS_PER_SEC << " milliseconds" << endl;
    cin.ignore();
    return 0;
}
The output for this program is:
Value of t 1417965525
Now is Sun Dec 07 17:18:45 2014
Year 2014
Month 11
Program is executed in 6 clocks or 6 milliseconds | https://www.tutorialcup.com/cplusplus/date-time.htm | CC-MAIN-2021-39 | refinedweb | 577 | 58.45 |
Greetings. I know some of you have run into this situation: If you have a package where there is no *.spec file present and you try to run any of the fedora cvs Makefile.common targets, nothing happens and the command just hangs. Turns out it's doing a grep of the spec file to figure out if the package is noarch or not. When there is no spec file the grep hangs. Here's a very hacky patch that should at least error out in this case. Makefile hackers welcome to provide a better one.

kevin
--

Index: Makefile.common
===================================================================
RCS file: /cvs/extras/common/Makefile.common,v
retrieving revision 1.127
diff -u -r1.127 Makefile.common
--- Makefile.common	15 Apr 2009 04:57:41 -0000	1.127
+++ Makefile.common	24 Apr 2009 21:15:03 -0000
@@ -35,6 +35,9 @@
 BUILD_FLAGS ?= $(KOJI_FLAGS)
 
+ifndef $(SPECFILE)
+SPECFILE = "NO_SPEC_FILE_FOUND"
+endif
 LOCALARCH := $(if $(shell grep -i '^BuildArch:.*noarch' $(SPECFILE)), noarch, $(shell uname -m))
 
 ## a base directory where we'll put as much temporary working stuff as we can
GatsbyJS with Headless WordPress
This is a quick tutorial on setting up Gatsby with Headless WordPress. The source code can be found here. You will need to setup and configure your own instance of WordPress if you want to follow along. The instructions will help you through this.
Setup Headless WordPress
Install local WordPress instance
The simplest way to get up and running WordPress is to use Local By Flywheel. You can download and install the app here
Create a new project called ‘headless’ and add some pages and posts if you like.
Install the plugins
First install the Advanced Custom Fields plugin from the WordPress admin console.
Next we need to install four plugins: wp-graphql exposes the WordPress data as a GraphQL endpoint, wp-graphiql provides a handy editor for exploring the endpoint, and the other two expose our Advanced Custom Fields and CPT UI post types through GraphQL.
Download these four plugins as zips, expand them, and rename the folders to wp-graphql, wp-graphiql, wp-graphql-acf, and wp-graphql-custom-post-type-ui. Then copy the folders into the wp-content/plugins directory of your WordPress install.
Testing
Once you activate the plugins in the Admin console, you will see the GraphiQL option in the Admin menu.
Create a Custom Type
Go to CPT UI and create a new custom post type called Product.
Now create some advanced custom fields for the Product type.
Now add Product using our new custom type and query it in GraphiQL.
In order to really make use of WordPress as a Headless CMS you will need to upgrade to ACF Pro to get access to Flex Fields and other advanced fields.
Setup GatsbyJS
Install gatsby-starter-wordpress
If you don’t have the gatsby-cli installed, now is the time. Follow instructions at
Now move to the folder where we will create our GatsbyJS project and run:
gatsby new gatsby-wordpress
cd gatsby-wordpress
gatsby develop
Open the URLs printed in the terminal (by default http://localhost:8000 and http://localhost:8000/___graphql) in your browser to confirm Gatsby is running.
Stop Gatsby and install the following plugins then restart Gatsby
npm install --save gatsby-plugin-sharp
npm install --save gatsby-source-graphql
Edit gatsby-config.js.
/**
 * Configure your Gatsby site with this file.
 *
 * See:
 */
module.exports = {
  plugins: [
    {
      resolve: 'gatsby-source-graphql',
      options: {
        // Arbitrary name for the remote schema Query type
        typeName: 'WORDPRESS',
        // Field under which the remote schema will be accessible. You'll use this in your Gatsby query
        fieldName: 'wordpress',
        // Url to query from
        url: '',
      },
    },
  ],
}
Now run it.
gatsby develop
Open a browser to the GraphiQL explorer (http://localhost:8000/___graphql by default) and you will see your WordPress data available to GatsbyJS.
Now it’s just a matter of writing a normal Gatsby app. For this simple example we will generate some product pages. We first need to create the pages in gatsby-node.js; then we create the matching template in src/templates/product.js.
const path = require(`path`)

exports.createPages = ({ graphql, actions }) => {
  const { createPage } = actions
  const productTemplate = path.resolve(`src/templates/product.js`)
  return graphql(
    `
      query Products {
        wordpress {
          products {
            nodes {
              slug
              title
              content
              details {
                price
                sku
              }
            }
          }
        }
      }
    `,
    { limit: 1000 }
  ).then((result) => {
    if (result.errors) {
      throw result.errors
    }
    result.data.wordpress.products.nodes.forEach((node) => {
      createPage({
        path: `product/${node.slug}`,
        component: productTemplate,
        context: {
          slug: node.slug,
        },
      })
    })
  })
}
import React from 'react'
import { graphql } from 'gatsby'

const Product = ({ data }) => {
  const product = data.wordpress.productBy
  return (
    <section>
      <h2>Product</h2>
      <h3>{data.wordpress.productBy.title}</h3>
      <p
        dangerouslySetInnerHTML={{
          __html: product.content,
        }}
      />
      <dl>
        <dt>Price</dt>
        <dd>{product.details.price}</dd>
        <dt>SKU</dt>
        <dd>{product.details.sku}</dd>
      </dl>
    </section>
  )
}

export default Product

export const query = graphql`
  query($slug: String!) {
    wordpress {
      productBy(slug: $slug) {
        id
        details {
          price
          sku
        }
        title
        content
      }
    }
  }
`
Open a browser to one of the generated product URLs (product/<slug> under the dev server) to view the product page.
I've gotten some signs of life from the i2c pins, but I'm having
problems opening /dev/i2c-0.
When I modprobe i2c_pxa, I see on my oscilloscope the i2c pulses I expect
to see.
This is good news. I'm pretty sure I'm working with the correct pins.
and dmesg shows the driver loading and associating itself to i2c-0
i2c_adapter i2c-0: registered as adapter #0
Resetting I2C Controller Unit
i2c_adapter i2c-0: found device 0x20
i2c_adapter i2c-0: found device 0x7e
my dev node for i2c-0 is
crw-rw-rw- 1 root root 89, 0 Feb 6 2005 /dev/i2c-0
Now, my problem is that my test application fails to open this dev node.
My assumption is that I should be able to send bytes down the the I2C
bus by opening and writing bytes to the dev node. This doesn't seem to
work.
The following is my simple test program. What I want to happen is for
the program to run and for my writes and reads on the I2C bus show up on
my scope. What happens is that the program fails attempting to open the
device.
Any ideas?
thanks,
--mgross
//#include <linux/i2c.h>
//#include <linux/i2c-dev.h>
#include <stdio.h>
#include <stdlib.h>    /* for exit() */
#include <unistd.h>    /* for read()/write() */
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/ioctl.h> /* for ioctl() */
#include <fcntl.h>
#define I2C_SLAVE 0x0703 /* Change slave address
*/
int main ()
{
int file;
int adapter_nr = 0; /* probably dynamically determined */
int addr = 0xc0; /* The I2C address */
char filename[20];
unsigned char buf[16];
sprintf(filename,"/dev/i2c-%d",adapter_nr);
if ((file = open(filename,O_RDWR)) < 0) { // <-- this fails :(
/* ERROR HANDLING; you can check errno to see what went wrong */
printf("failure %d to open dev node %s\n",file, filename);
exit(1);
}
if (ioctl(file,I2C_SLAVE,addr) < 0) {
/* ERROR HANDLING; you can check errno to see what went wrong */
printf("failure to open to set adder \n");
exit(1);
}
/* Using I2C Write, equivalent of
i2c_smbus_write_word_data(file,register,0x6543) */
buf[0] = 1;
buf[1] = 0x43;
buf[2] = 0x65;
if ( write(file,buf,3) != 3) {
/* ERROR HANDLING: i2c transaction failed */
printf("write failed\n");
}
/* Using I2C Read, equivalent of i2c_smbus_read_byte(file) */
if (read(file,buf,1) != 1) {
/* ERROR HANDLING: i2c transaction failed */
printf("read failed\n");
} else {
/* buf[0] contains the read byte */
}
return 1;
}
Fails how? Is errno set to something?
C
On Feb 6, 2005, at 1:19 PM, mark gross wrote:
> What happens is that the program fails attempting to open the
> device.
On Sun, 2005-02-06 at 14:38 -0800, Craig Hughes wrote:
> Fails how? Is errno set to something?
>
: No such device or address
errno = 6
# lsmod
Module Size Used by
i2c_pxa 5052 0 - Live 0xbf045000
i2c_algo_pxa 3872 1 i2c_pxa, Live 0xbf043000
i2c_core 20592 1 i2c_algo_pxa, Live 0xbf03c000
unix 22712 10 - Live 0xbf035000
af_packet 16680 0 - Live 0xbf02f000
g_ether 21180 0 - Live 0xbf028000
pxa2xx_udc 13188 1 g_ether, Live 0xbf023000
gumstix_gadget 1376 1 pxa2xx_udc, Live 0xbf021000
nls_cp437 5504 0 - Live 0xbf01e000
nls_iso8859_1 3840 0 - Live 0xbf01c000
vfat 10848 0 - Live 0xbf018000
fat 34812 1 vfat, Live 0xbf00e000
nls_base 6528 4 nls_cp437,nls_iso8859_1,vfat,fat, Live 0xbf00b000
pxamci 5696 0 - Live 0xbf008000
mmc_block 5640 0 - Live 0xbf005000
mmc_core 14468 2 pxamci,mmc_block, Live 0xbf000000
> C
>
> On Feb 6, 2005, at 1:19 PM, mark gross wrote:
>
> > What happens is that the program fails attempting to open the
> > device.
>
>
>
If you are new to Haskell, please read How to install a Cabal package. The easiest way to install packages in Haskell is using cabal. On Linux, you can install Haskell and cabal using yum install ghc happy cabal-install. You must then update the list of known packages using cabal update.
The simplest way to install the HXQ library is by using the cabal command:
cabal install HXQ

which will automatically download and install all required packages.
(If you use the old base-3 ghc library, use the option -fbase3 in cabal). Then, to compile the xquery command line interpreter, you download xquery.hs and you do:
ghc --make xquery.hs -o xquery

An alternative way of installing HXQ is to download and install all required packages one-by-one: First download HXQ version 0.19.0 and untar it (using tar xfz on Linux/Mac or 7z x on Windows). Then, you execute the following commands inside the HXQ directory:
runhaskell Setup.lhs configure --user
runhaskell Setup.lhs build
runhaskell Setup.lhs install

If the configure command indicates that some packages are missing, download these packages from Hackage and install them using the same process.
HXQ consists of the executable xquery (the XQuery command line interpreter) and a library. An example of using the library is tests/Test1.hs. You compile it using ghc -O2 --make tests/Test1.hs -o a.out. Using the latest ghc (version >= 6.9), one may use quasi-quotations instead of strings, as is shown in the tests. The following describes the XQuery expressions that the HXQ command line interpreter, called xquery, can evaluate.
Currently, HXQ supports type testing and casting using the XQuery expressions: typeswitch, instance-of, cast-as, etc. The validation and type inference systems are still a work in progress. To use type inference, use the option -tp in xquery. To associate an XML document with an XML Schema, use the XQuery import schema statement. For example:
import schema default element namespace "dept" at "data/department.xsd"; validate {doc("data/cs.xml")//gradstudent}; (doc("data/cs.xml")//gradstudent[.//lastname='Galanis']//address) instance of element(address)*For a faster validation of a document, use the Haskell function validateFile:
validateFile "data/dblp.xml" "data/dblp.xsd"
Last modified: 01/08/10 by Leonidas Fegaras
See "FNS and NIS+ Naming" for overview and background information relating to FNS and NIS+. If you are not familiar with NIS+ and its terminology, refer to Part 1 and Glossary of this guide. You will find it helpful to be familiar with the structure of a typical NIS+ environment.
FNS stores bindings for enterprise objects in FNS tables which are located in domain-level org_dir NIS+ directories on NIS+ servers. FNS tables are similar to NIS+ tables. These FNS tables store bindings for the following enterprise namespaces:
Organization namespaces as described in "NIS+ Domains and FNS Organizational Units".
Hosts namespaces as described in "NIS+ Hosts and FNS Hosts"
Users namespace as described in "NIS+ Users and FNS Users".
Sites namespace which allows you to name geographical sites relative to the organization, hosts, and users.
Services namespace which allows you to name services such a printer service and calendar service relative to the organization, hosts, and users.
Hosts in the NIS+ namespace are found in the hosts.org_dir table of the host's home domain. Hosts in an FNS organization correspond to the hosts in the hosts.org_dir table of the corresponding NIS+ domain. FNS provides a context for each host in the hosts table.
Users in the NIS+ namespace are listed in the passwd.org_dir table of the user's home domain. Users in an FNS organization correspond to the users in the passwd.org_dir table of the corresponding NIS+ domain. FNS provides a context for each user in the passwd table.
Bug Description
pytz should provide a simple way to set a different 'zoneinfo' directory for the timezone files.
This could be used to switch to the installed OS files or, in my case, to switch zoneinfo versions more easily.
One solution could be to change the open_resource(name) function in __init__.py:
filename = os.path.
to
filename = os.path.
and insert at module level
zoneinfo_dir = os.path.
In this way, the calling program is able to set the directory to a different one:
import pytz
pytz.
Another option could be to evaluate an environment variable, e.g. TZDATADIR, and use its value as the zoneinfo directory:
zoneinfo_dir = os.environ.
A change would support the question discussed in
https:/
Hello,
I want to confirm the importance of this feature. Because of maintenance costs, it is very useful to have a single copy of the zoneinfo for a project written in several programming languages. Luckily most Unix-like systems already have /usr/share/zoneinfo by default and it would be great if pytz could use it too.
I took the liberty of creating a patch for pytz-2016.7 which implements the ideas from the initial post regarding the environment variable approach. However, I used PYTZ_TZDATADIR as the variable name.
I will apply this patch as supplied. Thanks!
An environment variable seems appropriate here, to avoid the issue where a module (e.g. a third-party dependency) has already been imported and has retrieved timezones, causing timezones to come from data in the unwanted path.
How to consume a RESTful API in React
You will need npm installed, and a basic knowledge of React.
This brief tutorial will help you understand a few concepts you need to know so as to integrate a RESTful API into a React application.
React is one of the most popular frontend frameworks out there, and more and more developers are learning how to build real-life applications with it. When learning React, you will eventually come to a point where you need to integrate APIs into your application.
We will be building a simple contact list application to display the contact’s name and email and then store a catch phrase.
To access the application from any other device connected to the same network, use the address that will be shown to you in the terminal.
Project setup
The next step is to modify the App.js file located in the src folder to look like this:
// src/App.js

import React, {Component} from 'react';

class App extends Component {
  render () {
    return (
      // JSX to render goes here...
    );
  }
}

export default App;

We will use Bootstrap to style the app, so include its stylesheet inside the head of public/index.html:

// public/index.html
...
  <link rel="stylesheet" href="...">
...
</head>
...
When this is done, we will render a bootstrap card in the App.js file by including this snippet in the return() method.
// src/App.js

import React, { Component } from 'react';

class App extends Component {
  render() {
    return (
      <div className="card">
        <div className="card-body">
          <h5 className="card-title">Steve Jobs</h5>
          <h6 className="card-subtitle mb-2 text-muted">steve@apple.com</h6>
          <p className="card-text">Stay Hungry, Stay Foolish</p>
        </div>
      </div>
    );
  }
}

export default App;
If we reload our application, the changes will reflect, showing the contact’s name, email and catch phrase in a bootstrap card.
Creating a state
A state is simply an object that holds data pending to be rendered. This is where we will store the output from the API call.
// src/App.js

import React, { Component } from 'react';

class App extends Component {
  state = {
    contacts: []
  }
  ...
}
In the snippet above we have created a state to store the output from the API call.
Calling the API
To fetch our contact list, we will use the componentDidMount() method in our App.js file. This method is executed immediately after our component is mounted, and we will make our API request in that method.
// src/App.js

import React, { Component } from 'react'

class App extends Component {
  ...
  componentDidMount() {
    fetch('https://jsonplaceholder.typicode.com/users')
      .then(res => res.json())
      .then((data) => {
        this.setState({ contacts: data })
      })
      .catch(console.log)
  }
  ...
}
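For context, each element returned by this endpoint is an object shaped roughly like the following (abridged to the fields our rendering code uses; the values shown are illustrative):

```javascript
// Abridged shape of one element returned by the users endpoint.
// The rendering code reads name, email and company.catchPhrase.
const sampleContact = {
  id: 1,
  name: "Leanne Graham",
  email: "Sincere@april.biz",
  company: {
    catchPhrase: "Multi-layered client-server neural-net"
  }
};

console.log(sampleContact.company.catchPhrase);
```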
Next, create a Contacts component in src/components/contacts.js:

// src/components/contacts.js

import React from 'react'

const Contacts = ({ contacts }) => {
  return (
    <div>
      <center><h1>Contact List</h1></center>
      {contacts.map((contact) => (
        <div className="card" key={contact.id}>
          <div className="card-body">
            <h5 className="card-title">{contact.name}</h5>
            <h6 className="card-subtitle mb-2 text-muted">{contact.email}</h6>
            <p className="card-text">{contact.company.catchPhrase}</p>
          </div>
        </div>
      ))}
    </div>
  )
};

export default Contacts
The Contacts component accepts the contacts state we created earlier and returns a mapped version of it, which loops over the bootstrap card to insert each contact’s name, email and catch phrase.
Rendering the contacts component
The final step to this application is to render our component in src/App.js. To do this, we have to import the component into App.js:
// src/App.js

import React, { Component } from 'react';
import Contacts from './components/contacts';
...
Then, in our render method, we clear out whatever we had there before and pass our component, along with the contacts state, in there for it to be rendered.
// src/App.js

import React, { Component } from 'react'
import Contacts from './components/contacts'

class App extends Component {
  ...
  render() {
    return (
      <Contacts contacts={this.state.contacts} />
    )
  }
}

export default App

If another application is already using the default port, you will be prompted to change the port.
March 29, 2019
by Fisayo Afolayan | https://pusher.com/tutorials/consume-restful-api-react/ | CC-MAIN-2022-21 | refinedweb | 624 | 56.25 |
Building an automated watering system is conceptually simple:
- Build a network of pipes that brings water to plants,
- Use a microcontroller controlling solenoid valve to start or stop the flow of water,
- Activate watering at predefined times through the microcontroller (e.g. once every 2 days for 20 minutes).
We'll look at building an automated watering system with the Omzlo NoCAN IoT platform, which is based on a set of Arduino-compatible nodes managed by a Raspberry Pi. Our general setup is shown in the figure above:
- An Arduino-compatible CANZERO node is connected to a valve, which is switched and off to water the plants.
- The CANZERO node is connected to a Raspberry-Pi with a cable both for power and networking.
- The Raspberry-Pi (fitted with a PiMaster HAT) is used to send instructions to the CANZERO, for automated and manual watering.
To keep things simple, we will limit the features described in this project. But it's easy to build on this example to add support for controlling watering with a smartphone or through the MQTT protocol.
Hardware
Basics
There are plenty of types of water valves, using various voltages, latching or non-latching, adapted to different topologies, etc. One of the simplest water valves is the 12V plastic 13mm (1/2") solenoid valve pictured below and that you can find for cheap on Adafruit or on eBay.
That valve is "normally closed": when there is no current applied to the solenoid, the valve simply closes, stopping the flow of water. It requires a minimum water pressure to operate (0.02 MPa); without that pressure, the valve won't close well. As such, these valves are not bi-directional: they expect water to flow in a specific direction, as noted with an arrow on the plastic body of the valve. While it is rated for 12V, it operates without issue at 9V or even below. A current of 200mA to 300mA is necessary to keep the valve open, letting water flow through.
One of the simplest ways to control this solenoid valve is with a low side (N-channel) Mosfet switch, as illustrated in the figure below.
A microcontroller IO pin is used to control the Mosfet M1. When a sufficient voltage is applied to the gate of M1 current flows through the solenoid and opens the valve. Conversely, when the gate voltage goes down to 0, the Mosfet stops allowing current to flow through, thereby closing the solenoid.
The selected Mosfet should be "logic-level compatible" so that the 3.3V volt provided by a microcontroller is enough to fully turn on the Mosfet. As always with an inductive load like a solenoid, a "flyback diode" D1 is added to protect the circuit from voltage spikes. In addition, a weak pull-down resistor (not shown) should be placed between the gate of M1 and GND. This assures that the gate of the M1 is at a predefined state when the micro-controller is not powered on or defective.
A final point to consider is that a 12V power source must supply current to the solenoid.
Using a CANZERO node
Our goal is to control our watering system with a CANZERO node in a NoCAN network, allowing us later to program it directly from the command line on a Raspberry-Pi or through a web interface.
NoCAN networks use a single cable to connect a series of nodes together. The cable brings both power and networking (CANbus). A NoCAN network can use any voltage between 6 to 28V, but using 12V or 24V is a classic approach. Since we can build a NoCAN network with 12V, we can make an interesting simplification here: we can power both the solenoid and the network nodes with the same source, simplifying our cabling by avoiding an extra 12V power supply and cable for the solenoid. Note that if your NoCAN network uses long cables with small wires, then the activation of the solenoid will generate current and result in a minor voltage drop due to the resistance of the wire. In most settings, this is not a problem.
With this in mind, the low-side Mosfet switching circuit described above is very easy to build on a breadboard or even on a prototype shield that can be connected to a CANZERO node.
But we are lazy and there is an even simpler solution: using the Omzlo GO-24V shield which is an Arduino MKR compatible shield that is designed to interface a microcontroller with 24V systems (and of course 12V systems). It is pictured below, sitting on top of a classic Arduino MKR-Zero.
The GO-24V shield has notably 4 Mosfet controlled sinking outputs, exactly like in the low-side switch we described above.
We connect the VIN header (12V) on the shield to one connector of the valve and we connect the other connector of the valve to the first sinking output of the shield, both identified with an arrow in the figure below.
We only need to add the flyback diode and we are done. One simple way to do this is to build the following cable, designed on one side to connect to the shield through terminal blocks and on the other side to the solenoid with female disconnect crimp terminals that fit the solenoid, integrating a 1N4007 diode. The cathode of the diode should be connected to the 12V side, the anode to GND.
The valve, the CANZERO, and the GO-24 shield can then all be placed in a box and connected to the NoCAN network as well as the water pipes.
Software
Arduino sketch
We will create two channels to control our watering system on the CANZERO node:
- A channel called "watering/valve": sending "1" (or "open") to the channel opens the valve while sending "0" (or "close") closes the valve.
- A channel called "watering/timer": sending a numeric string greater than 0 to that channel will open the valve for the corresponding duration in seconds.
In addition, these channels will have the following properties:
- Sending "0" to "watering/valve" channel will cancel any ongoing watering timer.
- If a timer is set, the CANZERO node will update the channel "watering/timer" with the remaining watering time every 10 seconds, providing feedback to the user.
The gate of the Mosfet controlling the solenoid will be connected to the CANZERO/Arduino pin 0. We will also use the Arduino RTC library for timing purposes, taking advantage of the 32.768 kHz crystal oscillator of the CANZERO.
This provides us the following sketch.
#include <nocan.h>
#include <RTCZero.h>

#define SOLENOID_PIN 0

NocanChannelId valve_cid;
NocanChannelId timer_cid;
uint32_t start, duration, last;
RTCZero rtc;

String msg_to_string(NocanMessage &msg) {
    // nocan messages are always <= 64 bytes long
    char buf[65];
    for (int i=0; i<msg.data_len; i++) {
        buf[i] = msg.data[i];
    }
    buf[msg.data_len]=0;
    return String(buf);
}

void open_valve() {
    digitalWrite(SOLENOID_PIN, HIGH);
    Nocan.led(true);
}

void close_valve() {
    Nocan.led(false);
    digitalWrite(SOLENOID_PIN, LOW);
    if (duration>0)
        Nocan.publishMessage(timer_cid, "0");
    duration = 0;
}

void setup() {
    // Init the RTC
    rtc.begin();

    Nocan.open();
    Nocan.registerChannel("watering/valve", &valve_cid);
    Nocan.subscribeChannel(valve_cid);
    Nocan.registerChannel("watering/timer", &timer_cid);
    Nocan.subscribeChannel(timer_cid);

    pinMode(SOLENOID_PIN, OUTPUT);
    close_valve();
}

void loop() {
    // put your main code here, to run repeatedly:
    NocanMessage msg;

    if (Nocan.receivePending()) {
        Nocan.receiveMessage(&msg);

        if (msg.channel_id == valve_cid) {
            String valve_str = msg_to_string(msg);
            if (valve_str == String("close") || valve_str == String("0")) {
                close_valve();
            }
            if (valve_str == String("open") || valve_str == String("1")) {
                open_valve();
            }
        }

        if (msg.channel_id == timer_cid) {
            String duration_str = msg_to_string(msg);
            duration = duration_str.toInt();
            if (duration>0) {
                last = start = rtc.getEpoch();
                open_valve();
                Nocan.publishMessage(valve_cid, "1");
            }
        }
    }

    if (duration>0) {
        uint32_t now = rtc.getEpoch();
        if (now-start>=duration) {
            close_valve();
            Nocan.publishMessage(valve_cid, "0");
        } else {
            if (last!=now && (duration-now+start)%10==0) {
                Nocan.publishMessage(timer_cid, String(duration-now+start,DEC).c_str());
                last = now;
            }
        }
    }

    delay(100);
}
With the latest version of the Arduino IDE, uploading the sketch to the node is easy: simply select the right node in the Tools/Port submenu and click on the "upload" button.
For more details on uploading sketches see the relevant section in our big tutorial.
Testing
Our system can be tested either with the web interface or from the command line.
On the command line, using nocanc running on the Raspberry-Pi NoCAN gateway:
- typing nocanc publish "watering/valve" 1 should open the valve.
- typing nocanc publish "watering/valve" 0 should close the valve.
- typing nocanc publish "watering/timer" 60 should open the valve for 60 seconds.
We can also use the builtin web user interface to control the watering system. Log on the Raspberry Pi gateway and launch the web UI, for example with the following command:
nocanc webui --web-server=":8080"
Assuming, as an example, that the Raspberry Pi gateway has the address 192.168.0.32, you can then point a browser to http://192.168.0.32:8080, where you will find a screen similar to the following:
Clicking on the channels numbered 0 ("watering/valve") or 3 ("watering/timer") allows controlling the watering system the same way we can do it on the command line.
Notes: our own setup has many channels, including a set of temperature/humidity/pressure channels based on a BME280 sensor.
Scheduling watering
The NoCAN network is controlled by a Raspberry Pi computer fitted with a PiMaster HAT. This is the perfect location to automate your watering system schedule with just a single line of code!
Linux systems, such as the Raspbian distribution that runs many Raspberry Pi machines, offer a very simple tool to schedule tasks on a regular basis: cron. The Raspberry-Pi website already provides a nice and quick tutorial on cron, so we won't go into details here.
Assuming that the nocanc tool has been installed as /usr/local/bin/nocanc and that the watering timer is controlled by the channel named watering/timer, we can program a 10 minute watering session every day at 8:00 AM, using the following entry created with crontab -e:
0 8 * * * /usr/local/bin/nocanc publish "watering/timer" 600
Going further
Adding more nodes
What happens if we have several nodes each controlling one (or more) valve(s)? Each node would need to create channels with a different name to distinguish the different valves, e.g. "watering1/valve", "watering2/valve", etc. This would mean that we need to write a slightly different version of our sketch for each node, changing the name of the channels in each sketch.
Luckily there is a little trick we can use to simplify our life: if we insert the string $(ID) in a channel name used in NoCAN, that portion of the channel name will be replaced by the actual node id of the CANZERO node creating the channel. As such, in our sketch, we can replace the following line:
Nocan.registerChannel("watering/valve", &valve_cid);
with:
Nocan.registerChannel("watering/$(ID)/valve", &valve_cid);
If the node creating the channel is node number '6', then the created channel will be watering/6/valve. If 3 different nodes run the same sketch they will, therefore, create 3 channels, each with a different name based on their node identifier. Node identifiers remain constant across restarts of the nocand server, so, for example, we can use the channel watering/6/valve in our crontab script as it will not change.
We would proceed similarly for the channel watering/timer, which would be replaced by watering/$(ID)/timer.
We only need to write one Arduino sketch and we can deploy it on as many nodes as needed!
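With per-node channel names, a single crontab on the gateway can then schedule each watering zone independently — a sketch, where the node ids (6 and 7), times and durations are only examples:

```
# Node 6: water every day at 8:00 AM for 10 minutes
0 8 * * * /usr/local/bin/nocanc publish "watering/6/timer" 600
# Node 7: water every other day at 8:30 PM for 5 minutes
30 20 */2 * * /usr/local/bin/nocanc publish "watering/7/timer" 300
```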
Final thoughts
To keep things simple, this blog entry provided just a basic but functional watering system based on the NoCAN IoT platform. It's easy to extend it by adding a nice web interface to control and schedule watering operations. You can also connect this system to blynk in order to control plant watering with a smartphone, even if you are far away from your garden!
As a next step, we will try to add a capacitive soil moisture sensor to schedule watering based on the soil humidity. But this is for another blog entry!
To stay updated on the NoCAN project, don't forget to follow us on Twitter or on our Facebook page. | https://www.omzlo.com/articles/automated-plant-watering-with-can-bus | CC-MAIN-2020-45 | refinedweb | 2,036 | 52.29 |
NAME¶
sin, sinf, sinl - sine function
SYNOPSIS¶
#include <math.h>
double sin(double x);
float sinf(float x);
long double sinl(long double x);
Link with -lm.
sinf(), sinl():
_ISOC99_SOURCE || _POSIX_C_SOURCE >= 200112L
|| /* Since glibc 2.19: */ _DEFAULT_SOURCE
|| /* Glibc <= 2.19: */ _BSD_SOURCE || _SVID_SOURCE
DESCRIPTION¶
These functions return the sine of x, where x is given in radians.
RETURN VALUE¶
On success, these functions return the sine of x.

If x is a NaN, a NaN is returned.

If x is positive infinity or negative infinity, a domain error occurs, and a NaN is returned.

ERRORS¶

See math_error(7) for information on how to determine whether an error has occurred when calling these functions.

The following errors can occur:

Domain error: x is an infinity
errno is set to EDOM (but see BUGS). An invalid floating-point exception (FE_INVALID) is raised.

CONFORMING TO¶

C99, POSIX.1-2001, POSIX.1-2008.

The variant returning double also conforms to SVr4, 4.3BSD, C89.
BUGS¶
Before version 2.10, the glibc implementation did not set errno to EDOM when a domain error occurred.
SEE ALSO¶
acos(3), asin(3), atan(3), atan2(3), cos(3), csin(3), sincos(3), tan(3)
COLOPHON¶
This page is part of release 5.13 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at https://www.kernel.org/doc/man-pages/.
We see how to use QHBoxLayout to nicely put the contained widgets in a horizontal box; how to set the title in a window; we get acquainted with the spin box and the slider widget, and we see how to connect them, so that changing the value of one automatically results in the adjustment of the value of the other.
All this stuff in such a short example:
#include <QtGui/QApplication>
#include <QtGui/QHBoxLayout>
#include <QtGui/QSlider>
#include <QtGui/QSpinBox>
namespace
{
const int MIN_AGE = 0;
const int MAX_AGE = 130;
const int DEF_AGE = 35;
}
int b103(int argc, char** argv)
{
QApplication app(argc, argv);
QSpinBox* sb = new QSpinBox(); // 1.
sb->setRange(MIN_AGE, MAX_AGE);
QSlider* sl = new QSlider(Qt::Horizontal); // 2.
sl->setRange(MIN_AGE, MAX_AGE);
QObject::connect(sb, SIGNAL(valueChanged(int)), sl, SLOT(setValue(int))); // 3.
QObject::connect(sl, SIGNAL(valueChanged(int)), sb, SLOT(setValue(int)));
// spin box and slider now are connected
sb->setValue(DEF_AGE); // 4.
QHBoxLayout* layout = new QHBoxLayout(); // 5.
layout->addWidget(sb);
layout->addWidget(sl);
QWidget* w = new QWidget(); // 6.
w->setWindowTitle("Enter Your Age");
w->setLayout(layout);
w->show();
return app.exec();
}
1. create and then specify a range for a spinbox widget.
2. create a (horizontal) slider and then specify its range.
3. the connection between two widgets is no different from the connection between a widget and the application. We just specify the object and the method for the emitting widget and then the object and method for the receiving one.
4. now that our two widgets are connected, we can set a default value for one of them, relying on the signal-slot mechanism to let the other be consequently adjusted.
5. we create a horizontal box and put in it our two widgets.
6. the widget w is the main (and only) window of our application, we set its title calling the setWindowTitle() method, then we set its layout using the one we have just created, and show the result to the user.
I wrote this post as homework while reading "C++ GUI Programming with Qt 4, Second Edition" by Jasmin Blanchette and Mark Summerfield. | http://thisthread.blogspot.com/2010/06/layout-and-widget-synchronization.html | CC-MAIN-2018-26 | refinedweb | 351 | 51.58 |
Make the RIL thread code talk to the POSIX socket provided by the b2g-dialer-daemon.
In bug 698621 we're having our existing IO thread talk to the ril socket, and then ferry complete packets over to the ril worker. Did that plan change? If so, why?
Also depends on the outside task of getting the radio socket account change daemon finished and integrated. See
This is a subportion of that bug. I was just creating this for the file socket implementation part, I thought bug 698621 was about getting the tri-thread comms working?
That's fine --- what confused me was that we discussed doing the socket IO on the IO thread, but this bug title says "worker thread" which might mean either the IO thread or the ril Web Worker thread.
My fault. Fixed.
Created attachment 577401 [details] [diff] [review]
WIP for IPC Thread communication with cell phone radio

Attaching the current work in progress for the RIL. Hasn't been tested on phone yet due to issues listed at but the general idea is there. Still need to clean out printfs. Would be nice to leave in network code for desktop testing, can #if 0 it before final landing.
Created attachment 578714 [details] [diff] [review] Part 1 (v1): Add MOZ_B2G_RIL configure flag
Created attachment 578716 [details] [diff] [review] Part 2 (v1): RIL IPC implementation
Landed part 1 on inbound to unblock all the other RIL related patches:
Created attachment 579022 [details] [diff] [review] Part 2 (v2): RIL IPC implementation Kyle changed RIL Write to use tasks instead of fd polling, fixes write thread CPU exhaustion issue.
Part 1:
Comment on attachment 579022 [details] [diff] [review]
Part 2 (v2): RIL IPC implementation

>diff --git a/ipc/ril/Makefile.in b/ipc/ril/Makefile.in
>+# Contributor(s):
>+#   Chris Jones <jones.chris.g@gmail.com>
>+#   Kyle Machulis <kmachulis@mozilla.com>
>+#

Nit: tab character crept in.

>diff --git a/ipc/ril/Ril.cpp b/ipc/ril/Ril.cpp
>+#if defined(MOZ_WIDGET_GONK)

Is ril-enabled-but-no-gonk a configuration we want to continue to support? Why?

>+class RILWriteTask : public Task {

Nit: "RilWriteTask".

>+bool
>+RilClient::OpenSocket()
>+{
>+  /*
>+   * XXX IMPLEMENT ME
>+   */

I think this is implemented ...

>+  if(mSocket.mFd < 0)
>+  {

Nit: "if (mSocket.mFd < 0) {"

>+
>+
>+

Extraneous whitespace.

>+  if(connect(mSocket.mFd, (struct sockaddr *) &addr, alen) < 0) {

Nit: "if (connect(..."

>+
>+
>+

Extraneous whitespace.

>+void
>+RilClient::OnFileCanReadWithoutBlocking(int fd)
>+  if(ret <= 0)
>+  {

Nit: "if (ret <= 0) {"

>+?

>+void
>+RilClient::OnFileCanWriteWithoutBlocking(int fd)
>+{
>+  MOZ_ASSERT(fd == mSocket.mFd);
>+
>+  /*
>+   * IMPLEMENT ME
>+   */

I think this is implemented too (partly).

See ipc_channel_posix.cc#663
for an example of how this should work. If we fail to write a
complete message, we need to remember it, register our write-ready
watcher, and then continue.

>diff --git a/ipc/ril/Ril.h b/ipc/ril/Ril.h
>+struct RilMessage
>+{
>+  static const size_t DATA_SIZE = 1024;
>+  char mData[DATA_SIZE];

What limits RIL messages to <= 1024 bytes?
> >+#if defined(MOZ_WIDGET_GONK)
>
> Is ril-enabled-but-no-gonk a configuration we want to continue to
> support? Why?

This allows us to adb forward the file socket to the desktop for development and testing using the RIL threads, like we would on the phone. We've also got a solution that just uses nsISocketTransport that I suppose we could use, but I liked being able to have the IPC <-> worker interface in place even while doing desktop dev.

> >+?

We're just trying to exhaust the buffer at this point. We have no idea what a message even is here, we're just bringing in bytes and shoving them up to the state machine. The JS parcel handler deals with the storage of partially read messages. We could most likely reallocate a buffer for the full size every time we read, but why? This just saves us some management.

> >+
>
> See ipc_channel_posix.cc#663
> for an example of how this should work. If we fail to write a
> complete message, we need to remember it, register our write-ready
> watcher, and then continue.

Ok, yeah, since we're sending a PostTask in the first place with no idea whether it'll block or not, this could probably be more robust.

> >diff --git a/ipc/ril/Ril.h b/ipc/ril/Ril.h
> >+struct RilMessage
> >+{
> >+  static const size_t DATA_SIZE = 1024;
> >+  char mData[DATA_SIZE];
>
> What limits RIL messages to <= 1024 bytes?

Nothing. That was just a static size to read in/out easily so I didn't have to reallocate at any point. There's also not going to be RIL messages in the multiple megabyte (or even near a mb) range, so we shouldn't cause lots of polling loops on this. Was basically just an average to get the data thru and up to the state machine.
Ok, looking at the review and the code again, we really need to rename RilMessage to RilRawData or something, so it's obvious we're dealing with bytes without context on this level. We started calling it RilMessage back before we realized we were doing everything up in the worker in JS. I'll add that to the patch.
Created attachment 579623 [details] [diff] [review]
Part 2 (v3): RIL IPC implementation

Patch with Kyle's latest changes from the GitHub fork.
Comment on attachment 579623 [details] [diff] [review]
Part 2 (v3): RIL IPC implementation

>diff --git a/ipc/ril/Makefile.in b/ipc/ril/Makefile.in
>+# Contributor(s):
>+#   Chris Jones <jones.chris.g@gmail.com>
>+#   Kyle Machulis <kmachulis@mozilla.com>
>+#

Still have a stray tab here.

>diff --git a/ipc/ril/Ril.cpp b/ipc/ril/Ril.cpp
>+#if defined(MOZ_WIDGET_GONK)
>+#include <sys/socket.h>
>+#include <sys/un.h>
>+#include <sys/select.h>
>+#include <sys/types.h>
>+#endif
>+

Does this need to be ifdef WIDGET_GONK? I don't think it does. Please remove if not.

>+  RilRawData* mCurrentRilRawData;

Make this nsAutoPtr<RilRawData>.

>+void
>+RilClient::OnFileCanWriteWithoutBlocking(int fd)
>+{
>+  // Try to write the bytes of mCurrentRilRawData. If all were written, continue.
>+  //
>+  // Otherwise, save the byte position of the next byte to write
>+  // within mCurrentRilRawData, and request
>+

And request what?!?? What?!?!? A pony? The suspense is killing me.

>+  const uint8_t *toWrite;
>+
>+  toWrite = (const uint8_t *)mCurrentRilRawData->mData;

  const uint8_t* toWrite = mCurrentRilRawData->mData;

You shouldn't need to cast this.

>+  delete mCurrentRilRawData;
>+  mCurrentRilRawData = NULL;

With mCurrentRilRawData as an nsAutoPtr, you just need to assign it null to free its pointed-to data. So get rid of the |delete mCurrentRilRawData| statement.

>diff --git a/ipc/ril/Ril.h b/ipc/ril/Ril.h
>+struct RilRawData
>+{
>+  static const size_t DATA_SIZE = 1024;

Call this MAX_DATA_SIZE.

r=me with those fixes.
Pushed on Kyle's behalf:
Part 2: | https://bugzilla.mozilla.org/show_bug.cgi?id=699222 | CC-MAIN-2017-17 | refinedweb | 1,088 | 67.25 |
In this tutorial we will:
• Create a View Controller from scratch
• Implement this View to your current OF project
• Add a couple of UIKit components to adjust parameters on our OF project
You don’t need to be an expert in Objective-C and/or iOS development, but basic knowledge will help you understand the process better.
Requirements
• OF v0073
• XCode 4.5.x
Support
If you are having problems following the tutorial or need a more in-depth explanation, please leave a comment below or contact me on:
• Twitter @nardove
Setting up
To start the tutorial, let's create a new OF project for iOS that will draw a solid circle in the middle of the screen. Here is how your code should look:
testApp.h
#pragma once

#include "ofMain.h"
#include "ofxiPhone.h"
#include "ofxiPhoneExtras.h"

class testApp : public ofxiPhoneApp {
public:
    void setup();
    void update();
    void draw();

    void touchDoubleTap(ofTouchEventArgs & touch);

    float radius;
    bool hasFill;
};
testApp.mm
#include "testApp.h"

void testApp::setup() {
    ofBackground( ofColor::red );

    radius = 100;
    hasFill = true;
}

void testApp::update() {
}

void testApp::draw() {
    if ( hasFill ) {
        ofFill();
    }
    else {
        ofNoFill();
    }

    ofSetColor( ofColor::white );
    ofCircle( ofGetWidth() / 2, ofGetHeight() / 2, radius );
}

void testApp::touchDoubleTap(ofTouchEventArgs & touch) {
}
First Steps
We will create 3 new files in the src folder: select src, Right+Click, on the menu select New File, and create the following 3 files:
• MyGuiView.h
• MyGuiView.mm
• MyGuiView.xib
On the New File window under iOS select C and C++
Name your file, and change the extension of the newly created .cpp file to .mm; this will allow you to mix Objective-C with C++ code in the same file. Then New File again, select User Interface and select View, click Next, on Device Family select iPhone, click Next, and finally name your xib file and click on Create.
Note: make sure that all files share the same name it is very important.
You should see something like the image below in your Project Navigator:
Before we set up our xib file, let's add the necessary code to its class.
Open MyGuiView.h and add the following:
#import <UIKit/UIKit.h>

@interface MyGuiView : UIViewController
@end
Then open MyGuiView.mm and add:
#include "MyGuiView.h"
#include "ofxiPhoneExtras.h"
#include "testApp.h"

@implementation MyGuiView

testApp *myApp;

-(void)viewDidLoad {
    myApp = (testApp*)ofGetAppPtr();
}

@end
After this is done we can set up our xib file, MyGuiView.xib: double click on it, and Xcode will automatically change its view to deal with xib files. You will notice a couple of icons in the middle of the window.
First click on File’s Owner icon and on the Identity Inspector change its Class to MyGuiView.
Then on the Connections Inspector connect the outlet view to the View icon.
You can now set the view properties specific to your device, I recommend to change the view background to a different colour from you testApp and make it slightly transparent, that way we can see the changes to our app as we change parameters from our view.
Let's set up our OF project: open testApp.mm and add the following code:
#include "MyGuiView.h"

MyGuiView *gui;

void testApp::setup() {
    ...
    gui = [[MyGuiView alloc] initWithNibName:@"MyGuiView" bundle:nil];
    [ofxiPhoneGetGLView() addSubview:gui.view];
}

...

void testApp::touchDoubleTap(ofTouchEventArgs & touch) {
    // toggle gui view visibility
    gui.view.hidden = !gui.view.hidden;
}
If you Compile and Run you should see the new UIView on top of testApp, and when a double tap is detected the UIView should show and hide accordingly.
Setting up our GUI
Let's go back to MyGuiView.h and add the following code, marked in bold letters:
#import <UIKit/UIKit.h>

@interface MyGuiView : UIViewController

@property(retain, nonatomic) IBOutlet UISlider *radiusSlider;
@property(retain, nonatomic) IBOutlet UISwitch *fillSwitch;

@end
We just created our outlets to tell XCode that those properties we’re going to want to connect to an object(s) in our xib file.
Open MyGuiView.mm and add the following:
#include "MyGuiView.h"
#include "ofxiPhoneExtras.h"
#include "testApp.h"

@implementation MyGuiView

testApp *myApp;

-(void)viewDidLoad {
    myApp = (testApp*)ofGetAppPtr();
}

-(IBAction)radiusSliderHandler:(id)sender {
    UISlider *sliderObj = sender;
    myApp->radius = [sliderObj value];
}

-(IBAction)fillSwitchHandler:(id)sender {
    UISwitch *switchObj = sender;
    myApp->hasFill = [switchObj isOn];
}

@end
These are the methods that can be triggered by a control in a xib file.
Now we’ll add the necessary components to our UIView, open up MyGuiView.xib, if the Utilities panel is not already open select View > Utilities > Show Utilities. First select a Label object from the Object Library and drag it to the view, preferably to the top left corner, double click on it to change its text to “Circle Radius”.
Underneath the Circle radius label add another Label, change its text to “Render fill”, then add a Slider next to the Circle radius label adjust its size to the edge of the screen until you see the blue margin line, after that add a Switch next to Render fill label, place it near until you see the margin blue line. You should have something like the image below:
Now click on the slider component again and open the Attributes Inspector to change its minimum, maximum and current starting values, it should look something like this:
We are almost done, click on the File’s Owner icon and open up the Connection Inspector
All we need to do here is make a couple of connections, we’ll connect our outlets to our components, on the Outlets panel click on the circle on the right of radiusSlider and drag it to the slider next to the Circle radius label like shown in the following image:
Do the same for the fillSwitch outlet like this:
The last step is to tell the components what to do when the user interacts with them (connect action methods). To make this happen we must connect the action method to the corresponding component: on the Received Actions panel, click and drag the circle next to radiusSliderHandler to the slider component; a drop down menu will appear, and from there select Value Changed. What this means is that every time the slider changes its value, the attached method will be called, updating the corresponding values.
We do the same for the switch component: click and drag the circle next to fillSwitchHandler to the switch component, and from the drop-down menu select Value Changed.
And that is it! Run the project and you should be able to change the circle’s appearance using the controls we just created.
I hope this tutorial has helped you understand the process of adding UIKit components to your existing OF project, and gives you a starting point for learning how to add other types of components.
You can find the entire project here (download NativeUIKitExample.zip)
I tried executing a simple line drawing program using exec().
It worked fine. But when I tried to execute the same program by sending it through a TCP/IP network (the server reads the program and sends it to the client, which receives it into a string variable b), and then use exec(b) in the client to execute it, I get:
NameError: global name 'display' is not defined
The line drawing code is:
from OpenGL.GLUT import *
from OpenGL.GLU import *
from OpenGL.GL import *
import sys

name = 'line'

def display():
    glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT)
    glPushMatrix()
    glTranslatef(-1,-1,0)
    gluLookAt(0.1, 0.1, 0.3,
              0.0, 0.0, 0.0,
              0.0, 1.0, 0.0)
    glLineWidth(3.0)
    color = [1.,1.,1.,1.]
    glBegin(GL_LINES)
    glVertex3f(0,0,0)       # origin of the line
    glVertex3f(.5,1.0,.9)   # ending point of the line
    glEnd()
    glPopMatrix()
    glutSwapBuffers()
    return

def main():
    glutInit(sys.argv)
    print 'hello'
    glutCreateWindow(name)
    glClearColor(0.4,0.5,0.3,1.0)
    glutDisplayFunc(display)
    glutMainLoop()
    return

main()

This part of the client code receives the program and stores it into the variable, and then we use exec():

while f:
    a = client.recv(1024)
    if a=="#p":
        f=0
        break
    b+=a
print b
exec(b)
The code executes up to the part where print 'hello' is given and then stops.
The error message:
hello
Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib/python2.6/threading.py", line 532, in __bootstrap_inner
self.run()
File "r13client.py", line 31, in run
exec(b)
File "<string>", line 34, in <module>
File "<string>", line 31, in main
NameError: global name 'display' is not defined
I am unable to understand what is going wrong here. If anyone could help, I'd be grateful.
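For reference (this note is not from the thread): exec resolves names against the namespace it is given. When code that both defines and calls functions is exec'd inside a method without an explicit globals dict, the def statements land in the method's local scope while the function bodies look names up in the real module globals, which is one way to end up with exactly this NameError. A minimal sketch of exec with an explicit, shared namespace:

```python
# Hypothetical miniature of the received program: a function that
# calls another function defined in the same exec'd source.
code = (
    "def display():\n"
    "    return 'drawn'\n"
    "\n"
    "def main():\n"
    "    return display()\n"
    "\n"
    "result = main()\n"
)

namespace = {}
exec(code, namespace)  # defs and lookups share one dict, so display() is found
print(namespace['result'])  # drawn
```

The same two-argument form works as an exec statement in Python 2, so exec(b, {}) in the client would be one thing to try.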
We've struggled for the last several releases with multiple release summaries and overviews, keeping them in sync, and so forth. Last release, Paul and I swore we would find a way to provide a single, canonical source for *all* of these.

Today I noticed yet another release summary in the wild that needs taming, inclusion, or something: A release summary is referred to there that can be added to by anyone. *sigh*

Do we want to invite those people to instead add their enhancements to canonical summary? Then cross-link back in to that namespace? We can use clever [[Include()]] statements that draw only specific parts, so each 'mirror' of the canonical summary can show the most appropriate fragment.

Anyone interested in doing this work? It involves:

* Working out a single canonical location from amongst the several
  - Releases/#/ReleaseSummary
  - Docs/Beats/OverView
  - Press release needs (more lightweight)
  - etc.
* Define a format for that page so that it can be fragmented (if needed) for different summaries
* Write up a process for jamming all that together
* Publicize/evangelize

- Karsten

--
Karsten Wade, Developer Community Mgr.
Dev Fu : Fedora : gpg key : AD0E0C41
Nesting Laravel 4 Routes
Posted: 2014-01-31 13:21:00
Make sure your routes file is set up to handle this.
For my example it is projects, and they have issues.
So my route looks like this
# routes.php
Route::resource('projects', 'ProjectsController');
Route::resource('projects.issues', 'IssuesController');
So now my URLs will look like this
/projects/4/issues    <-- shows all issues
/projects/4/issues/2  <-- shows issue 2 in project 4
Finally, on the Project Show page I have these linkRoute calls in place
<tr>
    <td>{{ $issue['id'] }}</td>
    <td>{{ HTML::linkRoute('projects.issues.show', $issue['name'], array($project->id, $issue['id'])) }}</td>
    <td>{{ $issue['active'] }}</td>
    <td>{{ $issue['description'] }}</td>
</tr>
and
{{ HTML::linkRoute('projects.issues.create', 'Create Issue', $project->id, array('class' => 'btn btn-info')) }}
That is it. I will post my Controller shortly for Issues.
21 November 2008 17:48 [Source: ICIS news]
HOUSTON (ICIS news)--Celanese shares plunged 32% on Friday as analysts downgraded the US chemical producer's stock following its Thursday disavowal of its 2008 earnings guidance.
Celanese shares plunged to an all-time low of $5.71 on Friday from Thursday's close of $8.46, before bouncing back to $6.94 in New York trading.
Bank of America analyst Kevin McCarthy said he was expecting Celanese only to break even in the fourth quarter this year and slashed his estimate of year-end earnings per share (EPS) to $3.05 from his previous estimate of $3.48.
"It is hard to overstate chemical industry weakness," McCarthy said in a note to investors, adding that recent comments and announcements by Dow and BASF show that Celanese "is not suffering alone."
McCarthy remained upbeat about Celanese's business in the coming year, though, calling the stock a Buy and setting a 12-month target price of $17.
Citi Investment Research analyst PJ Juvekar was less positive, downgrading Celanese to Hold and cutting his price target to $9 from $28.
Citigroup slashed target prices and profit estimates for Celanese and four other major US chemical producers.
On Thursday Celanese pulled its previous full-year earnings guidance of $3.40-$3.55 per share issued only a month ago, saying economic conditions have had more impact than it expected.
Based in
($1 = €0.80)
For more on Celanese visit ICIS company intelligence | http://www.icis.com/Articles/2008/11/21/9173794/celanese-stock-dives-32-on-analyst-downgrades.html | CC-MAIN-2014-10 | refinedweb | 247 | 64.51 |
JavaScript is partly a functional language.
To learn JavaScript, we've got to learn the functional parts of JavaScript.
In this article, we’ll look at how to use higher-order functions.
Higher-Order Functions in the Real World
Higher-order functions are used in the real world a lot.
For example, arrays have many instance methods that are higher-order functions.
One of them is the every method.
every takes a callback that returns the boolean expression to check if each item is what we’re looking for.
For example, we can use it by writing:
const allEven = [1, 2, 3].every(a => a % 2 === 0);
We pass in a callback to check if each entry is evenly divisible by 2.
This should return false, since we have 1 and 3, which are odd.
Also, the every method can be implemented with our own code:
const every = (arr, fn) => {
  for (const a of arr) {
    if (!fn(a)) {
      return false;
    }
  }
  return true;
}
We loop through the entries of arr and call fn to check whether each entry matches the given condition. If one doesn't, we return false, since we have at least one item that fails the condition given in the callback.
We can use it by writing:
const allEven = every([1, 2, 3], a => a % 2 === 0)
And allEven should be false.
some Function
The some method is similar to every.
It’s also part of the array instance.
For example, we can call the array instance's some method by writing:
const hasEven = [1, 2, 3].some(a => a % 2 === 0)
We call the some method on the array. The callback returns the condition that we're looking for.
It checks if at least one item matches the given condition.
Therefore, hasEven should be true, since 2 is even.
Also, we can implement it in our own way.
For example, we can write:
const some = (arr, fn) => {
  for (const a of arr) {
    if (fn(a)) {
      return true;
    }
  }
  return false;
}
We loop through the items and check if fn(a) returns true. If one does, then we return true. Otherwise, we return false.
We can call our own some function by writing:
const hasEven = some([1, 2, 3], a => a % 2 === 0)
Then we get true.
We pass in the array and the callback function that returns the function we’re checking for.
sort
The array instance's sort method takes a function that lets us compare 2 entries and sort them.
The callback takes 2 parameters, which are 2 entries of the array.
If the first parameter should come before the second, then we return a negative number.
If we keep the same order, then we return 0.
Otherwise, we return a positive number.
We can improve this by creating a function that returns a comparator function.
For example, we can write:
const sortBy = (property) => {
  return (a, b) => a[property] - b[property];
}

const arr = [{ foo: 3 }, { foo: 1 }, { foo: 2 }]
const sorted = arr.sort(sortBy('foo'));
console.log(sorted);
We have the sortBy function that returns a function to let us compare a property value.
Then we call arr.sort with our sortBy function, which returns the comparator function for the property we want.
Then sorted should be:
[ { "foo": 1 }, { "foo": 2 }, { "foo": 3 } ]
We can see the items are sorted.
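The subtraction inside sortBy only works when the property values are numbers. As a hedged extension (compareBy is our own name, not from the article), we can branch on the value type and fall back to localeCompare for strings:

```javascript
// A more general comparator factory. Strings can't be subtracted,
// so string-valued properties are compared with localeCompare.
const compareBy = (property) => (a, b) => {
  const x = a[property];
  const y = b[property];
  if (typeof x === 'string') {
    return x.localeCompare(y);
  }
  return x - y;
};

const people = [{ name: 'carol' }, { name: 'alice' }, { name: 'bob' }];
const byName = people.sort(compareBy('name'));
console.log(byName.map(p => p.name)); // [ 'alice', 'bob', 'carol' ]
```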
Conclusion
We can implement various array methods our way.
There are many applications of higher-order functions in the real world. | https://thewebdev.info/2020/08/13/functional-javascript%E2%80%8A-%E2%80%8Ahigher-order-functions-in-the-real-world/ | CC-MAIN-2020-45 | refinedweb | 578 | 74.49 |
This article explores the Entity Maps operational mode of Kerosene ORM. Please refer to the introductory article Kerosene ORM Introductory Article for more context information as this article elaborates on the concepts introduced in it.
Kerosene ORM
In the discussions that follow we are going to use basically the same business scenario as the one used in the introductory article. Remember that we were dealing with a minimalist HR system composed of three tables:
The Regions table maintains the hierarchy of regions our company uses, and its ParentId column maintains what region the current one belongs to, or null if it is a top-most one. The Countries table maintains the countries where our company has operations, and its RegionId column contains the region that country belongs to. Finally, the Employees table maintains our employees, and contains both a CountryId column (that must not be null) and a ManagerId one that, if it is not null, identifies the manager of the current employee.
As discussed in the introductory article, the Kerosene ORM Entity Maps operational mode takes care of dynamically mapping the records obtained from the database to instances of our business-level POCO classes, tracking their contents, state and dependencies. Remember that Kerosene ORM does not require us to write external configuration or mapping files, use any kind of pre-canned conventions, pollute our classes with attributes, or make them inherit from any ORM-specific base class.
The only caveat to bear in mind is that our POCO types have to be classes, not structs or any other kind of objects.
A repository is an object that implements the IDataRepository interface, used by Kerosene ORM to maintain a view on the state and contents of the underlying database, what maps have been explicitly or implicitly registered into it to put in correspondence tables and POCO classes, and to access the cache of managed entities. Repositories also implement the Repository and Unit Of Work patterns.
The easiest way of obtaining a repository is by using the Create() method of the static RepositoryFactory class, as follows:
using Kerosene.ORM.Maps;
...
var link = ...;
var repo = RepositoryFactory.Create(link);
Here the link object passed to that method can be obtained by any of the mechanisms discussed in the introductory article. Remember that this object will manage the physical connection with the underlying database, opening, closing and disposing it as needed – our applications need not bother with these details.
Once we have obtained a repository instance we can just start using it without any further ceremony. Let's suppose that we have laid down our Region POCO class as a representation of the columns in the database we are interested in:
public class Region
{
public string Id { get; set; }
public string Name { get; set; }
public string ParentId { get; set; }
}
Now, to retrieve the list of regions in our database we can just write:
var cmd = repo.Query<Region>();
var list = cmd.ToList();
We have not written any configuration or mapping files. We have not written any map. We have not polluted our domain-level code with any attributes or ORM related stuff. Actually, we have not even told Kerosene ORM the name of the table to use for these entities.
By default Kerosene ORM maps operate in a Simple (or Table) mode in which the members in the POCO class are automatically mapped to columns whose name match the name of those members. If such names are not case sensitive in the database Kerosene ORM will follow the database rule and won’t enforce case sensitiveness when finding those matches. If a match is not found the corresponding member, or column, is not taken into consideration.
In this scenario the first time the type of a POCO class is used Kerosene ORM will try to find a suitable table in the database using some educated guesses based upon the name of that type. If such table is found, then a “weak” map is created on our behalf using such direct correspondence rule among the members of our type and the columns found in that table.
These maps are said to be “weak” because if we register an explicit map for that type, and such weak map was registered into the repository, then it is discarded. The reason is because a given type can be registered only once in a given repository.
We can also filter what entities to retrieve using any logic that fits into the domain problem we are trying to solve:
var cmd = repo.Where<Employee>(x => x.Id == "007");
var emp = cmd.First();
Note that we are not constrained to use a pre-canned set of FindXXX() methods, but rather we can use any logic we need. As we will see below Kerosene ORM provides a wide range of methods in its Query commands to support non-conventional query scenarios where, for instance, and even in this POCO world, we can query or join from several tables simultaneously.
To persist a new entity into the database we just need to use the Insert() method with the affected entity:
var emp = new Employee() { Id = "007", CountryId = "uk" };
...
var cmd = repo.Insert(emp);
cmd.Submit();
...
repo.ExecuteChanges();
Kerosene ORM uses the Unit of Work pattern, so we have to create the command associated with the entity, submit that command and, when we are done with all the changes we may need in between, including other change operations, execute them all as a single unit. The UpdateNow() method is a handy way to submit and execute the command, along with any other submitted ones, in just one call. There are also similar InsertNow() and DeleteNow() methods available.
Finally, we can delete our entity by using:
repo.Delete(obj).Submit();
repo.ExecuteChanges();
or just:
repo.DeleteNow(obj);
If, for any reason, we are not happy with the changes we have annotated into a repository we can discard them all by using the repository DiscardChanges() method:
repo.DiscardChanges();
When invoking this method all the pending operations are disposed.
Kerosene ORM is prepared to identify what columns are read-only ones in the database. As a safety net, even when they are used in a given map, they are not going to be persisted back into the database despite how their corresponding members were used in our domain model. This information is obtained from the database and Kerosene ORM provides no mechanism to circumvent it.
Other ORM solutions require that our business classes have a parameterless constructor – Kerosene ORM does not. Remember that its philosophy it to impose no restrictions to the way we may want to develop and architect our POCO classes.
If our POCO class has such a constructor it will be captured and used for performance reasons. If not then Kerosene ORM will create in memory an un-initialized object without invoking any constructor but, at least, we will have an instance ready to be used.
When we need to use a table name that Kerosene ORM cannot figure out automatically, when our POCO class contains members that should not be mapped for whatever reasons, when the contents of those members are to be obtained by querying the database, performing complex calculations, or even accessing external systems, or when we use navigational properties in our POCO class and we want to take these dependencies into consideration, then we need to create a custom map.
The recommended way of proceeding is by creating a class that inherits from the DataMap<T> base one, and performing those customizations in its constructor:
public class RegionMap : DataMap<Region>
{
public RegionMap(DataRepository repo) : base(repo, x => x.Regions)
{ ... }
}
The DataMap<T> constructor takes two arguments. The first one is the repository where our new map instance will be registered into, and the objects returned by the RepositoryFactory class can be casted into DataRepository instances safely. The second one is a dynamic lambda expression that resolves into the name of the primary table in the database.
It may happen our table is used to maintain records that we will map to different POCO types. For instance suppose that we have in our domain the Employee class, but also the Director one to represent those employees without a manager associated to them. We can use the Discriminator property of the map to express this condition:
public class Director : Employee { ... }
public class DirectorMap : DataMap<Director>
{
public DirectorMap(DataRepository repo) : base(repo, x => x.Employees)
{
Discriminator = x => x.ManagerId == null;
...
}
}
This property has the Func<dynamic, object> signature and if it is not null will be parsed and injected as part of the WHERE clauses when needed.
If our table has a column used to keep track of the version of the row we can tell the map to take this column into consideration when executing update or delete operations, by using its VersionColumn property:
VersionColumn.SetName(x => x.MyVersionColumnName);
Even if there is not a corresponding member in our POCO class Kerosene ORM will keep track of the last value retrieved from the database, and will compare it with the most up-to-date one before executing those operations. If the value has changed then a ChangedException exception will be thrown.
Note that we have not had to specify the type of the values maintained by that column. By default Kerosene ORM will use an agnostic normalized string representation to perform those comparisons. If for whatever reasons you would like to modify how these values are compared you can set the ValueToString property of the VersionColumn one, which is a delegate that takes the object representing the value and returns a string representation:
VersionColumn.ValueToString = x => x.ToString();
Note that modifying this property is not needed as the default mechanism will suffice in almost all possible scenarios.
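As a hedged sketch of how a caller might react to that concurrency check (the retry policy and the Employee entity here are ours, not part of Kerosene ORM's documented surface; only the Where/First/UpdateNow/DiscardChanges calls and the ChangedException type come from this article):

```
// Hypothetical optimistic-concurrency handling around an update.
var emp = repo.Where<Employee>(x => x.Id == "007").First();
emp.CountryId = "es";

try
{
    repo.UpdateNow(emp);
}
catch (ChangedException)
{
    // Someone else updated the row since we read it: discard our
    // pending operations and re-read the fresh version before retrying.
    repo.DiscardChanges();
    emp = repo.Where<Employee>(x => x.Id == "007").First();
}
```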
It may happen our POCO class has a member whose name matches the one of a column in the database but, for whatever reasons, we don’t want that column-member combination to participate in the map. Easy, we just need to tell the map this by adding an entry into its Columns collection, and specifying that this column has to be excluded:
Columns.Add(x => x.MyColumn).SetExcluded(true);
Eager and Lazy members are those whose contents are to be obtained not from the primary table but by querying the database, performing complex calculations, or even accessing external systems if we need so. They are also used to express dependencies.
For instance, let’s suppose that our Country class has a BusinessValue member whose contents we want to populate when the record entity is loaded from the database (yes, we can also use its getter, but let me progress with this as an example). It may involve quite convoluted operations, querying other databases, or accessing external systems. We just need to tell the map how to proceed when completing that value as follows:
Members.Add(x => x.BusinessValue)
.OnComplete((rec, obj) => {
obj.BusinessValue = ...;
});
We have used the Add() method of the map’s Members collection to add a new entry for the member we want the map to complete when needed. Its OnComplete() method is a delegate that takes two arguments: the first one is the last record obtained from the primary table (that we can use to get the values of some columns that might be relevant for us), and the second one is a reference to the host entity itself.
If the BusinessValue member is a virtual property with at least an accessible getter or setter then the member is said to be a “Lazy” one. If it is a field or a non-virtual property then it is said to be an “Eager” one. The contents of the eager members are populated just after the primary record is obtained from the database, whereas the contents of the lazy ones are populated only when their getters are used.
In the general case Lazy members are preferred over Eager ones. The reason if that, when using dependencies, eager members will try to load those dependencies into memory, potentially cascading and loading the complete object’s graph, and experimenting a delay while these operations complete. On the flip side we may have no access to the source code of our POCO class (for instance when it is defined in an external assembly we are not allowed to modify), and for these scenarios eager members are handy.
In any case we are free to mix Simple, Lazy and Eager members as we wish or need.
Let’s now suppose that we want to use dependencies and navigational properties in our POCO classes. We could have written our Region class as follows:
public class Region
{
public string Id { get; set; }
public string Name { get; set; }
virtual public Region Parent { get; set; }
virtual public List<Region> Childs { get; private set; }
virtual public List<Country> Countries { get; private set; }
}
We are using virtual properties here, so Lazy members, but we could have used Eager ones instead without any changes in the discussions that follow.
The Countries property will maintain the list of countries that belong to our region: it is a "Child" dependency. We can tell this fact to the map as follows:
Members.Add(x => x.Countries)
.OnComplete((rec, obj) =>
{
obj.Countries.Clear();
obj.Countries.AddRange(
Repository.Where<Country>(x => x.RegionId == obj.Id).ToList());
})
.SetDependencyMode(MemberDependencyMode.Child);
In our already-known OnComplete() method we are instructing the map to firstly clear that list, for sanity reasons, and then to query the database to obtain its most up-to-date contents. And because we are defining this dependency inside the map’s constructor we can use its Repository property for simplicity. Finally we are setting the dependency mode to “Child”.
Our Region POCO class also has a Parent property that is a reference to the parent region of the current one. We can define this dependency as follows:
The main difference is that we want to be sure that the ParentId column is taken into consideration for the map, because it is the one used in the database to reference the parent record. In order to do that we use the WithColumn() method that takes two arguments: the first one is a dynamic lambda expression that resolves into the name of that column, and the second one is a delegate that takes the new column added to the internal collection and lets us customize how it behaves.
In our case we don’t need to tell the map how to read the value of that column, because it will do so automatically using its name (and this value will be stored in the record maintained in the metadata associated with each entity). We just need to tell the map how to persist back the value of that column when needed: we use the OnWriteRecord() method that has to return the value to write back into the database. In the example it is enough to return null if no parent reference is used, or the value of its Id property otherwise.
See also how we have written its OnComplete() method: for performance reasons instead of querying the database we have used the FindNow() method that will try to find in the in-memory cache a valid entity and, only if it is not found there, go to the database to find it. This method will return null if no entity is found in the cache or in the database, which is precisely what we want in this example. Finally we are setting its dependency mode to “Parent”.
Only those dependencies whose mode is set to “Child” or to “Parent” are cascaded when executing a change operation (Insert, Delete or Update). For instance, when this is the case Kerosene ORM will insert parent dependencies that were not persisted yet into the database, or will make sure that child ones are deleted before deleting its parent entity.
Actually this feature lets us work naturally with aggregate roots from our C# code without needing to pollute our domain-level code with ORM operations. For instance:
var root = repo.Query<Region>().Where(...).First();
...
root.Countries.RemoveAt(0);
root.Countries.Add(new Country() { Id = "ZZZ" });
...
repo.UpdateNow(root);
In this example we are removing and adding entries into the Countries property at our domain level. Only later, when we are done with all the changes we need, we just need to persist back the hosting entity and Kerosene ORM will find out what changes it has experimented and materialize those changes in the database.
Kerosene ORM injects into each managed entity a metadata package that, among other purposes, is used to keep track of the state and changes the entity may experiment. The internals of this mechanism are discussed later in this document. When a given dependency is a collection-alike one this metadata will keep the original contents of that collection and, when the time comes, Kerosene ORM will compare the original ones against the current contents. If it is not a collection-alike one then the state of the current entity is used to decide how to proceed.
This section explores the Entity Maps operational mode further, discussing a number of advanced concepts and internals of Kerosene ORM. In order to do so we are going to expand our business scenario a bit, incorporating two additional tables: Talents, which maintains the talents our HR friends are interested in, and EmployeeTalents, which is basically a join table between employees and the talents assigned to them:
Let’s suppose our Talent POCO class is written down as follows:
public class Talent
{
public string Id { get; set; }
public string Description { get; set; }
virtual public List<Employee> Employees { get; }
...
}
Here we have an Employees property, a list (set by the class constructor), that we want to populate: we need to find all the employees whose Id appears associated with the talent's Id in the join table. We can define a dependency for this case as follows:
Members.Add(x => x.Employees)
.OnComplete((rec, obj) => {
obj.Employees.Clear();
obj.Employees.AddRange(
Repository
.Where<Employee>(x => x.Emp.Id == x.Temp.EmployeeId)
.MasterAlias(x => x.Emp)
.From(x =>
x(Repository.Where<EmployeeTalent>(y => y.TalentId == obj.Id))
.As(x.Temp))
.ToList()
);
})
.SetDependencyMode(MemberDependencyMode.Parent);
The interesting thing to note here is that we are firstly querying from the EmployeeTalents table for those records whose talent Id match the Id of our talent hosting entity, and then finding their related employees by injecting these results into the FROM clause of the main query.
As we are using several tables simultaneously and they all have their respective Id column we need to provide aliases to disambiguate among them: this is easy with the inner query as we can use the As() virtual extension method but… how can we do it within the main mapped query?
The answer is the MasterAlias() method whose argument is a dynamic lambda expression that resolves into the alias to use with the primary table for this query only. It doesn’t matter what the name of that primary table is (and remember that Kerosene ORM might have found it out automatically): its alias will be the one we are specifying.
With these two aliases, the Emp and Temp ones, it is now very easy to write the main WHERE clause we need to find the employees we are interested in. Just take a minute to review the code in the example and the above explanations, because it sounds more complex than it really is. Other ORM solutions take a different approach and try to automate this scenario (some of them struggle to solve it, by the way), but we will be constrained by their assumptions and rules.
This unique Kerosene ORM approach is an advanced feature that you can use or not, but that put a lot more power on your hands – indeed, Kerosene ORM supports, even in this POCO world, Join(), GroupBy(), and Having() methods, as well as basically any logic we may need to incorporate to solve our business problem. Yes, you need to write some SQL-alike code but it does also give us the opportunity to optimize that code and not to use the fat one produced automatically.
We now know that when a map instance is created it will be automatically registered into the repository used in its constructor. We can customize it inside constructor of the derived class, which is the recommended approach, or by using its methods and properties directly. It will remain in a non-validated state while we are customizing it.
Now, as soon as the map is used for any interesting purpose it will be validated: its structure and rules will be checked against the database and if any inconsistency is found the corresponding exception will be thrown. Once a map is validated it becomes locked and cannot be customized any further.
As said this validation takes place automatically when needed. It may happen that, for whatever reasons, you want to lock and validate your map – in this case you can invoke its Validate() method to do so. By the way this method can be called as many times as needed without any side effects if the map was already validated. Anyhow, the map’s IsValidated property will return true if it has been validated, or false otherwise.
Each repository carries a Maps property maintaining the collection of maps registered into it. You can use this collection, or the several GetMap() method overrides, to find what map is associated with a given POCO type.
Note that registration of maps is based upon a one-to-one correspondence between the POCO type and the type of the entities managed by the registered map. So a map registered for the Employee POCO class, and a map registered for the Manager POCO class, even if the later inherits from the former, are considered different maps.
Manager
When a map is disposed it is removed from its repository. Similarly repositories have the ClearMaps() method that will remove and dispose all maps registered into it.
ClearMaps()
Finally, repositories also have the RetrieveMap() method that will return either a registered map for the given type, or if no one was registered, will create a new one for that type. If no arguments are used the name of the primary table will be proposed by Kerosene ORM using a number of educated guesses and pluralization rules, and the map returned will be a “weak” one.
RetrieveMap()
This method is provided in case we want not to create a custom map class and rather we want an instance to customize using its properties and methods. In this case it is recommended that its IsWeakMap and its IsValidated properties are used accordingly.
IsWeakMap
Its Table property maintains the name of the primary table the map is associated with, either the one we have specified or the one found automatically by Kerosene ORM.
Table
In purity Kerosene ORM does not require primary key columns in the primary table. It just need a way to univocally identify what record to associate with a given entity and, for this, if no primary key columns are defined, it will try to find unique valued ones. If neither primary key columns not unique valued ones exist in the table then Kerosene ORM will throw an exception when validating the map.
Note that we have not to identify which ones are those identity columns: Kerosene ORM will find them out automatically from the database’s metadata.
Kerosene ORM does not require us to decorate our POCO classes with any attributes, and they have not to inherit from any ORM-specific one. What it does instead is to inject into each POCO instance it manages a package of metadata to keep track of its state, the latest record read from or persisted to the database, and the state of its dependencies, among other things.
This package is an object that implements the IMetaEntity interface that can be obtained using the Locate() method of the static EntityFactory class:
IMetaEntity
Locate()
EntityFactory
var obj = ...;
var meta = EntityFactory.Locate(obj);
Note that this method will throw an exception if the object used as its argument is not a class. Value types, enumerations, or structs, are not considered as valid Kerosene ORM entities.
This metadata object has only two public properties: the first one, Entity, is a reference back to the entity the metadata is associated with; the second one, State, is an enumeration that gives as back the state of this underlying entity.
Entity
State
The value of the Entity property can also be null. This situation can happen when we have obtained the metadata reference and, after a while, if the underlying entity is used no longer, it may have been collected by the CLR garbage collector. Indeed, to avoid locking the entities in memory the metadata package just holds a weak reference to them.
Entity
The value of the State property can be Detached if we have just created our entity and Kerosene ORM has not yet used it, Collected if the underlying entity has been collected by the CLR, Ready if it has been read from or persisted to the database, or ToInsert, ToUpdate or ToDelete if the corresponding pending operation has been submitted.
State
Detached
Collected
Ready
ToInsert
ToUpdate
To
For performance reasons, instead of maintaining any kind of list or similar structure, what Kerosene ORM does is injecting that metadata package into the CLR descriptor associated with any object it manages, in the form of a run-time attribute. Yes, IMetaData instances internally inherit from the Attribute class.
IMetaData
Attribute
Kerosene ORM does not, internally, keep track of the entities themselves but rather of the metadata packages associated with them which, in turn, just maintain a weak reference back to the original entities. This way it permits those entities to be collected by the CLR garbage collector when they are needed any longer.
But this also means that a number of metadata packages will remain with no associated entities. Kerosene ORM repositories implement an internal collector that fires periodically to perform the cleaning of these zombie packages. Note that this feature is not part of the IDataRepository interface but is provided by the concrete DataRepository instances.
In the general case you don’t need to interact with this mechanism. But it may happen that you want to disable it for debug purposes and, for these scenarios, you can use the following methods:
repo.DisableCollector();
repo.EnableCollector();
You can also use the IsCollectorEnabled property to interrogate the repository about the state of its internal collector. The EnableCollector() method has also an override that accept two arguments: the number of milliseconds after which the collector is fired, useful if you want to tweak this interval for performance reasons, and a Boolean value that specified if a CLR garbage collection is forced before firing it – but this second one is seldom used except for debug or very specialized scenarios.
IsCollectorEnabled
EnableCollector()
Even if our applications will only deal with its own domain-level POCO instances it may happen that the entities Kerosene ORM will return from the database are not of these types, but rather of a proxy type that inherits from the original POCO one.
This is the situation when lazy dependencies are used: Kerosene ORM will create a proxy type where the setters and getters of those lazy virtual properties are overridden, if possible, in order to inject the logic to load their contents in a deferred way. A number of additional fields and properties are also included in the proxy type (whose names end with either “_Completed” or with “_Source”) but, otherwise, their instances behave as the original ones.
The NewEntity() method will return either an instance of the original POCO class or, if a proxy type has been created by Kerosene ORM for the associated map, an instance of the proxy type:
NewEntity()
var obj = repo.NewEntity<region>();
</region>
Remember that if the original POCO class has a parameterless constructor it will be used. Otherwise, a new un-initialized object will be created in-memory and returned without invoking any constructor.
The way Kerosene ORM generates these proxy types involve emitting some IL code to add the additional properties and fields mentioned, and to override the virtual getters and setters of the lazy properties. Please refer to the accompanying articles for more details.
Kerosene ORM follows the Unit Of Work pattern. It prescribes that we have to submit (annotate) into the repository all the change operations we are interested at and, when we are done, execute them all as a single unit against the underlying database:
var region = new Region() { ... }; repo.Insert(region);
var ctry = new Country() { ... }; repo.Insert(ctry);
// etc...
repo.ExecuteChanges();
Internally Kerosene ORM will cascade the dependencies associated with the entities for which we have submitted change operations, will reorder them all to meet their logical constrains, and then execute them one by one under a transaction.
If the execution of any of those operations fail the transaction is aborted so leaving the database at its original state, and then, by default, an exception is thrown with the description of the failure (this typically will be the exception returned from the database). Our application can execute the ExecuteChanges() method inside a try-catch block or rather it can set the OnExecuteChangesError property of the repository with the delegate to invoke if an exception happens with, precisely, that exception as its argument. In this case the exception is not thrown but rather used as that argument. This is handy for many scenarios, and for logging and tracing purposes.
ExecuteChanges()
OnExecuteChangesError
When there is no corresponding method for a given column in the database but, for whatever reasons we want that column to participate into the mapping mechanism, we can achieve so by adding an entry into the map’s Columns collection:
Columns.Add(x => x.MyColumnName)
.OnWriteRecord(entity => { ... })
.OnLoadEntity((value, entity) => { ... })
.OnMember(x => x.MyMemberName);
The OnWriteRecord() method takes a delegate that shall return the value of that column when the time comes to persist it back to the database. It takes the hosting entity as its argument and can do whatever operations it may need.
Similarly the OnLoadEntity() method takes a delegate that will be invoked when the associated record is read from the database. It takes the value of the column and a reference to the hosting entity. Again it can do whatever operations needed.
OnLoadEntity()
The last one, OnMember(), is used for simplification purposes when instead of invoking convoluted operations we just want to map the column in the database with a given member in the type whose name may not match. In this case its argument is a dynamic lambda expression that resolves to the name of that member. Remember that it can be either a property or a field, and they can be public, protected or private ones.
OnMember()
Once a map is validated its Columns collection will contain all the columns taken into consideration. In many circumstances Kerosene ORM have discovered automatically a number of columns to map. If this is the case their AutoDiscovered property is set to true.
AutoDiscovered
This article is the last generic tutorial on the Entity Maps operational mode of Kerosene ORM. Next articles will be shorter ones and focused on the specific details of the techniques used in the internals of Kerosene ORM.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
Quote:Press [Enter] to execute... .dll'. Symbols loaded.
.DataDB.dll'. Symbols loaded.
'Kerosene.ORM.Maps.Table.Tests.vshost.exe' (CLR v4.0.30319: Kerosene.ORM.Maps.Table.Tests.vshost.exe): Loaded 'C:\Windows\Microsoft.Net\assembly\GAC_MSIL\System.Numerics\v4.0_4.0.0.0__b77a5c561934e089\System.Numerics.Data.OracleClient\v4.0_4.0.0.0__b77a5c561934e089\System.Data.OracleClient.Wrapper.dll'. Cannot find or open the PDB file.
Exception thrown: 'System.Data.SqlClient.SqlException' in System.Data.dll
- Engine's provider 'System.Data.SqlClient' not found.
Exception thrown: 'System.ArgumentException' in mscorlib.dll
.SqlServer.dll'. Symbols loaded.
> Not Initialized: 1:DataRepository(1:Direct:DataLink(SqlServerEngine2012(System.Data.SqlClient, v:11)))
Brand brand = new Brand() { Title = title, Abroad = abroad, EnglishTitle = englishTitle, Ico = ico, Introduction = introduction };
using (var repo = RepositoryFactory.Create(link))
{
brand.Id = brandId;
var cmd = repo.Update<Brand>(brand);
cmd.Submit();
}
Hello,
Well, first thing first... after you submit your change commands then you need to call your repo's ExecuteChanges() method:
using (var repo = RepositoryFactory.Create(link))
{
brand.Id = brandId;
var cmd = repo.Update<Brand>(Brand);
cmd.Submit();
repo.ExecuteChanges();
}
This is needed because Kerosene follows the Dynamic Repository pattern: this way you can submit as many change operations as you need and be sure them all are executed and succeed (or fail) as a single unit. As an alternative, you can combine the Submit() and ExecuteChanges() calls into one as follows:
Kerosene
Submit()
using (var repo = RepositoryFactory.Create(link))
{
brand.Id = brandId;
var cmd = repo.Update<Brand>(Brand);
cmd.SubmitNow();
}
Now, regarding you second question: you are right, strictly speaking Kerosene does not need a primary key column to exist in your database's table. It just need a way to identify univocally what row is associated with the entity. Currently it firstly tries to find the primary key columns and, if none exist, then tries to find columns marked as unique valued ones (future versions may include other mechanisms).
How those primary or unique valued columns are found? Well, when the map is validated part of the process is to query the database for the master (or primary) table's schema, and here identify if there is at least one identity column (either a primary key one, or an unique valued one).
Hope the above helps. If you still have problems please come back and let me know what exception you are receiving.
Cheers, Moises
using (var repo = RepositoryFactory.Create(link))
{
brand.Id = brandId;
repo.UpdateNow<Brand>(brand);
}
using (var repo = RepositoryFactory.Create(link))
{
brand.Id = brandId;
var cmd = repo.Update<Brand>(brand);
cmd.Submit();
repo.ExecuteChanges();
}
public static int CreateOrUpdateBrand(int brandId, string title, bool abroad, String englishTitle, string ico, string introduction, int operateUserId)
{
bool authorized = PermissionProvider.Instance.CheckPermission(operateUserId, 0);
if (!authorized)
{
throw new ArgumentException("您没有权限进行当前的操作", "operateUserId");
}
int result = 0;
Brand brand = new Brand() { Title = title, Abroad = abroad, EnglishTitle = englishTitle, Ico = ico, Introduction = introduction };
using (var link = LinkFactory.Create())
{
if (brandId > 0)
{
/.UpdateNow<Brand>(brand);
}
result = brandId;
//result = link.Update(x => x.Brand).Columns(x => x.Title = title, x => x.EnglishTitle = englishTitle, x => x.Ico = ico, x => x.Abroad = abroad, x => x.Introduction = introduction).Where(x => x.Id == brandId).Execute();
}
else
{
//2,这是采用泛型实体,有智能提示,对于字段较多的增加比较快捷
using (var repo = RepositoryFactory.Create(link))
{
var exsitsBrand = repo.Where<Brand>(x => x.Title == title).First();
if (exsitsBrand != null)
return exsitsBrand.Id;
brand.Count = 0;
repo.InsertNow(brand);
result = brand.Id;
brandId = brand.Id;
}
}
if (result > 0)
return brandId;
return 0;
}
}
new
using
UpdateNow()
Attach()
/.Attach<Brand>(brand);
repo.UpdateNow<Brand>(brand);
}
using (var repo = RepositoryFactory.Create(link))
{
var brand = repo.Where<Brand>(x => ...); // whatever logic to find it
brand.Id = ...; // your new value
repo.UpdateNow(brand);
}
Brand existsBrand = repo.Where<Brand>(x => x.Id == brandId).First();
existsBrand.Title = newTitle;
existsBrand.Ico=newIco;
....
repo.UpdateNow<Brand>(existsBrand);
dynamic rec = link.From(x => x.Cert).Where(x => x.Code == "tfy").First();
var cert = repo.Where<Cert>(x => x.Code == "tfy").First();
rec
Cert
var helper = Repo.GetHelper();
var q = new UserQuery();
var user = helper.Where<User>(q.Id == 123 && q.Name == "SomeName");
var helper = Repo.GetHelper();
var user = helper.Where<User>(q => q.Id == 123 && q.Name == "SomeName");
public class Query<T> where T : class
{
public Query<T> Where<T>(Expression<Func<T, bool>> where){
{...}
return this;
}
public T First<T>(){
return {...};
}
}
var helper = Repo.GetHelper();
var user = helper.Query<User>().Where(q => q.Id == 123 && q.Name == "SomeName").First();
T
>=
x => x.Name >= "P"
dynamic
<T>
internal class BuildingDbProvider : IBuildingProvider {
private readonly IDataRepository _repo;
public BuildingDbProvider() {
var link = LinkFactory.Create("Inventory");
_repo = RepositoryFactory.Create(link);
var map = new RegionMap((DataRepository)_repo);
}
public List<Building> GetAll() {
var cmd = _repo.Query<Building>();
var list = cmd.ToList();
return list;
}
private class RegionMap : DataMap<Building> {
public RegionMap(DataRepository repo)
: base(repo, x => x.Building) {
Members.Add(x => x.Rooms)
.OnComplete((rec, obj) => {
obj.Rooms.Clear();
obj.Rooms.AddRange(
Repository.Where<Room>(x => x.BuildingId == obj.Id).ToList());
})
.SetDependencyMode(MemberDependencyMode.Child);
}
}
}
public class Building {
public Building() {
Rooms = new List<Room>();
}
public int Id { get; set; }
public string Name { get; set; }
public List<Room> Rooms { get; set; }
}
public class Room {
public int Id { get; set; }
public string Name { get; set; }
}
MultipleActiveResultSets=true
connectionString="Server=localhost;Database=KeroseneDB;Integrated Security=true;MultipleActiveResultSets=true"
Kerosene
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | https://www.codeproject.com/Articles/118622/Kerosene-ORM-Maps | CC-MAIN-2019-09 | refinedweb | 6,217 | 50.26 |
RCMD(3) Linux Programmer's Manual RCMD(3)
rcmd, rresvport, iruserok, ruserok, rcmd_af, rresvport_af, iruserok_af, ruserok_af - routines for returning a stream to a remote command
#include <netdb.h> /* Or <unistd.h> on some systems */ int rcmd(char **ahost, unsigned short); int rcmd_af(char **ahost, unsigned short inport, const char *locuser, const char *remuser, const char *cmd, int *fd2p, sa_family_t af); int rresvport_af(int *port, sa_family_t af); int iruserok_af(const void *raddr, int superuser, const char *ruser, const char *luser, sa_family_t af); int ruserok_af(const char *rhost, int superuser, const char *ruser, const char *luser, sa_family_t af); Feature Test Macro Requirements for glibc (see feature_test_macros(7)): rcmd(), rcmd_af(), rresvport(), rresvport_af(), iruserok(), iruserok_af(), ruserok(), ruserok_af():). rcmd()zero, then an auxiliary channel to a control process will be set up, and a file). rresvport()). is allowed to bind to a privileged port. In the glibc implementation, this function restricts its search to the ports from 512 to 1023. The port argument is value-result: the value it supplies to the call is used as the starting point for a circular search of the port range; on (successful) return, it contains the port number that was bound to. iruserok() and ruserok(), is writable by anyone other than the owner, or is hardlinked anywhere,. *_af() variants.." For information on the return from ruserok() and iruserok(), see above.
The functions iruserok_af(), rcmd_af(), rresvport_af(), and ruserok_af() functions are provide in glibc since version 2.2.rcmd(), 5.01 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. Linux 2017-09-15 RCMD(3)
Pages that refer to this page: rexec(3) | http://man7.org/linux/man-pages/man3/rcmd.3.html | CC-MAIN-2019-22 | refinedweb | 281 | 59.64 |
Jupyter Notebook Tutorial in Python
Jupyter notebook tutorial on how to install, run, and use Jupyter for interactive matplotlib plotting, data analysis, and publishing code.
Introduction¶
Jupyter has a beautiful notebook that lets you write and execute code, analyze data, embed content, and share reproducible work. Jupyter Notebook (previously referred to as IPython Notebook) allows you to easily share your code, data, plots, and explanation in a sinle notebook. Publishing is flexible: PDF, HTML, ipynb, dashboards, slides, and more. Code cells are based on an input and output format. For example:
print("hello world")
hello world
Installation¶
There are a few ways to use a Jupyter Notebook:
- Install with
pip. Open a terminal and type:
$ pip install jupyter.
- Windows users can install with
setuptools.
- Anaconda and Enthought allow you to download a desktop version of Jupyter Notebook.
- nteract allows users to work in a notebook enviornment via a desktop application.
- Microsoft Azure provides hosted access to Jupyter Notebooks.
- Domino Data Lab offers web-based Notebooks.
- tmpnb launches a temporary online Notebook for individual users.
Getting Started¶
Once you've installed the Notebook, you start from your terminal by calling
$ jupyter notebook. This will open a browser on a localhost to the URL of your Notebooks, by default. Windows users need to open up their Command Prompt. You'll see a dashboard with all your Notebooks. You can launch your Notebooks from there. The Notebook has the advantage of looking the same when you're coding and publishing. You just have all the options to move code, run cells, change kernels, and use Markdown when you're running a NB.
Helpful Commands¶
- Tab Completion: Jupyter supports tab completion! You can type
object_name.<TAB> to view an object’s attributes. For tips on cell magics, running Notebooks, and exploring objects, check out the Jupyter docs.
- Help: provides an introduction and overview of features.
Type help() for interactive help, or help(object) for help about object.
- Quick Reference: open quick reference by running:
quickref
Languages¶
The bulk of this tutorial discusses executing python code in Jupyter notebooks. You can also use Jupyter notebooks to execute R code. Skip down to the [R section] for more information on using IRkernel with Jupyter notebooks and graphing examples.
Package Management¶
When installing packages in Jupyter, you either need to install the package in your actual shell, or run the
! prefix, e.g.:
!pip install packagename
You may want to reload submodules if you've edited the code in one. IPython comes with automatic reloading magic. You can reload all changed modules before executing a new line.
%load_ext autoreload %autoreload 2
Some useful packages that we'll use in this tutorial include:
- Pandas: import data via a url and create a dataframe to easily handle data for analysis and graphing. See examples of using Pandas here:.
- NumPy: a package for scientific computing with tools for algebra, random number generation, integrating with databases, and managing data. See examples of using NumPy here:.
- SciPy: a Python-based ecosystem of packages for math, science, and engineering.
- Plotly: a graphing library for making interactive, publication-quality graphs. See examples of statistic, scientific, 3D charts, and more here:.
import pandas as pd import numpy as np import scipy as sp import chart_studio.plotly as py
Import Data¶
You can use pandas
read_csv() function to import data. In the example below, we import a csv hosted on github and display it in a table using Plotly:
import chart_studio.plotly as py import plotly.figure_factory as ff import pandas as pd df = pd.read_csv("") table = ff.create_table(df) py.iplot(table, filename='jupyter-table1')
Use
dataframe.column_title to index the dataframe:
schools = df.School schools[0]
'MIT'
Most pandas functions also work on an entire dataframe. For example, calling
std() calculates the standard deviation for each column.
df.std()
Women 12.813683 Men 25.705289 Gap 14.137084 dtype: float64
Plotting Inline¶
You can use Plotly's python API to plot inside your Jupyter Notebook by calling
plotly.plotly.iplot() or
plotly.offline.iplot() if working offline. Plotting in the notebook gives you the advantage of keeping your data analysis and plots in one place. Now we can do a bit of interactive plotting. Head to the Plotly getting started page to learn how to set your credentials. Calling the plot with
iplot automaticallly generates an interactive version of the plot inside the Notebook in an iframe. See below:
import chart_studio.plotly as py import plotly.graph_objects as go data = [go.Bar(x=df.School, y=df.Gap)] py.iplot(data, filename='jupyter-basic_bar')
import chart_studio.plotly as py import plotly.graph_objects as go trace_women = go.Bar(x=df.School, y=df.Women, name='Women', marker=dict(color='#ffcdd2')) trace_men = go.Bar(x=df.School, y=df.Men, name='Men', marker=dict(color='#A2D5F2')) trace_gap = go.Bar(x=df.School, y=df.Gap, name='Gap', marker=dict(color='#59606D')) data = [trace_women, trace_men, trace_gap] layout = go.Layout(title="Average Earnings for Graduates", xaxis=dict(title='School'), yaxis=dict(title='Salary (in thousands)')) fig = go.Figure(data=data, layout=layout) py.iplot(fig, sharing='private', filename='jupyter-styled_bar')
Now we have interactive charts displayed in our notebook. Hover on the chart to see the values for each bar, click and drag to zoom into a specific section or click on the legend to hide/show a trace.
Plotting Interactive Maps¶
Plotly is now integrated with Mapbox. In this example we'll plot lattitude and longitude data of nuclear waste sites. To plot on Mapbox maps with Plotly you'll need a Mapbox account and a Mapbox Access Token which you can add to your Plotly settings.
import chart_studio.plotly as py import plotly.graph_objects as go import pandas as pd # mapbox_access_token = 'ADD YOUR TOKEN HERE' df = pd.read_csv('') site_lat = df.lat site_lon = df.lon locations_name = df.text data = [ go.Scattermapbox( lat=site_lat, lon=site_lon, mode='markers', marker=dict( size=17, color='rgb(255, 0, 0)', opacity=0.7 ), text=locations_name, hoverinfo='text' ), go.Scattermapbox( lat=site_lat, lon=site_lon, mode='markers', marker=dict( size=8, color='rgb(242, 177, 172)', opacity=0.7 ), hoverinfo='none' )] layout = go.Layout( title='Nuclear Waste Sites on Campus', autosize=True, hovermode='closest', showlegend=False, mapbox=dict( accesstoken=mapbox_access_token, bearing=0, center=dict( lat=38, lon=-94 ), pitch=0, zoom=3, style='light' ), ) fig = dict(data=data, layout=layout) py.iplot(fig, filename='jupyter-Nuclear Waste Sites on American Campuses')
import chart_studio.plotly as py import plotly.graph_objects as go import numpy as np s = np.linspace(0, 2 * np.pi, 240) t = np.linspace(0, np.pi, 240) tGrid, sGrid = np.meshgrid(s, t) r = 2 + np.sin(7 * sGrid + 5 * tGrid) # r = 2 + sin(7s+5t) x = r * np.cos(sGrid) * np.sin(tGrid) # x = r*cos(s)*sin(t) y = r * np.sin(sGrid) * np.sin(tGrid) # y = r*sin(s)*sin(t) z = r * np.cos(tGrid) # z = r*cos(t) surface = go.Surface(x=x, y=y, z=z) data = [surface] layout = go.Layout( title='Parametric Plot', scene=dict( xaxis=dict( gridcolor='rgb(255, 255, 255)', zerolinecolor='rgb(255, 255, 255)', showbackground=True, backgroundcolor='rgb(230, 230,230)' ), yaxis=dict( gridcolor='rgb(255, 255, 255)', zerolinecolor='rgb(255, 255, 255)', showbackground=True, backgroundcolor='rgb(230, 230,230)' ), zaxis=dict( gridcolor='rgb(255, 255, 255)', zerolinecolor='rgb(255, 255, 255)', showbackground=True, backgroundcolor='rgb(230, 230,230)' ) ) ) fig = go.Figure(data=data, layout=layout) py.iplot(fig, filename='jupyter-parametric_plot')
Animated Plots¶
Checkout Plotly's animation documentation to see how to create animated plots inline in Jupyter notebooks like the Gapminder plot displayed below:
import chart_studio.plotly as py import numpy as np data = [dict( visible = False, line=dict(color='#00CED1', width=6), name = '𝜈 = '+str(step), x = np.arange(0,10,0.01), y = np.sin(step*np.arange(0,10,0.01))) for step in np.arange(0,5,0.1)] data[10]['visible'] = True steps = [] for i in range(len(data)): step = dict( method = 'restyle', args = ['visible', [False] * len(data)], ) step['args'][1][i] = True # Toggle i'th trace to "visible" steps.append(step) sliders = [dict( active = 10, currentvalue = {"prefix": "Frequency: "}, pad = {"t": 50}, steps = steps )] layout = dict(sliders=sliders) fig = dict(data=data, layout=layout) py.iplot(fig, filename='Sine Wave Slider')
Additionally, IPython widgets allow you to add sliders, widgets, search boxes, and more to your Notebook. See the widget docs for more information. For others to be able to access your work, they'll need IPython. Or, you can use a cloud-based NB option so others can run your work.
Executing R Code¶
IRkernel, an R kernel for Jupyter, allows you to write and execute R code in a Jupyter notebook. Checkout the IRkernel documentation for some simple installation instructions. Once IRkernel is installed, open a Jupyter Notebook by calling
$ jupyter notebook and use the New dropdown to select an R notebook.
See a full R example Jupyter Notebook here:
from IPython.display import YouTubeVideo YouTubeVideo("wupToqz1e2g")
$$c = \sqrt{a^2 + b^2}$$
from IPython.display import display, Math, Latex display(Math(r'F(k) = \int_{-\infty}^{\infty} f(x) e^{2\pi i k} dx'))
Exporting & Publishing Notebooks¶
We can export the Notebook as an HTML, PDF, .py, .ipynb, Markdown, and reST file. You can also turn your NB into a slideshow. You can publish Jupyter Notebooks on Plotly. Simply visit plot.ly and select the
+ Create button in the upper right hand corner. Select Notebook and upload your Jupyter notebook (.ipynb) file!
The notebooks that you upload will be stored in your Plotly organize folder and hosted at a unique link to make sharing quick and easy.
See some example notebooks:
Publishing Dashboards¶
Users publishing interactive graphs can also use Plotly's dashboarding tool to arrange plots with a drag and drop interface. These dashboards can be published, embedded, and shared.
Jupyter Gallery¶
For more Jupyter tutorials, checkout Plotly's python documentation: all documentation is written in jupyter notebooks that you can download and run yourself or checkout these user submitted examples!
| https://plotly.com/python/ipython-notebook-tutorial/ | CC-MAIN-2020-50 | refinedweb | 1,673 | 51.95 |
Hi, Im a programmer as of 7:30pm EST today. I have had short and quick run-ins with programming but this is my first actual attempt to learn(C/C++).
I am enrolled in a trig class and i would like some help on how to use the sin,cos,tan functions as i have had no luck.
i tried
#include <iostream.h>
#include <math.h>
int main()
{
int number;
cout<<"Input a number: ";
cin>>number;
sin(number);
cout<<"Sin: "<<number;
return 0;
}
if i enter, 45 i will get 45 and not .707 or something like that.
if this sin function is not for such a purpose how can i go about calculating sin and the other functions? | http://forums.devshed.com/programming/84070-trig-function-program-last-post.html | CC-MAIN-2016-50 | refinedweb | 120 | 83.15 |
Did you know that you can build web sites that AREN'T Single Page Applications? It's true! I just double checked, and it turns out that you can in fact build websites the good old fashioned way, and wouldn't you know it, it still works.
All heavy sarcasm aside, I fear that the sheer volume of the hype around HTML5 has now tainted our perception of what constitutes a modern web application. The truth of the matter is that the good old fashioned postback is at the core of the web platform, so guess what?! It's part of HTML5! Remember when we said things like "Friends Don't Let Friends Postback"? We were raising awareness of AJAX at the time - and that was good - but we somehow forgot, in the middle of all the excitement, the fundamentals of how to build responsible web applications.
Look, AJAX is just another incredibly powerful tool in your web development arsenal. Use it where it makes sense, but don't feel like you have to use it everywhere all the time. You become like the video game player that only learns one move in Streetfighter 2 and then you just keep mashing the same buttons. Nobody likes the person who always picks Blanka.
As it turns out, mobile devices (while woefully underpowered in comparison to desktops) are really good at the old school HTTP Request game. In fact, there is a school of thought which purports that having the server returning a static page is the FASTEST way to reach the most devices with the least amount of work while maintaining the best overall performance across browsers.
Kendo UI Mobile was initially created for people who wanted to build Mobile Apps. These are apps that look and feel native. They could run on your server and be served up like a web site, or packaged as a native application with something like Icenium. In order to make that work, Kendo UI Mobile has a SPA framework baked right in. When you build a Kendo UI Mobile App, you are building a SPA. We spent countless hours investigating the nasty quirks of mobile browsers so that Kendo UI Mobile looks and behaves like a native application. Because lets face it: The browser is not just a way to view static documents anymore, it's our favorite application runtime.
But what if you don't want to build a SPA? What about just building a mobile web site? We realized this was important, and as of the Q1 2013 release, you can use Kendo UI Mobile to build mobile web sites.
In this article, we are going to build a mobile web application on top of SQL Server using MVC 4 and we are going to let the server do most of the work.
You can grab the code from GitHub, and run the demo.
Go ahead and rock those hot "File / New Project" skills you have perfected over the years, and create a new MVC 4 application. I created mine as an "Empty" application so I could start from scratch. What can I say? I like a clean slate.
I'm going to be using Northwind data here. Now the only person who is more tired than you of seeing the Northwind database, is me. That I promise. It just happens to be the only small SQL Server database I have lying around. Just remember: it's not about the data. Northwind in all it's glory is just a placeholder for whatever real world data you will be working with.
In the Views folder, create a new folder called Shared and create a new MVC 4 Layout Page. Remove everything in the head of the page.
Views
Shared
<head>
<!-- DELETE this tag -->
<meta name="viewport" content="width=device-width" />
<!-- make sure this tag is empty -->
<title>@ViewBag.Title</title>
</head>
The first one of those is a viewport tag. The meta viewport tag is primarily responsible for telling the mobile device how to render the page. Should it zoom in on the content, or zoom way out an show the entire page with tiny text that you have to double tap on? Kendo UI Mobile will insert the right viewport information into the header, so you don't need to worry about it.
The second is the title of the page. This is also something Kendo UI Mobile will handle for us. We don't want to delete this tag entirely though, because then the page won't validate in Visual Studio, and that's slightly annoying. Just remove the @ViewBag.Title so you are left with just <title></title>.
@ViewBag.Title
<title></title>
If you don't have a copy of Kendo UI Mobile yet, you can download a 30 day trial version of Kendo UI Complete that contains everything you need. In your Kendo UI Mobile download, you are going to want to grab a few items.
Since we are going to be building a Kendo UI Mobile site today, I don't want to use Kendo UI Mobile's auto-adaptive OS rendering. Instead, we will be using the Kendo UI Mobile flat skin.
flat
I usually create a folder under my Content folder which I call kendo and I place the following directories and style files there.
Content
kendo
Then I add the necessary JavaScript files to the Scripts directory.
Scripts
You can also use jQuery from NuGet if you like. We recommend version 1.9.1 for use with Kendo UI Mobile.
Now we need to add Kendo UI Mobile to the _Layout page. I use the System.Web.Optimization packages so that I can just do @Styles.Render/@Script.Render. Since I started from an "Empty" template, I have to install the Microsoft.AspNet.Web.Optimization package from Nuget.
_Layout
System.Web.Optimization
@Styles.Render/@Script.Render
Microsoft.AspNet.Web.Optimization
Install-Package Microsoft.AspNet.Web.Optimization
Then, I have to add a reference to this assembly in the Web.config file that's in the Views folder under the System.Web > Pages > Namespaces section. If you started with an MVC template of any sort, this is usually already done for you.
Web.config
System.Web > Pages > Namespaces
<add namespace="System.Web.Optimization"/>
Now add the Kendo UI Flat skin css reference to the page.
@Styles.Render("~/Content/kendo/kendo.mobile.flat.min.css")
Lastly, add in the JavaScript files just before the closing <body> tag.
<body>
IMPORTANT: jQuery MUST be referenced BEFORE Kendo UI.
@Scripts.Render("~/Scripts/jquery.min.js")
@Scripts.Render("~/Scripts/kendo/kendo.mobile.min.js")
Lastly, the @RenderBody() call is wrapped in a div by default. Just remove that parent div so that @RenderBody() is a direct child of the <body>.
@RenderBody()
div
Now we are ready to start building the web application.
When we setup a Kendo UI Mobile Application, we typically use something called a layout and then build everything in that context. Not be confused with an MVC layout, the Kendo UI Mobile layout is a chunk of HTML that will create any "sticky" navigation components that we want (i.e. NavBars, Tabstrips, ect). These layout components will appear on every view that we create. A Kendo UI Mobile View is just like an MVC View. Especially today since we will in fact be in fact using MVC views to create Kendo UI Mobile Views.
layout
view
View
Before we create the layout, lets examine at what our application is going to look like...
We have two tabs. A Home tab and a Settings tab. The Home tab is really where all the functionality is. The Settings tab item is only there to help give some depth to this application and help setup a scenario we will have to work through.
Settings
The Application displays a list of Categories from the database and the Category Description. Clicking on one of these items loads a new view which has all of the products in that category. Clicking on a product loads in that product's detail in a form that we can edit and submit back to the database.
To create the Layout, use a div and give it a data-role of layout. These data- attributes are what give Kendo UI Mobile the information that it needs to transform the HTML into our mobile user interface. We give the layout a data-id attribute that we can reference from the individual Kendo UI Mobile views, thus declaring this layout as the layout for that particular view.
data-role
data-
data-id
Notice that the @RenderBody call stays outside the layout. That might seem counter-intuitive, but the Kendo UI Mobile Layout actually expects all of it's 'views' to be top level elements, not children.
@RenderBody
@RenderBody()
<div data-</div>
We can add a Kendo UI Mobile NavBar component which will be the fixed header at the top of the screen. Navbars serve the purpose of telling us where we are in a navigation hierarchy, as well as housing some important buttons, like "back", or a button to add a new item. To make this Navbar stick to the top - and always the top - we put it inside a div with a role of header. We can dynamically set the title displayed in the Navbar by using a span with a role of view-title. This will pull the value from the data-title attribute on whichever view is currently loaded and use it as the title.
header
span
view-title
data-title
@RenderBody()
<div data-
<div data-
<div data-
<span data-</span>
</div>
</div>
</div>
To create the Tabstrip, we will be using a Kendo UI Mobile Tabstrip widget and using a footer layout container to pin it to the bottom of the viewport.
footer
@RenderBody()
<div data-
<div data-
<div data-
<span data-</span>
</div>
</div>
<div data-
<div data-
<a href="/home" data-Home</a>
<a href="/settings" data-Settings</a>
</div>
</div>
</div>
Notice that we have two tabs with icons - Home and Settings. It's time to build those views. In fact, we have to build at least one of them before we can even initialize this as a Kendo UI Mobile application.
Kendo UI Mobile requires at least one view to run. So far, we haven't defined any views. We're are going to use MVC Views as Kendo UI Mobile Views and let @RenderBody() do it's thing.
Since we need some data, I've added the Northwind database to my project and created an EF Context that pulls in the tables that I need (Categories and Projects) for this application. I put that model in a folder called "Data". This is just the convention that I'm using. Others like to place their data access layer in a completely different project. Do what makes you happy.
I always take the data from the EF Model and put it directly into a model class that I defined. This class is simple and more importantly, easy for .NET to serialize to different formats. This sort of a class is commonly called a ViewModel.
ViewModel
I create a Models folder and create a ViewModel for a Product.
Models
public class Product {
public int ProductId { get; set; }
public string ProductName { get; set; }
public int? SupplierId { get; set; }
public Category Category { get; set; }
public string QuantityPerUnit { get; set; }
public decimal? UnitPrice { get; set; }
public short? UnitsInStock { get; set; }
public short? UnitsOnOrder { get; set; }
public short? ReorderLevel { get; set; }
public bool Discontinued { get; set; }
}
Notice how some fields are nullable? This is because the field they will eventually be mapped to in the database is nullable as well, and EF maps these nullable database fields as nullable in the model.
Now we create a ViewModel for a category. Categories have products by way of a relationship. This is represented in the ViewModel by assigning a list of products the the category.
public class Category {
public int CategoryId { get; set; }
public string CategoryName { get; set; }
public string CategoryDescription { get; set; }
public byte[] Picture { get; set; }
public IEnumerable Products { get; set; }
}
I then create a Repositories folder. Here I house the classes that will be accessing the EF Model directly. This keeps me from having to reference the EF Model from my Controllers. It's generally a good idea to keep controller methods skinny, and push other logic into a different layer. Since we're just doing simple data retrieval here, the repository pattern suits us well.
Repositories
I'll create a repository for the Categories table. Name the class CategoriesRepository.cs and place it in the Repositories folder.
CategoriesRepository.cs
Since all we need to show on the Home Screen is a list of Categories, the class can currently have one method which just returns a list of all the Categories, mapped to ViewModel objects.
public class CategoriesRepository {
readonly Data.NorthwindEntities _entities = new Data.NorthwindEntities();
public IQueryable<models.category /> Get() {
var categories = _entities.Categories.Select(c =>
new Models.Category {
CategoryId = c.Category_ID,
CategoryName = c.Category_Name,
CategoryDescription = c.Description,
Picture = c.Picture,
Products = c.Products.Select(p =>
new Models.Product {
ProductId = p.Product_ID,
Discontinued = p.Discontinued,
ProductName = p.Product_Name,
QuantityPerUnit = p.Quantity_Per_Unit,
ReorderLevel = p.Reorder_Level,
SupplierId = p.Supplier_ID,
UnitPrice = p.Unit_Price,
UnitsInStock = p.Units_In_Stock,
UnitsOnOrder = p.Units_On_Order
})
});
return categories;
}
}
A Quick Note On IEnumerable vs IQueryable: Have you ever wondered why sometimes people return an IQueryable, sometimes an IEnumerable (as above) and sometimes an IList or even List? I sure did. How many types could we possibly need here? It's because IQuerable's are executed in the database. IEnumerable's (and implementing types like List) do it in memory. We try to use IQueryable when doing database queries so that the database gets leveraged for what it's good at and we aren't doing intensive data manipulation operations in memory. For more information on IQueryable vs IEnumerable, see this stack overflow thread.
With all of our data plumbing out of the way (for now), we can finally create our first view. Create a Home Controller of type Empty MVC Controller. The Index method simply needs to return a matching MVC View and we'll retrieve the Categories from the database to send along as the model.
Index
public class HomeController : Controller
{
readonly Repositories.CategoriesRepository _categories = new Repositories.CategoriesRepository();
public ActionResult Index() {
var categories = _categories.Get();
return View(categories);
}
}
Ideally, we would want to inject the repository and EF dependencies in this project into their respective classes via constructor methods and IOC containers. For the sake of simplicity in demonstration I have omitted architectural concepts that might interfere with the main point of building a mobile web app.
Now create a Home folder under Views and add in a new MVC View. You can choose the "Empty" scaffold template, name it Index and have it use the Shared\_Layouts.cshtml as the Layout page. You can also select to make it a strongly typed view with the Model class being the Category ViewModel.
Shared\_Layouts.cshtml
Visual Studio will generate us a new MVC view. It's got the layout page defined and the Category ViewModel specified at the top as the model.
@model NorthwindMobile.Models.Category
@{
ViewBag.Title = "Index";
Layout = "~/Views/Shared/_Layout.cshtml";
}
<h2>Index</h2>
This view is going to be listing categories, so change the @model definition to be an IEnumerable of categories. That's what the HomeController will be passing.
@model
HomeController
We can also remove the <h2> tag and add in the markup for a Kendo UI Mobile view. The view is currently empty, and that's OK.
<h2>
@model IEnumerable<NorthwindMobile.Models.Category>
@{
Layout = "~/Views/Shared/_Layout.cshtml";
}
<div data-
I'm The Home View
</div>
Notice that a Kendo UI Mobile view is just a div with its data-role set to view. Also notice that we have assigned it to the main layout. That is the layout that we created in the _Layout.cshtml page.
main
_Layout.cshtml
Now we are ready to actually create the Kendo UI Mobile application with 1 single magical method. Open a script tag right before the closing <body> tag in _Layout.cshtml and create a new Kendo UI Mobile Application.
<script>
// create a new kendo ui mobile application using
// the whole page. use the 'flat' skin and server navigation.
new kendo.mobile.Application(document.body, {
skin: "flat",
serverNavigation: true
});
</script>
That one line creates the mobile application. You can fire this up in the browser and see it work. It's not terribly impressive just yet, but we're thanks to our database plumbing work, we're perfectly poised to add some features.
Before we do that, a quick note from our sponsor on server navigation.
By default, Kendo UI Mobile operates as a SPA. That is, it expects all the views to either be in the page, or loaded via AJAX. It has it's own router and expects that a postback will never occur. This is because there is nothing to post back to in environments like PhoneGap. It also allows Kendo UI Mobile to mimic the look and feel of the native OS with view animations.
By toggling serverNavigation: true, we have asked Kendo UI Mobile not to load remote views via AJAX. This will cause Kendo UI Mobile to act like a rather standard web application.
serverNavigation: true
Add a Settings Controller and just have it return the view.
public class SettingsController : Controller
{
//
// GET: /Settings/
public ActionResult Index()
{
return View();
}
}
Then add it's corresponding view by creating a Settings folder and a new empty MVC View that uses the _Layouts.cshtml layout page. Create a new Kendo UI Mobile View and assign it the main layout just like we did with the home view.
_Layouts.cshtml
@{
Layout = "~/Views/Shared/_Layout.cshtml";
}
<div data-
I'm Settings!
</div>
If you run the application now, you can toggle between the Kendo UI Mobile views by hitting the Tabstrip buttons.
We have a problem though. Did you pick up on it? When the user clicks on the "Settings" tab, the browser is redirected to the Settings view, but the Tabstrip doesn't update.
Since we asked Kendo UI Mobile to make this a server operation, it is no longer controlling the routing of paths and views. It's simply loading whatever view is in the page when the URL is called. The server is building those views and delivering them statically. Kendo UI Mobile has no way of knowing which tab we want it to display since it's not controlling the views anymore. In absence of that information, it displays the "Home" tab every time.
What we need to do is to manually switch the tab to the correct one based on which view we are in. There are many ways to pass this information. We could put it in the ViewBag and pass it from the server. I like to do this client-side with markup. I believe that HTML should define the behaviour of HTML when possible.
The way I like to control this is by storing the tab for the view as a data attribute, and then calling the switchTo method of the Tabstrip.
switchTo
Let me show you how this works.
The Kendo UI Tabstrip has a switchTo method that matches on the href. That means that if I want to switch to a tab with an href of home or /home, I just need to call switchTo('home').
href
switchTo('home')
First, we assign the attribute which tracks tab state to both the Home and Settings view. I use the data-switch-to attribute since it matches the method name we'll be using.
data-switch-to
@model IEnumerable<NorthwindMobile.Models.Category>
@{
Layout = "~/Views/Shared/_Layout.cshtml";
}
<div data-
I'm The Home View
</div>
Add the same attribute to the Settings view.
All that's left to do now is retrieve that value in the layout and call the switchTo method on the Tabstrip. In order to call that method, we have to get a reference to the Tabstrip. In order to get the reference, we need to make sure we do all of this after the mobile application is initialized by Kendo UI. Timing is everything.
When a Kendo UI Mobile Application is initialized, it fires an init event after it's all finished creating the app. We can define a function for that event and do the tab switching there.
<script>
new kendo.mobile.Application(document.body, {
skin: "flat",
serverNavigation: "true",
init: function () {
// tab switching happens here
}
});
</script>
In order to get the data-switch-to attribute from the current view (whatever that may be), we can select any items in the page with a data-role of view. This is just plain jQuery. Then we can look for the showIn data value which maps to the data-show-in attribute. I also indexed the role selector in case we end up with multiple views on a page. In that case, the first one will win.
showIn
data-show-in
If the view has a value for data-switch-to, we can get a reference to the Tabstrip by selecting it by it's role (tabstrip), and then passing "kendoMobileTabStrip" to the data method. All widgets store their instance inside the element's data API. This is per jQuery widget creation guidelines.
data
Once we have the widget instance, we just call showIn with the value we got from the view. All of that boils down to just two simple lines of code.
<script>
new kendo.mobile.Application(document.body, {
skin: "flat",
serverNavigation: "true",
init: function () {
// ** this switches the tabstrip to the right view **
// get the current view's id
var tabState = $("[data-role='view']").data("switchTo");
// tell the tabstrip to switch to that item
if (tabState) {
$("[data-role='tabstrip']").data("kendoMobileTabStrip").switchTo(tabState);
}
}
});
</script>
Now the appropriate tab is highlighted when we switch between views. Very nice!
So far we haven't added in the data that we worked so hard to wire up. Let's do that now.
User Experience is different on mobile. It just is. Certain types of common UI patterns - like grids for instance - work great on a desktop, but are extremely difficult to implement properly on a mobile device. Fingers are fat and they obscure the screen. This is why we have come up with different ways of displaying and navigating through data. One of the most common patterns for displaying a collection of data, is to use a ListView.
Many applications implement a ListView of some sort. This is done across all platforms and all OS's, but was made famous by Apple when they added the momentum scrolling. If you now scroll a list of items on a mobile device that does not have a bouncy feel to it, you can tell right away and it feels extremely odd.
Kendo UI Mobile has a ListView widget and it automatically implements momentum scrolling for you.
Since we are already returning a list of categories as the model for the Home view, it's easy to add a ListView widget in and populate it with data from the server. And since we're building a mobile web site the good old fashioned way, all we need to do is iterate over the model using Razor and a foreach loop. Kendo UI Mobile ListViews are really just an unordered list (<ul>) at their core, so we can build one up using that HTML element. All we have to do to cause Kendo UI to turn the plain <ul> into a Kendo UI Mobile ListView, is to give it a data-role of listview.
foreach
<ul>
listview
<div data-
<ul data-
@foreach (var item in Model) {
<li>
<a href="#">
<h1>@item.CategoryName</h1>
<p>
@item.CategoryDescription
</p>
</a>
</li>
}
</ul>
</div>
This populates the page with categories from the database on the server, and then transforms it into a Kendo UI Mobile ListView on the client.
If we were on a desktop, we could have a grid of categories, and then expand a category to see it's specific products, and then put a grid row into edit mode to edit a product. On mobile, this is not really an option. We can implement the same functionality, but we have to be a bit smarter about how its done and think a little bit differently.
We will do just that in part two of this article where we implement navigation for a data hierarchy.
If you have been following along, you can check out my progress so far, and of course grab the code from the GitHub repo. You can always grab a trial of Kendo UI at any time.
Until next time, just ponder this: The postback is far from dead. It's far too often dismissed and remains a powerful tool at your disposal.. | http://www.telerik.com/blogs/creating-a-mobile-site-for-your-sql-server-data | CC-MAIN-2017-26 | refinedweb | 4,160 | 65.73 |
Recently, I was digging through some of my code and came across a ton of stuff I had that revolves around Google. While I was flipping through some of the code, I came across an app I wrote back when Google Trends first came out. It was a simple application that could scrape the Google Trends page to see what the hottest trends were. I don’t exactly remember why I originally wrote that app, but I apparently had a reason for it at the time. Unfortunately, the app used a simple scraping mechanism that no longer worked with the updated Google Trends site. So, instead of just throwing out the code, I decided to bring it up to date. Besides, who knows when I might need something like this in the future. Since I’m sure there are plenty of others out there that could use this code right now, I’m going to take a minute to share it with you.
To begin with, the original method of simply scraping the Google Trends homepage located at was no longer going to work since Google has changed the way they generate their screens. Like most of the other Google pages, Google Trends now generates their entire user interface on-the-fly using Javascript. However, Google does still make it easy for us to get the current hottest trends by providing us with a nice little RSS feed located at. I’ve shown how to create RSS readers in the past. So, this won’t be anything new, but I will use a simpler method for parsing the XML response by using the XmlDocument along with the SelectNodes and SelectSingleNode methods. Since the Google Trends RSS feed also includes namespaces, I will also make use of the XmlNamespaceManager. Some of the items in the feed also include markup. So, I am also adding a simple method that will strip away any HTML, leaving us with nice clean text to work with.
The first thing you will need to do to get the XML from the Google Trends RSS feed is to incorporate a new WebClient object. Using that object, you can download the entire XML response as text by calling the DownloadString method and passing it the URL of the feed itself.
WebClient wc = new WebClient();
String html = wc.DownloadString(““);
Once you have the XML as a string, you will want to load it into a new XmlDocument by calling the LoadXml method.
XmlDocument doc = new XmlDocument();
doc.LoadXml(html);
Before we jump in too far, we will now want to setup our XmlNamespaceManager object which we will use later.
XmlNamespaceManager nsmgr = new XmlNamespaceManager(doc.NameTable);
nsmgr.AddNamespace(“ht”, ““);
Next, you will want to setup a new XmlNodeList which will contain all of the items found within your XML document. Using simple XPath, we can navigate all the way down the element chain and extract all occurrences of “<item>” like this:
XmlNodeList items = doc.SelectNodes(“//rss/channel/item”);
Now that you have a list of all items in the XML, you can iterate over the list using a standard “foreach” routine. Along the way, you can pluck out the information you want by again using the SelectSingleNode and SelectNodes functions. For the purposes of this article, we will pluck out the title & description of each item. Since some items do not include descriptions, you will want to include a null check before calling “.Value” on the description node. Calling “.Value” is what returns the content between the open and close tags. For example, if we have the following XML:
<?xml version="1.0" encoding="UTF-8" ?> <item> <title>Prodigy Productions, LLC</title> </item>
we can access “Prodigy Productions, LLC” by calling doc.SelectSingleNode(“//item/title/text()”).Value.
Alrighty, now that we know how to extract certain portions of content from the XML, we can continue to extract the list of news items that are associated with each hot trend using the same technique. When you start pulling the titles and snippets from each news_item, you will notice that these items have been prefixed with “ht:” such as “ht:news_item”. This is where the XmlNamespaceManager comes into play. To use it, all you have to do is pass it as a second argument to the SelectNodes and SelectSingleNode methods like this:
XmlNodeList news_items = item.SelectNodes(“ht:news_item”, nsmgr);
Another thing you might or might not care about when working with the news items are that they typically contain extra markup which makes it easy to style them when displaying the results in an HTML page. But, since we are spitting out our results to the console, we don’t need this extra markup. For that, we can use our good friend Regex and pass it a simple pattern as shown here:
const string HTML_TAG_PATTERN = "<.*?>"; public static string StripHTML(string inputString) { return Regex.Replace(inputString, HTML_TAG_PATTERN, string.Empty); }
That’s it. You are now ready to get the current hottest trends on Google. Below is the entire code used for this article. You can also download my entire Solution project from. Enjoy!
using System; using System.Collections.Generic; using System.IO; using System.Text.RegularExpressions; using System.Net; using System.Xml; namespace GoogleTrends { class Program { static void Main(string[] args) { WebClient wc = new WebClient(); String html = wc.DownloadString(""); XmlDocument doc = new XmlDocument(); doc.LoadXml(html); XmlNamespaceManager nsmgr = new XmlNamespaceManager(doc.NameTable); nsmgr.AddNamespace("ht", ""); XmlNodeList items = doc.SelectNodes("//rss/channel/item"); foreach (XmlNode item in items) { string title = item.SelectSingleNode("title/text()").Value; string description = ""; if (item.SelectSingleNode("description/text()") != null) description = item.SelectSingleNode("description/text()").Value; Console.WriteLine("Title: " + title); Console.WriteLine("Description: " + description); XmlNodeList news_items = item.SelectNodes("ht:news_item", nsmgr); foreach (XmlNode news_item in news_items) { string news_title = StripHTML(news_item.SelectSingleNode("ht:news_item_title/text()", nsmgr).Value); string news_snippet = StripHTML(news_item.SelectSingleNode("ht:news_item_snippet/text()", nsmgr).Value); Console.WriteLine(" - News Title: " + news_title); Console.WriteLine(" - News Snippet: " + news_snippet + Environment.NewLine); } Console.WriteLine(Environment.NewLine); } Console.ReadLine(); // This is here so we can view the trends before the app closes } const string HTML_TAG_PATTERN = "<.*?>"; public static string StripHTML(string inputString) { return Regex.Replace(inputString, HTML_TAG_PATTERN, string.Empty); } } }
PayPal will open in a new tab. | http://www.prodigyproductionsllc.com/articles/programming/see-whats-trending-with-google-trends-and-c/ | CC-MAIN-2015-40 | refinedweb | 1,029 | 57.57 |
Hello folks,
Happy New Year!!!
I have a class that has couple of properties.
Can I have a property in that class that can be a collection. Like say the MarketData. can I have a property say "FailedCurves" . It is a string property. But only thing is, this property is an ArrayList or Dictionary object that can have multiple string items in it.
How can I do that in C#? Am I thinking correct?
Thanks much
Code:
public class MarketData
{
#region "Private Data Members"
private int _countAdvanceCurve;
private int _countAdvancesSBCAgencyCurve;
private int _countAdvancesSBCAAACurve;
private int _countFhlbsfTreasuryCurve;
private int _countDNCOCurve;
#endregion "Private Data Members"
#region "Public Data Members"
public int CountAdvanceCurve
{
get { return _countAdvanceCurve; }
set { _countAdvanceCurve = value; }
}
} | http://forums.codeguru.com/printthread.php?t=532263&pp=15&page=1 | CC-MAIN-2014-35 | refinedweb | 116 | 51.24 |
Details
- Type:
New Feature
- Status:
Reopened
- Priority:
Major
- Resolution: Unresolved
- Affects Version/s: None
- Fix Version/s: None
- Component/s: groovy-jdk
- Labels:None
- Testcase included:
- Patch Submitted:Yes
- Number of attachments :
Description
It was brought up on the dev mailing list that you could query multiple list values at once, but not multiple Map values:
Groovy currently lets you say:
def x = [100,200,300,400] assert x[0] == 100 assert x[2] == 300 assert x[0,2] == [100,300]
You can say:
def y = [moo: 100, cow: 200, egg:300 hen:400] assert y["moo"] == 100 assert y["cow"] == 300
But currently, you cannot slice a Map like this:
assert y["moo","egg"] == [100,300]
This patch adds that functionality, so you can do (from the unit test)
void testMapSlice() { def m = [ a:1, b:2, c:3 ] assert m[ 'a', 'b' ] == [ 1, 2 ] assert m[ 'a', 'c' ] == [ 1, 3 ] assert m[ 'a', 'd', 'c' ] == [ 1, null, 3 ] }
Note that the resultant List contains null for keys that were not found. This (I believe) differs from the way that Perl handles this (I believe they are just skipped in perl), but I think that having the null values gives the developer more information.
And they could be filtered out by doing something like:
m[ 'a', 'd', 'c' ].findAll { it }
Hope it's ok!
Activity
This looks interesting and close to ready for inclusion. The main thing that worries me is that there is a slight lack of symmetry. For example, in the examples given, we can do both of these:
assert y["moo"] == 100 assert y["moo","egg"] == [100,300]
but there is no way to for instance just return [100].
This might not seem important, but if you consider the following code:
def m = [:] def key = [1, 2] m[key] = 'foo' println m[key]
It currently prints 'foo' but the above change would result in '[null, null]'. So it is a breaking change albeit one that hopefully doesn't arise too often in current code.
Gah, can't believe I missed that, thanks Paul!
Not sure how to get round this blocker...I think overloading getAt to give you seamless integration is a no-go... So the best I could come up with was to change the getAt method to:
public static <K,V> List<V> slice(Map<K,V> self, K... keys) { ArrayList<V> ret = new ArrayList<V>() ; for( K key : keys ) { ret.add( self.get( key ) ) ; } return ret ; }
So thn you can do:
def map = [ a:10, b:20 ] assert map.slice( 'a', 'b' ) == [ 10, 20 ]
and maps with list keys work too:
def m = [ (['a','b']):10, c:20 ] assert m.slice( [ 'a', 'b' ] ) == [ 10 ] assert m.slice( [ 'a', 'b' ], 'c', 'd' ) == [ 10, 20, null ]
Does this look ok? I guess even without the seamless integration, it's still a nice method to have?
Ignore this... I just discovered you can do:
[ a:10, b:20, c:30 ].subMap( [ 'a', 'X', 'c' ] ).values()
to get exactly the same functionality (the intermediate subMap call actually gives you more functionality) with existing methods...
It was pointed out to me that although it duplicates the functionality, the slice method is much nicer and this issue should remain open... Sorry about the faffing...
Wow, that was quick!
Incidentally, hash slices in Perl work just like yours
(nulls are returned for keys that have no associated value).
The behavior you've implemented looks correct/intuitive.
Tim Yates, you rock.
Thanks!
Cheers,
-Jon | http://jira.codehaus.org/browse/GROOVY-4869 | CC-MAIN-2014-42 | refinedweb | 583 | 69.31 |
C# and .NET Core Appreciation Post. The most beautiful piece of code I have ever seen... this month!
Beautus S Gumede
・3 min read
Disclaimer!
This post is a little bit ridiculous and over the top but I'd be doing injustice if I do not share my thoughts on how the journey of learning something new feels like. But if you're in a rush, just scroll to the snippet at the bottom
So here's the thing. Years ago, when I had just started programming, as part of my degree, I hated C#. I didn't want anything to do with any project related to Microsoft related tech. Here's the rationale behind this (it's a dumb rationale) Why does C# have a 'C' in its name but looks almost exactly like Java? I also believed that a good programmer must know Java, C/C++, PHP for server programming, and Python just to prove you're smart. I swear this is a phase that I literally went through.
This was because when you've just started coding, for some reason there's this notion of being defined by the programming languages you know and the libraries you use. I'm sure that was greatly influenced by the "Top 10 best languages to learn in 201X" or "Language X is dead, here's why", even "Programming languages used by Facebook" articles that I actively looked for! just so that I don't get left behind in the industry.
Of course, a lot changed in the way I view tech and my career but what I felt about C# or anything related to it never changed, until about 6-7 months ago. See, at work, they use a shit ton of .NET (or dotnet. I'm still getting the hang of this) so I'm learning it now. A few months ago, I completed a 'Basics for beginners' udemy course. It was fine I guess🙄 but learned a lot of cool stuff 🤗
Now I'm creating a project so that I learn more of this freaky stuff. Enter .NET Core 3! The most overwhelming piece of framework I've ever encountered (this year. I have to be clear on this). I have never given up so many times but come back to try again, in just a single week, ever for any project that I ever started to learn. When I finally got my thingie to give me a token for authentication, after days and nights of struggle, I ran the most satisfyingly powerful git command known to mankind
git commit -m 'Initial commit'
I was happy. Then I had my supper while rereading up on what I've implemented and that's when I saw it
The masterpiece
private User GetUserByToken(string token) { var user = (from u in _context.Users join t in _context.Tokens on u.Id equals t.UserId where t.Body == token select u).SingleOrDefault(); return user; }
It's beautiful isn't it? I thought so too
I mean do you understand that the last time I saw something this cool was when I saw JSX for the first time, which looked so confusing but really not so trivial after some time of use. My brain doesn't have to context switch between application code and database scripts but the thing just works seamlessly. And the syntax highlighting plus the IntelliSense! I am stunned.
If you're reading this and you do not understand WTF is going on here, it's okay, it's not your time yet. But later on in your life, you will see what I saw and you'll say "shit, that dude on dev.to wrote a weird post about this feeling I'm feeling right now".
Not convinced? Look again
Is GraphQL the future of APIs?
Graphs are everywhere! What are the main benefits of the data graph structure? Is GraphQL the future of APIs?
I hate C# LINQ query syntax. It is completely pointless. You can always use LINQ methods directly.
This is a warning to anyone who thinks it's cool.
It gives you nothing. It's not shorter, it's not more readable, it's not faster... It's actually harder to refactor and maintain.
I used to love them at first: they look so cool, right? Well, I've been using C# for many, many years, and the experience has always converged to: avoid them, you will regret the temptation. - Use lambdas directly instead: Lambdas are great.
Nice simple example: Often, you need to split the query into 2 places: super simple with normal syntax: you just copy part of it elsewhere, keeping it as IEnumerable/IQueryable. - With LINQ syntax, you now have to figure out how to split the nonlinear query, the beginning and end is always repeated, you have to type them over..... Zero added value, only problems. After all, its design was inspired by the ergonomically ~worst language: SQL. (I admit, this is opinionated; what I say about LINQ syntax: not so much)
Most importantly: Not all LINQ/extension methods have LINQ syntax, so now you have to mix it, and it's completely horrible.
Also, it has nothing to do with .NET Core: It's been in C# since before 2008.
btw. Don't get me wrong: LINQ extension methods are great... except, for no good reason, C# decided to call all the methods differently than every language ever - but the LINQ syntax was 100% a mistake.
I love how this came after I had seen this comment first or else I would have been devastated by now. I have rewriten that piece of code in my project using lambda, that whole function is made up of just two lines now. In fact, this is a great neede eye-opener.
The LINQ syntax has some advantages in certain cases. It can be both more readable and shorter if your query is complex (sub-queries/grouping/cross products) or requires temporary state (those tend to not be translatable to SQL and are usually done in memory).
Lambdas add a ton of noise in general because of punctuation (parentheses, braces, arrows) and you have to re-declare the item variable in every chained call.
For a simple query like this using a navigation property and a lambda a simpler, as shown.
It is about knowing when and how to use a tool; most tools are not categorically "useless".
@maartyl here goes me proving you wrong! 😅
But with this argument, we all come out as winners. The abundance of tools means there's more than one way to do the job.
@brunnerh Thank you for a counterexample. I agree that with complex queries they probably become more worthwhile, and at some point almost certainly are nicer than using lambdas directly, however, I would argue a complex query should not even be kept in the code, usually.
(I should note, I'm not too familiar with LINQ to SQL)
It can probably be refactored into multiple reusable functions. (and the moment you do this, the LINQ syntax will be worse again) - It probably should be, and using LINQ syntax makes it harder to do so. - At least to me, large queries seem quite hard to read and reason about, whereas 'composing function{s, calls}' is much nicer (I admit, this is probably a matter of opinion, but I often reuse 'subqueries' and that is generally better).
(if it is LINQ to objects) I've never seen that complex queries actually needed - Usually, there is no problem using 'lambda' syntax. (although, to be fair: I haven't always tried, but the trying would probably cost more time overall, than just always using one pattern for everything - especially one that INTERRACTS well with everything else in the language)
(if it is LINQ to SQL) If it is complex enough: it should probably not be written in C# at all, as LINQ syntax is very limiting, and I've repeatedly found it does not support things I needed, and had to rewrite it in raw SQL anyway.
Writing a query that mixes SQL and memory (does not translate to SQL) feels like a code smell to me. I understand there is some benefit to having a purely SQL query, which you then change to actually run in memory, without rewriting it, but it feels more like it would happen by accident, and not behaving as you expect it to. - I like to have a clear separation of what runs where, and generally clear separation between all things that need their own 'governing'.
Lambda noise was never much of an issue for me, but I admit this is a good point. I use lambdas everywhere for everything, but someone not as used to them might find it annoying.
Overall, there probably is some use for them (for actual, complex queries) but in my experience, I don't think I've ever encountered it. I guess they may be great for someone else, I just cannot imagine it...
PS: No idea why dev.to notified me the first time, but not again. I've only found this because I came back to report on my own findings.
I'm glad you saw the other comment first. ^^ I would have hated to make you feel devastated. I just wanted to deter others from repeating my mistakes. ^^ (and clearly, I'm not the only one :D) - I was probably just a bit harsher, because that temptress burnt me many times, before I finally learned. XD
PS: I would love it, if you could prove me wrong. ^^
- I would love to use them... it's just never been worth it.
This is going to be hard 😅 You have years of experience, I have a few weeks. Unless they introduce a breaking change that we'll most likely learn at the same pace, there's no way I can defend LINQ as of yet
I want to wholeheartedly thank you for this article. I considered LINQ syntax so useless, I forgot about it when trying to figure out how to bend C# to my needs. Thanks to you, I started to wonder what LINQ actually could be useful for, and it has a use!
It can be used to add monadic 'do' syntax sugar. I missed this for a long time, creating all sorts of workarounds. Inventing ways to use the language in manners the language was never meant to be used. (Not stopping now, lol)
Now, I finally found a use for LINQ syntax. I didn't implement it yet, but I am nearly certain it's possible. Implementing the specific SelectMany on my custom type, I should be able to write 'nested' bind calls (thus able to reference previous variables) as a flat sequence of operations.
(I just hope there is not some horrible trap, like there is in mutable async blocks (which makes them useless as generic monad syntax))
Imagine
m()to be some monadic expression.
If this works, my code will be so much more readable and maintainable thanks to this. ^^ - And if it wasn't for you, I would have never thought about this. So, thank you. ^^
I am a fan of the fluent API. I never much liked the 'SQL-like' variants. Joins especially were ugly. Getting back an IQueryable and working with that till you need the data is much preferred, IMO.
Here is your example written using an async method and lambda expression syntax:
private async Task<User> GetUserByTokenAsync(string token) => await _context.Users
.Join(_context.Tokens.Where(t => t.Body == token), u => u.Id, t => t.UserId, (u, t) => u)
.SingleOrDefaultAsync();
Note: you will need to include additional namespaces for the async extension methods on IQueryable<> and for Task<>
Drop the async and await keywords for better performance.
At some point, I'm looking forward to benchmarking the alternatives i.e. Fluent vs Lambda, Async vs Sync. I'm certain there's no one answer for every use case
Fluent translates to the same underlying Linq statements and concequently the same generated SQL. Note: The await keyword can be used in front of the parentheses.
Async code frees up the thread to do other things while it's waiting for the slower database network roundtrip. You are trading a small setup cost for starting the state machine handling the async processing, but that's it.
So basically async statements are useful for performance in situations where the data being processed is rather large
I agree that using the Async flavor of the SingleOrDefault is generally the preferred way as it improves massively the amount of load a single server can absorb. My comment was simply that the proposed method does not need to be async (contain the keyword async in the signature) as there is no need to await tasks and execute continuations within the function. So, basically, you can directly return the Task returned by SingleOrDefaultAsync without awaiting it as the caller will await it.
@Beautus S Gumede on "So basically async statements are useful for performance in situations where the data being processed is rather large"
(Not seeing proper "Reply" function on mobile)
That's one example where async is advantageous, and there are many, and it gets complicated pretty quick. I would recomend doing a deep dive on Asynchronous and Threading, but know that they are two very different subjects (although their impacts on your code can appear similar).
There's a pretty common analogy used to explain threading involving a chef/cook that I find to be a good foundation for internalizing the concept. Examples are all over Google for that. This article gives you a reasonable overview with some good detail, IMO: medium.com/swift-india/concurrency...
Happy coding!
@Beautus S Gumede In a highly performant scalable web app, async/await allows .NET to momentarily give the underlying thread to a different incoming request while waiting on IO (such as a database call). It makes .NET prolly the most scalable stack out there when done right. NodeJS does similar things with its singlethreaded event loop.
This is much better understandable I think
Great! Now try lambda expression, you will be happier:)
I did 😅 it looks way better if you're working on a single table
You can use navigation properties and lambda to improve your code.
Something like:
public class User
{
public virtual Token Token { get; set; }
}
And then:
_context.Users.SingleOfDefault(u => u.Token.Body == Token);
Ok, this actually looks cleaner and more concise. I'm assuming from this that the virtual property relation to users is setup automatically because both entities exist in one context. I'm puzzled by how this really works
Here's another strange thing I'm picking up from this snippet, the fact that the property name and its type can just be exactly the same without C# complaining. I've been experimenting areas where such is possible.
public virtual Token Token;
I will add my voice to the others here who recommend avoiding LINQ syntax. Most MS development shops have gotten away from LINQ syntax and your query shows one good reason:
It might be more efficient for the
whereto execute before the
join. With the LINQ syntax you can do it but it looks odd.
Now compare:
For your specific case, I suggest you turn
_context.Tokensinto an
IDictionary<string, Token>or even
IDictionary<string, User>where the token body is the key. Then your method looks like this:
I have a question with this one. How would my DBContext class look like, will this UsersByToken field have to be mapped to a table in the bd too?
One thing that has been mentioned a couple of times is that I should consider using IDictionary, I'm looking forward to it
Ah. I did not know what
_contextwas and I was not aware you were using EF. I assumed
_contextwas some kind of property bag or class you controlled. So, the answer is no you would not have a
UsersByTokenproperty on your
DbContextand it would not map to the database. Instead it should be a local field/property in your login class. Now that I understand you are making round trips to the database, you will find
IDictionary<>to be even more efficient than your current code, but you will need a fall back to retrieving the user from the database if it is not found associated the provided token in your dictionary.
Thank you for this. Seems like IDictionary and IQueryable are my best bet for getting the most out of EF
I felt it bro, C# is easy and beautiful
The next step up from this is to combine this with webapi odata, that way the framework generated this sort of linq expression for you from a URL.
The irony here is the discovery .net core 3 and the EF core libs that make this possible are right now in a worse state than the ef6 and .net 4.x stuff regular .net people just migrated from.
Give it a couple years though and this will be special under .net 5 when you start notice it's literally write once, run anywhere.
LINQ, like SQL, is declarative.
It's interesting that I didn't' use the term 'declarative' but now that you've mentioned it, everything makes sense. I like declarative code :D
Roll with it. Read 'T-SQL Fundamentals, Third Edition' by Itzik Ben-Gan. Work through all the examples. You'll see why "SELECT" really belongs after "FROM" and "WHERE" (LINQ knows!). Find a good LINQ resource and work through more examples (LINQ to this, LINQ to that...). Don't worry about the lambda expressions underpinning LINQ syntactic sugar at this time. Microsoft has a fine offering. Visual Studio is the best IDE on the market. DOT NET "Core" is platform independent. For Web, perhaps check out angular.io (a Google framework) underpinned by TypeScript language (the "C# guy" invented TypeScript). Angular is organized, TypeScript is awesome, and it all downcompiles into the correct JavaScript for the device requesting your "stuff." There are dollar bills for your bank account on the Microsoft train. A lot of the other stuff is budget busting, shifting sand. In my experience.
I 100% agree that SQL got select wrong, and it should be at the end!
As for VS being the best IDE, I honestly find using Kotlin and IntelliJ Idea more pleasant.
You shouldn't use the join keyword, a second "from" is best:
i) you get to use the equality operator;
ii) you can arrange the operands any way you like;
iii) you can add on the DefaultIfEmpty() function on the end to change to a left join and vice versa.
Also, I always use the LINQ format instead of chained functions. Simply because it's more readable.
This is so true. The join was unnecessary 🤔 I was kind of suprised to why the Lambda version I switched to seems shorter, it was because it didn't use a join 😬
Yeah... LINQ is solid stuff. Had a dev on my team give the same reaction a few weeks back.
Now if you want it to run 10 times faster, drop entity framework and use dapper
If you want better performance but still strongly typed you could look at linq2db
Linq is epic and versatile. Here is a sample of me combing two Lazy objects building a new lazy result without executing them via using .Value, via linq monad extensions i created
Var lazyresult =
from a in lazyThing1
from b in lazyThing2
select a + b;
If you like that, check out rx.net reactive extensions for linq
Use lazy loading proxy from
Microsoft.EntityFrameworkCore.Proxiespackage, so you can get better sleep
I'll look this up! Thanks
Lazy loading navigations can easily lead to accidental N+1 queries. You can explicitly include navigations with
DbSet<>.Include.
I like LINQ syntax, but I think it's not really necessary here. I'd write it as:
Of course, this is a subjective matter, so any approach is OK if the team is fine with it.
Lovely article Beautus! As other commenters mention I also remember this thrill and power when witnessed at first. It is truly an amazing feeling and I share your excitement with JSX. It is great to hear you have bound with feature of a language such as LINQ as that will guide you to discover other amazing features like maybe the newest pattern matching in switch clauses? Which again might lead you to a totally different but .NET language as F#.
Keep on mind, LINQ syntax gives you range variables which you do not have in the lambda function syntax. The LINQ syntax eventually ends up “compiled” into lambda syntax, i.e. just a bunch of methods, read a ref explanation here: stackoverflow.com/a/20573973
If you are interested to understand the fantastic IEnumerable “driver” of the LINQ and how to literally reimplement then google for EduLINQ by Jon Skeet. Harnessing the yield keyword with iterators and generators is what made love C# even more. Afterwords you can google how LINQtoEntities works (thats the one you use with _context), hint/spoiler its an amazing compiler that helps it with “translation” into SQL with a ton of expression visitor pattern implementations composed to build a compiler pipeline... uh :-)
Also a side note on async as I am very allergic to that topic :-), it will make your code slower but it will take care of memory better in longer (scale up scenario) shots. It is not a silver bullet and it has nothing to do with increasing performance at all (only maybe decreasing it). Google Stephen Cleary and his blog for more info.
Cheers,
V.
I just read through the stackoverflow link and I'm stunned. But through that revelation, I had to wonder how performant a Lambda api would fare up against 100 million rows of data as opposed to using a raw query
Hey. Your thoughts stream to right questions! Actually EF is a lot slower than raw cons (or Dapper queries), for a test in the following link 10x to 15x times, but the test is not a real world scenario, i.e. do not infere that Dapper or ADO is better, as speed is just one treat. Read more about the test: exceptionnotfound.net/dapper-vs-en...
Also EF has to cache your lambdas after translating them for the first time into SQL strings. Also note some queries can not be cached due to their dynamic construction (you can build up a query with multiple C# statements) such as any query using a .Where(x => someNumbers.Contains(x.Id)), as the SQL “where in” (it is actually a ton of OR statements if you look into SQL) part is actually calculated each time.
Using EF is certainly not a question of speed.
You have never written in a functional language, true that?
FSharp is far superiour and tons more declarative as CSharp ever can be.
It is a superset, csharp has nothing that fsharp has not and the other way around has fsharp everything that csharp has and uses barely anything of it, since every fsharp dev knows, it is the by far weakest part of the language.
Sadly, scientific evidence counts less in a world full of self declared "computer scientists"
fsharp.org/testimonials/
Tbh I vastly prefer the method syntax when writing LINQ code, because of I wanted to write something that looks like SQL, I'd use SQL, not some halfbreed c#/SQL syntax.
Your next problem is that you'll be using this all the time for DB access, when really SQL is not just more appropriate, faster and more efficient, bit it's exactly all the things you love about that bit of code. :-)
Wait till you see dynamic LINQ =)
Or manually build expression trees haha :-)
This is sexy
Now are any employers going to move to .NET Core 3 when .NET 5 is right around the corner and promises to unify everything? Probably not unless you're a business that rebuilds everything from scratch every 2 years. You'd be a fool to use .NET Core 3 if your core product gets rebuilt only every 5-10 years.
So yeah, we will see. Smart money is on waiting.
Welcome to C# I remember this feeling. In fact, it has happened to me a few times in my career. Always be aware of the industry, but don’t take it too seriously. It’s people like you and I that are shaping the industry. Don’t be scared to make mistakes, but try always to make software that you can be proud of and share what you learn with others.
Side note: Java is also a wonderful high level language with tons of resources around it now, but one of the forces that made it popular was the amount of money that was invested on advertising it. I suggest you check this video on YouTube, who knows you might fall in love with F# one day ;)
youtu.be/QyJZzq0v7Z4
This gives me an idea. After I'm done with the project I've started with. I want to create a clone or similar with F#. With .NET Core also, I think that's going to be very interesting
Thanks for the inspiration bro, 'cause I have just started with the language and I must say for a beginner it's frustrating with all it's .Net types😩😣😭 but after reading this for sure nami I'll be set👍
LINQ is your next thrill
I never enjoyed anything like this in a long time. I just had to write about it. Flutter almost gave me a similar vibe but it wasn't that thrilling
EDIT: Jsx did!
I really enjoy your writing style. Followed!
Thanks a lot, man. I appreciate that a lot. I try to make it as relatable and conversational as much as possible.
"Enter .NET Core 3! The most overwhelming piece of framework I've ever encountered"
to be honest I think .net core is not a framework
I also hate C# because of so many libraries and many built-in methods in these libraries. But i enjoyed working on C# Windows Form Applications. | https://practicaldev-herokuapp-com.global.ssl.fastly.net/sduduzog/c-and-net-core-appreciation-post-the-most-beautiful-piece-of-code-i-have-ever-seen-this-month-49gf | CC-MAIN-2020-05 | refinedweb | 4,395 | 71.95 |
> hash.zip > hash.c
/* +++Date last modified: 05-Jul-1997 */ #include
#include #include "hash.h" /* ** public domain code by Jerry Coffin, with improvements by HenkJan Wolthuis. ** ** Tested with Visual C 1.0 and Borland C 3.1. ** Compiles without warnings, and seems like it should be pretty ** portable. */ /* ** These are used in freeing a table. Perhaps I should code up ** something a little less grungy, but it works, so what the heck. */ static void (*function)(void *) = (void (*)(void *))NULL; static hash_table *the_table = NULL; /* Initialize the hash_table to the size asked for. Allocates space ** for the correct number of pointers and sets them to NULL. If it ** can't allocate sufficient memory, signals error by setting the size ** of the table to 0. */ hash_table *construct_table(hash_table *table, size_t size) { size_t i; bucket **temp; table -> size = size; table -> table = (bucket * *)malloc(sizeof(bucket *) * size); temp = table -> table; if ( temp == NULL ) { table -> size = 0; return table; } for (i=0;i size; bucket *ptr; /* ** NULL means this bucket hasn't been used yet. We'll simply ** allocate space for our new bucket and put our data there, with ** the table pointing at it. */ if (NULL == (table->table)[val]) { (table->table)[val] = (bucket *)malloc(sizeof(bucket)); if (NULL==(table->table)[val]) return NULL; (table->table)[val] -> key = strdup(key); (table->table)[val] -> next = NULL; (table->table)[val] -> data = data; return (table->table)[val] -> data; } /* ** This spot in the table is already in use. See if the current string ** has already been inserted, and if so, increment its count. */ for (ptr = (table->table)[val];NULL != ptr; ptr = ptr -> next) if (0 == strcmp(key, ptr->key)) { void *old_data; old_data = ptr->data; ptr -> data = data; return old_data; } /* ** This key must not be in the table yet. We'll add it to the head of ** the list at this spot in the hash table. Speed would be ** slightly improved if the list was kept sorted instead. 
In this case, ** this code would be moved into the loop above, and the insertion would ** take place as soon as it was determined that the present key in the ** list was larger than this one. */ ptr = (bucket *)malloc(sizeof(bucket)); if (NULL==ptr) return 0; ptr -> key = strdup(key); ptr -> data = data; ptr -> next = (table->table)[val]; (table->table)[val] = ptr; return data; } /* ** Look up a key and return the associated data. Returns NULL if ** the key is not in the table. */ void *lookup(char *key, hash_table *table) { unsigned val = hash(key) % table->size; bucket *ptr; if (NULL == (table->table)[val]) return NULL; for ( ptr = (table->table)[val];NULL != ptr; ptr = ptr->next ) { if (0 == strcmp(key, ptr -> key ) ) return ptr->data; } return NULL; } /* ** Delete a key from the hash table and return associated ** data, or NULL if not present. */ void *del(char *key, hash_table *table) { unsigned val = hash(key) % table->size; void *data; bucket *ptr, *last = NULL; if (NULL == (table->table)[val]) return NULL; /* ** Traverse the list, keeping track of the previous node in the list. ** When we find the node to delete, we set the previous node's next ** pointer to point to the node after ourself instead. We then delete ** the key from the present node, and return a pointer to the data it ** contains. */ for (last = NULL, ptr = (table->table)[val]; NULL != ptr; last = ptr, ptr = ptr->next) { if (0 == strcmp(key, ptr -> key)) { if (last != NULL ) { data = ptr -> data; last -> next = ptr -> next; free(ptr->key); free(ptr); return data; } /* ** If 'last' still equals NULL, it means that we need to ** delete the first node in the list. This simply consists ** of putting our own 'next' pointer in the array holding ** the head of the list. We then dispose of the current ** node as above. */ else { data = ptr->data; (table->table)[val] = ptr->next; free(ptr->key); free(ptr); return data; } } } /* ** If we get here, it means we didn't find the item in the table. 
** Signal this by returning NULL. */ return NULL; } /* ** free_table iterates the table, calling this repeatedly to free ** each individual node. This, in turn, calls one or two other ** functions - one to free the storage used for the key, the other ** passes a pointer to the data back to a function defined by the user, ** process the data as needed. */ static void free_node(char *key, void *data) { (void) data; if (function) function(del(key,the_table)); else del(key,the_table); } /* ** Frees a complete table by iterating over it and freeing each node. ** the second parameter is the address of a function it will call with a ** pointer to the data associated with each node. This function is ** responsible for freeing the data, or doing whatever is needed with ** it. */ void free_table(hash_table *table, void (*func)(void *)) { function = func; the_table = table; enumerate( table, free_node); free(table->table); table->table = NULL; table->size = 0; the_table = NULL; function = (void (*)(void *))NULL; } /* ** Simply invokes the function given as the second parameter for each ** node in the table, passing it the key and the associated data. 
*/ void enumerate( hash_table *table, void (*func)(char *, void *)) { unsigned i; bucket *temp; for (i=0;i size; i++) { if ((table->table)[i] != NULL) { for (temp = (table->table)[i]; NULL != temp; temp = temp -> next) { func(temp -> key, temp->data); } } } } #ifdef TEST #include void printer(char *string, void *data) { printf("%s: %s\n", string, (char *)data); } int main(void) { hash_table table; char *strings[] = { "The first string", "The second string", "The third string", "The fourth string", "A much longer string than the rest in this example.", "The last string", NULL }; char *junk[] = { "The first data", "The second data", "The third data", "The fourth data", "The fifth datum", "The sixth piece of data" }; int i; void *j; construct_table(&table,200); for (i = 0; NULL != strings[i]; i++ ) insert(strings[i], junk[i], &table); for (i=0;NULL != strings[i];i++) { printf("\n"); enumerate(&table, printer); del(strings[i],&table); } for (i=0;NULL != strings[i];i++) { j = lookup(strings[i], &table); if (NULL == j) printf("\n'%s' is not in table",strings[i]); else printf("\nERROR: %s was deleted but is still in table.", strings[i]); } free_table(&table, NULL); return 0; } #endif /* TEST */ | http://read.pudn.com/downloads/sourcecode/math/1609/hash.c__.htm | crawl-002 | refinedweb | 1,016 | 69.31 |
This chapter describes creating BI Publisher layout templates using the layout editor.
This chapter includes the following topics:
Section 3.1, "Overview of BI Publisher Layouts"
Section 3.2, "Launching the Layout Editor"
Section 3.3, "About the Layout Editor Interface"
Section 3.4, "Page Layout Tab"
Section 3.5, "Inserting Layout Components"
Section 3.15, "Setting Predefined or Custom Formulas"
Section 3.16, "Saving a Layout"

Figure 3-1 shows an example of interactive output.
Figure 3-1 Example of Interactive Output
Notice the following features:
Pop-up chart details - Hover the cursor over a chart item to display details of the data.

Add sample data to the data model before you create a new layout. For information on adding sample data to the data model, see "Testing Data Models and Generating Sample Data."
The layout editor does not support namespaces or attributes in the XML data.
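Because the layout editor accepts only element-based data, source XML that carries namespaces or attributes must be flattened before it is attached as sample data. The following Python sketch is an illustration only (it is not part of BI Publisher): it strips namespace prefixes from tag names and promotes each attribute to a child element.

```python
import xml.etree.ElementTree as ET

def flatten(elem):
    # Strip any namespace from the tag name ("{uri}tag" -> "tag")
    if '}' in elem.tag:
        elem.tag = elem.tag.split('}', 1)[1]
    # Promote each attribute to a child element, then drop the attributes
    for name, value in list(elem.attrib.items()):
        child = ET.SubElement(elem, name)
        child.text = value
    elem.attrib.clear()
    # Recurse into children (including the ones just promoted)
    for child in elem:
        flatten(child)
    return elem

root = ET.fromstring('<r xmlns="urn:x"><row id="1"><name>A</name></row></r>')
flatten(root)
print(ET.tostring(root).decode())
# -> <r><row><name>A</name><id>1</id></row></r>
```

The resulting element-only XML conforms to the restriction above and can be used as sample data.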
Launch the layout editor in one of the following ways:
Section 3.2.1, "When Creating a New Report"
Section 3.2.2, "When Editing a Report"
Section 3.2.3, "When Viewing a Report"
To launch the Layout Editor when creating a new report:
Select the data model for the new report.
The Report Editor displays the Add Layout page.
From the Create Layout region, click a predefined template to launch the Layout Editor.
To launch the Layout Editor when editing a report:
In the Report Editor:
From the Thumbnail view, click Add New Layout.
or
From the List view, click the Create button on the layouts table toolbar.
From the Create Layout region, click a predefined template to use to launch the Layout Editor.
To launch the Layout Editor when viewing a report:
Click Actions and then click Edit Layout.
The layout must have been created in the layout editor.
When you create a new layout, you are given the option of selecting a predefined layout to help you get started. Figure 3-2 shows the predefined layouts offered by the Basic and Shared Templates.
Figure 3-2 Predefined Layouts
The Basic and Shared Templates offer common layout structures with specific components already added. Choosing one of the predefined layouts is optional, but can facilitate layout design. If your enterprise utilizes a common design that is not available here, you can add it as a shared template. To design the layout: Click an existing boilerplate (or blank) to launch the layout editor. Insert the components to the layout. When finished, click Save and give the boilerplate a name. This layout is now displayed to all users in the Shared Templates region.
To upload a layout: Click Upload to upload a predefined BI Publisher Template (.xpt file).
Save the report.
Any BI Publisher Templates (.xpt) added to this report are displayed to all users as a Shared Template.
To add predefined layouts that are available to your account user only:
Navigate to My Folders.
Create a new report called "Boilerplates". This report does not have a data model.
Click Add New Layout.
Design or upload the layout.
To design the layout: Click an existing boilerplate (or blank) to launch the layout editor. Insert the components to the layout. When finished, click Save and give the boilerplate a name.
To upload a layout: Click Upload to upload a predefined BI Publisher Template (.xpt file).
These layouts are presented in the My Templates region when you create a new layout.
Figure 3-3 shows the Layout Editor.
Figure 3-3 The Layout Editor
The Layout Editor interface comprises the following:
The top of the Layout Editor contains two toolbars:
The Static toolbar is always available and contains common commands such as save and preview. See Section 3.3.4, "About the Static Toolbar."
The Tabbed toolbar includes the Insert tab, the Page Layout tab, and a dynamic tab that shows the most commonly used actions and commands for the selected layout component. You can collapse this toolbar to make more room to view the design area.
The Data Source pane displays the structure of the data model and the data elements that are available to insert into the layout.
To insert a data element, select and drag it from the Data Source pane to the component in the layout.
The data type for each field is represented by an appropriate icon: number, date, or text.
Figure 3-4 shows the data source pane. The icon beside each element indicates the data type.
Figure 3-4 The Data Source Pane

The Components pane contains the components that you can insert into the layout.
Figure 3-5 shows the Components pane.
Figure 3-5 The Components Pane

The Properties pane displays the properties of the selected layout component. To change a property, enter the value and then move the cursor out of the field. Collapse or expand a property group by clicking the plus or minus signs beside the group name.
The properties available for each component are discussed in detail in the corresponding section for that component in this chapter. If a property field is blank, then the default is used.
Figure 3-6 shows a sample Properties pane for a table column header.
Figure 3-6 Sample Properties Pane for a Table Column Header
The Static toolbar extends on either side of the tabbed toolbar and is shown in Figure 3-7.
Figure 3-7 The Static Toolbar

The Insert tab provides the components and page elements that you can insert into the layout. See Section 3.5, "Inserting Layout Components."
The Page Layout tab provides common page-level tools and commands. See Section 3.4, "Page Layout Tab."
Figure 3-8 shows the Select tool.
Figure 3-8 The Select Tool
The Delete tool provides a similar function to the Select tool to enable you to precisely select the component to delete.
Use the Insert tab to insert report components and page elements. Figure 3-9 shows the Insert tab.
Figure 3-9 The Insert Tab
The Components group displays the report components that you can insert into the layout. To insert a component, select and drag the item to the desired location in the design area. For more information about each component, see its corresponding section in this chapter.
The Page Elements group contains page-level elements for the.
Figure 3-10 shows the Page Layout tab.
Figure 3-10 The Page Layout Tab
The Page Layout tab contains commands to set up the layout.
Table 3-2 describes header and footer options.
Figure 3-11 shows the Properties for a report header.
Figure 3-11 The Properties for a Report Header

Use event configuration to make components of the layout respond to events triggered by a user when viewing the report in interactive mode.
The two types of events are:
Filter - If you click an element in a list, chart, or pivot table, that element is used to dynamically filter other components defined as targets in the report. The component being clicked does not change.
Show Selection Only - If you click an element of a list, chart, or pivot table, the chart or pivot table (being clicked) shows the results for the selected element only. This action does not affect other components of the report.
Figure 3-12 shows an example of filter event configuration. The layout contains two charts and a table. The first chart shows salary totals by department in a pie chart. The second chart shows salary totals by manager in a bar chart. The table displays a list of employees and their salaries.
Figure 3-12 Example of Filter Event Configuration
In this report, if a user clicks on a value in the Salary by Department chart, you want the Salary by Manager chart and the Employees table to automatically filter to show only the managers and employees in the selected department.
Figure 3-13 shows this automatic filtering applied to the report.
Figure 3-13 Example of Automatic Filtering
To configure automatic filtering:
On the Page Layout tab, click Event Configuration to display the Configure Events dialog.
Figure 3-14 shows the Configure Events dialog.
Figure 3-14 The Configure Events Dialog
The Show Selection Only option is not enabled for Chart 1. That means that Chart 1 continues to display all values when one of its elements is selected.
The Show Selection Only event displays only the value of the selected element within the chart or pivot table (being acted on).
In the example in Figure 3-15, Chart 2 is configured with Show Selection Only enabled and Filter enabled with Table 3 as the Target.
Figure 3-15 Example of Show Selection Only
This configuration results in the output shown in Figure 3-16. When the user clicks on Chart 2, only the selected value is shown in Chart 2. Because the Filter event is enabled for Table 3, the selection is applied as a filter to Table 3.
Figure 3-16 Example of Output from the Show Selection Only Configuration
To set the page margins for the report:
Click anywhere in the design area outside of an inserted component.
Click the Properties pane in the lower left of the Layout Editor. Figure 3-17 shows the Properties for the page.
Figure 3-17 The Properties for the Page
Click the value shown for Margin to launch the Margin dialog.
Figure 3-18 shows the Margin dialog.
Figure 3-18 The Margin Dialog
Select the desired size for the margin. Enter the value for the Top, Left, Right, and Bottom margins.
To automatically set the same value for all sides, select the box: Use same value for all sides. This action disables the Left, Right, and Bottom fields and applies the value entered for Top to all sides.

Charts in an interactive report each require a connection to the server. Limiting the maximum number of connections for a layout can help avoid degraded performance on the server for large reports.
To set the maximum connections for this layout:
Click anywhere in the design area outside of an inserted component.
Click the Properties pane in the lower left of the Layout Editor. Figure 3-19 shows the Properties for the page.
Figure 3-19 The Properties for the Page
Click the value shown for Max. Connections and select the desired value from the list, as shown in Figure 3-20.
Figure 3-20 Example of Max. Connections
The layout editor supports components that are typically used in reports and other business documents. The following components are described in these sections:
Figure 3-21 shows the Create a Layout Grid dialog.
Figure 3-21 Create a Layout Grid Dialog
In the dialog, enter the number of rows and columns for the grid and click OK to insert the grid to the design area, as shown in Figure 3-22.
Figure 3-22 Example of a Grid Inserted in the Design Area

By default, the grid borders are not displayed in the report output. If you want to display the gridlines in the finished report, then select the grid cell and click the Set Border command button to launch the Border dialog.
To add a background color to a cell:
Click the Background Color command button to launch the Color Picker.
When the layout grid is selected, the Properties pane displays the Interactive: Expand/Collapse property, which enables users to expand and collapse the grid region in interactive output. Figure 3-23 shows this option on the Properties pane.
Figure 3-23 The Interactive: Expand/Collapse Property
Figure 3-24 demonstrates the expand and collapse behavior when the report is viewed in interactive mode. The top of the figure shows the collapse icon in the upper right area of the report. Click the icon to collapse the grid. The bottom of the figure shows the report with the region collapsed.
Figure 3-24 Example of the Expand and Collapse Behavior

A repeating section repeats the components within the section based on the occurrences of the element selected to define the section.
Figure 3-25 shows a layout that has a repeating section defined for the element Department. Within the repeating section are a chart that shows salaries by manager and a table that shows all employee salaries. So for each occurrence of department in the dataset, the chart and table are repeated.
Figure 3-25 An Example of a Layout that has a Repeating Section Defined for the Element Department
By default, for paginated output types, the page breaks automatically according to the amount of content that fits on a page. It is frequently desirable to have the report break after each occurrence of the repeated content.
Using the preceding example, it is desirable for the PDF output of this report to break after each department.
To create a break in the report after each occurrence of the repeating section:
Select the repeating section component.
Open the Properties pane.
Set the Page Break property to Page.
Figure 3-26 shows the Properties for a repeating section.
Figure 3-26 The Properties for a Repeating Section
In interactive mode, the values for the repeat by element are displayed as a list of values. This enables the report consumer to dynamically select and view the results.
Figure 3-27 shows the repeat by element Department displayed in a list of values.
Figure 3-27 The Repeat by Element Department Displayed in a List of Values
By contrast, Figure 3-28 shows the same layout displayed in PDF. In this example the page break option is set so that each new department begins the repeating section on a new page.
Figure 3-28 Repeat by Element Department Layout Displayed in PDF

For interactive output, you can also include an All option in the menu of values so that the report consumer can view the results for all values of the repeat by element at once. To add the All option, select the Show All setting in the Properties pane.
Figure 3-29 shows the Show All setting in the Properties pane.
Figure 3-29 The Show All Setting in the Properties Pane
When you view the report, the option All is added to the menu of values, as shown in Figure 3-30.
Figure 3-30 The All Option
This section contains the following topics about working with tables:
Section 3.8.1, "Inserting a Data Table"
Section 3.8.2, "Setting Alternating Row Colors"
Section 3.8.3, "About the Table Tab"
Section 3.8.4, "About the Table Column Header Tab"
Section 3.8.5, "About the Column Tab"
Section 3.8.6, "About the Total Cell Tab"
Section 3.8.7, "Inserting Dynamic Hyperlinks"
To insert a data table:
From the Insert tab, select and drag the Data Table component to the design area.
Figure 3-31 shows an inserted, empty data table. Notice that the Table tab is now displayed.
Figure 3-31 Example of an Inserted, Empty Data Table
To add data columns to the table, select an element from the Data Source pane and drag it to the table in the layout.
Note:
You cannot include elements from multiple data models in report components unless the data models are linked. For more information, see "Creating Element Level Links" in Oracle Fusion Middleware Data Modeling Guide for Oracle Business Intelligence Publisher.
Figure 3-32 shows the columns being added to the table. Notice that when you drop a column on the table the sample data is immediately displayed.
Figure 3-32 Example of Columns Added to a Table
Continue to drag the elements from the Data Source pane to form the columns of the table. If you must reposition a column that you have already added, then select it and drag it to the correct position.
Figure 3-33 shows a completed data table.
Figure 3-33 Example of a Completed Data Table

To customize an individual column of the table, select the column and use the options on the Column tab. See Section 3.8.5, "About the Column Tab."
Some data tables are easier to read when the rows display alternating colors, as shown in Figure 3-34.
Figure 3-34 Example of Rows Displayed in Alternating Colors
To set an alternating row color:
Select the table.
Open the Properties pane.
Click the value shown for Alternate Row Color to launch the color picker. Figure 3-35 shows the Alternate Row Color option.
Figure 3-35 The Alternate Row Color Option
Figure 3-36 shows the Table tab.
Figure 3-36 The Table Tab
The Rows to Display property controls the number of rows of data displayed as follows:
When designing the layout, this property sets the number of rows of sample data that are displayed in the design area. Displaying a large number of rows can impact the performance of the Layout Editor.
A filter refines the displayed items by a condition. This is a powerful feature that enables you to display only the desired elements in the table data.
To set a filter:
Click the Filter toolbar button. This launches the Filter dialog, as shown in Figure 3-37.
Figure 3-37 The Filter Dialog
Enter the fields to define a filter, as described in Table 3-4.
After you have added filters, use the Manage Filters feature to edit, delete, or change the order that the filters are applied.
To manage filters:
Click the Manage Filters toolbar button to launch the Manage Filters dialog, as shown in Figure 3-38.
Figure 3-38 The Manage Filters Dialog
Hover the cursor over the filter to display the actions toolbar. Use the toolbar buttons to edit the filter, move the filter up or down in the order of application, delete, or add another filter.
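Conceptually, a filter keeps only the rows that satisfy a condition. The following Python sketch illustrates the idea only; the data, field names, and operator labels are hypothetical and are not BI Publisher internals:

```python
# Hypothetical illustration of how a table filter narrows displayed rows.
rows = [
    {"EMPLOYEE": "Allen", "SALARY": 2400},
    {"EMPLOYEE": "King", "SALARY": 5000},
    {"EMPLOYEE": "Smith", "SALARY": 800},
]

def apply_filter(rows, field, op, value):
    """Keep only the rows whose field satisfies the comparison."""
    ops = {
        "greater than": lambda a, b: a > b,
        "less than": lambda a, b: a < b,
        "is equal to": lambda a, b: a == b,
    }
    return [r for r in rows if ops[op](r[field], value)]

# Display only employees earning more than 1000.
filtered = apply_filter(rows, "SALARY", "greater than", 1000)
print([r["EMPLOYEE"] for r in filtered])  # ['Allen', 'King']
```

Applying several filters in sequence, as the Manage Filters dialog allows, corresponds to chaining calls so that each filter narrows the previous result.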
A conditional format changes the formatting of an element in the table based on a condition. This feature is extremely useful for highlighting target ranges of values in the table. For example, you could create a set of conditional formats for the table that display rows in different colors depending on threshold values.
To apply a conditional format:
Click the Highlight button. This launches the Highlight dialog, as shown in Figure 3-39.
Figure 3-39 The Highlight Dialog
Enter the fields to define a condition and format to apply, as described in Table 3-5.
Figure 3-40 shows the table in the layout with the condition applied.
Figure 3-40 Example of Conditional Formatting
After you have added conditional formats, use the Manage Formats command to edit or delete a format.
To manage formats:
Click the Manage Formats button to launch the Manage Conditional Formats dialog, as shown in Figure 3-41.
Figure 3-41 The Manage Conditional Formats Dialog
Hover the cursor over an item to display the actions toolbar. Use the toolbar buttons to edit the format, move the format up or down in the order of application, delete, or add another format. The order of the conditions is important because only the first condition that is met is applied.
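Because only the first condition that is met is applied, the order of conditional formats matters. The following Python sketch, with hypothetical thresholds and colors, illustrates this first-match-wins evaluation:

```python
# Hypothetical ordered conditional formats: only the first matching
# condition is applied to a row, so later formats are ignored.
formats = [
    ("SALARY", lambda v: v >= 5000, "green"),
    ("SALARY", lambda v: v >= 2000, "yellow"),
    ("SALARY", lambda v: v < 2000, "red"),
]

def row_color(row):
    for field, condition, color in formats:
        if condition(row[field]):
            return color  # first match wins
    return None  # no condition met: default formatting

print(row_color({"SALARY": 5000}))  # green
print(row_color({"SALARY": 2400}))  # yellow
print(row_color({"SALARY": 800}))   # red
```

Note that a row with SALARY 5000 also satisfies the second condition, but only the first matching format (green) is applied, which is why reordering formats in the Manage Conditional Formats dialog changes the result.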
By default, the layout editor inserts a total row in a table that sums numeric columns. To remove the total row, click the Show menu and select the table view without the highlighted total row. Figure 3-42 shows the Show menu options.
Figure 3-42 The Show Menu Options
The total row can be further customized using the Total Cell tab and the Properties pane. For more information see Section 3.8.6, "About the Total Cell Tab."
Figure 3-43 shows the Table Column Header tab.
Figure 3-43 The Table Column Header Tab

Grouping displays each occurrence of a repeating value in a column only once for the set of rows to which it applies, making the table easier to read.
The Grouping option enables you to choose between "Group Left" or "Group Above". Group left maintains the "group by" element within the table. Figure 3-44 shows a table that has been grouped by Manager using Group Left.
Figure 3-44 Example of a Table Grouped by Manager Using Group Left
Group above inserts a Repeating Section component, and extracts the grouping element from the table. The grouping element is instead displayed above the table and a separate table is displayed for each occurrence of the grouping element. Figure 3-45 shows a table that has been grouped by Manager using Group Above.
Figure 3-45 Example of a Table Grouped by Manager Using Group Above
In Figure 3-46, the table data has been grouped by Manager. The Manager value is maintained within the table and displayed only once for its group of rows.
Figure 3-46 Group Left Example
To further enhance a table, you can add a subtotal row to display for each grouped occurrence of the element. Figure 3-47 shows the same table with the Subtotals box checked. Notice that for each manager a subtotal row has been inserted.
Figure 3-47 Example of Subtotals
In Figure 3-48, the table data has been grouped by Manager. Notice that in the design pane, the Data Table component has been replaced with a Repeating Element component that contains the data table. The Manager element is inserted above the table with a label.
Figure 3-48 Group Above Example

In interactive output, the grouping element is displayed as a list of values above the table. Select the value that you want to view from the list, as shown in Figure 3-49.
Figure 3-49 Grouping Element Displayed as a Filter
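As an illustration of the group-left behavior with subtotals described above, the following Python sketch, using hypothetical manager and salary data, shows the grouping value displayed once per group with a subtotal row appended after each group:

```python
# Hypothetical sketch of "group left" with subtotals: rows are grouped
# by manager, the manager name is shown only on the first row of each
# group, and a subtotal row is emitted after each group.
from itertools import groupby

# Rows must already be ordered by the grouping element.
rows = [
    ("King", "Allen", 2400),
    ("King", "Ward", 1250),
    ("Jones", "Smith", 800),
]

def group_left(rows):
    output = []
    for manager, group in groupby(rows, key=lambda r: r[0]):
        first = True
        subtotal = 0
        for _, employee, salary in group:
            # Display the manager name only once per group.
            output.append((manager if first else "", employee, salary))
            subtotal += salary
            first = False
        output.append(("", "Subtotal", subtotal))
    return output

for line in group_left(rows):
    print(line)
```

The group-above variant would instead emit the manager name as a heading before each group's table rather than as a column within it.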
The Column tab is enabled when you select a specific column in a table. Figure 3-50 shows the Column tab.
Figure 3-50 The Column Tab
The Column tab enables you to apply data formatting, sorting, and formulas to the selected column.

The data formatting options available depend on the data type of the column. If an option is not listed, you can enter a custom Oracle or Microsoft formatting mask in the Properties pane. You can also set a formatting mask dynamically by including the mask as an element in your data. These features are described in the following sections:
Section 3.8.5.2, "Applying Formatting to Numeric Data Columns"
Section 3.8.5.3, "Applying Formatting to Date Type Data Columns"
Section 3.8.5.4, "Custom and Dynamic Formatting Masks"
If the column contains numeric data, the following formatting options are available:
Format - Select one of the common number formats from the list. The format is applied immediately to the table column. The formats are categorized by Number, Percent, and Currency, as shown in Figure 3-51.
Figure 3-51 Number, Percent, and Currency Formats
To apply a format not available from this list, see Section 3.8.8, "Applying Custom Data Formatting."
Decimal position - Click the Move Left or Move Right to increase or decrease the decimal positions displayed.
Show/Hide Grouping Separator - Click this button to show or hide the grouping separator (for example, with the separator hidden, 1,234.00 displays as 1234.00).

If the column contains date data, you can select one of the common date and time formats from the list, as shown in Figure 3-52.
Figure 3-52 Date and Time Formats
You can apply any Microsoft or Oracle (recommended) format mask to a report data field. You can manually enter the mask in the Formatting Mask property on the Properties pane.
To enter a custom data formatting mask:
Select the data column or field in the layout.
On the Properties pane, under the Data Formatting group select the Formatting Style. Supported styles are Oracle and Microsoft.
In the Formatting Mask field, manually enter the format mask to apply.
For more information on Microsoft and Oracle format masks, see Section 4.15, "Formatting Numbers, Dates, and Currencies."
Formatting masks can also be applied dynamically by either including the mask in a data element of your report data, or as a parameter to the report. The mask is passed to the layout editor based on the value of the data element.
To enter a dynamic formatting mask, in the Formatting Mask field, choose the data element that defines the formatting mask. Figure 3-53 shows an example of setting a dynamic number format mask. For this example, a parameter called NumberFormat prompts the user to define a format mask when the report is submitted. The value is passed to the Formatting Mask property and applied to the data field in the layout.
Figure 3-53 Dynamic Format Mask
If you use a parameter to pass the format mask, ensure that you select the Include Parameter Tags option on the data model Properties page.
The options available from the Formula region of the column tab depend on the data type of the column.
For more information about applying formulas, see Section 3.15, "Setting Predefined or Custom Formulas."

You can sort a table by multiple columns and assign a priority to each sort column. For example, in the employee salary table shown in Figure 3-54, assume you want to sort ascending first by Title then sort descending by Annual Salary:
Figure 3-54 Employee Salary Table
To apply the sort order to this table:
Select the Title column.
On the Column tab, under Sort, click the Ascending Order button.
From the Priority list, select 1.
Figure 3-55 shows the Priority list.
Figure 3-55 Priority List
Next select the Annual Salary column.
On the Column tab, under Sort, click the Descending Order button.
From the Priority list, select 2.
The sorted table is shown in Figure 3-56.
Figure 3-56 Example of a Sorted Table
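The two-level sort above can be sketched in Python with hypothetical data: because Python's sort is stable, applying the lower-priority key first and the higher-priority key second produces the same result as sorting by Title (priority 1) and then Annual Salary (priority 2):

```python
# Hypothetical employee data for a two-level sort:
# priority 1: Title ascending; priority 2: Annual Salary descending.
employees = [
    {"TITLE": "Clerk", "SALARY": 800},
    {"TITLE": "Analyst", "SALARY": 3000},
    {"TITLE": "Clerk", "SALARY": 1100},
    {"TITLE": "Analyst", "SALARY": 6000},
]

# Sort by the lower-priority key first; the stable sort preserves this
# ordering within each group of equal higher-priority keys.
employees.sort(key=lambda e: e["SALARY"], reverse=True)  # priority 2
employees.sort(key=lambda e: e["TITLE"])                 # priority 1

for e in employees:
    print(e["TITLE"], e["SALARY"])
```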
The Layout Editor automatically inserts a grand total row when you insert a data table to the layout. As shown in the section on grouping, you can also insert subtotal rows within the table based on a grouping element. To edit the attributes of the cells in a grand total or subtotal row, select the cell and use the options in the Total Cell tab shown in Figure 3-57.
Figure 3-57 The Total Cell Tab

For information about the data formatting options for the total cell, see Section 3.8.5.1, "About the Data Formatting Options for Columns." For information about applying a formula to the total cell, see Section 3.15, "Setting Predefined or Custom Formulas."
The layout editor supports dynamic hyperlinks in tables.
To insert a dynamic hyperlink:
Select the table column.
Click Properties. The column properties include an option for URL, as shown in Figure 3-58.
In the URL field, enter the static portion of the URL and embed the absolute path to the element that provides the dynamic portion of the URL within curly braces {}. For example:{/DATA/GROUP1/ELEMENT_NAME}
where the first part of the URL is the static portion and {/DATA/GROUP1/ELEMENT_NAME} is the absolute path to the element in the data that supplies the dynamic portion.
For example, in the employee salary report, suppose each employee name should render as a hyperlink to the employee's person record, where the URL to each record consists of a static portion followed by the employee's ID.
The dynamic portion comes from the data element EMPLOYEE_ID. For this example, append the full path to the EMPLOYEE_ID element, within curly braces, to the static portion and enter this in the URL field as follows: {/ROWSET/ROW/EMPLOYEE_ID}
If you are unsure of the correct element names in the absolute path, hover your mouse over the data element on the Data Source pane to display the path in the hover text, as shown in Figure 3-59.
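The per-row substitution that produces each hyperlink can be sketched as follows. The example.com URL, the helper function, and the path-resolution logic are all hypothetical illustrations of the curly-brace convention, not the product's actual implementation:

```python
# Hypothetical sketch: resolve a dynamic hyperlink template per row by
# substituting the value of the element named in the {...} placeholder.
import re
import xml.etree.ElementTree as ET

data = ET.fromstring("""
<ROWSET>
  <ROW><EMPLOYEE_ID>101</EMPLOYEE_ID><NAME>Kochhar</NAME></ROW>
</ROWSET>
""")

def resolve_url(template, row):
    # Replace each {/absolute/path} with the matching element's text.
    def lookup(match):
        # Use the last element name of the absolute path to find the value.
        leaf = match.group(1).rstrip("/").split("/")[-1]
        return row.findtext(leaf, default="")
    return re.sub(r"\{([^}]+)\}", lookup, template)

# The static portion here is an invented placeholder URL.
template = "http://example.com/person?id={/ROWSET/ROW/EMPLOYEE_ID}"
for row in data.findall("ROW"):
    print(resolve_url(template, row))  # http://example.com/person?id=101
```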
BI Publisher supports the use of the Oracle and Microsoft format masks for custom data formatting. The results of the output depends on the selected locale.
For more information on Microsoft format masks, see Section 4.15.4, "Using the Microsoft Number Format Mask."
For more information on Oracle format masks, see Section 4.15.6, "Using the Oracle Format Mask."
To apply custom data formatting:
Select a data field or column.
Click Properties. The Data Formatting options are displayed as shown in Figure 3-60.
Figure 3-60 Data Formatting Properties
From the Formatting Style drop-down list, select the Oracle or Microsoft formatting style. The Oracle formatting style is recommended.
In the Formatting Mask field, enter a formatting mask. For example, for a column that contains product totals, you can use the Oracle formatting style, and the 9G999D99 formatting mask to display total values with two zeros to the right of the decimal place.
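As an illustration of what the 9G999D99 mask produces, where G is the locale grouping separator and D is the decimal separator, the following Python sketch mimics the output shape. Real mask handling in BI Publisher is locale-aware and far more general; this function is a hypothetical helper, not a product API:

```python
# Hypothetical sketch of the Oracle mask 9G999D99: thousands grouping
# plus two decimal places, with locale-specific separators.
def apply_oracle_mask_9G999D99(value, group=",", decimal="."):
    formatted = f"{value:,.2f}"  # e.g. 1234.5 -> '1,234.50'
    # Swap in the locale's separators (a no-op for en-US defaults).
    return (formatted.replace(",", "\x00")
                     .replace(".", decimal)
                     .replace("\x00", group))

print(apply_oracle_mask_9G999D99(1234.5))            # 1,234.50
print(apply_oracle_mask_9G999D99(1234.5, ".", ","))  # 1.234,50
```

The second call shows why the rendered output depends on the selected locale: the same mask yields different separators, for example in a German-style locale.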
The layout editor supports a variety of chart types and styles to graphically present data in the layout. Figure 3-61 shows side-by-side vertical bar and pie charts in the layout editor.
Figure 3-61 Side-by-Side Vertical Bar and Pie Charts in the Layout Editor

The following Chart Label properties apply to Scatter and Bubble chart types only: Title Font, Title Horizontal Align, Title Text, and Title Visible.
Note:
Some font effects such as underline, italic, and bold might not render in PDF output.
To insert a chart:
From the Insert menu, select and drag the Chart component to the layout.
By default an empty vertical bar chart is inserted and the Chart dynamic tab is displayed, as shown in Figure 3-62.
Figure 3-62 Chart Dynamic Tab
To change the chart type, click the Chart Type list to select a different type. In Figure 3-63 the chart type is changed to Pie.
Figure 3-63 The Pie Chart Type
Select and drag the data fields from the Data Source pane to the appropriate areas in the chart. The chart immediately updates with the preview data, as shown in Figure 3-64.
Figure 3-64 Dragging and Dropping Data Fields to a Chart
To resize the chart, drag and drop the resize handler on the lower right corner of the chart, as shown in Figure 3-65.
To preserve the aspect ratio when resizing a chart, press and hold the Shift key before starting to drag the corner.
Figure 3-65 Chart Resizing

See Section 3.8.3.2, "About Filters" for information on how to apply and manage filters.
By default, the chart displays a sum of the values of the chart measure. You can change the formula applied to a chart measure field by selecting an option from the Chart Measure Field tab.
To change the formula:
Select the measure field in the chart. This displays the Chart Measure Field tab, as shown in Figure 3-66.
Figure 3-66 Chart Measure Field Tab
Select from the following options available from the Formula list:
Count
Sum
Running Total
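The three formulas above can be sketched as follows, using hypothetical salary values; Running Total accumulates the sum up to each point in the series:

```python
# Hypothetical sketch of the chart measure formulas Count, Sum, and
# Running Total applied to a series of salary values.
from itertools import accumulate

salaries = [2400, 1250, 800]

count = len(salaries)                     # Count: number of values
total = sum(salaries)                     # Sum: total of all values
running_total = list(accumulate(salaries))  # Running Total: cumulative sum

print(count)          # 3
print(total)          # 4450
print(running_total)  # [2400, 3650, 4450]
```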
To sort a field in the chart:
Select the field to display the Chart Field tab.
On the Chart Field tab select Sort Ascending or Sort Descending.
To sort by multiple fields, apply a Priority to each sort field to apply the sort in the desired order.
The following features enable you to apply additional formatting to your charts:
Section 3.9.4.1, "Time Series Axis Formatting"
Section 3.9.4.2, "Hide Axis Option"
Section 3.9.4.3, "Independent Axis Formatting"
Section 3.9.4.4, "Axis Scaling"
Section 3.9.4.5, "Pie Slice Formatting"
If you do not select a value for these format options above, the BI Publisher default system settings are applied.
When the x-axis of your line chart is a date field, BI Publisher applies a time series format based on the range of the data as shown in Figure 3-67. You can customize the display of the time series in your chart, or turn it off.
Figure 3-67 Time Series Date Formatting Options
To select time series date formatting options for a chart:
Expand the Time Series report properties category.
In Day Format field, select one of the following format options for days:
None to hide the day label.
Day of Week to display only the names of each day of the week.
Day Single Letter to display only the first letter of each day of the week.
Day of Week Number to display only the number assigned to each day of the week. For example, if Sunday is the first day of the week, it can be displayed as 1, Monday displayed as 2, etc.
Day of Month to display all days in a month by the actual date. For example, the first day of the month would be displayed as 1.
In Month Format field, select one of the following format options for months:
None to hide all month labels.
Month Number to display only a number for each month in the year. For example, if the first month of the year is January, it is displayed in the chart as 1.
Month Single Letter to display only the first letter of each month in the year.
Month Short to display only the short names for each month. For example, January can be displayed as Jan.
Month Long to display only the full name of each month.
In the Time Format field, select one of the following format options for time increments:
None to hide all time labels.
Hour to display time in hours.
Hour24 to display time in 24 hour increments.
Hour24 Minute to display minutes in 24 hour increments.
Hour Minute to display time in hours and minutes.
Second to display time in seconds.
In Year Format field, select one of the following format options for years:
None to hide all year labels.
Year Short to display only the short names for each year.
Year Long to display only the full name of each year.
You can hide axis labels in reports for certain situations such as when you are working with small charts or visualizing data without values. This option is especially useful for creating reports that evaluate trends.
To hide an axis:
On the Properties pane, expand the Chart Label, Chart Value (1) or Chart Value (2) report properties category.
In Axis Visible, select False.
You can format decimal digits and numbers for each Y axis in a multiple Y-axis report.
To format decimal digits and number types for an axis:
On the Properties pane, expand the Chart Value (1) or Chart Value (2) report category.
To format axis decimals, in the Axis Decimals field, enter the number of decimals to display for a data element per axis.
To format data decimals for an axis where the Data Visible property is set to True, enter the number of decimals to display on the axis.
To apply number formatting to an axis, in the Format field, select one of the following options: General, Percent, or Currency.
If you select Currency, in the Currency Symbol field, manually enter the currency symbol.
You can set chart axis scaling as logarithmic or linear in reports.
To format axis scaling:
On the Properties pane, expand the Chart Value (1) or Chart Value (2) report properties category.
In the Axis Scaling field, select one of the following options: Logarithmic or Linear.
You can format pie slice charts to display percentages, total actual values, percentages, and labels.
To format pie slices:
On the Properties pane, expand the Plot Area report property category.
In the Pie Slice Format field, select one of the following options: Percent, Value, Label, or Label and Percent.
A gauge chart is a useful way to illustrate progress or goals. For example, Figure 3-68 shows a report with three gauges to indicate the status of regional sales goals:
Figure 3-68 Gauges Showing the Status of Regional Sales Goals
To insert a gauge chart in the layout:
From the Insert menu, select and drag the Gauge component to the layout. This inserts an empty gauge chart, as shown in Figure 3-69.
Figure 3-69 An Empty Gauge Chart
Select and drag the data fields from the Data Source pane to the Label, Value, and Series areas of the chart. The chart immediately updates with the preview data.
Figure 3-70 shows REGION being dragged to the Label area and DOLLARS being dragged to the Value area:
Figure 3-70 Dragging and Dropping REGION and DOLLARS to a Chart
Note the following:
A separate gauge is created for each occurrence of the Label (that is, each REGION). One set of properties applies to each occurrence.
By default, the Value field is a sum. You can change the expression applied to the value field. See Section 3.9.2, "Changing the Formula Applied to a Chart Measure Field."
You can apply a sort to the other gauge chart fields.
Use the Properties Pane to set detailed options for a gauge chart.
See Section 3.8.3.2, "About Filters" for information on how to apply and manage filters.
The pivot table provides views of multidimensional data in tabular form. It supports multiple measures and dimensions and subtotals at all levels. Figure 3-71 shows a pivot table.
Figure 3-71 A Pivot Table
To insert a pivot table:
From the Insert tab, select and drag the Pivot Table component to the layout. Figure 3-72 shows the empty pivot table structure.
Figure 3-72 The Empty Pivot Table Structure
Drag and drop data fields from the Data Source pane to the row, column, and data positions.
Drag multiple fields to the pivot table and place them precisely to structure the pivot table, as shown in Figure 3-73.
Figure 3-73 Dragging and Dropping Data Fields to a Pivot Table
By default the pivot table is inserted with no data formatting applied. To apply a format to the data, click the first column of data to enable the Pivot Table Data toolbar. On the Data Formatting group, select the appropriate format as shown in Figure 3-74.
Figure 3-74 Selecting a Format
Optionally resize the pivot table by clicking and dragging the handler in the lower right corner of the pivot table, as shown in Figure 3-75.
Figure 3-75 Resizing a Pivot Table
After you insert a pivot table, customize the appearance and layout using the following dynamic tabs:
Pivot Table tab
Pivot Table Header tab
Pivot Table Data tab
Figure 3-76 shows the Pivot Table tab.
Figure 3-76 The Pivot Table Tab
See Section 3.8.3.2, Section 3.9, "About Charts."
Figure 3-77 shows the pivot table created in the preceding step converted to a vertical bar chart.
Figure 3-77 A Pivot Table Converted to a Vertical Bar Chart
Use the Switch Rows and Columns command to see a different view of the same data. Figure 3-78 shows the pivot table created in the previous step with rows and columns switched.
Figure 3-78 A Pivot Table with Rows and Columns Switched
The Pivot Table Header tab is shown in Figure 3-79.
Figure 3-79 The Pivot Table Header Tab
Select the column or row header of the pivot table and use the Pivot Table Header tab to perform the following:
Customize the fonts, colors, alignment and other display features of the header
Apply a sort order (for more information see Section 3.8.5.6, "About the Sort Option")
Apply data formatting (if the data type is number or date)
The Pivot Table Data tab is shown in Figure 3-80.
Figure 3-80 The Pivot Table Data Tab
Select the data area of the pivot table and use the Pivot Table Data tab to perform the following actions. Section 3.8.3.5, "About Conditional Formats")
Apply data formatting (see Section 3.8.5.1, "About the Data Formatting Options for Columns")
Apply a formula (see Section 3.8.6.2, places the data field beneath the text field as shown in Figure 3-81.
Figure 3-81 A Data Field Beneath a Text Item
To display the data field inline with the text item:
Set the Display property to Inline in the Properties pane, as shown in Figure 3-82.
Figure 3-82 Setting the Display Property to Inline
This setting enables the positioning of text items and data fields into a single line as shown in Figure 3-83.
Figure 3-83 Text Items and Data Fields Positioned in a Single Line
The Text tab is shown in Figure 3-84..
Figure 3-85 shows the Page # of N construction.
Figure 3-85 Page # of N Construction
To create the Page # of N construction: a, set the Text Item property to "Inline".
Figure 3-86 shows the insertion of the date and time icons.
Figure 3-86 Inserting the Date and Time Icons
When this report is viewed, the date and time are displayed according to the server time zone if viewed online, or for scheduled reports, the time zone selected for the schedule job. Figure 3-87 shows the date and time displayed in a report.
Figure 3-87 The Date and Time Displayed in a Report
To insert a hyperlink in a the data. The value of the element the data that contains a URL to an image.
Alternative Text: If the data includes a field that contains alternative text for the image, then select that field to display alternative text when the report is viewed as HTML.
Figure 3-88 shows the Insert an Image dialog set up to retrieve an image URL dynamically from the "Image" data element. The value of the "Name" element is used as alternative text.
Figure 3-88 Insert an Image Dialog. Figure 3-89 shows a report that displays multiple charts based on sales data. The list component displays each country for which there is sales data. The list enables the report consumer to quickly see results for each country in the list by clicking the entry in the list.
Figure 3-89 Using a List Component to Update Results
To insert a list:
From the Insert tab, select and drag the List component to the design area.
Figure 3-90 shows an inserted, empty list.
Figure 3-90 An Inserted, Empty List
To create the list, select an element from the Data Source pane and drag it to the empty list in the layout.
Figure 3-91 shows the list component after dragging the element Country Name to it.
Figure 3-91 A List Component Showing Country Names
Customize the appearance of the list. See Section 3.14.2, "Customizing a List."
Configure linked components using the Configure Events command. By default, all other tables and charts in the layout are configured to filter their results based on the user selections in the list component. To change this default behavior, see Section 3.4.5, "Interactivity: Event Configuration."
Use the List tab to:
Edit the font size, style, and color
Define borders for the list
Set the background color
Edit the font color and background color for the display of selected items
Set the orientation of the list
Specify the sort order
Figure 3-92 shows the List tab.
In Figure 3-93, the list on the left shows the default format of the list. The list on the right shows the Selected Font default format:
Figure 3-93 Default Formats Figure 3-94.
Figure 3-94 The Hide Excluded Property
Figure 3-95 shows the difference in the display depending on the setting of the property.
Figure 3-95 Display Differences of the Hide Excluded Property Settings
Figure 3-96 shows the Define Custom Formula icon.
Figure 3-96 Define Custom Formula Icon
The Formula group of commands is available from the following tabs:
Column tab
Total Cell tab
Chart Measure Field tab
Pivot Table Data tab
Note that not all options are applicable to each component type.
The menu provides the predefined formulas that are described in Table 3-6.
For non-numeric data, only the following formula options are supported:
Blank Text
Count
Count Distinct
Click Define Custom Formula to define your own formula for a component. The Function dialog enables you to define Basic Math, Context, and Statistical functions in the layout.
Figure 3-97 shows the Function dialog.
Figure 3-97 The Function Dialog
When you click one of the basic math functions, you are prompted to define the appropriate parameters for the function. You can enter a constant value, select a field from the data, or create a nested function to supply the value.
In Figure 3-98, clicking the Multiplication function displays prompts to enter the multiplicand and the multiplier. The example shows that the multiplicand is the value of the Amount Sold field. The multiplier is the constant value.
Figure 3-98 Example of the Multiplication Function
When you click one of the statistical math functions you are prompted to define the appropriate parameter for the function. You can select a field from the data, or create a nested function to supply the values. In Figure 3-99, clicking the Average function displays prompts for you to specify the source of the values for which to calculate the average.
Figure 3-99 The Average Function
Example 1: Subtraction
Figure 3-100 shows data for Revenue and Cost for each Office:
Figure 3-100 A Table Showing Revenue and Cost Data for Each Office
Using a custom formula, you can add a column to this table to calculate Profit (Revenue - Cost).
Add another numeric data column to the table. For example, drag another instance of Revenue to the table, as shown in Figure 3-101.
Figure 3-101 A Table With Two Instances of Revenue
With the table column selected, click Define Custom Formula.
In the Function dialog select Subtraction from the list, as shown in Figure 3-102. Because the source data for the column is Revenue, by default the Minuend and the Subtrahend both show the Revenue element.
Figure 3-102 The Subtraction Function
Select Subtrahend, then in the Parameter region, select Field and choose the Cost element, as shown in Figure 3-103.
Figure 3-103 Subtraction Function with the Cost Element Selected
The dialog is updated to show that the formula is now Revenue minus Cost, as shown in Figure 3-104.
Figure 3-104 Updated Subtraction Function Showing a Formula of Revenue Minus Cost
Click OK to close the dialog.
The table column displays the custom formula. Edit the table column header title, and now the table has a Profit column, as shown in Figure 3-105.
Figure 3-105 A Table Showing the Custom Formula Column Titled Profit
Example 2: Nested Function
This example uses a nested function to create a column that shows Revenue less taxes.
Add another numeric data column to the table. For example, drag another instance of Revenue to the table, as shown in Figure 3-106.
Figure 3-106 A Table With Two Instances of Revenue
With the table column selected, click Define Custom Formula.
In the Function dialog select Subtraction from the list. Because the source data for the column is Revenue, by default the Minuend and the Subtrahend both show the Revenue element, as shown in Figure 3-107.
Figure 3-107 Subtraction Function with Minuend and Subtrahend Showing the Revenue Element
Select Subtrahend, then in the Parameter region, select Nested Function and click Edit, as shown in Figure 3-108.
Figure 3-108 Subtraction Function with the Nested Function Selected
A second Function dialog is displayed to enable you to define the nested function. In this case the nested function is Revenue times a constant value (tax rate of .23), as shown in Figure 3-109.
Figure 3-109 The Function Dialog Showing the Nested Function Revenue Times a Constant Value
Click OK to close the dialog. The primary Function dialog now shows the nested function as the source of the subtrahend, as shown in Figure 3-110.
Figure 3-110 The Function Dialog Showing a Nested Function as the Source of the Subtrahend
Click OK to close the Function dialog. The table column displays the custom formula. Edit the table column header label, and now the table displays the custom function, as shown in Figure 3-111.
Figure 3-111 A Table Showing the Custom Function Revenue less tax (23%)
To save the layout to the report definition:
Click the Save or Save As toolbar button
The Save Layout dialog displays the list of layouts defined for the report definition as shown in Figure 3-112:
Figure 3-112 The Save Layout Dialog
Enter a unique name for this layout.
Select a Locale.
Note:
When you have saved the layout, the Locale cannot be updated. | https://docs.oracle.com/cd/E28280_01/bi.1111/e22254/create_lay_tmpl.htm | CC-MAIN-2020-45 | refinedweb | 7,619 | 55.64 |
Under the hood, TensorFlow 2 follows a fundamentally different programming paradigm from TF1.x.
This guide describes the core differences between TF1.x and TF2 in terms of behaviors and APIs, and how they relate to your migration journey.
High-level summary of major changes
Fundamentally, TF1.x and TF2 use a different set of runtime behaviors around execution (eager in TF2), variables, control flow, tensor shapes, and tensor equality comparisons. To be TF2 compatible, your code must be compatible with the full set of TF2 behaviors. During migration, you can enable or disable most of these behaviors individually via the
tf.compat.v1.enable_* or
tf.compat.v1.disable_* APIs. The one exception is the removal of collections, which is a side effect of enabling/disabling eager execution.
At a high level, TensorFlow 2:
- Removes redundant APIs.
- Makes APIs more consistent - for example, Unified RNNs and Unified Optimizers.
- Prefers functions over sessions and integrates better with the Python runtime, with eager execution enabled by default along with tf.function, which provides automatic control dependencies for graphs and compilation.
- Deprecates global graph collections.
- Alters Variable concurrency semantics by using ResourceVariables over ReferenceVariables.
- Supports function-based and differentiable control flow (Control Flow v2).
- Simplifies the TensorShape API to hold ints instead of tf.compat.v1.Dimension objects.
- Updates tensor equality mechanics. In TF1.x the == operator on tensors and variables checks for object reference equality. In TF2 it checks for value equality. Additionally, tensors/variables are no longer hashable, but you can get hashable object references to them via var.ref() if you need to use them in sets or as dict keys.
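A short sketch of the equality and hashing mechanics described above (assumes TensorFlow 2 is installed; the values are illustrative):

```python
import tensorflow as tf

a = tf.constant([1, 2])
b = tf.constant([1, 2])

# TF2: `==` compares values elementwise and returns a boolean tensor,
# not a Python bool based on object identity.
print(a == b)  # tf.Tensor([ True  True], shape=(2,), dtype=bool)

# Tensors/variables are not hashable in TF2; use .ref() to get a
# hashable reference usable in sets or as dict keys.
v = tf.Variable(1.0)
lookup = {v.ref(): "learning_rate"}
print(lookup[v.ref()])  # learning_rate
```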
The sections below provide some more context on the differences between TF1.x and TF2. To learn more about the design process behind TF2, read the RFCs and the design docs.
API cleanup
Many APIs are either gone or moved in TF2. Some of the major changes include removing
tf.app,
tf.flags, and
tf.logging in favor of the now open-source absl-py, rehoming projects that lived in
tf.contrib, and cleaning up the main
tf.* namespace by moving lesser used functions into subpackages like
tf.math. Some APIs have been replaced with their TF2 equivalents -
tf.summary,
tf.keras.metrics, and
tf.keras.optimizers.
tf.compat.v1: Legacy and Compatibility API Endpoints
Symbols under the
tf.compat and
tf.compat.v1 namespaces are not considered TF2 APIs. These namespaces expose a mix of compatibility symbols, as well as legacy API endpoints from TF 1.x. These are intended to aid migration from TF1.x to TF2. However, as none of these
compat.v1 APIs are idiomatic TF2 APIs, do not use them for writing brand-new TF2 code.
Individual
tf.compat.v1 symbols may be TF2 compatible because they continue to work even with TF2 behaviors enabled (such as
tf.compat.v1.losses.mean_squared_error), while others are incompatible with TF2 (such as
tf.compat.v1.metrics.accuracy). Many
compat.v1 symbols (though not all) contain dedicated migration information in their documentation that explains their degree of compatibility with TF2 behaviors, as well as how to migrate them to TF2 APIs.
The TF2 upgrade script can map many
compat.v1 API symbols to equivalent TF2 APIs in the case where they are aliases or have the same arguments but with a different ordering. You can also use the upgrade script to automatically rename TF1.x APIs.
False friend APIs
There are a set of "false-friend" symbols found in the TF2
tf namespace (not under
compat.v1) that actually ignore TF2 behaviors under-the-hood, and/or are not fully compatible with the full set of TF2 behaviors. As such, these APIs are likely to misbehave with TF2 code, potentially in silent ways.
tf.estimator.*: Estimators create and use graphs and sessions under the hood. As such, these should not be considered TF2-compatible. If your code is running estimators, it is not using TF2 behaviors.
keras.Model.model_to_estimator(...): This creates an Estimator under the hood, which as mentioned above is not TF2-compatible.
tf.Graph().as_default(): This enters TF1.x graph behaviors and does not follow standard TF2-compatible tf.function behaviors. Code that enters graphs like this will generally run them via Sessions, and should not be considered TF2-compatible.
tf.feature_column.*: The feature column APIs generally rely on TF1-style tf.compat.v1.get_variable variable creation and assume that the created variables will be accessed via global collections. As TF2 does not support collections, APIs may not work correctly when running them with TF2 behaviors enabled.
Other API changes
TF2 features significant improvements to the device placement algorithms, which render the usage of tf.colocate_with unnecessary. If removing it causes a performance degradation, please file a bug.
Replace all usage of tf.compat.v1.ConfigProto with equivalent functions from tf.config.
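As one illustrative mapping (the exact fields you migrate will differ; thread counts here are arbitrary examples):

```python
import tensorflow as tf

# TF1.x-style session configuration, for comparison:
#   config = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=2)
#   sess = tf.compat.v1.Session(config=config)

# TF2 equivalents live under tf.config. Threading options must be set
# before TensorFlow initializes its runtime (i.e., before any op runs).
tf.config.threading.set_intra_op_parallelism_threads(2)
tf.config.threading.set_inter_op_parallelism_threads(2)

print(tf.config.threading.get_intra_op_parallelism_threads())  # 2
```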
Eager execution
TF1.x required you to manually stitch together an abstract syntax tree (the graph) by making
tf.* API calls and then manually compile the abstract syntax tree by passing a set of output tensors and input tensors to a
session.run call. TF2 executes eagerly (like Python normally does) and makes graphs and sessions feel like implementation details.
One notable byproduct of eager execution is that
tf.control_dependencies is no
longer required, as all lines of code execute in order (within a
tf.function,
code with side effects executes in the order written).
No more globals
TF1.x relied heavily on implicit global namespaces and collections. When you called
tf.Variable, it would be put into a collection in the default graph, and it would remain there, even if you lost track of the Python variable pointing to it. You could then recover that
tf.Variable, but only if you knew the name that it had been created with. This was difficult to do if you were not in control of the variable's creation. As a result, all sorts of mechanisms proliferated to
attempt to help you find your variables again, and for frameworks to find
user-created variables. Some of these include: variable scopes, global collections, helper methods like
tf.get_global_step and
tf.global_variables_initializer, optimizers implicitly
computing gradients over all trainable variables, and so on. TF2 eliminates all of these mechanisms (Variables 2.0 RFC) in favor of the default mechanism - you keep track of your variables. If you lose track of a
tf.Variable, it gets garbage collected.
The requirement to track variables creates some extra work, but with tools like the modeling shims and behaviors like implicit object-oriented variable collections in
tf.Modules and
tf.keras.layers.Layers, the burden is minimized.
Functions, not sessions
A
session.run call is almost like a function call: you specify the inputs and
the function to be called, and you get back a set of outputs. In TF2, you can decorate a Python function using
tf.function to mark it for JIT compilation so that TensorFlow runs it as a single graph (Functions 2.0 RFC). This mechanism allows TF2 to gain all of the benefits of graph mode:
- Performance: The function can be optimized (node pruning, kernel fusion, etc.)
- Portability: The function can be exported/reimported (SavedModel 2.0 RFC), allowing you to reuse and share modular TensorFlow functions.
```python
# TF1.x
outputs = session.run(f(placeholder), feed_dict={placeholder: input})

# TF2
outputs = f(input)
```
With the power to freely intersperse Python and TensorFlow code, you can take
advantage of Python's expressiveness. However, portable TensorFlow executes in
contexts without a Python interpreter, such as mobile, C++, and JavaScript. To
help avoid rewriting your code when adding
tf.function, use AutoGraph to convert a subset of Python constructs
into their TensorFlow equivalents:
- for/while -> tf.while_loop (break and continue are supported)
- if -> tf.cond
- for _ in dataset -> dataset.reduce
AutoGraph supports arbitrary nestings of control flow, which makes it possible to performantly and concisely implement many complex ML programs such as sequence models, reinforcement learning, custom training loops, and more.
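A minimal sketch of these conversions in action (assumes TensorFlow 2; the function is a made-up example, not from the guide):

```python
import tensorflow as tf

@tf.function
def count_positive(values):
    # AutoGraph rewrites this Python loop and conditional into
    # tf.while_loop / tf.cond equivalents when tracing the graph.
    count = tf.constant(0)
    for v in values:   # iterating over a tensor -> tf.while_loop
        if v > 0:      # tensor-dependent conditional -> tf.cond
            count += 1
    return count

print(count_positive(tf.constant([1, -2, 3, 0, 5])))  # tf.Tensor(3, ...)
```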
Adapting to TF 2.x Behavior Changes
Your migration to TF2 is only complete once you have migrated to the full set of TF2 behaviors. The full set of behaviors can be enabled or disabled via tf.compat.v1.enable_v2_behavior and tf.compat.v1.disable_v2_behavior. The sections below discuss each major behavior change in detail.
Using tf.functions
The largest changes to your programs during migration are likely to come from the fundamental programming model paradigm shift from graphs and sessions to eager execution and
tf.function. Refer to the TF2 migration guides to learn more about moving from APIs that are incompatible with eager execution and
tf.function to APIs that are compatible with them.
Below are some common program patterns not tied to any one API that may cause problems when switching from
tf.Graphs and
tf.compat.v1.Sessions to eager execution with
tf.functions.
Pattern 1: Python object manipulation and variable creation intended to be done only once get run multiple times
In TF1.x programs that rely on graphs and sessions, the expectation is usually that all Python logic in your program will only run once. However, with eager execution and
tf.function it is fair to expect that your Python logic will be run at least once, but possibly more times (either multiple times eagerly, or multiple times across different
tf.function traces). Sometimes,
tf.function will even trace twice on the same input, causing unexpected behaviors (see Examples 1 and 2). Refer to the
tf.function guide for more details.
Example 1: Variable creation
Consider the example below, where the function creates a variable when called:
```python
def f():
  v = tf.Variable(1.0)
  return v

with tf.Graph().as_default():
  with tf.compat.v1.Session() as sess:
    res = f()
    sess.run(tf.compat.v1.global_variables_initializer())
    sess.run(res)
```
However, naively wrapping the above function that contains variable creation with
tf.function is not allowed.
tf.function only supports singleton variable creations on the first call. To enforce this, when tf.function detects variable creation in the first call, it will attempt to trace again and raise an error if there is variable creation in the second trace.
```python
@tf.function
def f():
  print("trace")  # This will print twice because the python body is run twice
  v = tf.Variable(1.0)
  return v

try:
  f()
except ValueError as e:
  print(e)
```
A workaround is caching and reusing the variable after it is created in the first call.
```python
class Model(tf.Module):
  def __init__(self):
    self.v = None

  @tf.function
  def __call__(self):
    print("trace")  # This will print twice because the python body is run twice
    if self.v is None:
      self.v = tf.Variable(0)
    return self.v

m = Model()
m()
```
Example 2: Out-of-scope Tensors due to tf.function retracing
As demonstrated in Example 1,
tf.function will retrace when it detects Variable creation in the first call. This can cause extra confusion, because the two tracings will create two graphs. When the second graph from retracing attempts to access a Tensor from the graph generated during the first tracing, Tensorflow will raise an error complaining that the Tensor is out of scope. To demonstrate the scenario, the code below creates a dataset on the first
tf.function call. This would run as expected.
```python
class Model(tf.Module):
  def __init__(self):
    self.dataset = None

  @tf.function
  def __call__(self):
    print("trace")  # This will print once: only traced once
    if self.dataset is None:
      self.dataset = tf.data.Dataset.from_tensors([1, 2, 3])
    it = iter(self.dataset)
    return next(it)

m = Model()
m()
```
However, if we also attempt to create a variable on the first
tf.function call, the code will raise an error complaining that the dataset is out of scope. This is because the dataset is in the first graph, while the second graph is also attempting to access it.
```python
class Model(tf.Module):
  def __init__(self):
    self.v = None
    self.dataset = None

  @tf.function
  def __call__(self):
    print("trace")  # This will print twice because the python body is run twice
    if self.v is None:
      self.v = tf.Variable(0)
    if self.dataset is None:
      self.dataset = tf.data.Dataset.from_tensors([1, 2, 3])
    it = iter(self.dataset)
    return [self.v, next(it)]

m = Model()
try:
  m()
except TypeError as e:
  print(e)  # <tf.Tensor ...> is out of scope and cannot be used here.
```
The most straightforward solution is to ensure that the variable creation and the dataset creation both happen outside of the tf.function call. For example:
```python
class Model(tf.Module):
  def __init__(self):
    self.v = None
    self.dataset = None

  def initialize(self):
    if self.dataset is None:
      self.dataset = tf.data.Dataset.from_tensors([1, 2, 3])
    if self.v is None:
      self.v = tf.Variable(0)

  @tf.function
  def __call__(self):
    it = iter(self.dataset)
    return [self.v, next(it)]

m = Model()
m.initialize()
m()
```
However, sometimes creating variables inside tf.function is unavoidable (for example, slot variables in some TF Keras optimizers). Still, we can simply move the dataset creation outside of the tf.function call. We can rely on this because tf.function receives the dataset as an implicit input, so both graphs can access it properly.
```python
class Model(tf.Module):
  def __init__(self):
    self.v = None
    self.dataset = None

  def initialize(self):
    if self.dataset is None:
      self.dataset = tf.data.Dataset.from_tensors([1, 2, 3])

  @tf.function
  def __call__(self):
    if self.v is None:
      self.v = tf.Variable(0)
    it = iter(self.dataset)
    return [self.v, next(it)]

m = Model()
m.initialize()
m()
```
Example 3: Unexpected Tensorflow object re-creations due to dict usage
tf.function has very poor support for Python side effects such as appending to a list or checking/adding to a dictionary. For more details, see "Better performance with tf.function". In the example below, the code uses dictionaries to cache datasets and iterators. For the same key, each call to the model returns the same iterator of the dataset.
```python
class Model(tf.Module):
  def __init__(self):
    self.datasets = {}
    self.iterators = {}

  def __call__(self, key):
    if key not in self.datasets:
      self.datasets[key] = tf.compat.v1.data.Dataset.from_tensor_slices([1, 2, 3])
      self.iterators[key] = self.datasets[key].make_initializable_iterator()
    return self.iterators[key]

with tf.Graph().as_default():
  with tf.compat.v1.Session() as sess:
    m = Model()
    it = m('a')
    sess.run(it.initializer)
    for _ in range(3):
      print(sess.run(it.get_next()))  # prints 1, 2, 3
```
However, the pattern above will not work as expected in tf.function. During tracing, tf.function ignores the Python side effect of adding to the dictionaries. Instead, it only remembers the creation of a new dataset and iterator. As a result, each call to the model always returns a new iterator. This issue is hard to notice unless the numerical results or performance are affected significantly. Hence, we recommend thinking about the code carefully before naively wrapping Python code in tf.function.
```python
class Model(tf.Module):
  def __init__(self):
    self.datasets = {}
    self.iterators = {}

  @tf.function
  def __call__(self, key):
    if key not in self.datasets:
      self.datasets[key] = tf.data.Dataset.from_tensor_slices([1, 2, 3])
      self.iterators[key] = iter(self.datasets[key])
    return self.iterators[key]

m = Model()
for _ in range(3):
  print(next(m('a')))  # prints 1, 1, 1
```
We can use
tf.init_scope to lift the dataset and iterator creation outside of the graph, to achieve the expected behavior:
```python
class Model(tf.Module):
  def __init__(self):
    self.datasets = {}
    self.iterators = {}

  @tf.function
  def __call__(self, key):
    if key not in self.datasets:
      # Lifts ops out of function-building graphs
      with tf.init_scope():
        self.datasets[key] = tf.data.Dataset.from_tensor_slices([1, 2, 3])
        self.iterators[key] = iter(self.datasets[key])
    return self.iterators[key]

m = Model()
for _ in range(3):
  print(next(m('a')))  # prints 1, 2, 3
```
The general rule of thumb is to avoid relying on Python side effects in your logic and only use them to debug your traces.
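To build intuition for why such side effects misbehave, here is a framework-free Python analogy (a toy sketch, not TensorFlow itself): a decorator that, like tracing, runs the Python body once and replays the recorded result afterwards.

```python
def trace_once(fn):
    """Toy analogy for tf.function tracing: run the Python body once,
    record the result, and replay it on later calls. Python side
    effects inside fn happen only during that single run.
    (Toy simplification: argument values are ignored for caching,
    like a single-signature trace.)"""
    cache = {}
    def wrapper(*args):
        if "result" not in cache:
            cache["result"] = fn(*args)  # Python body runs only here
        return cache["result"]
    return wrapper

calls = []

@trace_once
def step(x):
    calls.append(x)  # side effect: recorded only during the single run
    return x * 2

print(step(3), step(3), step(3))  # 6 6 6
print(calls)                      # [3] -- appended once, not three times
```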
Example 4: Manipulating a global Python list
The following TF1.x code uses a global list of losses that it uses to only maintain the list of losses generated by the current training step. Note that the Python logic that appends losses to the list will only be called once regardless of how many training steps the session is run for.
```python
all_losses = []

class Model():
  def __call__(...):
    ...
    all_losses.append(regularization_loss)
    all_losses.append(label_loss_a)
    all_losses.append(label_loss_b)
    ...

g = tf.Graph()
with g.as_default():
  ...
  # initialize all objects
  model = Model()
  optimizer = ...
  ...
  # train step
  model(...)
  total_loss = tf.reduce_sum(all_losses)
  optimizer.minimize(total_loss)
  ...

...
sess = tf.compat.v1.Session(graph=g)
sess.run(...)
```
However, if this Python logic is naively mapped to TF2 with eager execution, the global list of losses will have new values appended to it in each training step. This means the training step code which previously expected the list to only contain losses from the current training step now actually sees the list of losses from all training steps run so far. This is an unintended behavior change, and the list will either need to be cleared at the start of each step or made local to the training step.
```python
all_losses = []

class Model():
  def __call__(...):
    ...
    all_losses.append(regularization_loss)
    all_losses.append(label_loss_a)
    all_losses.append(label_loss_b)
    ...

# initialize all objects
model = Model()
optimizer = ...

def train_step(...):
  ...
  model(...)
  total_loss = tf.reduce_sum(all_losses)  # global list is never cleared,
  # Accidentally accumulates sum loss across all training steps
  optimizer.minimize(total_loss)
  ...
```
Pattern 2: A symbolic tensor meant to be recomputed every step in TF1.x is accidentally cached with the initial value when switching to eager.
This pattern usually causes your code to silently misbehave when executing eagerly outside of tf.functions, but raises an
InaccessibleTensorError if the initial value caching occurs inside of a
tf.function. However, be aware that in order to avoid Pattern 1 above you will often inadvertently structure your code in such a way that this initial value caching will happen outside of any
tf.function that would be able to raise an error. So, take extra care if you know your program may be susceptible to this pattern.
The general solution to this pattern is to restructure the code or use Python callables if necessary to make sure the value is recomputed each time instead of being accidentally cached.
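The difference between a value computed once and a callable recomputed on demand can be shown with plain Python (an analogy, not TensorFlow code; the counter stands in for a step-dependent quantity):

```python
import itertools

counter = itertools.count(1)

# Computed eagerly once: the value is frozen at creation time.
frozen = 1.0 / next(counter)  # 1.0 forever

# Wrapped in a callable: re-evaluated at every use, which is the
# shape TF2 expects for schedules and hyperparameters.
def fresh():
    return 1.0 / next(counter)

print(frozen, frozen)    # 1.0 1.0
print(fresh(), fresh())  # 0.5 0.3333333333333333
```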
Example 1: Learning rate/hyperparameter/etc. schedules that depend on global step
In the following code snippet, the expectation is that every time the session is run the most recent
global_step value will be read and a new learning rate will be computed.
```python
g = tf.Graph()
with g.as_default():
  ...
  global_step = tf.Variable(0)
  learning_rate = 1.0 / global_step
  opt = tf.compat.v1.train.GradientDescentOptimizer(learning_rate)
  ...
  global_step.assign_add(1)
  ...

sess = tf.compat.v1.Session(graph=g)
sess.run(...)
```
However, when trying to switch to eager, be wary of ending up with the learning rate only being computed once then reused, rather than following the intended schedule:
```python
global_step = tf.Variable(0)
learning_rate = 1.0 / global_step  # Wrong! Only computed once!
opt = tf.keras.optimizers.SGD(learning_rate)

def train_step(...):
  ...
  opt.apply_gradients(...)
  global_step.assign_add(1)
  ...
```
Because this specific example is a common pattern and optimizers should only be initialized once rather than at each training step, TF2 optimizers support
tf.keras.optimizers.schedules.LearningRateSchedule schedules or Python callables as arguments for the learning rate and other hyperparameters.
Example 2: Symbolic random number initializations assigned as object attributes then reused via pointer are accidentally cached when switching to eager
Consider the following
NoiseAdder module:
```python
class NoiseAdder(tf.Module):
  def __init__(self, shape, mean):
    self.noise_distribution = tf.random.normal(shape=shape, mean=mean)
    self.trainable_scale = tf.Variable(1.0, trainable=True)

  def add_noise(self, input):
    return (self.noise_distribution + input) * self.trainable_scale
```
Using it as follows in TF1.x will compute a new random noise tensor every time the session is run:
```python
g = tf.Graph()
with g.as_default():
  ...
  # initialize all variable-containing objects
  noise_adder = NoiseAdder(shape, mean)
  ...
  # computation pass
  x_with_noise = noise_adder.add_noise(x)
  ...

...
sess = tf.compat.v1.Session(graph=g)
sess.run(...)
```
However, in TF2 initializing the
noise_adder at the beginning will cause the
noise_distribution to be only computed once and get frozen for all training steps:
```python
...
# initialize all variable-containing objects
noise_adder = NoiseAdder(shape, mean)  # Freezes `self.noise_distribution`!
...
# computation pass
x_with_noise = noise_adder.add_noise(x)
...
```
To fix this, refactor
NoiseAdder to call
tf.random.normal every time a new random tensor is needed, instead of referring to the same tensor object each time.
```python
class NoiseAdder(tf.Module):
  def __init__(self, shape, mean):
    self.noise_distribution = lambda: tf.random.normal(shape=shape, mean=mean)
    self.trainable_scale = tf.Variable(1.0, trainable=True)

  def add_noise(self, input):
    return (self.noise_distribution() + input) * self.trainable_scale
```
Pattern 3: TF1.x code directly relies on and looks up tensors by name
It is common for TF1.x code tests to rely on checking what tensors or operations are present in a graph. In some rare cases, modeling code will also rely on these lookups by name.
Tensor names are not generated when executing eagerly outside of
tf.function at all, so all usages of
tf.Tensor.name must happen inside of a
tf.function. Keep in mind the actual generated names are very likely to differ between TF1.x and TF2 even within the same
tf.function, and API guarantees do not ensure stability of the generated names across TF versions.
Pattern 4: TF1.x session selectively runs only part of the generated graph
In TF1.x, you can construct a graph and then selectively run only a subset of it with a session, by choosing a set of inputs and outputs that does not require running every op in the graph.

For example, you may have both a generator and a discriminator inside a single graph, and use separate `tf.compat.v1.Session.run` calls to alternate between training only the discriminator and training only the generator.
In TF2, due to automatic control dependencies in `tf.function` and eager execution, there is no selective pruning of `tf.function` traces. A full graph containing all variable updates would be run even if, for example, only the output of the discriminator or the generator is returned from the `tf.function`.

So, you would need to either use multiple `tf.function`s containing different parts of the program, or add a conditional argument to the `tf.function` that you branch on so as to execute only the things you actually want to have run.
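As a schematic of the "multiple functions" option (plain-Python stand-ins; the step functions below are invented names, and in real TF2 code each would be its own `tf.function` wrapping only the ops it needs):

```python
# Plain-Python schematic of splitting a GAN-style training loop into two
# functions, each of which would be decorated with @tf.function in real code.
state = {"discriminator_steps": 0, "generator_steps": 0}

def train_discriminator_step():
    # Would run only the discriminator's forward pass and variable updates.
    state["discriminator_steps"] += 1

def train_generator_step():
    # Would run only the generator's forward pass and variable updates.
    state["generator_steps"] += 1

# Alternate between the two sub-programs from the outer (eager) loop.
for step in range(4):
    if step % 2 == 0:
        train_discriminator_step()
    else:
        train_generator_step()
```

Because each function traces only its own subgraph, no unrelated variable updates can be dragged in by automatic control dependencies.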
Collections Removal
When eager execution is enabled, graph-collection-related `compat.v1` APIs (including those that read or write to collections under the hood, such as `tf.compat.v1.trainable_variables`) are no longer available. Some may raise a `ValueError`, while others may silently return empty lists.

The most standard usages of collections in TF1.x are to maintain initializers, the global step, weights, regularization losses, model output losses, and variable updates that need to be run, such as those from `BatchNormalization` layers.
To handle each of these standard usages:
- Initializers - Ignore. Manual variable initialization is not required with eager execution enabled.
- Global step - See the documentation of `tf.compat.v1.train.get_or_create_global_step` for migration instructions.
- Weights - Map your models to `tf.Module`s/`tf.keras.layers.Layer`s/`tf.keras.Model`s by following the guidance in the model mapping guide and then use their respective weight-tracking mechanisms, such as `tf.Module.trainable_variables`.
- Regularization losses - Map your models to `tf.Module`s/`tf.keras.layers.Layer`s/`tf.keras.Model`s by following the guidance in the model mapping guide and then use `tf.keras.losses`. Alternatively, you can also manually track your regularization losses.
- Model output losses - Use `tf.keras.Model` loss management mechanisms, or separately track your losses without using collections.
- Weight updates - Ignore this collection. Eager execution and `tf.function` (with AutoGraph and automatic control dependencies) mean that all variable updates get run automatically. So you will not have to explicitly run all weight updates at the end, but note that the updates may happen at a different time than they did in your TF1.x code, depending on how you were using control dependencies.
- Summaries - Refer to the migrating summary API guide.
More complex collections usage (such as using custom collections) may require you to refactor your code to either maintain your own global stores, or to make it not rely on global stores at all.
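One way to replace a custom collection is a small hand-rolled registry. The sketch below is illustrative only — the `Registry` class is invented for this example and is not a TensorFlow API:

```python
from collections import defaultdict

class Registry:
    """A minimal stand-in for a TF1.x-style custom collection."""
    def __init__(self):
        self._store = defaultdict(list)

    def add(self, key, value):
        self._store[key].append(value)

    def get(self, key):
        # Return a copy so callers cannot mutate the registry accidentally.
        return list(self._store[key])

# Example: tracking regularization losses without graph collections.
losses = Registry()
losses.add("regularization", 0.1)
losses.add("regularization", 0.2)
total_regularization = sum(losses.get("regularization"))
```

Whether such a store lives at module scope or is passed around explicitly is a design choice; passing it explicitly avoids recreating the global-state problems that collections had.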
`ResourceVariables` instead of `ReferenceVariables`

`ResourceVariables` have stronger read-write consistency guarantees than `ReferenceVariables`. This leads to more predictable, easier-to-reason-about semantics about whether or not you will observe the result of a previous write when using your variables. This change is extremely unlikely to cause existing code to raise errors or to break silently.

However, it is possible, though unlikely, that these stronger consistency guarantees may increase the memory usage of your specific program. Please file an issue if you find this to be the case. Additionally, if you have unit tests relying on exact string comparisons against the operator names in a graph corresponding to variable reads, be aware that enabling resource variables may slightly change the names of these operators.

To isolate the impact of this behavior change on your code, if eager execution is disabled you can use `tf.compat.v1.disable_resource_variables()` and `tf.compat.v1.enable_resource_variables()` to globally disable or enable it. `ResourceVariables` will always be used if eager execution is enabled.
Control flow v2
In TF1.x, control flow ops such as `tf.cond` and `tf.while_loop` inline low-level ops such as `Switch`, `Merge`, etc. TF2 provides improved functional control flow ops that are implemented with separate `tf.function` traces for every branch and that support higher-order differentiation.

To isolate the impact of this behavior change on your code, if eager execution is disabled you can use `tf.compat.v1.disable_control_flow_v2()` and `tf.compat.v1.enable_control_flow_v2()` to globally disable or enable it. However, you can only disable control flow v2 if eager execution is also disabled; if eager execution is enabled, control flow v2 will always be used.
This behavior change can dramatically change the structure of generated TF programs that use control flow, as they will contain several nested function traces rather than one flat graph. So, any code that is highly dependent on the exact semantics of produced traces may require some modification. This includes:
- Code relying on operator and tensor names
- Code referring to tensors created within a TensorFlow control flow branch from outside of that branch (this is likely to produce an `InaccessibleTensorError`)
This behavior change is intended to be performance neutral to positive, but if you run into an issue where control flow v2 performs worse for you than TF1.x control flow then please file an issue with reproduction steps.
TensorShape API behavior changes
The `TensorShape` class was simplified to hold `int`s instead of `tf.compat.v1.Dimension` objects, so there is no need to call `.value` to get an `int`.

Individual `tf.compat.v1.Dimension` objects are still accessible from `tf.TensorShape.dims`.

To isolate the impact of this behavior change on your code, you can use `tf.compat.v1.disable_v2_tensorshape()` and `tf.compat.v1.enable_v2_tensorshape()` to globally disable or enable it.

The following examples demonstrate the differences between TF1.x and TF2.
import tensorflow as tf
```python
# Create a shape and choose an index
i = 0
shape = tf.TensorShape([16, None, 256])
shape
```
TensorShape([16, None, 256])
If you had this in TF1.x:
value = shape[i].value
Then do this in TF2:
```python
value = shape[i]
value
```
16
If you had this in TF1.x:
```python
for dim in shape:
  value = dim.value
  print(value)
```
Then, do this in TF2:
```python
for value in shape:
  print(value)
```

```
16
None
256
```
If you had this in TF1.x (or used any other dimension method):
```python
dim = shape[i]
dim.assert_is_compatible_with(other_dim)
```
Then do this in TF2:
```python
other_dim = 16
Dimension = tf.compat.v1.Dimension

if shape.rank is None:
  dim = Dimension(None)
else:
  dim = shape.dims[i]

dim.is_compatible_with(other_dim)  # or any other dimension method
```
True
```python
shape = tf.TensorShape(None)

if shape:
  dim = shape.dims[i]
  dim.is_compatible_with(other_dim)  # or any other dimension method
```
The boolean value of a `tf.TensorShape` is `True` if the rank is known, `False` otherwise.

```python
print(bool(tf.TensorShape([])))       # Scalar
print(bool(tf.TensorShape([0])))      # 0-length vector
print(bool(tf.TensorShape([1])))      # 1-length vector
print(bool(tf.TensorShape([None])))   # Unknown-length vector
print(bool(tf.TensorShape([1, 10, 100])))        # 3D tensor
print(bool(tf.TensorShape([None, None, None])))  # 3D tensor with no known dimensions
print()
print(bool(tf.TensorShape(None)))     # A tensor with unknown rank.
```

```
True
True
True
True
True
True

False
```
Potential errors due to TensorShape changes
The `TensorShape` behavior changes are unlikely to silently break your code. However, you may see shape-related code begin to raise `AttributeError`s, as `int`s and `None`s do not have the same attributes that `tf.compat.v1.Dimension`s do. Below are some examples of these `AttributeError`s:
```python
try:
  # Create a shape and choose an index
  shape = tf.TensorShape([16, None, 256])
  value = shape[0].value
except AttributeError as e:
  # 'int' object has no attribute 'value'
  print(e)
```
'int' object has no attribute 'value'
```python
try:
  # Create a shape and choose an index
  shape = tf.TensorShape([16, None, 256])
  dim = shape[1]
  other_dim = shape[2]
  dim.assert_is_compatible_with(other_dim)
except AttributeError as e:
  # 'NoneType' object has no attribute 'assert_is_compatible_with'
  print(e)
```
'NoneType' object has no attribute 'assert_is_compatible_with'
Tensor Equality by Value
The binary `==` and `!=` operators on variables and tensors were changed to compare by value in TF2, rather than comparing by object reference as in TF1.x. Additionally, tensors and variables are no longer directly hashable or usable in sets or dict keys, because it may not be possible to hash them by value. Instead, they expose a `.ref()` method that you can use to get a hashable reference to the tensor or variable.

To isolate the impact of this behavior change, you can use `tf.compat.v1.disable_tensor_equality()` and `tf.compat.v1.enable_tensor_equality()` to globally disable or enable it.

For example, in TF1.x, two variables with the same value will return `False` when you use the `==` operator:

```python
tf.compat.v1.disable_tensor_equality()
x = tf.Variable(0.0)
y = tf.Variable(0.0)

x == y
```
False
While in TF2 with tensor equality checks enabled, `x == y` will return `True`.

```python
tf.compat.v1.enable_tensor_equality()
x = tf.Variable(0.0)
y = tf.Variable(0.0)

x == y
```
<tf.Tensor: shape=(), dtype=bool, numpy=True>
So, in TF2, if you need to compare by object reference, make sure to use `is` and `is not`:

```python
tf.compat.v1.enable_tensor_equality()
x = tf.Variable(0.0)
y = tf.Variable(0.0)

x is y
```
False
Hashing tensors and variables
With TF1.x behaviors, you used to be able to directly add variables and tensors to data structures that require hashing, such as `set`s and `dict` keys.

```python
tf.compat.v1.disable_tensor_equality()
x = tf.Variable(0.0)

set([x, tf.constant(2.0)])
```
{<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=0.0>, <tf.Tensor: shape=(), dtype=float32, numpy=2.0>}
However, in TF2 with tensor equality enabled, tensors and variables are made unhashable, because the `==` and `!=` operator semantics changed to value equality checks.

```python
tf.compat.v1.enable_tensor_equality()
x = tf.Variable(0.0)

try:
  set([x, tf.constant(2.0)])
except TypeError as e:
  # TypeError: Variable is unhashable. Instead, use tensor.ref() as the key.
  print(e)
```
Variable is unhashable. Instead, use tensor.ref() as the key.
So, in TF2, if you need to use tensor or variable objects as `dict` keys or `set` contents, you can use `tensor.ref()` to get a hashable reference that can be used as a key:

```python
tf.compat.v1.enable_tensor_equality()
x = tf.Variable(0.0)

tensor_set = set([x.ref(), tf.constant(2.0).ref()])
assert x.ref() in tensor_set

tensor_set
```
{<Reference wrapping <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=0.0>>, <Reference wrapping <tf.Tensor: shape=(), dtype=float32, numpy=2.0>>}
If needed, you can also get the tensor or variable back from the reference by using `reference.deref()`:

```python
referenced_var = x.ref().deref()
assert referenced_var is x

referenced_var
```
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=0.0>
Resources and further reading
- Visit the Migrate to TF2 section to read more about migrating to TF2 from TF1.x.
- Read the model mapping guide to learn more about mapping your TF1.x models to work in TF2 directly.
Cluster
A regional grouping of one or more container instances on which you can run task requests. Each account receives a default cluster the first time you use the Amazon ECS service, but you may also create other clusters. Clusters may contain more than one instance type simultaneously.
Contents
- activeServicesCount
The number of services that are running on the cluster in an `ACTIVE` state. You can view these services with ListServices.
Type: Integer
Required: No
- clusterArn
The Amazon Resource Name (ARN) that identifies the cluster. The ARN contains the `arn:aws:ecs` namespace, followed by the Region of the cluster, the AWS account ID of the cluster owner, the `cluster` namespace, and then the cluster name. For example, `arn:aws:ecs:region:012345678910:cluster/test`.
Type: String
Required: No
- clusterName
A user-generated string that you use to identify your cluster.
Type: String
Required: No
- pendingTasksCount
The number of tasks in the cluster that are in the `PENDING` state.
Type: Integer
Required: No
- registeredContainerInstancesCount
The number of container instances registered into the cluster. This includes container instances in both `ACTIVE` and `DRAINING` status.
Type: Integer
Required: No
- runningTasksCount
The number of tasks in the cluster that are in the `RUNNING` state.
Type: Integer
Required: No
- settings
The settings for the cluster. This parameter indicates whether CloudWatch Container Insights is enabled or disabled for a cluster.
Type: Array of ClusterSetting objects
Required: No
- statistics
Additional information about your clusters, separated by launch type.
Type: Array of KeyValuePair objects
Required: No
- status
The status of the cluster. The valid values are `ACTIVE` or `INACTIVE`. `ACTIVE` indicates that you can register container instances with the cluster and that the associated instances can accept tasks.
Type: String
Required: No
- tags

The metadata that you apply to the cluster to help you categorize and organize your clusters. Each tag consists of a key and an optional value, both of which you define.
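For illustration only (the field values below are invented and this is not an official AWS sample), a `Cluster` object returned by the API can be handled as a plain dictionary — for example, to index the `statistics` key/value pairs by name:

```python
# A hand-written response fragment shaped like the Cluster fields described above.
cluster = {
    "clusterArn": "arn:aws:ecs:region:012345678910:cluster/test",
    "clusterName": "test",
    "status": "ACTIVE",
    "runningTasksCount": 3,
    "pendingTasksCount": 1,
    "activeServicesCount": 2,
    "statistics": [
        {"name": "runningEC2TasksCount", "value": "3"},
        {"name": "runningFargateTasksCount", "value": "0"},
    ],
}

# KeyValuePair values arrive as strings; convert while indexing by name.
stats = {kv["name"]: int(kv["value"]) for kv in cluster["statistics"]}
```

The same dictionary shape is what an SDK call such as boto3's `describe_clusters` would return inside its `clusters` list, so this indexing pattern applies there as well.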
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the corresponding SDK documentation.
Just a bit of feedback, after much tinkering, trying both
the stable and beta versions, etc, I had to give it up.
Nothing I seemed to do would change the hashed local file
names. NOTHING. It really was easier to take the log file,
copy it to Word, Replace out as much extra text as
possible, convert it to a table, copy the table to Excel,
import it into Access, and write a query to display the
local address in a form under the server URL. I tried using
the string exactly as given, reading the manuals, etc to
figure out how to do the %[] syntax but never got there.
Maybe it's something to do with the server, maybe I need to
learn MySQL and PHP, but I've had nothing to do with setting
up the site where my blog was located. All I wanted was to
make a local backup.
And just so you know I'm not just bitching, as far as
downloading the files, etc, the program did a great job.
Everything but this name issue was easy to use and now I've
got what I need, right where I need it. Thank you!
xj
Xavier Roche
Michael Niedermayer wrote:
> On Thu, May 01, 2008 at 11:42:54PM -0400, Justin Ruggles wrote:
>> Michael Niedermayer wrote:
>>> On Wed, Apr 30, 2008 at 10:15:35PM -0400, Justin Ruggles wrote:
>>>> Hi,
>>>>
>>>> I wrote:
>>>>> I'll make the appropriate changes and submit new patch(es).
>>>> Here are 6 new patches which do the same as the last patch.
>>>>
>>>> -Justin
>>>>
>>>> [...]
>>>> diff --git a/libavcodec/flac.c b/libavcodec/flac.c
>>>> index 28e25e7..fb1ac49 100644
>>>> --- a/libavcodec/flac.c
>>>> +++ b/libavcodec/flac.c
>>>> @@ -119,7 +119,7 @@ static av_cold int flac_decode_init(AVCodecContext * avctx)
>>>>
>>>> static void dump_headers(AVCodecContext *avctx, FLACContext *s)
>>>> {
>>>> - av_log(avctx, AV_LOG_DEBUG, " Blocksize: %d .. %d (%d)\n", s->min_blocksize, s->max_blocksize, s->blocksize);
>>>> + av_log);
>>> why?
>> ok, i guess this should really be done at the same time as patch #5, and should also include changing the input to a FLACStreaminfo instead of a FLACContext.
>>
>>> [...]
>>>> +#ifndef FFMPEG_FLAC_H
>>>> +#define FFMPEG_FLAC_H
>>>> +
>>>> +#include "avcodec */\
>>>> +
>>>> +#endif /* FFMPEG_FLAC_H */
>>> This file does NOT need avcodec.h
>> sure enough. I'll take it out.
>>
>> New patch set:
>>
>> already approved:
>> 0001-change-function-params-for-metadata_streaminfo.patch
>> 0002-change-function-params-for-dump_headers.patch
>> 0005-move-init_get_bits-inside-conditional.patch
>>
>> attached:
>> 0003-split-out-some-decoder-context-params-to-a-shared-macro.patch
>> 0004-share-streaminfo-parsing-function.patch
>>
>> Thanks,
>> Justin
>
>> From e7c9ccea2bfb073069536620b9fa3ea66234f6be Mon Sep 17 00:00:00 2001
>> From: Justin Ruggles <justin.ruggles at gmail.com>
>> Date: Thu, 1 May 2008 23:32:01 -0400
>> Subject: [PATCH] split out some decoder context params to a shared macro
>
> ok
>
> [...]
>> From 9d87939cfb088f139f804a1d165de7d23304d288 Mon Sep 17 00:00:00 2001
>> From: Justin Ruggles <justin.ruggles at gmail.com>
>> Date: Thu, 1 May 2008 23:35:48 -0400
>> Subject: [PATCH] share streaminfo parsing function
>
> ok

applied. flac demuxer patches to follow in the near future.

thanks,
justin
Binary Search Tree : Lowest Common Ancestor
RubenzZzZ + 34 comments
Solution is based on the following thought: The value of a common ancestor has to always be between the two values in question.
```java
static Node lca(Node root, int v1, int v2) {
    // Decide if you have to recurse
    // Smaller than both
    if (root.data < v1 && root.data < v2) {
        return lca(root.right, v1, v2);
    }
    // Bigger than both
    if (root.data > v1 && root.data > v2) {
        return lca(root.left, v1, v2);
    }
    // Else solution already found
    return root;
}
```
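The same value-based descent translates directly to Python. The `Node` and `insert` helpers below are invented for this sketch (HackerRank supplies its own node class and tree builder):

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None

def insert(root, value):
    # Standard BST insert, used here only to build a test tree.
    if root is None:
        return Node(value)
    if value < root.data:
        root.left = insert(root.left, value)
    else:
        root.right = insert(root.right, value)
    return root

def lca(root, v1, v2):
    if root.data < v1 and root.data < v2:
        return lca(root.right, v1, v2)  # both values lie in the right subtree
    if root.data > v1 and root.data > v2:
        return lca(root.left, v1, v2)   # both values lie in the left subtree
    return root                         # values straddle (or equal) this node

root = None
for v in [4, 2, 3, 1, 7, 6, 8]:
    root = insert(root, v)
# lca(root, 1, 7) is the root 4; lca(root, 6, 8) is the node 7
```

Note that when one value is an ancestor of the other (e.g. 6 and 7 in this tree), the descent stops at the ancestor itself, which is the expected answer here.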
rmccune + 4 comments
this was my approach but it is failing testcase 2 and i don't know why. checking my 'output' for 5 hackos it reads 'CORRECT' so i don't know what's going on
RubenzZzZ + 2 comments
That the output reads CORRECT does just mean that this is the output you should get, its a bit weird. If you could post your code I will look through it to see if I see something wrong :)
divyaaarthi + 1 comment
i have returned A, but output is incorrect for testcase 2;
mohitaggarwal516 + 2 comments
your logic is incorrect after.....

```cpp
if (root->data < v1)
    lca(root->right, v1, v2);
else
    lca(root->left, v1, v2);
return safe;
```

In this case you are not checking v2. you may lose the trail of v2 while traversing. In your decision logic your root depends on v1 as well as v2...... go for the simple code...

```cpp
node * lca(node * root, int v1, int v2)
{
    if (v1 < root->data && v2 < root->data)
        root = lca(root->left, v1, v2);
    else if (v1 > root->data && v2 > root->data)
        root = lca(root->right, v1, v2);
    return root;
}
```
Sandeep_1991 + 1 comment
Awesome, can you please explain why assigning it to root will work correctly but fails if used as node* only for test case 2?
Thanks in advance!!
saikat777 + 1 comment
root->(right)->x->(right)->v1->(right)->v2
Consider this. LCA is v1's address.But since you're returning only the address without storing it in some value,it is getting lost.When control gets to x it will return it's own root ie x because each parametrised "root" is local.However storing it in root updates the value of the local root with the LCA and as the chain of hierarchy goes upwards the LCA keeps getting returned.
stark007_sc + 0 comments
```cpp
node * lca(node * root, int v1, int v2)
{
    if (root == NULL) {
        return NULL;
    }
    if (v1 > root->data && v2 > root->data) {
        lca(root->right, v1, v2);
    }
    else if (v1 < root->data && v2 < root->data) {
        lca(root->left, v1, v2);
    }
    return root;
}
```

same here test case 2 is where my code fails but in case of try and run it's working pretty well.
BlinkinBharg + 1 comment
This will not pass the case where v1 is an ancestor of v2.
phillipnguyen + 2 comments
It works correctly: if v1 is an ancestor of v2, then the algorithm will traverse the tree until root is the node containing v1. At this point root.data < v1 is not true and root.data > v1 is not true, therefore both of the if-statements are bypassed and root is returned, which is the correct behavior.
- AN
divyaaarthi + 0 comments
can u help me to bug what is wrong with testcase 2 for my code.
StefanK + 3 comments
No, if v1 is an ancestor of v2, v1 will be the lca. (this confused me for quite some time and I hope they change the challenge description to account for this.)
nanthiran_2005 + 0 comments
Agreed, it confused me too until I read your comment! Thanks :)
kapildevneupane1 + 0 comments
How can v1 be the ancestor of itself? I am still confused.
Or, are you referring to the correct answer that the question will accept?
maverick55 + 2 comments
couldn't think of a more perfect solution than this one.
b0kater + 4 comments
I prefer an iterative solution to this problem rather than recursive. The code is similarly brief, and there isn't any need to load up the call stack.
```java
static Node lca(Node root, int v1, int v2) {
    Node temp = root; // not necessary, just use root; a leftover from a different attempt
    while (true) {
        if (temp.data > v1 && temp.data > v2) {
            temp = temp.left;
        } else if (temp.data < v1 && temp.data < v2) {
            temp = temp.right;
        } else {
            return temp;
        }
    }
}
```
jaskaransingh_17 + 0 comments
awesome solution:)
anujpriyadarshi + 0 comments
This code gives the output as 1 and the result is correct. In another code the output is 4 and that too is marked correct.
jonmcclung + 2 comments
I did it the same way, with a minor performance improvement: I arrange v1 and v2 so that I know which is larger:
```cpp
node* lca_(node* root, int a, int b) {
    if (b < root->data) {
        return lca_(root->left, a, b);
    } else if (a > root->data) {
        return lca_(root->right, a, b);
    }
    return root;
}

node* lca(node* root, int a, int b) {
    if (a < b) {
        return lca_(root, a, b);
    }
    return lca_(root, b, a);
}
```
kotulakk + 2 comments
it's not really an improvement, lca is done with the assumption v1 < v2; which would be done before calling lca: as is with all test cases for this problem.
jonmcclung + 1 comment
Two things:
The first: The problem never tells us in the constraints that v1 < v2, that's an assumption that happens to be true this time.
If you did make that assumption, why this code:
```java
if (root.data < v1 && root.data < v2) {
    return lca(root.right, v1, v2);
}
// Bigger than both
if (root.data > v1 && root.data > v2) {
    return lca(root.left, v1, v2);
}
```
If you are assuming v1 < v2, then if root.data < v1, root.data cannot possibly >= v2.
My code uses this tautology do its advantage by only checking what is necessary to determine what we need to know. Therefore, it is better than your code, because it uses x/2 + 1 comparisons, where x is the number of comparisons you make, and I maintain that you should check beforehand, because the check itself can only have a marginal cost but since the whole algorithm relies on it, you shouldn't simply trust that whoever uses your code is going to follow your rules.
kairat_kemp + 0 comments
i made throw(exception) assumption is wrong swapping everything in the beginning simplifies problem significantly
ejan16 + 0 comments
I think this is much cleaner apporach by arranging v1 < v2
It is much easier to understand the logic.
performance is also slightly better.
you don't need else.. just two if commands and a return
if (...) return left_tree_lca
if (...) return right_tree_lca
return root
Great jobs!! :)
Thrashans + 2 comments
Thanks, that was really helpful. This is my C++ Solution
```cpp
node * lca(node * root, int v1, int v2) {
    node *cur{root};
    for (; cur->data > v1 && cur->data > v2; cur = cur->left);
    for (; cur->data > v1 && cur->data > v2; cur = cur->right);
    return cur;
}
```
codeharrier + 1 comment
Took a second, but I get it. It only actually uses one of the loops. It also doesn't need to care how v1 and v2 are ordered.
No need to obfuscate it, though.
node * lca(node * root, int v1,int v2) { node *cur{root}; while (cur->data > v1 && cur->data > v2) cur = cur->left; while (cur->data > v1 && cur->data > v2) cur = cur->right; return cur; }
dongxy90 + 1 comment
A bit confusing for this and I found a counter example.
I had a tree: 8 4 9 1 6 2 5 7 3
lca of 2 and 3 should be 2, but this logic return 1
hongocvuong1998 + 0 comments
Tree flase
kartheek_kappag1 + 0 comments
also wrong for below output 7 4 2 7 1 3 6 8 6 8
dfymarine + 2 comments
for example, for a tree: 4 2 3 1 7 6 8
if you want to find the lowest common ancestor of 6 and 8, your code outputs 7, but the correct answer should be 4!
Moghaak + 1 comment
I like simplicity of your solution. My brain finally has started to figure out (picking up) a pattern in recursion. But the only problem that I am facing as of now is I overcomplicated solution. Such as the one below.
```cpp
if ((root->data < v2 && root->data > v1) || (root->data > v2 && root->data < v1)) {
    return root;
} else if (root->data > v1 && root->data > v2) {
    return lca(root->left, v1, v2);
} else if (root->data < v1 && root->data < v2) {
    return lca(root->right, v1, v2);
} else {
    return root;
}
```
mohitaggarwal516 + 1 comment
this solution is pretty convincing. The only issue i am seeing is that in your first if either of v1 or v2 is at the root node then your condition is not responding to that moves to the sec elseif where as it should return root in first place..... lets say v1=2, v2=5 and in the tree root is 2 and root.right is 5 then your if condition is failing...hope this will help..
Moghaak + 1 comment
Thanks.
That's what I was looking at right now when I was solving the problem from Cracking the Coding Interview. Also, if I use the simple code snippet given at the top, my cases are failing. It is not returning me with the immidiate common ancester. It is rather returning the very top node.
Please comment on this issue, is it just me? Or we have a problem in the code above?
Yep I guess I need to add one more if condition namely if v1 or v2 is equal to root->data, return root.
Thanks.
mohitaggarwal516 + 1 comment
you have given the end 'return root' in else condition where as it should be returned in every condition. just remove last else...
abobakrpp + 1 comment
I think that in the first part of the code (// smaller than both) is useless because if both v1 and v2 are greater than root.data so the LCA is the root itself because all nodes on the right of the root are greater than it so the conditions inside the function will be only :-
```java
// Bigger than both
if (root.data > v1 && root.data > v2) {
    return lca(root.left, v1, v2);
}
// Else solution already found
return root;
```
I submited this code and it passed all test cases ...
peterkirby + 0 comments
This also works.
```cpp
node * lca(node * root, int v1, int v2) {
    if (root == NULL || root->data == v1 || root->data == v2)
        return root;
    node * left = lca(root->left, v1, v2);
    node * right = lca(root->right, v1, v2);
    if (left != NULL && right != NULL)
        return root;
    if (left != NULL)
        return left;
    return right;
}
```
KeyTapper + 1 comment
My approach is very similar but I am getting error in TestCase 2.I have been trying to figure that out for an hour but couldn't.What's so special about case 2?
arjunthedragon + 1 comment
same situation here, bro :/
RJB3_MechLearn + 0 comments
test case 2 has been screwy for 2 years far as i can tell. i've configured my code to return 1 and 4 and neither passes. the example given at the top of the description also doesn't fit BST criteria I don't think?
manoharkotapati + 0 comments
consider this case: Tree size 3 and its contents are 4,2,1 and search for 2 and 1. As per correct solution provided, the output is returning as 2 but it should be 4 right?
ayeshamatloob + 1 comment
this solution doesnot work with case 2 because you have not made a condition on root that if it is not null then check that v1 and v2 are greater or less.
gschoen + 0 comments
Hmm I never thought to use a value test. I used a recursive inTree() function to test if a value was in a particular subtree. The running times of all test cases was 0 so the recursion didn't hurt in this case.
```cpp
bool inTree(node *head, int value) {
    if (head == NULL) {
        return false;
    } else if (value == head->data) {
        return true;
    } else {
        return inTree(head->left, value) || inTree(head->right, value);
    }
}

node * lca(node * root, int v1, int v2) {
    if (inTree(root->left, v1) && inTree(root->left, v2)) {
        return lca(root->left, v1, v2);
    } else if (inTree(root->right, v1) && inTree(root->right, v2)) {
        return lca(root->right, v1, v2);
    } else {
        return root;
    }
}
```
tanya20_1995 + 1 comment
pls help! i can't trace the code especially when it states (4<1 &&4 <7) or (4>1&&4>7). how the recursive calls r taking place..
dgodfrey + 1 comment
My solution was so convoluted.
```cpp
vector<node*> pathTo(node *root, int key) {
    vector<node*> path;
    node *current = root;
    while (current != NULL && current->data != key) {
        path.push_back(current);
        if (key < current->data) current = current->left;
        else if (key > current->data) current = current->right;
    }
    if (current) path.push_back(current);
    return path;
}

node *lca(node *root, int x, int y) {
    vector<node*> pathX = pathTo(root, x), pathY = pathTo(root, y);
    unordered_set<node*> s;
    int i;
    for (i = pathX.size() - 1; i >= 0; --i) s.insert(pathX[i]);
    for (i = pathY.size() - 1; i >= 0; --i) {
        if (!s.insert(pathY[i]).second) break;
    }
    return pathY[i];
}
```
tanya20_1995 + 0 comments
Thank u!!
fede92 + 5 comments
I think in this counter example it wouldnt work:
v1 is 1 and v2 is 7
```
  4
 / \
2   6
 \ / \
 3 5  7
     /
    1
```
Answer we would get is 4 and the correct answer is 6.
rshaghoulian + 0 comments
Nice job. You can alternatively try to solve it iteratively like this to achieve O(1) space complexity since recursion takes O(log n) space.
saxtouri + 0 comments
It is better to first check whether you have already found the lca and then decide which path to take.

So, the same idea without brackets would look like this:

```cpp
node * lca(node * root, int v1, int v2) {
    int diff1 = root->data - v1, diff2 = root->data - v2;
    if (diff1 * diff2 <= 0) return root;
    if (diff1 < 0) return lca(root->right, v1, v2);
    return lca(root->left, v1, v2);
}
```
tushar2693 + 0 comments
I don't think it is a correct solution.. It will always return the root of the tree after the final evaluation.. All test cases except test case 2 are working because for all of them the root of the tree is the LCA..

```java
Node lca(Node root, int v1, int v2) {
    return root;
}
```
Try this, you will know what I am trying to say.. If you run the above code it will also fail only test case 2..
After we get the required node, in the shrinking phase of recursions we need a way so that we can contain the deepest recursive node in root and not the actual root of tree.. though I am not sure how to do that..
holy_monk + 0 comments
I think there's a bug in your code. Your code doesn't address the condition when v1 or v2 = root, in that case, the lca is the parent of the current root but your code will return current root and that's why it may be failing some test cases. Let's say, for this bst,
```
      40
     /
    20   <-- v1
   /
  10     <-- v2

lca = 40
```
if we are given v1 = 20, v2 = 10 then if we go through your code it'll return 20 as the lca but actually 40 will be the lca.
Have a look at my code. It passes all the test cases.
```cpp
node * lca(node * root, int v1, int v2) {
    if (v1 < root->data && v2 < root->data)
        if (node *tmp = lca(root->left, v1, v2))
            return tmp;
    if (v1 > root->data && v2 > root->data)
        if (node *tmp = lca(root->right, v1, v2))
            return tmp;
    return root;
}
```
153J1A05A1 + 0 comments
this doesn't work for all the test cases.
hingoman25 + 0 comments
Can someone explain this recursive solution to me? I'm not able to get the correct return value.
avansharma + 0 comments
The example input given to the problem in the description has Node 5 on left of Root node 1.
Which I do think will fail your solution.
knyl2013 + 0 comments
```java
static Node lca(Node root, int v1, int v2) {
    Node curr = root;
    while ((curr.data < v1 && curr.data < v2) || (curr.data > v1 && curr.data > v2)) {
        if (curr.data < v1 && curr.data < v2) {
            curr = curr.right;
        } else {
            curr = curr.left;
        }
    }
    return curr;
}
```
Changed it to iterative so the space complexity is O(1)
yesudeep + 1 comment
Here's my code:
def lca(root, a, b):
    node = root
    while node:
        if max(a, b) < node.data:
            node = node.left
        elif min(a, b) > node.data:
            node = node.right
        else:
            break
    return node
Does that help you?
miere00 + 0 comments
I noticed people often see @RubenzZzZ's answer (which is awesome, btw) and forget to +1 this one too... Non-recursive == less expensive!
newsbeidl + 3 comments
As with so many challenges here, this is a terrible problem description. This is one of the worst. The "Lowest common ancestor" is not even defined! And the sole example just returns the root of the tree! Horrible, lazy, horrible problem description. Why doesn't HackerRank have standards on their problems? We should be allowed to VOTE on how well defined a problem is, otherwise we end up with problems like this one.
tduncan + 0 comments
Yes. I knew what the LCA problem was already, but this question is really meant for people who aren't familiar with it yet, so they should have it spelled out. I was annoyed that the problem says nothing about how to handle duplicate values - if you have a tree full of 5's, do you put all the 5's on the left, all of them on the right, or balance it out somehow? I just assumed they didn't include such things in the tests and my code worked, but this should have been mentioned in the problem.
rshaghoulian + 1 comment
Efficient Java solution - passes 100% of test cases
From my HackerRank solutions.
Make sure to leverage the fact that this is a binary search tree. We assume the tree has unique values.
Runtime: O(log n) on a balanced tree
Space Complexity: O(1)
I solve it iteratively since recursion would take O(log n) space complexity
static Node lca(Node n, int v1, int v2) {
    while (n != null) {
        if (n.data > v1 && n.data > v2) {
            n = n.left;
        } else if (n.data < v1 && n.data < v2) {
            n = n.right;
        } else {
            break;
        }
    }
    return n;
}
Let me know if you have any questions.
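For anyone following along in another language, here is a hedged Python sketch of the same iterative walk; the Node class below is a minimal stand-in for whatever node type your environment provides, not part of the challenge:

```python
class Node:
    # minimal stand-in for HackerRank's node type
    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None

def lca(n, v1, v2):
    # Walk down from the root; the first node that does not have both
    # values strictly on one side is the lowest common ancestor.
    while n is not None:
        if n.data > v1 and n.data > v2:
            n = n.left
        elif n.data < v1 and n.data < v2:
            n = n.right
        else:
            break
    return n

# Small BST:    4
#              / \
#             2   7
#            / \
#           1   3
root = Node(4)
root.left, root.right = Node(2), Node(7)
root.left.left, root.left.right = Node(1), Node(3)

print(lca(root, 1, 3).data)  # 2
print(lca(root, 1, 7).data)  # 4
print(lca(root, 2, 3).data)  # 2 (a node counts as an ancestor of itself)
```

Note that the last call illustrates the "node is its own ancestor" convention debated elsewhere in this thread.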
kumarsurajsingh1 + 1 comment
Only problem in #2 test case, Can you help me ..
static Node lca(Node root, int v1, int v2) {
    if (root == null) {
        return null;
    }
    if (root.data > v1 && root.data > v2)
        lca(root.left, v1, v2);
    if (root.data
rshaghoulian + 1 comment
Hi. I cannot read your code since part of it is missing. Try putting it in a code snippet as I did in my post and I will take a look.
Vikrant_Ch + 2 comments
static Node lca(Node root, int v1, int v2) {
    if (root == null)
        return null;
    if (root.data > v1 && root.data > v2) {
        return lca(root.right, v1, v2);
    } else if (root.data < v1 && root.data < v2) {
        return lca(root.left, v1, v2);
    } else {
        return root;
    }
}
Vikrant_Ch + 0 comments
failing for test case #2
rshaghoulian + 0 comments
I think you're traversing the wrong side of the tree. If the value at the root is larger than both v1 and v2, you should traverse the root.left instead of root.right
max1994 + 1 comment
failing test case 2 despite keeping the condition if v1 ancestor of v2 return v1.
rishabjain603 + 0 comments
Same here. So do you know now what's the problem?
sujairamprasathc + 0 comments
Poor problem description. The tree given in the description is not a BST.
aaditya_oza + 0 comments
Can there be duplicate instances of an entry in the BST ? If yes, what's the strategy to insert them into the tree ?
piyush_v94 + 4 comments
what if Node A is an ancestor of Node B, do we return A or its parent?
kamikaz1_k + 0 comments
I'd imagine a node cannot be considered its own ancestor. So I would think that you would need to return the parent of A.
thisisjoelee + 0 comments
A, which I think is wrong but seems to be what the problem is looking for.
sllavanya91 + 1 comment
You will have to return node A, because for the Lowest common ancestor problem, every node is considered a descendant of itself.
So if A is an ancestor of B, your lowest common ancestor would have to be A. Hope this clarifies your question.
divyaaarthi + 1 comment
I have returned A, but the output is incorrect.
divyaaarthi + 0 comments
getting incorrect output in test case 2
On Nov 9, 2007 1:46 AM, Chris McDonough <[EMAIL PROTECTED]> wrote:
>
> On Nov 8, 2007, at 6:25 PM, Jim Fulton wrote:
> > Guido recently told me that people in the Python community at large
> > assume that anything in the Zope namespace is assumed to be Zope
> > specific, so I'd rather not put it there.
>
> Does it matter? People who are allergic to the name "zope" can
> probably lose.
Well, it will be an uphill struggle to get people to use it, and that's bad, because the more people that use it, the more people will help code it...

--
Lennart Regebro: Zope and Plone consulting.
+33 661 58 14 64

_______________________________________________
For more information about ZODB, see the ZODB Wiki:
ZODB-Dev mailing list - ZODB-Dev@zope.org
I'm trying to compile my web app as a native desktop application in C. However I'm having a bit of trouble grabbing the file path in C.
In PyGTK I would use...
import webkit, pygtk, gtk, os

path = os.getcwd()
print path
web_view.open("file://" + path + "/index.html")
However I'm not sure if I'm just looking in the wrong places or what, but when I search Google I haven't been able to find out how to grab the file path in C which I want to use like this.
gchar* uri = (gchar*) (argc > 1 ? argv[1] : "file://" + path + "app/index.html");
Instead of linking to it in a grotesque manner like so...
gchar* uri = (gchar*) (argc > 1 ? argv[1] : "file:///home/michael/Desktop/kodeWeave/linux/app/index.html");
webkit_web_view_open (web_view, uri);
Here's my full project (if helpful).
#include <stdio.h>
#include <string.h>
#include <gtk/gtk.h>
#include <webkit/webkit.h>

static WebKitWebView* web_view;

void on_window_destroy (GtkObject *object, gpointer user_data)
{
    gtk_main_quit();
}

int main (int argc, char *argv[])
{
    GtkBuilder *builder;
    GtkWidget *window;
    GtkWidget *scrolled_window;

    gtk_init(&argc, &argv);

    builder = gtk_builder_new();
    gtk_builder_add_from_file (builder, "browser.xml", NULL);

    window = GTK_WIDGET (gtk_builder_get_object (builder, "window1"));
    scrolled_window = GTK_WIDGET (gtk_builder_get_object (builder, "scrolledwindow1"));
    g_signal_connect (G_OBJECT (window), "delete-event", gtk_main_quit, NULL);
    gtk_window_set_title(GTK_WINDOW(window), "kodeWeave");

    web_view = WEBKIT_WEB_VIEW (webkit_web_view_new());
    gtk_container_add (GTK_CONTAINER (scrolled_window), GTK_WIDGET (web_view));

    gtk_builder_connect_signals (builder, NULL);
    g_object_unref (G_OBJECT (builder));

    gchar* uri = (gchar*) (argc > 1 ? argv[1] : "");
    webkit_web_view_open (web_view, uri);

    gtk_widget_grab_focus (GTK_WIDGET (web_view));
    gtk_widget_show_all (window);
    gtk_main();
    return 0;
}
You can't use the + operator to concatenate strings in C; you may need snprintf instead. First you need a large enough buffer; the constant PATH_MAX may work, and it's defined in limits.h, so for example:
char uri[PATH_MAX];
char cwd[PATH_MAX];
getcwd(cwd, sizeof(cwd));

if (argc > 1)
    snprintf(uri, sizeof(uri), "%s", argv[1]);
else
    snprintf(uri, sizeof(uri), "", cwd);
    /* format string needs a %s specifier for this char pointer */
The + operator works with your operands, but in a different way: it just performs pointer arithmetic, because the operands are pointers.
Due by 4pm on Wednesday, 14 March
You can grab a template for this homework either by downloading the file from the calendar or by running the following command in terminal on one of the school computers (the dot is significant: it denotes the current directory)
cp ~cs61a/lib/hw/hw8.py .
Readings. Chapter 3.3
Q1. Write a version of tree_find from lecture (for finding keys among the labels of a binary search tree) that is purely iterative and does not use recursion.
Q2. Define a function depth that, given a Tree, T, and a value, x, finds the depth at which x appears as a label in the tree. Depth here refers to distance from the root, T. The node T itself is at depth 0; its children are at depth 1, etc. Assume that x appears at most once in the tree. Return None if it does not appear.
Q3. Generalize the binary search trees from lecture to search trees with more than two children. We can define a general search tree as one whose labels are lists of keys such that
- A node whose label is None represents an empty collection.
- Otherwise, there is at least one key in a node label and the keys are sorted in ascending order.
- A non-empty node with N keys has N+1 children, which are also general search trees.
- If x is key #k in a node's label, then all keys in child #k are less than x and all those in child #k+1 are greater than x.
Fill in the definition of gen_tree_find in the skeleton to search for a key in such a general search tree.
Q4. Write a higher-order function that generalizes memoization:
def memoize(func):
    """Returns a function that takes the same arguments as 'func'
    and returns the same value, but with memoization.  That is, if
    f is the function returned by memoize(func), then f(v) returns
    func(v), but if f is called twice with the same arguments, v, it
    does not call func(v), but returns the previously returned value.
    We assume that 'func' is a pure function whose value depends only
    on the values of its arguments, and whose side-effects are
    irrelevant, and that the values of its argument, v, are of a type
    suitable for use as keys in a Python dictionary."""
So, for example, if we define:
def fib(x):
    print(x)
    if x <= 1:
        return 1
    else:
        return fib(x-2) + fib(x-1)

fib = memoize(fib)
and then call fib(6), we'd get the expected return value (13), but the printed values would be 0, 1, 2, 3, 4, 5, 6, instead of the sequence we would expect from the unmemoized fib, which is 0, 1, 2, 0, 1, 3, 1, 2, 0, 1, etc.
Your memoize function should work with functions that take any number of parameters. Reminder: In Python, the syntax:
def f(*a): ...
allows f to take any number of parameters (0 or more), setting a to a tuple containing them. Likewise, if g is a function taking two parameters, then:
>>> g(1, 2)
42
>>> v = (1, 2)
>>> g(*v)
42
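To make the reminder concrete, here is a small illustrative session (g below is a hypothetical two-parameter function chosen so that g(1, 2) returns 42, as in the example above; it is not part of the assignment):

```python
def g(x, y):
    # hypothetical two-parameter function with g(1, 2) == 42
    return 40 + x * y

def f(*a):
    # a is bound to a tuple holding every positional argument
    return a

print(f())         # ()
print(f(1, 2, 3))  # (1, 2, 3)

v = (1, 2)
print(g(*v))       # 42, exactly as if we had called g(1, 2)
```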
Q5. Modify your solution to Q4 so that if the calculation of func(v) for some value of v causes a recursive call of func(v) (that is, a call with the same arguments, indicating an infinite loop), then the memoized function raises a RuntimeError exception. Call the new version checked_memoize.
Q6. Consider the following definition of adjoin_set, adapted to the binary search trees in this problem set, and adjoin_all:
empty_set = Tree(None)

def adjoin_set(S, v):
    """Assuming S is a binary search tree representing a set (no
    duplicate values), the binary search tree representing the set
    S U {v}."""
    if S.label is None:
        return Tree(v, None, None)
    elif v < S.label:
        return Tree(S.label, adjoin_set(S[0], v), S[1])
    elif v == S.label:
        return S
    else:
        return Tree(S.label, S[0], adjoin_set(S[1], v))

def adjoin_all(S, L):
    """The result of adding all the elements of L to set S, in order."""
    for v in L:
        S = adjoin_set(S, v)
    return S
Define two functions: bad(N) and good(N) that each returns a sequence of N non-null integer values such that tree_find(adjoin_all(empty_set, bad(N)), x) takes as long as possible for any given value N and the worst x, and tree_find(adjoin_all(empty_set, good(N)), x) takes as little time as possible for any given value N and the worst x.
Q7. [Extra for experts] Write a function that returns the result of removing a value from a binary search tree, if it is present (maintaining the search-tree property, of course). Returns the original tree if the value is not present. The time spent should be proportional to the depth of the tree. Hint: This is easy if the node whose label matches the value being deleted contains at most one non-empty child. The tricky part is figuring out what to do when that node has two non-empty children.
Q8. [Extra for experts] Define a function preorder(T) on Trees that returns an iterator over the labels in T in preorder. That is, it lists a node's label first, then those of its children (recursively) in order. (For this problem, there are no empty trees; None is just a possible label value):
""" >>> T = Tree(1, Tree(2, Tree(3, 4, 5), 6), 7, 8) >>> list(preorder(T)) [1, 2, 3, 4, 5, 6, 7, 8] """ | http://www-inst.eecs.berkeley.edu/~cs61a/sp12/hw/hw8.html | CC-MAIN-2017-26 | refinedweb | 932 | 65.25 |
Lasso regression is another form of regularized regression. With this particular version, the coefficient of a variable can be reduced all the way to zero through the use of the l1 regularization. This is in contrast to ridge regression which never completely removes a variable from an equation as it employs l2 regularization.
Regularization helps to stabilize estimates as well as deal with bias and variance in a model. In this post, we will use the "Caschool" dataset from the pydataset library. Our goal will be to predict test scores based on several independent variables. The steps we will follow are as follows.
- Data preparation
- Develop a baseline linear model
- Develop lasso regression model
The initial code is as follows
from pydataset import data
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import Lasso
df=pd.DataFrame(data('Caschool'))
Data Preparation
The data preparation is simple in this example. We only have to store the desired variables in our X and y datasets. We are not using all of the variables. Some were left out because they were highly correlated. Lasso is able to deal with this to a certain extent, but it was decided to leave them out anyway. Below is the code.
X=df[['teachers','calwpct','mealpct','compstu','expnstu','str','avginc','elpct']]
y=df['testscr']
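As an aside, one hedged way to spot the kind of high correlation mentioned above is pandas' corr method; the frame below is a toy stand-in rather than the Caschool data:

```python
import pandas as pd

toy = pd.DataFrame({
    "a": [1, 2, 3, 4],
    "b": [2, 4, 6, 8],   # perfectly correlated with "a"
    "c": [4, 1, 3, 2],
})

# Pairwise Pearson correlations; entries near 1 or -1 flag redundant predictors
print(toy.corr().round(2))
```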
Baseline Model
We can now run our baseline model. This will give us a measure of comparison for the lasso model. Our metric is the mean squared error. Below is the code with the results of the model.
regression=LinearRegression()
regression.fit(X,y)
LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=False)
first_model=(mean_squared_error(y_true=y,y_pred=regression.predict(X)))
print(first_model)
69.07380530137416
First, we instantiate the LinearRegression class. Then, we run the .fit method to do the analysis. Next, we compute predictions from the fitted regression model and save the resulting mean squared error to the object first_model. Lastly, we print the results.
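To make the metric concrete, mean_squared_error is just the average of the squared residuals; the toy arrays below are made-up values for illustration:

```python
import numpy as np
from sklearn.metrics import mean_squared_error

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.5, 2.0, 2.0])

# Residuals are 0.5, 0, -1, so the MSE is (0.25 + 0 + 1) / 3
mse = mean_squared_error(y_true, y_pred)
by_hand = np.mean((y_true - y_pred) ** 2)
print(mse, by_hand)  # both are 1.25/3, roughly 0.4167
```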
Below are the coefficients for the baseline regression model.
coef_dict_baseline = {}
for coef, feat in zip(regression.coef_,X.columns):
coef_dict_baseline[feat] = coef
coef_dict_baseline
Out[52]:
{'teachers': 0.00010011947964873427,
'calwpct': -0.07813766458116565,
'mealpct': -0.3754719080127311,
'compstu': 11.914006268826652,
'expnstu': 0.001525630709965126,
'str': -0.19234209691788984,
'avginc': 0.6211690806021222,
'elpct': -0.19857026121348267}
The for loop simply combines the features in our model with their coefficients. With this information we can now make our lasso model and compare the results.
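The same pairing can also be written as a dict comprehension; the two short lists below stand in for regression.coef_ and X.columns:

```python
coefs = [0.62, -0.19]          # stand-in for regression.coef_
features = ["avginc", "str"]   # stand-in for X.columns

coef_dict = {feat: coef for coef, feat in zip(coefs, features)}
print(coef_dict)  # {'avginc': 0.62, 'str': -0.19}
```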
Lasso Model
For our lasso model, we have to determine what value to set the l1 penalty, or alpha, to prior to creating the model. This can be done with a grid search. This function allows you to assess several models with different l1 settings. Then Python will tell us which setting is the best. Below is the code.
lasso=Lasso(normalize=True)
search=GridSearchCV(estimator=lasso,param_grid={'alpha':np.logspace(-5,2,8)},scoring='neg_mean_squared_error',n_jobs=1,refit=True,cv=10)
search.fit(X,y)
We start by instantiating Lasso with normalization set to true. It is important to scale data when doing regularized regression. Next, we set up our grid; we include the estimator, the parameter grid, and the scoring. The alpha is set using logspace. We want values between -5 and 2, and we want 8 evenly spaced settings for the alpha. The other arguments include cv, which stands for cross-validation. n_jobs affects processing, and refit updates the parameters.
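For reference, np.logspace(-5, 2, 8) generates the candidate alphas evenly spaced on a log scale, from 10^-5 up to 10^2:

```python
import numpy as np

alphas = np.logspace(-5, 2, 8)
print(alphas)  # eight values: 1e-05, 1e-04, ..., 1e+01, 1e+02
```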
After completing this, we used the fit function. The code below indicates the appropriate alpha and the expected score if we ran the model with this alpha setting.
search.best_params_
Out[55]: {'alpha': 1e-05}
abs(search.best_score_)
Out[56]: 85.38831122904011
The alpha is set almost to zero, which is the same as a regression model. You can also see that the mean squared error is actually worse than in the baseline model. In the code below, we run the lasso model with the recommended alpha setting and print the results.
lasso=Lasso(normalize=True,alpha=1e-05)
lasso.fit(X,y)
second_model=(mean_squared_error(y_true=y,y_pred=lasso.predict(X)))
print(second_model)
69.0738055527604
The value for the second model is almost the same as the first one. The tiny difference is due to the fact that there is some penalty involved. Below are the coefficient values.
coef_dict_baseline = {}
for coef, feat in zip(lasso.coef_,X.columns):
coef_dict_baseline[feat] = coef
coef_dict_baseline
Out[63]:
{'teachers': 9.795933425676567e-05,
'calwpct': -0.07810938255735576,
'mealpct': -0.37548182158171706,
'compstu': 11.912164626067028,
'expnstu': 0.001525439984250718,
'str': -0.19225486069458508,
'avginc': 0.6211695477945162,
'elpct': -0.1985510490295491}
The coefficient values are also slightly different. The only difference is the teachers variable was essentially set to zero. This means that it is not a useful variable for predicting testscrs. That is ironic to say the least.
Conclusion
Lasso regression is able to remove variables that are not adequate predictors of the outcome variable. Doing this in Python is fairly simple. This yet another tool that can be used in statistical analysis. | https://educationalresearchtechniques.com/2018/12/21/lasso-regression-with-python/?shared=email&msg=fail | CC-MAIN-2020-40 | refinedweb | 836 | 53.78 |
On Mon, Feb 11, 2002 at 12:47:32PM +0100, Marcus Brinkmann wrote:
> ).

Actually, I think you'd want to standardize the naming. I was just trying to establish a concrete example. I didn't follow through well enough, though.

> >.

True, but that makes it difficult to distinguish between regular dependencies and these architecture-type dependencies in software.

> >?

Simple example: you want to be able to quickly find all possible packages that would work on your system. At that point, it'd be very useful to have a file that looks like:

Package: telnet
Env-Depends: i386, elf, gnu-libc6

Package: wdiff
Env-Depends: alpha, elf, gnu-libc6.1

It's called an index, and it's commonly used in databases to speed things up. I think we should also consider using db3 databases rather than flat text files, but that's a different issue...

> > and doesn't pollute the
> > package namespace with a lot of virtual packages.
>
> Beauty is in the eye of the beholder. I think having two fields duplicated
> without any real technical advantage is adding to ugliness.

The advantage is in making it easier to find things. It'd be easier to tell the difference between packages that could be installed, if we had some other package installed, from those that can't be installed because of architectural dependencies.

> >?

Sure: apt_0.5.4_freebsd-i386.deb vs. apt_0.5.4_i386.deb
          ------------                  ----

If you drop the Architecture concept, you have to find another way to name the files. I'm saying why put any information into the filename? You can get at it with dpkg, anyway, and it's in the Packages files, so why does it need to be in the name? I'm thinking of squid caches. Squid doesn't put any information into filenames, it just load balances across directories, and locates files with its database. I don't see why that wouldn't be a good solution here.

> >.

It'd be unnecessary.

If you have the dependencies in the Packages file, you can trivially determine which packages are available to your system in dselect or apt. All you have to do is find dependencies that can't be satisfied without replacing your kernel. And if you install a kernel that supports another sort of binary, you can automatically show more available packages. My suggestion about using new fields rather than Depends was aimed at making that faster and easier to determine, but with a real database, rather than text files, it'd probably be unnecessary. Indexes would solve that nicely. Having distributions for particular architectures wouldn't be required. All you need is stable, testing and sid. Makes dinstall's job easy. For CD sets, you pick a kernel, and pull a list of packages that are compatible with it. The same algorithm dselect or apt would be using.
30 May 2012 23:24 [Source: ICIS news]
HOUSTON (ICIS)--US methyl isobutyl ketone (MIBK) contract prices for May settled at rollovers from April on weakening feedstock, sources confirmed on Wednesday.
The rollover holds May MIBK at $1.29-1.35/lb ($2,844-2,976/tonne, €2,275-2,381/tonne), as assessed by ICIS.
Although no June reductions have surfaced, sources said values appeared to be slipping, suggesting that reductions of about 5 cents/lb could be broadly implemented in the near term.
The rollover came despite ongoing supply constraints from at least one producer that has maintained strict sales controls for several weeks. Upstream weakness and soft demand, however, counterbalanced those supply limits to keep May contract values static.
The US MIBK market remained fractured, with some pricing heard higher and lower than the previously noted range, which represents most of the market.
US propylene contracts are likely headed for a significant drop in June, market sources said, citing weak demand and lower spot prices in recent weeks.
So far, one propylene producer has nominated a decrease of 8 cents/lb for June.
Isopropanol (IPA) contract pricing slipped by an average of 8 cents/lb for May on soft market conditions and the threat of cheaper imports.
US MIBK suppliers include Dow Chemical, Eastman, Sasol, Haltermann and Celanese.
Importing QtStudio3D fails.
Hello everyone, when I try to import qtstudio3d, I get the following error:
Running Windows Runtime device detection. No winrtrunner.exe found. QML module does not contain information about components contained in plugins. Module path: C:/Qt5.10/5.10.0/mingw53_32/qml/QtStudio3D See "Using QML Modules with Plugins" in the documentation. Automatic type dump of QML module failed. Errors: "C:\Qt5.10\5.10.0\mingw53_325.10\5.10.0\mingw53_32\qml\QtStudio3D\declarative_qtstudio3d.dll: Unknown error 0x000000c1.
The qtstudio3d qml directory has the following content:
(It seems to have no plugins.qmltypes file.)
Why? Could anyone tell me the reason? Thanks in advance!
Hi,
That is not working with MinGW and under 32 bits; try to compile Qt 3D Studio:
git clone --recursive
and compile it with msvc2015 64 kit
I can, but the best way is to compile it. Don't forget to add a make step with the "install" argument, because during building, Studio copies some files inside Qt at the right place to be able to call the 3D Studio module
@small_bird If you post your e-mail address you'll soon receive a metric ton of spam mails. You can use the forum's chat function to talk to @filipdns without the spammers watching.
@filipdns Hi, when I try to compile qt3dstudio, I come across the following problem:
:-1: error: msvc-version.conf loaded but QMAKE_MSC_VER isn't set
Could you help me? Thanks in advance!
don't forget to delete the qt3dstudio.pro.user file before compile
@small_bird said in Importing QtStudio3D fails.:
zhouyang_atwork@foxmail.com
did you receive my mail with the dropbox link?
Hello, are you using what I sent to you? If yes, it cannot work with MSVC 2017; it has been compiled with 2015
@filipdns Yes, I just tried it with msvc2015, but it still does not work. My configuration is as follows:
@filipdns
The error output is as follows:
Automatic type dump of QML module failed. Errors: "C:\Qt\Qt5.9.2\5.9.2\msvc2015_64\Qt5.9.2\5.9.2\msvc2015_64\qml\QtStudio3D\declarative_qtstudio3d.dll: The specified module could not be found.
try that:
-uninstall VS, (all of them, 2017,2015,... etc and all redistribuable package)
-uninstall all QT, delete the QT folder, and in C:\Users\<username>\AppData\Roaming delete all qt folders
-install git:
-install perl:
-install cmake:
-install python: 2.7.14 :
-install
-install FBXSDK:
-re-install QT with qt tool, only keep msvc2015 and qt studio
-in QT root folder, right click and select git bash
-paste in git terminal :
git clone --recursive
-in this folder, go to src/3rd party and paste the boost folder that I will send to you (I did some modification to be able to compile 3d studio).
-double click on qt3dstudio.pro from created folder by git in QT
-in the project settings, add a make step and in the arguments, type "install"
-in the environment configuration, add a new variable "FBXSDK" whose value is the root of the FBXSDK you installed before, like this:
C://Autodesk/FBX SDK/2016.1.2
-after that, compile Qt Studio and see what happens
I will send you the boost folder as soon as I am at home
You will probably have many warnings like me (more than 950), but if it goes until the end with no fatal error, you will see in the last lines of the compilation output copies of dlls into qml, something like that. Then you don't need the 3D Studio sources any more (maybe only copy/paste the "example" folder to the QT/Tools/Qt3Dstudio folder; it can be interesting to have them, and the Qt tool doesn't provide them at Studio installation)
@filipdns Hmm, thanks a lot, but the qml directory of qtstudio3d seems to have no plugins.qmltypes, unlike the others; how could the compiler find the corresponding component?
the drive link has been sent to you
in the qt3d studio source you have no qml folder; it will be built during compilation
here is the link to the boost folder to extract into src/3rdparty of the qt3dstudio source
sorry, I don't see what you mean about qml directory
@filipdns When we import some plugin in qml, the compiler needs to search the qml directory.
The other directories' content is as follows:
However, Qt3dStudio's directory content is as follows:
You see that? No plugins.qmltypes file? Why?
@filipdns However, I got the following error:
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\include\type_traits:1501: error C2893: failed to specialize function template "unknown-type std::invoke(_Callable &&,_Types &&...)" (compiling source file Client\Code\Core\Utility\TestCmdUtils.cpp)
Did you do a clean installation as I told you?
Then I don't know what is going on, I'm sorry...
I can only tell you that my qtstudio3d in qml has no plugins.qmltypes file either
The problem is not there
@filipdns Are you sure you can import qt3dstudio in qml? It should have the search path. Could you show me the shortcut?
@small_bird said in Importing QtStudio3D fails.:
import qt3dstudio in qml
import qt3dstudio in qml? No, you don't import anything; the building process does it for you
@filipdns
The document shows that we can import studio3d to manipulate the elements in the qml project.
@filipdns You have mentioned it before:
oh ok, yes but before importing it had to be installed...
Until you build the source correctly, it will not work
@filipdns Hmm, actually you have sent me some lib files. They should be loaded correctly if I put them into the right directory. But it still shows the following error:
start from zero as I described to you, I'm sure it will work after that
Hello, using the implementation here for x-axis rangebreaks… If you use, say, 10+ years of data, the plots get ultra laggy, to the point where they are virtually unusable. I really like the implementation below because you can get rid of candlestick gaps, but at this point, based on how laggy the chart becomes, I think the cost outweighs the benefit. Does anyone have an idea how I might implement the below without sacrificing the performance of the rendering and scrolling over the chart itself?
Thank you very much.
import plotly.express as px
import pandas as pd

df = pd.read_csv('
fig = px.scatter(df, x='Date', y='AAPL.High', range_x=['2015-12-01', '2016-01-15'],
                 title="Hide Weekend and Holiday Gaps with rangebreaks")
fig.update_xaxes(
    rangebreaks=[
        dict(bounds=["sat", "mon"]),  # hide weekends
        dict(values=["2015-12-25", "2016-01-01"])  # hide Christmas and New Year's
    ]
)
fig.show()
I'm supposed to write a program in Java. I'm given a text file containing
I am Hakan. My email address is hakan@cs.uh.edu, and what is your name? Hi my name is Tarikul and my favorite email address is tarikul2000@cs.uh.edu
and I'm supposed to take tarikul2000@uh.edu and hakan@cs.uh.edu and organize them according to whether or not they have a subdomain (the "cs" part; there are other possibilities) and store them into class arrays of type Email and UniversityEmail. I'm then to take a user input of 0-7 and, depending on the input, print out a different set of info. I've created the classes, but I don't know how I can take the info and then sort it. The possible subdomains are 1. art 2. chee 3. chem 4. coe 5. cs 6. egr 7. polsci
This is what I have so far; if anyone can help me move forward I appreciate it.
import java.io.*;
import java.util.HashSet;
import java.util.Scanner;
import java.io.PrintWriter;
import java.io.FileOutputStream;
import java.util.regex.Pattern;
import java.util.regex.Matcher;

public class Try {
    public static void main(String[] args) {
        Email[] storage; // Email is a class that was made to store the data
        storage = new Email[99];
        UniversityEmail[] save;
        save = new UniversityEmail[99];
        HashSet<String> hs = new HashSet<>();
        Scanner input = null;
        PrintWriter output = null;
        try {
            // Scanner must wrap a File; new Scanner("inputemails.txt") would scan the literal string
            input = new Scanner(new File("inputemails.txt"));
            output = new PrintWriter("outputemails.txt");
        } catch (FileNotFoundException e) {
            System.out.print("File not found");
            System.exit(0);
        }
        while (input.hasNextLine()) {
            // read each line, otherwise the loop never advances
            String line = input.nextLine();
            fillEmailsHashSet(line, hs);
        }
        input.close();
        output.close();
    }

    public static void fillEmailsHashSet(String line, HashSet<String> container) {
        Pattern p = Pattern.compile("([\\w\\-]([\\.\\w])+[\\w]+@([\\w\\-]+\\.)+[A-Za-z]{2,4})");
        Matcher m = p.matcher(line);
        while (m.find()) {
            container.add(m.group(1));
        }
    }
}
The Email and UniversityEmail classes are in separate code chunks I can post if it helps.
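Since the tricky parts are the regex extraction and the subdomain split, here is a hedged sanity check of that logic in Python (illustration only; the classifier itself would stay in Java, and the pattern is the same one used in the Java code above):

```python
import re

# Same pattern as the Java Pattern.compile call above
pattern = re.compile(r"([\w\-]([\.\w])+[\w]+@([\w\-]+\.)+[A-Za-z]{2,4})")

text = ("I am Hakan. My email address is hakan@cs.uh.edu, and what is your "
        "name? Hi my name is Tarikul and my favorite email address is "
        "tarikul2000@cs.uh.edu")

emails = [m.group(1) for m in pattern.finditer(text)]
print(emails)  # ['hakan@cs.uh.edu', 'tarikul2000@cs.uh.edu']

# An address has a subdomain when its domain has more than two labels,
# e.g. "cs.uh.edu" -> subdomain "cs", while "uh.edu" has none.
for e in emails:
    labels = e.split("@", 1)[1].split(".")
    print(e, "->", labels[0] if len(labels) > 2 else "no subdomain")
```

In Java, the equivalent split on the matched string (String.split on "@" and then on "\\.") would decide whether an address goes into the Email or UniversityEmail array.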
NAME
     ksql_role — set role in ksql context

LIBRARY
     library “ksql”

SYNOPSIS
     #include <sys/types.h>
     #include <stdint.h>
     #include <ksql.h>

     void
     ksql_role(struct ksql *sql, size_t role);

DESCRIPTION
     The ksql_role() function sets the current role of sql.  The role is the
     index of a role defined in cfg->roles as passed to ksql_alloc(3) or
     ksql_alloc_child(3).  The role affects all subsequent ksql_exec(3) and
     ksql_stmt_alloc(3) calls.  The new role must be allowed by having a
     non-zero value in the roles array within the current role's struct
     ksqlrole object.  Otherwise, the situation is logged to stderr and the
     program is immediately terminated.

     In split-process mode, ksql_role() automatically sets KSQL_EXIT_ON_ERR
     on cfg->flags and cfg->err to NULL, restoring both if/when it returns.
     These guarantee that the function will never return without having
     properly set the new role.
Microsoft Public Sector Developer and Platform Evangelism Team Blog
A while back, I blogged about using WPF/E & Virtual Earth together. The post is here. I've also blogged about adding Virtual Earth in a Windows Forms application here (last paragraph). What about WPF? Given that the Virtual Earth v4 Map Control is delivered as a set of JavaScript libraries, the same approach we used in Windows Forms applies to WPF. However, WPF does not ship with a "native" control with the same functionality as the WebBrowser control which ships with the .NET Framework 2.0. The good news is the good folks in "WPF Land" thought about these types of dilemmas and created a really nice interoperability layer for WPF/Windows Forms. There is a class called WindowsFormsHost in the System.Windows.Forms.Integration namespace which allows you to host Windows Forms controls in a WPF application (See Supported Scenarios in Windows Presentation Foundation and Windows Forms Interoperation). One problem solved.
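For readers who haven't used the interop layer, the hosting markup looks roughly like this (a sketch; the class and element names other than WindowsFormsHost and WebBrowser are placeholders):

```xml
<Window x:Class="VEDemo.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:wfi="clr-namespace:System.Windows.Forms.Integration;assembly=WindowsFormsIntegration"
        xmlns:wf="clr-namespace:System.Windows.Forms;assembly=System.Windows.Forms">
  <Grid>
    <!-- The hosted control draws in its own HWND, so it always sits on top of WPF content -->
    <wfi:WindowsFormsHost>
      <wf:WebBrowser x:
    </wfi:WindowsFormsHost>
  </Grid>
</Window>
```

In the code-behind you would then point mapBrowser at the HTML page hosting the Virtual Earth map, e.g. with Navigate(); the page name is whatever your project uses.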
The next problem you will very quickly discover (or read about in the "Supported Scenarios" link) is that a "hosted Windows Forms control is drawn in a separate HWND, so it is always drawn on top of WPF elements." Oh oh, that means you can't overlay WPF elements over the map.
WPF/E to the rescue! Even though WPF & WPF/E have completely different runtimes, they share the same markup language. WPF/E markup is a subset of WPF markup. Therefore, you can target both technologies with one definition. WPF/E is targeted at augmenting existing web technologies (works great with ASP.NET AJAX as well as other AJAX, DHTML/JavaScript applications). The Virtual Earth v4 Map Control fits into this category (it's a JavaScript control which employs AJAX techniques). I showed the two working together here.
WPF/E is a perfect workaround to the WPF/Windows Forms interoperability limitation.
The picture below shows a WPF application defined using XAML, which uses the WindowsFormsHost control to host the .NET Framework 2.0 WebBrowser control, which in turn points to an html file which uses the Virtual Earth v4 Map Control (JavaScript). When you click the "Add Pushpin" WPF button and then hover over the pushpin, you will get a popup with WPF/E elements (vector graphic, text, and video) defined using XAML.
As you can see, this approach even works in 3D mode!
You can download the sample code here.
-Marc
2011/3/25 Thomas Schilling <nominolo at googlemail.com>:
> unsafePerformIO traverses the stack to perform blackholing. It could
> be that your code uses a deep stack and unsafePerformIO is repeatedly
> traversing it. Just a guess, though.

Sounds reasonable. Here is a variant of the program without intermediate lists.

import System.IO.Unsafe

main = run (10^5)

run 0 = return ()
run n = (unsafePerformIO . return) (run (n - 1)) >> return ()

I think it does not do much more than producing a large stack and (like the original program) is much faster if the unsafe-return combination or the final return (which probably prohibits tail-call optimization) is removed.

Sebastian

> _______________________________________________
> Glasgow-haskell-users mailing list
> Glasgow-haskell-users at haskell.org
Activities/Abacus
About Abacus
Abacus lets the learner explore different representations of numbers using different mechanical counting systems developed by the ancient Romans and Chinese. There are several different variants available for exploration: a suanpan, the traditional Chinese abacus with 2 beads on top and 5 beads below; a soroban, the traditional Japanese abacus with 1 bead on top and 4 beads below; the schety, the traditional Russian abacus, with 10 beads per column, with the exception of one column with just 4 beads used for counting in fourths; and the nepohualtzintzin, a Mayan abacus, 3 beads on top and 4 beads below (base 20). There is also a binary abacus, a hexadecimal abacus, and several abacuses that let you calculate with common fractions: 1/2, 1/3, 1/4, 1/5, 1/6, 1/8, 1/9, 1/10, and 1/12. And there is a customization toolbar that lets you design your own abacus. The Incan abacus (Yupana) is also available as a standalone program.
Where to get Abacus
Using Abacus
Clearing the abacus
Before you start an arithmetic operation, you need to "clear" the abacus. The upper beads should be positioned against the top of the frame and the lower beads should be positioned against the bottom of the frame. This is the default position for the abacus when you launch the activity.
- Note that some of the abacuses (e.g., the schety) do not have any upper beads. In such cases, all of the beads should start in the down position.
- Also note that the Clear Button on the main toolbar will also clear the abacus for you.
Reading the abacus
In each column, the bottom beads represent 1s and the top beads represent 5s. (The exception is the column in the schety with only 4 beads. These are 1/4 each.) So for each bead you raise up from the bottom in a column add 1 and for each bead you lower from the top in the same column, add 5.
The columns themselves represent decimal positions from right to left, e.g., 1s, 10s, 100s, 1000s, etc. (There are some exceptions: (1) the nepohualtzintzin uses base 20, e.g., 1s, 20s, 400s, 8000s, etc.; (2) on the schety, the beads to the right of the column with just four beads are 0.1s, 0.01s, 0.001s, and 0.0001s; the black beads on the Caacupé abacus are fractions; and the custom abacus lets you choose whatever (integer) base you want.)
The current value is always displayed on the frame. Experiment and you will quickly learn to write and read numbers.
Examples: In the gallery below, several simple examples are shown. In the gallery of images above, the number 54321 is shown on each of the different abaci.
Note: The display always assumes a fixed unit column, but you can override this choice.
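In code, the reading rule above amounts to a positional sum. Here is a plain Python illustration (the rod encoding is my own; this is not part of the Abacus activity itself):

```python
def abacus_value(rods, base=10, factor=5):
    """Value shown on an abacus.

    rods: list of (lower_raised, upper_lowered) pairs, leftmost rod first.
    base: positional base across rods (10 for most abacuses, 20 for the
          nepohualtzintzin, 2 for the binary abacus).
    factor: value of one upper bead relative to a lower bead (5 here).
    """
    value = 0
    for lower, upper in rods:
        value = value * base + lower + upper * factor
    return value

# 54321 on a soroban-style abacus, with the 1s column on the right:
print(abacus_value([(0, 1), (4, 0), (3, 0), (2, 0), (1, 0)]))  # 54321
```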
Addition
To add, simply move in more beads to represent the number you are adding. There are two rules to follow: (1) whenever you have a total of 5 units or more on the bottom of a column, cancel out the 5 by sliding the beads back down and add a five to to the top; and (2) whenever you have a total of 10 units or more in a column, cancel out the 10 and add one unit to the column immediately to the left. (With the nepohualtzintzin, you work with 20 rather than 10.)
Example: 4+3+5+19+24=55
Subtraction
Subtraction is the inverse of addition. Move out beads that correspond to the number you are subtracting. You can "borrow" from the column immediately to the left: subtracting one unit and adding 10 to the current column.
Example: 26–2–4–6–10=4
Multiplication
There are several strategies for doing multiplication on an abacus. In the method used in the example below, the multiplier is stored on the far left of the abacus and the multiplicand is offset to the left by the number of digits in the multiplier. The red indicator is used to help keep track of where we are in the process.
Division
Simple division (by a single-digit number) is the inverse of multiplication. In the example below, the dividend is put on the left (leaving one column vacant for the quotient) and the divisor on the right.
TODO: Add instructions for long division.
Fractions
The fraction abacus lets you add and subtract common fractions: 1/2, 1/3, 1/4, 1/5, 1/6, 1/8, 1/9, 1/10, and 1/12. The fractional value is determined by the number of black beads on a rod, e.g., to work with thirds, use the rod with three beads, to work with fifths, use the rod with five beads.
The rods with white beads are whole numbers in base 10; from left to right 100000, 10000, 1000, 100, 10, and 1.
The toolbars
From left to right:
- project-toolbar button
- see below
- edit-toolbar button
- see below
- abacus-toolbar button
- see below
- customization-toolbar button
- clear button
- clear the abacus
- stop button
- exit the activity
From left to right:
- copy
- copy current value to clipboard
- paste
- paste a value from the clipboard into the abacus
From left to right:
- decimal button
- decimal abacus
- soroban button
- Japanese abacus
- suanpan button
- Chinese abacus
- nepohualtzintzin button
- Mayan abacus
- hexadecimal button
- hexadecimal abacus
- binary button
- binary abacus
- schety button
- Russian abacus
- fraction button
- fraction abacus
- Caacupe button
- fraction abacus with +/–
- rod button
- Cuisenaire-like abacus
- custom button
- your custom abacus
From left to right:
- rods
- select the number of rods
- top beads
- select the number of beads on the top of the frame
- bottom
- select the number of beads on the bottom of the frame
- factor
- select the multiplication factor of top beads (e.g., on the Chinese abacus, each top bead counts as 5× the value of a bottom bead on the same rod)
- base
- select the base to determine the value of bottom beads across rods; this is 10 on most conventional abacuses, but 20 on the Mayan abacus, 16 on the hexadecimal abacus, and 2 on the binary abacus.
- create
- you must push this button to activate the selections you've made
Gallery of abaci
Learning with Abacus
- Some lesson plans for using Abacus are found here.
- Using beads or pebbles, you can make an abacus. What is the difference between the abacus on the computer and a physical abacus?
- It is possible to create a custom abacus. I often use the example of Sumerian mathematics: the Sumerians counted on the digital bones (phalanges) of their fingers, so the base of their counting system was 12. All of the 12s (and 60s) we have in our mathematics, e.g., 12 hours, 60 seconds, etc. have their roots in Sumerian math. But the Sumerians never invented an abacus. What would a Sumerian abacus look like?
Extending Abacus
- A fun project is to compare calculations using Abacus with the Calculate Activity. Which is faster? Which is more accurate? Which is better for estimating? Which is better for comparing?
- Abacus supports paste, so you can take numeric values from other programs and paste them into the abacus to see what their representations are; for example, I often paste numbers into the hexadecimal abacus as a quick way of converting decimal to hexidecimal.
- Abacus also supports copy, so you can take a sum calculated on an abacus and export it into SimpleGraph or some other data-visualization Activities.
- A fun collaborative mode might be to have a number randomly selected and each sharer work independently to post it on the abacus of their choice first. There could be a tally of beads awarded for each correct answer.
Modifying Abacus
Abacus is under GPL license. You are free to use it and learn with it. You are also encouraged to modify it to suit your needs or just for a further opportunity to learn.
- It might be good to have some of the above information in a Help palette, e.g., addition, subtraction, multiplication division.
Most changes can be confined to three modules: AbacusActivity.py, abacus.py, and abacus_window.py. The former define the Sugar and GNOME toolbars; the latter defines what code is executed by each type of abacus.
Note: since a recent refactoring, these instructions are deprecated
For instance, to add a menu item such as 'Reset' you would do the following in abacus.py:
- Add these lines to the menu items list:
menu_items = gtk.MenuItem(_("Reset"))
menu.append(menu_items)
menu_items.connect("activate", self._reset)
- The _reset() method is trivial:
def _reset(self, event, data=None):
    """ Reset """
    self.abacus.mode.reset_abacus()
Similarly, you can add another button to the Sugar toolbar in AbacusActivity.py:
- Add these lines to the toolbar block:
# Reset the beads on the abacus to the initial cleared position
self.reset_button = ToolButton("reset")
self.reset_button.set_tooltip(_('Reset'))
self.reset_button.props.sensitive = True
self.reset_button.connect('clicked', self._reset_button_cb)
toolbar_box.toolbar.insert(self.reset_button, -1)
self.reset_button.show()
- The _reset_button_cb() method is trivial:
def _reset_button_cb(self, event, data=None):
    """ Reset the beads on the abacus to the initial cleared position """
    self.abacus.mode.reset_abacus()
- You'll have to create an icon for the button (reset.svg) and put it into the icons subdirectory of the bundle.
This will complete the changes in abacus.py. The method reset_abacus() will have to be defined for each abacus in abacus_window.py. This can be done by creating that method in the AbacusGeneric class used by all the varieties of abacus. The method may have to be overridden in some abacus subclasses for customization reasons. For instance, reset_abacus() was defined in the AbacusGeneric class and then overridden in Schety.
If the changes involve modifying the graphics, then other methods may need to be modified as well. For instance, in order to introduce a reset button that can be clicked to reset the bead positions to the beginning, the following methods had to be modified – all in abacus_window.py:
- in the class Abacus, the method _button_press_cb(), to activate the reset button;
- in the class AbacusGeneric, the method create(), to create the graphics for the reset button;
- the methods hide() and show(), to make the button visible.
Reporting problems
If you discover a bug in the program or have a suggestion for an enhancement, please file a ticket in our bug-tracking system.
You can view the open tickets here. | http://wiki.sugarlabs.org/go/Activities/Abacus | CC-MAIN-2015-06 | refinedweb | 1,736 | 62.07 |
The following is my homework assignment. I have it more or less complete. I would like feedback from the community on what I can improve in the program. Basically, before I hand it in to my professor, I want a fresh pair of eyes to take a look at it. There are also three lines that I want to edit in the program, but I'm not sure how to do it. I wrote comments in capital letters next to those lines. I'm almost sure that my professor will overlook those lines. I want your feedback for my benefit. As I mentioned before, I'm new to C#. Please and thank you.
(Airline Reservations System) A small airline has just purchased a computer for its new automated reservations system. You have been asked to program the new system. You are to write an application to assign seats on each flight of the airline’s only plane (capacity: 10 seats).
Your application should display the following menu of alternatives— Please type 1 for "First Class" and Please type 2 for "Economy". If the person types 1, your application should assign a seat in the first class section (seats 1-5). If the person types 2, your application should assign a seat in the economy section (seats 6-10).
Use a single-dimensional array of simple type bool to represent the seating chart of the plane. Initialize all the elements of the array to false to indicate that all seats are empty. As each seat is assigned, set the corresponding elements of the array to true to indicate that the seat is no longer available.
Your application should never assign a seat that has already been assigned. When the first class section is full, your application should ask the person if it is acceptable to be placed in the first class (and vice versa). If yes, then make the appropriate seat assignment. If no, then print the message "Next flight leaves in 3 hours."
using System;

class AirlineResevationSystem
{
    static void Main()
    {
        int seatsFirst, seatsEconomy, reserve, i = 0, j = 6;
        bool[] seats = { false, false, false, false, false, false, false, false, false, false }; // seating chart
        Console.WriteLine("Welcome to Airline Reservation System."); // greet the user
        while (true)
        {
            Console.WriteLine("There are " + checkFirstClass(out seatsFirst, seats) + " first class seats and "
                + checkEconomy(out seatsEconomy, seats) + " economy seats."); // check available seats
            Console.WriteLine("Please enter 1 to reserve a first class seat or enter 2 to reserve an economy class seat or 0 to exit"); // prompt user for input
            reserve = Convert.ToInt32(Console.ReadLine()); // input from user
            if (reserve == 1)
            {
                if (i > 5 || i == 5)
                {
                    Console.WriteLine("There are no first class seats available. Would you like an economy class seat? Type 2 for yes and 0 to exit.");
                    reserve = Convert.ToInt32(Console.ReadLine());
                    // RIGHT HERE I WOULD LIKE THE PROGRAM TO GO BACK TO THE INITIAL IF STATEMENT AND CHECK THE CONDITIONS. ANY ADVICE?
                }
                else
                    reserveFirstSeat(ref seats, ref i); // reserve first class seat
            }
            else if (reserve == 2)
            {
                if (j < 5 || j > 10)
                {
                    Console.WriteLine("There are no economy seats available. Would you like a first class seat? Type 1 for yes and 0 to exit.");
                    reserve = Convert.ToInt32(Console.ReadLine());
                    // RIGHT HERE I WOULD LIKE THE PROGRAM TO GO BACK TO THE INITIAL IF STATEMENT AND CHECK THE CONDITIONS. ANY ADVICE?
                }
                else
                    reserveEconomySeat(ref seats, ref j); // reserve economy class seat
            }
            else if (reserve == 0)
            {
                Console.WriteLine("Next flight leaves in three hours.");
                // ALSO, I WANT TO ADD HOW MANY FIRST CLASS TICKETS WERE RESERVED IF ANY AND SAME THING FOR ECONOMY CLASS BUT I'M NOT SURE HOW TO KEEP TRACK OF IT. ANY ADVICE?
                break;
            }
            else
            {
                Console.WriteLine("Invalid entry.\nPlease try again");
            }
        }
    }

    public static int checkFirstClass(out int seatsFirst, bool[] seats) // method to check available First Class seats
    {
        seatsFirst = 0;
        for (int i = 0; i < 5; i++)
        {
            if (seats[i] == false)
                seatsFirst++;
        }
        return seatsFirst;
    }

    public static int checkEconomy(out int seatsEconomy, bool[] seats) // method to check available Economy seats
    {
        seatsEconomy = 0;
        for (int i = 5; i < 10; i++)
        {
            if (seats[i] == false)
                seatsEconomy++;
        }
        return seatsEconomy;
    }

    public static void reserveFirstSeat(ref bool[] seats, ref int i) // reserve selected first class seat
    {
        if (i < 5)
        {
            if (seats[i] == false)
            {
                seats[i] = true;
                Console.WriteLine("You have successfully reserved a first class seat");
            }
        }
        ++i;
    }

    public static void reserveEconomySeat(ref bool[] seats, ref int j) // reserve selected economy class seat
    {
        if (j > 5 || j == 5 || j < 10)
        {
            if (seats[j] == false)
            {
                seats[j] = true;
                Console.WriteLine("You have successfully reserved an economy class seat");
            }
        }
        ++j;
    }
}
Brython is a browser-based implementation of Python 3 and transpiler that has lofty goals for the browser. It doesn't seek to live side by side with JavaScript, but wants to supplant it as the scripting language of the web. In this article, I'll investigate how it works and stacks up to JavaScript through several small examples.
What Is a Transpiler?
A transpiler, or source-to-source compiler, translates source code from one language into another. Some of the more commonly known transpilers are CoffeeScript and emscripten, which both generate source code for JavaScript. emscripten generates its JavaScript from LLVM bitcode (typically compiled from C or C++). The physics simulation library, ammo.js, was converted from the C++ Bullet library with emscripten. Brython is also considered to be a transpiler.
You might use such a language for several reasons. You might know the source language better than the target language. On the other hand, the source language might allow you to be more expressive or write more terse code than the target language. Coming from a Java and Groovy background, I've found CoffeeScript to better fit how I like to write code for my pet projects, and it allows me to get a lot done with small amounts of code. The quality of the JavaScript code that CoffeeScript generates is code that I'd be proud to commit.
Getting Started with Brython
Brython has two types of distributions: a development build and a deployment build. The development build is an archive of the project's website. It's useful during the early stages because you can modify the included examples or bang out some code in the Brython console. The deployment build contains only brython.js, a number of python files providing core language features, and a set of JavaScript files that provide HTML-specific features to Brython.
Working with HTML
Brython has first-class support for HTML elements. After importing the html module, you have native access to all the HTML4 and HTML5 tags. We instantiate an element by calling the full name in all caps. You can include an optional innerText string and an attribute list. Below we have an anchor that has 'Python' as innerText and navigates to python.org. The second line in the snippet (doc <= link) is an alias for element.appendChild.
link = html.A('Python', href='')
doc <= link
The following is a simple script that changes the address an anchor tag links to. One of the fun things about using Brython is that you have a shortcut to document.getElementById, thanks to the doc object. You can query by id by calling doc["elemId"] or by tag name with doc[TAGNAME]. Below is an example instantiating a canvas and setting some properties.
canvas = html.CANVAS()
canvas.height = 480
canvas.width = 640
ctx = canvas.getContext('2d')
doc['gameboard'] <= canvas
Working with Existing JS Objects
The number of available libraries in JavaScript expectedly far outstrips the number of Brython libraries. There's a class called JSObject that allows you to coerce an object from JavaScript space to Brython. Take, for instance, a JavaScript foo object; we would use it in Brython as follows:
<script type="text/javascript">
    var foo = new Foo();
</script>
<script type="text/python">
    foo = JSObject(foo)
    foo.bar()
</script>
You can even call arbitrary JavaScript code using the JSObject by embedding an eval call, as shown below. Evals are messy, so you probably should only use them in extreme cases.
a = JSObject(eval("String('123s')"))
Creating a Simple Canvas Game/Project
To assess how Brython would work in a real-world situation, I decided to port one of my smaller Canvas2D demos to Brython, a binary clock originally made using Amino.js. Check out my article for more information.
The subset of Python 3 is decently powerful, but I was missing a couple of things to port the app. The first item I had to replace was string padding. Python has a function called zfill that does left padding of string, but it is not supported in Brython. JavaScript doesn't have such a function at all, so I just ported the JavaScript version to Brython.
def lpad(s, length):
    while len(s) < length:
        s = "0" + s
    return s
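For reference, the helper behaves like Python's zfill on non-negative digit strings (the definition is repeated here so the snippet runs on its own):

```python
def lpad(s, length):
    # Left-pad a string with zeros until it reaches the given length.
    while len(s) < length:
        s = "0" + s
    return s

print(lpad("101", 8))      # 00000101
print("101".zfill(8))      # 00000101, the built-in equivalent in regular Python 3
```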
The next thing I had to deal with was the lack of base encoding. JavaScript allows you to encode to an arbitrary base in its toString method. Python has no such method, and you have to code it by hand. Luckily, I happened upon baseconv. With minimal changes (dropping the executable script stuff), I was able to get it working in the browser.
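I can't reproduce baseconv here, but the core of arbitrary-base encoding is just repeated division. A minimal sketch (my own helper, not baseconv's actual API):

```python
DIGITS = "0123456789abcdefghijklmnopqrstuvwxyz"

def to_base(n, base):
    # Convert a non-negative integer to a string in the given base (2-36).
    if n == 0:
        return "0"
    out = []
    while n > 0:
        n, r = divmod(n, base)
        out.append(DIGITS[r])
    return "".join(reversed(out))

print(to_base(123, 2))   # 1111011
print(to_base(255, 16))  # ff
```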
Porting the project was fairly straightforward. One of the only sticking points is that Python/Brython is very particular about how you reference properties of a dictionary, Python's version of a Map. Whereas JavaScript allows you to use either map notation (obj['prop']) or dot notation (obj.prop) to dereference a property, if you declare an object, you must use dot notation. If you created a dictionary, you must use map notation.
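The distinction mirrors plain Python itself, which you can verify outside the browser (a generic illustration, not Brython-specific code):

```python
class Obj:
    pass

o = Obj()
o.prop = 1          # object: dot notation works
d = {"prop": 1}     # dictionary: map (bracket) notation works

print(o.prop)       # 1
print(d["prop"])    # 1

try:
    d.prop          # dot notation on a dict fails
except AttributeError:
    print("dicts need bracket access")
```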
Partially due to the lack of curly braces, the Brython code ended up being approximately 30% smaller than the JavaScript code.
Conclusion
In this article, we explored Brython, a source-to-source compiler that allows you to write Python code in the browser. Brython's mission statement is that it aims to replace JavaScript as the scripting language of the web. In that fight, it has a long way to go. The allure of being able to reuse a subset of server code in the browser is definitely compelling.
I'm interested in using Brython with some hobby programming projects, but I couldn't recommend it for the workplace just yet. Brython has a core that's written in JavaScript with most modules in Python. As such, even for the small binary clock example, Brython had to load four files in addition to brython.js. And two of those files needed to be interpreted at runtime, adding a bit of overhead.
Luckily, the development distribution includes scripts to pre-compile source to JavaScript; however, this is something best left to the final stages of development. I found the generated JavaScript almost impossible to debug and not something that I would want to check into source control in its raw state. That being said, if you love Python and already use it as a web stack, Brython is definitely worth a look. | http://www.informit.com/articles/article.aspx?p=2111677&WT.mc_id=IT_NL_Content_2013_8_14 | CC-MAIN-2019-04 | refinedweb | 1,074 | 64.51 |
class aissp_configs
{
#include "LV\config_aissp.hpp"
};
"if(this==(leader(group this)))then{nul = [this] execVM 'leaderScript.sqf'};"
# Gambit :
I'll keep helping you troubleshoot as much as I can with this pack if you want. I also do a lot of scripting, so I am not totally lost with this stuff. I will grab the latest update and give it a shot!
I was also curious why you don't use the built-in functions for spawning groups and setting patrols. It would probably clean up your scripts a bunch and save you the hassle with arrays.
_grp = [_centerPos, WEST, _menAmount] call BIS_fnc_spawnGroup;
war = [this] execVM "ambientCombat.sqf";
terminate war;
nul = [this, 2, 250, false, true, false, 0, 0.02] execVM "militarize.sqf";
# dirtyhaz :
Is there any way I can call the following code (see below) via a trigger?
nul = [this, 2, 250, false, true, false, 0, 0.02] execVM "militarize.sqf";
Haz
nul = [mygamelogic01, 2, 250, false, true, false, 0, 0.02] execVM "militarize.sqf"; | http://www.armaholic.com/forums.php?m=posts&q=21499 | CC-MAIN-2017-26 | refinedweb | 164 | 69.89 |
Now that you've learned the basic concepts of Fuse, it's time to put things into practice and build an app. In this tutorial, you'll learn how to develop an app using the Fuse framework. Specifically, you're going to learn the following:
- How to code using UX Markup.
- How to use the Observable, Timer, and Geolocation APIs.
- How to preview an app using desktop preview and custom preview.
If you need a refresher on Fuse, check out my previous post in this series: Introducing Fuse for Cross-Platform App Development.
Prerequisites
To start working with Fuse, go to the downloads page and sign up for an account. You can also log in to an existing account if you have one.
Fuse is available for both Windows and macOS. Download and install the correct installer for your platform. On the downloads page, they also point out the Fuse plugins available for various text editors. Install the one for your text editor. The Fuse plugins include code completion, goto definition, and viewing of logs generated from the app, all of which makes developing apps more convenient.
We'll also cover how to preview the app using custom preview. This requires Android Studio or Xcode to be installed on your computer.
A basic understanding of web technologies such as HTML, CSS, and JavaScript is helpful but not required.
What You'll Be Creating
You'll be creating a stopwatch app which also measures the distance covered. The distance is measured using geolocation. The user can also create laps, and the individual distance and time for each lap will be displayed on the screen.
Here's what the app will look like:
You can view the complete source code in the tutorial GitHub repo.
Creating a New Fuse Project
Once you have installed Fuse Studio, you should now be able to create a new Fuse project. Just open Fuse Studio and click on the New Fuse Project button. Enter the name of the project, and click Create:
This will create a new folder in the selected directory. Open that folder and open the MainView.ux file. By default, it will only have the <App> markup. Update it to include a <Text>, and then save the file:
<App>
    <Text FontSize="25">Hello World!</Text>
</App>
The preview should now be updated with the text you specified:
That's the main development workflow in Fuse. Just save the changes to any of the files in the project directory, and they will automatically get reflected in the desktop preview.
You can also see the logs in the bottom panel. You can trigger your own by using console.log(), like in the browser. The only difference is that you have to JSON.stringify() objects in order to see their value, since the console.log() implementation in Fuse can only output strings.
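For example, logging an object directly gives you its default string form, so stringify it first (plain JavaScript shown for illustration; the object here is made up):

```javascript
// Objects must be serialized before logging, since Fuse's console.log only outputs strings.
const state = { running: true, laps: [12.3, 11.8] };

console.log(String(state));         // [object Object], which is not useful
console.log(JSON.stringify(state)); // {"running":true,"laps":[12.3,11.8]}
```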
UX Markup
Now we're ready to build the app. Open the MainView.ux file and remove the <Text> element from earlier. That way, we can start with a blank slate:
<App>
</App>
Including Fonts
Just like in an HTML document, the standard is to include the assets—things like fonts, stylesheets, and scripts—before the actual markup of the page. So add the following inside the <App> element:
<Font File="assets/fonts/roboto/Roboto-Thin.ttf" ux:Global="Thin" />
This imports the font specified in the File attribute and gives it the name Thin. Note that this doesn't make it the default font for the whole page. If you want to use this font, you have to use its name (Thin) on the specific text you want to apply it to.
You can download the font from the tutorial GitHub repo. After that, create an assets/fonts/roboto folder inside the root project directory and put the .ttf file in it.
If you want to use another font, you can download it from dafont.com. That's where I downloaded the font for this app.
Next, we want to use icons inside the app. Fuse doesn't really have built-in elements and icon sets which allow you to do that. What it offers is a way to include existing icon fonts in your app. Since icon fonts are essentially fonts, we can use the same method for including fonts:
<Font File="assets/fonts/icons/fa-solid-900.ttf" ux:Global="Icons" /> <!-- the global name "Icons" is a reconstructed example; use whatever name you reference the icon font by -->
You can download the icon font from the GitHub repo or download it directly from fontawesome.com. Note that not all icons on fontawesome are free, so it's best to check the actual icon page before using it. If you see a "pro" label next to the icon, then you can't simply use it in your project without paying.
Including JavaScript
Next, we need to include the JavaScript file for this page. We can do that using the <JavaScript> element:
<JavaScript File="scripts/MainView.js"/>
Don't forget to create the scripts/MainView.js file at the root of the project directory.
Creating New Components
To maximize code reuse, Fuse allows us to create custom components from existing ones. In the code below, we're using a <Panel> to create a custom button. Think of it like a div which acts as a container for other elements. In this case, we're using it as a reusable component for creating a button.
Fuse comes with many elements. There are elements for laying out content such as the <Panel>, elements for showing user controls, pages and navigation, scripting and data, and primitives for building the UI. Each one has its own set of properties, allowing you to modify the data, presentation, and behavior.
To create a reusable component, add a ux:Class property to a presentation element that you'd like to use as a base. In this case, we're using a <Panel> as the base. You can then add some default styling. This is similar to how styling is done in CSS.
Margin adds space outside of the container. Here we've only specified a single value, so this margin is applied on all sides of the panel.
Color adds a background color to the element:
<Panel ux: </Panel>
Inside the
<Panel>, we want to show the button text. We want to make this into a reusable component, so we need a way to pass in properties for when we use this component later on. This allows us to achieve different results by only changing the properties.
Inside the
<Panel>, use the data type of the value you want to pass in as the name of the element, and then add the name of the property using
ux:Property. You can then show the value supplied to the property by using
{ReadProperty PropertyName}, where
PropertyName is the value you supplied to
ux:Property. This will allow you to supply a
Text property whenever you're using the
<ToggleBtn> component.
<string ux: <Text Value="{ReadProperty Text}" Color="#fff" FontSize="18" Alignment="Center" Margin="20,15" />
Next, we want to offer the user some sort of feedback while the button is being pressed. We can do that via triggers and animators. Triggers are basically the event listeners—in this case,
<WhilePressed>. And animators are the animations or effects you want to perform while the trigger is active. The code below will make the button
10% bigger than its original size and change its color.
Duration and
DurationBack allow you to specify how long it takes for the animation to reach its peak and reach its end.
<WhilePressed> <Scale Factor="1.1" /> <Change this. </WhilePressed>
Next, we create the
<IconBtn> component. As the name suggests, this is a button which only shows an icon as its content. This works the same way as the previous component, though there are a few new things we've done here.
First is the
ux:Name property. This allows us to give a name to a specific element so we can refer to it later. In this case, we're using it to change its
Color property while the button is being pressed.
We've also used a conditional element called
<WhileTrue>. This allows us to disable the
<WhilePressed> trigger when the value for
is_running is a falsy one. We'll supply the value for this variable once we get to the JavaScript part. For now, know that this variable indicates whether the timer is currently running or not.
<Panel ux: <string ux: <Text Font="FontAwesome" Color="#333" ux:{ReadProperty Text}</Text> <WhileTrue Value="{is_running}"> <WhilePressed> <Change LapText. <!-- change text color --> <Rotate Degrees="90" Duration="0.02"/> <!-- rotate the button by 90 degrees --> </WhilePressed> </WhileTrue> </Panel>
Main Content
We can now proceed with the main content. First, we wrap everything in a
<StackPanel>. As the name suggests, this allows us to "stack" its children either vertically or horizontally. By default, it uses vertical orientation so we don't need to explicitly specify it:
<StackPanel Margin="0,25,0,0" Padding="20"> </StackPanel>
In the code above, we used four values for the
Margin. Unlike CSS, the value distribution is left, top, right, bottom. If only two values are specified, it's left-right, top-bottom. You can use the selection tool in Fuse Studio to visualize the margins applied.
Next, we add a background image for the page. This accepts the file path to the background image you want to use. A
StretchMode of
Fill makes the background stretch itself to fill the entire screen:
<ImageFill File="assets/images/seigaiha.png" StretchMode="Fill" />
You can download the background image I've used from the tutorial GitHub repo. Or you can find similar patterns on the Toptal website.
Next, show the name of the app. Below it is the time-elapsed text field. This text needs to be updated frequently, so we need to turn it into a variable which can be updated via JavaScript. To output some text initialized in this page's JavaScript file, you need to wrap the variable name in curly braces. Later on, you'll see how the value for this variable is supplied from the JavaScript file:
<Text Value="HIIT Stopwatch" Color="#333" FontSize="18" Alignment="Center" Margin="0,0,0,10" /> <Text FontSize="65" Font="Thin" TextAlignment="Center" Margin="0,0,0,20">{time_elapsed}</Text>
Next, we use the
<IconBtn> component that we created earlier. Unlike in a web environment, where you reference an icon by its ID or class name, in Fuse you have to use the Unicode value assigned to the icon you want to use, with
&#x as a prefix. When this button is pressed (its
Clicked trigger fires), the
addLap() function declared in the JavaScript file is executed:
<IconBtn Text="" Clicked="{addLap}" />
In Font Awesome, you can find the Unicode value of each icon on its own page.
Right below the button for adding a new lap is some text which indicates that the button above is for adding new laps:
<Text Value="Lap" Color="#333" FontSize="15" Alignment="Center" Margin="0,5,0,20" />
Next, show the button for starting and stopping the timer. This also executes a function which we will declare later:
<ToggleBtn Text="{toggle_btn_text}" Clicked="{toggle}" />
Next, we need to output the laps added by the user. This includes the lap number, distance covered, and time spent. The
<Each> element allows us to iterate through a collection of objects and display the individual properties for each object:
<StackPanel Margin="20,40"> <Each Items="{laps}"> <DockPanel Margin="0,0,0,15"> <Text Alignment="Left" FontSize="18" Color="#333" Value="{title}" /> <Text Alignment="Center" FontSize="18" Color="#333" Value="{distance}" /> <Text Alignment="Right" FontSize="18" Color="#333" Value="{time}" /> </DockPanel> </Each> </StackPanel>
In the code above, we're using the
<DockPanel> to wrap the contents for each item. This type of panel allows us to "dock" its children on different sides (top, left, right, and bottom) of the available space. By default, this positions its children directly on top of each other. To evenly space them out, you need to add the
Alignment property.
JavaScript Code
Now we're ready to add the JavaScript code. In Fuse, JavaScript is mainly used for the business logic and working with the device's native functionality. Effects, transitions, and animations for interacting with the UI are already handled by the UX Markup.
Start by importing all the APIs that we need. This includes
Observable, which is mainly used for assigning variables in the UI. These variables can then be updated using JavaScript.
Timer is the equivalent of the
setTimeout and
setInterval functions in the web version of JavaScript.
GeoLocation allows us to get the user's current location:
var Observable = require("FuseJS/Observable"); var Timer = require("FuseJS/Timer"); var GeoLocation = require("FuseJS/GeoLocation");
Next, we initialize all the observable values that we'll be using. These are the variables that you have seen in the UX markup earlier. The values for these variables are updated throughout the lifetime of the app, so we make them an observable variable. This effectively allows us to update the contents of the UI whenever any of these values change:
var time_elapsed = Observable(); // the timer text var toggle_btn_text = Observable(); // the text for the button for starting and stopping the timer var is_running = Observable(); // whether the timer is currently running or not var laps = Observable(); // the laps added by the user
After that, we can now set the initial values for the toggle button and timer text:
toggle_btn_text.value = 'Start'; // toggle button default text time_elapsed.value = "00:00:00"; // timer default text
That's how you change the value of an observable variable. Since these are not inside any function, this should update the UI immediately when the app is launched.
Set the initial values for the timer, lap time, and location for each lap:
var time = 0; // timer var lap_time = 0; // time for each lap var locations = []; // location of the user for each lap
The
toggle() function is used for starting and stopping the timer. When the timer is currently stopped and the user taps on the toggle button, that's the only time we reset the values for the timer and laps. This is because we want the user to see these values even after they stopped the timer.
After that, get the user's current location and push it on the
locations array. This allows us to compare it to the next location later, once the user adds a lap. Then, create a timer which executes every 10 milliseconds. We increment both the overall
time and the
lap_time for every execution. Then update the UI with the formatted value using the
formatTimer() function:
function toggle(){ if(toggle_btn_text.value == 'Start'){ // the timer is currently stopped (alternatively, use is_running) laps.clear(); // observable values has a clear() method for resetting its value time_elapsed.value = formatTimer(time); is_running.value = true; locations.push(GeoLocation.location); // get initial location timer_id = Timer.create(function() { time += 1; // incremented every 10 milliseconds lap_time += 1; // current lap time time_elapsed.value = formatTimer(time); // update the UI with the formatted time elapsed string }, 10, true); }else{ // next: add code for when the user stops the timer } toggle_btn_text.value = (toggle_btn_text.value == 'Start') ? 'Stop' : 'Start'; }
When the user stops the timer, we delete it using the
delete() method in the timer. This requires the
timer_id that was returned when the timer was created:
Timer.delete(timer_id); // delete the running timer // reset the rest of the values time = 0; lap_time = 0; is_running.value = false;
Next is the function for formatting the timer. This works by converting the milliseconds into seconds and into minutes. We already know that this function is executed every 10 milliseconds. And the
time is incremented by
1 every time it executes. So to get the milliseconds, we simply multiply the
time by
10. From there, we just calculate the seconds and minutes based on the equivalent value for each unit of measurement:
function formatTimer(time) { function pad(d) { return (d < 10) ? '0' + d.toString() : d.toString(); } var millis = time * 10; var seconds = millis / 1000; var mins = Math.floor(seconds / 60); var secs = Math.floor(seconds) % 60; var hundredths = Math.floor((millis % 1000) / 10); return pad(mins) + ":" + pad(secs) + ":" + pad(hundredths); }
Every time the user taps on the refresh button, the
addLap() function is executed. This adds a new entry on top of the
laps observable:
function addLap() { if(time > 0){ // only execute when the timer is running lap_time_value = formatTimer(lap_time); // format the current lap time lap_time = 0; // reset the lap time var start_loc = locations[laps.length]; // get the previous location var end_loc = GeoLocation.location; // get the current location locations.push(end_loc); // add the current location var distance = getDistanceFromLatLonInMeters(start_loc.latitude, start_loc.longitude, end_loc.latitude, end_loc.longitude); // add the new item on top laps.insertAt(0, { title: ("Lap " + (laps.length + 1)), time: lap_time_value, distance: distance.toString() + " m." }); } }
Here's the function for getting the distance covered in meters. This uses the Haversine formula:
function getDistanceFromLatLonInMeters(lat1, lon1, lat2, lon2) { function deg2rad(deg) { return deg * (Math.PI/180) } var R = 6371; // radius of the earth in km var dLat = deg2rad(lat2 - lat1); var dLon = deg2rad(lon2 - lon1); var a = Math.sin(dLat/2) * Math.sin(dLat/2) + Math.cos(deg2rad(lat1)) * Math.cos(deg2rad(lat2)) * Math.sin(dLon/2) * Math.sin(dLon/2); var c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a)); var d = R * c * 1000; // Distance in m return d; }
Don't forget to export all the observable values:
module.exports = { toggle: toggle, toggle_btn_text: toggle_btn_text, is_running: is_running, time_elapsed: time_elapsed, laps: laps, addLap: addLap }
Geolocation Package
To keep things lightweight, Fuse doesn't really include all the packages that it supports by default. For things like geolocation and local notifications, you need to tell Fuse to include them when building the app. Open StopWatch.unoproj at the root of your project directory and include
Fuse.GeoLocation under the
Packages array:
"Packages": [ "Fuse", "FuseJS", "Fuse.GeoLocation" // add this ],
This should instruct Fuse to include the Geolocation package whenever building the app for custom preview or for generating an installer.
Setting Up for Custom Preview
Before you can run the app on your iOS device, you need to add a bundle identifier to the app first. Open the StopWatch.unoproj file and add the following under
iOS. This will be the unique identification for the app when it's submitted to the app store:
"Packages": [ // ... ], "iOS": { "BundleIdentifier": "com.yourname.stopwatch", "PreviewBundleIdentifier": "com.yourname.stopwatch.preview" }
Next, on Xcode, log in with your Apple developer account. If you don't already have one, you can go to the Apple developer website and create one. It's actually free to develop and test apps on your iOS device. However, there are some limitations if you're not part of the developer program.
Once your account is created, go to Xcode preferences and add your Apple account. Then click on Manage Certificates and add a new certificate for iOS development. This certificate is used to ensure that the app is from a known source.
Once that's done, you should now be able to run the app on your device. Click on Preview > Preview on iOS in Fuse Studio and wait for it to launch Xcode. Once Xcode is open, select your device and click the play button. This will build the app and install it on your device. If there's a build error, it's most likely that the preview bundle identifier is not unique:
Changing the Bundle Identifier to something unique should solve the issue. Once the error under the signing section disappears, click on the play button again to rebuild the app. This should install the app on your device.
However, you won't be able to open the app until you approve it. You can do that on your iOS device by going to Settings > General > Device Management and selecting the email associated with your Apple Developer account. Approve it, and that should unlock the app.
For Android, you should be able to preview the app without any additional steps.
Conclusion
That's it! In this tutorial, you've learned the basics of creating an app using the Fuse framework. Specifically, you've created a stopwatch app. By creating this app, you've learned how to work with Fuse's UX Markup and a few of Fuse's JavaScript APIs. You also learned how to use Fuse Studio to preview the app on your computer and your phone while developing it.
How to iterate through vector in C++
In this section, we will see how we can iterate through the elements of a vector. There are three ways to iterate through vector elements, and in this tutorial we will learn all three methods.
The vector in C++ is a dynamic array that can automatically grow and shrink as elements are added or removed. Like an array, a vector stores its elements at contiguous memory locations. This property gives us easy random access to the elements using an index, and it also lets us iterate over the elements to perform any kind of operation. This gives the vector a great advantage over a plain array.
Iterate through C++ vectors using range based for loop
The range-based for loop was introduced in C++11, and it is widely used because it makes the code more readable. We will understand it using an example in which we traverse a vector and output its elements in sequence.
#include <bits/stdc++.h> using namespace std; // main function int main() { // declaration of vector vector<int> v{10, 9, 30, 40}; /* traverse through the range based for loop and display its elements */ for (auto& a : v) { cout << a << " "; } return 0; }
Output:
10 9 30 40
In the above code, we have used the auto keyword to iterate over the vector elements. The compiler automatically deduces the type of the elements in the vector, and each element is then available inside the loop body for whatever operation we want to perform on it.
Iterate through C++ vectors using indexing
In this method, there is a prerequisite: we need to know the length of the vector, which we get from its size() function. This is the most common method used to iterate over a vector. We will understand it better using an example.
#include <bits/stdc++.h> using namespace std; // main function int main() { // declaration of vector vector<int> v{10, 35, 20, 13, 27}; // traverse through the vector using indexing for(int i = 0; i < v.size(); i++) { cout<<v[i]<<" "; } return 0; }
Output:
10 35 20 13 27
Iterate through C++ vectors using iterators
In C++, the vector class provides two member functions that give us iterators into the vector: begin() and end(). The begin() function returns an iterator pointing to the first element of the vector, and the end() function returns an iterator pointing one past the last element.
Using these we can iterate over the vector and display each value. We will understand this using an example.
#include<bits/stdc++.h> using namespace std; //main function int main() { // declaring a vector vector<int> v{11, 23, 26, 30, 24}; // declaring iterator of vector vector<int>::iterator it; // iterate over vector using iteration for(it = v.begin(); it != v.end(); it++) { cout<<*it<<" "; } return 0; }
Output:
11 23 26 30 24 | https://www.codespeedy.com/how-to-iterate-through-vector-in-cpp/ | CC-MAIN-2021-43 | refinedweb | 480 | 59.84 |
#include <reporter.h>
List of all members.
Definition at line 134 of file reporter.h.
Definition at line 137 of file reporter.h.
Return true if the given string looks like a valid report.
Definition at line 617 of file reporter.cc.
References report, and tags.
Parse a received report from a child process and return a job_path_report.
Definition at line 540 of file reporter.cc.
References job_path_report::assign(), INTERNAL_ERROR, report, tags, and TRY_nomem.
Generate and submit a report to the parent process on child's std::cout.
Definition at line 506 of file reporter.cc.
References report, timer::start_value(), timer::stop_value(), and tags.
Write a report line for output from rsync to parent on child's std::cerr.
Definition at line 497 of file reporter.cc.
References rsync, and tags.
Write a report line for output from rsync to parent on child's std::cout.
Definition at line 488 of file reporter.cc.
[static]
Initial value:
{
"[RSYNC]: ",
0
}
Definition at line 480 of file reporter.cc.
Referenced by is_report(), parse(), write_report(), write_rsync_err(), and write_rsync_out(). | http://rvm.sourceforge.net/doxygen/1.0/html/classreportio.html | CC-MAIN-2017-39 | refinedweb | 186 | 62.95 |
The purpose of this lab is to:
Create a program called sketchy.py that draws the picture you designed on your prelab using the picture module. Your program should use at least one function (Make a flower; make rays on the sun; make gerbil -- anything you might want to have more than one copy of in your picture); you might use x and y coordinates of the location as parameters of the function. If you are going to draw in the function you should also make the canvas one argument of the function.
Here are a few of the things you can do with the picture module:
To adjust the pen width, use the setPenWidth function. To position and draw with the pen, use the setPosition, setDirection, rotate and drawForward functions. To draw simple shapes, you can use functions like drawCircle, drawCircleFill, drawRect, drawRectFill, etc. Use setFillColor to change the fill color used when creating shapes. Use setOutlineColor to change the color of shape edges and pen lines. Don't forget to use the display function followed by an input function call so your image gets displayed, and gives you time to savor your creation before closing the window.
You can read more about different picture functions here
Some of the best sketches will be shown in class to general acclaim.
Handin:
Please handin what you have of your lab so far.
For this part you need to write a program functionPractice.py that has the following as its main( ) function:
def main():
    done = False
    while not done:
        x = eval(input("x: "))
        if x == 0:
            done = True
        else:
            print(square(x))
            checkEvenOdd(x)
            print(reverse(x))
Of course, this is just our usual input loop that reads numbers typed by the user until the user exits with input 0. For each non-zero input there are three functions called; your job is to add these three functions to the program. They are square, checkEvenOdd, and reverse.
Function reverse( ) should be the only one of these that challenges you. One way to do this is to convert x to a string, reverse its digits, and then convert the result back into an integer. Here is a numerical algorithm for this function. Make variable result which starts at 0. Go into a loop until x is 0. At each step multiply result by 10 and add the rightmost digit of x, which is x%10. Divide x by 10 to eliminate its rightmost digit, and go around the loop again. If you do this on paper with a number like 325 you'll see how this algorithm works.
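Following that numerical algorithm, a sketch of reverse might look like this (using Python 3 integer division):

```python
def reverse(x):
    """Return the integer whose digits are those of x, reversed."""
    result = 0
    while x > 0:
        result = result * 10 + x % 10  # append the rightmost digit of x
        x = x // 10                    # drop that digit from x
    return result

# reverse(325) returns 523
```

Tracing it on 325 as the text suggests: result goes 0 → 3 → 32 → 523 while x goes 325 → 32 → 3 → 0.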
Notice that two of these functions return something; the middle one does something. Take a moment to look at how the code for the function and the code for the call both differ when a function returns something as opposed to doing something.
After you have tested out your functionPractice program, hand it in.
Mastermind is a neat (although often frustrating) puzzle game. It works something like this: there are two players. One player (your program) is the codemaker; the other (the user) is the codebreaker.
Describe the Problem:
Write a program called master.py that allows the user to play a text-based version of the fantastic game Mastermind.
input: repeatedly get guesses from the user, until they either guess the code, or run out of guesses.
goal: generate a random code, and correctly provide the user with feedback on their guesses.
Understand the Problem:
The trickiest part of this game is determining how to provide feedback on the codebreaker's guesses. In particular, next to each guess that the codebreaker makes, the codemaker places up to four clue pegs. Each clue peg is either black or white. Each black peg indicates a correct color in a correct spot. Each white peg indicates a correct color in an incorrect spot. No indication is given as to which clue corresponds to which guess.
For example, suppose that the code is RYGY (red yellow green yellow). Then the guess GRGY (green red green yellow) would cause the codemaker to put down 2 black pegs (since guesses 3 and 4 were correct) and 1 white peg (since the red guess was correct, but out of place). Note that no peg was given for guess 1 even though there was a green in the code; this is because that green had already been "counted" (a black peg had been given for that one).
As another example, again using RYGY as our code, the guess YBBB would generate 1 white peg and 0 black; yellow appears twice in the code, but the guess only contains one yellow peg. Likewise, for the guess BRRR, only 1 white peg is given; there is an R in the code, but only one. Below is a table with guesses and the correponding number of black and white pegs given for that guess (still assuming the code is RYGY).
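One common way to compute this feedback is to count the exact matches first, then count the per-colour overlaps between the code and the guess and subtract the black pegs. A sketch (the function name and the tuple return are just one possible design):

```python
def countPegs(code, guess):
    """Return (black, white) clue-peg counts for a guess against a code."""
    # black pegs: right colour in the right spot
    black = 0
    for i in range(len(code)):
        if code[i] == guess[i]:
            black += 1
    # for each colour, the overlap is the smaller of its counts in the code
    # and the guess; summing the overlaps counts every matched colour once,
    # including the ones already credited as black pegs
    matched = 0
    for colour in "RBGYOP":
        matched += min(code.count(colour), guess.count(colour))
    white = matched - black
    return black, white

# with code "RYGY": countPegs("RYGY", "GRGY") gives (2, 1)
```

This reproduces the examples above: against RYGY, the guess GRGY earns 2 black and 1 white, while YBBB and BRRR each earn a single white peg.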
Check here for an online graphical version of the game (where their red pegs are our black pegs).
A sample run of our text-based program may look like this:
%python3 master.py I have a 4 letter code, made from 6 colours. The colours are R, G, B, Y, P, or O. Your guess: GGGG Not quite. You get 0 black pegs, 0 white pegs. Your guess: YYYY Not quite. You get 1 black pegs, 0 white pegs. Your guess: YOYO Not quite. You get 0 black pegs, 2 white pegs. Your guess: PPYO Not quite. You get 1 black pegs, 2 white pegs. Your guess: POYB Not quite. You get 1 black pegs, 3 white pegs. Your guess: PBOY You win! Aren't you smart.
Design an Algorithm:
Once you understand how the game works, you should design a pseudocode plan of attack. The general steps are:
Implement a Design:
Now that you have some of the kinks worked out in theory, it is time to write your program master.py.
You may assume the user always provides a guess with the available colors, and always in uppercase.
Make and use an integer constant NUM_TURNS that represents the number of allowable turns and set this to 10.
To generate the code, write a function
generateCode()
that generates the codemaker's code (and returns it as a String to the caller). That is, this function should randomly generate 4 colored pegs, selected from R, B, G, Y, O, and P, and return it as a 4-letter string. You'll want to use the random methods as discussed in lab03 in order to randomly generate a color for each peg. In particular, you'll generate an integer between 0 and 5 inclusive. You can use this as an index into the string "RBGYOP" of all of the color symbols to get your next color.
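A sketch of this function, picking each of the four pegs with a random integer between 0 and 5 used as an index into the colour string:

```python
import random

def generateCode():
    """Return a random 4-letter code drawn from the six peg colours."""
    colours = "RBGYOP"
    code = ""
    for _ in range(4):
        index = random.randint(0, 5)  # random index into the colour string
        code = code + colours[index]
    return code
```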
Test your generateCode function thoroughly before continuing. Once it's working, write a second function
evaluateGuess( code, guess )
that returns the numbers of white and black clue pegs according to the given guess and code. Keep going around the guess loop until the number of black pegs is 4 or the user has name NUM_TURNS guesses.
Note that you can "change" the ith character in a string s to an 'x' as follows:
s = s[0:i] + "x" + s[i+1:]
Also note that s[j:] is the substring of s from position j to the end. Similarly, s[:j] denotes the substring of s from the beginning up to (but not including) j.
If you followed the Honor Code in this assignment, make a README file that says
I affirm that I have adhered to the Honor Code in this assignment.
You now just need to electronically handin all your files. As a reminder
% cd # changes to your home directory % cd cs150 # goes to your cs150 folder % handin # starts the handin program # class is 150 # assignment is 4 # file/directory is lab04 % lshand # should show that you've handed in something
You can also specify the options to handin from the command line
% cd ~/cs150 # goes to your cs150 folder % handin -c 150 -a 4 lab04
sketchy.py functionPractice.py master.py picture.py (for ease of grading)
README (with the Honor Pledge) | http://www.cs.oberlin.edu/~bob/cs150.fall15/Labs/Lab%2004/lab04.html | CC-MAIN-2018-51 | refinedweb | 1,372 | 71.24 |
An object oriented wrapper around a counting semaphore. More...
#include <rtt/os/Semaphore.hpp>
An object oriented wrapper around a counting semaphore.
It works like a traffic light on which a thread can wait() until the sempahore's value becomes positive, otherwise it blocks. Another thread then needs to signal() the semaphore. One thread which is waiting will then be awakened, if none is waiting, the first thread calling wait() will continue directly (and decrease the value by 1).
Definition at line 61 of file Semaphore.hpp.
Initialize a Semaphore with an initial count.
Definition at line 70 of file Semaphore.hpp.
References rtos_sem_init().
Try to wait on this semaphore.
Definition at line 106 of file Semaphore.hpp.
Lower this semaphore and return if value() is non zero.
Or wait if value() is zero until a signal occurs.
Definition at line 87 of file Semaphore.hpp.
Wait on this semaphore until a maximum absolute time.
Definition at line 120 of file Semaphore.hpp.
Wait on this semaphore until a maximum absolute time.
Definition at line 134 of file Semaphore.hpp. | http://www.orocos.org/stable/documentation/rtt/v2.x/api/html/classRTT_1_1os_1_1Semaphore.html | CC-MAIN-2018-47 | refinedweb | 180 | 61.33 |
Opened 10 years ago
Closed 10 years ago
Last modified 8 years ago
#1138 closed bug (fixed)
The -fexcess-precision flag is ignored if supplied on the command line.
Description (last modified by )
The numerics/Double-based programs on the great language shootout were performing poorly. Investigations revealed that the -fexcess-precision flag was being silently ignored by GHC when supplied as a command line flag. If it is supplied as a {-# OPTIONS -fexcess-precision #-} pragma, it is respected.
Consider the following shootout entry for the 'mandelbrot' benchmark. It writes the mandelbrot set as bmp format to stdout.
import System import System.IO import Foreign import Foreign.Marshal.Array main = do w <- getArgs >>= readIO . head let n = w `div` 8 m = 2 / fromIntegral w putStrLn ("P4\n"++show w++" "++show w) p <- mallocArray0 n unfold n (next_x w m n) p (T 1 0 0 (-1)) unfold :: Int -> (T -> Maybe (Word8,T)) -> Ptr Word8 -> T -> IO () unfold !i !f !ptr !x0 = loop x0 where loop !x = go ptr 0 x go !p !n !x = case f x of Just (w,y) | n /= i -> poke p w >> go (p `plusPtr` 1) (n+1) y Nothing -> hPutBuf stdout ptr i _ -> hPutBuf stdout ptr i >> loop x {-# NOINLINE unfold #-} data T = T !Int !Int !Int !Double next_x !w !iw !bw (T bx x y ci) | y == w = Nothing | bx == bw = Just (loop_x w x 8 iw ci 0, T 1 0 (y+1) (iw+ci)) | otherwise = Just (loop_x w x 8 iw ci 0, T (bx+1) (x+8) y ci) loop_x !w !x !n !iw !ci !b | x < w = if n == 0 then b else loop_x w (x+1) (n-1) iw ci (b+b+v) | otherwise = b `shiftL` n where v = fractal 0 0 (fromIntegral x * iw - 1.5) ci 50 fractal :: Double -> Double -> Double -> Double -> Int -> Word8 fractal !r !i !cr !ci !k | r2 + i2 > 4 = 0 | k == 0 = 1 | otherwise = fractal (r2-i2+cr) ((r+r)*i+ci) cr ci (k-1) where (!r2,!i2) = (r*r,i*i)
We can compile and run this as follows:
$ 8.12s user 0.00s system 99% cpu 8.143 total
8s is around 3x the speed of C (or worse).
now, if we add the following pragma to the top of the file:
{-# OPTIONS -fexcess-precision #-}
and recompile and rerun:
$ 2.94s user 0.00s system 99% cpu 2.945 total
Nearly 3x faster, and competitive with C.
Across the board the -fexcess-precision flag seems to be ignored by GHC, affecting all Double-based entries on the shootout.
A diff on the ghc -v3 output shows that -ffloat-store is not being passed to GCC when -fexcess-precision is supplied on the command line.
Change History (5)
comment:1 Changed 10 years ago by
comment:2 Changed 10 years ago by
By the way, here's the Core loop for the above 'go' function that seems to consistently beat gcc:
Main_zdwgo_info: .text movl 16(%ebp), %eax cmpl $1000000000, %eax jne .L7 movl $r1s7_closure, %esi addl $20, %ebp movl (%ebp), %eax .L8: jmp *%eax .L7: incl %eax movsd .LC2, %xmm0 movsd (%ebp), %xmm1 mulsd %xmm0, %xmm1 movsd 8(%ebp), %xmm0 mulsd (%ebp), %xmm0 mulsd .LC1, %xmm0 movl %eax, 16(%ebp) movsd %xmm1, 8(%ebp) movsd %xmm0, (%ebp) movl $Main_zdwgo_info, %eax jmp .L8
Even with the indirect jump!
comment:3 Changed 10 years ago by
already fixed, in both the 6.6 branch and HEAD, this is the patch:
Fri Dec 1 16:41:57 GMT 2006 Simon Marlow <simonmar@microsoft.com> * Ugly hack to fix -fexcess-precision
A smaller example:
This program, run with the following flags:
Runs in:
If we then move -fexcess-precision into the file, as a pragma:
Note that asking GCC to generate sse instructions makes a 10% or better improvment too.
For reference, this C program:
Which is pretty nice for GHC :-)
But now I wonder, how much of the bad numerics press has been soley due to -fexcess-precision being ignored? | https://ghc.haskell.org/trac/ghc/ticket/1138 | CC-MAIN-2017-22 | refinedweb | 667 | 72.56 |
This release, 6.0.7r1, contains a number of improvements and fixes. Most notably, the following is new:
A brand new
varnishscoreboard is available.
SSL/TLS backends now support transmitting a client certificate as part of the TLS handshake
The
tls VMOD can now also be used in
vcl_backend_response, to inspect the state of a TLS backend connection.
The
kvstore VMOD has a
static scope, which allows keeping
kvstore content across VCL reloads.
A new VMOD
format has been added. It makes it easier to format strings.
The
ykey VMOD has gained a new function,
namespace_reset.
The newly added VHA6
broadcaster_ssl_verify_peer and
broadcaster_ssl_verify_host settings allow you to use self-signed certificates on the broadcaster.
There are also several bug fixes in this release. Most notably:
Fix an issue with MSE book database waterlevel drift. This can lead to excessive waterlevel activity due to the wrong space usage data being used to control the algorithm. (VS issue 996)
Fixed an issue with
vmod_http where libcurl would use signals,
degrading performance. This fix also affects VHA6.
YKey .namespace would panic if supplied with a NULL value (VS issue 995)
See the change log for a full overview of new features and bug fixes.
The ABI for VMODs have changed, so every VMOD needs to be recompiled to work with the new version. There is not API breakage, so a simple recompile should be sufficient. If you only use VMODs bundled with Varnish Cache Enterprise, you do not have to do anything, as bundled VMODs are always recompiled. | https://docs.varnish-software.com/releases/varnish-cache-plus-6.0.7r1/ | CC-MAIN-2021-31 | refinedweb | 255 | 65.83 |
This is a simple library that lets you do one thing very easily: generate an Image for a Code128 barcode, with a single line of code. This image is suitable for print or display in a WinForms application, or even in ASP.NET.
Image
Support for other barcode symbologies can be easily found because they're easy to create. Basic Code39 barcodes, for example, can be produced using nothing but a simple font, and it seems like you can't hardly swing a cat without hitting a Code39 font or imageset for download.
However, Code39 has deficiencies. The naive font-derived output doesn't afford any error checking. In their standard configuration, most scanners can only recognize 43 different symbols (configuration changes may fix this, but you may not have that luxury with your users' choice of hardware configuration). And Code39 is fairly verbose, requiring a large amount of space for a given message.
Code128, on the other hand, has out-of-the-box support for all 128 low-order ASCII characters. It has built-in error detection at the character and message level, and is extremely terse. Unfortunately, producing a reasonable encoding in this symbology is an "active" process. You must analyze the message for the optimal encoding strategy, and you must calculate a checksum for the entire message.
In my case, it was an absolute necessity to encode control characters. My application's design demanded that the user be able to trigger certain menu shortcut keys by scanning a barcode on a page. But since I've got no control over the scanners that my users are employing, the Code39EXT symbology wasn't a good candidate.
A search yielded several Code128 controls, but these had two important deficiencies. First, they were controls. That would be fine if I just wanted to produce a barcode on the page, but I wanted to use them as images in a grid, so I needed a means of obtaining a raw GDI+ Image object. Second, they were fairly expensive -- enough that a license covering all of our developers would cost more than my time to roll my own.
As promised, producing the barcode Image is as simple as a single line of code. Of course, you'll still need code lines necessary to put that Image where it needs to go.
Here's a chunk from the sample application. In it, I respond to a button click by generating a barcode based on some input text, and putting the result into a PictureBox control:
PictureBox
private void cmdMakeBarcode_Click(object sender, System.EventArgs e)
{
try
{
Image myimg = Code128Rendering.MakeBarcodeImage(txtInput.Text,
int.Parse(txtWeight.Text), true);
pictBarcode.Image = myimg;
}
catch (Exception ex)
{
MessageBox.Show(this, ex.Message, this.Text);
}
}
Obviously, the meat of this is the first line following the try. For the caller, there's just one interesting method in the whole library:
try
GenCode128.Code128Rendering.MakeBarcodeImage( string InputData,
int BarWeight, bool AddQuietZone )
(That's the GenCode128 namespace, in a static class called Code128Rendering). Since this is a static class, you don't even need to worry about instantiating an object.
GenCode128
Code128Rendering
There are three parameters:
string InputData
The message to be encoded
int BarWeight
The baseline width of the bars in the output. Usually, 1 or 2 is good.
bool AddQuietZone
If false, omits the required white space at the start and end of the barcode. If your layout doesn't already provide good margins around the Image, you should use true.
false
true
You can get a feel for the effect of these values by playing with the sample application. While you're at it, try printing out some samples to verify that your scanners can read the barcodes you're planning to produce.
A barcode library is pretty much useless if you don't use it to print. You can't very well scan the screen. It's been quite a long time since I had printed anything from a Windows application, and it took a little while to remember how. If you need a quick reminder like I did, take a look at the event that the demo app's Print button calls.
First of all, I don't have any exception handling built into the library itself. For your own safety, you should put try/catch blocks around any calls to the library.
try/catch
The solution comprises three projects. One is the library itself, one is the demo application, and then there is the unit test code. I used NUnit by way of TestDriven.net. If you don't have that, then Visual Studio is going to complain. Since it's just test code, you can safely drop it and still use the library successfully.
Another point is the required vertical height of the barcode. The spec requires that the image be either 1/4" or 15% of the overall width, whichever is larger. Since I don't have any control of the scaling you're using when outputting the image, I didn't bother implementing the 1/4" minimum. This means that for very short barcode, the height might be illegally small.
Code128's high information density derives partly from intelligently shifting between several alternate codesets. Obtaining the optimal encoding is, as far as I can tell, a "hard" problem (in the sense of discrete math's non-polynomial problems like the Traveling Salesman). The difference between the best possible solution and my pretty good one should be small, and doesn't seem worth the effort.
My algorithm for obtaining a "pretty good" encoding involves a single-character look-ahead.
A similar decision has to be made about which codeset to start the encoding in. To solve this, I check the first two characters of the string, letting them "vote" to see which codeset they prefer. If there's a preference for one codeset, I choose it; otherwise, I default to codeset B. This is because codeset A allows uppercase alpha-numerics plus control characters, while codeset B allows upper and lower alpha-numerics; I assume that you're more likely to want lowercase than control characters.
Finally, there is an optimization in the Code128 spec for numeric-only output that I didn't take advantage of. Long runs of digits can be encoded in a double density codeset. Accounting for this in my already-ugly look-ahead algorithm would have taken a lot more effort -- for a feature that I don't need. But if you have lots of digits and space is tight, you might look at enhancing this.
I suppose that anyone examining my source code will wonder why in the world my table of bar width has two extra columns. In any sane universe, there should be six columns rather than eight. This was a compromise to allow for the oddball STOP code, which has seven bars rather than six. I could have implemented a special case for just this code, but that was too distasteful.
Instead, I added extra zero-width columns to everything else, making the data equivalent in all cases. For every bar that comes up with a zero width, nothing is output, so nothing is harmed.
Of course, the choice between six or eight columns just begs the question: why not seven? This is to accommodate an optimization in the rendering code. By pre-initializing the entire image to white, I can avoid needing to draw the white bars. Thus, I grab bar widths in groups of two. The first one is the black one, and I draw that normally (unless its width is zero). The second one is white, but there's white already there, so I can just skip the area that would have occupied.
If anyone's keeping score, this is my second attempt at truly Test-Driven Development. On the whole, I think this worked out pretty well. Especially, at the lower levels of code, I'm pretty confident of the code. However, the highest level -- where the output is just an Image -- seemed impractical to be tested in this way.
One problem I've got with the TDD, though, is code visibility. Optimally, this library should have exactly one publicly-visible class, with one public method. However, my test code forces me to expose all of the lower-level stuff that the end caller should never know about. If TDD in C# has developed a good answer to that, I haven't yet stumbled upon it.. | http://www.codeproject.com/Articles/14409/GenCode128-A-Code128-Barcode-Generator?fid=315593&df=90&mpp=10&sort=Position&spc=None&tid=4474889 | CC-MAIN-2016-18 | refinedweb | 1,415 | 63.59 |
tag:blogger.com,1999:blog-12214002.post1833295658799464217..comments2016-11-30T04:43:07.744-05:00Comments on Let's Wreck This Together...with Oracle Application Express!: You shouldn't use Oracle Application Express because...Joel R. Kallman"Everything else is put in the naming convent..."Everything else is put in the naming conventions of packages and function. Again very ugly."<br /><br />Well, "ugly" is in the eye of the beholder. For example, the Win32 API is full of "ugliness" with strange conventions, inconsistent naming and general spaghetti, yet people manage to use it (either because they have to, or because it offers some other benefits).<br /><br />Trust me, there are worse conventions than having to name your items after the page they belong to...!<br /><br />And to me, Objective C looks really ugly -- its OO features don't tempt me, because I mostly work with data-driven applications, where PL/SQL's (non-OO) features are much more useful.<br /><br />Here are some of my thoughts on structuring PL/SQL code, in general and for use with Apex:<br /><br /><br /><br />With regard to Apex as a tool, the fact that you can create apps like the JSON-powered experiment you described, speaks to the strength and flexibility of Apex to make it whatever you need.<br /><br />So I say, leverage PL/SQL packages for data processing, and pick from Apex what you need in terms of authentication and authorization, session management, navigation and templates, Interactive Reports, Flash charts, auto DML (for those quick CRUD apps), and so on.<br /><br />- MortenMorten Braten Morten and thanks for your comment. We can d...Hello Morten and thanks for your comment. We can do what we want, but it's becoming very tricky. <br /><br / ... 
<br /><br />we should probably split the application in more and more packages, that way we would have less problems with multiple developers, but again it's not easy to do the split, when your apex application has dynamic dependencies on these packages.<br /><br />pl/sql has 3-level namespaces (schema.package.function), and we would like to keep the app code in one schema, so we're down to two levels. Everything else is put in the naming conventions of packages and function. Again very ugly.<br /><br /. <br /><br />Maybe we could use something like google's closure to do proper JS (with all the nice features of modern language) and let the db do what it does best, running it's pl/sql, and APEX acting as a thin (but highly configurable) layer in the middle....<br /><br /. <br /><br />As Joel said they are not targeting this kind of developers, a move in such a direction might not make much sense on a cost/benefit perspective.ʯɲʑɩʛʯɖʋɪʉ ɕɑʒʝɪɪʧʠʘɶ "Guy with strange Unicode name", It ...Hi "Guy with strange Unicode name",<br /><br /?<br /><br />Do you have any concrete examples of what type of complexity we are talking about here?<br /><br />I've worked on several business-critical applications written in PL/SQL, with up to 200,000 lines of code, and I've never had problems organizing the code using packages.<br /><br />(I've often found myself wishing that Oracle allowed object names with more than 30 characters, but you work your way around it.)<br /><br /.<br /><br />- Morten<br /><br />*.Morten Braten love APEX. You can do a lot of things with APEX....I love APEX. You can do a lot of things with APEX. It is very much scalable.<br /><br />There are some Tips and Tricks which will be helpful for developing APEX Applications and which you can master by experience and reading blogs from experts of APEX.<br /><br />Regards,<br /><br />Sohil Bhavsar.Sohil Bhavsar for your answer. 
Could you please shed some..?ʯɲʑɩʛʯɖʋɪʉ ɕɑʒʝɪɪʧʠʘɶ ʯɲʑɩʛʯɖʋɪʉ ɕɑʒʝɪɪʧʠʘɶ, Thanks for your comment...Hi ʯɲʑɩʛʯɖʋɪʉ ɕɑʒʝɪɪʧʠʘɶ,<br /><br /. <br /><br />If object-oriented programming fits your need, then you're correct - Application Express is probably not for you. And that's fine - it really isn't intended to be a solution for all problems.<br /><br /.<br /><br />Thanks again for your feedback, though. I truly appreciate the perspective.<br /><br />JoelJoel R. Kallman Joel, I agree that APEX is an excellent tool...Hello Joel,<br />I agree that APEX is an excellent tool and I think that it can go far beyond basic CRUD and it does scale in complexity - because there is less complexity.<br /><br />But as any other tool it has its weak points and in my opinion it's lack of source control. Unfortunately<br />current statement of direction is not promising - there is nothing about it.<br /><br />The biggest APEX application - is APEX itself. Could you share how Oracle team manages source code? Probably we are just missing something and there are efficient ways to manage code.<br /><br />Thanks,<br />LevUnknown comment has been removed by the author.Unknown, APEX 4.0.2 will ship with XE. JoelFlavio,<br /><br />APEX 4.0.2 will ship with XE.<br /><br />JoelJoel R. Kallman Joel. We have invested in APEX for the past ...<br /><br /.<br /><br />In my opinion APEX does have it's niche, where it performs (rapid development, ...) better than the competition, but our biggest problem is that it doesn't scale in COMPLEXITY. <br /><br />The reason for this is that it doesn't use, right at its core, basic "Object Oriented" concepts that have proved successful over the past 20 years in designing complex application: encapsulation, reuse, inheritance, ...<br /><br /.<br />.<br /><br /. <br /><br /.<br /><br /". 
<br /><br />Thanks for your timeʯɲʑɩʛʯɖʋɪʉ ɕɑʒʝɪɪʧʠʘɶ, which version of Apex are you going to ship ...Joel,<br />which version of Apex are you going to ship with XE 11g?<br /><br />FlavioByte64 | http://joelkallman.blogspot.com/feeds/1833295658799464217/comments/default | CC-MAIN-2016-50 | refinedweb | 982 | 66.13 |
Overview of Python Features
Python is a famous programming framework, known for its simple object oriented characteristic advantage. A few of the other notable features of Python are the library functions & modules are reliable in nature, facilitates the developers with its interactive mode. It also supports other program theories, provides dynamic code check for types, easy access for database application, user interface programming is quite uncomplicated, anyone can get their hand on python programming as it is available for free & open source. It consents to expandability & scalability, and finally the most important feature is it is effortless to self-learn, understand & write the code.
Top 15 Features of Python
Top 15 Features of Python are as follows:
1. Easy to Write
These days with the increasing number of libraries in the languages, most of the time of developer goes in remembering them. This is one of the great features of python as python libraries use simple English phrases as it’s keywords. Thus it’s very easy to write code in python. For eg:-
Writing code for function doesn’t use curly braces to delimit blocks of code. One can indent code under a function, loop, or class.
def fun()
print("Hi, i am inside fun");//this line comes under function block as it is indented.
print("Hi ,i am outside fun");//This line will be printed when control comes out of the function block.
2. Easy to Understand
This is the most powerful feature of python language which makes it everyone’s choice. As the keyword used here are simple English phrases thus it is very easy to understand.
3. Object-Oriented
Python has all features of an object-oriented language such as inheritance, method overriding, objects, etc. Thus it supports all the paradigms and has corresponding functions in their libraries. It also supports the implementation of multiple inheritances, unlike java.
4. Robust Standard Libraries
The libraries of python are very vast that include various modules and functions that support various operations working in various data types such as regular expressions etc.
5. Supports Various Programming Paradigms
With support to all the features of an object-oriented language, Python also supports the procedure-oriented paradigm. It supports multiple inheritances as well. This is all possible due to its large and robust libraries that contain functions for everything.
6. Support for Interactive Mode
Python also has support for working in interactive mode where one can easily debug the code and unit test it lines by line. This helps to reduce errors as much as possible.
7. Automatic Garbage Collection
Python also initiates automatic garbage collection for great memory and performance management. Due to this memory can be utilized to its maximum thus making the application more robust.
8. Dynamically Typed and Type Checking
This is one of the great feature of python that one need not declare the data type of a variable before using it. Once the value is assigned to a variable it’s datatype gets defined Thus type checking in python is done at a run time, unlike other programming languages.
For eg-
v=7;// here type or variable v is treated as an integer
v="great";//here type of the variable v is treated as a string
9. Databases
Database of an application is one of the crucial parts that also needs to be supported by the corresponding programming language being used. Python supports all the major databases that can be used in an application such as MYSQL, ORACLE, etc. Corresponding functions for there database operations have already been defined in python libraries. one needs to include those files in code to use it.
10. GUI Programming
Python being a scripting language also supports many features and libraries that allow graphical development of the applications. In the vast libraries and functions, corresponding system calls and procedures are defined to call the particular OS calls to develop perfect GUI of an application. Python also needs a framework to be used to create such a GUI. Examples of some of the frameworks are Django, Tkinter, etc.
11. Extensible
This feature makes use of other languages in python code possible. This means python code can be extended to other languages as well thus it can easily be embedded in existing code to make it more robust and enhance its features. Other languages can be used to compile our python code.
12. Portable
A programming language is said to be portable if it allows us to code once and runs anywhere feature. Means, the platform where it has been coded and where it is going to run need not be the same. This feature allows one of the most valuable features of object-oriented languages-reusability. As a developer, one needs to code the solution and generated its byte code and need not worry about the environment where it is going to run.eg-one can run a code developed on windows operating system on any other operating system such as -Linux, Unix, etc.
13. Scalable
This Language helps to develop various systems or applications that are capable of handling a dynamically increasing amount of work. These type of applications helps a lot in the growth of the organization as they are strong enough to handle the changes upto some extent.
14. Free and Open Source
Yes, u read it correctly u need not pay a single penny to use this language in your application. One needs to just download it from its official website, and it’s all done to start. And as it is open-source, its source code has also been made public. One can easily download it and use it as required as well as share it with others. Thus it gets improved every day.
15. Integrated
Python can be easily integrated with other available programming languages such as C, C++, Java, etc. This allows everyone to use it to enhance the functionality of existing applications and make it more robust.
Conclusion
Python is an advanced, high-level, robust, opensource but easy to understand and code language that allows the developer to concentrate on the solution rather than remembering a large number of keywords, as it uses simple and easy to remember English phrases as it’s keywords.
It’s a robust library, support for different paradigms as well as GUI programming feature along with integrated feature makes it the most suitable language among others.
Recommended Articles
This is a guide to Python Features. Here we discuss the overview and top 15 different features of python which include easy to write and understand, object-oriented and support for interactive mode, etc. You can also go through our other suggested articles to learn more – | https://www.educba.com/python-features/ | CC-MAIN-2020-34 | refinedweb | 1,117 | 52.39 |
#include <DIOP_Acceptor.h>
#include <DIOP_Acceptor.h>
Inheritance diagram for TAO_DIOP_Acceptor:
The DIOP-specific bridge class for the concrete acceptor.
0
Constructor.
Destructor.
@ Helper method for the implementation repository, should go away
[virtual]
Implements TAO_Acceptor.
[protected]
Helper method to add a new profile to the mprofile for each endpoint.
Helper method to create a profile that contains all of our endpoints.
Set the host name for the given address using the dotted decimal format.
Returns the array of endpoints in this acceptor.
Set the host name for the given addr. A hostname may be forced by using specified_hostname. This is useful if the given address corresponds to more than one hostname and the desired one cannot be determined in any other way.
[protected, virtual]
Implement the common part of the open*() methods. This method is virtual to allow a derived class implementation to be invoked instead.
Parse protocol specific options.
Probe the system for available network interfaces, and initialize the <addrs_> array with an ACE_INET_Addr for each network interface. The port for each initialized ACE_INET_Addr will be set in the open_i() method. This method only gets invoked when no explicit hostname is provided in the specified endpoint.
Array of ACE_INET_Addr instances, each one corresponding to a given network interface.
[private]
The number of host names cached in the hosts_ array (equivalent to the number of endpoints opened by this Acceptor).
Cache the information about the endpoints serviced by this acceptor. There may in fact be multiple hostnames for this endpoint. For example, if the IP address is INADDR_ANY (0.0.0.0) then there will be possibly a different hostname for each interface.
Should we use GIOP lite??
ORB Core.
The GIOP version for this endpoint @ Theoretically they shouldn't be here!! We need to look at a way to move this out | https://www.dre.vanderbilt.edu/Doxygen/5.4.3/html/tao/strategies/classTAO__DIOP__Acceptor.html | CC-MAIN-2022-40 | refinedweb | 302 | 59.5 |
The C++ library includes the entire C standard library (from the 1990 C standard, plus Amendment 1), in which each C header, such as <stdio.h>, is wrapped as a C++ header (e.g., <cstdio>). Being part of the C++ standard, all types, functions, and objects are declared in the std namespace.
The external names are also reserved in the global namespace. Thus, proper practice is to use the names in the std namespace (e.g., std::strlen), but realize that these names are also reserved in the global namespace, so you cannot write your own ::strlen function.
The C standard permits macros to be defined to mask function names. In the C++ wrappers for these headers, the names must be declared as functions, not macros. Thus, the C <stdio.h> header might contain the following:
extern int printf(const char* fmt, ...); #define printf printf
In C++, the printf macro is not permitted, so the <cstdio> header must declare the printf function in the std namespace, so you can use it as std::printf.
A deprecated feature of C++ is that the C standard headers are also available as their original C names (e.g., <stdio.h>). When used in this fashion, their names are in the global namespace, as though a using declaration were applied to each name (e.g., using std::printf). Otherwise, the old style headers are equivalent to the new headers. The old C header names are deprecated; new code should use the <cstdio>, etc., style C headers. | http://etutorials.org/Programming/Programming+Cpp/Chapter+8.+Standard+Library/8.2+C+Library+Wrappers/ | CC-MAIN-2017-04 | refinedweb | 252 | 73.07 |
Overview
Atlassian Sourcetree is a free Git and Mercurial client for Windows.
Atlassian Sourcetree is a free Git and Mercurial client for Mac.
XmppFlask is easy to use XMPP framework that is inspired (heavily) by Flask. It is intended to be as easy to use as Flask itself is.
XmppFlask
XmppFlask is easy to start with
The main idea is to make you happy with writing small jabber bots. Like this:
from xmppflask import XmppFlask app = XmppFlask(__name__) @app.route(u'ping') def ping(): return u'pong'
Source Code
Source code is available via bitbucket.
Status
It's in status of ideal suitability for "use and help polishing it", since some obvious improvements could be done.
Community
Join us at jabber conference xmppflask@conference.jabber.org for discussions. Homepage is located at. | https://bitbucket.org/xmppflask/xmppflask | CC-MAIN-2018-22 | refinedweb | 132 | 68.57 |
I have a slow web app that I've placed Varnish in front of. All of the pages are static (they don't vary for a different user), but they need to be updated every 5 minutes so they contain recent data.
I have a simple script (wget --mirror) that crawls the entire website every 15 minutes. Each crawl takes about 5 minutes. The point of the crawl is to update every page in the Varnish cache so that a user never has to wait for the page to generate (since all pages have been generated recently thanks to the spider).
wget --mirror
The timeline looks like this:
A request that comes in between 0:00:00 and 0:05:00 might hit a page that hasn't been updated yet, and will be forced to wait a few seconds for a response. This isn't acceptable.
What I'd like to do is, perhaps using some VCL magic, always foward requests from the spider to the backend, but still store the response in the cache. This way, a user will never have to wait for a page to generate since there is no 5-minute window in which parts of the cache are empty (except perhaps at server startup).
How can I do this?
req.hash_always_miss should do the trick.
req.hash_always_miss
Don't do a full cache flush at the start of the spider run. Instead, just set the spider to work - and in your vcl_recv, set the spider's requests to always miss the cache lookup; they'll fetch a new copy from the backend.
vcl_recv
acl spider {
"127.0.0.1";
/* or whereever the spider comes from */
}
sub vcl_recv {
if (client.ip ~ spider) {
set req.hash_always_miss = true;
}
/* ... and continue as normal with the rest of the config */
}
While that's happening and until the new response is in the cache, clients will continue to seamlessly get the older cache served to them (as long as it's still within its TTL).
Shane's answer above is better than this one. This is an alternative solution which is more complicated and has additional problems. Please upvote Shane's response, not this one. I am just showing another method of solving the problem.
My initial thought was to return (pass); in vcl_recv and then, after the request has been fetched, in vcl_fetch, somehow instruct Varnish that it should cache the object, even thought it was specifically passed earlier.
return (pass);
vcl_fetch
It turns out this isn't possible:
If you chose to pass the request in an earlier VCL function (e.g.:
vcl_recv), you will still execute the logic of vcl_fetch, but the
object will not enter the cache even if you supply a cache time.
If you chose to pass the request in an earlier VCL function (e.g.:
vcl_recv), you will still execute the logic of vcl_fetch, but the
object will not enter the cache even if you supply a cache time.
So the next-best thing is trigger a lookup just like a normal request, but make sure it always fails. There's no way to influence the lookup process, so it's always going to hit (assuming it is cached; if it's not, then it's going to miss and store anyway). But we can influence vcl_hit:
vcl_hit
sub vcl_hit {
# is this our spider?
if (req.http.user-agent ~ "Wget" && client.ip ~ spider) {
# it's the spider, so purge the existing object
set obj.ttl = 0s;
return (restart);
}
return (deliver);
}
We can't force it not to use the cache, but we can purge that object from the cache and restart the entire process. Now it goes back to the beginning, at vcl_recv, where it eventually does another lookup. Since we purged the object we're trying to update already, it will miss, then fetch the data and update the cache.
A little complicated, but it works. The only window for a user getting stuck between a purge and the response being stored is the time for the single request to process. Not perfect, but pretty good.
By posting your answer, you agree to the privacy policy and terms of service.
asked
2 years ago
viewed
783 times
active
1 year ago | http://serverfault.com/questions/425503/force-request-to-miss-cache-but-still-store-the-response?answertab=votes | CC-MAIN-2014-52 | refinedweb | 709 | 79.4 |
my assignment is to write a program that asks for a temperature in fahrenheit, then spits out a value in celsius and kelvin. and ask again for a value in fahrenheit to convert. heres what ive got so far. by the way im using kernhigan and ritchie, with no experience in programming whatsoever. im finding that kernhigan and ritchie is not very good for me.
Code:
#iclude < stdio.h >
#include <math.h >
main()
{
float fahr, celsius, kelvin;
int temp, cels, kelv
temp = ' '
printf("Please enter (in fahr) a temp\n(q to quit)\n");
while (temp != 'q') {
temp = getc ( stdin );
putchar (temp);
cels = (temp-32.0) + (5.0/9.0);
kelv = 273.5 + ((temp - 32.0) + (5.0/9.0));
printf("The temperature in Celsius is %6.1f\n" , cels);
printf("The temperature in Kelvin is %6.1f\n" , kelv);
}
return = 0
}
also, my next assignment is to use the program above (when written correctly) and rewrite it so that it uses one function to conver from fahrenheit to celsius and another to convert from celsius to kelivin. any help would be appreceated | http://cboard.cprogramming.com/c-programming/78217-newb-question-probs-program-printable-thread.html | CC-MAIN-2016-18 | refinedweb | 183 | 77.53 |
Application Insights SDK (0.11.0-prerelease)
October 21, 2014
What is it?
Application Insights SDK lets you send telemetry to the Application Insights portal, where you can find out what users are doing with your application.
0.11.0 is the latest SDK release for Application Insights. This SDK includes new functionality and new concepts in addition to a change to the API. For information on the previous 0.10 release please read this blog post.
Data sent through this SDK will only be visible through the Microsoft Azure Preview Portal. (Previous versions sent data to an earlier edition of Application Insights, accessed through Visual Studio Online.) For more information about the different versions of Application Insights, see the documentation.
You can instrument these types of application:
- ASP.NET web applications hosted either on premises or in Microsoft Azure.
- Windows Phone 8.0 (Silverlight) and 8.1 (Silverlight and WinRT) – UI experience is coming soon.
- Windows Store 8.1 applications – UI experience is coming soon.
- Logging frameworks – Capture and search trace logging messages from multiple popular logging frameworks:
- System.Diagnostics.Trace
- Nlog
- Log4Net
- Application Insights tracing API
What is new?
The 0.11 SDK includes a new simpler Telemetry API, automatic JavaScript error collection, and enhancement to trace telemetry collection. It also includes a number of bugs fixes and additional improvements.
New Simpler Custom Telemetry API
As part of the 0.11 release we have introduced a simplified API. Our previous proposed API was not as quick and simple as we had hoped. To improve upon this we have modified the API to help simplify the coding experience and help new customers get up to speed quicker.
The primary change is the move to a single class in the root namespace, Microsoft.ApplicationInsights.TelemetryClient. This single class should meet the majority of your telemetry instrumentation needs. The TelemetryClient object exposes a set of Track methods. These Track methods represent all of the core telemetry item types Application Insights understands. After instantiation of a new TelemetryClient object, you should be able to send most telemetry through single method calls. In addition to the basic calls below, all track methods have more complex signatures allowing capture of additional custom properties for more advanced scenarios.
var tc = new TelemetryClient();
tc.TrackEvent("SampleEvent");
tc.TrackTrace("Simple Trace Log Message");
tc.TrackMetric("BasicMetric", 42);
try {
    // ...
} catch (Exception e) {
    tc.TrackException(e);
}
The TelemetryClient also exposes a context property member. The fields on this property represent the common set of properties that will be automatically attached to all telemetry sent through this instance of a TelemetryClient. This enables you to set common property values once, to be applied to all telemetry. Examples showing setting of both standard and custom properties are shown below.
tc.Context.User.AccountId = "My Customer AccountID";
tc.Context.Properties["CustomTrackingProperty"] = "OCT2014";
NOTE: Breaking Change – This API is a breaking change from previous versions. If using a previous version you will need to modify existing code to use the new 0.11 version.
JavaScript automatic error collection
The 0.11 SDK release includes enhancements to the JavaScript SDK. The JS SDK will now automatically collect all unhandled JavaScript exceptions. On a modern browser, the call stack will also be collected. In the screenshot below you can see a simple example of a call to a non-existent function in JavaScript and the details caught and reported to Application Insights. In addition you can see that this exception collected and reported in a common way with unhandled exceptions caught by the .Net Application Insights SDKs.
All you need to do to make use of this functionality is ensure that you have the latest update to the 0.11 JS SDK script tag. The easiest way to obtain the new snippet is from the Getting Started page in the portal.
Enhanced Trace Collection
Trace collection has been enhanced to match the schema and processing of the other telemetry types. All types are now consistently handled in the UI to enable a common experience across all your telemetry data.
NOTE: Breaking Change – The change of trace schema is a breaking change. If you were using trace collection before you will need to update to version 0.11 or later to continue to view your trace telemetry. The quickest way to upgrade is to utilize NuGet package management to upgrade your SDK NuGets and your trace adapters to the latest 0.11 release.
Where can we leave feedback? Have lots of comments on the AI api. I love the idea, just the implementation causes issues.
For example, my solution uses ETW to trace. I have a listener setup to send data to AI. But at that point I don't have the Exception object anymore, I had taken the important things from it and wrote it to ETW. I really don't want to add AI listening everywhere where I handle exceptions.
Also, the custom config… do you know how hard this is to manage when using Azure websites and multiple environments? All I want is to be able to set the instrumentation key in appsettings. I have lots of other comments, is there a user voice page? Or can a PM reach out to me directly, my email is cleverguy25@hotmail.com.
@Cleve, you can leave feedback for app insights here: visualstudio.uservoice.com/…/77108-application-insights
Thanks Mike. Now where can I enter bugs? I have had quite a problem with Exception tracing and TraceTelemetry.
@Cleve,
In addition, you or anyone else is free to email me feedback directly at joshweb at microsoft dot com. You can also file bugs through
Very cool use case with the ETW. Since you don't have the raw exception any more you can always new up a manual exception object Microsoft.ApplicationInsights.DataContracts.ExceptionTelemetry. You can then manually populate the fields extracted from your ETW event in your listener.
Thanks,
Josh | https://blogs.msdn.microsoft.com/devops/2014/10/21/application-insights-sdk-0-11-0-prerelease/ | CC-MAIN-2017-26 | refinedweb | 985 | 59.09 |
How to draw an icon on canvas in PySymbian

PySymbian has 2 different classes for graphics data:
- Icon class represents icon (typically from .mbm file) which can be put in ListBox for selection.
- Image class represents a bigger image which can be drawn upon.
These 2 classes can't convert to/from each other.
But you can use a library that reads the .mbm file and draws an Image from the data there. So, to make an icon into an image, the module icon_image, whose source code is below, can be used.
#
# module icon_image from user Korakot
#
# usage: import icon_image
#        im = icon_image.open(file_mbm, idx)

from graphics import Image
from struct import unpack

def readL(f, pos=None):
    if pos is not None:
        f.seek(pos)
    return unpack('L', f.read(4))[0]

def open(file_mbm='z:\\system\\data\\avkon.mbm', idx=28):
    # read icon data from mbm file
    f = file(file_mbm, 'rb')
    if readL(f) != 0x10000041:
        return None  # works for mbm on ROM (z:) only
    start = readL(f, 8+4*idx)
    f.seek(start+20)
    length = readL(f) - readL(f)  # pd_size - offset
    width, height = readL(f), readL(f)
    enc = readL(f, start+56)
    f.seek(start+68)
    data_encoded = f.read(length)
    # decode the data
    data_padded = rle_decode(data_encoded, enc)
    mat = bit_matrix(data_padded, width, height)
    im = Image.new((width, height), '1')
    for j in range(height):
        for i in range(width):
            im.point((i, j), mat[j][i]*0xffffff)
    return im

# Decode of 8-bit RLE
# Either repeat-(n+1)-times or do-not-repeat (0x100-n) bytes
def rle_decode(bytes, enc=1):
    if not enc:
        return bytes
    out = []
    i = 0
    while i < len(bytes):
        n = ord(bytes[i])
        i += 1
        if n < 0x80:
            out.append(bytes[i] * (n+1))
            i += 1
        else:
            n = 0x100 - n
            out.append(bytes[i:i+n])
            i += n
    return ''.join(out)

# from bytes to bit matrix
# Each line is padded to a 4-byte boundary; the padding at the end is discarded
def bit_matrix(bytes, width, height):
    mat = []
    k = 0
    for j in range(height):
        line = []
        while len(line) < width:
            longint, = unpack('L', bytes[k:k+4])
            k += 4
            toget = min(width - len(line), 32)
            for i in range(toget):
                longint, r = divmod(longint, 2)
                line.append(int(r))
        mat.append(line)
    return mat
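To see the RLE scheme in isolation, here is the decoder reproduced as a stand-alone snippet (it runs on desktop Python as well, no Symbian needed) together with two hand-worked examples:

```python
# Stand-alone copy of the decoder above, for experimenting outside Symbian.
def rle_decode(data, enc=1):
    if not enc:
        return data            # encoding 0 means the data is not compressed
    out = []
    i = 0
    while i < len(data):
        n = ord(data[i])
        i += 1
        if n < 0x80:           # repeat the next byte n+1 times
            out.append(data[i] * (n + 1))
            i += 1
        else:                  # copy the next 0x100-n bytes verbatim
            n = 0x100 - n
            out.append(data[i:i + n])
            i += n
    return ''.join(out)

print(rle_decode('\x02a'))     # 'aaa'  (0x02 -> repeat 'a' three times)
print(rle_decode('\xfdabc'))   # 'abc'  (0xfd -> 0x100-0xfd = 3 literal bytes)
```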
Here's an example using this module.
from appuifw import *
import icon_image, e32

app.body = c = Canvas()
# choose a bitmap from the many inside the mbm (multi-bitmap) file
icon = icon_image.open('z:\\system\\data\\avkon2.mbm', 28)
c.blit(icon)
e32.ao_sleep(10)  # 10 sec to end program
Limitations : only 1-bit icons. | http://developer.nokia.com/community/wiki/Archived:How_to_draw_an_icon_on_canvas_in_PySymbian | CC-MAIN-2014-49 | refinedweb | 430 | 74.79 |
Windows.System.Threading.Core namespace
Creates work items that run in response to named events and semaphores. Also preallocates resources for work items that must be guaranteed the ability to run, even in circumstances of heavy (or full) resource allocation.
- PreallocatedWorkItem
When work items are created using ThreadPool.RunAsync, the work item is created and submitted as a single operation. This is acceptable for most scenarios, but it is sometimes necessary to set aside resources for a work item in advance.
The PreallocatedWorkItem class constructs a work item ahead of time, putting the work item "on standby" so that it can be submitted to the thread pool when it's needed. This is useful in circumstances where the resources available to your app are completely allocated before the work item is needed - for example, calling a deallocation routine that uses a work item. If a work item has already been allocated, the resource deallocation routine can still be called and the PreallocatedWorkItem can still be submitted to the thread pool even if all resources are already in use.
- SignalNotifier
Sometimes it is necessary to queue work items in response to named events or semaphores created by Win32 COM objects. You can run a Windows Runtime method in response to a named event or semaphore using a SignalNotifier object. This allows you to write Windows Runtime code that responds to events and signals sent using Win32 and COM for Windows Store apps, provided that the event or semaphore has a name. For example, the SignalNotifier can be used to work with Win32 code that's being ported to a Windows Store app.
- ISignalableNotifier
Occasionally it is not possible to know the name of an event or semaphore, but your app still needs to respond to it; for example legacy code, and some well-known events and semaphores, still use waitable handles instead of names. ISignalableNotifier allows you to create ISignalNotifier objects registered with waitable handles.
The Windows.System.Threading.Core namespace has these types of members:
Classes
Delegates
The Windows.System.Threading.Core namespace has these delegates.
Interfaces
The Windows.System.Threading.Core namespace defines these interfaces. | https://msdn.microsoft.com/en-us/library/hh965390.aspx | CC-MAIN-2016-44 | refinedweb | 356 | 52.29 |
On Mon, Jul 02, 2007 at 04:12:01PM -0700, Cory Dodt wrote:
> Anyway, the Twisted coding guidelines are largely about creative, unambiguous
> naming. I for one find "py-" names obnoxious, especially since they actually
> *increase* the chance of a google namespace collision. (What would you name
> your twisted-based IRC bot? twibot? Too bad three other people did that.)

bezalel :)

[ot] i'm going to publish the code, but it's just a project for fun and to
learn twisted
This code compiles and works as intended, but the commented line doesn't for some reason. Can anybody tell me why?
#include <iostream>
int main()
{
std::cout << "Enter a line:\n";
std::string Line;
std::getline(std::cin,Line);
const std::string Whitespace = " \t\t\f\v\n\r";
//std::string Line= std::string(Line.find_first_not_of(Whitespace),Line.find_last_not_of(Whitespace)+1); // Why doesnt this work?
std::string Line= Line.substr(Line.find_first_not_of(Whitespace),Line.find_last_not_of(Whitespace)+1);
std::cout << "You entered:" << Line << "\n";
}
Not only the commented line — the other line will also fail if there is whitespace at the beginning of the string.

The first line fails because string::find_first_not_of returns a size_t, and constructing a string from two sizes makes no sense.

The second line may fail because string::substr accepts a length (not an end position) as its second parameter.
I created a variable for the scanner called serena. The serena variable holds what the user inputs. My if statement says that if the answer the user enters is not equal to the actual answer then it is to display "wrong". It is a basic math game I am working on. NetBeans is telling me that I cannot use the scanner in an if statement? I am confused. Please take a look and help.
package pkgnew;
import java.util.Scanner;
public class New{
public static void main(String args[]){
Scanner serena = new Scanner(System.in);
double fnum, snum, answer;
fnum = 6;
snum = 6;
answer = fnum + snum;
System.out.println("6 + 6");
serena.nextLine();
if (serena = answer){
System.out.println("Correct!");
}else
System.out.println("Wrong!The answer is:" answer);{
}
}
--- Update ---
Do I have to define Serena as whatever number the user inputs? If so, how? | http://www.javaprogrammingforums.com/whats-wrong-my-code/37676-scanner-cannot-used-variable-if-statements.html | CC-MAIN-2015-06 | refinedweb | 146 | 62.04 |
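A corrected sketch of the game (the class and helper names are my own): the key fix is comparing the number the Scanner reads, not the Scanner object itself.

```java
import java.util.Scanner;

public class MathGame {
    // Compare the user's guess against the real answer.
    static String check(double guess, double answer) {
        return (guess == answer) ? "Correct!" : "Wrong! The answer is: " + answer;
    }

    public static void main(String[] args) {
        Scanner serena = new Scanner(System.in);
        double fnum = 6, snum = 6;
        double answer = fnum + snum;

        System.out.println("6 + 6");
        if (serena.hasNextDouble()) {           // read a number, if one was typed
            double guess = serena.nextDouble(); // this value is what gets compared
            System.out.println(check(guess, answer));
        }
    }
}
```

So yes — the user's input has to be read into a numeric variable (here guess) with nextDouble(); the Scanner itself is only the tool that does the reading.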
How to take input from user Graphically ?
i have this function (i have only put a snippet of it here; it's too big and not relevant to the question).

In the QUrl I have provided a static string. What I want is for the user to provide input to construct the URL.

Let me make it a bit clearer here. The above function is in a class and not in my main.cpp file.

This function is called from a QML file: when a user clicks a button, this function (add) is called.

In short, what I want to do is: when the user clicks the button, I want something that takes user input, so that I can construct URLs.

I hope I was clear in explaining what I want.

P.S. English is not my first language.
- A Former User last edited by
Hi! So your question is "how to make a URL input dialog in QtQuick"?
Yes, kind of. I actually don't care whether the user provides me the whole URL; what I would prefer is a title from the user.

I want to do it from C++ (if possible).

Thanks for the help!!
- A Former User last edited by
Sry, I still don't get. ^_^ First you said:
This function is called from a QML file
Later you said:
I want to do it from C++
Is your GUI written in QtQuick or not? Or do you use some QWidgets / QtQuick hybrid?
Sorry for making it confusing :/ .

I will try to explain properly.

Imports used in my QML file:
import QtQuick 2.3
import QtQuick.Window 2.2
import QtQuick.Controls 1.4
import QtQuick.Dialogs 1.2
import QtWebEngine 1.1
import QtWebKit 3.0
import QtQuick 2.5
my qml file snippet :
Button {
    id: save
    width: root.width
    height: root.height / 6
    text: "SAVE PAGE"
    onClicked: {
        msg.visible = true
        dbm.add()
    }
}
my cpp file:

As you can see, when the user clicks on that button, this (add) function from a cpp file is called.

So yes, the function is called from a QML button.

What I meant when I said "want to do it from C++" was that the user input box should be invoked from that add() function only.

Why I said "I want to do it from C++": because, to be honest, I don't know how to pass QML data/values from QML to C++ :( .

If you want to see the whole codebase, I can give it :)
- A Former User last edited by A Former User
Hi! Look at the following code. It has a class named Backend which acts as the interface of your C++ business logic to your QtQuick GUI. This interface inherits QObject:
backend.h
#ifndef BACKEND_H
#define BACKEND_H

#include <QObject>

class Backend : public QObject
{
    Q_OBJECT
public:
    explicit Backend(QObject *parent = 0);
    Q_INVOKABLE QString add(QString someUrl);
};

#endif // BACKEND_H
backend.cpp
#include "backend.h"

Backend::Backend(QObject *parent) : QObject(parent)
{
}

QString Backend::add(QString someUrl)
{
    // do something
    static auto i = 0;
    return QString("%1: %2").arg(++i).arg(someUrl);
}
In the main function the Backend class is made available to the QML type system with qmlRegisterType. We instantiate a single backend object and insert it into the QML context with setContextProperty:
main.cpp
#include <QGuiApplication>
#include <QQmlApplicationEngine>
#include <QtQml>

#include "backend.h"

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);

    qmlRegisterType<Backend>("io.qt.forum", 1, 0, "Backend");
    Backend backend;

    QQmlApplicationEngine engine;
    engine.rootContext()->setContextProperty("backend", &backend);
    engine.load(QUrl(QStringLiteral("qrc:/main.qml")));

    return app.exec();
}
And finally our main.qml with a custom Dialog that calls a method of our backend object:
main.qml
import QtQuick 2.5
import QtQuick.Controls 1.4
import QtQuick.Dialogs 1.2
import io.qt.forum 1.0

ApplicationWindow {
    visible: true
    width: 600
    height: 400
    color: "plum"

    Dialog {
        id: myDialog
        visible: false
        title: "Title of Dialog"
        contentItem: Rectangle {
            color: "orange"
            implicitWidth: 400
            implicitHeight: 100
            Column {
                Text { text: "Enter some string!" }
                TextField {
                    id: myTextField
                    width: 300
                    text: ""
                }
                Row {
                    Button {
                        text: "Ok"
                        onClicked: {
                            responseText.text = backend.add(myTextField.text)
                            myDialog.close()
                        }
                    }
                    Button {
                        text: "Cancel"
                        onClicked: myDialog.close()
                    }
                }
            }
        }
    }

    Row {
        spacing: 20
        Button {
            text: "Add"
            onClicked: myDialog.open();
        }
        Text { id: responseText }
    }
}
Hope it helps!
Hey @Wieland. Thanks for the help. It is working fine. (I will customize it now to my needs.)
On Tue, May 28, 2013 at 08:58:29AM -0700, Johan Tibell wrote:
> The likely practical result of this is that every module will now read:
>
>     module M where
>
>     #if MIN_VERSION_base(x,y,z)
>     import Prelude
>     #else
>     import Data.Num
>     import Control.Monad
>     ...
>     #endif
>
> for the next 3 years or so.

Not so. First of all, if Prelude is not removed then you can just write

    import Prelude

But even this is not necessary during the transition period: see
for a way that backwards compatibility can be maintained, with additional
imports not being needed until code migrates to the split-base packages.

Thanks
Ian

--
Ian Lynagh, Haskell Consultant
Well-Typed LLP,
C++ | Constructors | Question 12
Predict the output of the following program, in which the copy constructor takes its parameter by value:

#include <iostream>
using namespace std;
class Point {
    int x;
public:
    Point(int x) { this->x = x; }
    Point(const Point p) { x = p.x; }  // parameter passed by value
    int getX() { return x; }
};
int main()
{
    Point p1(10);
    Point p2 = p1;
    cout << p2.getX();
    return 0;
}
(A) 10
(B) Compiler Error: p must be passed by reference
(C) Garbage value
(D) None of the above
Answer: (B)
Explanation: Objects must be passed by reference in copy constructors. Compiler checks for this and produces compiler error if not passed by reference.
The following program compiles fine and produces output as 10.
#include <iostream>
using namespace std;

class Point {
    int x;
public:
    Point(int x) { this->x = x; }
    Point(const Point &p) { x = p.x; }
    int getX() { return x; }
};

int main()
{
    Point p1(10);
    Point p2 = p1;
    cout << p2.getX();
    return 0;
}
The reason is simple: if the parameter is not passed by reference, then the argument p1 must be copied into the parameter p. But copying p1 into p would itself require a call to the copy constructor — so the copy constructor would be needed in order to call the copy constructor, which is not possible.
Boosted tree classifier derived from DTrees.
#include <opencv2/ml.hpp>
Boosting type. Gentle AdaBoost and Real AdaBoost are often the preferable choices.
Creates the empty model. Use StatModel::train to train the model, Algorithm::load<Boost>(filename) to load the pre-trained model.
Type of the boosting algorithm. See Boost::Types. Default value is Boost::REAL.
The number of weak classifiers. Default value is 100.
A threshold between 0 and 1 used to save computational time. Samples with summary weight \(\leq 1 - weight_trim_rate\) do not participate in the next iteration of training. Set this parameter to 0 to turn off this functionality. Default value is 0.95.
Loads and creates a serialized Boost from a file.
Use Boost::save to serialize and store a trained Boost model to disk. Load the Boost from this file again by calling this function with the path to the file. Optionally specify the node for the file containing the classifier.
NAME¶
mbsrtowcs - convert a multibyte string to a wide-character string
SYNOPSIS¶
#include <wchar.h>
size_t mbsrtowcs(wchar_t *restrict dest, const char **restrict src, size_t len, mbstate_t *restrict ps);
DESCRIPTION¶
If dest is not NULL, the mbsrtowcs() function converts the multibyte string *src to a wide-character string starting at dest, writing at most len wide characters. The conversion stops if an invalid multibyte sequence is encountered, if len wide characters have been stored, or if the multibyte string has been completely converted, including the terminating null wide character (in which case *src is set to NULL). If dest is NULL, len is ignored and the conversion proceeds as above, except that the converted wide characters are not written out to memory and no length limit exists. In both of the above cases, if ps is NULL, a static anonymous state known only to the mbsrtowcs() function is used instead.
The programmer must ensure that there is room for at least len wide characters at dest.
RETURN VALUE¶

The mbsrtowcs() function returns the number of wide characters that make up the converted part of the wide-character string, not including the terminating null wide character. If an invalid multibyte sequence was encountered, (size_t) -1 is returned, and errno is set to EILSEQ.
ATTRIBUTES¶
For an explanation of the terms used in this section, see attributes(7).
CONFORMING TO¶
POSIX.1-2001, POSIX.1-2008, C99.
NOTES¶
The behavior of mbsrtowcs() depends on the LC_CTYPE category of the current locale.
Passing NULL as ps is not multithread safe.
SEE ALSO¶
iconv(3), mbrtowc(3), mbsinit(3), mbsnrtowcs(3), mbstowcs(3)
COLOPHON¶
This page is part of release 5.13 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. | https://dyn.manpages.debian.org/testing/manpages-dev/mbsrtowcs.3 | CC-MAIN-2022-33 | refinedweb | 162 | 65.01 |
__doPostback method with colons problem
Discussion in 'ASP .Net' started by Steven Livingstone, Aug 4, 2003.
Handling multiple buttons in HTML Form in Struts
By: Charles
The <html:submit> tag is used to submit the HTML form. The usage of the tag is as follows:

<html:submit><bean:message key="button.save"/></html:submit>

This will generate HTML as follows.

<input type="submit" value="Save Me">

This usually works okay if there is only one button with "real" Form submission (the other one maybe a Cancel button). Hence it suffices to straight away process the request in CustomerAction. However you will frequently face situations where there are more than one or two buttons submitting the form. You would want to execute different code based on the button clicked. If you are thinking, "No problem. I will have different ActionMappings (and hence different Actions) for different buttons", you are out of luck! Clicking any of the buttons in an HTML Form always submits the same Form, with the same URL. The Form submission URL is found in the action attribute of the form tag as:

<form name="CustomForm" action="/App1/submitCustomerForm.do"/>

and is unique to the Form. You have to use a variation of <html:submit> as shown below to tackle this problem.
<html:submit property="step">
<bean:message key="button.save"/>
</html:submit>

The generated HTML submit button now has a name attribute:

<input type="submit" name="step" value="Save Me">

You have to now add a JavaBeans property to your ActionForm whose name matches the submit button name. In other words an instance variable with a getter and setter are required. If you were to make this change in the application just developed, you have to add a variable named "step" (with a getter and setter) to the CustomerForm class. The listing below shows the modified execute() method from CustomerAction. When the Save Me button is pressed, the custForm.getStep() method returns a value of "Save Me" and the corresponding code block is executed.
// CustomerAction modified for multiple button Forms
public class CustomerAction extends Action
{
    public ActionForward execute(ActionMapping mapping,
            ActionForm form, HttpServletRequest request,
            HttpServletResponse response) throws Exception
    {
        if (isCancelled(request)) {
            System.out.println("Cancel Operation Performed");
            return mapping.findForward("mainpage");
        }
        CustomerForm custForm = (CustomerForm) form;
        ActionForward forward = null;
        if ("Save Me".equals(custForm.getStep())) {
            System.out.println("Save Me Button Clicked");
            String firstName = custForm.getFirstName();
            String lastName = custForm.getLastName();
            System.out.println("Customer First name is " + firstName);
            System.out.println("Customer Last name is " + lastName);
            forward = mapping.findForward("success");
        }
        return forward;
    }
}

Now suppose the Form has a second submit button labeled "Spike Me". The submit button can still have the name "step" (same as the "Save Me" button). This means the CustomerForm class has a single JavaBeans property "step" for the submit buttons. In the CustomerAction you can then check whether custForm.getStep() is "Save Me" or "Spike Me". If each of the buttons had different names like button1, button2 etc. then the CustomerAction would have to perform checks as follows:
if ("Save Me".equals(custForm.getButton1())) {
    // Save Me Button pressed
} else if ("Spike Me".equals(custForm.getButton2())) {
    // Spike Me Button pressed
}

There is a catch, though. Suppose the application is internationalized: the button label is then rendered from a locale-specific resource bundle, so a non-English user might see a localized label (e.g. "Excepto Mí") instead of "Save Me". However the CustomerAction class is still looking for the hard coded "Save Me". Consequently the code block meant for the "Save Me" button never gets executed.
} else if (“Spike Me†“Excepto MÆinstead of “Save Meâ€. However the CustomerAction class is still looking for the hard coded “Save Meâ€. Consequently the code block meant for “Save Me†button never gets. hello sir
i am simulating railway r
View Tutorial By: karthick at 2008-08-30 03:09:42
2. safsdfs
View Tutorial By: nmn at 2008-09-09 01:07:49
3. farhaan, thats why I said, "Using the HTML Bu
View Tutorial By: Charles at 2008-03-21 01:36:46
4. This tutorial is very good. Actually I have been s
View Tutorial By: farhaan at 2008-03-20 15:22:13
5. To handle multiple submit buttons you can use stan
View Tutorial By: Vic at 2008-12-23 12:03:43
6. Thanks for the solution. I must say however that t
View Tutorial By: Pierre at 2009-11-19 20:16:35
7. Thank you so much. I have migrated from the .NET t
View Tutorial By: kanika at 2011-05-31 12:23:33
8. thanks for this tutorial! it's very useful
View Tutorial By: Miguel at 2011-06-26 21:15:00
9. Very useful tutorial. But if same code is executed
View Tutorial By: Jitendra Kumar Mahto at 2011-08-12 15:10:14
10. HI, Thanks for your beautiful tutorial.
View Tutorial By: Chintan at 2011-10-06 07:35:01
11. @chintan Check this answer, that could help you. h
View Tutorial By: Mallika at 2011-10-06 11:59:45
12. thanks for this blogs , i was looking for the same
View Tutorial By: prakash at 2011-11-27 18:25:32
13. great !!!!!!!!!!!!!!!!!!!!!
View Tutorial By: arjun at 2012-03-24 14:55:10
14. Awesome post....
View Tutorial By: Zeeshan Ali Ansari at 2012-11-24 06:35:20
15. sanjuanagompert3bm
View Tutorial By: dunglupinoprb at 2017-03-15 15:04:29
16. stepanieperrin1o1
View Tutorial By: hortensebogdanskinzi at 2017-03-15 17:57:20
17. kathyrnalarid2my
View Tutorial By: louieneitzked8d at 2017-03-16 02:44:55
18. rodolfosandisonsvc
View Tutorial By: joshuabolognese1w8 at 2017-03-17 02:44:10
19. magdaspotoh8l
View Tutorial By: thaddeusrobellantt at 2017-03-21 15:25:06
20. savannahstalmaiio
View Tutorial By: lindseycollenxi3 at 2017-03-28 10:49:51
21. JasonNix
View Tutorial By: JasonNix at 2017-04-13 04:28:44
22. JasonNix
View Tutorial By: JasonNix at 2017-04-25 09:49:11 | https://java-samples.com/showtutorial.php?tutorialid=577 | CC-MAIN-2022-33 | refinedweb | 917 | 60.31 |
On Wed, Oct 22, 2008 at 02:08:26PM -0700, Eric W. Biederman wrote:
> Greg.

Ah, ok, I really don't think I want to know more :)

> Further devices like eth0@4e are completely unusable to the udev
> rules in the initial network namespace because they can not talk
> to or affect them.

Oh, good point.

> As I read it Ben's ``solution'' puts entries in sysfs that are
> completely unusable to udev.

That's not a good thing to do, if udev can't see them, then HAL can't
see them, then the rest of userspace usually has no idea they are
present either.

thanks,

greg k-h
Umbraco/Create xslt exstension like umbraco.Library in C
Create a new xslt extension like umbraco.library in C#.
Sometimes you need more functionality in your xslt, and most of the time umbraco.Library is enough. But what do you do if that isn’t enough?
There are 2 ways to create your own functions
1. Inline code. 2. xslt extension
My opinions.
Inline code.
Inline code get my xslt look messy, and are difficult to reuse, but works fine if you only need a single function. More info about inline code can be found here
xslt extension.
xslt extension on the other hand looks much cleaner, and is easy to re/use again.
Step by Step Ill go through it step by step
1. Create a class library, in my case “ronnie.library”.
2. Create your class/es that you need. In my case “dateMethods.cs”
3. Create the methods you need (remember the methods have to be public and static) e.g.
public static int ugeNummer(DateTime dato) { string res; res = Microsoft.VisualBasic.DateAndTime.DatePart(Microsoft.VisualBasic.DateInterval.WeekOfYear, dato, Microsoft.VisualBasic.FirstDayOfWeek.Monday, Microsoft.VisualBasic.FirstWeekOfYear.System).ToString(); return Int32.Parse(res); }
4. When you are done creating the methods, build and copy the dll in my case “ronnie.library.dll” into the bin folder.
5. Now you just have to register your xslt extension, and this is done in xsltExtensions.xml (placed in the config folder).
6. Open the file and add the following line. Note that starting with Umbraco 4.5, /bin/ is not needed anymore!
<ext assembly="/bin/ronnie.library" type="ronnie.dateMethods" alias="CoolDateMethods" />
- Assembly
- here you type where you have placed your dll file, (without the .dll extension)
- type
- ".NET_namespace.ClassName” Here you have to write the namespace followed by a dot and the class you want to use.
- alias
- This is your xmlns like umbraco.Library, call it what you like. In my case CoolDateMethods
7. Last but not least you have to remember to add the xmlns in your xslt document; this is done like this:
xmlns:CoolDateMethods ="urn:CoolDateMethods" exclude-result-prefixes="msxml umbraco.library CoolDateMethods
Now you should be ready to use your new xslt extension. Hope this quick’n dirty article was informative for you.
//Ronnie | http://en.wikibooks.org/wiki/Umbraco/Create_xslt_exstension_like_umbraco.Library_in_C | CC-MAIN-2015-14 | refinedweb | 379 | 61.83 |
Troy Kirkland: Thoughts from a Microsoft Consultant about getting further value from Windows Server (feed: http://blogs.technet.com/b/troy_kirkland/atom.aspx, updated 2005-07-11).

Distributed File System

For the last couple of years I've presented at Microsoft TechEd about the use of DFS (and hope to present this year as well, covering the changes in R2).

I can't see why there has not been a greater uptake of DFS, and can only hope to talk to more people and maybe supply some pointers that may make it easier to deploy.

For those new to DFS, start looking here. If you would like more information please email me.

First piece of advice. DFS works well on Windows NT4/Windows 2000 servers and Windows 2000 clients, with caveats around some of the functionality (Offline Folders, DFS namespace limitations, etc.). If you do not intend to use Windows Server 2003 as the DFS root (link targets are not a major issue on downlevel Windows servers or Samba boxes) and Windows XP clients, then my advice is to use DFS to provide an abstracted server namespace for documents. For example, if you have a number of servers holding documents, you can put DFS in front to provide a more consistent address space for your users and also more flexibility to move servers around (decommissioning, etc.). If you are using Windows Server 2003 DFS namespaces and Windows XP clients, then you can look at more system processes using DFS (Offline Folders, Folder Redirection, etc.).

Second piece of advice. Become comfortable with how DFS masks the underlying file system. A DFS link target could be published as \\company.co.nz\DFSData\Users while pointing to a UNC share and path of \\server1\Data\NZ\Users\TroyK. In a traditional UNC environment the folder could have been shared as \\server1\TroyK$. When troubleshooting DFS issues (especially with Offline Folders / Folder Redirection) it is important that the permissions are checked at each layer of the path. In this example, for Offline Folders to work the user would need LIST permissions to \\server1\Data and the NZ and Users folders, and then WRITE/EXECUTE etc. to the TroyK folder. This can be one of the more frustrating learning curves for IT staff.

Third piece of advice. Read the background articles on Offline Folders / Folder Redirection here, and the DFS support matrix here (especially the question "Can I use DFS with Offline Files and redirected My Documents folders?").

Fourth piece of advice. Understand your data and replication. Shared read-only data can be replicated more easily than shared read/write data. Also plan the impact on your network; this is one area where the R2 release of DFS should be a major boon.

Fifth piece of advice. Give it a go. You'll hopefully be amazed with how easy it is, and there is always good resource on the internet, or email me to see what I can do.

Enjoy.

start somewhere

Well, I've always meant to give this a go, so I may as well start here.

What I hope to talk about is the use of technologies that are available in Windows Server (especially 2003), and in particular those that have no incremental licensing cost: a case of working with the stuff that most companies already own. Too often I see companies that still view Windows Server (2000 and 2003) as a Windows NT replacement, and I think it is the job of Microsoft and the IT community to see what we can do to help get more from the platform.

To start then, I think I'll cover the following areas:

- Active Directory. Moving beyond the NT4 SAM. Application authentication, etc.
- Look at LDAP (ADAM and AD) and interop with other LDAP repositories.
- Distributed File System (DFS). My favorite Windows Server component. Look at NT4/Windows 2000 vs Windows Server 2003 vs Windows Server 2003 R2.
- Directory federation. Forest trust / SSO, etc.
- Public Key Infrastructure (PKI).

My attention span is similar to a goldfish, one with a short attention span, so I think most of my comments will be what I am thinking of or working on at the time. Or so I think now; only time will tell.

Anyway, on with the show.

Troy Kirkland
Opened 10 years ago
Closed 4 years ago
#5241 closed Bug (fixed)
Response middleware runs too early for streaming responses
Description
In order to output very large HTML files or do progressive rendering, I let a view return an iterator. The actual HTML is generated at the last step of request processing. It works fine as long as I don't use gettext to translate any variable/string/template/... during generation. If I do, I always get the default translation.
I hope the following code snippet will clarify what I mean:
    def test(request):
        def f():
            print dict(request.session)
            yield "<html><body>\n"
            for lang in ('es', 'en'):
                yield '<a href="/i18n/setlang/?language=%(lang)s">%(lang)s</a><br>' % locals()
            for i in range(10):
                yield (_('Current time:') + ' %s<br>\n') % datetime.datetime.now()
                time.sleep(1)
            yield "Done\n</body></html>\n"
        return HttpResponse(f())
In this case 'Current time:' is never translated (it is easy to fix in this case but not in others).
I found the problem and the patch working with Django 0.96 but I believe it also applies to the development version.
Attachments (1)
Change History (18)
Changed 10 years ago by
comment:1 Changed 10 years ago by
comment:2 Changed 9 years ago by
Passing iterators to HttpResponse and hoping they won't be entirely pulled into memory is not supported. There are too many pieces of middleware and other code that assume they can do random (or at least repeatable) access to the response.content attribute.
So it's not worth doing this kind of workaround piecemeal, without a clear plan as to how to write iterator-aware middleware (which I suspect is close to impossible, given the need to access things like the length, in many cases) or specify that certain pieces of middleware shouldn't be run.
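A minimal sketch of the "repeatable access" problem described above, in plain Python with no Django required: reading an iterator-backed body once (for example, to compute a Content-Length) exhausts it, so the next consumer sees nothing.

```python
def body():
    # Stand-in for a view that streams its response.
    yield "<html>"
    yield "content"
    yield "</html>"

resp = body()                 # the view returns a generator
first_read = "".join(resp)    # e.g. middleware measuring the content
second_read = "".join(resp)   # the actual send: iterator already exhausted

# first_read == "<html>content</html>", second_read == ""
```

Any middleware that touches the content before the WSGI layer sends it has silently consumed the whole response.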
comment:3 Changed 8 years ago by
I am working on a site (my own) that needs translations for both Dutch and English. I found this patch to work for situations where long translation strings are used in templates in combination with for loops.
For example, in my case, I have some text (not even too long, but it "expands" to multiple lines in the .po file), and a for loop:
{% trans "SmashBits werkt aan software. Voor op het web, of voor op de Mac. Over het laatste binnenkort meer." %} {% comment %} {% for lang in LANGUAGES %} <div class="lang{% if forloop.first %} first{% endif %}"> <form action="/i18n/setlang/" method="POST" name="Form_{{ lang.0 }}"> <input name="next" type="hidden" value="/"> <input type="hidden" name="language" value="{{ lang.0 }}"> <a href="#" onclick="document.Form_{{ lang.0 }}.submit();">{{ lang.1 }}</a> </form> </div> {% endfor %} {% endcomment %} <div class="lang first"> <form action="/i18n/setlang/" method="POST" name="Form_en"> <input name="next" type="hidden" value="/"> <input type="hidden" name="language" value="en"> <a href="#" onclick="document.Form_en.submit();">English</a> </form> </div> <div class="lang"> <form action="/i18n/setlang/" method="POST" name="Form_nl"> <input name="next" type="hidden" value="/"> <input type="hidden" name="language" value="nl"> <a href="#" onclick="document.Form_nl.submit();">Nederlands</a> </form> </div>
The text is in Dutch, but will be translated when the user has English as the default language (as can be set by using one of the two forms). The trick is that when I use the code in the {% comment %} block, the text is not translated, but when I simply write it all down, translating works!
I don't have too much Python experience, let alone that I know anything about the Django architecture, but the patch works! I reopened this issue because I think this is an issue with the iterator of the template, i.e. the for tag.
(patch/reproduction done on 1.0.2.)
comment:4 Changed 8 years ago by
This isn't "ready for checkin" for so many reasons. Firstly, the patch isn't very good (there shouldn't be any reason to use
new.instancemethod; it can be done similarly to other places where methods are created in Django). Secondly, this is only one tiny piece of allowing iterators to work with HttpResponses and piecemeal work on that isn't particularly useful at this point. They are simply unsupported right now, as I noted above.
"The patch works" is only one part of any solution. It has to solve a problem properly, not just make the symptom go away.
comment:5 Changed 8 years ago by
comment:6 Changed 8 years ago by
comment:7 Changed 6 years ago by
comment:8 Changed 5 years ago by
Change UI/UX from NULL to False.
comment:9 Changed 5 years ago by
Change Easy pickings from NULL to False.
comment:10 follow-up: 11 Changed 5 years ago by
To be clear, the problem is that LocaleMiddleware.process_response deactivates the current translation, and that everything that runs afterwards uses the default locale.
88b1183425631002a5a8c25631b1b1fad7eb23c5 (the commit on the http-wsgi-improvements branch) isn't acceptable because it removes HttpResponse's current ability to stream. People already use streaming responses (even though they're fragile), and #7581 will add official support for them.
But improvements to streaming responses aren't going to fix this:
- process_response runs before beginning to send the response;
- the point of streaming responses is to generate content on the fly while sending the response.
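The ordering can be sketched without Django at all. The names below are simplified stand-ins, not the real middleware API: the "middleware" runs as soon as the view returns, but the generator's body only executes when the response is iterated afterwards, so it sees the torn-down state.

```python
active_language = ["es"]            # stand-in for the thread-local translation

def view():
    def stream():
        # Runs only when the response is consumed, i.e. after middleware.
        yield "lang=%s" % active_language[0]
    return stream()

def locale_process_response(response):
    active_language[0] = "en"       # stand-in for translation.deactivate()
    return response

response = locale_process_response(view())
content = "".join(response)         # content generated after deactivation
# content == "lang=en", not the "lang=es" the view ran under
```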
Stupid question — why does LocaleMiddleware need to deactivate the translation in process_response? Couldn't it leave the translation in effect until the next request? This behavior has been here since the merge of the i18n branch:
comment:11 Changed 5 years ago by
> Stupid question — why does LocaleMiddleware need to deactivate the translation in process_response? Couldn't it leave the translation in effect until the next request? This behavior has been here since the merge of the i18n branch:

+1 to remove translation deactivation, unless the test suite proves we're wrong.
comment:12 Changed 5 years ago by
comment:13 Changed 5 years ago by
Simply removing translation.deactivate() causes some failures, but that's more a lack of isolation in the test suite than anything else.
comment:14 Changed 5 years ago by
The general problem is that middleware's process_response (and possibly process_exception) run before the content of a streaming response has been generated. This doesn't match the current assumption that these middleware run after the response is generated.
At least the transaction middleware suffers from the same problem. If we can't fix both at the same time, we should open another ticket for that one.
Off the top of my head -- we could:
- run these middleware methods at a later point,
- introduce a new middleware method for streaming responses.
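One shape the second option could take, sketched in plain Python (illustrative only; this is not the API Django ultimately adopted): wrap the streaming body so per-response teardown runs only after the last chunk has been produced.

```python
def close_after(iterable, cleanup):
    """Yield every chunk of `iterable`, then run `cleanup` exactly once."""
    try:
        for chunk in iterable:
            yield chunk
    finally:
        cleanup()   # runs after exhaustion (or if iteration is abandoned)

events = []
chunks = close_after(iter(["a", "b"]), lambda: events.append("deactivated"))
for chunk in chunks:
    events.append(chunk)
# events == ["a", "b", "deactivated"]
```

With this shape, something like translation deactivation would fire after the client has received the whole body rather than before generation starts.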
#19519 is related, but it's a different problem -- it's about firing the request_finished signal.
comment:15 Changed 4 years ago by
comment:16 Changed 4 years ago by
After thinking about this problem for some time, I don't think it's feasible to run the middleware after starting a streaming response.
Since you already started sending output:
- You cannot change headers — the primary task of process_response.
- You cannot handle exceptions gracefully — the primary task of process_exception.
So the answer here is to stop deactivating the translation in process_response.
Patch | https://code.djangoproject.com/ticket/5241 | CC-MAIN-2017-30 | refinedweb | 1,210 | 61.67 |
iMovieRecorder Struct Reference
Using this interface you can communicate with the MovieRecorder plugin and programmatically start, pause and stop the recorder. More...
#include <ivaria/movierecorder.h>
Detailed Description
Using this interface you can communicate with the MovieRecorder plugin and programmatically start, pause and stop the recorder.
This plugin uses a configuration file (by default in "data/config-plugin/movierecorder.cfg") to setup the various parameters of the recording sessions.
The easiest way to use this plugin is to load it at application launch time by adding the option "-plugin=movierecorder" on the command line, then the keys to start, stop and pause the recording are by default "ALT-r" and "ALT-p".
- Remarks:
- The plugin is GPL, not LGPL.
Definition at line 43 of file movierecorder.h.
Member Function Documentation
Return whether or not a movie recording is currently paused.
Return whether or not a movie is currently being recorded.
Pause an in-progress recording.
Set the format of the filename to be used to record the movie.
The rightmost string of digits in this format will be automatically replaced with a number (eg with a format "/this/crystal000.nuv" the movie files created in the current directory will be called "crystal001.nuv", "crystal002.nuv" and so on). Using this method will overwrite the value defined in the configuration file.
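The "rightmost string of digits" rule described above can be sketched in a few lines of Python. This is an illustration of the documented behaviour, not the plugin's actual C++ code, and the part that counts upward past existing files is omitted:

```python
import re

def numbered_name(fmt, n):
    """Replace the rightmost run of digits in `fmt` with `n`, zero-padded
    to the same width, mimicking the filename-format rule described above."""
    last = list(re.finditer(r"\d+", fmt))[-1]
    return fmt[:last.start()] + str(n).zfill(len(last.group())) + fmt[last.end():]

# With the format "/this/crystal000.nuv" from the description above:
# numbered_name(fmt, 1) -> "/this/crystal001.nuv", then "...002.nuv", and so on.
```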
Set the VFS file that will be used to record the movie.
If the file already exists then it will be overwritten. Using this method also overwrite the behavior defined by the filename format (see eg SetFilenameFormat()).
Start recording using the settings in the configuration system.
Stop recording if a recording is in progress.
Resume an in-progress recording.
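Taken together, the members above imply a small state machine. Here is a pure-Python sketch of those semantics — the real interface is C++, and the transition behaviour here is an assumption inferred from the method descriptions, not taken from the plugin's source:

```python
class RecorderState:
    """Mirror of the documented Start/Pause/UnPause/Stop semantics."""

    def __init__(self):
        self.recording = False
        self.paused = False

    def start(self):        # Start(): begin recording, unpaused
        self.recording, self.paused = True, False

    def pause(self):        # Pause(): only meaningful while recording
        if self.recording:
            self.paused = True

    def unpause(self):      # UnPause(): resume an in-progress recording
        if self.recording:
            self.paused = False

    def stop(self):         # Stop(): end the recording entirely
        self.recording = self.paused = False

    def is_recording(self):
        return self.recording

    def is_paused(self):
        return self.recording and self.paused
```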
The documentation for this struct was generated from the following file:
- ivaria/movierecorder.h
Generated for Crystal Space 2.0 by doxygen 1.6.1 | http://www.crystalspace3d.org/docs/online/api-2.0/structiMovieRecorder.html | CC-MAIN-2014-42 | refinedweb | 302 | 59.6 |