MREC
MREC is a 300x250 banner. This type can be useful if the application interface has a large free area for placing a banner.
1. Manual Caching
MREC does not support autocache.
To cache MREC use:
// MREC view
mrec.loadAd()
2. Checking if MREC Has Been Loaded
// MREC view
mrec.isReady
3. Displaying MREC
AppodealMRECView is a subclass of AppodealBannerView. The size of AppodealMRECView is 300x250.
MREC ads are refreshed every 15 seconds automatically by default. To display MREC, you need to call the following code:
import UIKit
import Appodeal

class YourViewController: UIViewController, AppodealBannerViewDelegate {
    override func viewDidLoad() {
        super.viewDidLoad()
        // required: init ad banner
        let mrecView: AppodealMRECView = AppodealMRECView()
        mrecView.usesSmartSizing = false
        mrecView.rootViewController = self
        // optional: set delegate
        mrecView.delegate = self
        // required: add banner to superview and call -loadAd to start banner loading
        self.view.addSubview(mrecView)
        mrecView.loadAd()
    }
}
4. Using MREC Callbacks
Callbacks are used to track different events of an ad's lifecycle, e.g., when an MREC has been loaded.
5. Hiding MREC
MREC is a view. To hide MREC, remove it from its superview:
// MREC view
mrec.removeFromSuperview()
6. MREC Placements
Appodeal SDK allows you to tag each impression with a different placement. To be able to use placements, you need to create them in the Appodeal Dashboard. Read more about placements.
Appodeal.showAd(.MREC, forPlacement: placement, rootViewController: self)
Appodeal.canShow(.MREC, forPlacement: placement)
If you have no placements, or you call Appodeal.show with a placement that does not exist, the impression will be tagged with the 'default' placement and its corresponding settings applied.
Important!
Placement settings affect ONLY ad presentation, not loading or caching.
7. Getting Predicted eCPM
This method returns the expected eCPM for the cached ad. The amount is calculated based on historical data for the current ad unit.
Appodeal.predictedEcpm(for: .MREC)
8. Checking if MREC Has Been Initialized
Appodeal.isInitialized(for: .MREC)
Returns true if MREC was initialized.
9. Checking if Autocache is Enabled for MREC
MREC does not support autocache.
I didn't understand why the normalAt was (0,0,-1).
Here is the function that draws the star, with the bug:
I think some one mentioned that it might be the way it was drawn that causes the issue.
Code:
def execute(self, obj):
    try:
        if obj.OuterRadius < obj.InnerRadius:  # you cannot have it smaller
            obj.OuterRadius = obj.InnerRadius
        _points = []
        for i in range(0, obj.Corners):
            alpha = _math.pi * (2 * i + 2 - obj.Corners % 2) / (obj.Corners)
            if i % 2 == 1:
                radius = obj.InnerRadius
            else:
                radius = obj.OuterRadius
            y = _math.cos(alpha) * radius
            x = _math.sin(alpha) * radius
            _points.append(App.Vector(x, y, 0.0))
            if i == 0:
                saveFirstPoint = App.Vector(x, y, 0.0)
            if alpha > obj.Angle:
                break
        _points.append(saveFirstPoint)
        test = _part.makePolygon(_points)
        obj.Shape = _part.Face(test)
        if hasattr(obj, "Area") and hasattr(obj.Shape, "Area"):
            obj.Area = obj.Shape.Area
    except Exception:
        pass  # (the except branch was lost in the original post's formatting)
Actually, changing the following lines fixed the normalAt to be (0,0,1), and that solved the issue with extruding the face, which was going down rather than up.
I'm sharing this problem and its cause in case you run into something similar.
Code:
x = _math.cos(alpha) * radius
y = _math.sin(alpha) * radius
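For anyone wondering why swapping cos and sin fixes the normal: it reverses the winding order of the polygon's points, and the face normal follows the winding. A small standalone sketch (plain Python, no FreeCAD required) shows the signed area flipping sign between the two orderings:

```python
import math

def signed_area(points):
    # Shoelace formula: positive for counter-clockwise winding,
    # negative for clockwise winding.
    area = 0.0
    for (x1, y1), (x2, y2) in zip(points, points[1:] + points[:1]):
        area += x1 * y2 - x2 * y1
    return area / 2.0

def star_points(corners, sin_for_x):
    # Same angle/radius scheme as the forum code, with fixed radii 2.0/1.0.
    pts = []
    for i in range(corners):
        alpha = math.pi * (2 * i + 2 - corners % 2) / corners
        radius = 1.0 if i % 2 == 1 else 2.0
        if sin_for_x:
            x, y = math.sin(alpha), math.cos(alpha)  # original order: normal (0,0,-1)
        else:
            x, y = math.cos(alpha), math.sin(alpha)  # fixed order: normal (0,0,1)
        pts.append((x * radius, y * radius))
    return pts

print(signed_area(star_points(10, True)) < 0)   # True: clockwise winding
print(signed_area(star_points(10, False)) > 0)  # True: counter-clockwise winding
```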
When Apple open-sourced Swift, many people were quite excited to use this interesting new language on other platforms. Early on, Swift was ported to Linux and people began to look into building servers using Swift. More recently, IBM’s existing partnership with Apple deepened as they took a prominent role in the new Server APIs Project. As an iOS developer at ustwo, I decided to take server-side Swift out for a spin on Linux.
To explore both writing Swift on Linux and the Swift Package Manager, I decided to create a little sample called Mockingbird. Mockingbird is the beginnings of a mock server that takes a Swagger specification and stubs out the various endpoints defined.
While I chose to use IBM’s Kitura framework due to their relationship with Apple and because they provide a cloud platform-as-a-service themselves, there are many alternatives that are worth considering (Perfect, Vapor and Zewo just to name a few).
Why Write a Server With Swift?
Beyond simple curiosity or strong preferences for the Swift language, why might you want to build a server with Swift? There are a few reasons that come to mind, but ultimately you need to decide whether they make sense in your context.
Writing your server in Swift in addition to an iOS or macOS app (and maybe Android and Windows in the future!) allows the two to share code and frameworks. Having to only write the code once saves time both in the short-term as well as the long-term when looking at maintenance. It also reduces the overhead in terms of testing. All of this is particularly appealing to an indie developer, but can be beneficial to larger enterprises as well.
Swift, as with any language, is opinionated and is designed to solve certain problems in a particular way. For example, Swift favours static typing. Swift also prefers short, expressive code over verbosity. Not that Swift is better than other languages for these things, quite the contrary. Each language is a tool and has its place. But if these things appeal to you, you may be interested in using Swift. Also, if you plan to run on iOS, tvOS, or watchOS your choices of languages that have first-party support and frameworks are limited. Apple’s focus lately has definitely been on Swift over Objective-C or other languages.
Building a Linux Server with Swift
Building Mockingbird using Kitura and the various related packages that IBM has developed made it extremely easy to start-up and tie together basic logic to a HTTP server. The bulk of the code written was for file management and parsing. Little code was required to build the server itself.
The biggest challenges I ran into developing this mini-server fell into two categories. First, Apple maintains two versions of their Foundation framework, which is one of the key frameworks most Apple developers use. The Foundation framework on Darwin (i.e. macOS, iOS, etc.) and the Foundation framework on Linux are different implementations. In other words, these implementations do not always produce the same output when you provide them the same input. These implementations are also not entirely API compatible. Secondly, Kitura leaves a lot to be desired in terms of testability.
There were a number of occasions where I would have the package building perfectly on macOS, only to find that when I fired up my Docker container it blew up running the tests or starting the server. Scattered throughout the code you will see snippets where I had to provide conditional compilation blocks such as:
#if os(Linux)
let regexObj = try? RegularExpression(pattern: Endpoint.kituraPathRegexString, options: [])
#else
let regexObj = try? NSRegularExpression(pattern: Endpoint.kituraPathRegexString)
#endif
While I did not encounter it often in my small implementation, Apple's swift-corelibs-foundation (the open-sourced version of Foundation) still has many parts not yet implemented (do a search for NSUnimplemented() in the repository). For anyone using Swift on Linux, I strongly recommend that you star this repository as you will likely need to reference what has been implemented and what may have been implemented differently on Linux versus macOS.
It was also non-trivial writing tests for my Kitura implementation. Walking the API endpoints defined by the server was not possible, due to the internal scoping of the elements property of the router and not having a publicly accessible iterator for them either (even read-only). Nor does Kitura provide a testing framework to remove the boilerplate that is common to all endpoint tests (see ProcedureKit for a good example of how to do this). Thus, I wrote my own simplified version in EndpointTests.swift to make testing easier.
Using the Swift Package Manager
The Swift Package Manager (SwiftPM) is Apple's new, cross-platform dependency manager. Unlike Swift itself which has clear release goals and (some) stability within a given version, the SwiftPM is still in early development and changes rapidly. It also does not support iOS, watchOS, nor tvOS at this time. So don't drop your use of CocoaPods or Carthage just yet. It also comes with a number of constraints/limitations when dealing with Xcode. Writing your SwiftPM Package.swift file is easy enough and uses a JSON-like syntax. It tries to do intelligent things about matching your tests with the appropriate source files, linking to common Apple frameworks such as Foundation or Dispatch, and can generate an Xcode project file for you. Unfortunately, that is about all it does at this point (though to be fair, as I said it's still early days). I ran into three major issues while using the SwiftPM, even in this small example.
1. SR-2866: The SwiftPM currently does not have a way to specify resources (such as assets, test files, etc.) to be included in the package. Mockingbird works around this by adding a COPY command in the Dockerfile and providing an absolute path to the resources in the code. To provide better support in Xcode, part of the xcodeproj_after_install.rb script adds the resources to a Copy Files Build Script Phase for the ResourceKit target. Thus ResourceKit has three ways for generating the appropriate file url for a given resource: absolute path using the Linux (Docker) file layout when building on Linux, absolute path using the macOS file layout when using the Swift compiler on a Mac (i.e. swift build and swift test), or a resource bundle when using Xcode on a Mac. Not having resource bundling would also prevent some of ustwo's open source libraries, such as FormValidator-Swift, from supporting SwiftPM at this time.
2. SR-3033: The SwiftPM currently cannot test an executable package (i.e. one with a main.swift file). Mockingbird works around this by placing as much code as possible within a library (MockServerKit) and only a minimal main.swift file in the executable (MockServer).
3. SR-3583: The SwiftPM creates a .build folder when building. No distinction is made for the operating system when building. So if the .build folder is copied from a macOS build to a Linux server and run, it may or may not compile or test correctly. This GitHub PR seeks to either warn the user or place the build artifacts inside a top-level folder specifying the operating system. Mockingbird works around this by adding the .build folder to the .dockerignore file and doing a clean build on the Linux server. (For more history on this issue, see GitHub PR #807).
Another quirk of SwiftPM is that all top-level folders of the Sources and Tests directories define modules. It was a bit disconcerting at first to have my namespaces defined by my folder structure rather than by a configuration file or a declaration within the source files. While this is not a bad thing nor a challenge, it was different than what I have experienced developing for Apple's platforms in the past.
Testing and, in particular, continuous integration was also a challenge. Due to the challenges outlined and the distinctions of the frameworks on the various platforms, setting up the Travis CI support took a bit of additional effort. Many thanks to the SwiftLint project on whose implementation Mockingbird’s is based.
Final Thoughts
Overall, I really enjoyed spending time with Swift on Linux and taking the SwiftPM out for a test drive. I feel that Swift and the various frameworks that support our development are starting to mature to the point where you could comfortably build servers as an indie developer. There will still be growing pains, but it may be more accessible to you than having to learn another language and all of its standard libraries to get your server up and running.
Further Reading and Viewing
Below are additional talks and blog posts about building a Linux server with Swift. Each of us have encountered different challenges, so they are worth a read.
– Jeff Bergier’s Building a Production Server Swift App: Lessons Learned: This is a great little talk about Bergier’s experience building a Swift server. There are some additional examples of the differences between Foundation on Darwin versus Linux.
– Kitura Demo (2016–09–20): An excellent demo and video tutorial as part of the Server-Side Swift Year-Long Conference.
– WWDC 2016: Going Server-side with Swift Open Source: A WWDC talk from this year on using Swift on the server.
This piece was written by Aaron McTavish.
The following block of code queries a table with ~2000 rows. The loop takes 20s to execute! From my limited knowledge, I don't think I'm doing 2000 queries, just the one, but perhaps I'm not understanding the '.' operator and what it's doing behind the scenes. How can I fix this loop to run more quickly? Is there a way to adjust the top level query s.t. the 2nd for loop is not making a total of 3000 queries (if that's in fact what's going on)?
Here is a test block of code I made to verify that it was in fact this inner loop that was causing the massive time consumption.
block = []
cnt = 0
for blah in dbsession.query(mytable):
    tic = time.time()
    for a in blah.component:
        cnt += 1
    block.append(time.time() - tic)
print "block: %s seconds cnt: %s" % (sum(block), cnt)
# block: 20.78191 seconds cnt: 3021
for blah in dbsession.query(mytable).options(joinedload(mytable.componentA)).options(joinedload(mytable.componentB)).options(joinedload(mytable.componentC)):
Each time you access .component another SQL query is emitted.
You can read more at Relationship Loading Techniques, but to load it all at once you can change your query to the following:
from sqlalchemy.orm import joinedload

dbsession.query(mytable).options(joinedload('component'))
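To see why the inner loop is slow, it helps to count queries. The snippet below is not SQLAlchemy; it is a minimal stand-in using the stdlib sqlite3 module that mimics what lazy relationship loading does: one query for the parent rows, then one query per parent for its children. That is the classic N+1 pattern that joinedload avoids:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE parent (id INTEGER PRIMARY KEY);
    CREATE TABLE child (id INTEGER PRIMARY KEY, parent_id INTEGER);
""")
conn.executemany("INSERT INTO parent (id) VALUES (?)", [(i,) for i in range(100)])
conn.executemany("INSERT INTO child (parent_id) VALUES (?)",
                 [(i,) for i in range(100) for _ in range(3)])

query_count = 0

def run(sql, args=()):
    # Execute a statement while counting how many round-trips we make.
    global query_count
    query_count += 1
    return conn.execute(sql, args).fetchall()

# Lazy loading (what accessing blah.component does on each iteration): 1 + N queries.
parents = run("SELECT id FROM parent")
for (pid,) in parents:
    run("SELECT id FROM child WHERE parent_id = ?", (pid,))
lazy_queries = query_count

# Eager loading (what joinedload arranges): a single JOIN query.
query_count = 0
run("SELECT parent.id, child.id FROM parent "
    "LEFT JOIN child ON child.parent_id = parent.id")
eager_queries = query_count

print(lazy_queries, eager_queries)  # -> 101 1
```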
This tutorial consists of five steps. The first three steps are very similar to those in the 15 Minutes Tutorial.
After you have installed Xtext on your machine, start Eclipse and set up a fresh workspace.
Step One: Create a New Xtext Project
Step Two: Write the Grammar
The wizard will automatically open the grammar file Domainmodel.xtext in the editor. As you can see, it already contains a simple Hello World grammar. Replace its contents with the following:
Some parts of this grammar are equal to the one in the 15 Minutes Tutorial, but other parts are different.
grammar org.example.domainmodel.Domainmodel with org.eclipse.xtext.xbase.Xbase

Domainmodel:
    importSection=XImportSection?
    elements+=AbstractElement*;
A Domainmodel contains an optional import section and an arbitrary number of AbstractElements. Entities get qualified names from the packages that contain them: the qualified name of an Entity ‘Baz’ which is contained in a PackageDeclaration ‘foo.bar’ will be ‘foo.bar.Baz’. In case you do not like the default behavior you will need to use a different implementation of IQualifiedNameProvider.
Entity: 'entity' name=ValidID ('extends' superType=JvmTypeReference)? '{' features+=Feature* '}';
The rule Entity starts with the definition of a keyword followed by a name. The extends clause refers to a JvmTypeReference, so an Entity can extend an arbitrary Java type. The body of an Operation is its actual implementation.
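Putting these rules together, a hypothetical model file (reusing the names from the text, and assuming the tutorial's PackageDeclaration rule of the form 'package' name '{' … '}') could look like:

```
package foo.bar {
    entity Baz {
    }
}
```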
Step Four: Define the Mapping to JVM Concepts
The syntax alone is not enough to make the language work. We need to map the domain-specific concepts to some other language in order to instruct Xtext how it is executed. Usually you define a code generator or an interpreter for that matter, but languages using Xbase can omit this step and make use of the IJvmModelInferrer.
The idea is that you translate your language concepts to any number of Java types (JvmDeclaredType). Xtext also comes with a code generator which can translate that Java model into readable Java code, including the expressions.
If you have already triggered the ‘Generate Xtext Artifacts’ workflow, the corresponding stub file has already been generated for you.
def dispatch void infer(Entity element, IJvmDeclaredTypeAcceptor acceptor, boolean isPreIndexingPhase)

The framework invokes this dispatch method for the relevant model elements, which makes sure we don't have to walk the syntax model on our own.
acceptor.accept(element.toClass(element.fullyQualifiedName)) [ ... ]
Every JvmDeclaredType you create in the model inference needs to be passed to the acceptor in order to get recognized. The extension method toClass comes from JvmTypesBuilder. This is also the place where you add members and put the XExpressions into context. Let's see what we do in the initialization block in detail:
documentation = element.documentation
Here we assign some JavaDoc to the newly created element. The assignment is translated to an invocation of the method JvmTypesBuilder.setDocumentation(JvmIdentifiableElement, String), and
element.documentation is in fact calling the extension method JvmTypesBuilder.getDocumentation(EObject). Such extension methods are explained in detail in the Xtend documentation.
if (element.superType !== null) superTypes += element.superType.cloneWithProxies
Here we set the supertype of the inferred class. Each Operation is mapped to a corresponding Java method. The documentation is translated and the parameters are added within the initializer. The line
body = feature.body registers the Operation's expression as the body of the newly created Java method. This defines the scope of the expression. The framework deduces the visible fields and parameters as well as the expected return type from that information.
Step Five : Try the Editor!
We are now able to test the IDE integration by spawning a new Eclipse using our plug-ins: right-click the project
org.example.domainmodel in the Package Explorer and select Run As → Eclipse Application.
In the new workbench, create a Java project (File → New → Project… → Java Project). Xbase relies on a small runtime library on the class path. To add it, right-click on the project and add the required library under Java Build Path → Libraries.
Next Chapter: The Grammar Language
Getting Started with AWS
Create and Manage a Nonrelational Database
with Amazon DynamoDB
Module 2: Inserting and retrieving data
You will walk through some simple examples of inserting and retrieving data with DynamoDB.
Overview
In this lesson, you walk through some simple examples of inserting and retrieving data with DynamoDB. You create your DynamoDB table using the CreateTable API, and then you insert some items using the BatchWriteItem API call. Finally, you retrieve individual items using the GetItem API call. Before you work through these examples, we discuss the data model to be used in your example online bookstore application.
In subsequent modules, you learn how to retrieve multiple items at a time by using the Query API call and how to enable additional query patterns by using secondary indexes. You also see how to update existing items in your table.
Time to Complete
15 minutes
Terminology
The following DynamoDB concepts play a key role in this module:
- Table: A collection of DynamoDB data records.
- Item: A single data record in a DynamoDB table. It is comparable to a row in a relational database.
- Attribute: A single data element on an item. It is comparable to a column in a relational database. However, unlike columns in a relational database, attributes do not need to be specified at table creation, other than the primary key discussed later in this module. Attributes can be simple types such as strings, integers, or Boolean, or they can be complex types such as lists or maps.
- Primary key: A primary key is a unique identifier for a single item in a DynamoDB table. The primary key name and type must be specified on table creation, and a primary key of the specified type must be included with each item written to a table. A simple primary key consists of a single attribute, and a composite primary key consists of two attributes: a partition key and a sort key. For example, you can create a simple primary key using “UserID” as an identifier, or create a composite primary key by combining “UserID” and “Creation_Date” as an item identifier.
Data model
When building an application, you should always take time to design the data models needed in your application logic. This data model design should consider data access needs that will be needed in your application, both for reading and writing data.
DynamoDB is a nonrelational database. With nonrelational databases, you don't need to specify the full schema upfront when creating a table. You only need to declare the primary key for your table, which uniquely identifies each record in your table. This reduces the upfront cost of designing your data model because you can easily modify your schema as your application’s needs change.
As mentioned in the “Application background” section of this tutorial’s “Introduction”, your application needs to retrieve an individual book by its title and author. Because the combination of title and author is a unique identifier of a book, you can use those attributes as the primary key of your table. Your application also needs to store information about the category of our book, such as history or biography, as well as the available formats of your book — hardcover, paperback, or audiobook — that are mapped to the item numbers in your inventory system.
With these needs in mind, you can use the following schema for your table:
- Title (a string): The title of the book
- Author (a string): The author of the book
- Category (a string) The category of the book, such as History, Biography, and Sci-Fi
- Formats (a map): The different formats that you have available for sale (such as hardcover, paperback, and audiobook) and their item numbers in your inventory system
In the following steps, you create the table by specifying the composite primary key (Author and Title) of your table. Then, you load some items into your table and read individual items from the table.
Implementation
- Create a DynamoDB table
The directory you downloaded includes a create_table.py script that creates a Books table by using the CreateTable API. You can run this script by entering the following command in the AWS Cloud9 terminal.
$ python create_table.py
If you open the create_table.py script with the AWS Cloud9 editor, you should notice:
- The script specifies the composite primary key of your table with the KeySchema argument in the CreateTable API call. Your table uses Author as the hash key and Title as the range key.
- The script specifies the provisioned throughput for your table by defining both read capacity units and write capacity units. DynamoDB lets you set read and write capacity separately, allowing you to fine-tune your configuration to meet your application’s needs without paying for costly overprovisioning.
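The module doesn't reproduce create_table.py itself, but based on the description above it plausibly builds arguments like the following. The capacity numbers here are assumptions, not values from the tutorial:

```python
# Sketch of the arguments create_table.py plausibly passes to DynamoDB.
create_table_kwargs = {
    'TableName': 'Books',
    'KeySchema': [
        {'AttributeName': 'Author', 'KeyType': 'HASH'},    # partition (hash) key
        {'AttributeName': 'Title', 'KeyType': 'RANGE'},    # sort (range) key
    ],
    'AttributeDefinitions': [
        {'AttributeName': 'Author', 'AttributeType': 'S'},
        {'AttributeName': 'Title', 'AttributeType': 'S'},
    ],
    # Read and write capacity are set separately, as the module notes.
    'ProvisionedThroughput': {'ReadCapacityUnits': 5, 'WriteCapacityUnits': 5},
}

# In the real script these kwargs go to boto3, e.g.:
#   dynamodb = boto3.resource('dynamodb', region_name='us-east-1')
#   dynamodb.create_table(**create_table_kwargs)
print(create_table_kwargs['TableName'])  # -> Books
```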
- Load items into the table
In this step, you load some books into the table. In the AWS Cloud9 terminal, run the following command.
$ python insert_items.py
This command runs the following script.
import boto3

dynamodb = boto3.resource('dynamodb', region_name='us-east-1')

table = dynamodb.Table('Books')

with table.batch_writer() as batch:
    batch.put_item(Item={"Author": "John Grisham", "Title": "The Rainmaker",
                         "Category": "Suspense",
                         "Formats": {"Hardcover": "J4SUKVGU",
                                     "Paperback": "D7YF4FCX"}})
    batch.put_item(Item={"Author": "John Grisham", "Title": "The Firm",
                         "Category": "Suspense",
                         "Formats": {"Hardcover": "Q7QWE3U2",
                                     "Paperback": "ZVZAYY4F",
                                     "Audiobook": "DJ9KS9NM"}})
    batch.put_item(Item={"Author": "James Patterson", "Title": "Along Came a Spider",
                         "Category": "Suspense",
                         "Formats": {"Hardcover": "C9NR6RJ7",
                                     "Paperback": "37JVGDZG",
                                     "Audiobook": "6348WX3U"}})
    batch.put_item(Item={"Author": "Dr. Seuss", "Title": "Green Eggs and Ham",
                         "Category": "Children",
                         "Formats": {"Hardcover": "GVJZQ7JK",
                                     "Paperback": "A4TFUR98",
                                     "Audiobook": "XWMGHW96"}})
    batch.put_item(Item={"Author": "William Shakespeare", "Title": "Hamlet",
                         "Category": "Drama",
                         "Formats": {"Hardcover": "GVJZQ7JK",
                                     "Paperback": "A4TFUR98",
                                     "Audiobook": "XWMGHW96"}})
As the preceding script shows, you used the BatchWriteItem API to load five books into the table. Each book includes the Author and Title attributes for the primary key, and Category and Formats attributes for additional information about the books. Each attribute has a type, which can be a simple type such as a string for the Category attribute, or a complex type such as a map for the Formats attribute.
Note that you inserted data over an HTTP API using the Boto 3 client library. All data access and manipulation requests are done via HTTP requests, rather than maintaining a persistent connection to the database as is common for relational database management systems.
- Retrieve items from the table
You can retrieve a single book by using the GetItem API request and specifying the primary key of the item you want.
In the AWS Cloud9 terminal, run the following command.
$ python get_item.py
This runs the following script to retrieve a single item, which is the book The RainMaker by John Grisham.
import boto3

dynamodb = boto3.resource('dynamodb', region_name='us-east-1')

table = dynamodb.Table('Books')

resp = table.get_item(Key={"Author": "John Grisham", "Title": "The Rainmaker"})

print(resp['Item'])
Your terminal should print out the full book data retrieved from the table.
$ python get_item.py
{'Title': 'The Rainmaker', 'Formats': {'Hardcover': 'J4SUKVGU', 'Paperback': 'D7YF4FCX'}, 'Author': 'John Grisham', 'Category': 'Suspense'}
Because each item in a table is uniquely identified by its primary key, the GetItem API call will always return at most one item from your table.
In the next module, you learn how to retrieve multiple items from a DynamoDB table with a single API call. You also learn how to enable multiple data access patterns in your table by using secondary indexes.
This is my class
import java.io.*;

public class bankAccount {
    //instance variables
    public String name;
    public int number;
    public double balance;
    public double deposit;

    //constructor
    public bankAccount(String name, int number, double balance, double deposit) {
        this.name = "John Smith";
        this.number = 123456;
        this.balance = 0.0;
    }

    public double getBalance() {
        return balance;
    }

    public void getDeposit(double deposit) {
        balance += deposit;
    }

    public String toString() {
        return "Name:" + name + "\n number:" + number + "\n balance = " + getBalance();
    }
}
and my main
import java.io.*;

public class testBank {
    public static void main(String[] args) {
        bankAccount b;
        b = new bankAccount();
        System.out.println(b.toString());
    }
}
I get the error "cannot find symbol" at the b=new bankAccount() part of the main. Apparently, the symbol is the bankAccount constructor.
Why won't this compile?
I know I'm not really using some of the methods in my first class, but I'm trying to fix this problem first.
CHAPTER 1
Using neural nets to recognize handwritten digits
What is a perceptron? A perceptron takes several binary inputs, x1, x2, …, and produces a single binary output:
In the example shown the perceptron has three inputs, x1, x2, x3. In general it could have more or fewer inputs. Rosenblatt proposed a simple rule to compute the output. He introduced weights, w1, w2, …, real numbers expressing the importance of the respective inputs to the output. The neuron's output, 0 or 1, is determined by whether the weighted sum ∑j wj xj is less than or greater than some threshold value. Just like the weights, the threshold is a real number which is a parameter of the neuron. To put it in more precise algebraic terms:

$$
\mathrm{output} = \begin{cases} 0 & \text{if } \sum_j w_j x_j \le \text{threshold} \\ 1 & \text{if } \sum_j w_j x_j > \text{threshold} \end{cases} \tag{1}
$$

That's the basic mathematical model. You can think of the perceptron as a device that makes decisions by weighing up evidence. For example, suppose you are trying to decide whether to go to a festival in your city. You might make your decision by weighing up three factors:
1. Is the weather good?
2. Does your boyfriend or girlfriend want to accompany you?
3. Is the festival near public transit? (You don't own a car).
We can represent these three factors by corresponding binary
variables x
1,
x2
is good, and x
1
, and x . For instance, we'd have x
3
= 0
1
= 1
if the weather is bad. Similarly, x
2
boyfriend or girlfriend wants to go, and x
2
= 0
if the weather
= 1
if your
if not. And similarly
again for x and public transit.
3
the weather, and w
2
= 2
and w
3
= 2
1
= 6
for
for the other conditions. The
larger value of w indicates that the weather matters a lot to you, 3. w j xj > threshold
j
is cumbersome, and we can make two notational
changes to simplify it. The first change is to write ∑
product, w ⋅ x
≡ ∑ w j xj
j
j
w j xj
as a dot
, where w and x are vectors whose
components are the weights and inputs, respectively. The second
change is to move the threshold to the other side of the inequality,
and to replace it by what's known as the perceptron's bias,
b ≡ −threshold
. Using the bias instead of the threshold, the
perceptron rule can be rewritten:
$$
\mathrm{output} = \begin{cases} 0 & \text{if } w \cdot x + b \le 0 \\ 1 & \text{if } w \cdot x + b > 0 \end{cases} \tag{2}
$$

For example, suppose we have a perceptron with two inputs, each with weight −2, and an overall bias of 3. Here's our perceptron:
Then we see that input 00 produces output 1, since (−2)∗0 + (−2)∗0 + 3 = 3 is positive. Here, I've introduced the ∗ symbol to make the multiplications explicit. Similar calculations show that the inputs 01 and 10 produce output 1. But the input 11 produces output 0, since (−2)∗1 + (−2)∗1 + 3 = −1 is negative. And so our perceptron implements a NAND gate!
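These hand calculations are easy to verify with a few lines of throwaway Python, using the bias form of the perceptron rule (Eq. (2)):

```python
def perceptron(x, w, b):
    # Eq. (2): output 1 if w.x + b > 0, else 0.
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# The NAND perceptron from the text: both weights -2, overall bias 3.
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, perceptron(x, (-2, -2), 3))  # -> 1, 1, 1, 0: a NAND gate
```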
In fact, we can use networks of perceptrons to compute any logical function at all, since the NAND gate is universal for computation. For example, suppose we want to use a network of perceptrons to add two bits, x1 and x2. This requires computing the bitwise sum, x1 ⊕ x2, as well as a carry bit which is set to 1 when both x1 and x2 are 1, i.e., the carry bit is just the bitwise product x1x2:
To get an equivalent network of perceptrons we replace all the NAND gates by perceptrons with two inputs, each with weight −2, and an overall bias of 3.

This notation, in which input perceptrons have an output but no inputs, is a shorthand. To see why, suppose we did have a perceptron with no inputs. Then the weighted sum ∑j wj xj would always be zero, and so the perceptron would output 1 if b > 0, and 0 if b ≤ 0. That is, the perceptron would simply output a fixed value, not the desired value (x1, in the example above). It's better to think of the input perceptrons as not really being perceptrons at all, but rather special units which are simply defined to output the desired values, x1, x2, ….
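To make the adder concrete, here is a sketch that builds the bitwise sum and carry entirely out of NAND perceptrons. The wiring below is one standard NAND-only half adder; the network described in the text has the same structure:

```python
def nand(x1, x2):
    # A NAND gate realized as a perceptron: weights (-2, -2), bias 3 (Eq. 2).
    return 1 if -2 * x1 - 2 * x2 + 3 > 0 else 0

def add_bits(x1, x2):
    a = nand(x1, x2)
    sum_bit = nand(nand(x1, a), nand(x2, a))  # computes x1 XOR x2
    carry_bit = nand(a, a)                    # computes x1 AND x2
    return sum_bit, carry_bit

for bits in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(bits, add_bits(*bits))  # sum = XOR, carry = AND
```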
A small change in the weights or bias of a single perceptron in a network can sometimes cause its output to completely flip, say from 0 to 1. We can overcome this problem by introducing a new type of artificial neuron called a sigmoid neuron. Just like a perceptron, the sigmoid neuron has inputs, x1, x2, …. But instead of being just 0 or 1, these inputs can also take on any values between 0 and 1. So, for instance, 0.638… is a valid input for a sigmoid neuron. Also just like a perceptron, the sigmoid neuron has weights for each input, w1, w2, …, and an overall bias, b. But the output is not 0 or 1. Instead, it's σ(w⋅x + b), where σ is called the sigmoid function*, and is defined by:

$$
\sigma(z) \equiv \frac{1}{1 + e^{-z}}. \tag{3}
$$

*Incidentally, σ is sometimes called the logistic function, and this new class of neurons called logistic neurons. It's useful to remember this terminology, since these terms are used by many people working with neural nets. However, we'll stick with the sigmoid terminology.
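Eq. (3) translates directly into code. A minimal sketch of the sigmoid function and a sigmoid neuron's output:

```python
import math

def sigmoid(z):
    # Eq. (3): sigma(z) = 1 / (1 + e^(-z))
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_neuron_output(x, w, b):
    # The sigmoid neuron's output is sigma(w.x + b).
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

print(sigmoid(0.0))  # -> 0.5, halfway between the two perceptron outputs
```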
To put it all a little more explicitly, the output of a sigmoid neuron
with inputs x
1,
x2 , …
, weights w
1,
w2 , …
, and bias b is
1
.
(4)
1 + exp(− ∑ w j xj − b)
To understand the similarity to the perceptron model, suppose z ≡ w ⋅ x + b is a large positive number. Then e^(−z) ≈ 0 and so σ(z) ≈ 1. In other words, when z = w ⋅ x + b is large and positive, the output from the sigmoid neuron is approximately 1, just as it would have been for a perceptron. Suppose on the other hand that z = w ⋅ x + b is very negative. Then e^(−z) → ∞, and σ(z) ≈ 0. So when z = w ⋅ x + b is very negative, the behaviour of a sigmoid neuron also closely approximates a perceptron. It's only when w ⋅ x + b is of modest size that there's much deviation from the perceptron model.
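The limiting behaviour described above is easy to check numerically. The sketch below is not from the book; it simply evaluates σ at a few hypothetical values of z = w ⋅ x + b:

```python
import math

def sigmoid(z):
    """sigma(z) = 1 / (1 + e^(-z)), as in Equation (3)."""
    return 1.0 / (1.0 + math.exp(-z))

# Large positive z = w . x + b: output is approximately 1, like a perceptron.
print(sigmoid(10.0))   # ~0.99995
# Very negative z: output is approximately 0, again like a perceptron.
print(sigmoid(-10.0))  # ~0.00005
# Modest z: genuine deviation from the perceptron's 0-or-1 behaviour.
print(sigmoid(0.0))    # 0.5
```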
What about the algebraic form of σ ? How can we understand that?
In fact, the exact form of σ isn't so important - what really matters is
the shape of the function when plotted. Here's the shape:
[Figure: plot of the sigmoid function σ(z), rising smoothly from 0.0 to 1.0 as z runs from −4 to 4.]
This shape is a smoothed out version of a step function:
[Figure: plot of the step function, jumping from 0 to 1 at z = 0.]
If σ had in fact been a step function, then the sigmoid neuron would be a perceptron, since the output would be 1 or 0 depending on whether w ⋅ x + b was positive or negative*.

*Actually, when w ⋅ x + b = 0 the perceptron outputs 0, while the step function outputs 1. So, strictly speaking, we'd need to modify the step function at that one point. But you get the idea.

By using the actual σ function we get, as already implied above, a smoothed out perceptron. Indeed, it's the smoothness of the σ function that is the crucial fact, not its detailed form. The smoothness of σ means that small changes Δwj in the weights and Δb in the bias will produce a small change Δoutput in the output from the neuron. In fact, calculus tells us that Δoutput is well approximated by

Δoutput ≈ ∑j (∂output/∂wj) Δwj + (∂output/∂b) Δb,    (5)

where the sum is over all the weights, wj, and ∂output/∂wj and ∂output/∂b denote partial derivatives of the output with respect to wj and b, respectively. Don't panic if you're not comfortable with partial derivatives! While the expression above looks complicated, with all the partial derivatives, it's actually saying something very simple (and which is very good news): Δoutput is a linear function of the changes Δwj and Δb in the weights and bias. This linearity makes it easy to choose small changes in the weights and biases to achieve any desired small change in the output. So while sigmoid neurons have much of the same qualitative behaviour as perceptrons, they make it much easier to figure out how changing the weights and biases will change the output. But if it's the shape of σ which really matters, and not its exact form,
then why use the particular form used for σ in Equation (3)? In fact,
later in the book we will occasionally consider neurons where the
output is f (w ⋅ x + b) for some other activation function f (⋅). The
main thing that changes when we use a different activation function
is that the particular values for the partial derivatives in Equation
(5) change. It turns out that when we compute those partial
derivatives later, using σ will simplify the algebra, simply because
exponentials have lovely properties when differentiated. In any
case, σ is commonly used in work on neural nets, and is the activation function we'll use most often in this book. How should we interpret the output of a sigmoid neuron? Obviously, one big difference between perceptrons and sigmoid neurons is that sigmoid neurons don't just output 0 or 1. They can have as output any real number between 0 and 1, so values such as 0.173… and 0.689… are legitimate outputs.
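As a concrete illustration of Equation (4), here is a small sketch (with made-up weights, inputs, and bias chosen purely for illustration) that computes a sigmoid neuron's output:

```python
import math

def sigmoid_output(w, x, b):
    """Output of a sigmoid neuron per Equation (4):
    1 / (1 + exp(-sum_j w_j x_j - b))."""
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical weights, inputs, and bias, just for illustration.
w = [0.7, -1.2, 0.4]
x = [0.638, 0.2, 0.9]
b = 0.1
out = sigmoid_output(w, x, b)
print(out)  # a real number strictly between 0 and 1
```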
Exercises
Sigmoid neurons simulating perceptrons, part I
Suppose we take all the weights and biases in a network of
perceptrons, and multiply them by a positive constant, c > 0.
Show that the behaviour of the network doesn't change.
Sigmoid neurons simulating perceptrons, part II
Suppose we have the same setup as the last problem - a network of perceptrons - and suppose the weights and biases are such that w ⋅ x + b ≠ 0 for the input x to any particular perceptron in the network.
Now replace all the perceptrons in the network by sigmoid
neurons, and multiply the weights and biases by a positive
constant c > 0. Show that in the limit as c → ∞ the behaviour
of this network of sigmoid neurons is exactly the same as the
network of perceptrons. How can this fail when w ⋅ x + b = 0
for one of the perceptrons?
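A numerical illustration of part II (not a proof): scaling all the weights and the bias by c also scales z = w ⋅ x + b by c, so σ(cz) approaches the perceptron's step output as c grows, provided z ≠ 0. The particular weights and inputs below are made up:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def step(z):
    """Perceptron output: 1 if z > 0, else 0."""
    return 1 if z > 0 else 0

# Fixed weights, inputs and bias with w . x + b != 0 (made up for illustration).
w, x, b = [2.0, -1.0], [0.6, 0.3], -0.5
z = sum(wi * xi for wi, xi in zip(w, x)) + b   # z = 1.2 - 0.3 - 0.5 = 0.4

# Multiplying weights and bias by c multiplies z by c,
# so sigmoid(c * z) approaches step(z) as c grows.
for c in (1, 10, 100):
    print(c, sigmoid(c * z))
```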
The architecture of neural networks

If the image is a 64 by 64 greyscale image, then we'd have 4,096 = 64 × 64 input neurons, with the intensities scaled appropriately between 0 and 1. The output layer will contain just a single neuron, with output values of less than 0.5 indicating "input image is not a 9", and values greater than 0.5 indicating "input image is a 9".

The training data for the network will consist of 28 by 28 pixel images of scanned handwritten digits, and so the input layer contains 784 = 28 × 28 neurons. If the first output neuron fires, i.e., has an output ≈ 1, then that will indicate that the network thinks the digit is a 0. If the second neuron fires then that will indicate that the network thinks the digit is a 1. And so on. A little more precisely, we number the output neurons from 0 through 9, and figure out which neuron has the highest activation value.
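The decision rule just described - number the output neurons 0 through 9 and pick the one with the highest activation - can be sketched as follows, using a hypothetical vector of output activations:

```python
# Hypothetical activations of the ten output neurons for one input image.
activations = [0.01, 0.02, 0.05, 0.01, 0.03, 0.02, 0.93, 0.04, 0.02, 0.01]

# Number the output neurons 0 through 9 and pick the one with the
# highest activation: that index is the network's guessed digit.
guess = max(range(10), key=lambda digit: activations[digit])
print(guess)  # 6
```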
Learning with gradient descent
We'll use the notation x to denote a training input. It'll be convenient to regard each training input x as a 28 × 28 = 784-dimensional vector. To quantify how well we're approximating the desired output y(x), we define a cost function*:

C(w, b) ≡ (1/2n) ∑x ‖y(x) − a‖².    (6)

*Sometimes referred to as a loss or objective function. We use the term cost function throughout this book, but you should note the other terminology, since it's often used in research papers and other discussions of neural networks.

Inspecting the form of the quadratic cost, we see that C(w, b) ≈ 0 precisely when y(x) is approximately equal to the output, a, for all training inputs, x. So our training algorithm has done a good job if it can find weights and biases so that C(w, b) ≈ 0.
This could be any real-valued function of many variables, v = v1, v2, …. To minimize C it helps to first imagine it as a function of just two variables, which we'll call v1 and v2:
What we'd like is to find where C achieves its global minimum. Now, of course, for the function plotted above, we can eyeball the graph and find the minimum. In that sense, I've perhaps shown slightly too simple a function! A general function, C, may be a complicated function of many variables, and it won't usually be possible to just eyeball the graph to find the minimum. One way of attacking the problem is to use calculus to find the minimum analytically: we could compute derivatives and use them to find places where C is an extremum. With some luck that might work when C is a function of just one or a few variables, but it turns into a nightmare when we have many more variables. (After asserting that we'll gain insight by imagining C as a function of just two variables, I've turned around twice in two paragraphs and said, "hey, but what if it's a function of many more than two variables?" Sorry about that. Please believe me when I say that it really does help to imagine C as a function of two variables.) Let's think about what happens when we move the ball a small amount Δv1 in the v1 direction, and a small amount Δv2 in the v2 direction. Calculus tells us that C changes as follows:
ΔC ≈ (∂C/∂v1) Δv1 + (∂C/∂v2) Δv2.    (7)
We're going to find a way of choosing Δv1 and Δv2 so as to make ΔC negative; i.e., we'll choose them so the ball is rolling down into the valley. To figure out how to make such a choice it helps to define Δv to be the vector of changes in v, Δv ≡ (Δv1, Δv2)^T, where T is again the transpose operation, turning row vectors into column vectors.
We'll also define the gradient of C to be the vector of partial derivatives, (∂C/∂v1, ∂C/∂v2)^T. We denote the gradient vector by ∇C, i.e.:

∇C ≡ (∂C/∂v1, ∂C/∂v2)^T.    (8)
In a moment we'll rewrite the change ΔC in terms of Δv and the
gradient, ∇C . Before getting to that, though, I want to clarify
something that sometimes gets people hung up on the gradient.
When meeting the ∇C notation for the first time, people sometimes
wonder how they should think about the ∇ symbol. What, exactly,
does ∇ mean? In fact, it's perfectly fine to think of ∇C as a single
mathematical object - the vector defined above - which happens to
be written using two symbols. In this point of view, ∇ is just a piece
of notational flag-waving, telling you "hey, ∇C is a gradient vector".
There are more advanced points of view where ∇ can be viewed as
an independent mathematical entity in its own right (for example,
as a differential operator), but we won't need such points of view.
With these definitions, the expression (7) for ΔC can be rewritten as
ΔC ≈ ∇C ⋅ Δv.    (9)
This equation helps explain why ∇C is called the gradient vector:
∇C
relates changes in v to changes in C , just as we'd expect
something called a gradient to do. But what's really exciting about
the equation is that it lets us see how to choose Δv so as to make ΔC
negative. In particular, suppose we choose
Δv = −η∇C,    (10)
where η is a small, positive parameter (known as the learning rate).
Then Equation (9) tells us that ΔC ≈ −η∇C ⋅ ∇C = −η‖∇C‖². Because ‖∇C‖² ≥ 0, this guarantees that ΔC ≤ 0, i.e., C will always decrease, never increase, if we change v according to the prescription in (10). (Within, of course, the limits of the approximation in Equation (9).) This is exactly the property we wanted! And so we'll take Equation (10) to define the "law of motion" for the ball in our gradient descent algorithm. That is, we'll use Equation (10) to compute a value for Δv, then move the ball's position v by that amount:
v → v′ = v − η∇C.    (11)
Then we'll use this update rule again, to make another move. If we
keep doing this, over and over, we'll keep decreasing C until - we
hope - we reach a global minimum.
Summing up, the way the gradient descent algorithm works is to repeatedly compute the gradient ∇C, and then to move in the opposite direction, "falling down" the slope of the valley. The rule doesn't reproduce real physical motion - it just says "go down, right now". That's still a pretty good rule for finding the minimum!
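The "law of motion" in Equation (11) is easy to try out on a toy cost. The sketch below (not from the book) repeatedly applies v → v − η∇C to the simple function C(v1, v2) = v1² + v2²:

```python
def C(v1, v2):
    """A toy cost function with its minimum at (0, 0)."""
    return v1**2 + v2**2

def grad_C(v1, v2):
    """Gradient of the toy cost C(v1, v2) = v1**2 + v2**2."""
    return (2.0 * v1, 2.0 * v2)

eta = 0.1           # learning rate
v1, v2 = 3.0, -4.0  # arbitrary starting position
for _ in range(200):
    g1, g2 = grad_C(v1, v2)
    # The "law of motion": v -> v' = v - eta * grad C  (Equation (11)).
    v1, v2 = v1 - eta * g1, v2 - eta * g2

print(C(v1, v2))  # very close to the minimum value 0
```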
To make gradient descent work correctly, we need to choose the
learning rate η to be small enough that Equation (9) is a good
approximation. If we don't, we might end up with ΔC > 0, which
obviously would not be good! At the same time, we don't want η to
be too small, since that will make the changes Δv tiny, and thus the
gradient descent algorithm will work very slowly. In practical
implementations, η is often varied so that Equation (9) remains a good approximation, but the algorithm isn't too slow.

I've explained gradient descent when C is a function of just two variables. But, in fact, everything works just as well even when C is a function of many more variables. Suppose in particular that C is a function of m variables, v1, …, vm. Then the change ΔC in C produced by a small change Δv = (Δv1, …, Δvm)^T is
ΔC ≈ ∇C ⋅ Δv,    (12)

where the gradient ∇C is the vector

∇C ≡ (∂C/∂v1, …, ∂C/∂vm)^T.    (13)
Just as for the two variable case, we can choose

Δv = −η∇C,    (14)

and we're guaranteed that our (approximate) expression (12) for ΔC will be negative. This gives us a way of following the gradient to a minimum, even when C is a function of many variables, by repeatedly applying the update rule

v → v′ = v − η∇C.    (15)

Indeed, gradient descent can be viewed as the optimal strategy for searching for a minimum: suppose we're trying to make a move Δv in position so as to decrease C as much as possible. This is equivalent to minimizing ΔC ≈ ∇C ⋅ Δv.
We'll constrain the size of the move so that ‖Δv‖ = ϵ for some small
fixed ϵ > 0. In other words, we want a move that is a small step of a
fixed size, and we're trying to find the movement direction which
decreases C as much as possible. It can be proved that the choice of
Δv
which minimizes ∇C ⋅ Δv is Δv = −η∇C, where η = ϵ/‖∇C‖ is
determined by the size constraint ‖Δv‖ = ϵ. So gradient descent can
be viewed as a way of taking small steps in the direction which does
the most to immediately decrease C .
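The claim that Δv = −η∇C with η = ϵ/‖∇C‖ is the best move of length ϵ can be checked numerically. The sketch below uses a made-up quadratic cost and compares the prescribed move against many random moves of the same length (the tolerance allows for the tiny second-order effects the first-order argument ignores):

```python
import math, random

def C(v):
    """A toy cost with different curvatures in the two directions."""
    return v[0]**2 + 3.0 * v[1]**2

def grad_C(v):
    return [2.0 * v[0], 6.0 * v[1]]

v = [1.0, 1.0]
g = grad_C(v)
norm = math.sqrt(sum(gi * gi for gi in g))
eps = 0.01  # the fixed step size ||dv|| = eps

# The move the text claims is optimal: dv = -eta * grad C, with eta = eps / ||grad C||.
best = [vi - eps * gi / norm for vi, gi in zip(v, g)]

# Compare against many random moves of the same length eps.
random.seed(0)
trial_costs = []
for _ in range(1000):
    a = random.uniform(0.0, 2.0 * math.pi)
    trial_costs.append(C([v[0] + eps * math.cos(a), v[1] + eps * math.sin(a)]))

print(C(best) <= min(trial_costs) + 1e-6)  # True: no direction does meaningfully better
```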
Exercises
Prove the assertion of the last paragraph. Hint: If you're not
already familiar with the Cauchy-Schwarz inequality, you may
find it helpful to familiarize yourself with it.
I explained gradient descent when C is a function of two variables, and when it's a function of more than two variables. What happens when C is a function of just one variable? Can you provide a geometric interpretation of what gradient descent is doing in the one-dimensional case?

Some approaches to minimizing C require computing second partial derivatives of C, and this can be quite costly. To see why it's costly, suppose we want to compute all the second partial derivatives ∂²C/∂vj ∂vk. If there are a million such vj variables then we'd need to compute something like a trillion (i.e., a million squared) second partial derivatives*! That's going to be computationally costly.

*Actually, more like half a trillion, since ∂²C/∂vj ∂vk = ∂²C/∂vk ∂vj. Still, you get the point.

How can we apply gradient descent to learn in a neural network? The idea is to use gradient descent to find the weights wk and biases bl which minimize the cost in Equation (6). To see how this works, let's restate the gradient descent update rule, with the weights and biases replacing the variables vj. In other words, our "position" now has components wk and bl, and the gradient vector ∇C has corresponding components ∂C/∂wk and ∂C/∂bl. Writing out the gradient descent update rule in terms of components, we have
wk → wk′ = wk − η ∂C/∂wk    (16)
bl → bl′ = bl − η ∂C/∂bl.    (17)

By repeatedly applying this update rule we can "roll down the hill", and hopefully find a minimum of the cost function. In other words, this is a rule which can be used to learn in a neural network.

Notice that this cost function has the form C = (1/n) ∑x Cx, that is, it's an average over costs Cx ≡ ‖y(x) − a‖²/2
for individual training examples. In
practice, to compute the gradient ∇C we need to compute the gradients ∇Cx separately for each training input, x, and then average them, ∇C = (1/n) ∑x ∇Cx. Unfortunately, when the number of training inputs is very large this can take a long time, and learning thus occurs slowly.
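The averaging just described can be made concrete with a deliberately tiny model: a single parameter w, prediction a = wx, and quadratic per-example cost. The data and starting weight below are made up for illustration:

```python
# A tiny illustration with a one-parameter "network": prediction a = w * x,
# per-example cost C_x = (y(x) - a)**2 / 2, so dC_x/dw = -(y - w*x) * x.
training_data = [(1.0, 2.0), (2.0, 3.9), (3.0, 6.1)]  # made-up (x, y) pairs
w = 0.5

def grad_Cx(w, x, y):
    return -(y - w * x) * x

# Full-batch gradient: average the per-example gradients over all n inputs.
n = len(training_data)
grad_C = sum(grad_Cx(w, x, y) for x, y in training_data) / n
print(grad_C)  # negative here, so gradient descent would increase w
```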
An idea called stochastic gradient descent can be used to speed up
learning. The idea is to estimate the gradient ∇C by computing ∇Cx for a small sample of randomly chosen training inputs. By
averaging over this small sample it turns out that we can quickly get
a good estimate of the true gradient ∇C , and this helps speed up
gradient descent, and thus learning.
To make these ideas more precise, stochastic gradient descent
works by randomly picking out a small number m of randomly
chosen training inputs. We'll label those random training inputs
X1, X2, …, Xm, and refer to them as a mini-batch. Provided the sample size m is large enough we expect that the average value of the ∇CXj will be roughly equal to the average over all ∇Cx, that is,

(1/m) ∑(j=1..m) ∇CXj ≈ (1/n) ∑x ∇Cx = ∇C,    (18)

where the second sum is over the entire set of training data. Swapping sides we get

∇C ≈ (1/m) ∑(j=1..m) ∇CXj,    (19)
confirming that we can estimate the overall gradient by computing
gradients just for the randomly chosen mini-batch.
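Here is a small numerical sketch of that estimate. The "gradients" are just made-up numbers standing in for the per-example gradients ∇Cx; averaging a random mini-batch of m of them comes close to the average over all n:

```python
import random

random.seed(1)
n = 10000
# Pretend these are the per-example gradients grad C_x (just numbers here).
per_example_grads = [random.gauss(0.7, 1.0) for _ in range(n)]
true_grad = sum(per_example_grads) / n

# Stochastic estimate: average over a small random mini-batch of size m.
m = 100
mini_batch = random.sample(per_example_grads, m)
estimate = sum(mini_batch) / m

print(true_grad, estimate)  # the estimate is close to the true average
```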
To connect this explicitly to learning in neural networks, suppose wk and bl denote the weights and biases in our neural network. Then stochastic gradient descent works by picking out a randomly chosen mini-batch of training inputs, and training with those,

wk → wk′ = wk − (η/m) ∑j ∂CXj/∂wk    (20)
bl → bl′ = bl − (η/m) ∑j ∂CXj/∂bl,    (21)
where the sums are over all the training examples Xj in the current mini-batch.

Incidentally, it's worth noting that conventions vary about scaling of the cost function and of mini-batch updates. In Equation (6) we scaled the overall cost function by a factor 1/n. People sometimes omit the 1/n, summing over the costs of individual training examples instead of averaging. This is particularly useful when the total number of training examples isn't known in advance. This can occur if more training data is being generated in real time, for instance. And, in a similar way, the mini-batch update rules (20) and (21) sometimes omit the 1/m term out the front of the sums. Conceptually this makes little difference, since it's equivalent to rescaling the learning rate η.

Exercise

An extreme version of gradient descent is to use a mini-batch size of just 1. That is, given a training input, x, we update our weights and biases according to the rules wk → wk′ = wk − η ∂Cx/∂wk and bl → bl′ = bl − η ∂Cx/∂bl. Then we choose another training input, and update the weights and biases again. And so on, repeatedly. This procedure is known as online, on-line, or incremental learning. In online learning, a neural network learns from just one training input at a time (just as human beings do). Name one advantage and one disadvantage of online learning, compared to stochastic gradient descent with a mini-batch size of, say, 20.
Let me conclude this section by discussing a point that sometimes bugs people new to gradient descent. In neural networks the cost C is, of course, a function of many variables - all the weights and biases - and so in some sense defines a surface in a very high-dimensional space. Some people get hung up thinking: "Hey, I have to be able to visualize all these extra dimensions". We can't, of course; instead we use algebraic representations, such as ΔC and ∇C, to figure out how to move so as to decrease C.

Implementing our network to classify digits

Alright, let's now write a program that learns how to recognize handwritten digits, using stochastic gradient descent and the MNIST training data.
Apart from the MNIST data we also need a Python library called
Numpy, for doing fast linear algebra. If you don't already have
Numpy installed, you can get it here.
Let me explain the core features of the neural networks code, before
giving a full listing, below. The centerpiece is a Network class, which
we use to represent a neural network. Here's the code we use to
initialize a Network object:
class Network(object):

    def __init__(self, sizes):
        self.num_layers = len(sizes)
        self.sizes = sizes
        self.biases = [np.random.randn(y, 1) for y in sizes[1:]]
        self.weights = [np.random.randn(y, x)
                        for x, y in zip(sizes[:-1], sizes[1:])]

In this code, the list sizes contains the number of neurons in the respective layers. The biases and weights are initialized randomly, using np.random.randn to generate Gaussian distributions with mean 0 and standard deviation 1.
Note that the Network initialization code assumes that the first layer
of neurons is an input layer, and omits to set any biases for those
neurons, since biases are only ever used in computing the outputs
from later layers.

We use the notation wjk to denote the weight for the connection between the kth neuron in the second layer and the jth neuron in the third layer. With this notation, the vector of activations of the third layer of neurons is:

a′ = σ(wa + b).    (22)

*The MNIST data used here is hosted by the LISA machine learning laboratory at the University of Montreal (link).
There's quite a bit going on in this equation, so let's unpack it piece
by piece. a is the vector of activations of the second layer of neurons. To obtain a′ we multiply a by the weight matrix w, and add the vector b of biases. We then apply the function σ elementwise to every entry in the vector wa + b. (This is called vectorizing the function σ.) It's easy to verify that Equation (22) gives the same result as our earlier rule, Equation (4), for computing the output of a sigmoid neuron.
Exercise
Write out Equation (22) in component form, and verify that it
gives the same result as the rule (4) for computing the output
of a sigmoid neuron.
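The exercise above can also be checked numerically. This sketch (with made-up weights, biases, and activations) computes a′ = σ(wa + b) row by row, and verifies each component against the per-neuron rule of Equation (4):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical weights (3 neurons, 2 inputs), biases, and input activations.
w = [[0.2, -0.5],
     [1.0,  0.3],
     [-0.7, 0.8]]
b = [0.1, -0.2, 0.0]
a = [0.6, 0.9]

# "Vectorized" layer rule, Equation (22): a' = sigma(w a + b), one row per neuron.
a_prime = [sigmoid(sum(wjk * ak for wjk, ak in zip(row, a)) + bj)
           for row, bj in zip(w, b)]

# Component form, Equation (4), one neuron at a time - should agree exactly.
for j in range(3):
    z = sum(w[j][k] * a[k] for k in range(2)) + b[j]
    assert abs(a_prime[j] - sigmoid(z)) < 1e-12
print(a_prime)
```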
With all this in mind, it's easy to write code computing the output
from a Network instance. We begin by defining the sigmoid function:
def sigmoid(z):
return 1.0/(1.0+np.exp(-z))
Note that when the input z is a vector or Numpy array, Numpy
automatically applies the function sigmoid elementwise, that is, in
vectorized form.
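For example, assuming Numpy is available:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# On a scalar, sigmoid behaves as in Equation (3).
print(sigmoid(0.0))               # 0.5
# On a Numpy array, np.exp applies elementwise, so sigmoid does too.
z = np.array([-1.0, 0.0, 1.0])
print(sigmoid(z))                 # three values, each between 0 and 1
```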
We then add a feedforward method to the Network class, which, given an input a for the network, returns the corresponding output*. All the method does is applies Equation (22) for each layer:

def feedforward(self, a):
    """Return the output of the network if "a" is input."""
    for b, w in zip(self.biases, self.weights):
        a = sigmoid(np.dot(w, a)+b)
    return a

*It is assumed that the input a is an (n, 1) Numpy ndarray, not a (n,) vector. Here, n is the number of inputs to the network. If you try to use an (n,) vector as input you'll get strange results. Although using an (n,) vector appears the more natural choice, using an (n, 1) ndarray makes it particularly easy to modify the code to feedforward multiple inputs at once, and that is sometimes convenient.

Of course, the main thing we want our Network objects to do is to learn. To that end we'll give them an SGD method which implements stochastic gradient descent:

def SGD(self, training_data, epochs, mini_batch_size, eta,
        test_data=None):
    """Train the neural network using mini-batch stochastic
    gradient descent."""
Most of the work is done by the line
delta_nabla_b, delta_nabla_w = self.backprop(x, y)
This invokes something called the backpropagation algorithm,
which is a fast way of computing the gradient of the cost function.
So update_mini_batch works simply by computing these gradients for every training example in the mini_batch, and then updating self.weights and self.biases appropriately. Most of the gradient computation is done by self.backprop, which we've already mentioned. That method makes use of a few extra functions, namely sigmoid_prime, which computes the derivative of the σ function; the gradients themselves are calculated using backpropagation.

Finally, we train the network using stochastic gradient descent over 30 epochs, with a mini-batch size of 10, and a learning rate of η = 3.0:

>>> net.SGD(training_data, 30, 10, 3.0, test_data=test_data)
Note that if you're running the code as you read along, it will take some time to execute - for a typical machine (as of 2015) it will likely take a few minutes to run. (On my machine this experiment takes tens of seconds.)
Of course, to obtain these accuracies I had to make specific choices
for the number of epochs of training, the mini-batch size, and the
learning rate, η. As I mentioned above, these are known as hyperparameters for our neural network, in order to distinguish them
from the parameters (weights and biases) learnt by our learning
algorithm. If we choose our hyper-parameters poorly, we can get
bad results. Suppose, for example, that we'd chosen the learning rate to be η = 0.001. The results are much less encouraging. However, you can see that the performance of the network is getting
slowly better over time. That suggests increasing the learning rate,
say to η = 0.01. If we do that, we get better results, which suggests
increasing the learning rate again. (If making a change improves
things, try doing more!) If we do that several times over, we'll end
up with a learning rate of something like η = 1.0 (and perhaps fine
tune to 3.0), which is close to our earlier experiments. So even
though we initially made a poor choice of hyper-parameters, we at
least got enough information to help us improve our choice of
hyper-parameters.
*The techniques introduced in chapter 3 will greatly reduce the variation in performance across different training runs for our networks.
In general, debugging a neural network can be challenging. This is
especially true when the initial choice of hyper-parameters
produces results no better than random noise. Suppose we try the
successful 30 hidden neuron network architecture from earlier, but
with the learning rate changed to η = 100.0 :
>>> net = network.Network([784, 30, 10])
>>> net.SGD(training_data, 30, 10, 100.0, test_data=test_data)
At this point we've actually gone too far, and the learning rate is too
high:
Epoch 0: 1009 / 10000
Epoch 1: 1009 / 10000
Epoch 2: 1009 / 10000
Epoch 3: 1009 / 10000
...
Epoch 27: 982 / 10000
Epoch 28: 982 / 10000
Epoch 29: 982 / 10000
Now imagine that we were coming to this problem for the first time.
Of course, we know from our earlier experiments that the right
thing to do is to decrease the learning rate. But if we were coming to
this problem for the first time then there wouldn't be much in the
output to guide us on what to do. We might worry not only about
the learning rate, but about every other aspect of our neural
network. We might wonder if we've initialized the weights and
biases in a way that makes it hard for the network to learn? Or
maybe we don't have enough training data to get meaningful
learning? Perhaps we haven't run for enough epochs? Or maybe it's
impossible for a neural network with this architecture to learn to
recognize handwritten digits? Maybe the learning rate is too low?
Or, maybe, the learning rate is too high? When you're coming to a
problem for the first time, you're not always sure.
Exercise
Try creating a network with just two layers - an input and an
output layer, no hidden layer - with 784 and 10 neurons,
respectively. Train the network using stochastic gradient
descent. What classification accuracy can you achieve?
numpy ndarray with 50,000 entries.

The moral, in some sense, is that for some problems:

sophisticated algorithm ≤ simple learning algorithm + good training data.

Toward deep learning
The end result is a network which breaks down a very complicated
question - does this image show a face or not - into very simple
questions answerable at the level of single pixels. It does this
through a series of many layers, with early layers answering very
simple and specific questions about the input image, and later
layers building up a hierarchy of ever more complex and abstract
concepts. Networks with this kind of many-layer structure - two or
more hidden layers - are called deep neural networks.
Of course, I haven't said how to do this recursive decomposition
into sub-networks. It certainly isn't practical to hand-design the
weights and biases in the network. Instead, we'd like to use learning
algorithms so that the network can automatically learn the weights
and biases - and thus, the hierarchy of concepts - from training
data.
CHAPTER 2

How the backpropagation algorithm works

Today, the backpropagation algorithm is the workhorse of learning in neural networks.
This chapter is more mathematically involved than the rest of the
book. If you're not crazy about mathematics you may be tempted to
skip the chapter, and to treat backpropagation as a black box whose
details you're willing to ignore. Why take the time to study those
details?
The reason, of course, is understanding. At the heart of backpropagation is an expression for the partial derivative ∂C/∂w of the cost function C with respect to any weight w (or bias b) in the network. The expression tells us how quickly the cost changes when we change the weights and biases. And while the expression is somewhat complex, it also has a beauty to it, with each element having a natural, intuitive interpretation. And so backpropagation
isn't just a fast algorithm for learning. It actually gives us detailed
insights into how changing the weights and biases changes the
overall behaviour of the network. That's well worth studying in
detail.
With that said, if you want to skim the chapter, or jump straight to
the next chapter, that's fine. I've written the rest of the book to be
accessible even if you treat backpropagation as a black box. There
are, of course, points later in the book where I refer back to results
from this chapter. But at those points you should still be able to
understand the main conclusions, even if you don't follow all the
reasoning.
Warm up: a fast matrix-based approach to computing the output from a neural network

Let's begin with a notation which lets us refer to weights in the network in an unambiguous way. We'll use w^l_jk to denote the weight for the connection from the kth neuron in the (l − 1)th layer to the jth neuron in the lth layer. So, for example, the diagram below shows
Using the included models is fine, but at some point you’ll probably want to implement your own models, which is what this tutorial is for.
Generally speaking, in order to implement a new model, you’ll need to implement a
DatasetReader
subclass to read in your datasets and a
Model
subclass corresponding to the model you want to implement.
(If there’s already a
DatasetReader for the dataset you want to use,
of course you can reuse that one.)
In this tutorial we’ll also implement a custom PyTorch
Module,
but you won’t need to do that in general.
Our simple tagger model uses an LSTM to capture dependencies between the words in the input sentence, but doesn’t have a great way to capture dependencies between the tags. This can be a problem for tasks like named-entity recognition where you’d never want to (for example) have a “start of a place” tag followed by an “inside a person” tag.
We’ll try to build an NER model that can outperform our simple tagger on the CoNLL 2003 dataset, which (due to licensing reasons) you’ll have to source for yourself.
The simple tagger gets about 88% span-based F1 on the validation dataset. We’d like to do better.
One way to approach this is to add a Conditional Random Field layer at the end of our tagging model. (If you’re not familiar with conditional random fields, this overview paper is helpful, as is this PyTorch tutorial.)
The “linear-chain” conditional random field we’ll implement has a
num_tags x
num_tags matrix of transition costs,
where
transitions[i, j] represents the likelihood of transitioning
from the
j-th tag to the
i-th tag.
In addition to whatever tags we’re trying to predict, we’ll have special
“start” and “end” tags that we’ll stick before and after each sentence
in order to capture the “transition” inherent in being the tag at the
beginning or end of a sentence.
As this is just a component of our model, we’ll implement it as a Module.
To implement a PyTorch module, we just need to inherit from
torch.nn.Module
and override
def forward(self, *input): ...
to compute the log-likelihood of the provided inputs.
To initialize this module, we just need the number of tags.
def __init__(self, num_tags: int) -> None:
    super().__init__()
    self.num_tags = num_tags
    # transitions[i, j] is the logit for transitioning from state i to state j.
    self.transitions = torch.nn.Parameter(torch.randn(num_tags, num_tags))
    # Also need logits for transitioning from "start" state and to "end" state.
    self.start_transitions = torch.nn.Parameter(torch.randn(num_tags))
    self.end_transitions = torch.nn.Parameter(torch.randn(num_tags))
I’m not going to get into the exact mechanics of how the log-likelihood is calculated; you should read the aforementioned overview paper (and look at our implementation) if you want the details. The key points are
(sequence_length, num_tags)tensor of logits representing the likelihood of each tag at each position in some sequence and a
(sequence_length,)tensor of gold tags. (In fact, we actually provide batches consisting of multiple sequences, but I’m glossing over that detail.)
viterbi_tags()method that accepts some input logits, gets the transition probabilities, and uses the Viterbi algorithm to compute the most likely sequence of tags for a given input.
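To make the second point concrete, here is a minimal, dependency-free sketch of Viterbi decoding for a linear-chain model. It is not AllenNLP's implementation (which also handles start/end transitions, masking, and batches), and the scores below are made up:

```python
# A minimal Viterbi decoder for a linear-chain model, illustrating what a
# viterbi_tags()-style method computes.

def viterbi(logits, transitions):
    """logits: per-position lists of per-tag scores; transitions[i][j]: score
    for moving from tag i to tag j. Returns the best-scoring tag sequence."""
    num_tags = len(logits[0])
    # best[t]: best score of any path ending in tag t; back: argmax pointers.
    best = list(logits[0])
    back = []
    for pos in range(1, len(logits)):
        new_best, pointers = [], []
        for j in range(num_tags):
            scores = [best[i] + transitions[i][j] + logits[pos][j]
                      for i in range(num_tags)]
            i_star = max(range(num_tags), key=lambda i: scores[i])
            new_best.append(scores[i_star])
            pointers.append(i_star)
        best, back = new_best, back + [pointers]
    # Trace back the highest-scoring path.
    tag = max(range(num_tags), key=lambda t: best[t])
    path = [tag]
    for pointers in reversed(back):
        tag = pointers[tag]
        path.append(tag)
    return list(reversed(path))

# Two tags (0 and 1); the transition matrix discourages repeating tag 1,
# much as a learned CRF might discourage impossible tag pairs.
transitions = [[0.0, 0.0], [0.0, -2.0]]
logits = [[1.0, 0.5], [0.0, 1.0], [0.0, 0.4]]
print(viterbi(logits, transitions))  # [0, 1, 0]
```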
The
CrfTagger is not terribly different from the
SimpleTagger model,
so we can take that as a starting point. We need to make the following changes:

- add a crf attribute containing an appropriately initialized ConditionalRandomField module
We can then register the new model as
"crf_tagger".
The CoNLL data is formatted like
U.N. NNP I-NP I-ORG official NN I-NP O Ekeus NNP I-NP I-PER heads VBZ I-VP O for IN I-PP O Baghdad NNP I-NP I-LOC . . O O
where each line contains a token, a part-of-speech tag, a syntactic chunk tag, and a named-entity tag. An empty line indicates the end of a sentence, and a line
-DOCSTART- -X- O O
indicates the end of a document. (Our reader is concerned only with sentences and doesn’t care about documents.)
You can poke at the code yourself, but at a high level we use
itertools.groupby
to chunk our input into groups of either “dividers” or “sentences”.
Then for each sentence we split each row into four columns,
create a
TextField for the token, and create a
SequenceLabelField
for the tags (which for us will be the NER tags).
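A stripped-down sketch of that chunking idea (not the actual AllenNLP reader, which builds TextFields and SequenceLabelFields rather than plain tuples):

```python
import itertools

# A small, made-up CoNLL-style fragment: token POS chunk NER on each line,
# with blank lines between sentences (the real reader also handles -DOCSTART-).
lines = [
    "U.N. NNP I-NP I-ORG",
    "official NN I-NP O",
    "",
    "heads VBZ I-VP O",
    "",
]

def is_divider(line):
    return line.strip() == ""

sentences = []
for divider, group in itertools.groupby(lines, is_divider):
    if not divider:
        rows = [line.split() for line in group]   # four columns per row
        tokens = [row[0] for row in rows]
        ner_tags = [row[3] for row in rows]
        sentences.append((tokens, ner_tags))

print(sentences)  # one (tokens, ner_tags) pair per sentence
```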
As the
CrfTagger model is quite similar to the
SimpleTagger model,
we can get away with a similar configuration file. We need to make only
a couple of changes:
- change model.type to "crf_tagger"
- change "dataset_reader.type" to "conll2003"
- add a "dataset_reader.tag_label" field with value "ner" (to indicate that the NER labels are what we're predicting)

We don't need to, but we also make a few other changes:

- apply regularization to the transitions parameters to help avoid overfitting
- specify a test_data_path and set evaluate_on_test to true. This is mostly to ensure that our token embedding layer loads the GloVe vectors corresponding to tokens in the test data set, so that they are not treated as out-of-vocabulary at evaluation time. The second flag just evaluates the model on the test set when training stops. Use this flag cautiously; when you're doing real science you don't want to evaluate on your test set too often.
At this point we’re ready to train the model.
In this case our new classes are part of the
allennlp library,
which means we can just use
allennlp/run.py train,
but if you were to create your own model they wouldn’t be.
In that case
allennlp/run.py never loads the modules in which
you’ve defined your classes, they never get registered, and then
AllenNLP is unable to instantiate them based on the configuration file.
In such a case you’ll need to create your own such script. You can actually copy that one, the only change you need to make is to import all of your custom classes at the top:
from myallennlp.data.dataset_readers import Conll2003DatasetReader from myallennlp.models import CrfTagger
and so on. After which you’re ready to train:
$ my_run.py train tutorials/getting_started/crf_tagger.json -s /tmp/crf_model | http://allennlp.org/tutorials/creating-a-model | CC-MAIN-2017-51 | refinedweb | 1,051 | 53.92 |
Serialization/Deserialization Series: JSON
What happens when a client asks for JSON for an AJAX service instead of XML? Today, I discuss the way to return your data as JSON instead of XML.
In the last post, I discussed the way to serialize and deserialize an object based on XML.
We took an existing XML file and used Visual Studio's Paste XML as Classes to generate our XML objects so we can read in XML to create our objects (deserialize) or write out a modified XML document from our objects (serialize).
While XML is easy, JSON (JavaScript Object Notation) is becoming the better way to send data across the wire because of the natural and minimal structure to reduce the verbosity of an XML document.
Hey JSON!
In the Edit menu, there is also a Paste JSON as Classes option. While we could use that and make more classes, we can reuse our existing class to serialize JSON as well as our XML.
To serialize an object into JSON, you need to include a JavaScriptSerializer which can be found in the
System.Web.Script.Serialization namespace.
Based on the Dashboard example we used in the previous post, we would pass the dashboard object into the meat-grinder...errr...JavaScript serializer. :-)
var x = new JavaScriptSerializer(); var json = x.Serialize(dashboard);
The json variable contains the following:
{ "tabs" : [ { "title" : "Home", "widget" : [ { "id" : 15, "title" : "Weather" }, { "id" : 19, "title" : "RSS Feed" } ] }, { "title" : "Sports", "widget" : [ { "id" : 21, "title" : "Scores" } ] } ] }
No Node!
Wait a minute...what happened to our root node? Unfortunately, it excludes the root node and serializes the object's data inside the object. It doesn't include the root object. Unfortunately, this is the nature of the beast. It's how the Serializer works.
If you want to include the root node, place the object in a wrapper like so:
public class JsonContainer { public dashboard Dashboard { get; set; } public JsonContainer(dashboard dashboard) { Dashboard = dashboard; } }
And then in your code, you would pass in the JsonContainer object.
var container = new JsonContainer(dashboard); var x = new JavaScriptSerializer(); var json = x.Serialize(container);
This is probably the easiest way to get the root node included.
Your new JSON will look like this:
{ "dashboard" : { "tabs" : [ { "title" : "Home", "widget" : [ { "id" : 15, "title" : "Weather" }, { "id" : 19, "title" : "RSS Feed" } ] }, { "title" : "Sports", "widget" : [ { "id" : 21, "title" : "Scores" } ] } ] } }
Conclusion
Since we covered the XML serialization/deserialization procedure before, I felt it fitting to discuss the JSON serialization process and gotchas in this post even though this is a short post.
The JavaScript Serializer is another way to take an object structure using simple objects with native types and easily convert them into JSON objects.
Did I miss something? Post a comment below to continue the discussion. | https://www.danylkoweb.com/Blog/serialization-deserialization-series-json-AR | CC-MAIN-2017-26 | refinedweb | 456 | 53 |
celServerEventData Class Reference
The data about a server event. More...
#include <physicallayer/nettypes.h>
Detailed Description
The data about a server event.
Definition at line 309 of file nettypes.h.
Member Data Documentation
The persistent data of the event.
Definition at line 325 of file nettypes.h.
The time at which the event occured.
Definition at line 320 of file nettypes.h.
The type of the event.
Definition at line 315 of file nettypes.h.
True if we need to be sure that the message has been received by the client.
Definition at line 331 of file nettypes.h.
The documentation for this class was generated from the following file:
- physicallayer/nettypes.h
Generated for CEL: Crystal Entity Layer 2.1 by doxygen 1.6.1 | http://crystalspace3d.org/cel/docs/online/api/classcelServerEventData.html | CC-MAIN-2014-42 | refinedweb | 126 | 54.79 |
Chapter 8. About service discovery
Service Discovery allows adding APIs for management by recognizing the discoverable running services in the OpenShift cluster. It has the following features:
- It uses the cluster API to query for services properly annotated for discovery.
- It configures 3scale to hit the service using an internal endpoint inside the cluster.
- It imports Open API Specification (Swagger) up to version 2.0 as 3scale Active Docs.
- It allows you to update the API integration and the Open API Specification at any time (resynchronize with the cluster).
- It supports OpenShift and Red Hat SSO authorization flows.
- It works with Fuse from version 7.2.
Additional notes
- For on premise installations, the API provider may have its own namespace and services.
- With Fuse, services will be deployed to Fuse production namespace. | https://access.redhat.com/documentation/en-us/red_hat_3scale_api_management/2.5/html/admin_portal_guide/service-discovery-concept | CC-MAIN-2021-31 | refinedweb | 132 | 58.28 |
.
Compared to the previous revision, I am doing a fairly standard 2-means clustering to split points into clusters.
- Select two random centroid points, out of the available points, and make sure they are not the same vector.
- For each point in the vector space, calculate the distance of them to the two centroid points.
- Assign each point to the closest centroid.
- (this perhaps diverge from the original algorithm) Calculate the number of points assigned to each centroid, find the
ratio(count of points assigned to centroid
alphadivide by cound of points to centroid
beta)
- Repeat (2), stop when the
ratioconverges, or it is within
0.4 < ratio < 0.6.
(This is why I am not a good computer/data scientist)
The algorithm is not ideal and not scientific, but I just wanted to have a quick idea of how things may change. For those who aren’t aware, this is because I cannot tell if the centroids between two iterations converge even though if the ratios converge (though I secretly hope they are good enough).
As this is a fairly minor revision, I am pasting the splitter function here instead of updating my previous gist.
def splitter_balanced(points_x, _points_y, idx_pool): result = None points_current = Vectors( points_x.dimension, tuple(points_x[idx] for idx in sample(tuple(set(idx_pool)), 2)), ) ratio_current = 1.0 while not result: if points_are_different(points_current): decider = branch_centroid_get(points_current) branches = points_assign_branches(decider, points_x, idx_pool) ratio = min(len(branches[True]), len(branches[False])) / max( len(branches[True]), len(branches[False]) ) if isclose(ratio_current, ratio) or 0.4 < ratio < 0.6: result = decider, branches else: ratio_current = ratio points_current = Vectors( points_x.dimension, tuple( centroid_update( Vectors( points_x.dimension, tuple(points_x[idx] for idx in idx_sub), ) ) for _, idx_sub in branches.items() ), ) else: points_current = Vectors( points_x.dimension, tuple(points_x[idx] for idx in sample(set(idx_pool), 2)), ) return result
Starting with my own approximation to Bagging.
For some reason it looks like with the same dataset, the result is not as good as previously. However given the random nature of both random splitting and k-means algorithm, it could be just I am getting a set of very unfortunate starting points this time. (The points shown are the best model after performing a 5-fold cross-validation)
Interestingly, running this caught a minor problem in my original code. Apparently there is a chance for my code to bootstrap a N-sized training set consisting of repetitions of 1 point.
choices(idx_pool, k=len(idx_pool))
Thus failing this line of code
sample(tuple(set(idx_pool)), 2))
While this should have a very low chance of happening, but yea, in short I should go buy lottery.
Things get more interesting when we see my approximation to Random Forest.
The result is not following a similar trend, it started out great, but when the size of the forest increases it becomes worse, very very counter-intuitive. It is rather rare to see even the training error increases when the forest size increases, so it gets underfitted when we have more trees?
Next we have boosting
While the result is worst than the previous attempt using the legacy point splitting method, but the result is likely just due to the unfortunate choice of initial random centroids.
Comparing the methods
So in the end, it is just a fun follow-up to the previous experiment, overall this round the result is worse, and still very far from the classic original methods.
Thanks for your time. | https://cslai.coolsilon.com/2020/10/25/regression-with-2-means-clustering-annoy-non-scientific/ | CC-MAIN-2021-43 | refinedweb | 576 | 53.1 |
Velo by Wix: Side effect wix-data saving methods
The wix-data methods for saving data has a side effect that I have spent a few hours debugging. In this post, I share how it goes
We have three methods for manipulation of the database collection that has the same side effect. I found this behavior when it broke my logic in the data hook.
The pasted
item is mutating after any of these methods is done:
wixData.insert("myCollection", item)
wixData.update("myCollection", item)
wixData.save("myCollection", item)
So, consider the code example where we output to console a new item before and after inserting it into a collection.
import wixData from 'wix-data'; (async () => { const item = { title: 'hello' }; console.log(1, item); const result = await wixData.insert('notes', item); console.log(2, item); console.log(3, result === item); })();
In the console:
1 {"title":"hello"} 2 {"title":"hello","_createdDate":"2020-11-21T18:34:29.050Z","_updatedDate":"2020-11-21T18:34:29.050Z","_id":"6e616318-ffdb-4954-9529-84c6a63f5393"} 3 true
We can see above what, after inserting, the item is mutating it has new properties. And the method will return the same item which was passed.
wixData.update() and
wixData.save() will have the same behavior.
How it affects
In my case, I used a data hook that first saves a new user to private collection (only for admins), and then it creates a new row for public members collection with part of open the user data.
backend/data.js
export async function Members_afterInsert(item) { await wixData.insert('MembersPrivate', item); /** * Here I had a mutated item after inserting it into a Private database. */ }
I had a mutated item after inserting it into a Private database, and it created a bug.
For fixing, I have to copy the item for inserting. I use the object spread operator:
// Creates a copy of item await wixData.insert('MembersPrivate', { ...item });
Methods for multiple items
We also have methods that can work with a number of items. Let's consider three of these which use to save data:
wixData.bulkInsert("myCollection", [item1, item2])
wixData.bulkUpdate("myCollection", [item1, item2])
wixData.bulkSave("myCollection", [item1, item2])
We can get to repeat the experiment. These methods accept an array of items.
import wixData from 'wix-data'; (async () => { const a = { title: 'a' }; const b = { title: 'b' }; console.log(1, a, b); const result = await wixData.bulkInsert('notes', [a, b]); console.log(2, a, b); console.log(3, result); })();
In the console:
1 {title: "a"} {title: "b"} 2 {title: "a"} {title: "b"} 3 { errors: [], inserted: 2, insertedItemIds: [ "8e518e07-5ba9-46bf-8db0-b9231c6f5926", "4d90b85a-8449-4213-978f-ef0f8bec275b" ], length: 2, skipped: 0, updated: 0, updatedItemIds: [], }
Great, bulk methods work differently. They don't mutate the passed items and don't return the mutated items back. The successfully done bulk methods return the
Promise<WixDataBulkResult> that contains the info about the changes.
Conclusion
The understanding of how to work the platform is an important thing. If I catch the bugs that grow from a misunderstanding of some of the processes, then I prefer to spend time searching, experiments, and testing it.
When the platform is not a black box for you, this saves more time than you have spent learning it. | https://shoonia.site/side-effect-data-saving-methods | CC-MAIN-2021-49 | refinedweb | 539 | 59.09 |
view raw
In my model I have:
after_create :push_create
def push_event(event_type)
X["XXXXX-#{Rails.env}"].trigger(event_type,
{
:content => render( :partial =>"feeds/feed_item", :locals => { :feed_item => self })
}
)
end
NoMethodError (undefined method `render' for #<WallFeed:0x1039be070>):
Well, "they" are right. You really have to do the rendering in a controller - but it's fair game to call that controller from a model! Fortunately, AbstractController in Rails 3 makes it easier than I thought. I wound up making a simple ActionPusher class, working just like ActionMailer. Perhaps I'll get ambitious and make this a proper gem someday, but this should serve as a good start for anyone else in my shoes.
I got the most help from this link:
in lib/action_pusher.rb
class ActionPusher < AbstractController::Base include AbstractController::Rendering include AbstractController::Helpers include AbstractController::Translation include AbstractController::AssetPaths include Rails.application.routes.url_helpers helper ApplicationHelper self.view_paths = "app/views" class Pushable def initialize(channel, pushtext) @channel = channel @pushtext = pushtext end def push Pusher[@channel].trigger('rjs_push', @pushtext ) end end end
in app/pushers/users_pusher.rb. I guess the require could go somewhere more global?
require 'action_pusher' class UsersPusher < ActionPusher def initialize(user) @user = user end def channel @user.pusher_key end def add_notice(notice = nil) @notice = notice Pushable.new channel, render(template: 'users_pusher/add_notice') end end
Now in my model, I can just do this:
after_commit :push_add_notice private def push_add_notice UsersPusher.new(user).add_notice(self).push end
and then you'll want a partial, e.g. app/views/users_pusher/add_notice.js.haml, which could be as simple as:
alert('#{@notice.body}')
I guess you don't really need to do it with Pushable inner class and the .push call at the end, but I wanted to make it look like ActiveMailer. I also have a pusher_key method on my user model, to make a channel for each user - but this is my first day with anything like Pusher, so I can't say for sure if that's the right strategy. There's more to be fleshed out, but this is enough for me to get started.
Good luck!
I've got the general outline of a solution working. Like this, in your model:
after_create :push_new_message private def render_anywhere(partial, assigns = {}) view = ActionView::Base.new(ActionController::Base.view_paths, assigns) view.extend ApplicationHelper view.render(:partial => partial) end def push_new_message pushstring = render_anywhere('notices/push_new_message', :message_text => self.body) Pusher[user.pusher_key].trigger!('new_message', pushstring) end
that is definitely working - the template is rendering, and gets eval()'ed on the client side successfully. I'm planning to clean it up, almost certainly move render_anywhere somewhere more general, and probably try something like this
I can see that pushes will need their own templates, calling the generally available ones, and I may try to collect them all in one place. One nice little problem is that I sometimes use controller_name in my partials, like to light up a menu item, but I'll obviously have to take a different tactic there. I'm guessing I might have to do something to get more helpers available, but I haven't gotten there yet.
Success! Hooray! This should answer your question, and mine - I'll add more detail if it seems appropriate later. Good luck!!!!
I don't have an answer, but this timely question deserves more clarification, and I'm hoping to get closer to my answer by helping ask :)
I'm facing the same problem. To explain a little more clearly, Pusher asynchronously sends content to a connected user browser. A typical use case would be a showing the user they have a new message from another user. With Pusher, you can push a message to the receiver's browser, so they get an immediate notification if they are logged in. For a really great demo of what Pusher can do, check out
You can send any data you like, such as a JSON hash to interpret how you like it, but it would be very convenient to send RJS, just like with any other ajax call and eval() it on the client side. That way, you could (for example) render the template for your menu bar, updating it in its entirety, or just the new message count displayed to the user, using all the same partials to keep it bone-DRY. In principle, you could render the partial from the sender's controller, but that doesn't make much sense either, and there might not even be a request, it could be triggered by a cron job, for example, or some other event, like a stock price change. The sender controller just should not have to know about it - I like to keep my controllers on a starvation diet ;)
It might sound like a violation of MVC, but it's really not - and it really should be solved with something like ActionMailer, but sharing helpers and partials with the rest of the app. I know in my app, I'd like to send a Pusher event at the same time as (or instead of) an ActionMailer call. I want to render an arbitrary partial for user B based on an event from user A.
These links may point the way towards a solution:
The last one looks the most promising, offering up this tantalizing snippet:
def render_anywhere(partial, assigns) view = ActionView::Base.new(Rails::Configuration.new.view_path, assigns) ActionView::Base.helper_modules.each { |helper| view.extend helper } view.extend ApplicationHelper view.render(:partial => partial) end
As does this link provided by another poster above.
I'll report back if I get something working
tl;dr: me too! | https://codedump.io/share/oF0tGE1wJC95/1/rails-how-to-render-a-viewpartial-in-a-model | CC-MAIN-2017-22 | refinedweb | 931 | 53.92 |
The Mahalanobis distance is the distance between two points in a multivariate space. It’s often used to find outliers in statistical analyses that involve several variables.
This tutorial explains how to calculate the Mahalanobis distance in Python.
Example: Mahalanobis Distance in Python
Use the following steps to calculate the Mahalanobis distance for every observation in a dataset in Python.
Step 1: Create the dataset.
First, we’ll create a dataset that displays the exam score of 20 students along with the number of hours they spent studying, the number of prep exams they took, and their current grade in the course:
import numpy as np import pandas as pd import scipy as stats data = {'score': [91, 93, 72, 87, 86, 73, 68, 87, 78, 99, 95, 76, 84, 96, 76, 80, 83, 84, 73, 74], 'hours': [16, 6, 3, 1, 2, 3, 2, 5, 2, 5, 2, 3, 4, 3, 3, 3, 4, 3, 4, 4], 'prep': [3, 4, 0, 3, 4, 0, 1, 2, 1, 2, 3, 3, 3, 2, 2, 2, 3, 3, 2, 2], 'grade': [70, 88, 80, 83, 88, 84, 78, 94, 90, 93, 89, 82, 95, 94, 81, 93, 93, 90, 89, 89] } df = pd.DataFrame(data,columns=['score', 'hours', 'prep','grade']) df.head() score hours prep grade 0 91 16 3 70 1 93 6 4 88 2 72 3 0 80 3 87 1 3 83 4 86 2 4 88
Step 2: Calculate the Mahalanobis distance for each observation.
Next, we will write a short function to calculate the Mahalanobis distance.
#create function to calculate Mahalanobis distance def mahalanobis(x=None, data=None, cov=None): x_mu = x - np.mean(data) if not cov: cov = np.cov(data.values.T) inv_covmat = np.linalg.inv(cov) left = np.dot(x_mu, inv_covmat) mahal = np.dot(left, x_mu.T) return mahal.diagonal() #create new column in dataframe that contains Mahalanobis distance for each row df['mahalanobis'] = mahalanobis(x=df, data=df[['score', 'hours', 'prep', 'grade']]) #display first five rows of dataframe df.head() score hours prep grade mahalanobis 0 91 16 3 70 16.501963 1 93 6 4 88 2.639286 2 72 3 0 80 4.850797 3 87 1 3 83 5.201261 4 86 2 4 88 3.828734
Step 3: Calculate the p-value for each Mahalanobis distance.
We can see that some of the Mahalanobis distances are much larger than others. To determine if any of the distances are statistically significant, we need to calculate their p-values.
The p-value for each distance is calculated as the p-value that corresponds to the Chi-Square statistic of the Mahalanobis distance with k-1 degrees of freedom, where k = number of variables. So, in this case we’ll use a degrees of freedom of 4-1 = 3.
from scipy.stats import chi2 #calculate p-value for each mahalanobis distance df['p'] = 1 - chi2.cdf(df['mahalanobis'], 3) #display p-values for first five rows in dataframe df.head() score hours prep grade mahalanobis p 0 91 16 3 70 16.501963 0.000895 1 93 6 4 88 2.639286 0.450644 2 72 3 0 80 4.850797 0.183054 3 87 1 3 83 5.201261 0.157639 4 86 2 4 88 3.828734 0.280562
Typically a p-value that is less than .001 is considered to be an outlier. We can see that the first observation is an outlier in the dataset because it has a p-value less than .001.
Depending on the context of the problem, you may decide to remove this observation from the dataset since it’s an outlier and could affect the results of the analysis. | https://www.statology.org/mahalanobis-distance-python/ | CC-MAIN-2022-21 | refinedweb | 617 | 73.27 |
A pages, or when you override the master page’s breadcrumb place holder with appropriate breadcrumb control defined in your page layout. In out-of-the-box publishing site templates the breadcrumb handling goes something like this:
- Application.master defines a breadcrumb for layouts pages
- Publishing page layouts override master page’s (e.g. default.master) breadcrumb (either by overriding PlaceHolderTitleBreadcrumb, or hiding PlaceHolderTitleBreadcrumb and pushing a new one in PlaceHolderMain, like in DefaultLayout.aspx)
- default.master defines a default breadcrumb that is used for everything else, like for example form pages, and non-publishing team site page layouts
All this creates a cruft in you page layouts, because you have to repeat the same overrides in all your page layouts. This is also a problem, when you want to have a single master page for all your pages (using an HttpModule to override application.master and setting both custom master and system master to point to the same master page).
This isn’t a problem if you replace the default breadcrumb with a more intelligent one. Here is a code for a generic breadcrumb control that works for all kinds of pages described above:
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using Microsoft.SharePoint.Publishing;
using Microsoft.SharePoint.Publishing.Navigation;
using Microsoft.SharePoint.WebControls;
namespace Sininen.Meteoriitti.SharePoint.Web.UI
{
public class Breadcrumb : UserControl
{
protected SiteMapPath ContentMap;
public Breadcrumb()
{
Load += BreadcrumbLoad;
}
void BreadcrumbLoad(object sender, EventArgs e)
{
if (Page is UnsecuredLayoutsPageBase)
{
ContentMap.SiteMapProvider = "SPXmlContentMapProvider";
}
else if (Page is PublishingLayoutPage)
{
ContentMap.RenderCurrentNodeAsLink = false;
var provider =
PortalSiteMapProvider;
if (provider != null)
{
provider.IncludePages =
PortalSiteMapProvider.IncludeOption.Always;
ContentMap.Provider = provider;
}
else
{
ContentMap.SiteMapProvider =
"CurrentNavSiteMapProviderNoEncode";
}
}
else
{
ContentMap.SiteMapProvider = "SPContentMapProvider";
}
}
}
}
And the ascx-file for the above code behind:
<asp:SiteMapPath
</div>
Now you just have to replace the PlaceHolderTitleBreadcrumb place holder’s content with the new breadcrumb user control in your master page.
[...] check it out! These guys already have a batch of posts, ranging from topics like breadcrumb navigation all the way to an installation pitfall checklist. And more is coming… So if you’re into [...]
Thanks Aapo …..
Its solve my problem. Nice Article.
Additional Web.config changes.
Add below entry under your siteMap
This looks like JUST what I need!!! The OOB breadcrumbs don’t get it.
However, I’m a designer and not a developer in SharePoint 2010, though I have been modifying the default master and CSS quite a bit to make it more user friendly.
I have no idea where to plug in the code you provided on this page. And I have not dealt with any .aspx pages at all. Suggestions most appreciated.
Well, this could be accomplished without coding, just placing this code (from v3) to the masterpage: <asp:SiteMapPath SiteMapProvider=”SPContentMapProvider” id=”ContentMap” SkipLinkText=”" RenderCurrentNodeAsLink=”true” NodeStyle-CssClass=”ms-sitemapdirectional” runat=”server” />
Kalle,
I’m sorry, but did you read the article? Please test your breadcrumb on these:
1. Publishing Pages (/Pages/Default.aspx, /Pages/SubPage.aspx)
2. Layouts Pages (/_Layouts/Settings.aspx, /_layouts/ListGeneralSettings.aspx)
3. Forms Pages (AllItems.aspx, EditForm.aspx, etc.)
Your solution doesn’t work. The whole point of this article was that you need to have different SiteMapProvider in different kind of pages. Your SPContentMapProvider is ok for Forms Pages.
Well, as I have tested, it works on all those three pagetypes. This is the situation in SharePoint 2010 with publishing feature activated. I’m not saying your coding is wrong, I’m just pointing that “a single breadcrumb for all SharePoint pages” can be accomplished a little bit easier way.
Kalle,
I am using SharePoint 2010 with the publishing feature activated. The code you list is exactly the code I am using after trying many other things. It works better than most, but it still does not cover all the 3 different items that Aapo lists.
Example: I have a left nav set up for Discussion Boards as one of the items. When the user clicks on Discussion Boards, the breadcrumb just shows to the site name, and does not include Discussion Boards. If I drill down further to a particular discussion, then that particular discussion name will appear in the breadcrumb.
And I’d love to get rid of the aggravating “Home” that appears in the breadcrumb for any site or subsite home. WTF were they thinking when they put that in there?
Kalle and Janey,
I have to re-try this with 2010, maybe things have changed a little bit. Kalle when I said it doesn’t work, I meant that it doesn’t render correct breadcrumb items. It renders breadcrumb, but not the breadcrumb that it should. I will come back to this after I have tested the different options in SP2010. This article was originally written for 2007.
In 2010 they have this PopoutMenu breadcrumb with SharePoint:ListSiteMapPath that has by default these providers: SiteMapProviders=”SPSiteMapProvider,SPContentMapProvider”.
Well… they gave us new top navigation and quick launch data providers, maybe they did something to breadcrumb too in 2010.
Hi..the sharepoint 2010 breadcrumb is killing me. I can’t get the masterpage to display the full breadcrumb AND with the flyout menu (like in document library).. I am having trouble find some good example for this. Do you have any ideas?
I have used this, I think I copied it out from the original and edited it, but it rendered out duplicates of links its weird..
I need it to be like this:
Home > current site collection > subsite > document library > all documents (fly out menu)
Please help! ;-(
Sorry one more, I have tried this.. this breadcrumb is really the trail that I want.
How can I get the document fly out menu to work with this breadcrumb?
Thanks
One solution is to use jQuery and move/copy the 2010 default breadcrumb from dropdown menu and the fly-out -menu from titlearea to somewhere else. This is the easiest solution, if the only access is Sharepoint Designer and browser… Of course the solution from Aapo is more customizable and better for developers, who are familiar with Visual Studio and have access with Sharepoint hive.
Hi There,
This article doesn’t provide enough guidance on how to implement these changes. Please provide more information on how to create the user control. where the code goes. Is VS required? What needs to be in SPD?
Thank you,
[...] span#ctl00_PlaceHolderTitleBreadcrumb_ContentMap { padding-left:10px; } I must give credit to this site for the bottom part of their post. This entry was posted in blog and tagged asp, css, [...]
Hi,
I want this feature as webpart. How to create as webpart I’m a Designer.
As same the quicklauch to be as webpart any idea would be appreciate.
thanks
V
Aapo, would you be so kind as to explain how to implement this for us new to SharePoint development? Specifically, where does the first block of code you provide? And what about the second? I think others have posted similar comments asking for this, would appreciate your guidance.
Thanks for this simple and straight forward solution!
Hi,
Im gonna jump in a year later and ask again, how to implement this code? Those of us new to Sharepoint development seem to be getting ignored across the internet. We cannot join your superior developers club without a little help…
Thanks for another informative website. The place else may I am getting that type of info written in such an ideal
way? I have a challenge that I am simply now running on,
and I have been at the look out for such information.
This works fine except for one issue: For site contents, the breadcrumb is displayed as a hierarchy. I want the breadcrumb to be always displayed in a horizontal line like how I is displayed for other pages.
Please help!
Keep up the helpful job and producing in the crowd!
Really, such a useful web-site
You’ve one of the greatest sites
It seemed to be missing on my publishing site’s “Navigation” settings page, so I modified this part of the code a tiny bit so I could see at least a partial breadcrumb there:
if (Page is UnsecuredLayoutsPageBase)
{
if (provider != null && provider.CurrentNode != null)
{
OSSP_SiteMapBreadCrumb.Provider = provider;
}
} | http://www.sharepointblues.com/2010/02/08/a-single-breadcrumb-for-all-sharepoint-pages/ | CC-MAIN-2017-13 | refinedweb | 1,374 | 65.93 |
This tutorial provides an introduction to adding resources, such as images and sounds, to a MoSync application. These external files need to be added into the project, so that they can be packaged with your code and deployed to the device.
Note: new ways of organizing resources were introduced in the MoSync SDK 3.0; see: Resource Compilation.
The resource compiler is part of MoSync's Pipe-Tool (pipe-tool.exe). When you build your application in Eclipse, the resource compiler concatenates all the resources your program needs into a file called "resources" and then it creates an index by which your program can access the resources.
When it runs, the resource compiler looks for resource list files in your project. Resource list files are files that end with the .lst extension, and each one references one or more resources. You don't have to do anything special to make this work: just create a file with a .lst extension and identify in it the resources you want included in the final package.
You can include any file that will be useful for your program. Typically these will be images (in .png format), sounds (in .mp3 format), and MoSync-specific formats such as .mof font files.
If you are going to be creating data at runtime, or want somewhere to store downloaded data, then the resource list file is also useful for that. You can create a placeholder resource which is going to have a handle, but doesn’t yet contain any data.
There is a full description of all of the types of data you can put into a resource file in the Resource Compiler Reference. In this tutorial we will focus on a few of the common types of resource you are likely to use.
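To give a feel for the syntax before we go through each type, a resource list file mixing several of the common types might look like this (the resource names and file names here are placeholders, not files from this tutorial):

```
.res SPLASH_IMAGE
.image "images/splash.png"

.res CLICK_SOUND
.media "audio/mpeg", "click.mp3"

.res DOWNLOAD_SLOT
.placeholder
```

Each `.res` line names a resource, and the directive that follows it says what kind of data the resource holds.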
To add external resources to your app, you need to add a resource list file to your project. A resource list file is a text file with the .lst extension: res.lst or similar would be a good name for it.
Create the resource list file by selecting New > Other from the MoSync IDE's File menu.
Select MoSync Resource File from the list and click Next. Type a filename ending in ".lst" and click Finish. You’ll see the new file appear in the list of files in Project Explorer. It has an icon similar to a Pac-Man ghost. Double-click on the file to open it.
Tool tip: The Eclipse IDE can help with the syntax of writing resource list files with an Intellisense-style option list as you type. Press period (.) and you’ll see it appear.
The resource compiler creates an index of resources of a special type called MAHandle. Under the hood, this is actually an int, but by giving it a special type name it is easy to see where you can use the resources.
To use a resource you've compiled, you need to include the file MAHeaders.h, which the resource compiler creates in the root of your project:
#include <MAHeaders.h>
This file is a list of C #define directives, mapping the names you've given your resources to their internal index numbers in the compiled resources file. You can use these definitions in your code.
As an example, I’ve created a resource file with a font (which we will see later). My res.lst file contains this:
.res MYFONT
.bin
.include "pretty.mof"
and the MAHeaders.h file that the resource compiler creates contains this:
#define MYFONT 1
This means that if you've included MAHeaders.h in your source code, you can use the reference directly.
Font* f = new Font(MYFONT);
If you look at the documentation for Font, the constructor is:
Font(MAHandle handle);
MAHandle means the name of the resource in the resource file (MYFONT), and the MAHeaders.h file maps between the name you have given it and its index position in the resources file. Whenever you see MAHandle in the MoSync documentation, it means the name you’ve given to a resource.
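To make the handle/index relationship concrete, here is a small standalone C++ sketch of what the generated header and lookup amount to. This is not MoSync code — the names and sizes are invented for illustration; in a real project the resource compiler builds the table when it packs the resources file:

```cpp
#include <cassert>

// What a generated MAHeaders.h looks like: each resource name becomes
// a #define mapping it to its 1-based index in the packed file.
#define MYFONT 1
#define BACKGROUND_IMAGE 2

typedef int MAHandle;  // an MAHandle is just an int under the hood

// Stand-in for the packed "resources" file: per-resource sizes,
// indexed by handle (slot 0 is unused because handles start at 1).
static const int kResourceSizes[] = { 0, 10240, 57600 };

int demoGetDataSize(MAHandle handle) {
    return kResourceSizes[handle];
}
```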
One of the most common types of resource you’ll want to include will be an image. This should generally be a PNG file. Some phones can handle JPEG and GIF images, but PNG has the best overall support, as well as supporting an alpha level for transparency.
To include an image, you need to declare it as a resource in the resource list file in the following way:
.res BACKGROUND_IMAGE
.image "images/BG240.png"
Note: the path to the resource cannot be absolute. Furthermore, you must use either forward slashes (/) in the path or escaped backslashes (\).
When this compiles, we will have a reference to BACKGROUND_IMAGE in MAHeaders.h, and this code will create an image:
#include <MAUI/Image.h>
...
Image* i = new Image(0, 0, 100, 100, NULL, true, true, BACKGROUND_IMAGE);
This will create a new image from BG240.png, and automatically set the size correctly for the image.
Alternatively, you can set the resource on an existing Image widget.
i->setResource(BACKGROUND_IMAGE);
This is useful for animation. More on that later.
If you want to include a sound file, then there is a specific directive for this. You need to include it as a .media (or a .umedia) file.
You also need to know the MIME type of the audio format you’ve want. You can look this up at but you’ll probably be using MP3 files. Not every phone will play every format (see Feature/Platform Support).
To add an MP3 file, you’ll add this declaration in a resource list file:
.res MUSIC .media "audio/mpeg", "music.mp3"
To add a MIDI file, you’ll add:
.res MUSIC .media "audio/midi", "music.mid"
To use the sound in your application:
maSoundPlay(MUSIC, 0, maGetDataSize(MUSIC));
The second parameter here is the offset position to start playing from, and the last parameter is the number of bytes to play. The function maGetDataSize() returns the size of a resource.
Fonts are bitmaps fonts which are specific to the MAUI user interface library. MoSync comes supplied with a couple of examples, as well as the software (the BMFonts tool) to make your own. To use them in your application, you need to include them in a resource list file.
.res MYFONT .bin .include "pretty.mof"
You can then use them in your own code as follows
Font* f = new Font(MYFONT);
Other tutorials will look at fonts in much more detail.
Placeholders provide a reference to a resource in MAHeaders, but that resource doesn’t exist yet. You’d use this in situations where you are going to either create a resource at runtime (like an image) or download data into a resource, and you want to use that resource in several places.
For instance, if you were writing an app which downloaded an image, you could create a placeholder so you can reference it when it is available:
.res DOWNLOADEDIMAGE .placeholder
You can then download the image to the placeholder.
ImageDownloader* dl = new ImageDownloader(); dl->beginDownloading("", DOWNLOADEDIMAGE);
This is just an excerpt of the code required for downloading. See the Downloading Data from the Internet tutorial for more information and examples.
When this has completed downloading, you can save the image to local storage:
MAHandle imagestore = maCreateStore("IMAGESTORE", MAS_CREATE_IF_NECESSARY); maWriteStore(imagestore, DOWNLOADEDIMAGE); maCloseStore(imagestore, 1);
Even if you don’t save it permanently, you can still access the Image you’ve created by referencing the placeholder name you’ve used. You can use this across your whole application, and not just in the scope of the downloader.
You can also create placeholders at runtime. The difference with these is that the reference is not in MAHeaders.h, so it won’t be available to all the classes in your application.
MAHandle TEMPIMAGE = maCreatePlaceholder();
Not every file has to have its own resource label. You can create a series of resources which are each accessed from the same root MAHandle. For instance, if you wanted to create an animation in a MAUI::Image widget, you can set up four frames for the animation in the .lst file.
.res ANIMATION .image "frame1.png" .res .image "frame2.png" .res .image "frame3.png" .res ENDANIMATION .image "frame4.png"
Each image has its own .res declaration, which will create an index number for it, but it doesn’t need to be named. As MAHandle is a synonym for ‘int’, you can perform arithmetic operations on it. This excerpt will update an image every time a downloader informs your listener that more data has been downloaded, and you can update an animation on screen to show your user that it is working.
MAHandle h = ANIMATION; Image* waitingAnim = new Image(0, 0, 100, 100, NULL); ... void NotifyProgress(Downloader* dl, int downloadedBytes, int totalBytes) { h = h++; if(h > ENDANIMATION) h = ANIMATION; waitingAnim->setResource(h); }
When the listener’s notifyProgress method is called, the resource on the MAUI::Image widget is updated.
There are two variations on some of the examples we’ve seen. Normally, resource files are loaded into memory, and are read directly from there. With some files, this makes perfect sense. You should only include fonts you want to use, and it is going to need access to it. For other resources, such as media files (MP3) and program data, it may be wasteful to have all of them loaded into memory at the same time. You may have three MP3 files, but you can only play one at a time. To handle this more efficiently, you can mark these as being unloaded with either .ubin (for data files) and .umedia (for MP3 files). These will only then be loaded when required.
You can also load other resources on demand. For instance, you can offer the user a choice of skins or background images for the application. Normally, all these resources will be loaded when the application starts, but you can defer it until a time when it is needed. This will reduce the start up time and the memory consumption. Loading resources on demand is slower overall than loading at start up though. The benefits are that you’re not loading resources you won’t need, and that the loading time is spread.
There is no difference in how you use unloaded binary and media resources.
There is another way to work with unloaded resources which are not .ubin or .umedia. The directive .skip means that this resource will not be loaded when the application starts. This is particularly useful if you’ve got large resources which you may not need. They are not taking up loading time or precious memory if you’re not going to use them.
.res BGIMAGE2 .skip .image "resources/background2.png"
Once you’ve created them, you can use them as you would any .image resource. There is no additional code required. They do take slightly longer to load than they would at application start up, but the user probably won’t even notice.
One easy way to localise your application is to keep all the system text like buttons and labels held externally to your compiled application. The application can then read in the strings that it wants to use. You can create a different version of the resource file for each language.
Reading in strings from the resource file isn’t necessarily obvious however, and I’ve created a few example of how you can do it.
C++ Strings are very strange if you come from a Java or C# background. They are not as ubiquitous as they are in Java and C#, and are quite frankly, of limited use.
They are not inherent objects, and many method calls do not accept Strings but char* or const char* instead. This means that you are constantly converting between String and char*, which can be verbose and laborious.
The MoSync resource compiler handles strings in three different formats
To read the value of the string, it isn’t as simple as
String* myString = new String(RESOURCENAME);
Instead, you have to make a more explicit call out to the resources file. There is an object which can help with this called DataHandler. This will let you read from the handle, and keeps track of its own position.
bool GetString(DataHandler* dh, String& output) { char c; output.clear(); if(!dh->read(&c, 1)) { return false; } while(c) { output.append(&c, 1); if(!dh->read(&c, 1)) { return false; } } return true; }
In the first example above, you can see that the method takes a DataHandler and a String as parameters. It uses the DataHandler to read from the resource file one character at a time, and builds the string.
This on its own should be enough to convince you to always use .pstring whenever possible.
You can avoid a String object and create the string in a faster way using this function.
char* GetString(MAHandle stringResource) { // Get the length of the string data. int length = maGetDataSize(stringResource); // Allocate space for the string data plus the // null termination character. char* buffer = new char[length + 1]; // Read data. maReadData(stringResource, buffer, 0, length); // Null terminate the string. buffer[length] = 'TEMPLATE_PAGE_CONTENT'; // Return the string. return buffer; }
This returns a char* to your string, but it doesn’t delete it. You will need to call
delete[] buffer;
when you’ve finished with it. It uses the function maGetDataSize() to determine the length of the string and then reads it in one go. It will add the null terminator on, so this would be suitable with resource string of the .string type.
The next example read Pascal strings into a String object without doing it character by character:
bool GetPascalString(MAHandle stringResource, String& output) { bool success = false; output.clear(); DataHandler* dataHandler = new DataHandler(stringResource); byte length; if(dataHandler->read(&length, 1)) { char* buffer = new char[length]; dataHandler ->read(buffer, length); output.setData(new StringData<char>(buffer)); delete[] buffer; success = true; } delete dataHandler ; return success; }
It uses the DataHandler as well, and it creates a new StringData object with the string in. This would be a good way to read .pstrings from the resources file.
Finally, the fourth way doesn’t make use of DataHandler, and will probably be the fastest and sleekest way to read strings:
bool GetPascalStringWithoutDataHandler(MAHandle stringResource, String& output) { output.clear(); byte length; // Check that there is at least one byte. if(maGetDataSize(stringResource) == 0) return false; // Read byte size. maReadData(stringResource, &length, 0, 1); char* buffer = new char[length]; maReadData(stringResource, buffer, 1, length); output.setData(new StringData(buffer)); delete[] buffer; return true; }
It uses the underlying maReadData() function, which is wrapped by DataHandler to read the string from the resource. It also creates a new StringData object, and it deletes its temporary char array.
You can create or load binary data to use in your application. This may be a proprietary data format, or it may be a way of creating screens dynamically using localised resource files. The directive .bin indicates that this should be included as raw binary data.
There are two ways to do this. Firstly, if you’ve already got some binary data you want to use
.res BINARYDEMO .bin .include "resources/data.bin"
This will include your data for you to process as you see fit.
A second way is for you to describe it in the resource file itself.
.res BINARYDEMO .include "resources/test.png"
I’ve created some content I want to put on to the screen, using some of the resource techniques we’ve already looked at. I can write a simple parser to create screen dynamically at runtime.
(Creating screens at runtime is the subject of the tutorial Creating New Screens.)
// Read the first byte. This determines the number of widgets maReadData(BINARYDEMO, &length, 0, 1); while(completed < length) { // is an extract from the accompanying source code. It reads through the binary data we’ve put into the resource file, and creates the appropriate widgets at runtime. You can change the content of you application by just changing the resource file. You can share the resource file with clients and translators without giving away all of your code. This is the screen it produces: | http://www.mosync.com/docs/sdk/cpp/guides/adding-resources-mosync-project/index.html | CC-MAIN-2014-52 | refinedweb | 2,713 | 65.01 |
UPDATE: Notification box has been updated: Take a look at, the new home place for NotificationBox and others.
I’ve wrote a custom message box called NotificationBox for the Windows Phone 7. It gives you an option to pick whatever button you like to have inside the message box. Also It provides a cool feature which is ‘Show this message again’ with isolated settings persistency so you won’t have to create this mechanism by yourself each time you want to give an option to the user suppressing a message.
The usage is similar to the regular message box, except for the fact that you can pass commands with custom actions instead of return values – then switch/ifs.
For example:
NotificationBox.Show(
“Exit”,
“Are you sure?”,
new NotificationBoxCommand(“Yes”, () => { }),
new NotificationBoxCommand(“No”, () => { }));
NotificationBox.Show(
“Erase”,
“You are about to loose your data!”,
new NotificationBoxCommand(“Ok”, () => { }),
new NotificationBoxCommand(“Cancel”, () => { }));
NotificationBox.Show(
“Choose”,
“Choose an option.”,
new NotificationBoxCommand(“Save”, () => { }),
new NotificationBoxCommand(“Load”, () => { }));
NotificationBox.Show(
“Custom”,
“Click on any of the buttons below.”,
new NotificationBoxCommand(“Xxx”, () => { }),
new NotificationBoxCommand(“Xxx”, () => { }),
new NotificationBoxCommand(“Zzz”, () => { }));
Yields:
Another version of the Show method called ShowAgain, gives an option to suppress a message:
NotificationBox.ShowAgain(
“Show Again”,
“Uncheck the show again message and this message won’t appear again.”,
“Show this message again”,
false,
suppressed => { },
GetType().ToString(),
new NotificationBoxCommand(“Xxx”, () => { }),
new NotificationBoxCommand(“Xxx”, () => { }),
new NotificationBoxCommand(“Zzz”, () => { }));
NotificationBox.ShowAgain(
“Show Again”,
“Check the show again message so this message will appear again.”,
“Show this message again”,
true,
suppressed => { },
GetType().ToString(),
new NotificationBoxCommand(“Xxx”, () => { }),
new NotificationBoxCommand(“Xxx”, () => { }),
new NotificationBoxCommand(“Zzz”, () => { }));
The first call with false, displays the following message:
Now calling the same method again with the same parameters (at least with GetType().ToString()), won’t open the message in case that the user suppressed it by unchecking the check box.
The second snippet (fourth param is true) forces the message to open event if suppressed, but now the same message appears and the checkbox left unchecked. The user have an option to check it again.
I’ll be happy to have inputs, so please – don’t be shy 😉
You can download the source from CodePlex.
Awesome work. I like the commands support, though some people will still prefer return values.
I see that you are ignoring the page’s ApplicationBar. When your notification shows up, the user could still press AppBar buttons or menus.
In my own popups, I keep a ref on the page’s ApplicationBar and set it to null until the popup is closed. I wonder if it’s the good way to do it.
Thanks Martin.
Totally forgot from the app-bar 😉 and I think that hiding the app-bar is simple and good idea.
Maybe it would be better if we could display a full screen popup. Need to investigate that.
Thanks.
Either I’m missing something or the popup isn’t handling the back button correctly.
Hitting back should close the popup.
Hi Aiden,
Havn’t took care of that, but will.
Thanks.
Looks good.
It isn’t handling back button presses correctly, nor does it handle light theming correctly, dark themes are just fine.
Thanks Kacey.
I’m planning to fix these issues as soon as I’ll have free time 😉
Promise to post. Keep watching.
Thank you, this is good stuff.
Do you (or somebody else) know, why the standard MessageBox just supports the OK and Cancel button? It seems really strange to me.
I really don’t know why, but even though they would have given it, still they wouldn’t give custom action support with custom content.
Thanks for providing the community with a more flexible message box!
Any idea why this won’t show over a pivot control? I can use this on other pages in my app, but the pivot page refuses to display it. I get an error if I try and re-create it, so I know the popup IS there. But I don’t see it. Also, if I swap in the stock messagebox, it displays just fine….
Hi Phil,
I’ve just checked it on a Pivot page, works just fine.
A XAML version for light and dark theme
Torben, very cool thanks 😉
@Torben / @Tomer: the paste-it link is taken down. Could you please post the theme fix somewhere?
/M
Sorry Magnus, somehow I don’t have it.
Good work but it does not work well will Landscape orientation.
private static void parentPage_BackKeyPress needs to change otherwise we have to press the backbutton twice to go back
Code should be
private static void parentPage_BackKeyPress(object sender, CancelEventArgs e)
{
CurrentPage.BackKeyPress -= parentPage_BackKeyPress;
if (_popup != null && _popup.IsOpen)
{
ClosePopup();
e.Cancel = true;
}
}
How do I put this into my project? I have never done this sort of thing before?
I added the project to the solution but it can’t find the Tomers namespace.
Hi Lee,
Basically you should download and extract my file. Open and build the project with VS2010. This should create a .dll under bin\Debug folder.
In you project add reference to this dll and look at my tester as how to use it. Also you can take a look on my tester to see the usage.
Tomer
Thanks Tomer,
In the end I added the project to the solution and built it that way and added that reference. That way I suppose it updates.
Any idea on the light/dark theming, the link in the posts above is broken. Shame!
Don’t have that theme, sorry.
Hope that Torben will see this and will be kind to add it again.
this message box is not working for multi thread operations. for example, i am running background theread and display multiple messaeg boxes, it throwing error as “Message is already shown.”. not sure how to fix this issue.
Is there a possibility to use it for something like a first-use tutorial?
If i try do put the code in the constructor of the MainPage there is a NullReferenceException by:
var currentPage = rootFrame.Content as PhoneApplicationPage;
PS: And sorry for my bad english ^^
Yes feel free. As for the exception this operation should be delayed using the dispatcher or else. It just need the root to calculate the size. If remember correctly. Having fixed size based on layout will be just great.
How do I create a function for the buttons?
Hi Rob,
The NotificationBoxCommand has two parameters, the second one is a delegate. You can provide your function in this parameter.
The function should be something like:
void YourFuncName();
So you should have:
new NotificationBoxCommand(“Cmd Name”, YourFuncName);
Hope it helps.
Hi,
how about license? can i use this control for apps, which i deploy in the marketplace? and what is with commercial projects? any suggestions?
enno
Hi Enno,
Feel free to use it in wherever you like to, just be kind and leave some credit 😉
Bellow are the fixes i made to support Light Mode.
in NotificationBox.Show and NotificationBox.ShowAgain
Change the Height property of _popup from CurrentPage.ActualHeight to 800
this is because ActualHeight does not update when you hide the application bar
In the Generic.xaml
Update the button collor, the xaml should look like this
Update the Text color, xaml should look like this
When I try to use this, nothing appears on the screen. But if I click the button again it complains that it’s already shown.
So I wonder if maybe there is something wrong with the Z-order or something? Any suggestions?
Great work!
I discovered a strange effect using the control.
If I show the NotificationBox from at the loaded event handler of the apps first page, the phone’s back button does not work correctly.
The first push closes the NotificationBox as expected. But the second push has no effect. Maybe because of the removal of the BackKeyPress event?
The third push closes the app.
Any ideas how to solve that problem?
Thanks
ckruhs
First of all thanks for the great control!
I think this is the proper fix for the light theme issue
In Generic.xaml, change this:
into this:
Also remove all the “Foreground=” from Textblocks, Buttons & Checkboxes
hI Panagiotis Koutsias,
Thanks for sharing.
@ckruhs
I used my page’s Loaded event to show the MessageBox and the problem solved for me. I also modified parentPage_BackKeyPress as Udit said in a comment above.
Hope that helps!
Many thanks for this control, I will definitely be using it. I do have one suggestion:
Shouldn’t ShowAgain should call the suppression action even if the message is suppressed?
This gives a chance for the default action to be performed.
Thanks,
Jason
There is one issue with the control
If i write some code in back button then first notificationbox is displayed to user and in background that code is also executed. Is there any way to solve this issue??
Hi JasonBSteele,
This is exactly how it works:
internal void Close()
{
if (SuppressionCallback != null)
{
SuppressionCallback(!ShowAgainOption);
}
ClosePopup();
}
To make this dialog work when BACK BUTTON is pressed (i.e. inside of OnBackKeyPress function) for example for such “Are you sure you want to loose all changes?” message, you need:
1. to cancel navigation to previous page ( e.Cancel = true; )
2. TO PLACE NotificationBox.Show(..) INSIDE THE Dispatcher
3. to add a function to one of the button (NotificationBoxCommand) to navigate to previous page.
so the code is
protected override void OnBackKeyPress(CancelEventArgs e) {
e.Cancel = true;
this.Dispatcher.BeginInvoke(() =>
NotificationBox.Show(“Leaving the page”,
“Are you sure you want to loose all changes?”,
new NotificationBoxCommand(“Yes”, () => { this.NavigationService.GoBack(); }),
new NotificationBoxCommand(“No”, () => { }))
);
}
CORRECTION to my today’s message:
To make this dialog appear when BACK BUTTON is pressed (i.e. inside of OnBackKeyPress function) for example for such “Are you sure you want to loose all changes?” message, you need:
1. to cancel navigation to previous page ( e.Cancel = true; )
2. to add IsOpen static function to NotificationBox class. This is needed to allow hiding the dialog when it is open and back button is pressed.
3. to place NotificationBox.Show(..) inside of Dispatcher
4. to add a function to one of the button (NotificationBoxCommand) to navigate to previous page.
so the code is
NotificationBox class:
public static bool IsOpen {
get {
return _popup != null;
}
}
some page:
protected override void OnBackKeyPress(CancelEventArgs e) {
if (!NotificationBox.IsOpen) {
e.Cancel = true;
this.Dispatcher.BeginInvoke(() =>
NotificationBox.Show(“Leaving the page”,
“Are you sure you want to loose all changes?”,
new NotificationBoxCommand(“Yes”, () => { this.NavigationService.GoBack(); }),
new NotificationBoxCommand(“No”, () => { }))
);
}
}
Hi giacoder,
Thanks for your solution, though I’ve cleaner one, such you don’t need to change anything in your page for having that. I’ll post my solution later, as part of my efforts loading it to CodePlex 😉
Hi Tomer, just a quick question: i’d like to make the buttons a bit larger. Doing so in the Generic.xml works fine from within your demo app. But when i reference the rsulting dll form my own project: the buttons keep their old size. Any idea? Thanks!
Giacoder, I’ve published a project in CodePlex.
You can use an updated version of the NotificationBox with a fix to the problem you’ve described.
Here is the link for the project:
Hi Martin,
You should create a ControlTemplate for the NotificationBox (use Blend, its easy), then set the internal ItemsControl to use your Button data-template:
I know it’s lame, but it’s a first draft of this control 😉
Better option is to expose “CommandTemplate” property from the notification box so the user can override it.
@Tomer, i tried creating a custom datatemplate but the buttons are still showing up small. Any working code?
I think its not related to buttons. No matter what changes i make to Generic.xaml (like changing foreground color of textblocks etc.) none of them show up in my app. It works for your Tester app though. Any ideas?
Is it possible to cutomise the buttons size and put images on it,How? Also,how do you also put a link inside the buttons,such as by clicking the “Home” Button it will go directly to Home Page.
Hi Mohib Sheth,
First I encourage you downloading my new WP7 Assets library from Codeplex:
It contains newest version of the Notification Box.
Now that you have it, look at the demo. It has what you need.
hi,
i ‘ve just update to the new version. i change NotificationBox by NotificationTool.
How to change the color of buttons, text, title when the theme is set to light or dark ?
Do you have sample ? (send me: zorgan [at] free dot fr)
Thanks
Hi Zorgan,
Please open a discussion in CodePlex so we can share the solution.
Thanks,
Tomer
Found one issue that will cause a fatal error.
> Popup comes up
> You click “ok” or another button added
> Code is executed
> Dialog goes away
> You press backbutton to go back one view
> Fatal error
The error is in the backbutton listener in NotificationTool class. I think it’s either if(IsShown) or ClosePrompt().
I wasn’t too familiar with this advanced navigation and frame coding, but after scratching my head on this for a while, I realized that the backbutton listener only is stopped on backbutton during the messagebox, not if you click one of the buttons. So I added
RootFrame.BackKeyPress -= parentPage_BackKeyPress;
in ClosePrompt()-method which is run after the commands, like you do in the backbutton listener. That way you stop listening for backbuttons after the messagebox is gone.
Thank you very much for this project, it really helped me out when making a dark theme only app, that needs custom options.
Thanks for sharing Alex.
Does this NotificationBox support Orientation changes, e.g. if the user is viewing a Landscape orientation on Windows Phone 7 and the NotificationBox is displayed as a Landscape box. Currently it looks to be Portrait mode only.
Hi Steve, It’s portrait only.
Hi Tomer,
Your set of assets it’s amazing and it really enhances WPSDK but I encountered and issue in order to show two Notification box messages on a row. In few words, after the first is shown a second one must be shown depending on the answer for the first one but I’m just getting just the first one. The call for the second one is being made (debugging I can hit the code) but nothing happens.
If you prefer so I can leave a copy of this issue on codeplex.
Much Thanks,
Cristian
Where can i get just dll for it, without compiling it?
Hi Cristian,
The notification box was not design to support more than one message at a time. Currently it doesn’t support kind of scenario. Maybe I’ll add it in the future.
I am trying to your code implement my project but I write NotificationBOx.show() it giives me error. It suggest me only 3 (showagaintext,showagainproperty,showagainoption). What i do?
Regarding the issue where you pop a NotificationTool box up and then want to pop up another one inside one of the NotificationActions. I think I fixed this in the source code under DeepForest.Phone.Assets –> Tools –> NotificationAction.cs
I made this changed (original code is commented out)
void ICommand.Execute(object parameter)
{
//_execute();
//NotificationTool.Close();
NotificationTool.Close();
_execute();
}
This essentially closes the tool before executing the NotificationAction which allows you to pop a new NotificationTool within the action. | http://blogs.microsoft.co.il/tomershamam/2010/10/19/windows-phone-7-custom-message-box/ | CC-MAIN-2019-04 | refinedweb | 2,577 | 67.25 |
ContextMenus do not work in Edge
By design Issue #9529523
Steps to reproduce
I am creating a simple extension, but the background page doesn’t recognize
chrome.contextMenus, what am I doing wrong?
// Always returning false if (!chrome.contextMenus) { console.error('Cannot find contextmenus') }
The manifest has the proper permissions:
"permissions" : ["contextMenus" ],
Microsoft Edge Team
Changed Assigned To to “Brad E.”
Changed Assigned To from “Brad E.” to “Chee C.”
Changed Status to “Confirmed”
Changed Status from “Confirmed” to “By design”
Edge extensions use the browser.* namespace instead of the chrome.* namespace. If you replace chrome with browser and add "persistent": true to your background field in the manifest, your extension should start working as expected.
We’ll update our documentation to make sure that the change in namespace is explicitly called out. Thanks for bringing this to our attention!
You need to sign in to your Microsoft account to add a comment. | https://developer.microsoft.com/en-us/microsoft-edge/platform/issues/9529523/ | CC-MAIN-2017-34 | refinedweb | 153 | 67.45 |
Bruce, Looking at the variable attribute, you will see the _Unsigned = "true" attribute. You'll want to convert the unsigned byte to a short. Note that when converting, you cannot take absolute value, as this mapping shows: byte short ... 125 = 125 126 = 126 127 = 127 -128 = 128 -127 = 129 -126 = 130 -125 = 131 -124 = 132 ... Here's the java code to make the conversion: static public short unsignedByteToShort(byte b) { return (short) (b & 0xff); } The scale factor and offset are exactly what you would expect - scale by 0.5 and subract 64.5, respectively. Hope this helps! -Lansing Madry Unidata Boulder, Colorado > Tech/Help, > > I am trying to work with NEXRAD II data files to extract radial velocity. > I have been able to convert the files I have to a NETCDF format (using > your 'java' utility) to more easily access the data I need. > > Within the NEXRAD files the 'RadialVelocity' field is labeled as in > units of 'm/s', but the actual data are in a 'byte' format (ranging in > value from -127 to +127). Could you direct me to where I find out how > to convert the 'byte' data into a floating point velocity in 'm/s' (the > header also lists an 'scale_factor' of 0.5, and an 'offset' of -64.5)? > > Peace, > > BJ > > -- > o o \ ! \| \ / |/ ! / o / o > -|- -/- --o \- | -/ o-- -\ -|- > /_\_____|\_____/__\____ /_o_____/o\_____o_\_____/__\____/_\____/_\ > Bruce Jackson > Air Resources Board/Technical Support Division > Sacramento, California 95812 > address@hidden<mailto:address@hidden> > (916)324-6916 > > > Ticket Details =================== Ticket ID: HEL-720643. | https://www.unidata.ucar.edu/support/help/MailArchives/thredds/msg01779.html | CC-MAIN-2022-40 | refinedweb | 260 | 63.29 |
CASE STUDY:
Digging Into the Past With New Technology

Challenge

Ancestry operates the largest consumer genomics DNA network in the world. The company's popular website, ancestry.com, has been working with big data long before the term was popularized. The site was built on hundreds of services, technologies and a traditional deployment methodology. "It's worked well for us in the past," says Paul MacKay, software engineer and architect at Ancestry, "but had become quite cumbersome in its processing and is time-consuming. As a primarily online service, we are constantly looking for ways to accelerate to be more agile in delivering our solutions and our products."
Solution
The company is transitioning to cloud native infrastructure, using Docker containerization, Kubernetes orchestration and Prometheus for cluster monitoring.
Impact
"With the move to Dockerization, for example, instead of taking between 20 to 50 minutes to deploy a new piece of code, we can now deploy in under a minute for much of our code. We've truly experienced significant time savings in addition to the various features and benefits from cloud native and Kubernetes-type technologies."
"The sum of the legacy systems became quite cumbersome in its processing and was time-consuming," says MacKay. "We were looking for other ways to accelerate, to be more agile in delivering our solutions and our products."
"And when it [Kubernetes] …"
Plus, MacKay says, "I just believed in the confidence that comes with the history that Google has with containerization. So we started out right on the leading edge of it. And we haven't looked back since."
Which is not to say that adopting a new technology hasn't come with some challenges. "Change is hard," says MacKay. "Not because the technology is hard or that the technology is not good. It's just that people like to do things like they had done [before]. You have the early adopters and you have those who are coming in later. It was a learning experience on both sides."
Figuring out the best deployment operations for Ancestry was a big part of the work it took to adopt cloud native infrastructure. "We want to make sure the process is easy and also controlled in the manner that allows us the highest degree of security that we demand and our customers demand," says MacKay. "With Kubernetes and other products, there are some good solutions, but a little bit of glue is needed to bring it into corporate processes and governances. It's like having a set of gloves that are generic, but when you really do want to grab something you have to make it so it's customized to you. That's what we had to do."
Their best practices include allowing their developers to deploy into development stage and production, but then controlling the aspects that need governance and auditing, such as secrets. They found that having one namespace per service is useful for achieving that containment of secrets and config maps. And for their needs, having one container per pod makes it easier to manage and to have a smaller unit of deployment.
"The success of Ancestry's first deployment of the hint system on Kubernetes helped create momentum for greater adoption of the technology."
With that process established, the time spent on deployment was cut down to under a minute for some services. "As programmers, we have what's called REPL: read, evaluate, print, and loop, but with Kubernetes, we have CDEL: compile, deploy, execute, and loop," says MacKay. "It's a very quick loop back and a great benefit to understand that when our services are deployed in production, they're the same as what we tested in the pre-production environments. The approach of cloud native for Ancestry provides us a better ability to scale and to accommodate the business needs as work loads occur."
The success of Ancestry's first deployment of the hint system on Kubernetes helped create momentum for greater adoption of the technology. "Engineers like to code, they like to do features, they don't like to sit around waiting for things to be deployed and worrying about scaling up and out and down," says MacKay. "After a while the engineers became our champions. At training sessions, the development teams were always the ones saying, 'Kubernetes saved our time tremendously; it's an enabler; it really is incredible.' Over time, we were able to convince our management that this was a transition that the industry is making and that we needed to be a part of it."
A year later, Ancestry has transitioned a good number of applications to Kubernetes. "We have many different services that make up the rich environment that [the website] has from both the DNA side and the family history side," says MacKay. "We have front-end stacks, back-end stacks and back-end processing type stacks that are in the cluster."
"... 'I believe in Kubernetes. I believe in containerization. I think if we can get there and establish ourselves in that world, we will be further along and far better off being agile and all the things we talk about, and it'll go forward.'"
Looking ahead, MacKay sees Ancestry maximizing the benefits of Kubernetes in 2017. "We're very close to having everything that should be or could be in a Linux-friendly world in Kubernetes by the end of the year," he says, adding that he's looking forward to features such as federation and horizontal pod autoscaling that are currently in the works. "Kubernetes has been very wonderful for us and we continue to ride the wave."
That wave, he points out, has everything to do with the vibrant Kubernetes community, which has grown by leaps and bounds since Ancestry joined it as an early adopter. "This is just a very rough way of judging it, but on Slack in June 2015, there were maybe 500 on there," MacKay says. "The last time I looked there were maybe 8,500 just on the Slack channel. There are so many major companies and different kinds of companies involved now. It's the variety of contributors, the number of contributors, the incredibly competent and friendly community."
As much as he and his team at Ancestry have benefited from what he calls "the goodness and the technical abilities of many" in the community, they've also contributed information about best practices, logged bug issues and participated in the open source conversation. And they've been active in attending
meetups
to help educate and give back to the local tech community in Utah. Says MacKay: "We're trying to give back as far as our experience goes, rather than just code."
When he meets with companies considering adopting cloud native infrastructure, the best advice he has to give from Ancestry's Kubernetes journey is this: "Start small, but with hard problems," he says. And "you need a patron who understands the vision of containerization, to help you tackle the political as well as other technical roadblocks that can occur when change is needed."
With the changes that MacKay's team has led over the past year and a half, cloud native will be part of Ancestry's technological genealogy for years to come. MacKay has been such a champion of the technology that he says people have jokingly accused him of having a Kubernetes tattoo.
"I really don't," he says with a laugh. "But I'm passionate. I'm not exclusive to any technology; I use whatever I need that's out there that makes us great. If it's something else, I'll use it. But right now I believe in Kubernetes. I believe in containerization. I think if we can get there and establish ourselves in that world, we will be further along and far better off being agile and all the things we talk about, and it'll go forward."
He pauses. "So, yeah, I guess you can say I'm an evangelist for Kubernetes," he says. "But I'm not getting a tattoo!" | https://kubernetes.io/ko/case-studies/ancestry/ | CC-MAIN-2020-40 | refinedweb | 1,358 | 60.95 |
AGIWiki/shake.screen
The shake.screen command shakes the screen a specified number of times.
Syntax
shake.screen(byt shakeCount);
Remarks
The screen is shaken
shakeCount times. The screen is "shaken" by moving the entire screen image around on the monitor to give a violent tremor effect.
Do not use a value of 0, as this causes the screen to shake indefinitely, and never returns control back to the interpreter.
AGIPAL
Note: shaking the screen 100 or more times is not recommended (the screen will shake for a long time). Also note that if you are using AGIPAL, the
shake.screen command is altered so that any value between 100 and 109 changes the game's palette instead of shaking the screen.
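The value-dependent behavior described above can be summarized as a small decision table. Here is a hypothetical Python model of it (not actual interpreter code):

```python
def shake_screen_effect(shake_count, agipal=False):
    # Models what shake.screen(shake_count) does, per the notes above.
    if shake_count == 0:
        return "shakes forever"  # control never returns to the interpreter
    if agipal and 100 <= shake_count <= 109:
        # AGIPAL repurposes 100-109 as palette switches
        return "set palette %d" % (shake_count - 100)
    return "shake %d times" % shake_count

print(shake_screen_effect(5))                  # shake 5 times
print(shake_screen_effect(103, agipal=True))   # set palette 3
```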
Parameters
shakeCount: a number, 0-255, specifying how many times to shake the screen
Possible errors
- Some interpreters, such as NAGI, do not support the shake.screen command and will do nothing if the command is issued. It is safe to issue the command on these interpreters, but it will be ignored.
- Specifying a value of 0 for shakeCount appears to make the screen shake forever.
Examples
The following code will shake the screen five times whenever the player enters this room:
#include "defines.txt" if (new_room) { load.pic(room_no); draw.pic(room_no); discard.pic(room_no); draw(ego); show.pic(); shake.screen(5); } | https://wiki.scummvm.org/index.php?title=AGIWiki/shake.screen&mobileaction=toggle_view_desktop | CC-MAIN-2021-25 | refinedweb | 258 | 62.68 |
On Sat, Oct 27, 2007 at 11:06:34AM -0700, Kevin D. Kissell wrote:
> [...]
The code up to 2.6.23 uses IRQF_DISABLED (which used to be named SA_INTERRUPT
until July 2006) for the timer interrupt in timer_irqaction which is defined
in the generic time.c.
> [...] I could imagine that making a material difference in the presnece
> of "interrupt storms" from I/O devices.
I don't recall any reports of this sort of behaviour before tnishioka's.
This includes no hang reports about the R4000 read from c0_count ever -
because Linux will happen to just nicely tiptoe around the issue for all
real world configurations.
>.
I think it should be based on the last compare value. This is the only
way to ensure interrupts will use a precise timing.
> But I gave up tilting at these windmills a long, long time ago... ;o)
Your windmill has been fixed for 2.6.24.
Now available at your nearest LMO (TM) GIT Store!
The big change is that the new timer code now has a proper concept of
oneshot interrupt timers. Which is what the compare interrupt really is
despite the continuously running counter. So this is how the new event
programming code looks like:
static int mips_next_event(unsigned long delta,
struct clock_event_device *evt)
{
unsigned int cnt;
int res;
cnt = read_c0_count();
cnt += delta;
write_c0_compare(cnt);
return ((int)(read_c0_count() - cnt) > 0) ? -ETIME : 0;
}
The caller will check for the error return and, if it occurs, invoke the active
clockevent device's ->set_next_event method again with a new suitable
delta.
Qemu btw. can trigger the -ETIME case fairly easily.
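The -ETIME check relies on 32-bit wraparound arithmetic: if the counter has already raced past the newly written compare value by the time it is read back, the signed difference is positive. A Python model of that check (a sketch of the logic, assuming a free-running 32-bit counter; -1 stands in for -ETIME):

```python
MASK = 0xFFFFFFFF

def to_signed32(x):
    # Interpret a 32-bit value as a signed integer, like the (int) cast in C.
    x &= MASK
    return x - (1 << 32) if x & 0x80000000 else x

def next_event(count_now, delta, count_after_write):
    # Mirrors mips_next_event(): program compare = count + delta, then
    # check whether the counter already passed the compare value.
    cnt = (count_now + delta) & MASK
    return -1 if to_signed32(count_after_write - cnt) > 0 else 0

print(next_event(1000, 500, 1100))  # 0: compare is still ahead
print(next_event(1000, 50, 1100))   # -1: counter already passed compare
```

Note that the wraparound case works out too: a compare value just past the 32-bit boundary still compares as "ahead" of a counter just below it.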
But anyway, I don't object a patch to improve theoretical correctness.
Ralf | http://www.linux-mips.org/archives/linux-mips/2007-10/msg00566.html | CC-MAIN-2014-15 | refinedweb | 278 | 73.58 |
Check out this quick tour to find the best demos and examples for you, and to see how the Felgo SDK can help you to develop your next app or game!
Cool graphics are essential to probably every game, but you always have to keep in mind that memory is limited, especially on older devices. In this tutorial you will learn how to use the available memory more efficiently, speed up loading sprites and even drawing them, with the help of TexturePacker.
First, follow this link and download TexturePacker.
While downloading, take a look at these 2 short and great videos by Code'n'Web, explaining the very basics of what we are talking about in this tutorial (about 3 minutes each):
SpriteSheets - TheMovie - Part 1
SpriteSheets - TheMovie - Part 2
In short, sprite sheets have these advantages: they waste less memory, they allow many items to be drawn in a single draw call, and they make exporting graphics for multiple screen resolutions much easier.
After you finish installing, run TexturePacker. You can use the free trial version for this guide.
Loading every sprite from a separate file has many disadvantages, especially in memory usage and performance.
First of all, using file formats such as png or jpeg can reduce the total size of your game, but it won't affect the memory (RAM) usage while your game is running. The sprites have to be uncompressed into the RAM where they "become" textures the graphics processor can use. This means that every single pixel of the sprite consumes the same amount of memory (4 Byte per pixel with the standard RGBA8888 image format). E.g. a 512x512 pixel png completely filled with black color has a file size of under 5KB, but will still use 1MB (4 Bytes per pixel * width * height) of RAM.
The rectangular shape of the sprite usually doesn't match the particular sizes that hardware demands and needs to be changed by the system before they can get passed to the GPU. Worst case would be a hardware that can only process square sprites with width and height matching a power of 2 (128x128,256x256,...). If we have a 140x140 sprite, it will automatically be altered to match the hardware constraints. Since a square of 128x128 would be too small, it will be packed in a 256x256 square, and all the remaining space is unused, but still consumes memory when loading the sprite into the RAM. So instead of 76 KB (4 Bytes per pixel * width * height) for the 140x140 sprite, we will now need 256KB of memory space. That's more than 3 times the original space.
This adds up quickly, because lots of sprites will be used in a game. E.g. a 512x512 sprite needs 1MB. Some might say: "Calm down bro, I got 1GB of RAM in my iPhone 5, I can handle your 200 single sprites, no worries!". But you have to keep in mind that your game should maybe also run on an iPod Touch 4 with only 256MB. Also consider that you can't fill all this memory just with sprites of your application. Other applications will use parts of the RAM for their data at the same time. Here is a list of iOS devices and their RAM (Memory): List of iOS devices.
But don't worry, you will learn below how sprite sheets can be created with a power-of-two size by combining several sprites so that less to no extra padding is added.
Also the standard RGBA8888 image format with 4 Byte per pixel can be an unnecessary waste of memory. By default, images are stored with 8 bit per color channel. This is 32 bits (= 4 Bytes) for red, green, blue and alpha. With 32 bits you can represent 2^32 = 4,294,967,296 different colors, which you don't always need. Background images can be optimized very well by choosing a different format. They don't need the alpha channel because they are always behind everything else. The color channels' bit depth can be reduced to e.g. 5 bits for red, 6 bits for green (more for green because the human eye is more accurate with green colors) and 5 bits for blue. Suddenly you need only half of the memory you needed before. Furthermore, dithering randomizes the errors introduced by the reduction of the color depth and makes it less visible. This is all supported by TexturePacker.
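To make the bit-depth reduction concrete, here is a hypothetical Python helper that packs an RGBA8888 pixel into RGB565 — dropping alpha, and keeping an extra bit for green as described above:

```python
def rgb888_to_rgb565(r, g, b):
    # Keep the top 5 bits of red/blue and 6 bits of green (the eye is
    # most sensitive to green), packing the result into 2 bytes.
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

def rgb565_to_rgb888(p):
    # Expand back to 8 bits per channel; the low bits are lost,
    # which is exactly the error that dithering spreads out.
    r = (p >> 11) & 0x1F
    g = (p >> 5) & 0x3F
    b = p & 0x1F
    return (r << 3, g << 2, b << 3)

print(hex(rgb888_to_rgb565(255, 255, 255)))  # 0xffff
print(rgb565_to_rgb888(0xFFFF))              # (248, 252, 248)
```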
Fewer draw calls usually improve the performance. Draw calls are expensive because each draw call needs state changes. The CPU will wait for the GPU to finish its current draw command and set the new states. This disturbs the pipelining in the GPU and causes a lot of idle time in the CPU. Additionally, transferring data, e.g. vertex data, to the graphics device is quite slow. In theory, there is a point where more draw calls with less data each are better. Since we have mostly rectangles with 4 vertices each in a 2D-engine and hardly any other data this point cannot be reached in practice. In a nutshell, sprite sheets help that the game and the graphics device can work better in parallel because of fewer interruptions.
Because several game objects share the same texture with a sprite sheet they can be displayed through one draw call. This doesn't work if the rectangles are overlapping because they must be blended in the correct order. Fortunately, the Qt renderer puts as many non-overlapping items as possible in one draw call. This speeds up the rendering performance a lot, even in crowded scenes.
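The batching idea can be sketched as: walk the items in draw order and collapse consecutive items that share a texture into one draw call (assuming no overlaps that would force a blend-order split):

```python
def batch_draw_calls(items):
    # items: list of (texture, rect) pairs in draw order.
    # Consecutive items sharing a texture collapse into one draw call.
    batches = []
    for texture, rect in items:
        if batches and batches[-1][0] == texture:
            batches[-1][1].append(rect)
        else:
            batches.append((texture, [rect]))
    return batches

calls = batch_draw_calls([("sheet", 1), ("sheet", 2), ("other", 3), ("sheet", 4)])
print(len(calls))  # 3 draw calls instead of 4
```

With one sprite sheet per scene, most items share a texture, so long runs collapse into very few calls.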
You probably want to have different texture sizes for different screen resolutions. Creating the three different versions for sd, hd and hd2 of all images is an annoying and tedious task for the artists. Fortunately, TexturePacker makes it easy and fast to export the sprite sheets with different scaling settings. Read the section about Automatic Content Scaling for more information.
We made a quick performance test comparing our new TexturePackerAnimatedSprite component with the native Qt AnimatedSprite component. In this test we added instances of them at random positions on the screen until the frame rate dropped below 30 frames per second. Here are the results:
As you can see here, the difference between the Qt and our sprite implementation is bigger on slower hardware. Felgo can show about 35% more sprites on high-end devices and 100% more on low-end mobile devices! If the Qt renderer wouldn't use an own internal sprite sheet the differences would be even greater.
The Felgo implementation is also better in terms of memory consumption. The test program, compiled by MinGW in release mode, with 8000 of our sprites used 145 MB memory space. It used 185 MB with 8000 Qt sprites. So our sprite implementation needs about 25% less memory.
Internally, Qt automatically creates sprite sheets at runtime of normal Image components. While this is great if you have few images of small size, the Qt solution has several disadvantages:
The Felgo solution solves all of the above Qt issues. Thus we recommend using a custom sprite sheet created with TexturePacker over the Image and Sprite solutions of Qt. However, when beginning to prototype a game, using the QML Image element and SpriteSequence or AnimatedSprite is perfectly fine. For the best performance in published games though, switch to the TexturePacker components by Felgo.
You can get TexturePacker from here.
TexturePacker is an extremely powerful, easily accessible and well-designed tool. It supports all required features and can export to arbitrary resolutions, which makes it a great fit to export the sd, hd and hd2 textures based from your high-res versions. The best thing is, it is written with Qt so it is available for all desktop platforms!
The advantage of texture packing tools is that you can automatically put all your images into a single texture. At exporting, you can then change the resolution of the image for the 3 main resolutions sd, hd and hd2 (see the How to create mobile games for different screen sizes and resolutions guide for more information). So you can work with a single version of your graphics in highest resolution, and scale them down in no time.
TexturePacker generates 2 kind of files:
The generated data file for the Squaby Demo, for example, looks like this:
{"frames": { "10.png": { "frame": {"x":2,"y":2,"w":32,"h":26}, "rotated": false, "trimmed": false, "spriteSourceSize": {"x":0,"y":0,"w":32,"h":26}, "sourceSize": {"w":32,"h":26} }, "15.png": { "frame": {"x":36,"y":2,"w":32,"h":26}, "rotated": false, "trimmed": false, "spriteSourceSize": {"x":0,"y":0,"w":32,"h":26}, "sourceSize": {"w":32,"h":26} }, ... // the definitions for all the other images follow here, automatically generated by the texture packing tools ... }}, "meta": { "app": "", "version": "1.0", "image": "squaby.png", "format": "RGBA8888", "size": {"w":128,"h":256}, "scale": "0.25", "smartupdate": "$TexturePacker:SmartUpdate:e5683c69753f891cee5b8fcf8d21cf93$" } } }
Let's take a look at this great tool and let's use it to create a little project.
We are using some sprites of the Felgo game Squaby for this guide. You can download the resources here: resources
In the TexturePacker GUI, click Add smart folder, navigate to our Felgo project, and add the
texturepacker-resources folder or simply drag and drop
your raw assets folder into the window.
Now you can see all the sprites from that folder on the left hand side. If you change anything within the folder, TexturePacker will automatically pick up all the changes. It's also possible to arrange your sprites in subfolders and refer to them with their relative path later, or even add multiple folders to the same sprite sheet - this can be handy for larger games where you have the same items on multiple sprite sheets/levels.
Below that, in the bottom right corner, is the size of your sprite sheet and the amount of memory it will use in the RAM.
In the center you can see how TexturePacker arranges all the sprites in a optimized way, representing your resulting sprite sheet.
We will take a closer look at some of the most important options on the right hand side.
Like I mentioned above, the standard format would be RGBA8888 with 4 Bytes per pixel (Red, Green, Blue and Alpha for transparency). With this setting our sprite sheet will use 2048KB of memory, as you can see in the bottom right of the TexturePacker GUI. If we change it to RGB4444, we discard half of the color information, ending up with half the memory used, which is a huge improvement. Go ahead and try it out!
Saving 1MB didn't convince you about this features strength? Then we will do the math again with a hd2 texture. 2048 * 2048 * 4 Bytes results in 16MB (!) of memory needed in the RAM for just one texture. So with RGB4444 we can save 8MB (!!!) with each texture. HUUUUGE!
Of course, half the color can also cause problems, especially with gradients. This is where TexturePacker's killer feature Dithering comes into play.
Try out the different dithering options and take a close look at the sprites. While the towers and Squabies still look very good, the digit sprites (5, 10, 15) are designed so gradient heavy (especially with the shadow) that even with dithering we are not fully satisfied with the outcome.
In this case the best way would probably be splitting the sprite sheets into one with gradient heavy sprites and one with the others, and chose different image formats for each of them. Of course it always depends on the number of sprites you have, if the memory win is worth the trade-off from having more sprite sheets.
In this tutorial we will just stick to RGBA8888 so we can go on, make sure you changed the image format back to it.
You simply create your images for the highest resolution and let TexturePacker scale them down for lower resolutions. Regarding this, I highly recommend reading How to create mobile games for different screen sizes and resolutions if you haven't done so already.
This will be the different resolutions of your scene on different devices (while the logical size of your scene stays 480x320 to not affect the game logic): 480x320 for sd, 960x640 for hd, and 1920x1280 for hd2.
This means the common work-flow would be: create your graphics once, targeting the highest (hd2) resolution, and let TexturePacker export the scaled-down hd and sd sprite sheets automatically.
In the scaling variants menu, enter the settings like in the picture below and click apply.
And what's the thought behind saving the same images 3 times in different sizes? Again, saving RAM is the answer. On lower resolution devices you don't need huge hd2 sprites, so Felgo uses the smaller ones instead, to save memory.
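Selecting the right variant at runtime boils down to comparing the device's pixel density against the available texture scales. A hypothetical sketch of that choice (Felgo does this for you internally; the folder suffixes follow the {v} placeholder convention above):

```python
# folder suffix -> how many times larger than the sd texture this variant is
VARIANTS = [("", 1), ("+hd", 2), ("+hd2", 4)]

def pick_variant(pixel_ratio):
    # pixel_ratio: actual screen height / logical scene height of 320.
    # Choose the smallest texture that still covers the screen density,
    # so low-resolution devices never load oversized hd2 textures.
    for suffix, factor in VARIANTS:
        if factor >= pixel_ratio:
            return suffix
    return VARIANTS[-1][0]

print(pick_variant(640 / 320))   # +hd
print(pick_variant(2048 / 320))  # +hd2 (nothing larger available)
```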
Felgo DOES support it, so make sure it's activated!
If you want to find out more about any of the options, just hover your mouse above them and read the tool-tips.
Just one more thing to add regarding image formats: RGB565 reduces the color information and completely discards the alpha channel, which makes it perfect for reducing the size of background images. This is also used internally in Felgo when you use the BackgroundImage component.
Fine, we covered the most important options for our tutorial, and are nearly ready to publish our sprite sheet, we are just missing a name for our data file and the texture file.
Click the button with (...) next to Data File, locate the assets/img folder within your Felgo project and save the file as {v}squaby.json. The "{v}" is a placeholder for the scaling subfolders like "+hd" and "+hd2". The path should now be something like ".../squaby/assets/img/{v}squaby.json". TexturePacker will automatically fill in the Texture File name for you.
Now click Publish sprite sheet and all your sprite sheets for the different resolutions are created.
You can also save this settings as a *.tps file by clicking Save project.
TexturePackerAnimatedSprite, TexturePackerSpriteSequence and TexturePackerSprite support content scaling like the MultiResolutionImage component. This allows you to create the game only once for a logical scene size and automatically resize the images based on the screen.
To use content scaling together with the TexturePacker components, export 3 different versions of your high-res graphics: The high-res hd2 version with a scene resolution of 1920x1280, the hd version with 960x640 and the sd version with 480x320. Just modify the Scale setting in TexturePacker. When your images are made for the hd2 resolution, export it with scale = 1 for the hd2 texture, with scale = 0.5 for the hd texture and with scale = 0.25 for the sd texture. The setting is displayed in the following image. TexturePacker also creates the corresponding JSON file.
The Scale is set to 0.25 for exporting the sd image and json file.
Place the image and JSON files in the correct directories like in this example:
If you thought this simple tutorial will finally become super tricky now, I have to disappoint you - more simple stuff is about to come. :)
Jump into our
main.qml, and delete most of it to look like this:
import Felgo 3.0
import QtQuick 2.0

GameWindow {
    Scene {
    }// Scene
}// GameWindow
Now we got a GameWindow with an empty Scene where we will place our sprites next.
Let's add a single sprite to our Scene
import Felgo 3.0
import QtQuick 2.0

GameWindow {
    Scene {
        TexturePackerAnimatedSprite {
            id: nailgunSprite
            source: "../assets/img/squaby.json"
            frameNames: ["nailgun.png"]
            x: 100
            y: 100
        }// TexturePackerAnimatedSprite
    }// Scene
}// GameWindow
All we needed is the TexturePackerAnimatedSprite component: we set the name of the sprite in TexturePackerAnimatedSprite::frameNames and the path to the JSON file in TexturePackerAnimatedSprite::source. As you can see, the name of the sprite inside the sprite sheet is exactly the same as it was as a single sprite file.
Did you notice the plural of "frameNames" and its usage as list? Although the TexturePackerAnimatedSprite is mainly for sprite animations it can also be used for static images. It does only update its graphics if necessary and has therefore a good performance. Keep in mind that the frameNames property is actually a list of strings.
Additionally we added an
id and moved the sprite to the defined x/y coordinates.
If you run the project, you can see our nail gun sprite, pretty easy.
Let's modify this sprite at runtime. If you already played Squaby, you know that the towers can be upgraded. With every upgrade, the nailgun will look different. We will quickly simulate this behavior.
Add this after your sprite:
MouseArea {
    anchors.fill: nailgunSprite
    onClicked: {
        if(nailgunSprite.frameNames[0] === "nailgun.png") {
            nailgunSprite.frameNames = ["nailgunUpgradeFire.png"];
        } else if(nailgunSprite.frameNames[0] === "nailgunUpgradeFire.png") {
            nailgunSprite.frameNames = ["nailgunUpgradeBoth.png"];
        } else {
            nailgunSprite.frameNames = ["nailgun.png"];
        }
    }
}// MouseArea
This is the reason we gave the sprite an
id; so we can access its properties with the
id. What we are doing here is changing the frameNames of the sprite with each click on it. The sprite
automatically gets redrawn if its frameNames have been changed.
Run the project and try it out!
This looks too static for your taste? You want some animations? No problem sir, your wish is my command:
import Felgo 3.0
import QtQuick 2.0

GameWindow {
    Scene {
        TexturePackerAnimatedSprite {
            id: squabySprite
            source: "../assets/img/squaby.json"
            frameNames: ["squ1-walk-1.png", "squ1-walk-2.png", "squ1-walk-3.png", "squ1-walk-4.png"]
            interpolate: false
            anchors.centerIn: parent
            frameRate: 3
        }
    }
}
We just added a walking Squaby, quite similar to the static sprite: we added the TexturePackerAnimatedSprite component, set the path to our JSON file in source and listed the four walk frames in frameNames. Additionally we added an id and centered the sprite in our scene.
If you run the project you can admire that cute little walking Squaby. But everyone knows, this little monsters do not only walk around, they jump and scare the sh** out of us!
This time we use a TexturePackerSpriteSequence with multiple TexturePackerSprite children to control several animations at once.
Each of these TexturePackerSprite items describes one animation. For example, the first animation has the name "walk" and runs at 20 frames per second. All the frames used for the animation are listed in frameNames in the correct order.
Although I'm already totally frightened of what's about to come, replace our sprite animation with this:

TexturePackerSpriteSequence {
    id: squabySprite
    anchors.centerIn: parent

    TexturePackerSprite {
        name: "walk"
        source: "../assets/img/squaby.json"
        frameNames: ["squ1-walk-1.png", "squ1-walk-2.png", "squ1-walk-3.png", "squ1-walk-4.png"]
        frameRate: 20
        to: {"walk": 1}
    }

    TexturePackerSprite {
        name: "jump"
        source: "../assets/img/squaby.json"
        // the jump frame names below are placeholders - use the ones from your sprite sheet
        frameNames: ["squ1-jump-1.png", "squ1-jump-2.png", "squ1-jump-3.png"]
        frameRate: 20
        to: {"walk": 0.75, "jump": 0.25}
    }

    MouseArea {
        anchors.fill: squabySprite
        onClicked: {
            squabySprite.jumpTo("jump")
        }
    }// MouseArea
}// TexturePackerSpriteSequence
By clicking you can switch to the "jump" animation. When one jump cycle ends, there is a 75% chance to switch back to the "walk" animation and a 25% chance to play "jump" again.
Now we only need to tell the Squaby to stop running and jump instead. This is done in our MouseArea below the sprite sequence. If we click the Squaby, we use the TexturePackerSpriteSequence::jumpTo() function to change the animation.
Take a deep breath, fasten your seatbelt and then run your project to try it out!
What? That didn't scare you? I guess that means you are pretty damn tough! Compared to you I'm a total wreck right now, so let's stop here with this lesson.
If you have any questions regarding this tutorial, don't hesitate to visit the support forums.
Visit Felgo Games Examples and Demos to gain more information about game creation with Felgo and to see the source code of existing apps in the app stores. | https://felgo.com/doc/howto-texture-packer/ | CC-MAIN-2021-04 | refinedweb | 3,208 | 64.1 |
Many of the services available in .NET and exposed via C# (such as late binding, serialization, remoting, attributes, etc.) depend on the presence of metadata. Your own programs can also take advantage of this metadata, and even extend it with new information. Examining existing types via their metadata is called reflection, and is done using a rich set of types in the System.Reflection namespace. It is also possible to dynamically create new types at runtime via the classes in the System.Reflection.Emit namespace. You can extend the metadata for existing types with custom attributes. For more information, see Chapter 14. | http://etutorials.org/Programming/C+in+a+nutshell+tutorial/Part+II+Programming+with+the+.NET+Framework/Chapter+13.+Reflection/ | CC-MAIN-2018-13 | refinedweb | 102 | 50.43 |
Details
Description
Jaxen is failing this test:
org.w3c.dom.Element root = doc.createElement("root");
doc.appendChild(root);
Element child = doc.createElementNS("", "foo:child");
root.appendChild(child);
XPath xpath = new DOMXPath("//namespace::node()");
List result = xpath.selectNodes(doc);
assertEquals(3, result.size());
This query on this document should find three nodes: a namespace node for the xml prefix on the root, a namespace node for the xml prefix on the child, and a namespace node for the foo prefix on the child. It only finds the first two. (I've inspected this with the debugger.) For some reason it misses the foo prefix.
I haven't fully tracked this down yet. It might be a problem in the DOM navigator. It might be a problem in Jaxen itself.
Activity
I've fixed the basic issues. The DOM navigator will now generate namespace nodes from elements and attributes' namespaces, as well as explicit xmlns Attr objects. There may still be some issues with ancestral namespaces I need to look at. I have no idea what to do with DOM trees in which the Attr objects are in active conflict with the actual namespaces. This is a real mismatch between DOM and XPath.
What Jaxen is currently doing is ignoring the additional, contradictory namespace declarations. I think I'm going to write tests for this and declare this to be the expected behavior.
I'm furthermore going to rule that the intrinsic namespace of an Element or Attr wins over the contradictory namespace declared by an xmlns or xmlns:pre attribute.
There's no good solution here (Blame DOM for that); but it's an edge case; and this approach feels marginally less surprising.
I think this is now resolved and document as best as possible. Honestly DOM is just truly brain damaged here.
I think I've figured out this bug. It's a problem in the DOM navigator. That navigator assumes that all namespace nodes are represented by Attr objects. However, this is not necessarily true for synthetically created documents. These may have namespaces but no Attr objects at all. It's basic DOM brain damage, and should not affect anything in the core or other navigators. I'm working on fixing it. | http://jira.codehaus.org/browse/JAXEN-105?focusedCommentId=40347&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel | CC-MAIN-2013-20 | refinedweb | 373 | 60.21 |
Welcome to Fabric!
This document is a whirlwind tour of Fabric’s features and a quick guide to its use. Additional documentation (which is linked to throughout) can be found in the usage documentation – please make sure to check it out.
As the README says:
Fabric is a Python (2.5 or higher) library and command-line tool for streamlining the use of SSH for application deployment or systems administration tasks.
More specifically, Fabric is:
Naturally, most users combine these two things, using Fabric to write and execute Python functions, or tasks, to automate interactions with remote servers. Let’s take a look.
This wouldn’t be a proper tutorial without “the usual”:
def hello():
    print("Hello world!")
Placed in a Python module file named fabfile.py, that function can be executed with the fab command-line tool:

$ fab hello
Hello world!

Done.
As used above, fab only really saves a couple lines of if __name__ == "__main__" boilerplate. It’s mostly designed for use with Fabric’s API, which contains functions (or operations) for executing shell commands, transferring files, and so forth.
Let’s build a hypothetical Web application fabfile.> Done.
The code itself is straightforward: import a Fabric API function, local, and use it to run and interact with local shell commands. The rest of Fabric’s API is similar – it’s all just Python.
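Under the hood, local is conceptually a thin wrapper around running a shell command and capturing its result — something like this simplified stand-in (not Fabric's real implementation):

```python
import subprocess

def local(command, capture=False):
    # Minimal stand-in for Fabric's local(): run the command via the
    # shell, optionally returning its stdout instead of printing it.
    print("[localhost] local: " + command)
    if capture:
        return subprocess.check_output(command, shell=True).decode().strip()
    subprocess.check_call(command, shell=True)

print(local("echo hello", capture=True))  # hello
```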
See also
Operations, Fabfile discovery

As the fabfile grows, we can split prepare_deploy into sub-tasks:

def test():
    local("./manage.py test my_app")

def commit():
    local("git add -p && git commit")

def push():
    local("git push")

def prepare_deploy():
    test()
    commit()
    push()
The prepare_deploy task can be called just as before, but now you can make a more granular call to one of the sub-tasks, if desired.:
However, despite the additional complexity, it’s still pretty easy to follow, and is now much more flexible.
See also
Context Managers, Full list of env vars
Let’s start wrapping up our fabfile by putting in the keystone: a deploy task that ensures the code on our server is up to date:
def deploy(): code_dir = '/srv/django/myproject' with cd(code_dir): run("git pull") run("touch app.wsgi")
Here again, we introduce a handful of new concepts: prompted us at runtime. Connection definitions use SSH-like “host strings” (e.g. user@host:port) and will use your local username as a default – so in this example, we just had to specify the hostname, my_server.
git pull works fine if you’ve already got a checkout of your source code – but what if this is the first deploy? It’d be nice to handle that case too and do the initial git clone:
def deploy(): code_dir = '/srv/django/myproject' with settings(warn_only=True): if run("test -d %s" % code_dir).failed: run("git clone user@vcshost:/path/to/repo/.git %s" % code_dir) with cd(code_dir): run("git pull") run("touch app.wsgi") pack(): local('tar czf /tmp/my_project.tgz .') def prepare_deploy(): test() pack() def deploy(): put('/tmp/my_project.tgz', '/tmp/') with cd('/srv/django/my_project/'): run('tar xzf /tmp/my_project.tgz') run('touch app.wsgi')
This fabfile makes use of a large portion of Fabric’s feature set:
However, there’s still a lot more we haven’t covered here! Please make sure you follow the various “see also” links, and check out the documentation table of contents on the main index page.
Thanks for reading! | http://docs.fabfile.org/en/1.4.1/tutorial.html | CC-MAIN-2018-05 | refinedweb | 520 | 64.91 |
The FP logicsheet is an XSP logicsheet, a TagLib, that performs tasks useful to people wanting to build and handle HTML Forms within Cocoon. Some of the tags are general enough to be used for other purposes as well.
The Tags form a simple XML language that can be mixed with other TagLibs, for writing the typical logic required for modifying XML Files with HTML Forms.
They allow you to work with data (Strings and Nodes) from external XML Files, both reading and writing to them using the XPath Specification to identify your target. There are also tags to choose between HTTP Methods GET and POST, so you could have a page that can both build a Form and handle saving it's data.
Check your cocoon.properties for this line and add it if it's not already there:
processor.xsp.logicsheet.fp.java =
resource://org/apache/cocoon/processor/xsp/library/java/fp.xsl
Note the line break is for formatting purposes, it should not appear in your cocoon.properties file.
Map the
namespace to the fp prefix. Elements in the fp taglib namespace will be interpreted as input to the fp taglib and will be stripped from the output.
This is typically done like this:
<xsp:page
language="java"
xmlns:fp=""
xmlns:
. . .
</xsp:page>
The fp:resource element with it's mandatory id attribute, is for defining a file you would like to read and/or write to. It takes a series of configuration elements as children.
fp:resource
id
The list of valid child elements is:
fp:resource-file (mandatory) -
The path to a file you want to work with
fp:resource-node (mandatory) -
An XPath to select the Node in your file that you want to work with
fp:default-mode (defaults to replace if not present) -
The default mode you want to use when writing your Nodes, replace replaces the selected Node, insert-before and inser-after create a new Node, with the same name as the selected, and inserts before or after.
replace
insert-before
inser-after
None of the above tags have any relevance outside of the fp:resource-file.
fp:resource-file
You can use as many fp:resource-file elements as you have unique files to work with, each one should have a unique id attribute, this is what you use to refer to them. ie. the FP TagLib can work with multiple files at the same time.
Errors are flagged by placing an fp-error attribute into the parent of the fp:resource-file tag, with a value that is an IDREF pointing to an fp-error element in your document root.
fp-error
The fp:read element reads TextNodes or Nodes from the specified fp:resource into it's parent Element using a relative XPath. It has several mandatory attributes.
fp:read
The list of valid attributes is:
from (mandatory) -
The id of the fp:resource you want to read
select (mandatory) -
An XPath (relative to fp:resource-node) to select the Node(s) in the fp:resource you want to read
as (defaults to string) -
The way you want to read the Node(s) selected by your XPath. The default, string reads all of the TextNodes that are children of the selected Node(s); node reads all of the selected Nodes as deep cloned XML.
fp:resource-node
string
node
It is safe to use an XPath in your fp:read element select attribute that selects multiple Nodes.
If the parent of an fp:read element is an fp:write element, the output of the fp:read element goes to the fp:write element and is written to file. Otherwise the output goes to the user, like any other TagLib.
fp:write
The fp:write element writes TextNodes or Nodes to the specified fp:resource at the location specified by it's relative XPath. It has several mandatory attributes.
to (mandatory) -
The id of the fp:resource you want to write
select (mandatory) -
An XPath (relative to fp:resource-node) to select the Node in the fp:resource you want to write
as (defaults to string) -
The way you want to write the Node selected by your XPath. The default, string overwrites all of the TextNodes that are children of the selected Node with the text content of the fp:write element; node replaces the selected Node with the content of the fp:write element as XML.
It is not safe to use an XPath in your fp:write element select attribute that selects multiple Nodes.
Only FP or other TagLib Tags can be used as child elements of the fp:write element. To do otherwise, causes compilation errors that can be difficult to track down. There may be a solution to this
The fp:write element may only use a simplified form of XPath in it's select attribute, one that is just a simple "path/to/node", because the TagLib has to be able to construct any Nodes you ask for and it can only interpret the simplest case. ie. you can use an XPath like this/is/an/xpath/to/nowhere and the data will be written with all the intervening elements whether they currently exist or not, but an XPath like title/* will not work. (This is different from fp:read's behaviour).
this/is/an/xpath/to/nowhere
title/*
The fp:if-post element allows simple logic, based on the HTTP Method of the incoming request.
fp:if-post
Any child elements of the fp:if-post element are ignored during GET, HEAD, PUT and DELETE requests.
The fp:if-get element allows simple logic, based on the HTTP Method of the incoming request.
fp:if-get
Any child elements of the fp:if-get element are ignored during POST, HEAD, PUT and DELETE requests.
The fp:redirect element can be used to redirect the user to another URL when two conditions have been met, that this was a POST Request, and there were no errors generated by any of the other FP Tags.
fp:redirect
The value of the fp:redirect element, is the URL you want users to go to, it should accept relative, root and absolute addressed URLs.
Try the sample. The URL is http://[your server]/[your cocoon path]/samples/fp/index.xml
The sample files
form/default.xml -
Default values used by the Forms
form/form-html.xsl -
Renders the Form to HTML
form/form-xml.xsl -
Renders the Form to XML (for debugging purposes)
form/item-add.xml -
Builds and saves a Form to add new Items in index.xml
form/item-edit.xml -
Builds and saves a Form to edit existing Items in index.xml
images -
Folder of images
index.xml -
The content XML file
page-html.xsl -
Renders the normal view of the project
These are bits of annotated code from fp/form/item-add.xml
This is how to set up a file resource that you want to modify.
<fp:resource
<fp:resource-file>../index.xml</fp:resource-file>
<fp:resource-node>item[position()=<request:get-parameter]</fp:resource-node>
<fp:default-mode>insert-before</fp:default-mode>
</fp:resource>
We are nesting an element from the Request TagLib inside an FP element to build an XPath on the fly from the "item" request parameter. Now you can use fp:read and fp:write elements to read and/or write to this file, inserting a new "item" Node before the current one, from text in a Form field.
You can have as many fp:resource elements in your XSP as you need.
The id attribute of the fp:resource must be unique, you use it's value in the fp:read or fp:write elements to define which file you want.
Here, I want to get the Title of all onf the Items, to build a Menu.
<menu action="item-add.xml?item=">
<fp:read
The fp:read element is replaced by the contents of the read.
The from attribute is the name of the resource you want to read.
from
The select attribute is an XPath relative to the Node specified in this resource's "resource-node".
The as attribute, in this case node specified that the data should be read and passed on as XML.
as
Status and Error id's are added as attributes to the parent of the fp:read tag, in this case the <menu/>tag.
If there is an error, you get the ID of the error report, placed at the end of the page.
The output looks like this:
<menu fp-
<title>Title 1</title>
<title>Title 2</title>
<title>Title ...</title>
Here we make a Form input Field, so the name is important, but it does not need to relate to the name of the location where the data is to be stored. The parent deos not have to be input, use whatever you want.
input
<input name="title">
<xsp:attribute<fp:read</xsp:attribute>
<fp:if-post>
<fp:write
<request:get-parameter
</fp:write>
<fp:read
</fp:if-post>
<fp:if-get>
<fp:read
</fp:if-get>
</input>
I create a label attribute in the input element, using XSP, but the content comes from reading Text from the defaults file (I would have had to set this file up previously as a fp:resource). The default value for the as attribute is "string", so I am leaving it out. I understand Xerces prefers you to write attributes before elements.
label
Next I write, only if someone is POSTing a Form.
Write the contents of the fp:write element, in this case the value of the POSTed Form field called "title" (this one), to the title Node of our selected item in the external file, as Text.
title
item
The fp:write element does not emit anything into the <input/> tag, except status Attributes.
<input/>
After finishing the write, we read the info back out, this is what ends up being the contents of the input element. I am reading it back out incase I want to display that the user has written etc.
Lastly, only get the default value for the field if someone is GETting a Form. | http://cocoon.apache.org/1.x/fp.html | CC-MAIN-2015-18 | refinedweb | 1,703 | 59.94 |
lets say i have a file, say main.cc, and it includes a file, say input.h, but input.h also includes io.h. Does that mean main.cc depends on io.h? If so, why.
lets say i have a file, say main.cc, and it includes a file, say input.h, but input.h also includes io.h. Does that mean main.cc depends on io.h? If so, why.
If main.cc depends on input.h and input.h depends on io.h, then main.cc depends on io.h. If that is what you mean.
Note that io.h doesn't need to be included witin main.cc also. If you include input.h, it is automatically included.
let me reprase the question, if main.cc includes input.h in its dependency list, and input.h includes io.h in its dependency list, should main.cc also include io.h in its dependency list (assuming io.h is just some random header file, not the io.h included with the compiler). Note that main.cc contains no references to anything defined in io.h, just input.h.
If so, then why. It seems to me that since main.cc doesn't contain any references to anything defined/declared in io.h, then it shouldn't include io.h in its dependency list.
no, having the same header in every header file will lag your program cimpilation badly.
Once I made a program with 2 header files, an engine i made and just aheader file for spare parts i made. They both linked to eachother and every other library(which also link to eachother). My c files also linked to my header files. Took me 10 minutes to compile my program. It was only around 1500 lines long. Sad. Very Sad.
only link to input.h . Do not link to ios.h again. its not needed. does that answer ur question?
>>only link to input.h
hmmm, I'm not sure you understand my question.
What I mean about dependency is that if i were to change a line of code in io.h but that changing that line did not require any modification to input.h, would i need to recompile my object code for main.cc?
ok, i think i know what ur asking.. if you change ANYTHING in any source file you used yes u need to recompile..
also note when you include a .h the source code in the .h is pasted at the beginning of the source file you called it in..
see the #include is a preprocessor statement saying everyting in the file goes here...
ie:
if myheader.h has this in it :
cout <<" HELLO ";
and you call it in source file main.cpp like this:
#include <myheader.h>
when you compile, the compiler goes through the preprocessor and finds the statement #include, now it says ok substitute everything in myheader.h and past it here , then remove the line #include <myheader.h>
there ya go
LB0: * Life once school is done
LB1: N <- WakeUp;
LB2: N <- C++_Code;
LB3: N >= Tired : N <- Sleep;
LB4: JMP*-3;
Yes, thats what my instructor said. But what I said to him in response is that since main.cc doesn't reference anything in io.h, it doesnt matter if io.h is changed since main.cc has no use of anything in io.h. main.cc only calls classes and functions in input.h, so it only matters if definitions are changed in input.h, not io.h.
[main.o] --> [input.o] --> [io.o]
the only time that main.c should be recompiled is if the "interface" of the functions and classes change in input.o (ie. input.h changes). But since main.o references nothing in io.h/io.o, there is no need for main.o to be recompiled if the io.h changes
If I'm wrong about this, i would like to know why. | https://cboard.cprogramming.com/cplusplus-programming/16546-makefile-dependencies.html | CC-MAIN-2017-22 | refinedweb | 665 | 89.65 |
Every person whose gross total income exceeds the taxable limit is liable to file income tax return (ITR). Therefore,,” says Chetan Chandak, Head of Tax Research, H&R Block India.
To explain it further, income tax returns for a financial year must be filed by the 31st of July of the next financial year. For example, income tax returns of Finacial Year 2016-17, that ended on March 31, 2017, will be due on 31st July 2017. In one financial year, you can file your IT returns for the previous two financial years. To instance, in the financial year 2016-17, up to March 31, 2017, you can file return for the previous two financial years – 2015-16 and 2014-15.
“From FY 2018-19, tax returns for the previous financial year can only be filed because the time period to file belated return or revised return has been reduced to one year from the end of the relevant assessment year to one year from the end of the relevant financial year. For example, return for FY 2016-17 can be filed or revised till 31.03.2018 only,” says Archit Gupta, Founder and CEO, ClearTax.in.
However,.
In case. | http://www.financialexpress.com/money/file-income-tax-returns-for-previous-years-how-many-years-can-it-returns-be-filed-for/788931/ | CC-MAIN-2018-13 | refinedweb | 200 | 66.98 |
12 December 2012 19:58 [Source: ICIS news]
Corrected: ?xml:namespace>
Correction: In the ICIS story headlined "US nylon October exports down by 10% year on year – ITC" dated 12 December, please read the headline as "…down by 9.1…" instead of "…down by 10…".
Please read in the first paragraph …were down by 9.1%… instead of …were down by 10%…. A corrected story follows.
HOUSTON (ICIS)--US nylon exports for October were down by 9.1% year on year, according to data made available by the US International Trade Commission (ITC) on Wednesday.
October 2012 exports decreased to 47,356 tonnes from 52,090 tonnes in October 2011, the ITC said.
Year to date, exports of nylon have totalled 505,878 tonnes, a decrease of 5.9% from the same period in 2011.
Top destinations for US nylon exports in October were Mexico, China and Canada.
Imports for October amounted to 7,001 tonnes, an increase by 9.7% year on year.
Canada, Germany and Italy were the top three points of origin for nylon material brought to the US in October.
The ITC groups nylon 6, nylon 11, nylon 12, nylon 6,6, nylon 6,9, nylon 6,10 and nylon 6,12 in its report.
ICIS provides pricing reports for nylon 6 and nylon 6,6.
US producers of nylon include Ascend Performance Materials, BASF Corp, DuPont, EMS-Grivory, Honeywell, INVISTA, NYCOA and Sh | http://www.icis.com/Articles/2012/12/12/9623828/corrected-us-nylon-october-exports-down-by-9.1-year-on-year-itc.html | CC-MAIN-2014-42 | refinedweb | 238 | 67.55 |
A key feature of PYLON is the ability to classify data with custom rules and then use this to greatly increase you analysis options.
You can use tags you've added to data in both your analysis query filters and as targets to be analyzed.
Analysis Query Filters
By adding tags.
Simple Tags
For example you could add the following tags to your interaction filter for your recording:
tag "BMW" { fb.parent.content contains_any "BMW" or fb.content contains_any "BMW" } tag "Honda" { fb.parent.content contains_any "Honda" or fb.content contains_any "Honda" } tag "Ford" { fb.parent.content contains_any "Ford" or fb.content contains_any "Ford" }
With these tags in place you can now subset data in your index by the automotive brand.
For example you could use the following filter to analyze just interactions that mention BMW:
interaction.tags == "BMW"
Or, you could filter to multiple brands:
interaction.tags IN "BMW,Honda"
If you're looking to analyze for example the top links shared by an audience, you now have the abillity to analyze this by brand:
{ "analysis_type": "freqDist", "filter": "interaction.tags == \"BMW\"", "parameters": { "target": "links.url", "threshold": 5 } }
Namespaced Tags
You can of course use namespaced tags in the same way. The advantage of namespaced tags is that they enable you to build large taxonomies of tags in an organised fashion that's easy to analyze.
Using namespaces let's expand our example tags:
tag.car.brand "BMW" { fb.parent.content contains_any "BMW" or fb.content contains_any "BMW" } tag.car.brand "Ford" { fb.parent.content contains_any "Ford" or fb.content contains_any "Ford" } tag.car.ford "E-150" { fb.parent.content contains_any "E150,E 150,E-150" OR fb.content contains_any "E150,E 150,E-150" } tag.car.ford "E-350" { fb.parent.content contains_any "E350,E 350,E-350" OR fb.content contains_any "E350,E 350,E-350" } tag.car.bmw "3 series" { fb.parent.content contains_any "3 series, 3-series, 3series" OR fb.content contains_any "3 series, 3-series, 3series" } tag.car.bmw "5 series" { fb.parent.content contains_any "5 series, 5-series, 5series" OR fb.content contains_any "5 series, 5-series, 5series" }
With these tags in place you could filter by brand:
interaction.tag_tree.car.brand == "BMW"
Or by model:
interaction.tag_tree.car.bmw == "3 series"
Or by combinations of both:
interaction.tag_tree.car.brand == "BMW" AND interaction.tag_tree.car.bmw == "3 series"
Note that the interaction.tag_tree target is case-sensitive. So in the above example "bmw" would not match any tags.
It's important to note that an interaction filter can include up to 10,000 tag or scoring rules, including from those you have included using the stream keyword.
Analysis Targets
By adding tags to data you can perform frequency distribution analysis on the tagged data.
When you submit an analysis query you can specify tags as your analysis target. You can do so for both simple and namespaced tags.
Simple Tags
When analyzing results of simple tags you use the interaction.tags target. Analyzing this target gives you a count of the number of interactions for each tag in the set of data you're analyzing.
If you applied the simple tags example above you could analyze the spread of brands across data in your index using this query:
{ "analysis_type": "freqDist", "parameters": { "target": "interaction.tags", "threshold": 3 } }
Namespaced Tags
When analyzing results of namespaced tag rules you make use of the interaction.tag_tree target.
If you applied the namespaced tags example above you could analyze the spread of brands like so:
{ "analysis_type": "freqDist", "parameters": { "target": "interaction.tag_tree.car.brand", "threshold": 3 } }
Notice here you specify the level of tags which contains the 'leaves' in your tag structure that you'd like to analyze. You cannot specify a level which contains sub namespaces, so you can only analyze one group of tags at a time.
Of course you could add a filter based upon your tags too. For example to analyze the top models in the BMW brand:
{ "analysis_type": "freqDist", "parameters": "interaction.tag_tree.car.brand == \"BMW\" " } | http://dev.datasift.com/docs/products/pylon-fbtd/howto/examples/analysis-queries/analyzing-tagged-data | CC-MAIN-2017-09 | refinedweb | 671 | 52.15 |
On Mon, 2006-02-27 at 03:34 -0500, Shailabh Nagar wrote:> Arjan van de Ven wrote:> > >>+/*> >>+ * timespec_diff_ns - Return difference of two timestamps in nanoseconds> >>+ * In the rare case of @end being earlier than @start, return zero> >>+ */> >>+static inline nsec_t timespec_diff_ns(struct timespec *start, struct timespec *end)> >>+{> >>+ nsec_t ret;> >>+> >>+ ret = (nsec_t)(end->tv_sec - start->tv_sec)*NSEC_PER_SEC;> >>+ ret += (nsec_t)(end->tv_nsec - start->tv_nsec);> >>+ if (ret < 0)> >>+ return 0;> >>+ return ret;> >>+}> >> #endif /* __KERNEL__ */> >> > >> > >>> >> >wouldn't it be more useful to have this return a timespec as well, and> >then it'd be generically useful (and it also probably should then be> >uninlined ;)> > > >> Return another timespec to store the difference of two input timespecs ? > Would that be useful ?> Didn't quite get it.the API is a bit crooked right now; you have 2 timespecs as a measure oftime, and you return a long as diff, rather than another timespec.How do you know the nsec_t doesn't overflow ??? I suspect the answer is"you don't". timespec's are a way to deal with that nicely. And it makesthe API more symmetric as well-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at | http://lkml.org/lkml/2006/2/27/64 | CC-MAIN-2017-17 | refinedweb | 210 | 59.03 |
Custom message including numpy arrays
Hi!
I want to publish a custom message including some numpy arrays using python.
I am aware of the numpy_msg and all it's examples, they do however only cover publishing just one array.
My example message is:
header int16[] array1 int16[] array2
My publisher looks approximatly like this:
import rospy from std_msgs.msg import String from test_publisher.msg import test_msg from rospy.numpy_msg import numpy_msg import numpy as np def talker(): pub = rospy.Publisher('chatter', numpy_msg(test_msg), queue_size=10) rospy.init_node('talker', anonymous=True) rate = rospy.Rate(1) # while not rospy.is_shutdown(): my_msg = test_msg() my_msg.array_1 = np.ones([1,4],dtype=np.int16) my_msg.array_2 = np.ones([4,4],dtype=np.int16) pub.publish(my_msg) rate.sleep() if __name__ == '__main__': try: talker() except rospy.ROSInterruptException: pass
When i run this code and enable a listener, the following exception occurs:
rospy.exceptions.ROSSerializationException: field array1 must be a list or tuple type
If i just define one array in the message and directly publish that, it works however.
Can anyone provide me with some insight?
numpyarrays are not natively supported in ROS messages by default. I'm not entirely sure, but I believe that is where the error is coming from.
You could take a look at eric-wieser/ros_numpy.
And perhaps this tutorial on the ROS wiki: Using numpy with rospy.
I wonder if the issue is not that you have two arrays, but that you are trying to feed 2-dimensional arrays into the message?
Unfortunatly the tutorial only covers converting the whole array into one message and I am not able to do the same when the message contains other fields as well
As far as I can see, the library supplied by wieser converts numpy arrays into other known message types, none of them usable for my purpose.
I have not yet tried using 1D arrays, but I will try that in the morning. | https://answers.ros.org/question/289557/custom-message-including-numpy-arrays/ | CC-MAIN-2021-21 | refinedweb | 323 | 58.28 |
Sliders in pythonista
How do i change the values of a variable in pythonista using a slider
@Holm little example to change the value of the variable x
import ui x = 0 x_max = 100 def s_action(sender): x = x_max*sender.value sender.name = 'x='+str(int(x)) s = ui.Slider() s.action = s_action s.width = 200 s.name = 'x='+str(int(x)) s.present('sheet')
- struct_engr_
This post is deleted!last edited by struct_engr_ screenshot.jpeg
I built an app and am running this app on my iPad.
Credit to:
- [toggle the sliders]
- mikaelho / pythonista – gestures
- [sliders with labels]
- tdamdouni / Pythonista
- Pythonista / slider / SliderWithLabel_danrcook.py
The video shows an app with 8 inputs — 2 integer, 4 float, 2 boolean. Numeric input is tedious — at best; slider input is easier and offers positive feedback. In this app, both approaches are used:
Efficient sliders for user input
Only clear text for user screenshots
I enhanced the slider class to allow min values, floats, rounded floats, booleans. Rounded floats allow only multiples of say 0.5 [0.0, 0.5, 1.0, 1.5 ... ] or of 5 [0, 5, 10, 15 ...].
Here is the code gist:
khoitsma/augmented_slider.py
I’ve completed my new app for iPhone and iPad.
- AISC Bolt Shear Capacity based on AISC 15th Edition
- (American Institute of Steel Construction)
Thanks to all.
aisc-bolt-shear-capacity-pythonista
It would be cool if you could make this a GitHub repo instead of a GitHub gist. That would allow others to suggest improvements in the form or pull requests. For instance:
self.s_type = kwargs['s_type'] if 's_type' in kwargs else 'int' # could be rewritten as self.s_type = kwargs.get('s_type', 'int')
Thanks -- done. | https://forum.omz-software.com/topic/4999/sliders-in-pythonista/1 | CC-MAIN-2020-10 | refinedweb | 282 | 60.11 |
This is the namespace declarations I am talking about.
(sample Document View)
<aa:ents jcr:primaryType="aa:Ents" xmlns:jcr=""
xmlns:
<aa:ent jcr:
...
...
</aa:ent>
..
..
</aa:ents>
If I try to import this above XML without any any of the xmlns definitions
under aa:ents, I get an exception saying "prefix aa mapping not found". But
the thing is that the mappings are all present. If I try to add a mapping
for aa programmatically, it gives an error saying "mapping already present".
It's just that when the import happens, the mappings are "not seen" by the
method, and I need to specify them explicitly again here...which I don't
want to do.
That's the problem I am facing. Any suggestions?
Thanks,
Sridhar
On 11/10/06, Stefan Guggisberg <stefan.guggisberg@gmail.com> wrote:
>
> On 11/9/06, Sridhar Raman <sridhar.raman@gmail.com> wrote:
> > I made my Document View, and did the importing, and assigned the
> references
> > programmatically. That works fine.
> >
> > But I have another small problem - concerning namespaces.
> >
> > This is the structure of the nodes:
> > CON
> > |---ENTS
> > |---DIM
> >
> > There can be many DIM nodes, and all should be under the main node of
> > /CON/ENTS. So I create an XML document in Document View format, that
> will be
> > imported.
> >
> > The first time, I created a file in this style:
> > <ents ...>
> > <dim/>
> > <dim/>
> > <dim/>
> > <dim/>
> > </ents>
> >
> > And I did the importXML("/con", ...) using this method call. It worked
> fine,
> > with just a small hitch. I had to define all the namespaces again as
> part of
> > the <ent> element. Else, the import kept throwing unrecognised prefix
> > exceptions. I did that and it worked.
>
> i don't see any namespaces being used in your example. what namespaces
> are you talking of. can you provide a small sample xml file?
>
> >
> > Now I want to do a second set of import to this same workspace. And I
> > created the Document View in the same style as earlier. The import gives
> me
> > something like this.
> >
> > /con/ent/dim/...
> > /con/ent/dim1/...
> > /con/ent/dim2/...
> > /con/ent/dim3/...
> > /con/ent/dim4/...
> > ..[DATA from first import]
> > ..
> > /con/ent2/dim/...
> > /con/ent2/dim1/...
> > /con/ent2/dim2/...
> > /con/ent2/dim3/...
> > ..[DATA from second import]
> > ..
> >
> > What should be done to get the new list of <dim> nodes to go under the
> > earlier <ent>?
>
> i don't think that this can be done by using the jcr importXML methods.
> you'll either have to do it programmatically or write a custom SAX
> ContentHandler.
>
> > I tried this kind of Document View
> > <dim>
> > <dim1>
> > <dim2>
> > ...
> > and importing using the call importXML("/con/ent/", ..)
> >
> > This gives 2 problems:
> > 1) it's not well formed
> > 2) namespaces not found
>
> again, what namespaces are you talking of?
>
> cheers
> stefan
>
> >
> > What do I do?? Is there any solution to this?
> > I would be glad if you could help me with this.
> >
> > Thanks in advance,
> > Sridhar
> >
> >
> | http://mail-archives.apache.org/mod_mbox/jackrabbit-users/200611.mbox/%3C227621ad0611102129s21c6df55q3635aec2f75bd9eb@mail.gmail.com%3E | CC-MAIN-2016-26 | refinedweb | 483 | 77.33 |
use utf8;
package Net::Etcd::KV::Txn;

use strict;
use warnings;
use Moo;
use Types::Standard qw(InstanceOf Str Int Bool HashRef ArrayRef);
use MIME::Base64;
use Data::Dumper;
use JSON;

with 'Net::Etcd::Role::Actions';

use namespace::clean;

=head1 NAME

Net::Etcd::KV::Txn

=cut

our $VERSION = '0.022';

=head1 DESCRIPTION

Txn processes multiple requests in a single transaction. A txn request
increments the revision of the key-value store and generates events with the
same revision for every completed request. It is not allowed to modify the
same key several times within one txn.

From the Google paxosdb paper: Our implementation hinges around a powerful
primitive which we call MultiOp. All other database operations except for
iteration are implemented as a single call to MultiOp. A MultiOp is applied
atomically and consists of three components:

1. A list of tests called guard. Each test in guard checks a single entry in
the database. It may check for the absence or presence of a value, or compare
with a given value. Two different tests in the guard may apply to the same or
different entries in the database. All tests in the guard are applied and
MultiOp commits if all tests are true. Otherwise MultiOp does not commit.

2. A list of database operations called t_op. Each operation in the list is
either an insert, delete, or lookup operation, and applies to a single
database entry. Two different operations in the list may apply to the same or
different entries in the database. These operations are executed if guard
evaluates to true.

3. A list of database operations called f_op. Like t_op, but executed if
guard evaluates to false.

=head1 ACCESSORS

=head2 endpoint

/v3alpha/kv/txn

=cut

has endpoint => (
    is      => 'ro',
    isa     => Str,
    default => '/kv/txn'
);

=head2 compare

compare is a list of predicates representing a conjunction of terms. If the
comparisons succeed, then the success requests will be processed in order, and
the response will contain their respective responses in order. If the
comparisons fail, then the failure requests will be processed in order, and
the response will contain their respective responses in order.

=cut

has compare => (
    is       => 'ro',
    isa      => ArrayRef,
    required => 1,
);

=head2 success

success is a list of requests which will be applied when compare evaluates to
true.

=cut

has success => (
    is  => 'ro',
    isa => ArrayRef,
);

=head2 failure

failure is a list of requests which will be applied when compare evaluates to
false.

=cut

has failure => (
    is  => 'ro',
    isa => ArrayRef,
);

=head1 PUBLIC METHODS

=head2 create

create txn

=cut

#TODO hack alert
sub create {
    my $self    = shift;
    my $compare = $self->compare;
    my $success = $self->success;
    my $failure = $self->failure;

    my $txn = '"compare":[' . join( ',', @$compare ) . '],';
    $txn .= '"success":[' . join( ',', @$success ) . ']' if defined $success;
    $txn .= ',' if defined $success and defined $failure;
    $txn .= '"failure":[ ' . join( ',', @$failure ) . ']' if defined $failure;

    $self->{json_args} = '{' . $txn . '}';

    # print STDERR Dumper($self);
    $self->request;
    return $self;
}

1;
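Since `create` builds the request body by hand-concatenating pre-serialized JSON fragments (hence the `#TODO hack alert`), the assembly logic can be exercised on its own without a running etcd. The fragment values below are hypothetical examples (etcd v3 expects base64-encoded keys and values, e.g. `Zm9v` for `foo`); this is a minimal standalone sketch of what `create` does with the `compare`, `success`, and `failure` accessors:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical pre-serialized fragments, as the accessors would hold them.
my $compare = ['{"key":"Zm9v","target":"VALUE","value":"YmFy","result":"EQUAL"}'];
my $success = ['{"request_put":{"key":"Zm9v","value":"b2s="}}'];
my $failure;    # undef, like an unset 'failure' accessor

# Same assembly steps as in create():
my $txn = '"compare":[' . join( ',', @$compare ) . '],';
$txn .= '"success":[' . join( ',', @$success ) . ']' if defined $success;
$txn .= ',' if defined $success and defined $failure;    # comma only between both lists
$txn .= '"failure":[ ' . join( ',', @$failure ) . ']' if defined $failure;

my $json_args = '{' . $txn . '}';
print $json_args, "\n";
```

Note how the middle `$txn .= ','` line is what keeps the payload valid JSON: the separating comma is emitted only when both a success and a failure branch are present.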
Daniel Pope wrote a great blog post describing the different ways of performing asynchronous I/O in Python. In this post, I want to focus on his section called “Generator-based Coroutine”. Python’s
asyncio module in the standard library has a concept of “coroutines” that uses generators instead of callbacks or promises seen in other asynchronous frameworks.
Whenever an I/O call is made, you can prepend the statement with yield from and, as long as the function is decorated with asyncio.coroutine, asyncio will handle the call asynchronously. This method doesn't fully fit the coroutine model, though, since generators can only yield to the caller frame; it is therefore called a semicoroutine.
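A quick illustration of why this only counts as a semicoroutine: a plain generator can only transfer control between itself and its immediate caller, even when delegating with yield from. This sketch uses ordinary generators, no asyncio involved:

```python
def child():
    yield "from child"

def parent():
    yield "from parent"
    # Delegation: values from child() pass through parent to parent's caller,
    # but each resume still travels one frame at a time
    yield from child()

gen = parent()
print(next(gen))  # from parent
print(next(gen))  # from child
```

An event loop like asyncio's plays the role of the caller here, resuming each generator whenever its awaited I/O completes.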
Here is an example of its usage
import asyncio

@asyncio.coroutine
def think(duration):
    print("Starting to think for " + str(duration) + " seconds...")
    yield from asyncio.sleep(duration)
    print("Finished thinking for " + str(duration) + " seconds...")

loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.gather(
    think(5),
    think(2)
))
loop.close()
Output:
Starting to think for 5 seconds...
Starting to think for 2 seconds...
Finished thinking for 2 seconds...
Finished thinking for 5 seconds...
Newer Syntax
Now, the decorator syntax has been deprecated since Python 3.8 in favor of the more common async / await syntax introduced in Python 3.5. The reason I showed the previous version is that I think it's important to understand how this module is implemented behind the scenes. Also, since Python 3.6 code is likely still floating around, you might encounter code bases like the above in your day to day.
Here is the equivalent code written in modern day syntax
import asyncio

async def think(duration):
    print("Starting to think for " + str(duration) + " seconds...")
    await asyncio.sleep(duration)
    print("Finished thinking for " + str(duration) + " seconds...")

async def main():
    await asyncio.gather(
        think(2),
        think(5)
    )

asyncio.run(main())
Not that much shorter in terms of lines of code, but it allows the developer to not have to be concerned about the event loop or generators. | https://brandonrozek.com/blog/pyasyncio/ | CC-MAIN-2020-24 | refinedweb | 339 | 59.7 |
Myco::Devel - myco Developer's Guide.
This guide is intended for developers wanting to build applications with myco. You should have a decent grasp of the Perl programming language, or else a solid grasp of another programming language (C, PHP, etc.). Familiarity with Object Oriented Programming (OOP) techniques and test-first methodology of developing programs (such as outlined in the "eXtreme Programming" method) also go a long way toward writing sound applications, and making the best use of features offered in myco.
Our goal in this manual is to write and run a small application based on the myco framework.
Most likely, you will also be functioning as your own sysadmin. If so, please consult the Myco System Administration Guide for how-tos on installing Perl, PostgreSQL, module dependencies, deploying the database, etc. This document will repeat some of the details from the Admin guide along the way.
Also note the assumption running through this guide that you're working on some variant of Unix or Linux. This is just to Keep It Simple Stupid. Nothing would thrill us more than to see widespread Windows deployments of myco. Please hit the mailing list or the myco blog () if you are attempting such a thing and run into trouble.
The simplest way to get started, after completing installation and initial configuration of myco, is to utilize myco-mkentity to create a new Myco entity class and its companion test class. Depending on how you like to structure your module files and what testing framework you like to use (myco-mkentity and myco-testrun currently use Test::Class with Test::Unit), this may not suit you. But for this guide, it'll have to do :)
First,
be sure that you've set a couple environment variables.
Assuming you've downloaded and untarred/unzipped the myco distribution into your home directory and renamed it just 'myco',
in
sh or
bash:
export MYCO_ROOT=/usr/home/yourhomedir/myco
In
csh or
tcsh:
setenv MYCO_ROOT /usr/home/yourhomedir/myco
Put it in your .bashrc or .cshrc for permanence, if you like. Now navigate there:
cd $MYCO_ROOT
Now, after contemplating the object you'd like to model in your class and the name you want to give to it, run
myco-mkentity:
./bin/myco-mkentity Myco::App::Guitar
Though you can name your class anything, a good place to start is to park it within the Myco perl namespace, making use of the 'App' area. This has been historically used as a collection point or sandbox for developing myco applications. Anyway, using
myco-mkentity requires you to do it this way.
You can now poke around your new class file:
vi lib/Myco/App/Guitar.pm
and your companion test class file:
vi test/Myco/App/Guitar/Test.pm
Once you're satisfied its all there, give the test a whirl!
% ./bin/myco-testrun Myco::App::Guitar::Test
......
Time: 0 wallclock secs ( 0.01 usr + 0.01 sys = 0.02 CPU)
By default, your new test class will not test for persistence behavior:
skip_persistence => 1 # in the %test_parameters hash of your test class
This is desirable, since it's entirely possible that you want to simply use the myco framework to write classes to work in-memory only, and not persist as objects in a database. In this case, you'd proceed to write your code, but all attributes would be of a transient nature. But in most cases - such as now - you'll want to utilize persistence. So turn persistence testing on:
skip_persistence => 0
and run the test again. It should crash and burn, ending like this:
!!!FAILURES!!!
Test Results: Run: 6, Failures: 0, Errors: 3
So, we now want to configure your class in the myco framework to be persistent, so that these six initial persistence tests will pass.
The Guitar.pm module file generated by myco-mkentity provides two dummy attributes (fooattrib and barattrib) to get persistence started. This should suffice to prove that persistence will work. One thing you might want to do before redeploying the database is to specify your own DB table name. In the Myco::Entity::Meta object creation near the top of the class, set the database table name:
tangram => { table => 'guitar', }
You can leave it commented out, too - table name will be generated automatically for your class when we myco-recycle the database, as long as you add your class' name to the SCHEMA_ENTITY_CLASSES array in the schema hash in myco.conf:
SCHEMA_ENTITY_CLASSES => [ qw( Myco::App::Guitar ) ],
If you ran the installation tests, myco.conf will have been created for you based on your responses to questions about your environment. If not, copy the file myco.conf-dist to myco.conf and flesh it out. Regardless you need to add the name of your entity class yourself at this point.
Now myco-deploy the new class to the database...
./bin/myco-deploy
...looking for output indicating that your 'guitar' table was created...
guitar deployed
Schema Deployed
...and run the test again:
./bin/myco-testrun Myco::App::Guitar::Test
All six basic entity tests should have passed. If you're suspicious that something should have failed, then you must be a test-first coder! Seriously, testing is good to do in parallel with (or, better, prior to) writing your code. But myco's testing framework utilizes inheritance and other OO virtues to automate all the repetitious object persistence and entity class testing you'd normally have to do for each case. This means that, when you just want to model, in our case, a basic guitar and its attributes, you really don't have to write test code for it - it's built into the framework!
Now let's flesh out Guitar.pm. First we'll replace the stock attributes with ones more guitar-ish. Try these on for size:
$md->add_attribute(
    name         => 'make',
    type         => 'int',
    values       => [0..3],
    value_labels => { 0 => 'Gibson',
                      1 => 'Fender',
                      2 => 'Paul Reed Smith',
                      3 => 'Ibanez', },
);

$md->add_attribute(
    name => 'model',
    type => 'string',
);
Simple, right? Here we're outlining the make/model with a Tangram integer and string data type, respectively. Here's one more:
$md->add_attribute(
    name            => 'strings',
    type            => 'flat_array',
    tangram_options => { table => 'guitar_strings', },
);
The last one may seem silly, since most guitars have six strings, but let's not forget about the guitar's poor cousin - the bass guitar, or various bastardizations like the seven-string, twelve-string or 'Chapman Stick' :)
There are many ways to model your attributes (TIMTOWTDI), including using sets of objects, etc. Myco is tightly bound to the Tangram data mapping framework, so it's best to consult its documentation for more info. See Tangram::Type for more on the data types available for persistification. Here we're modeling strings as a perl array.
Now get rid of any references to those dummy attributes, 'fooattrib' and 'barattrib' in Guitar.pm and its Test.pm file:
In Line 70 of your Test.pm change this:
simple_accessor => 'fooattrib',
to this:
simple_accessor => 'make',
As the comment above says,
simple_accessor is "A scalar attribute that can be used for testing". You can further flesh out the
%test_parameters hash to have the test framework automatically run your new attributes through the gauntlet. This can be very useful (even necessary) for objects that have required attributes using complex data types. But that's not the case with our current example.
Since our data schema has changed, let's myco-recycle the database!
./bin/myco-recycle
Again you'll see output to the effect that your guitar table was redeployed.
Now, let's write a simple perl script to build our very own guitar!
#!/usr/bin/perl -w

use strict;
use Myco;

# Get database connection parameters from myco.conf - very handy!
use Myco::Config qw(:database);

Myco->db_connect(DB_DSN, DB_USER, DB_PASSWORD);

# Make it a Fender!
my $guitar = Myco::App::Guitar->new( make => 1 );

print "Its a guitar!\n" if ref $guitar eq 'Myco::App::Guitar';

$guitar->set_make(0);    # Changed my mind, now its a Gibson
$guitar->set_model('Les Paul');

my @strings = qw(B E A D G B E);    # Seven strings - rare!
$guitar->set_strings( \@strings );

my $id = $guitar->save;
print "The Tangram OID for your new guitar is: $id\n" if $id;

# Do a myco/tangram query
my $guitar_ = Myco->remote('Myco::App::Guitar');
my @results = Myco->select( $guitar_, $guitar_->{model} eq 'Les Paul' );

print "Guitar was saved and selected!\n" if $results[0]->id == $id;

$guitar->destroy;
Myco->db_disconnect;
One extremely sexy feature of myco is the ability to model (and store persistently) the behavior of a Tangram query object. This is accomplished by specifying as metadata the information you'd normally use to write a Tangram query in raw perl code - things such as attribute names, remote objects, boolean operators used to join clauses of the query together, and even the various Tangram::Expr methods used for the different Tangram data types.
You saw how our query was done in just two lines in the above example. Not much need to elaborate on that. However, when writing queries that involve many and more complex attributes such as object sets, optional filter clauses, etc., these queries can quickly become monstrous. Building an abstract query specification as a Myco::Entity::Meta::Query object is the best way to save yourself a lot of coding later on in your application.
So, let's rewrite our above example in a slightly different way (just to prove TMTOWTDI), while also replacing our simple two line query with one that can do a little more, but this time as a query spec in Guitar.pm.
Find the commented section - "Query Specifications" - near the top of Guitar.pm. Let's build our query spec here:
my $queries = sub {
    my $md = shift;    # Metadata object

    $md->add_query(
        name          => 'Test Guitar Query',
        remotes       => { '$guitar_' => 'Myco::App::Guitar', },
        result_remote => '$guitar_',
        params => {
            param_make   => [ qw($guitar_ make) ],
            param_model  => [ qw($guitar_ model 1) ],
            param_string => [ qw($guitar_ strings 1) ],
        },
        filter => {
            parts => [
                { remote => '$guitar_',
                  attr   => 'make',
                  oper   => 'eq',
                  param  => 'param_make',
                  part_join_oper => '&',
                },
                { remote => '$guitar_',
                  attr   => 'model',
                  oper   => 'eq',
                  param  => 'param_model',
                  part_join_oper => '&',
                },
                { remote => '$guitar_',
                  attr   => 'strings',
                  oper   => 'includes',
                  param  => 'param_string',
                },
            ],
        },
    );
};
We specify our query inside an anonymous subroutine. This is so we can create as many for our class as we like, and so it can more easily be passed to the Myco::Entity::Meta method,
activate_class. While we're at it, let's do that. Find the method call at the bottom of Guitar.pm:
$md->activate_class( queries => $queries );
For a full account of this structure, see Myco::QueryTemplate. A couple of things to note in passing are the params hash, which specifies the remote object containing the attribute, the actual attribute name, as well as a boolean flag to indicate that a param is optional. So, in this query, only the first param is required. The params hash is keyed by the attribute alias we'll use when it comes time to actually run the query and pass in the params. To illustrate that you can use any descriptive alias you like, I've prepended each hash key in params with a param_. You could've also called these three params 'foo_1', 'foo_2', and 'foo_3', though that would be a bit obscure :) Most often you'd just key this hash with the actual attribute names. Just remember that only the hash values in the array will be used to construct the query.
Now let's rewrite our script:
#!/usr/bin/perl -w

use strict;
use Myco;
use Myco::Config qw(:database);

Myco->db_connect(DB_DSN, DB_USER, DB_PASSWORD);

my $guitar = Myco::App::Guitar->new( make    => 0,
                                     model   => 'Stratocaster',
                                     strings => [qw(E A B C D E F G)] );

my $id = $guitar->save;
print "The Tangram OID for your new guitar is: $id\n" if $id;

# Let's dig into the metadata to get our query
my $class_metadata = Myco::App::Guitar->introspect;
my $queries = $class_metadata->get_queries;
my $guitar_query = $queries->{'Test Guitar Query'};

my $its_a_myco_query = ref $guitar_query eq 'Myco::Entity::Meta::Query';
print "Its a query!\n" if $its_a_myco_query;

my %run_params = ( param_make   => '1',
                   param_model  => 'Stratocaster',
                   param_string => 'B' );

my @results = $guitar_query->run_query( %run_params );

print "Guitar was saved and selected!\n" if $results[0]->id == $id;
Pretty cool! When you're just starting out doing Tangram queries, a method that you might find helpful is
get_filter_string in Myco::QueryTemplate.
For instance, this...
print $guitar_query->get_filter_string( \%params );
...should yield this:
$guitar_->{make} == $params{param_make} &
$guitar_->{model} eq $params{param_model} &
$guitar_->{strings}->includes($params{param_string})
For another working query example, see the sample entity included with the myco base distribution.
There's a ton more you can do with myco, though this guide should provide you with a good start. Cheers, and let us know how you like myco!
Ben Sommer <ben@mycohq.com> | http://search.cpan.org/~sommerb/Myco-1.22/lib/Myco/Devel.pod | CC-MAIN-2017-13 | refinedweb | 2,116 | 61.97 |
Source generators and a boilerplate code
Source generators were first introduced in C# 9.0 in Spring 2020 as a new compiler feature that lets developers generate new C# source code that can be added to a compilation. Using it you can inspect user code as the compilation runs. It is a powerful developer tool that can augment your code, starting from generating custom serializations and ending with generated fast dependency injection containers.
Code refactoring is one of the processes developers use when maintaining applications to minimize technical debt, refresh the libraries used, or maintain code readability. The main drawback of boilerplate code is that it obscures the business logic: instead of analyzing the functionality alone, programmers also need to find it among the surrounding code.
Here I'd like to introduce 3 libraries written by myself that are based on the Source Generators feature: SourceMapper, SourceConfig and SourceApi, all aimed at decreasing boilerplate code in a solution. The idea of each of the packages is to autogenerate code with some functionality that can then be used in your code.
SourceMapper.
During my work on different cloud-native applications with various tech stacks, I've paid attention to Java's widely used mapping library MapStruct, where developers define mappings using Java annotations (in C# we call them attributes). Using Source Generators we can implement a mapper that generates mappings based on C# attributes, actually during coding. It can be installed using the Nuget Package Manager:
Install-Package Compentio.SourceMapper
For example, the mapping of a UserDao to a UserInfo object can be defined in an interface, where ClassName defines the name of the target mapper class that is generated.
The destination code for the UserMapper class is then generated and can be used in the solution.
Dependency injection extension code for various .NET containers is also generated, based on the Dependency Injection container you've been using in the project. In this way you can inject mappers into your services, controllers or repositories and keep Domain, DTO and DAO transformations clean in the solution.
More about using, extending can be found on GitHub.
SourceConfig.
The SourceConfig package uses the Source Generators Additional File Transformation feature to generate additional code. The idea of this package is simple: instead of creating POCO classes for configuration in your project, the package generates these objects for you. First let's install it:
Install-Package Compentio.SourceConfig
After that, even if you have a few configs for different environments, they are merged into one class. Let's assume that you have an appsettings.json configuration file, and appsettings.development.json for the development environment.
All you need is to add them as C# analyzer additional files, either in Visual Studio or in your *.csproj file.
This will generate an object for your configuration, and you do not need to stay in sync when new configuration properties are added to the files: they will appear in your class automatically!
Source code and more, of course, on SourceConfig GitHub.
SourceApi.
There are two approaches to implementing a Web API: code first, which most developers prefer (create Web API controllers and DTOs, add Swagger UI, and that's all!), and API first, where the API needs to be designed or discussed and only after that do we start to implement it. In distributed systems with various technologies and consumers of our API, it is good to have language-agnostic tools to agree on and share the API between the consumers. The Open API Specification has been created for that:
The OpenAPI Specification (OAS) defines a standard, language-agnostic interface to RESTful APIs which allows both humans and computers to discover and understand the capabilities of the service without access to source code, documentation, or through network traffic inspection.
The third library, which I called SourceApi, was created for the API-first approach: you (or your team) define the API in yaml or json format and add it to the Web API project, and the package generates controller base classes with DTOs and documentation for your API. All you need is to implement the logic in the controllers. You don't even need to create the DTOs, since they are already generated.
For example, from an Open API definition file, a StoreControllerBase class is generated
and can be used as a base abstract class for your Web API controller. You only need to concentrate on the implementation logic instead of defining response codes, keeping DTOs in sync, etc. When you change the Open API definition file, the base abstract class and DTOs are refreshed.
Some configuration properties are also added to the package: you can define the namespace of your base controllers, or you can generate only DTOs in the case when you are a consumer of some REST API.
Source code and documentation can be found on SourceApi GitHub. | https://aleksander-parkhomenko.medium.com/source-generators-and-a-boilerplate-code-faa1695f546 | CC-MAIN-2022-21 | refinedweb | 770 | 50.36 |
Details
Description
The script:
number = 27 println number.toHexString ( )
results in the message:
Caught: groovy.lang.MissingMethodException: No signature of method: java.lang.Integer.toHexString() is applicable for argument types: () values: []
Possible solutions: toHexString(int), toString(), toString(), toString(), toString(int), toString(int, int)
groovy.lang.MissingMethodException: No signature of method: java.lang.Integer.toHexString() is applicable for argument types: () values: []
Possible solutions: toHexString(int), toString(), toString(), toString(), toString(int), toString(int, int)
at toHexString_Problem.run(toHexString_Problem.groovy:5)
The reason is that toHexString is defined as a static method on the types and not within the GroovyJDK. I believe that it is more usable for prograqmmers, more appropriate to a dynamic language, and generally more Groovy to allow the method to operate on a reference to an object and only raise an exception if the object is not of an appropriate type.
Activity
To fit the requested need, it would be enough extending the Number class with the non static method toHexString() (maybe as well as toBinaryString() and toOctalString()). Having this method will make number to formatted string conversion groovier. Supporting Pauls statement: This is an Improvement rather than a Bug.
I changed the title and the type to reflect that this is an improvement.
I agree that the original title wasn't accurate but there are two possible interpretations to Russel's request. The first is that we add support for one (or a handful) of methods (toHexString, toOctalString etc.). The other interpretation (and the one I believe Russel intended) is that we provide "automatic DGM" support for any static method defined on any class where the class is the method's first argument, i.e. auto provide the appropriate instance method. The toHexString is just an example - and as it turns out an interesting one in that it would require automatic handling/detection for the primitive versions of the wrapper classes.
So a better title would be something like "Automatically provide instance methods on objects when another class has a static method with the object as first argument"?
Something like that. Just some more background: we already have "use". So if it wasn't for the int vs Integer difference in Russel's example we could do something like:
def number = 27 use (Integer) { println number.toHexString() }
If we want to allow some cases to not need the "use", which ones would we do automatically? Just static methods in the runtime class of the object (with appropriate first parameter)? Also in it's super classes? Any arbitrary class on the classpath? The last one might be way too prohibitively slow.
I am against doing anything like making instance methods out of static methods that happen to be in the form of a category method automatically from any unrelated class. I am against that mainly, because a category method can shadow an implementation.
If anything, then only if the static method declared in X has X as first parameter and is category wise applied on X. For Y extends X the method would then be visible as well.
But to have an overview of the scope of this change it would be good to know how common this case is... for example in the JDK. Otherwise it might be better to just add the methods manually. I think in case of Integer#toHexString(int) this was made only in that way, because otherwise you would have to box from int to Integer. And if it is really only such cases, then adding them as DGM is more than enough.
Not sure it's a bug since I don't think groovy ever claimed to support this feature. Can you elaborate a bit more on what you are proposing? Are you suggesting that Groovy should assume for an instance of some class, that any static method that happens to have the class type as its first parameter should be added as an instance "meta-method" for the class? I.e. "static toHexString(Integer param)" and possibly "static toHexString(int param)" would be treated as instance methods on any Integer but "static parseInt(String param)" would not because while being a static method in the Integer class doesn't have Integer as its first param? | http://jira.codehaus.org/browse/GROOVY-5399?focusedCommentId=296065&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel | CC-MAIN-2013-48 | refinedweb | 704 | 62.68 |
Closed Bug 1380585 Opened 3 years ago Closed 3 years ago
Come out with an approach to pref on/off the Photon visual refresh
Categories
(Firefox :: Preferences, enhancement, P2)
Tracking
()
Firefox 56
People
(Reporter: rickychien, Assigned: jaws)
References
(Blocks 1 open bug)
Details
(Whiteboard: [photon-preference])
Attachments
(1 file)
Discussion thread has started from. We should figure out a way to be able to pref on/off of Photon visual refresh change whatever using run-time or build-time approach.
In reply to bug 1357306 comment 13, how about if we land the patch that I pushed to that bug, but instead of enabling this for Nightly builds we keep it disabled. Then developers can enable it on their machines but official Firefox 56 Nightly builds will not have it. Does that sound better?
Flags: needinfo?(rchien)
Yeah! Sounds great to me. Once we got Fx56 QA sign-off, we can enable the build flag in Nightly before the merge day. Please submit your patch here and let's land it! Thanks!
Flags: needinfo?(rchien)
Flags: needinfo?(jaws)
Flags: qe-verify-
Whiteboard: [photon-preference][triage] → [photon-preference]
Flags: needinfo?(jaws)
The patch I'm attaching enables the pref on Nightly. We can use Cedar[1] for testing the non-57 code. Cedar has been in use to see what Firefox 56 will be, which will have the various photon build flags disabled. [1]
Assignee: nobody → jaws
Status: NEW → ASSIGNED
Target Milestone: Firefox 57 → Firefox 56
When this lands, what will we need to do to get the pref disabled on Cedar?
Flags: needinfo?(mconley)
Comment on attachment 8887648 [details] Bug 1380585 - Add MOZ_PHOTON_PREFERENCES build-time flag to help with implementing visual refresh of preferences.
Attachment #8887648 - Flags: review?(cmanchester) → review+
Comment on attachment 8887648 [details] Bug 1380585 - Add MOZ_PHOTON_PREFERENCES build-time flag to help with implementing visual refresh of preferences. Looks great! And then we can add an ifdef macro within the css file to pref on the visual feature.
```
%ifdef MOZ_PHOTON_PREFERENCES
...
%endif
```
Attachment #8887648 - Flags: review?(rchien) → review+
Yes, that is correct.
Pushed by jwein@mozilla.com: Add MOZ_PHOTON_PREFERENCES build-time flag to help with implementing visual refresh of preferences. r=chmanchester,rickychien
Status: ASSIGNED → RESOLVED
Closed: 3 years ago
status-firefox56: --- → fixed
Resolution: --- → FIXED
Jared, how do you enable MOZ_PHOTON_PREFERENCES flag from mozconfig? Does it support artifact build?
Flags: needinfo?(jaws)
The flag is enabled by default on local and Nightly builds. If you want to test without it, open /toolkit/moz.configure and set MOZ_PHOTON_PREFERENCES to default to False, like is done here for MOZ_PHOTON_ANIMATIONS and MOZ_PHOTON_THEME: QA will need to use builds from Cedar to test, as builds from Nightly will not actually be what ships in 56 because Nightly includes many Photon changes. All Photon changes are disabled on Cedar builds.
Flags: needinfo?(jaws)
This is now disabled on Cedar,
Flags: needinfo?(mconley) | https://bugzilla.mozilla.org/show_bug.cgi?id=1380585 | CC-MAIN-2020-10 | refinedweb | 476 | 64.51 |
Created on 2017-11-10 16:33 by snwokenk, last changed 2017-11-11 05:26 by snwokenk.
1st sequence:
arr = multiprocessing.Array("b", 1) # byte type
arr[0] = 's'.encode()
print(arr[:])
result is [115]
----
2nd sequence:
arr = multiprocessing.Array("c", 1) # character type
arr[0] = 's'.encode()
print(arr[:])
result is b's'
----------
Wrong values for given types.
The values are correct for the given type codes, which should be the same as the corresponding type codes for the array and struct modules. Except the array module doesn't support the "c" type.
However, assigning b's' to an index of a "b" type array should fail with a TypeError, so I don't think you're showing actual code. A "b" array value is a signed integer in the range [-128, 127]. Larger magnitude integers can be assigned, but they alias (wrap around) back to this range.
I don't understand why you think they are the wrong values. What values were you expecting?
You have a byte array, you set the value to the byte b's', which is 115, and you get 115. You have a (byte) character array, you set the value to the byte b's', and you get b's'. What were you expecting?
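Concretely, the two type codes accept and return different kinds of values:

```python
from multiprocessing import Array

b = Array('b', 1)   # "b": signed byte, holds integers in [-128, 127]
b[0] = 115          # must assign an int; b[0] = b's' raises TypeError
print(b[:])         # [115]

c = Array('c', 1)   # "c": char, holds one-byte bytes objects
c[0] = b's'         # must assign bytes; c[0] = 's' raises TypeError
print(c[:])         # b's'
```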
I completely wrote the wrong code:
So I'll copy and paste:
--------------------
Scenario 1
import multiprocessing
br = multiprocessing.Array('c', 1)
br[0] = 's'
print(br[:])
Scenario 1 result = "TypeError: one character bytes, bytearray or integer expected"
-------------------------
Scenario 2
import multiprocessing
br = multiprocessing.Array('b', 1)
br[0] = 's'.encode()
print(br[:])
Scenario 2 results = "TypeError: an integer is required (got type bytes)"
-----------------------------------
I believe my confusion is that I am thinking more of the python data type bytes, which takes b'', than the C language data type byte (which takes numbers from -128 to 127). This confusion is compounded when I do something like scenario 3:
-------------------------------
import multiprocessing
br = multiprocessing.Array('c', 1)
br[0] = 's'.encode()
print(br[:])
scenario 3 results: b's'
------------------
In the first scenario passing 's' i get an error, even though by definition 's' is a c char data type.
To clarify
my expectation was something like this:
arr = multiprocessing.Array('c', 3) # type char
arr = "Fun"
which is similar to c implementation:
char arr[3] = "Fun"
AND
arr = multiprocessing.Array('b', 3) # type byte
arr = b'fun'
Also this website lists 'c' as a supported data type:
You'll find yourself needing to add temporary data to an individual user. Sessions allow you to do this by adding a key value pair and attaching it to the user's IP address. You can reset the session data anytime you want; which is usually done on logout or login actions.
The Session features are added to the framework through the
SessionProvider Service Provider. This provider needs to be between the
AppProvider and the
RouteProvider. For Masonite 1.5+, this provider is already available for you.
It's important to note that the Session will default to the
memory driver. This means that all session data is stored in an instantiated object when the server starts and is destroyed when the server stops. This is not good when using a WSGI server like Gunicorn which might use multiple workers because each worker will create it's own memory state and requests may jump from worker to worker unpredictably. If you are only using 1 worker then this won't be an issue as long as the worker doesn't die and reset for the duration of the server's life. In this case you should use another driver that doesn't have the memory issue like the
cookie driver which will store all session information as cookies instead of in memory.
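As a rough sketch (plain Python, not Masonite or Gunicorn code), the multiple-worker problem looks like this: each worker holds its own dictionary, so session data written by one worker is invisible to the others:

```python
# Two workers, each holding a private in-memory session store
worker_a_sessions = {}
worker_b_sessions = {}

# Request 1 lands on worker A, which stores session data keyed by the user's IP
worker_a_sessions['10.0.0.5'] = {'success': 'Your action is successful'}

# Request 2 from the same user lands on worker B, which knows nothing about it
print(worker_b_sessions.get('10.0.0.5'))  # None - the session data is "lost"
```

Storing the data in a cookie sidesteps this, because the client carries the session state with every request regardless of which worker answers.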
There are two ideas behind sessions. There is session data and flash data. Session data is any data that is persistent for the duration of the session, and flash data is data that is only persisted on the next request. Flash data is useful for showing things like success or warning messages after a redirection.
Session data is automatically encrypted and decrypted using your secret key when using the
cookie driver.
Sessions are loaded into the container with the
Session key. So you may access the
Session class in any part of code that is resolved by the container. These include controllers, drivers, middleware etc:
def show(self, request: Request):
    print(Session)  # Session class
Data can easily be persisted to the current user by using the set method. If you are using the memory driver, it will connect the current user's IP address to the data. If you are using the cookie driver, it will simply set a cookie with an s_ prefix to notify that it is a session, which allows better handling of session cookies compared to other cookies.
def show(self, request: Request):
    request.session.set('key', 'value')
This will update a dictionary that is linked to the current user.
You can also set a dictionary as a session value and it will automatically JSON encode and decode as you set and get the key:
def show(self, request: Request):
    request.session.set('key', {'key': 'value'})
When you get the key from the session it will turn it back into a dictionary.
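Conceptually, that automatic encode/decode is just a JSON round-trip. A simplified illustration (this is not Masonite's actual internal code):

```python
import json

value = {'key': 'value'}
stored = json.dumps(value)     # what set() would persist as a string
restored = json.loads(stored)  # what get() hands back as a dictionary
assert restored == value
```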
You can use it on flash as well:
def show(self, request: Request):
    request.session.flash('key', {'key': 'value'})
Data can be pulled from the session:
def show(self, request: Request):
    request.session.get('key')  # Returns 'value'
Very often, you will want to see if a value exists in the session:
def show(self, request: Request):
    request.session.has('key')  # Returns True
You can get all data for the current user:
def show(self, request: Request):
    request.session.all()  # Returns {'key': 'value'}
Data can be inserted only for the next request. This is useful when using redirection and displaying a success message.
def show(self, request: Request):
    request.session.flash('success', 'Your action is successful')
When using the cookie driver, the cookie will automatically be deleted after 2 seconds of being set.
You can of course change the drivers on the fly by using the SessionManager key from the container:

from masonite.managers import SessionManager

def show(self, session: SessionManager):
    session.driver('cookie').set('key', 'value')
It's important to note that changing the drivers will not change the session() function inside your templates (more about templates below).
The SessionProvider comes with a helper method that is automatically injected into all templates. You can use the session helper just like you would use the Session class.
This helper method will only point to the default driver set inside the config/session.py file. It will not point to the correct driver when the driver is changed using the SessionManager class. If you need to use other drivers, consider passing a new function through your view or changing the driver value inside config/session.py.
{% if session().has('success') %}
    <div class="alert alert-success">{{ session().get('success') }}</div>
{% endif %}
You could use this to create a simple Jinja include template that can show success, warning or error messages. Your template could be located inside resources/templates/helpers/messages.html:
{% if session().has('success') %}
    <div class="alert alert-success">{{ session().get('success') }}</div>
{% elif session().has('warning') %}
    <div class="alert alert-warning">{{ session().get('warning') }}</div>
{% elif session().has('danger') %}
    <div class="alert alert-danger">{{ session().get('danger') }}</div>
{% endif %}
Then inside your working template you could add:
{% include 'helpers/messages.html' %}
Then inside your controller you could do something like:
def show(self):
    return request().redirect('/dashboard') \
        .session.flash('success', 'Action Successful!')
or as separate statements
def show(self, request: Request):
    request.session.flash('success', 'Action Successful!')
    return request().redirect('/dashboard')
Which will show the correct message and message type.
You can reset both the flash data and the session data through the reset method.
To reset just the session data:
def show(self, request: Request):
    request.session.reset()
Or to reset only the flash data:
def show(self, request: Request):
    request.session.reset(flash_only=True)
Remember that Masonite will reset flashed data at the end of a successful 200 OK request anyway, so you will most likely not use the flash_only=True keyword parameter.
For exception families (ex: System.Net namespace), the steps are very similar.
One thing to be aware of, changing the options for an exception family (ex: System.IO) will change the setting for all nodes beneath it (ex: System.IO.FileNotFoundException) to match. Note: If you are debugging Smart Device (NetCF) projects using the beta 2 release of Visual Studio 2005, you will need to turn off the debugger's Just-My-Code (JMC) feature to be able to stop on first chance exceptions.Take care!-- DKDisclaimer(s):This posting is provided "AS IS" with no warranties, and confers no rights.Some of the information contained within this post may be in relation to beta software. Any and all details are subject to change. | http://blogs.msdn.com/davidklinems/archive/2005/07/19/440632.aspx | crawl-002 | refinedweb | 124 | 67.15 |
14 July 2009 04:57 [Source: ICIS news]
By Helen Yan
SINGAPORE (ICIS news)--Asia acrylonitrile (ACN) producers will implement steeper production cuts as their margins have been severely eroded by the soaring costs of feedstock propylene, market sources said on Tuesday.
Major Japanese ACN producer Asahi Kasei and Taiwanese ACN producer China Petrochemical Development Corp (CPDC) said the surge in costs of feedstock propylene had eroded their margins, leaving them no choice but to further slash operating rates by another 10-20% as soon as possible.
The feedstock propylene has soared to $1,000-1,045/tonne (€720-752/tonne) CFR (cost and freight) NE (northeast) Asia, up $150/tonne in the past month, according to global chemical markets intelligence service ICIS pricing.
In contrast, ACN prices had only started to edge up to $1,180-1,230/tonne CFR NE Asia last week after languishing at $1,150-1,200/tonne CFR NE Asia for the past two months since early May.
“We will cut the operating rates of our ACN plants further by another 10-15% as soon as possible if our margins continue to be eroded by the high feedstock propylene costs,” a company source at Asahi Kasei said.
Asahi’s ACN plants in
The same sentiment was echoed by Taiwanese ACN producer CPDC.
“We have no choice but to further cut the operating rate of our ACN plant by another 10-20% or our margins will be wiped out,” a company source at CPDC said.
CPDC is running its 190,000 tonne/year ACN plant at
However, a South Korean ACN producer, Taekwang Industrial, said it would monitor the market conditions before deciding whether to cut operating rates. It is currently running its 250,000 tonne/year ACN plant at
“It depends on the market conditions. We are running 100% now, but may cut the operating rate in September if market conditions deteriorate,” a company source said.
Apart from deeper production cuts, Asian ACN producers have also increased offers by $50-100/tonne to $1,300-1,350/tonne CFR Asia.
Although demand from the derivative acrylic fibre (AF) is expected to remain strong into the third quarter, ACN producers said they anticipate stiff resistance to the proposed price hikes.
“At the moment, we are facing strong resistance from the AF makers to offers above $1,300/tonne CFR Asia, but we have no choice but to keep our offers at this minimum level due to the feedstock cost push factor,” a South Korean ACN producer said.
($1 = €0.72). | http://www.icis.com/Articles/2009/07/14/9232212/asia+acn+makers+to+deepen+production+cuts+on+feedstock.html | CC-MAIN-2013-20 | refinedweb | 427 | 51.52 |
Re: NSLog and os_log only work in Xcode - suMac Mar 19, 2018 1:33 AM (in response to barryli)
It looks as if Console is only for real-time log display. It just doesn't go into the past (at least not for my App).
The command 'log show' does very similar to the Console App - and I was not able to extract my log lines.
However - when I do the 'log collect' ritual, saving a huge bunch of data on my desktop, in a .logarchive bundle - then log show can extract all my persistent log lines from that archive - with the same --predicates as you described. A good filtering mechanism (at least better then prefixing all your messages) would be to set up your own "subsystem", and filter by that.
I'm still working on it, and get frustrated more and more - because my clients are now going to need to learn to use the terminal, and have root privileges (to collect logs) and run a minute-long command just to be able to send me my Apps logs.
That's what I call "Beautiful" new technology... Hail Apple.
Another question - what log level/type does NSLog() map to? I need to continue using NSLog() for my clients running on older systems.
Re: NSLog and os_log only work in Xcode - suMac Mar 18, 2018 10:39 PM (in response to barryli)
Xcode 9.2, Latest Mac OS SDK - 10.13, (a bit old) Objective-C project, running on Mac OS 10.12 - Same sad behavior. The saddest thing is, I cannot even KNOW if my logs are persisted anywhere or not.
I need to deploy my Mac Application to clients, and I rely on logs on their Macs for catching failures and other issues "in the field". I am not talking about crashes, but just internal logic issues in the app.
I have tried NSLog() and followed closely the new unified logger APIs. I've watched the WWDC video, read the documentation, downloaded the Sample app (Paper Company) and even copied actual source lines to my code
--- to no avail. I cannot find any of my log lines neither in the Consoled App, nor using the "log" command line.
To make it clear - I specifically tried the error and failure log levels that should ALWAYS persist - just for proving that any logging persists.
I am getting quite desperate on this, and soon I will be retreating into one of the lousy open-source text-based log-writing utilities on the net. Yes - they ****, and they do not integrate with the OS, and they are bloating the disk with garbage - but I need logs, and MacOS (also iOS) have been depriving me of that functionality for quite a while.
If anyone learns something new about this (@ eskimo...) please add a note here sometime.
Re: NSLog and os_log only work in Xcode - eskimo Mar 19, 2018 3:34 AM (in response to suMac)
I’m not having any problems here. First, I created a small test project that logs some stuff.
#include <os/log.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    #pragma unused(argc)
    #pragma unused(argv)
    os_log(OS_LOG_DEFAULT, "qqq default");
    os_log_info(OS_LOG_DEFAULT, "qqq info");
    os_log_debug(OS_LOG_DEFAULT, "qqq debug");
    os_log_error(OS_LOG_DEFAULT, "qqq error");
    return EXIT_SUCCESS;
}
I then tested in two different scenarios, as explained below.
In the first scenario I wanted to test log streaming. Here’s how I did this:
I restored a virtual machine (VM) to a fresh 10.13.3 snapshot.
I copied my test tool to the VM.
$ scp LogTest virtual-high.local.:
Within the VM I opened a Terminal window and entered the following command.
$ log stream --level=debug --predicate 'eventMessage contains "qqq"'
I opened another Terminal window and ran my test tool.
$ ./LogTest $
The first Terminal window showed all my log output.
$ log stream --level=debug --predicate 'eventMessage contains "qqq"'
Filtering the log data using "eventMessage CONTAINS "qqq""
Timestamp               Thread  Type     Activity  PID  TTL
… 03:19:04.504214-0700  0x1a6a  Default  0x0       681  LogTest: qqq default
… 03:19:04.504676-0700  0x1a6a  Info     0x0       681  LogTest: qqq info
… 03:19:04.504696-0700  0x1a6a  Debug    0x0       681  LogTest: qqq debug
… 03:19:04.504724-0700  0x1a6a  Error    0x0       681  LogTest: qqq error
^C
In the second scenario I wanted to explore the “in the field” case. Here’s what I did:
I restored my VM back to its 10.13.3 snapshot.
I copied my test tool to the VM again.
Within the VM I opened a Terminal window and ran my test program.
$ ./LogTest $
I then ran sysdiagnose:
$ sudo sysdiagnose
I authenticated, authorised, and waited for sysdiagnose to complete.
I copied the resulting log back to my main Mac, also running 10.13.3
I unpacked the log and then opened system_logs.logarchive in Console.
In Console, I made sure that Include Info Messages and Include Debug Messages, both on the Action menu, were checked.
I then entered “qqq” in the search box and pressed Return.
Here’s what I see:
default  2018-03-19 03:19:58.336030 -0700  LogTest  qqq default
info     2018-03-19 03:19:58.336039 -0700  LogTest  qqq info
error    2018-03-19 03:19:58.336047 -0700  LogTest  qqq error
This is pretty much what I expected. The only oddity is that the debug message wasn’t captured by the sysdiagnose log, which is not hugely weird.
Share and Enjoy
—
Quinn “The Eskimo!”
Apple Developer Relations, Developer Technical Support, Core OS/Hardware
let myEmail = "eskimo" + "1" + "@apple.com" | https://forums.developer.apple.com/message/300567 | CC-MAIN-2019-43 | refinedweb | 912 | 74.29 |
The function time() returns the current calendar time, i.e., the number of seconds elapsed since 00:00:00 hours, January 1, 1970 GMT (also written UTC), up to the execution of the function. The function prototype is written as follows:
time_t time (time_t *Timeptr);
If Timeptr is not NULL, the return value also gets stored in the memory pointed to by Timeptr. If unsuccessful, the function returns -1 (cast to time_t).
The following program illustrates the application of the function time().
#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t Time;
    Time = time(&Time);
    /* clrscr(); -- Turbo C only; requires <conio.h> */
    printf("Time = %ld\n", (long)Time);  /* cast: time_t has no portable printf specifier */
    return 0;
}
The number given in the output represents the number of seconds since 00:00:00 hours, 01 January 1970 GMT, up to the execution of this program. In the following sections, we shall discuss the different conversion functions that convert the output of the time() function into strings representing the day, date, hours, minutes, seconds, and year.
Wrong .pyui file is getting loaded

def find_bundled_files(file_extension='.pyui', exclude_dirs=('Documentation', 'Textures')):
    found = []
    # Use a built-in image's path as a starting point (use a different one if you've removed this from your project):
    dummy_img_path = scene.get_image_path('Test_Lenna')
    bundle_path = os.path.split(os.path.split(dummy_img_path)[0])[0]
    for path, dirs, files in os.walk(bundle_path):
        # Remove excluded directory names from the search (not required, but makes it faster):
        for excluded in exclude_dirs:
            try:
                dirs.remove(excluded)
            except:
                pass
        for filename in files:
            ext = os.path.splitext(filename)[1]
            if ext == file_extension:
                full_path = os.path.join(path, filename)
                found.append(full_path)
    return found

ui_files = find_bundled_files()
w, h = ui.get_screen_size()
root_view = ui.load_view(find_bundled_files()[0])
root_view['namefield'].clear_button_mode = 'while_editing'
root_view['namefield'].text = read_username()
root_view.background_color = (0, 0.02, 0.1)
scene_view = SceneView()
scene_view.frame = (0, 0, w, h)
scene_view.scene = Game()
root_view.present(orientations=['portrait'], hide_title_bar=True)
In your .pyui file, do you really have a textfield named 'namefield'?
Yes.
Is len(ui_files) > 1?
Nope!
On the line after
ui.load_view()put:
print([sv.name for sv in root_view.subviews])
Everything is listed EXCEPT for the textfield, here's my .pyui file to prove it:
[{"class":"View","attributes":{"background_color":"RGBA(1.000000,1.000000,1.000000,1.000000)","tint_color":"RGBA(1.000000,1.000000,1.000000,1.000000)","enabled":true,"border_color":"RGBA(0.000000,0.000000,0.000000,1.000000)","flex":""},"frame":"{{0, 0}, {768, 960}}","nodes":[{"class":"Button","attributes":{"image_name":"ionicons-arrow-right-a-256","border_color":"RGBA(0.000000,0.000000,0.000000,1.000000)","font_size":15,"title":"","enabled":true,"tint_color":"RGBA(1.000000,1.000000,1.000000,1.000000)","flex":"","action":"play_game","font_bold":false,"name":"button1","corner_radius":0,"uuid":"74B78341-B9F6-42F4-96BB-32DDA25F8573"},"frame":"{{40, 277}, {339.5, 227.5}}","nodes":[]},{"class":"Button","attributes":{"tint_color":"RGBA(1.000000,1.000000,1.000000,1.000000)","font_size":15,"enabled":true,"font_bold":false,"name":"button2","flex":"","border_color":"RGBA(0.000000,0.000000,0.000000,1.000000)","action":"change_character","uuid":"D1CC743E-9E48-41D4-A5C2-00EF7C4A6797","image_name":"ionicons-person-256","title":""},"frame":"{{407, 278}, {326, 226.5}}","nodes":[]},{"class":"Label","attributes":{"font_size":17,"enabled":true,"text":"Play Game","font_name":"AvenirNext-Heavy","name":"label1","flex":"","border_color":"RGBA(0.000000,0.000000,0.000000,1.000000)","text_color":"RGBA(0.306122,0.714286,0.443485,1.000000)","alignment":"center","tint_color":"RGBA(0.306122,0.714286,0.443485,1.000000)","uuid":"4011CA12-1BB9-470C-B869-16B97CFF19CB"},"frame":"{{134, 512.5}, {150, 32}}","nodes":[]},{"class":"Label","attributes":{"font_size":17,"enabled":true,"text":"Change Character","font_name":"AvenirNext-Heavy","name":"label2","flex":"","border_color":"RGBA(0.000000,0.000000,0.000000,1.000000)","text_color":"RGBA(0.306122,0.714286,0.443485,1.000000)","alignment":"center","uuid":"46F37335-6B0A-4329-9CE7-3E4BCA414FE6"},"frame":"{{479, 512}, {181.5, 32}}","nodes":[]},{"class":"Label","attributes":{"font_size":100,"enabled":true,"text":"Cacti 
Killer","font_name":"AvenirNext-Heavy","name":"label3","flex":"","border_color":"RGBA(0.000000,0.000000,0.000000,1.000000)","text_color":"RGBA(1.000000,1.000000,1.000000,1.000000)","alignment":"center","uuid":"480ECFCD-00FE-4955-8D15-6DF38A6A1D56"},"frame":"{{34, 27.5}, {699, 137.5}}","nodes":[]},{"class":"Label","attributes":{"font_size":17,"enabled":true,"text":"By T Ferry Code","font_name":"AvenirNext-Heavy","name":"label4","flex":"","border_color":"RGBA(0.000000,0.000000,0.000000,1.000000)","text_color":"RGBA(0.306122,0.714286,0.443485,1.000000)","alignment":"center","uuid":"DAC42FBB-12C0-410B-B2F5-2002B2B23612"},"frame":"{{309, 158.5}, {150, 32}}","nodes":[]},{"class":"TextField","attributes":{"tint_color":"RGBA(0.153061,0.714286,0.304592,1.000000)","font_size":17,"enabled":true,"flex":"","name":"namefield","border_style":3,"text_color":"RGBA(0.000000,0.000000,0.000000,1.000000)","alignment":"center","border_color":"RGBA(0.000000,0.000000,0.000000,1.000000)","placeholder":"Your Name","uuid":"9C85BC1D-94BF-4D28-A042-9FF3E9D4B5AC"},"frame":"{{283, 650.5}, {200, 32}}","nodes":[]}]}]
I print the ui files and it appears there IS more than one, even though only one is listed in my file browser.
Try changing the 0 to -1 in the line root_view = ui.load_view(find_bundled_files()[0]) so that you take the last .pyui file instead of the first .pyui file.
You should also change the title of this thread to "Wrong .pyui file is getting loaded in Xcode" because your problem has nothing to do with .clear_button_mode() but is instead caused by the fact that root_view['namefield'] == None.
Will try when I get home today.
Tech,
Why don't you print out the name of ALL of the files that this script finds, when running on your device, not the simulator? That way, you can let people know what directory things are going to, and you might not need this script anymore!
You could also modify this function to look only for files that start with a specified name: the name of your .pyui.
@ccc Just for future reference, changing the 0 to -1 in the line root_view = ui.load_view(find_bundled_files()[0]) worked.
Can you please print out the full path name of ALL the files that this script finds as @JonB suggests? It would be good to understand what the paths are.
Using the 'find_bundled_files' as mentioned above by @techteej, I was able to compile a tentative first stand-alone App in Xcode 5.1 for iOS 7.0 - a simple App I made with my daughter, which ended up getting rejected because it didn't work on iOS 8. Now that I've upgraded to Xcode 6.1 (and iOS 8.x target devices), it seems that I am back to the issue of not being able to find supplementary files, be they .pyui files or supporting .txt files.
I've read that iOS 8 has changed the location of Documents with respect to the App directory. Has anyone figured out a way to obtain that location in a scripted way from the main Script.py?
Dvader, we asked tech to respond back with where files were located, so that we could document the right place for everything. So, perhaps you could try the following, and report back your results?
If you can print out the results from the following, both on the device in xcode, on the device in pythonista, and on the simulator, I think that would go a long way to figuring this out.
import os
print 'user', os.path.expanduser('~')
print 'cwd', os.path.abspath('.')
print 'current file', os.path.abspath(__file__)
print 'os file', os.path.abspath(os.__file__)
print 'image file', os.path.abspath(scene.get_image_path('Test_Lenna'))
The next step would be to use each of the above as starting locations for the os.walk printing out all of the pyuis we find along the way. Really, the os.walk won't be required at all, once we understand the folder structure.
I started to run similar print commands last night already, to try and figure this out. I'll run the exact script you mentioned and post the output tonight.
From my quick reading of the Apple documentation on file structure handling in iOS 8, they encourage/force you to query the OS when you start your App, to receive what the path to your docs is, because it doesn't seem to be the same every time. I think that's part of the changes they needed to add "app bundles". I have only started reading on this recently, so perhaps my interpretation of the documentation is wrong, so don't quote me on this.
Anyhow, will post those results. Thanks for the suggestion, @JonB.
in Pythonista on iPad running iOS 8.1
beginning of test printout
BASE = /private/var/mobile/Containers
user $BASE/Data/Application/number_and_letters_1
cwd $BASE/Data/Application/numbers_letters_1/Documents/
current $BASE/Data/Application/numbers_letters_1/Documents/Script.py
os file $BASE/Bundle/Application/numbers_letters_2/Pythonista.app/pylib/os.py
image file $BASE/Bundle/Application/numbers_letters_2/Pythonista.app/Textures/Test_Lenna.png
end of test printout
in IOS simulator
beginning of test printout
BASE = /Users/dvader/Library/Developer/CoreSimulator/Devices/numbers_letters_3/data/Containers
user $BASE/Data/Application/numbers_letters_4
cwd $BASE/Data/Applications/numbers_letters_4/Documents
current file $BASE/Data/Applications/numbers_letters_4/Documents/Script.py
os file $BASE/Bundle/Application/numbers_letters_5/PythonistaProject.app/pylib/os.py
image file $BASE/Bundle/Application/numbers_letters_5/PythonistaProject.App/Textures/Test_Lenna.png
end of test printout
ad hoc build on iPad running iOS 8.1
beginning of test printout
BASE = /private/var/mobile/Containers
user $BASE/Data/Application/numbers_letters_6
cwd $BASE/Data/Application/numbers_letters_6/Documents
current $BASE/Data/Application/numbers_letters_6/Documents/Script.py
os file $BASE/Bundle/Application/numbers_letters_7/Pythonista.app/pylib/os.py
image file $BASE/Bundle/Application/numbers_letters_7/Pythonista.app/Textures/Test_Lenna.png
end of test printout
It does seem like expanduser and initial Script directory are in the Application Data files consistently in iOS 8, while the supporting libraries are in the Application Bundle files. I'll have to see what the implications are for the find_bundle_files function, and where to start walking through the directories.
Any other observations?
I'm now upgraded to iOS 8 and Xcode 6.1, but it would be useful to have this written down for iOS 7 as well.
I'd suggest setting bundle path to the user path. Have the script print out all files it finds!
There are a few more paths listed in sysconfig, though I think most of those are from compilation, but you could give those a try too.
Apologies for not updating this earlier.
I spent a bit of time tonight on Xcode 6.1, looking at where files are going. First, a few notes on setting up:
Used the latest template linked to in the forum.
when I open that template in Xcode 6.1, it pops up with a couple of comments on having to optimize a couple of parameters. I can't remember what those were now, but I just agreed. Didn't seem to matter that much.
I use the tree structure setup by @omz, i.e. the main script is directly under the project root.
In Finder, I make a 'script' folder in the main project directory. All of my .py and .pyui files go there.
In Finder, I make an 'image' folder, where I put all of my supporting images.
In Xcode, I make a 'Script' group (looks like a yellow folder), and add all of the files in the 'script' Finder folder to that group. Note that those groups - those seen in a yellow folder - are just a convenient way to group and organize resources.
In Xcode, I make a 'my_images' group - also yellow folder - and add all of the files in the 'image' Finder folder to that group. Again, just a convenient way for Xcode to organize files. This is in contrast to the BLUE 'Textures' folder, which directly links to the 'Textures' subfolder in Finder. Any file you see in Finder in that subfolder, will automatically appear in your Xcode project in the 'Textures' BLUE folder.
With all this being done, I open the 'AppDelegate.m' file in Xcode and add, before the '@end' line, the few lines of code posted by @omz, which address the fopen, fwrite and putenv objects. You can find those at.
somewhere in your main script, or in supporting scripts, you'll need the find_bundled_files function described in another post.
If you organize your project as I describe here, all of your .pyui files will end up in the main App folder, just one branch above the os.py file as seen above, in the PythonistaProject.App folder, if you haven't changed anything further in your project. Note that all resources, .py, .pyui, custom images, text files that I've added ended up in here. The ONLY file that was in the Documents folder (os.path.join(os.path.expanduser('~'), Documents) is the main script (Script.py).
I'm sure there are other ways to organize your folders. I've started reading through some of the 'Build' scripts that the Xcode template has. This is where you see the destination of files in your App. You can try playing around with stuff there. But if you don't, I would recommend start with the find_bundled_files script to find your specific resources, and once you know where to locate them, you can hard-code the relative path, so that your App starts up faster. I'm hoping to do that as an update to the drawing App I developed with my kids ("Drawing with Sonia") and posted recently on the App Store; it does have a bit of a slower launch.
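As an illustration of hard-coding such a relative path once you know the layout (the file name Main.pyui and its location are assumptions for this sketch, not the real bundle layout):

```python
import os

# Derive the directory of a module that ships inside the app bundle,
# then walk up one level and join a known resource name:
pylib_dir = os.path.dirname(os.path.abspath(os.__file__))
app_dir = os.path.dirname(pylib_dir)
pyui_path = os.path.join(app_dir, 'Main.pyui')  # hypothetical file name
```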
In addition to all this, a couple more comments - sorry for the very long post:
If you click on the Project root in Xcode and look at the General properties, I recommend switching to using Xcode assets (.xcassets) files for the app icon and launch images. They've added that functionality since Xcode 5. You just drag and drop the images in there, and it manages everything else for you.
In the main info.plist file, I've always had to switch the 'Application requires iPhone environment' to YES, otherwise, things don't work for me. Even though the only App I've posted is an iPad App, that flag seemed to be necessary.
As @omz suggested, if you don't need matplotlib, numpy, images, sounds or other resources already included, I definitely recommend removing them carefully. The base size of an App, if you just use it as is, will run over 50MB just from having to compile for the different processors as well.
And speaking of different processors supported: again clicking on the main Project root, and going in to the 'Build Settings', you need to remove 'arm64' from the 'Valid Architectures'. Currently, the project isn't compatible with 64-bit, but @omz recently confirmed in another post that he has that support ready for a future version, even though it does take up more space as a project.
Finally, I would recommend trying to figure out a good place in your App, or in the online description, to give proper credit where it's due. Whether it's recognizing all the hard work @omz has put into Pythonista, or any resources he's included in the template, or any resources you're adding yourself.
I believe these are most of the tips I can give to you. When it comes to setting up a Developer account and figuring out the iOS App development and posting pipeline, it took me a good several weeks to pass that hurdle, but it is certainly doable. I wish you good luck with that part, and look forward to seeing what can be accomplished with the Pythonista framework. | https://forum.omz-software.com/topic/1399/wrong-pyui-file-is-getting-loaded-in-xcode/7 | CC-MAIN-2019-39 | refinedweb | 2,428 | 50.63 |
Member
337 Points
Mar 05, 2020 10:19 AM|polachan|LINK
Hi
I want to check that a given string value is in the correct format before converting it into a DateTime variable. But the value '05/03/2020 8:15:00' is reported as an incorrect format. Please can you advise me where the problem is?
eg: datetimestring = '05/03/2020 8:15:00'

string clockeddate = log.AttendanceDate;
string timeclock = log.Hrs + ':' + log.Mins;
string datetimestring = clockeddate + " " + timeclock + ":00";
DateTime dDate;
bool isCorrectDateFormat = DateTime.TryParseExact(datetimestring, "dd/MM/yyyy HH:mm:ss",
    CultureInfo.InvariantCulture, DateTimeStyles.None, out dDate);
if (isCorrectDateFormat)
{
    dDate = Convert.ToDateTime(clockeddate + " " + timeclock);
}
All-Star
47030 Points
Mar 05, 2020 10:55 AM|PatriceSc|LINK
Hi,
HH requires two digits. Try H instead. I tend to use TryParse especially with the invariant culture.
Edit: for example
using System;
using System.Globalization;

public class Program
{
    public static void Main()
    {
        DateTime dDate;
        foreach (string value in new string[] { "05/03/2020 8:15:00", "05/03/2020 08:15:00" })
        {
            Console.WriteLine(DateTime.TryParseExact(value, "dd/MM/yyyy HH:mm:ss", CultureInfo.InvariantCulture, DateTimeStyles.None, out dDate));
            Console.WriteLine(DateTime.TryParseExact(value, "dd/MM/yyyy H:mm:ss", CultureInfo.InvariantCulture, DateTimeStyles.None, out dDate));
            Console.WriteLine(DateTime.TryParse(value, CultureInfo.InvariantCulture, DateTimeStyles.None, out dDate));
        }
    }
}
shows False for the very first result and True otherwise.
All-Star
52433 Points
Mar 06, 2020 02:53 AM|oned_gk|LINK
Hi polachan,
I assume that you have AttendanceDate in DateTime type, Hrs, Mins in double or integer.
So, why you make a datetime string to get dDate?
You can calculate the values directly without converting them to a string first.
Participant
1990 Points
Mar 06, 2020 07:46 AM|Sherry Chen|LINK
Hi polachan ,
As PatriceSc said , you make the mistake on the Date Format . The "H" custom format specifier represents the hour as a number from 0 through 23; that is, the hour is represented by a zero-based 24-hour clock that counts the hours since midnight. A single-digit hour is formatted without a leading zero.
The "HH" custom format specifier (plus any number of additional "H" specifiers) represents the hour as a number from 00 through 23; that is, the hour is represented by a zero-based 24-hour clock that counts the hours since midnight. A single-digit hour is formatted with a leading zero.
Refer to Custom date and time format strings for more details.
Best Regards,
Sherry
3 replies
Last post Mar 06, 2020 07:46 AM by Sherry Chen | https://forums.asp.net/t/2164751.aspx?checking+of+correct+datetime+format+is+not+working | CC-MAIN-2020-24 | refinedweb | 426 | 50.12 |
Django Polls Webapp¶
This tutorial will look at writing and deploying a web application. The app is an online poll where visitors can view choices (a bit of text) and vote the option up and down.
We will cover local deployment as well as deploying your web application to the world, where other people can play with it, and where you can modify it.
This is based heavily on the official tutorial for the Django web programming framework as well as the Ruby on Rails for Women workshop.
If you would like to see an example of this completed tutorial, Gregg did it, Lukas did it
Part 1: If You Build It... They Will Pay You¶
The Client¶
VoteOnEverything, Ltd. (VOEL) is a web startup hoping to make some money on this “internet” thing the kids are talking about. Their idea: an app where people can easily create polls on their favorite subjects and vote on them! They are flush with cash, and have hired you (you’re smart, right?) to help out!
Example User Stories¶
- As a poll creator, it should be obvious and easy to create polls.
- As a visitor, I want stable urls so I can share my polls easily.
- As a voter, it should be easy to vote on polls!
- As VOEL, I want polls with lots of total votes to be on the front page.
- As VOEL, I want to keep track of visitor information, to sell to advertisers, including their source ip, geography, etc.
- As a user, I want the application to keep track of polls I have created, my voting history.
- (many many more)
Designing The Prototype¶
Time to make a prototype! After hearing about Python and the Django web framework from that smart woman you know who works at Google, you have decided to try it out!
Since this is a prototype, you have decided to eliminate some features (logins, history tracking, demographic information, etc.) to focus on the core application: polls and voting.
The prototype will consist of two parts:
- A public site that lets people view polls and vote in them.
- An admin site that lets you add, change and delete polls.
You have decided to implement the following pages:
- Poll “index” page – displays the latest few polls.
- Poll “detail” page – displays a poll question, with no results but with a form to vote.
- Poll “results” page – displays results for a particular poll.
- Vote action – handles voting for a particular choice in a particular poll.
To support these pages, you need these abstractions (models, objects):
- Polls
- Choices
Part 2: Create your Django App¶
Goal: Configure and explore a basic Django “out of the box app”.
Create necessary directories, activate virtualenv¶
(You may have already done this!)
In your terminal, starting from somewhere reasonable (here, mydir)
# remember in bash, # means comment -- don't type these :)
cd mydir
mkdir django_projects
virtualenv --no-site-packages pystar
# New python executable in pystar/bin/python
# Installing setuptools............done.
source pystar/bin/activate
# windows -> pystar\Scripts\activate.bat
# (pystar)My-MacBook-Pro:mydir $
$ ls .
# django_projects  pystar
Switch to the right directory¶
In a terminal (or GitBash), get into the django_projects directory we created in the Friday setup portion of the tutorial.
You can do that by typing this into your terminal:
cd django_projects
In the Friday setup portion of the workshop, you already saw how to use the django-admin.py command to start a project. However, today, we’re using a super basic Django project that has some of the less crucial settings already predefined. Download the files at and add the myproject folder to your django_projects directory.
Let’s go into myproject and start looking around.
Look at the files¶
Let’s look at files in the project (you can ignore any .pyc files). The default Django app should look like this:
# remember, '$ ' indicates the terminal prompt, don't type it!
$ ls
__init__.py  manage.py  settings.py  urls.py
These files are:
- __init__.py: An empty file that tells Python that this directory should be considered a Python module. Because of the __init__.py file, you can use import to import myproject.
- manage.py: A command-line utility that lets you interact with this Django project in various ways (running the development server, running tests, and so on).
- settings.py: Settings/configuration for this Django project.
- urls.py: The URL declarations for this Django project; a “table of contents” of your Django-powered site.
Start the Development (Local) Server¶
Verify the development server will start.
Run the command:
python manage.py runserver
Review the output in your terminal. It should look similar to:
Validating models...
0 errors found.

Django version 1.2, using settings 'myproject.settings'
Development server is running at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
Now that the server’s running, visit http://127.0.0.1:8000/ with your Web browser. You’ll see a “Welcome to Django” page, in pleasant, light-blue pastel. It worked!
Note how mouthy this is, and that it mentions DEBUG, settings.py, and a lot more, which will be covered in later sections.
Of course, you haven't actually done any work yet. Here's what to do next: If you plan to use a database, edit the DATABASES setting in myproject/settings.py. Start your first app by running python myproject/manage.py startapp [appname]. You're seeing this message because you have DEBUG = True in your Django settings file and you haven't configured any URLs. Get to work!
Observe the logging that happens in the terminal where your server is running:
[24/Mar/2011 11:50:18] "GET / HTTP/1.1" 200 2057
which has the format:
DATE METHOD URL PROTOCOL RESPONSE_CODE CONTENTSIZE
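As an illustration, a log line in that format can be picked apart with a small regular expression. This is just a sketch of the idea, not Django's own logging code:

```python
import re

# Matches lines like: [24/Mar/2011 11:50:18] "GET / HTTP/1.1" 200 2057
LOG_PATTERN = re.compile(
    r'\[(?P<date>[^\]]+)\] '                                # DATE
    r'"(?P<method>\w+) (?P<url>\S+) (?P<protocol>[^"]+)" '  # METHOD URL PROTOCOL
    r'(?P<code>\d{3}) (?P<size>\d+)'                        # RESPONSE_CODE CONTENTSIZE
)

line = '[24/Mar/2011 11:50:18] "GET / HTTP/1.1" 200 2057'
m = LOG_PATTERN.match(line)
print(m.group('method'), m.group('url'), m.group('code'))  # GET / 200
```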
Navigate to a page that doesn’t exist, for example http://127.0.0.1:8000/nothing-here/. What changes in the terminal log?
Exit the server
Experiment: These two commands are identical:
python manage.py runserver python manage.py runserver 8000
The ‘8000’ number is the port on which the server runs, by default. Start a server on port 8103, and navigate to it using your browser [answer].
Type python manage.py help. Speculate what some of these commands might do. For reference, the django-admin.py documentation describes them all.
Part 3: Save your work!¶
Before we do anything else, let’s save our work and share it with the world.
We’ll do that with git and Github. On your own computer, get to a Terminal or a GitBash.
You should have set up git and your GitHub account yesterday. If not, do it now.
cd to get into the myproject directory. If it’s a fresh Terminal, this is what you’ll do:
cd ~/django_projects/myproject
Is this a new project? (It is!) So:
create a git repository in the project directory:
# in myproject git init
Create your project on GitHub: create a new repository called “myproject”. On the main dashboard page, click on “New Repository” and fill out the necessary information.
Check the status of your files. At this point:
(pystar2)Gregg-Linds-MacBook-Pro:myproject gregg$ git status
# On branch master
#
# Initial commit
#
# Untracked files:
#   (use "git add <file>..." to include in what will be committed)
#
#   __init__.py
#   manage.py
#   settings.py
#   urls.py
nothing added to commit but untracked files present (use "git add" to track)
None of the files are tracked. That is, git doesn’t know about them!
Add one file git add manage.py. POP QUIZ: What does git status say now?
Add all your files to the repo, in the local directory:
git add *.py # all .py files, using a wildcard match.
Now git is aware of your files. Use git status to see them there in the staging area (the index).
git commit to commit those files:
# -m -> what is the 'message' for the commit git commit -m "Initial commit of django project from the PyStar workshop"
Look at your changes with git log to see your history. Is your commit message there?
Connect the remote github repo to your local one, and use git push to push those up to your Github repository (putting your user name and project title in the appropriate slots):
git remote add origin git@github.com:myusername/myproject.git git push origin master
Go to your Github account in your browser. Find the myproject repository. Do you see your files?
Remember:
- “commit your work” means “add and commit it to the local repository on your computer”
- “push your work” means “git push it to github” (if your computer explodes, there will still be a copy of your code on github!)
Part 4: Configure your Django Project¶
Now that we have the scaffolding for our project in place, we can get to work! It needs to be configured.
Add yourself as an admin!¶
Open settings.py in your editor. settings.py is a Python script that only contains variable definitions. Django looks at the values of these variables when it runs your web app. The scaffold we wrote for you and Django’s own ‘startproject’ command has specified some of these variables by default, though not all of them.
Find ADMINS and replace Your Name and your_email@example.com with your name and your email address.
Remove the pound mark from the front of the line to uncomment it.
git add and commit it:
git add settings.py git commit -m "made myself an admin"
Fix security settings¶
Right now, everyone in the workshop has the same “SECRET_KEY”. Since Django uses this key for various sensitive things, you should change it.
In settings.py, find the variable named SECRET_KEY and set it to whatever string you want.
Verify it looks something like:
# change this to something arbitrary. SECRET_KEY = '6yl8d1u0+ogcz!0@3_%au)_&ty$%1jcs2hy-!&v&vv2#@pq^(h'
How would we put a single-quote (‘) in our SECRET_KEY? [answer]
save the file
git add and commit it:
git add settings.py git commit -m "changed SECRET_KEY"
Set up the Database¶
Keep looking at settings.py: The DATABASES variable is a dictionary (note the ‘{}’ characters) with one key: default.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3', # Add 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'oracle'.
        'NAME': 'database.db', # Or path to database file if using sqlite3.
    }
}

We’ve set our app to use a sqlite database, in the ENGINE attribute. Sqlite is great for development because it stores its data in one normal file on your system and is therefore really simple to move around with your app.
Note
In production, Sqlite has issues because only one process can write to it at a time. Discuss the implications of this with your group. [answer]
The NAME key tells the Django project to use a file called database.db to store information for this project.
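To see how lightweight this is, you can poke at a sqlite database from plain Python; the sqlite3 module ships with the standard library. (This sketch uses a throwaway file, demo.db, not your project's database.db.)

```python
import os
import sqlite3

# Start fresh if a previous run left the file behind.
if os.path.exists('demo.db'):
    os.remove('demo.db')

# Everything lives in one ordinary file on disk.
conn = sqlite3.connect('demo.db')
conn.execute('CREATE TABLE polls (question TEXT)')
conn.execute("INSERT INTO polls VALUES ('What is the Weirdest Cookbook Ever?')")
conn.commit()

rows = conn.execute('SELECT question FROM polls').fetchall()
print(rows)  # [('What is the Weirdest Cookbook Ever?',)]
conn.close()

print(os.path.exists('demo.db'))  # True: the whole database is this one file
os.remove('demo.db')              # clean up
```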
Pop quiz: Does database.db exist right now? Find out! [answer]
Notice the INSTALLED_APPS setting towards the bottom of settings.py. That variable (a tuple... note the ‘()’ symbols) holds the names of all Django applications that are activated in this Django instance. Apps can be used in multiple projects, and you can package and distribute them for use by others in their projects. In this project it looks something like:

INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.sites',
    'south',
)
What do you think these various apps do? Why does it make sense for them to come in a standard configuration? [answer]
Each of these applications makes use of at least one database table, so we need to create the tables in the database before we can use them. To do that, run python manage.py syncdb. When it asks you to create a superuser, use ‘super’ as your password:
You just installed Django's auth system, which means you don't have any superusers defined.
Would you like to create one now? (yes/no): yes
Username (Leave blank to use 'barack'): super
E-mail address: example@example.com
Password:
Password (again):
Superuser created successfully.
Installing index for auth.Permission model
Installing index for auth.Group_permissions model
Installing index for auth.User_user_permissions model
Installing index for auth.User_groups model
Installing index for auth.Message model
No fixtures found.
Does this seem magical? [answer]
Pop quiz: Does database.db exist right now? Find out! [answer]
Save and commit your work
git status
# will show settings.py is changed, plus a new untracked file
# On branch master
# Changed but not updated:
#   (use "git add <file>..." to update what will be committed)
#   (use "git checkout -- <file>..." to discard changes in working directory)
#
#   modified:   settings.py
#
git add settings.py
git commit -m "set up the database"
Drink some tea and take a stretch break. Then we can come back to STRETCHING OUR MINDS.
Part 5: In Which You Save You From Yourself, Using Git.¶
Your work is saved and committed (in git!) right?
Right? How do you know? [answer]
Good. Because you got a case of the accidental deletes and you’ve deleted your settings.py file!
No really. Go and delete settings.py. Throw it in the trash. Or the recycling bin. Or rm from the command line. Make sure it’s really gone using ls.
Try running your dev server. What happens? Why?
Delete your settings.pyc file. Try running your dev server. What happens now? Why? [answer]
Cry! So they’re gone right? No way back. And everything’s broken!
Rejoice! Because we’re using version control and version control is about to save your bacon!
$ git checkout settings.py
Look at your project folder again, using ls. Lo and behold, settings.py! Back from beyond the grave! Cool, huh? Open it up, and verify it is exactly as you left it. Isn’t that magical? [answer].
But what of settings.pyc? Start your dev server. It works, right? Stop your dev server and look at the files in your project. Do you see settings.pyc? How did it get there? [answer]
Part 6: Build The Polls Application¶
Now that your environment – a “project” – is set up, you’re set to start building the poll application.
Each application you write in Django consists of a Python package, somewhere on your Python path, that follows a certain convention. Django comes with a utility that automatically generates the basic directory structure of an app (that Django expects), so you can focus on writing code!
Projects and Apps¶
There is a difference between a project and an app. An app is a Web application that does something: django.contrib.auth, for example, is an app. So is our polls app. An app is:
- single purpose - login, passwords, polls, forum, etc.
- orthogonal to / independent of other apps - polls shouldn’t have to know the inside details of authentication, for example.
A project corresponds to a ‘website’: it contains a settings.py file, and it may have corresponding databases or other data stores that the apps interact with.
Django apps can live anywhere on the Python path. The python path is a list of paths where the python interpreter looks for modules.
$ python
>>> import sys
>>> sys.path
['',
 '/Users/gregg/mydir/pystar/lib/python2.6/site-packages/setuptools-0.6c11-py2.6.egg',
 '/Users/gregg/mydir/pystar/lib/python2.6/site-packages/pip-0.8.3-py2.6.egg',
 '/Users/gregg/mydir/pystar/lib/python26.zip',
 '/Users/gregg/mydir/pystar/lib/python2.6',
 '/Users/gregg/mydir/pystar/lib/python2.6/plat-darwin',
 '/Users/gregg/mydir/pystar/lib/python2.6/plat-mac',
 ...
]
To be importable (seeable by Python), your Django app must be in one of the folders on your path.
Experiment: look at your Python Path!
Create The Poll App¶
In this tutorial, we’ll create our poll app in the myproject directory for simplicity. In the future, when you decide that the world needs to be able to use your poll app and plug it into their own projects, and after you determine that your app plays nicely with other apps, you can publish that directory separately!
open your terminal and navigate to myproject
make scaffolding for the app
python manage.py startapp polls
That’ll create a directory polls to house the poll application.
Verify what is new.
git status # should show 'polls/' in 'untracked'
While we are here, let’s make git ignore database.db by adding it to .git/info/exclude. Verify (using git status) that it is gone.
Examine the layout of polls (we will do more of this in following sections).
# remember not to type the '$'; it just means the prompt.
$ ls polls
polls/
__init__.py  models.py  tests.py  views.py
Prove that polls is importable [answer]
Add and commit polls/*py.
Install the polls app into the project. Edit the settings.py file again, and change the INSTALLED_APPS setting to include the string ‘polls’ as the last entry. [answer]
Save and commit the settings.py file.
Refill your tea!
Part 7: Test your Django Project¶
Run the default Django tests
python manage.py test
Examine the output. If there are errors, what are they? [answer]
Run the tests for the polls application
python manage.py test polls
You should get output like:
$ python manage.py test polls
Creating test database for alias 'default'...
.
----------------------------------------------------------------------
Ran 1 test in 0.001s

OK
Destroying test database for alias 'default'...
Make it louder! Run python manage.py test polls -v 2 and see that it now names the test –> test_basic_addition (polls.tests.SimpleTest). ‘-v’ is for verbosity, and (here) can be 0,1,2,3.
View polls/tests.py, and see the example test.
Copy test_polls.py and move it into the polls directory
Edit polls/tests.py to include the tests from test_polls.py, so it looks like:
from django.test import TestCase from test_polls import *
Add it into your project code git repo:
git add polls/tests.py polls/test_polls.py git commit -m "added tests"
Examine test_polls.py in your editor. This file (provided by us) gives acceptance tests for many of the points on the original spec sheet. Normally this is the sort of thing you would write yourself, after reading your spec and deciding on acceptance criteria. We’ve done it here to help you along, and to provide an example for your work in the future!
Writing good tests is hard!
Re-run your tests. python manage.py test polls. Note that most fail! (We will assume that the django tests all pass and focus on testing the polls tests, from here on out.)
Discuss with your groups why testing matters. [answer]
We will return to testing throughout this document as we add new features. We are done when all the tests pass!
Further research: the Django documentation on testing, which goes into this in much greater detail.
test yourface: Take your eyes off the screen, and make some funny faces.
Part 8: Refine Your Workflow!¶
When developing, this is a good work flow.
- Design a feature, with criteria for acceptance.
- Test your feature, to see if meets those criteria.
- When it works (or you make good progress), commit your work.
We will use this workflow throughout the following sections, as we add the features that our prototype spec outlined.
Part 9: Philosophy Break!¶
In the following sections, there will be Django Philosophy breaks to highlight major ideas of the Django framework. Other web frameworks might make these choices or use these terms differently. Who is right? [answer]
Part 10: Mockups, Views, and URLs¶
Django-Philosophy
A view is a “type” of Web page in your Django application that generally serves a specific task and has a specific associated template. Each of the pages we listed in the prototype spec (index, detail, results, vote) will be a view. In Django, each view is represented by a simple Python function.
Design your URLs¶
To get from a URL to a view, Django uses what are known as ‘URLconfs’. A URLconf maps URL patterns (described as regular expressions) to views; the ROOT_URLCONF setting in settings.py names the module that holds this mapping for the project.

Pop quiz: what is the ROOT_URLCONF for your project? [answer]
You might ask, “What’s a regular expression?” Regular expressions are patterns for matching text. In this case, we’re matching the URLs people go to, and using regular expressions to match whole ‘groups’ of them at once.
(If you’d like to learn more about regular expressions read the Dive into Python guide to regular expressions sometime. Or you can look at this xkcd.)
In addition to matching text, regular expressions can capture text. Capturing means to remember that part of the string, for later use. Regexps (or Regexen) use parentheses () to wrap the parts they’re capturing.
For Django, when a regular expression matches the URL that a web surfer requests, Django extracts the captured values and passes them to a function of your choosing. This is the role of the callback function above. When a regular expression matches the url, Django calls the associated callback function with any captured parts as parameters. This will be much clearer after the next section.
Add URLs to urls.py¶
When we ran django-admin.py startproject myproject to create the project, Django created a default URLconf file called `urls.py`.
Write our URL mapping. Edit the file myproject/urls.py so it looks like this:
urlpatterns = patterns('',
    (r'^polls/$', 'polls.views.index'),
    (r'^polls/(\d+)/$', 'polls.views.detail'),
    (r'^polls/(\d+)/results/$', 'polls.views.results'),
    (r'^polls/(\d+)/vote/$', 'polls.views.vote'),
)
POP QUIZ: suppose a visitor goes to /polls/23/. Which pattern matches, and which view function is called?
Save urls.py.
Start the dev server and try that url out! What happens?
Re-run the test suite python manage.py test polls. What parts (if any) pass now that didn’t before? You should be seeing lots of “ViewDoesNotExist” messages. (We will create the views in the next section; the tests will make much more sense after that!)
Save and commit. When a visitor requests /polls/23/, Django will call the polls.views module’s detail() function like so:
detail(request=<HttpRequest object>, '23')
The ‘23’ part comes from (\d+). Using parentheses around a pattern “captures” the text matched by that pattern and sends it as an argument to the view function; the \d+ is a regular expression to match a sequence of digits (i.e., a number).
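You can watch this capture happen with Python's re module directly. This is a sketch of the idea, not Django's internal code (note that Django matches against the URL without its leading slash):

```python
import re

# The same pattern as the 'detail' entry in urls.py.
pattern = re.compile(r'^polls/(\d+)/$')

match = pattern.match('polls/23/')
print(match.group(1))  # '23' -- the captured digits, passed to the view

# Non-matching paths simply return None.
print(pattern.match('polls/abc/'))  # None
```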
Does this seem magical? [answer] Actually, this is less magical than some other parts of Django! Regular Expressions, though sometimes cryptic, are a pretty common and useful skill among developers.
The idea that a URL doesn’t have to map onto a file, or some other sort of static resource, is quite powerful. The URL is just a way of giving instructions to some server, somewhere.
(Rant: In Django, as in most modern frameworks, you have total control over the way your URLs look. People on the web won’t see cruft like .py or .php or even .html at the end of your URLs. There is no excuse for that kind of stuff in the modern era!)
Exercise: Think about another hypothetical website, “MyMagicToa.st”, in which you use a virtual toaster. What might some actions (and associated URLs) be for interacting with it?
Write Some Views!¶
Start the development server: python manage.py runserver
Fetch “http://127.0.0.1:8000/polls/” in your browser. You should get a pleasantly-colored error page with the following message:
ViewDoesNotExist at /polls/ Tried index in module polls.views. Error was: 'module' object has no attribute 'index'
Recall this line (r'^polls/$', 'polls.views.index').
Explore this using your django-shell: python manage.py shell
>>> import polls            # imports fine!
>>> import polls.views      # imports fine also!
>>> dir(polls.views)        # what is in there?
>>> 'index' in dir(polls.views)
False
>>> import inspect
>>> inspect.getsourcefile(polls.views)
# something like '/Users/adalovelace/gits/myproject/polls/views.py'
So, a mystery? Where is the view!? It’s nowhere! The URL parsing is going fine, but there is no one listening at the other end of the phone! This ViewDoesNotExist error happened because you haven’t written a function index() in the module polls/views.py.
Try http://127.0.0.1:8000/polls/1/, http://127.0.0.1:8000/polls/1/results/, and http://127.0.0.1:8000/polls/1/vote/, and you will see similar messages. The error messages tell you which view Django tried (and failed to find, because you haven’t written any views yet).
Write some views. Open polls/views.py and put the following Python code in it:
from django.http import HttpResponse

def index(request):
    return HttpResponse("Hello, world. You're at the poll index.")
This is a very simple view.
Save the views.py file, then go to in your browser, and you should see that text.
RE-RUN YOUR TESTS. POP QUIZ. Do more pass?
Add a few more views by adding to the views.py file. These views are slightly different, because they take an argument (which, remember, is passed in from whatever was captured by the regular expression in the URLconf):
# recall or note that %s means "substitute in a string"
def detail(request, poll_id):
    return HttpResponse("You're looking at poll %s." % poll_id)

def results(request, poll_id):
    return HttpResponse("You're looking at the results of poll %s." % poll_id)

def vote(request, poll_id):
    return HttpResponse("You're voting on poll %s." % poll_id)
Save views.py.
Navigate to http://127.0.0.1:8000/polls/1/. It’ll run the detail() method and display whatever ID you provide in the URL. Try http://127.0.0.1:8000/polls/1/results/ and http://127.0.0.1:8000/polls/1/vote/ too – these will display the placeholder results and voting pages.
Add a little html to the ‘results’ view. Wrap the poll_id in <strong> </strong> tags and verify that the view is indeed bold!
RE-RUN YOUR TESTS. POP QUIZ. Which ones now pass?
Add and commit your code. Remember to write a good commit message that mentions what changed (in English), with more details below. Mention which tests now pass (hint, they are ‘reachability’ tests!)
Mockery, Mockery¶
These views don’t plug into real polls. This is by design.
- front-end (visual) and back-end (data) development can happen simultaneously
- demonstrating the UI of the product shouldn’t rely on having full data in the back end.
All of this relies on the frontend and backend having a consensus view of the interface between them. What does a ‘Poll’ look like? What data and methods might it have? If we knew this, we could construct mock objects and work with them, instead! Keeping objects simple makes writing interfaces between different layers of the application stack easier.
We will come back to templates (and use Django’s built-in templating facilities rather than simple python string formatting) after we build some models.
Part 11: Showing Off!¶
Time to show our work to the world. To do this, we are going to use DjangoZoom, a fairly new site that makes doing remote deployment easy! It’s still in beta, and we are going to be guinea pigs for them!
Push your code to github
Go to the DjangoZoom site.
Login with the credentials given in class.
Enter your GitHub url.
Rename your project.
Navigate to the URL you eventually get.
OHNOES! There is no slash (root) view!
See that ‘/polls/’ looks just like how it does on your local machine.
Fix that locally!
in urls.py add:
urlpatterns = patterns('', (r'^$', 'polls.views.redirect_to_polls'), (r'^polls/$', 'polls.views.index'),
in polls/views.py:
from django.http import HttpResponseRedirect

def redirect_to_polls(request):
    return HttpResponseRedirect('/polls/')
Restart your local server, and hit http://127.0.0.1:8000/. What happened? Look at your logging:
[24/Mar/2011 15:01:15] "GET / HTTP/1.1" 302 0 [24/Mar/2011 15:01:15] "GET /polls/ HTTP/1.1" 200 39
See the ‘302’? By returning a HttpResponseRedirect, you redirected the user to a different page! (302 is the HTTP status code for a redirect.)
Commit and push your changes.
Return to DjangoZoom and rebuild.
Your redeployed site should have a sensible ‘/’ (that redirects to /polls).
Takeaways:
- pushing to remote location exposed that having the ‘/’ behaviour unspecified is a little ugly. This should be added to the spec.
- redirects can hide a multitude of sins. If you are used to Apache ModRewrite, doing it from right in your framework can be a lot easier!
Part 12: Poll and Choice Models¶
Remember those files from Create The Poll App above? We have worked with views.py and tests.py. Let’s tackle models.py next and make some actual data for our views to see!
Django-Philosophy: a model is the definitive source of data about your data, and each model generally maps to a single database table. This simple correspondence between models and tables is a design choice, and not everyone likes it. [discussion]
In our simple poll app, we’ll create two models: Polls and Choices. As per our spec from the customer:
A poll has:
- a question
- a publication date.
A choice has two fields:
- the text of the choice
- a vote tally.
Each Choice is associated with a Poll and each Poll has associated Choices. We will represent these concepts with Python classes. In Django, each model is represented by a class that subclasses django.db.models.Model, and each model has a number of class variables, each of which represents a database field in the model.
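Based on the spec above, polls/models.py might look like the following sketch. The field names match what the rest of this tutorial uses (question, pub_date, choice, votes), and the syntax is for the Django 1.2 era this tutorial targets:

```python
from django.db import models

class Poll(models.Model):
    question = models.CharField(max_length=200)
    pub_date = models.DateTimeField()

class Choice(models.Model):
    # ForeignKey is what gives each Poll its choice_set attribute,
    # used later in the tutorial.
    poll = models.ForeignKey(Poll)
    choice = models.CharField(max_length=200)
    votes = models.IntegerField()
```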
Make the Models Migrate-able¶
When you create your models, you might not always know exactly what fields your models will need in advance. Maybe someday your polls app will have multiple users, and you’ll want to keep track of the author of each poll! Then you would want to add another field to the model to store that information.
Unfortunately, Django (and most database-using software) can’t figure out how to handle model changes very well on its own. Fortunately, a Django app called south can handle these changes (called ‘migrations’) for us.
Now that we’ve made our first version of our models file, let’s set up our polls app to work with South so that we can make migrations with it in the future!
On the command line, write:
$ python manage.py schemamigration polls --initial
As you can see, that’s created a migrations directory for us, and made a new migration inside it. All we need to do now is apply our new migration:
$ python manage.py migrate polls
Great! Now our database file knows about polls and its new models, and if we need to change our models, South is set up to handle those changes. We’ll come back to South later.
Activate The Models¶
Make sure the Poll and Choice classes are saved in polls/models.py.
Synchronize the Database¶
Now Django knows to include the polls app.
Let’s make sure that our database is up to date.
python manage.py syncdb
The syncdb command looks for apps that have not yet been set up, or that have changed in ways it can understand. To set them up, it runs the necessary SQL commands against your database.
More info: Read the django-admin.py documentation for full information on what the manage.py utility can do.
Explore the database API from a Django shell: run python manage.py shell. This is like the plain python shell, but manage.py also takes care of:
- Making sure polls is on the right path to be imported.
- Setting the DJANGO_SETTINGS_MODULE environment variable, which gives Django the path to your settings.py file.
Once you’re in the shell, explore the database API:
import the model classes we just wrote:
>>> from polls.models import Poll, Choice
list all the current Polls:
>>> Poll.objects.all() []
How many polls is this?
Zen koan: Can there be a Choice for a Poll that doesn’t yet exist?
Add a Poll.
>>> import datetime >>> p = Poll(question="What is the Weirdest Cookbook Ever?", pub_date=datetime.datetime.now())
Save the Poll instance into the database. You have to call save() explicitly.
>>> p.save()
Get the id of the Poll instance. Because it’s been saved, it has an ID in the database
>>> p.id 1
What other methods and attributes does this Poll instance have?
>>> dir(p) >>> help(p)
Access the database columns (Fields, in Django parlance) as Python attributes:
>>> p.question "What is the Weirdest Cookbook Ever?" >>> p.pub_date datetime.datetime(2007, 7, 15, 12, 00, 53)
Send the Poll back in time:
# Change values by changing the attributes, then calling save(). >>> p.pub_date = datetime.datetime(2007, 4, 1, 0, 0) >>> p.save() >>> p.pub_date datetime.datetime(2007, 4, 1, 0, 0)
Ask Django to show a list of all the Poll objects available:
>>> Poll.objects.all() [<Poll: Poll object>]
Fix The Hideous Default Representation¶
Wait a minute! <Poll: Poll object> is an utterly unhelpful, truly wretched, beyond contemptible representation of this object. Fix it by adding a __unicode__() method to both Poll and Choice; Django uses this method whenever it needs to display an object.
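A sketch of what those __unicode__() methods might look like (verified at the shell below), assuming the question and choice fields used throughout this tutorial:

```python
class Poll(models.Model):
    # ... fields as before ...
    def __unicode__(self):
        return self.question

class Choice(models.Model):
    # ... fields as before ...
    def __unicode__(self):
        return self.choice
```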
These are normal Python methods. Let’s also add a custom method to Poll:
import datetime
# ...
class Poll(models.Model):
    # ...
    def was_published_today(self):
        return self.pub_date.date() == datetime.date.today()
Note the addition of import datetime to reference Python’s standard datetime module. This makes the datetime module available inside models.py under the name datetime. To see what functions come with a module, you can test it in the interactive shell:
>>> dir(datetime) ['MAXYEAR', 'MINYEAR', '__doc__', '__file__', '__name__', '__package__', 'date', 'datetime', 'datetime_CAPI', 'time', 'timedelta', 'tzinfo']
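Stripped of the Django model, the heart of was_published_today() is a plain comparison between a datetime's date part and today's date:

```python
import datetime

today = datetime.date.today()

# A poll published at noon today, and one from April Fools' Day 2007:
fresh = datetime.datetime.combine(today, datetime.time(12, 0))
stale = datetime.datetime(2007, 4, 1, 0, 0)

print(fresh.date() == today)  # True: published today
print(stale.date() == today)  # False: published long ago
```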
Save these changes to the models.py file
Start a new Python interactive shell by running python manage.py shell:
>>> from polls.models import Poll, Choice
Verify our __unicode__() addition worked:
>>> Poll.objects.all() [<Poll: What is the Weirdest Cookbook Ever?>]
Search your database using the filter method on the objects attribute of Poll.
>>> polls = Poll.objects.filter(question="What is the Weirdest Cookbook Ever?") >>> polls [<Poll: What is the Weirdest Cookbook Ever?>] >>> polls[0].id # remember python lists start with element 0. 1
If you try to search for a poll that does not exist, filter will give you the empty list. The get method will always return one hit, or raise an exception.
>>> Poll.objects.filter(question="Does not exist")
[]
>>> Poll.objects.get(id=1)
<Poll: What is the Weirdest Cookbook Ever?>
>>> Poll.objects.get(id=2)
Traceback (most recent call last):
    ...
DoesNotExist: Poll matching query does not exist.
Add Choices¶
Observe, there is a Poll in the database, but it has no Choices.
>>> p = Poll.objects.get(id=1) >>> p.choice_set.all() []
Create three choices:
>>> p.choice_set.create(choice='To Serve Man', votes=0) <Choice: To Serve Man> >>> p.choice_set.create(choice='The Original Road Kill Cookbook', votes=0) <Choice: The Original Road Kill Cookbook> >>> c = p.choice_set.create(choice='Mini-Mart A La Carte', votes=0) >>> c <Choice: Mini-Mart A La Carte>
Go in reverse! Find the poll a particular choice belongs to:
>>> c.poll <Poll: What is the Weirdest Cookbook Ever?>
Because a Poll can have more than one Choice, Django creates the choice_set attribute on each Poll. You can use that to look at the list of available Choices, or to create them.
>>> p.choice_set.all() [<Choice: To Serve Man>, <Choice: The Original Road Kill Cookbook>, <Choice: Mini-Mart A La Carte>] >>> p.choice_set.count() 3
No really. Can there be a Choice for a Poll that doesn’t yet exist? Try it:

>>> koan = Choice(choice="Is this even a choice?")
>>> koan.poll_id
>>> koan.poll
Heavy Metal Polling!¶
Paste this block of code into a separate file, run python manage.py shell, import and run this block of TOTALLY METAL CODE:
import datetime
import random
from polls.models import Choice, Poll

opinions = ['HEINOUS!', 'suxxors', 'rulez!', 'AWESOME!', 'righTEOUS',
            'HAVE MY BABY!!!!', 'BEYOND METAL', 'SUCKS', 'RULES', 'TOTALLY RULES']

band_names = '''Abonos Meshuggah Xasthur Silencer Fintroll Beherit Basilisk
Cryptopsy Tvangeste Weakling Anabantha Behemoth Moonsorrow Morgoth Nattefrost
Aggaloch Enthroned Korpiklaani Nile Summoning Nocturnia Smothered Scatered
Summoning Wyrd Amesoeurs Solstafi Helrunar Vargnatt Agrypnie Wyrd Agrypnie
Blodsrit Burzum Chaostar Decadence Bathory Leviathan Hellraiser Mayhem
Katharsis Helheim Agalloch Therion Windir Ragnarok Arckanum Durdkh Emperor
Sulphur Tsjuder Ulver Marduk Luror Edguy Enslaved Epica Gorgoroth Gothminister
Immortal Isengard Kamelot Kataklysm Kreator Maras Megadeath Metallica Moonspell
Morgul Morok Morphia Necrophagist Opeth Origin Pantera Pestilence Putrefy Vader
Runenblut Possessed Sanatorium Profanum Satyricon Antichrist Sepultura
Eluveitie Altare Gallhammer Sirenia Slavland Krada Tribulation Venom
Obituary Dismember Vomitory Suffocation Taake Testament ToDieFor
Unleashed'''.strip().split()

def make_metal_poll(bandname, opinions):
    pub = datetime.datetime.now()
    marks = '?' * random.randint(1, 5)
    question = bandname + marks
    chosen = random.sample(opinions, 5)
    choices = list()
    for c in chosen:
        votes = random.randint(1, 1000)
        choices.append(Choice(choice=c, votes=votes))
    p = Poll(question=question, pub_date=pub)
    p.save()
    p.choice_set = choices
    return p

polls = [make_metal_poll(band, opinions) for band in band_names]
Discuss what this code does!
Change the models¶
Oh no! Your client, VOEL, has decided that they want to add a feature to the spec for the polling app. Namely, it’s not enough for them to know the poll question and creation date – they want it to be possible for polls to have closing dates, after which voting on the poll is closed. Which means we’re going to have to change our model.
Open polls/models.py and edit the Poll class:
class Poll(models.Model):
    question = models.CharField(max_length=200)
    pub_date = models.DateTimeField()
    end_date = models.DateTimeField(blank=True, null=True)
By setting blank=True and null=True, we’re telling Django that this field is optional, so it’s okay if it’s empty and a poll doesn’t have an end date.
Make a migration so the database knows about the new end_date field.
$ python manage.py schemamigration polls --auto
Apply the migration.
$ python manage.py migrate polls
Part 13: Write Views With Power¶
In Django, each view is responsible for doing one of two things: returning an HttpResponse object containing the content for the requested page, or raise-ing an exception such as Http404. What happens between Request and Response? [Magic!].
Your view can read records from a database, or not. It can use a template system such as Django’s – or not. It can generate a PDF file, output XML, create a ZIP file on the fly, anything you want, using whatever Python libraries you want.
All Django wants is that at the end, it gets an HttpResponse or an exception.
Most of the Django views in the world use Django's own database API, which was touched on in the discussion of models. (Sorry, I guess we can't forget about databases quite yet!)
Construct a better index() view. To match the spec, it should display the latest 5 poll questions in the system, separated by commas, according to publication date.
Edit views.py:
from polls.models import Poll
from django.http import HttpResponse

def index(request):
    latest_poll_list = Poll.objects.all().order_by('-pub_date')[:5]
    output = ', '.join([p.question for p in latest_poll_list])
    return HttpResponse(output)
Restart the dev server, and navigate to the polls page. You should see the text of the last 5 HEAVY METAL polls. There's a problem here, though: the page's design is hard-coded in the view. If you want to change the way the page looks, you'll have to edit this Python code.
Use Django’s template system to separate the design from Python:
from django.shortcuts import render_to_response
- What would you have to change to get 10 polls?
- What if you wanted the first 10 by name?
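The idea (markup lives in a template, the view only supplies data) can be sketched framework-free with the standard library. Here string.Template is just a stand-in for Django's template engine, and the names are illustrative:

```python
from string import Template

# Stand-in for polls/index.html: the markup lives here, not in the view code.
page = Template('<ul>$items</ul>')

def index(latest_poll_list):
    # The "view" only assembles data and hands it to the template.
    items = ''.join('<li>%s</li>' % q for q in latest_poll_list)
    return page.substitute(items=items)

print(index(['Metallica???', 'Venom?']))
# <ul><li>Metallica???</li><li>Venom?</li></ul>
```

Swapping the template text changes the page without touching `index()`, which is exactly the separation the tutorial is after.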
Reload. Now you’ll see an error:
TemplateDoesNotExist at /polls/
polls/index.html
Ah. There’s no template yet. Let’s make one.
Make a polls/templates/polls directory where templates will live, right alongside the views.py for the polls app. This is what I would do:
mkdir -p polls/templates/polls
Edit polls/templates/polls/index.html to contain:
{% if latest_poll_list %}
<ul>
{% for poll in latest_poll_list %}
    <li><a href="/polls/{{ poll.id }}/">{{ poll.question }}</a></li>
{% endfor %}
</ul>
{% else %}
<p>No polls are available.</p>
{% endif %}
Edit TEMPLATE_DIRS in settings.py to have the full path to the templates folder inside your new app. On my computer, this looks like:
TEMPLATE_DIRS = (
    # Put strings here, like "/home/html/django_templates" or "C:/www/django/templates".
    # Always use forward slashes, even on Windows.
    # Don't forget to use absolute paths, not relative paths.
    '/karen/Code/pystarl/django-projects/myproject/polls/templates',
)
Reload. You should see a bulleted list containing some of the HEAVY METAL POLLS, with each entry linking to the poll's detail page.
RE-RUN your tests. Save and Commit.
Fix The Detail View and Handle User Errors using a 404¶
Now, let’s tackle the poll detail view – the page that displays the question for a given poll.
Edit the views.py file. This view uses Python exceptions:
from django.http import Http404
# ...
def detail(request, poll_id):
    try:
        p = Poll.objects.get(id=poll_id)
    except Poll.DoesNotExist:
        raise Http404
    return render_to_response('polls/detail.html', {'poll': p})
Notice that the view raises the Http404 exception if a poll with the requested ID doesn't exist.
Create polls/templates/polls/detail.html with:
{{ poll }}
Verify your “detail” view works by trying it in the browser.
Re-run your tests. Note that when we hit a nonexistent poll, we get a pesky TemplateDoesNotExist: 404.html message. Let's fix that!
Create polls/templates/404.html (in the root of the polls templates dir) as:
<p>You have a 404. Go back and try again.</p>
Save and commit.
Load a poll page that does not exist, to test out the pretty 404 error:
- What? It says DEBUG has to be False? All right, set it (in settings.py), and try again!
- (note: Chrome ‘eats’ the 404. Safari will show our created page.)
- Change DEBUG back to True
- Re-run the tests, and show the TemplateDoesNotExist: 404.html goes away.
- Save and commit.
Discussion: raising a 404 here (Page Not Found) is meant to be illustrative. 404 is a blunt tool. In a real application, maybe we would redirect the user to the ‘create a poll’ page, or the search page.
Discuss in your group what behavior should happen in this case.
- Why did the user land here?
- What did they expect to find?
- What should happen next?
Add More Detail to the Details¶
Add more detail to the detail view.
Edit the polls/detail.html template to use the poll variable. poll points to the particular instance of the Poll class.
<h1>{{ poll.question }}</h1>
<ul>
{% for choice in poll.choice_set.all %}
    <li>{{ choice.choice }}</li>
{% endfor %}
</ul>
The django.template system uses dot-lookup syntax to access variable attributes. Django’s template language is a bit looser than standard python. In pure Python, the . (dot) only lets you get attributes from objects, and we would need to use [] to access parts of list, tuple or dict objects. In this example, we are just doing attribute lookup, but in general if you’re not sure how to get data out of an object in django.templates, try dot.
Method-calling happens in the {% for %} loop: poll.choice_set.all is interpreted as the Python code poll.choice_set.all(), which returns a sequence of Choice objects and is suitable for use in the {% for %} template tag.
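The real lookup logic lives in django.template's Variable class; the dictionary-then-attribute-then-call behavior can be shown with a toy helper (all names below are illustrative, not Django code):

```python
# Toy sketch of Django-template-style dot lookup: try dictionary lookup,
# then attribute lookup, then call the result if it is callable.
def resolve(obj, dotted):
    for part in dotted.split('.'):
        if isinstance(obj, dict):
            obj = obj.get(part, '')       # dictionary lookup first
        else:
            obj = getattr(obj, part, '')  # then attribute lookup
        if callable(obj):
            obj = obj()                   # parameterless method call, like choice_set.all
    return obj

class Choice:
    def __init__(self, text):
        self.choice = text

class ChoiceSet:
    def __init__(self, choices):
        self._choices = choices
    def all(self):
        return self._choices

class Poll:
    question = 'Metallica???'
    choice_set = ChoiceSet([Choice('RULES'), Choice('SUCKS')])

print(resolve(Poll(), 'question'))
# Metallica???
print([c.choice for c in resolve(Poll(), 'choice_set.all')])
# ['RULES', 'SUCKS']
```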
Reload. Observe that the poll choices now appear.
Save and commit your changes.
Detail yourself to go view out a window, get a drink of water, and let your eyes rest.
Yes, that means you!
Part 14: Deploy Again¶
- Commit and Push.
- Refresh your project on DjangoZoom.
- Go to your deployed site. Is there anything there? Why not?
Takeaway: Your local datastore (here, database.db) is not present on DjangoZoom, and the data here and there can (and will!) be different.
Part 15: Let the people vote¶
Create the form¶
Recall that the prototype spec allows users to vote up and vote down choices on polls. We are going to use a form for that functionality. As an alternative, we could have used AJAX Requests, a special url (‘/polls/11/choice/3/upvote’) or some other mechanism.
Update our poll detail template (polls/detail.html) to contain a voting form. A quick rundown:
The above template displays a radio button for each poll choice. The value of each radio button is the associated poll choice’s ID. The name of each radio button is “choice”. That means, when somebody selects one of the radio buttons and submits the form, the form submission will represent the Python dictionary {'choice': '3'}. That’s the basics of HTML forms; you can learn more about them at your local library!
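A sketch of the form being described, following the standard Django tutorial shape (the exact markup, ids, and loop counters here are assumed, not recovered from the original):

```
<form action="/polls/{{ poll.id }}/vote/" method="post">
{% for choice in poll.choice_set.all %}
    <input type="radio" name="choice" id="choice{{ forloop.counter }}" value="{{ choice.id }}" />
    <label for="choice{{ forloop.counter }}">{{ choice.choice }}</label><br />
{% endfor %}
<input type="submit" value="Vote" />
</form>
```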
We set the form's action to /polls/{{ poll.id }}/vote/ and its method to "post".
- Fix views.py to protect against CSRF hacking:
from django.template import RequestContext
from django.shortcuts import get_object_or_404, render_to_response
# ...
def detail(request, poll_id):
    p = get_object_or_404(Poll, pk=poll_id)
    return render_to_response('polls/detail.html', {'poll': p},
                              context_instance=RequestContext(request))
Notice we also switched to get_object_or_404(), which raises the 404 for us if the object doesn't exist. This is a common pattern, so there is a pre-built shortcut function for it and we can use fewer lines of code! The details of how the RequestContext works are explained in the documentation for RequestContext.
Review your work in the browser.
Save and commit.
Process the form¶
Recall that our urls.py includes:
(r'^(?P<poll_id>\d+)/vote/$', 'vote'),
Recall also that we created a dummy implementation of the vote() function.
Create a real version of vote(). Add the following to polls/views.py:
from django.shortcuts import get_object_or_404, render_to_response
from django.http import HttpResponseRedirect, HttpResponse
from django.core.urlresolvers import reverse
from django.template import RequestContext
from polls.models import Choice, Poll
# ...
POP QUIZ: Why is this? [answer]
- Redisplay the poll form with an error message if a choice isn't given.
- After incrementing the choice counter, redirect rather than re-rendering; this is just good Web development practice. That way, if the web surfer hits reload, they get the success page again, rather than re-doing the action.
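The two bullets above describe vote()'s control flow; here it is as a framework-free sketch that can actually run (the dict-based poll and the returned tuples are illustrative stand-ins, not Django APIs):

```python
def vote(post_data, poll):
    """Return ('form', error) to redisplay the form, or ('redirect', url) on success."""
    try:
        choice_id = int(post_data['choice'])
        selected = next(c for c in poll['choices'] if c['id'] == choice_id)
    except (KeyError, ValueError, StopIteration):
        # No (or bad) choice submitted: redisplay the poll form with an error.
        return ('form', "You didn't select a choice.")
    selected['votes'] += 1
    # Redirect after a successful POST so a reload shows results instead of re-voting.
    return ('redirect', '/polls/%d/results/' % poll['id'])

poll = {'id': 3, 'choices': [{'id': 1, 'votes': 0}, {'id': 2, 'votes': 5}]}
print(vote({'choice': '2'}, poll))  # ('redirect', '/polls/3/results/')
print(vote({}, poll))               # ('form', "You didn't select a choice.")
```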
We are using the reverse() function in the HttpResponseRedirect constructor to avoid hard-coding the URL. Given the view name and the patterns in urls.py, this reverse() call will return a string like
'/polls/3/results/'
where the 3 is the value of p.id. This redirected URL will then call the results view to display the final page. Note that you need to use the full name of the view here (including the prefix).
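reverse() needs Django's URL resolver to do this for real, but its job of filling a pattern's named groups with values can be imitated in a few lines (a toy sketch, not the real algorithm):

```python
import re

def toy_reverse(pattern, prefix='/polls/', **kwargs):
    # Strip the regex anchors, then substitute each named group with its value.
    url = pattern.lstrip('^').rstrip('$')
    for name, value in kwargs.items():
        url = re.sub(r'\(\?P<%s>[^)]*\)' % name, str(value), url)
    return prefix + url

print(toy_reverse(r'^(?P<poll_id>\d+)/results/$', poll_id=3))
# /polls/3/results/
```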
RUN YOUR TESTS. What is still failing? Not much! I hope!
Write the results view, which displays the results page for the poll. Augment views.py:
def results(request, poll_id):
    p = get_object_or_404(Poll, pk=poll_id)
    return render_to_response('polls/results.html', {'poll': p})
This is almost exactly the same as the detail() view we wrote earlier. The only difference is the template name. We can fix this redundancy later.
Create a /polls/templates/polls/results.html template:
<h1>{{ poll.question }}</h1>
<ul>
{% for choice in poll.choice_set.all %}
    <li>{{ choice.choice }} -- {{ choice.votes }} vote{{ choice.votes|pluralize }}</li>
{% endfor %}
</ul>
<a href="/polls/{{ poll.id }}/">Vote again?</a>
Restart your dev server.
Navigate to the poll in your browser and vote in it. You should see a results page that gets updated each time you vote.
Verify that if you submit the form without having chosen a choice, you should see a warning message. Why does this happen? [answer] Nah, just funnin’! [answer]
RE-RUN TESTS! They should all pass at this point.
Save and commit:
# in myproject
git status
git add <some files>  # whatever files need adding!
git commit -m "prototype complete. all tests pass."
git push origin master
Part 16: Editing your polls in the Django admin interface¶
The Django admin site is not activated by default – it’s an opt-in thing.
Activate the admin site for your installation:
Open up myproject/settings.py and uncomment “django.contrib.admin” and “django.contrib.admindocs” in your INSTALLED_APPS setting.
Edit myproject/urls.py file and uncomment the lines that reference the admin – there are four lines in total to uncomment.
from django.contrib import admin
admin.autodiscover()

# and

(r'^admin/doc/', include('django.contrib.admindocs.urls')),
(r'^admin/', include(admin.site.urls)),
Since you have added a new application to INSTALLED_APPS, the database tables need to be updated:
python manage.py syncdb
Restart the development server¶
Now, try logging in. (You created a superuser account earlier, when running syncdb for the first time. If you didn't create one or forgot the password, you can create another one.) We suggested super super as the name and password earlier :).
Create polls/admin.py, and edit it to look like this:
from polls.models import Poll
from django.contrib import admin

admin.site.register(Poll)
Restart the dev server.
Customize the admin change list¶
Use the list_display admin option to show individual columns, including the was_published_today method we wrote way back in the models part of this workshop:
class PollAdmin(admin.ModelAdmin):
    # ...
    list_display = ('question', 'pub_date', 'was_published_today')
Examine the polls list.
This is shaping up well.
Add some search capability. Add this to class PollAdmin:
class PollAdmin(admin.ModelAdmin):
    # ...
    search_fields = ['question']
Add drill-down by date using the date_hierarchy admin option.
Discuss as a group: the Polls app vs. the admin.
That’s the basics of the Django admin interface. Employ it liberally!
Relax, and bask in self-satisfaction.
Part 18: Takeaways and Next Steps¶
By now, you have seen:
- test-driven development
- acceptance testing
- user stories
- specs and requirements
- iterative development
- git (and version control generally)
- http on a local server
- http logging, status codes
- ports
- django url parsing
- regular expressions
- templates / views
- GET and POST; http forms
- Django admin sites.
- interacted with a sqlite db directly
- django models / orms (object-relational mappers)
- remote deployment
You have seen a workflow that is similar to those of top developers worldwide. Use this as a stepping stone to learn more.
What next?¶
Become a PyStar TA. You did it, now give back by teaching!
Give feedback so we can make the course and text better
Expand! Choose a topic area, and dive in: obvious choices might be:
- Python (we did barely any!)
- Django
- SQL / DB work
- Other Python web frameworks (Pyramid/Pylons, Twisted.web)
Fill a hole: we didn’t even get to much HTML, CSS, JavaScript, JQuery, or the like!
Review. Read the online Django tutorial or the Django Book.
Flask-Pure - a Flask extension for Pure.css
Flask-Pure is an extension to Flask that helps integrate Pure.css into your Flask application.
Quick Start
- Installation
pip install Flask-Pure
- Configuration
from flask import Flask, render_template
from flask_pure import Pure

app = Flask(__name__)
app.config['PURECSS_RESPONSIVE_GRIDS'] = True
app.config['PURECSS_USE_CDN'] = True
app.config['PURECSS_USE_MINIFIED'] = True
Pure(app)

@app.route('/')
def hello():
    return render_template('hello.html')

if __name__ == '__main__':
    app.run(debug=True)
- In templates/hello.html:
{% extends "pure/layout.html" %}
{% block title %}Hello world from flask-pure{% endblock %}
{% block nav %}
<div class="pure-menu pure-menu-horizontal">
  <!-- ... -->
</div>
{% endblock %}
{% block content %}
  <h1>Hello world</h1>
{% endblock %}
- Profit!
How It Works
Once registered, this extension provides a template variable called pure; it has a property named css that will be rendered as an HTML <link> tag pointing to the Pure.css stylesheets, either from the free CDN or served from a bundled blueprint, also called pure.
A {{ pure.css }} inside the <head> tag is all you need.
A bare-bones HTML5 template is also available as pure/layout.html.
Please check out the example in code repository and documentation for details.
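Under the hood this follows the standard Flask-extension init pattern: stash defaults into app.config and expose an object for templates to resolve against. A framework-free sketch of that shape (FakeApp and the attribute names are stand-ins, not the actual Flask-Pure source):

```python
class FakeApp:
    """Minimal stand-in for a Flask app: just config and template globals."""
    def __init__(self):
        self.config = {}
        self.globals = {}

class Pure:
    def __init__(self, app=None):
        if app is not None:
            self.init_app(app)

    def init_app(self, app):
        app.config.setdefault('PURECSS_USE_CDN', True)
        app.globals['pure'] = self  # what {{ pure.css }} resolves against

    @property
    def css(self):
        # The real extension chooses CDN vs. bundled blueprint here.
        return '<link rel="stylesheet" href="pure-min.css">'

app = FakeApp()
Pure(app)
print(app.globals['pure'].css)
# <link rel="stylesheet" href="pure-min.css">
```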
License
BSD New, see LICENSE for details.
Raghu Angadi commented on HADOOP-1704:
--------------------------------------
> Raghu's block crc upgrade code throttles the deletion of .crc files.

blockCrc upgrade does not throttle deletes explicitly. It deletes in a single thread and each deletion results in an editsLog entry; these two naturally throttle the rate to around 700-800 deletes a second. Deleting a directory deletes the whole tree with a single editsLog entry. This is the difference.

The similarity is that none of these 5M blocks were removed from the namespace until after the upgrade is complete. In that sense the memory overhead is the same as deleting 5M files in one shot.
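The quoted rate follows from simple arithmetic: if each delete synchronously writes one editsLog entry, throughput is bounded by the per-entry latency. For example, roughly 1.3 ms per entry gives a figure in the stated range (the latency value below is assumed for illustration, not measured):

```python
entry_latency_s = 0.0013            # assumed cost of one editsLog entry
print(round(1 / entry_latency_s))   # 769 deletes/sec

# A directory delete logs ONE entry for the whole tree, so for 5M blocks:
per_file_entries = 5_000_000        # one editsLog entry per file
per_dir_entries = 1                 # one editsLog entry for the directory
print(per_file_entries // per_dir_entries)  # 5000000
```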
> Throttling for HDFS Trash purging
> ---------------------------------
>
> Key: HADOOP-1704
> URL:
> Project: Hadoop
> Issue Type: Bug
> Components: dfs
> Reporter: dhruba borthakur
>
> When HDFS Trash is enabled, deletion of a file/directory results in it being moved
> to the "Trash" directory. The "Trash" directory is periodically purged by the Namenode.
> This means that all files/directories that users deleted in the last Trash period get
> "really" deleted when the Trash purging occurs. This might cause a burst of
> file/directory deletions.
> The Namenode tracks blocks that belonged to deleted files in a data structure named
> "RecentInvalidateSets". There is a possibility that Trash purging may cause this data
> structure to bloat, causing undesirable behaviour of the Namenode.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
Reading Excel Sheets from Java using JDBC-ODBC Bridge:
There are a couple of blogs on SDN about this, but I didn't find this way of reading and writing to/from a spreadsheet (e.g. an MS Excel sheet) using Java. This is one of the easiest ways of achieving read, write, and update on Excel sheets through Java, so I thought of sharing it.
The following Java code reads data from an Excel sheet.
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;
import java.util.StringTokenizer;

public class ConnectExcel {
    public static void main(String[] s) {
        Connection connection = null;
        Statement statement = null;
        String fileName = "D:/TestSheet.xls";
        try {
            Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
            connection = DriverManager.getConnection(
                "jdbc:odbc:Driver={Microsoft Excel Driver (*.xls)};DBQ=" + fileName);
            // connection = DriverManager.getConnection("jdbc:odbc:ConnectExcelDSN");
            statement = connection.createStatement();
            String query = "Select [Name] from [Sheet1$]";
            // String query1 = "Select [Name] from [Sheet1$] where [Name] like %M%";
            ResultSet rs = statement.executeQuery(query);
            while (rs.next()) {
                System.out.println(rs.getString(1) + "\n");
            }
            rs.close();
            statement.close();
        } catch (Exception e) {
            System.out.println("In Catch: " + e);
        }
    }
}
The following is a screenshot of TestSheet.xls with the values in it.
After executing the code above, you get a list of all the values in the Name column. You can execute normal SQL query statements to fetch values from the Excel sheet. This is the coolest way to achieve customized results from an Excel sheet.
What exactly is required:
We need to register the Excel sheet as a database and connect to it as we generally do for other databases like Oracle or MS SQL Server. There are two ways to do this: either create a Data Source (in Windows) or directly specify the driver name as follows:
1#
DriverManager.getConnection("jdbc:odbc:Driver={Microsoft Excel Driver (*.xls)};DBQ=" + fileName);
fileName -> the physical/network path to the Excel file.
2#
Or if we create a DSN, use the following code.
connection = DriverManager.getConnection("jdbc:odbc:ConnectExcelDSN");
ConnectExcelDSN -> the System DSN created in the ODBC Data Source Administrator with Microsoft Excel Driver (*.xls) as the driver.
Creating a DSN in the ODBC Data Source Administrator:
Excel Sheet as Database:
The Excel file can be considered a database, and all the sheets (Sheet1, Sheet2, Sheet3, etc.) are tables in that database. By default, the first row of any sheet holds the column names. In our example, Name, Emp Code & Salary are the column names of table Sheet1.
There are a few rules for building the query string; beyond those, we can use any type of query on spreadsheets, subject to the user's permissions on the file.
Table Name (Work Sheet Name): When writing a query string, the worksheet name should be suffixed with a $ sign and enclosed in [ ] brackets.
Select * from [Sheet1$];
Column Name: When writing the columns it is advisable to enclose them in [ ] braces.
Select [Name], [Salary] from [Sheet1$];
Using Cell Ranges: To use a cell range in a query string, specify it with a colon : between the starting and ending cell positions.
Select * from [Sheet2$A1:D5] WHERE [Dept_No] > 1001;
In order to be used effectively, you will have to know the exact data range for the cells in the worksheet.
1. What do you mean by 'I don't need to see the address of the URL as the text in question is all I am concerned with'? If you don't want to display the text of the URL, what do you want to display?
2. "The fields in question are showing up as NULLs when I extract them from the ResultSet." What do you mean by 'fields in question are showing up as NULLs'?
Well, I've tried adding some URLs to my Excel sheet and printing them; it's working fine, there are no issues in displaying the URLs.
Munna_SAP
I’m beginning to think it is physically impossible for me to do a demo that doesn’t involve cats. Anyway, let’s talk about the Ionic 2 version. First, the view:
<ion-header>
  <ion-navbar>
    <ion-title>
      Image Search
    </ion-title>
  </ion-navbar>
</ion-header>

<ion-content padding>
  <ion-item>
    <ion-input [(ngModel)]="search"></ion-input>
  </ion-item>
  <button ion-button full (click)="doSearch()">Search</button>
  <ion-slides [options]="mySlideOptions">
    <ion-slide *ngFor="let slide of slides">
      <img [src]="slide.MediaUrl">
    </ion-slide>
  </ion-slides>
</ion-content>
In form, it's similar to the V1 version of the app, but while V1 still had a mix of HTML and Ionic components, this is nearly 100% Ionic tag-based. The only non-Ionic component there is the button tag, and even it uses an attribute to flag it as being Ionic-controlled anyway. If you're still new to Angular 2 (like me!), you should pay special attention to the new syntax used for event handling: (click)="doSearch()", and two-way binding: [(ngModel)]="search". Another tweak is to iteration: while V1 had ng-repeat, I'm using *ngFor in V2. All in all, the view here is simpler than my previous version. (But to be clear, the previous version did everything in one HTML file as it was so simple. I could have separated out the view into its own file.) Now let's take a look at the code. First, the code for the view:
import { Component } from '@angular/core';
import { NavController } from 'ionic-angular';
import { ImageSearch } from '../../providers/image-search';

@Component({
  selector: 'page-home',
  templateUrl: 'home.html',
  providers: [ImageSearch]
})
export class HomePage {
  search: string;
  slides: any[];
  mySlideOptions = {
    pager: true
  };

  constructor(public navCtrl: NavController, public searchProvider: ImageSearch) {
  }

  doSearch() {
    console.log('searching for ' + this.search);
    this.searchProvider.search(this.search).subscribe(data => {
      console.log(data);
      this.slides = data;
    });
  }
}
There isn't anything too scary here. I've got one method, doSearch, that simply fires off a call to my provider for Bing searches. The only real interesting part is mySlideOptions. If you want to tweak how the slider works, you can't just pass arguments via the component tag. That kinda sucks in my opinion, but I'm guessing there is a good reason for that. I had tried adding pager="true" to my ion-slides tag, but that didn't work. I had to create a variable and then bind to it from the view. Again, that bugs me. I can get over it though.
The provider is now all wrapped up in fancy Observables and crap which frankly still confuse the hell out of me. But I got it working. The hardest thing was figuring out how to do headers.
import { Injectable } from '@angular/core';
import { Http, Headers } from '@angular/http';
import 'rxjs/add/operator/map';
import { Observable } from 'rxjs/Observable';

/*
  Generated class for the ImageSearch provider.
  See for more info on providers and Angular 2 DI.
*/
@Injectable()
export class ImageSearch {
  appid = "fgQ7ve/sV/eB3NN/+fDK9ohhRWj1z1us4eIbidcsTBM";
  rooturl = "'";

  constructor(public http: Http) {
    this.http = http;
  }

  search(term: string) {
    let url = this.rooturl + encodeURIComponent(term) + "'&$top=10";
    let headers = new Headers();
    headers.append('Authorization', 'Basic ' + btoa(this.appid + ':' + this.appid));
    return this.http.get(url, { headers: headers })
      .map(res => res.json())
      .map(data => data.d.results);
  }
}
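One note on the Authorization header: btoa produces base64, so the header is just base64('user:password'). The same construction in Python, with a dummy key in place of a real one:

```python
import base64

appid = 'key'  # dummy value, not a real API key
token = base64.b64encode(('%s:%s' % (appid, appid)).encode()).decode()
header = 'Basic ' + token
print(header)  # Basic a2V5OmtleQ==
```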
And yep, that’s my Bing key. I’ll probably regret sharing it. So how does it look? Here’s an iOS example:
Right away you'll notice the pager isn't actually there. I'm not sure why that is failing to show up, because it does show up when running ionic serve. I could definitely try to size that image a bit nicer in the view, but for a quick demo, I got over it.
A few minutes later…
So I was wrapping up this blog post when I started chatting about it over on the Ionic Slack channel. Mike, and others, helped me discover a few things.
First, you know how I complained about having to create a variable in my JS code just to handle a simple option? Turns out you can do it all in the view - thanks Mike:
<ion-slides [options]="{pager:true}">
I added that and then removed the code from home.ts, making it even simpler. But I still had the bug with the pager not showing up. It clearly worked in ionic serve, see?
Boom. Right away I see the same bug… and I notice the scrollbar. I scroll down and see…
Yep, my slides portion is just too big. On my iOS simulator I scrolled down and confirmed. Sigh. So (and again, with help from smart folks on the Slack channel!), I ended up styling the ion-slides components:
<ion-slides [options]="{pager:true}" style="max-height:400px">
I use a set max of 400px, which isn’t terribly cross platform compatible, but it helped:
Perfect! Except this is what you see on initial load:
Ugh. So I tried going back to having options defined in JavaScript and simply changing it when data was loaded, but that didn’t work. I then tried getting a pointer to the slider object and updating it that way. It also didn’t work.
Ugh again. So I went back to a simple inline option declaration but also hid the entire slider:
<ion-slides [options]="{pager:true}" style="max-height:400px" *ngIf="haveData">
I then modified my code to default haveData to false:
import { Component, ViewChild } from '@angular/core';
import { NavController, Slides } from 'ionic-angular';
import { ImageSearch } from '../../providers/image-search';

@Component({
  selector: 'page-home',
  templateUrl: 'home.html',
  providers: [ImageSearch]
})
export class HomePage {
  search: string;
  slides: any[];
  haveData: boolean = false;

  constructor(public navCtrl: NavController, public searchProvider: ImageSearch) {
  }

  doSearch() {
    console.log('searching for ' + this.search);
    this.searchProvider.search(this.search).subscribe(data => {
      console.log(data);
      if (data.length >= 1) {
        this.haveData = true;
        this.slides = data;
      }
    });
  }
}
And now it works perfectly! To be fair, this still needs a nice “loading” widget UI when searching, but as I mainly wanted to focus on the slides, I figure I should try to keep this simple. The full source code for this demo may be found here:
Let me know if you have any questions or comments below. | https://www.raymondcamden.com/2016/10/24/ionic-example-slides | CC-MAIN-2017-09 | refinedweb | 1,043 | 66.84 |
opped. To prevent a memory leak, the JDBC Driver has been
struts
struts hi
i would like to have a ready example of struts using "action class,DAO,and services" for understanding.so please guide for the same.
thanks Please visit the following link:
Struts Tutorials
Struts Alternative
of struts. stxx sits on top of Struts, extending its existing functionality to allow..., making the web site more maintainable over its lifespan and providing a viable....
But that was then ... and this is now. In the years that Struts has been around (five and counting Book - Popular Struts Books
Struts and its supporting technologies, including JSPs, servlets, Web applications....
The book begins with a discussion of Struts and its Model-View-Controller..., and everyone in between who wishes to use the Struts Framework to its fullest
getOutputStream() has already been called for this response
getOutputStream() has already been called for this response hi to all,
i am trying to export some data from my project to excel file....while clicking on the icon i am getting the following error.....(i am using jboss6.1 have one textbox for date field.when i selected date from datecalendar then the corresponding date will appear in textbox.i want code for this in struts.plz help have crude application
I have crude application I have crude application, how to load into this roseindia I have no.of checkboxes in jsp.those checkboxes values came from the databases.we don't know howmany checkbox values are came from... the checkbox.i want code
i want to retriev and update in same form but its not working pls help....
i want to retriev and update in same form but its not working pls help.... ...;IS APPLICANT EVER BEEN CHARGED WITH CRIMINAL PROCEEDINGS??</td><td><...;td>IS APPLICANT EVER BEEN REFUSED OR DENIED PASSPORT??</td><td><
JBoss Tools 3 Alpha has been released
, Portal etc.
All plugins in the new release have been now
shifted to Eclipse... in the Eclipse API.
The new features that have
been added to this Release are as follows...The first Alpha release of
JBoss Tools 3 has been made recently with intials
web site creation
web site creation Hi All ,
i want to make a web site , but i am using Oracle data base for my application .
any web hosting site giving space for that with minimum cost .
please let me know which site offering all
Struts Tutorials
great tutorials posted on this site and others, which have been very helpful... module based configuration. That means we can have multiple Struts configuration... application development using Struts. I will address issues with designing Action
I have a small problem in my datagridview - Design concepts & design patterns
I have a small problem in my datagridview i have datagridviewer in c#(platform) and i try that change cell beckground, this cell Should... base_database_) its back color will be for example blue..
-------change
IMP - Struts
IMP Hi...
I want to have the objective type questions(multiple choices) with answers for struts.
kindly send me the details
its urgent.../jakartastrutsinterviewquestions.shtml Links - Links to Many Struts Resources
feel that I exaggerate the negatives of Struts in the next section. I like... covers Struts 1.2. The course is usually taught on-site at customer locations... have to purchase. All the tutorials that I looked at were also available in pdf
Struts - Struts
Java Bean tags in struts 2 i need the reference of bean tags in struts 2. Thanks! Hello,Here is example of bean tags in struts 2: Struts 2 UI
Understanding Struts - Struts
on is Mifos, it is an open-source application.
I have been reading some...Understanding Struts Hello,
Please I need your help on how I can understand Strut completely. I am working on a complex application which
struts
struts <p>hi here is my code in struts i want to validate my form fields but it couldn't work can you fix what mistakes i have done</p>...
}//execute
}//class
struts-config.xml
<struts
inheritance....please help me friends...!!!... this is important project that i have to do it..
inheritance....please help me friends...!!!... this is important project that i have to do it.. Point
.................
#x : int
#y : int... radius, X and Y.
In the Point class, its default constructor will assign X and have already define path in web.xml
i m sending --
ActionServlet...
/WEB-INF/struts-config.xml am retrieving data from the mysql database so the form bean will be null for that action.... if i give it in struts config it is asking me to have a form bean.... how to solve this problem
HTML FAQ site
HTML FAQ site For a school project need simple website. It is an FAQ site which uses natural language processing to get the answers. Natural... of answers or the actual answer should be generated. As close as possible.
I need
Struts Articles
As a big promoter of JSF, I have been doomed to deal with many people, who have previous Struts experience. I notice in many cases that some sort of ?paradigm... to be an introduction to either Struts or JSR 168. It assumes you have some
Struts - Struts
Struts Hello
I like to make a registration form in struts inwhich... source code to solve the problem.
Mention the technology you have used.
Struts1/Struts2
For more information on struts visit to :
What is Struts - Struts Architecturec
.
Struts is famous for its robust Architecture and it is being used for
developing small and big software projects.
Struts is an open source framework used...
What is Struts - Struts Architecture
need
REGARDING TREE STRUCTURE IN STRUTS - Struts
REGARDING TREE STRUCTURE IN STRUTS Hello friends,
I need ur help its urgent requirement.
I need a dynamic tree structure format i.e I have created a database entries whenever i enter a data in database, the first fieldname
Index | About-us | Contact Us
|
Advertisement |
Ask
Questions | Site
Map | Business Software
Services India
Tutorial Section ... Tutorials |
WAP Tutorial
|
Struts
Tutorial |
Spring
Struts Validator Framework
-side validation. Starting in Struts 1.2.0 the default javascript definitions have... Struts Validator Framework
This lesson introduces you the Struts Validator
Struts - Struts
Struts Hello
I have 2 java pages and 2 jsp pages in struts... for getting registration successfully
Now I want that Success.jsp should display... with source code to solve the problem.
For read more information on Struts visit
File I/O
the text file then read it and write it into as a comma delimitade file. i have... with "tAC" and ends with "->0|0|6" as you shall see in the text file i have posted below:
here is the code to what i have done thank you looking forward
File I/O
File I/O i have a problem i am trying to print on a new line every time i reach a certain condition "if(line.startsWith("Creating"))" i want...();
}
br.close();
}
pw.close();
System.out.println("your file have been
struts - Struts
struts hi,
i have formbean class,action class,java classes and i configured all in struts-config.xml then i dont know how to deploy and test... and do run on server. whole project i have run?or any particular
Accessing your site
Accessing your site I cant acess your site.I am getting good grip on java through roseindia.kindly help me so that I can acess roseindia
i have an ques. of advanced java
i have an ques. of advanced java write a wap to implement AWT by login form
Advertisements
If you enjoyed this post then why not add us on Google+? Add us to your Circles | http://www.roseindia.net/tutorialhelp/comment/2639 | CC-MAIN-2015-40 | refinedweb | 2,437 | 66.33 |
On Wed, Mar 27, 2019 at 01:58:45PM +0900, William Breathitt Gray wrote:> This macro iterates for each 8-bit group of bits (clump) with set bits,> within a bitmap memory region. For each iteration, "start" is set to the> bit offset of the found clump, while the respective clump value is> stored to the location pointed by "clump". Additionally, the> bitmap_get_value8 and bitmap_set_value8 functions are introduced to> respectively get and set an 8-bit value in a bitmap memory region.> +unsigned long bitmap_get_value8(const unsigned long *addr, unsigned long start)> +{> + const size_t idx = BIT_WORD(start);> + const unsigned long offset = start % BITS_PER_LONG;> +> + return (addr[idx] >> offset) & 0xFF;I would spell index instead of idx, but it's minor and up to you.> +}-- With Best Regards,Andy Shevchenko | https://lkml.org/lkml/2019/3/27/448 | CC-MAIN-2021-10 | refinedweb | 128 | 55.98 |
Marble (KDE) For Dummies [It's Ironic ]
// TODO: Put Here a Comes Creative Common and more of Licensing Bla Bla which i never get Open Source or Closed Sourced .
//FIXME :Complete This DISCLAIMER : I Don’t any responsibility for being a Dummy myself and writing a for dummy text , i suppose this is called freedom of expression or something like that :p ..Read Any further on your own risk ..
//There are some better places when you want info on Marble like or
#include <Intelligent.h> //I really wish life ( read Being Intelligent
) was this easy
In this series i would be posting what i have till now understood about marble / it’s code …coding first requires compiling the code ,, so in that spirit i’ll tell you a secret
i compiled Marble on Fedora 7/8/9 Open Suse 10.3/11Beta 1, Ubuntu 6 LTS (just don’t ask how ,,i had to compile qt 4.4 for it ) /8.0.4)
( a ) For getting marble’s code use subversion GUI software ( kdesvn , qsvn ,esvn , Tortoise svn (windows))
or on CLI (Command Line Interface) type svn co svn://anonsvn.kde.org/home/kde/trunk/KDE/kdeedu/marble
( b ) You need to at least have Qt to compile Marble ..[qt would also requires gcc(Linux/Unix) or mingw(Windows) ]
( c ) Marble also requires cmake [windows] { Linux [OpenSuse] [Generic] [SourceCode] }
( d )Though an IDE is not necessary and as my mentor says kate is enough
, newbies like me love IDE … so here are some of the IDE’s you could use with qt (for Marble) .. in case IDE word is too alien to you ,, in Human speak it means a Integrated Developemnt Engine like Adobe Dreamweaver ,,, making all the coding tools in easy reach.
- Monkey Studio (Cross PLatform)[Not For Faint Heart , you will have to compile the IDE too ,, lol )
- QDevelop (Cross PLatform)
- Eduyak (Cross PLatform)
- Eclipse (Cross PLatform)
- Kdevelop (I Prefer KDevelop)
- Kate
- Vim
- Mircrosoft Visual Studio (Don’t ask , Don’t Tell
)
after completing all these steps , now you are ready for compilations
Marble has two modes Qt Only and KDE Dependent…you need to compile Marble in one of these methods ..qt reuires only Qt
Qt Mode requires you to write
- cmake -DQTONLY=TRUE
- make
- make install
and voila you have compiled Marble..
paypal
Recent Comments | http://techfreaks4u.com/blog/posts/marble-kde-for-dummies-its-ironic/ | CC-MAIN-2014-41 | refinedweb | 389 | 62.51 |
Thu, 17 Sep 2009
Simple, elegant HTML generation, released
I've just released my "simple, elegant HTML generation" module, "html.py"
See, I do release stuff :)
And astute readers might notice I've "released often" as well :)
I've just released my "simple, elegant HTML generation" module, "html.py"
See, I do release stuff :)
And astute readers might notice I've "released often" as well :)
OK, I looked. I searched. I didn't find. So here you go...
from cgi import escape class HTML(object): '''Easily generate HTML. >>> h = HTML() >>> p = h.p('hello, world!') >>> p.text('more text') >>> with h.table(border='1', newlines=True): ... for i in range(2): ... with h.tr: ... h.td('he<l>lo', a='"foo"') ... h.td('there') ... >>> print h <p>hello, world!more text</p> <table border="1"> <tr><td a=""foo"">he<l>lo</td><td>there</td></tr> <tr><td a=""foo"">he<l>lo</td><td>there</td></tr> </table> ''' def __init__(self, name=None, stack=None): self.name = name self.content = [] self.attrs = {} # insert newlines between content? self.newlines = False if stack is None: stack = [self] self.stack = stack def __getattr__(self, name): # adding a new tag or newline if name == 'newline': e = '\n' else: e = HTML(name, self.stack) self.stack[-1].content.append(e) return e def text(self, text): # adding text self.content.append(escape(text)) def __call__(self, *content, **kw): # customising a tag with content or attributes if content: self.content = map(escape, content) if 'newlines' in kw: # special-case to allow control over newlines self.newlines = kw.pop('newlines') for k in kw: self.attrs[k] = escape(kw[k]).replace('"', '"') return self def __enter__(self): # we're now adding tags to me! self.stack.append(self) return self def __exit__(self, exc_type, exc_value, exc_tb): # we're done adding tags to me! self.stack.pop() def __str__(self): # turn me and my content into text join = '\n' if self.newlines else '' if self.name is None: return join.join(map(str, self.content)) a = ['%s="%s"'%i for i in self.attrs.items()] l = [self.name] + a s = '<%s>%s'%(' '.join(l), join) if self.content: s += join.join(map(str, self.content)) s += join + '</%s>'%self.name return s
There was an escaping error in this blog post (the < and > around the "l" first in "hello") which I didn't catch before Planet Python grabbed my feed, so it's wrong on there but correct here. Hrm.
Also, look, ma! A ternary expression! My first in Python!
Did you like "join.join"? Heh. I know... | http://www.mechanicalcat.net/richard/log?year=2009&month=9 | CC-MAIN-2014-35 | refinedweb | 431 | 62.34 |
DHTMLX Gantt is a Gantt chart JS library that allows building feature-rich applications for project management and task tracking. One of the dhtmlxGantt great features is the ability to export files to Microsoft Project, a software product designed to help project managers in their day-to-day responsibilities.
DHTMLX library provides 2 ways to export your Gantt chart project. Firstly, you can get your own export local module. It’s a Gantt add-on that is built with ASP.NET and runs on Windows and IIS. This method is the safest one since all data will be stored on your own server where the export module is deployed.
Secondly, you can use an online export service (the sample is available on our website). In this article, we’ll show you how to use DHTMLX online service to export files from your Gantt chart into MS Project and vice versa. Let’s get started!
Export to MS Project
To successfully export data into the XML file, you have to follow the steps mentioned below.
First of all, to enable the online export service, you have to include the file on the page:
<script src="codebase/dhtmlxgantt.js"></script> <script src=""></script> <link rel="stylesheet" href="codebase/dhtmlxgantt.css" type="text/css">
And then call the
exportToMSProject method to export data from your Gantt chart. This method is responsible for sending a request to the remote service, which in turn generates an XML file. Otherwise, the service will return an URL to download generated data.
The
exportToMSProject method allows you to specify the file name, set auto-scheduling parameters for tasks, set custom properties to the exported project entity or tasks items. You can also specify the list of resources to export into an MS Project file:
gantt.exportToMSProject({ name:'custom.xml' auto_scheduling: false project: { 'Author': 'I am!', 'MinutesPerDay': function () { return gantt.config.hours_per_day * 60; } } tasks: { 'StartVariance': function (task) { if (task.startVariance) return task.startVariance; else return 0; }, 'PercentWorkComplete': function (task) { return (task.progress + 0.1); }, 'Custom': function (task) { return 'Custom value'; }, 'Custom 2': 'My Custom value' } resources: [ {"id":"1","name":"John","type":"work"}, {"id":"2","name":"Mike","type":"work"}, {"id":"3","name":"Anna","type":"work"} ] });
Finally, you have to specify the server property. You can use it with the local install of the export service:
gantt.exportToMSProject({ server:"" });
Import from MS Project
If you want to convert an XML or MPP MS Project file, you have to send the following request to the export service:
<form action="" method="POST" enctype="multipart/form-data"> <input type="file" name="file" /> <input type="hidden" name="type" value="msproject-parse"> <button type="submit">Get</button> </form>
Alternatively, you can use the client-side API, where the file property should contain either an XML or MPP Project file:
gantt.importFromMSProject({ data: file, taskProperties: ["Notes", "Name"], callback: function (project) { if (project) { gantt.clearAll(); if (project.config.duration_unit) { gantt.config.duration_unit = project.config.duration_unit; } gantt.parse(project.data); } } });
You can set the duration unit (“minute”, “hour”, “day”, “week”, “month”, or “year”) to the server, get project and tasks properties to be imported.
Limits on MS Project Importing/Exporting
There are two API endpoints for the MS Project export and import services. The first one is the default endpoint which serves all export methods. The maximum request size is 10 MB. The second one is dedicated to MS Project services and comes with a maximum request size of 40 MB.
The API endpoint can be specified by the
server property of the export configuration object:
gantt.importFromMSProject({ server:"", data: file, callback: function(project){ // some logic } });
Now you know how to import and export a JS Gantt chart from MS Project and can put your knowledge into practice. Besides, all the attendant configurations you will find in the Export and Import from MS Project section in our docs.
If you're a newbie in DHTMLX Gantt chart library, we suggest you to try a free 30-day trial version with official technical support.
Discussion (0) | https://practicaldev-herokuapp-com.global.ssl.fastly.net/plazarev/export-and-import-from-dhtmlx-gantt-chart-to-ms-project-4p66 | CC-MAIN-2021-17 | refinedweb | 666 | 55.24 |
24 September 2009 18:50 [Source: ICIS news]
TORONTO (ICIS news)--US chemical railcar traffic rose year-on-year last week for only the second time this year but a railroad association cautioned that last year's numbers were hard hit by disruptions from a hurricane.
US chemical railcar traffic rose 17.4% in the week ended on 19 September from the same week in 2008, according to data released on Thursday by an industry association.
However, shipments in the same week of 2008 were hit by the disruptions caused by Hurricane Ike along the US Gulf coast, the Association of American Railroads (AAR) said.
US chemical railcar loadings for the week were 27,608, up by 4,082 railcars from 23,526 in the same week in 2008, the AAR said.
With railroads moving more than 20% of the chemicals produced in the ?xml:namespace>
In the previous week ended on 12 September,
For the year-to-date period through 19 September, US chemical railcar loadings were down 14.5% to 968,250, from 1,132,772 in the same period last year.
The AAR also provided comparable chemical railcar shipment data for
Canadian chemical rail traffic for the week ended on 19 September fell 11.1% to 11,898, from 13,391 in the same week last year.
For the year-to-date period, Canadian shipments were 429,243, down 22.5% from 553,907 in the same period in 2008.
Mexican weekly chemical rail traffic rose 31.7% to 1,106 shipments, from 840 in the same week a year earlier.
For the year-to-date period, Mexican shipments were 41,045, up 21.8% from 33,712 in the same period last year.
For all of
For the year to date, North American chemical railcar traffic was down 16.4% to 1,438,538 from 1,720,391 in the year-earlier period. | http://www.icis.com/Articles/2009/09/24/9250182/us-weekly-chemical-railcar-traffic-rises-year-on-year.html | CC-MAIN-2014-42 | refinedweb | 317 | 64.1 |
09-14-2012 09:54 AM
As any "good" developer does... I want to try and avoid hardcoding any screen size information in my AIR based apps to ensure that they can work on the PlayBook (1024x600), the BB10 FullTouch (1280x720), the BB10 w/Keyboard (720x720)... and anything else that may ever come up in the future... (even though RIM has indicated they are constraining themsleves for BB10 to the above dimensions as noted here:
So in order for this to work... I need to ensure:
a.) any default configuration doesn't lock in a size...
b.) I need to be able to programatically read in the device dimensions (and know if I need to "rotate" my view)
For item (a) I've been a bit confused by the opening line I've seen in documentation for your *.as class files that starts like this...
package{ import abc; ... [SWF(width="1024", height="600", backgroundColor="#000000", frameRate="30")] public class MyAppName extends Sprite{ ... } }
Although I'll be honest I haven't even tried... can I drop that metadata line? or can I change the values to variables like ${DeviceWidth}, $DeviceHeight}...? or is there a better option?
For item (b) I need to be able to read in the actual device dimensions.... (width, height) and potentially "know" how to force the rotation.
e.g. for a Game I'm making... it is intended to run in the "Landscape" orientation on the PlayBook (and on a full touch BB10 phone)... and although it really wouldn't matter... I'm guessing I'd prefer the "portrait" orientation for a BB10 keyboard phone (even though the screen resolution would be the same at any orientation.
Am I right to be looking at these values for width and height?
var width:int = qnx.display.DisplayMode.width; var height:int = qnx.display.DisplayMode.height;
and for orientation... this one?
var isPreferred:Boolean = qnx.display.DisplayMode.isPreferred;
Cheers,
Steve
09-14-2012 10:24 AM
If you are looking to force the application into Portriat or Landscape mode, you are best off doing that from the xml file instead of in code.
Also I believe that it is safe to drop the metadata line (but have to admit that I haven't tried it).
09-14-2012 10:27 AM
09-14-2012 07:44 PM
You can drop the meta line and include this in your main constructor:
this.stage.frameRate = 30; this.stage.align = StageAlign.TOP_LEFT; this.stage.scaleMode = StageScaleMode.NO_SCALE;
The stage size is picked up from the device boundary.
In the update display function, you can compare the width and height to determine orientation. Portrait ( w < h ) is "normal" orientation of the device.
09-14-2012 07:47 PM
09-17-2012 11:45 AM
Sarah Northway wrote a good article on this topic here --
Cheers,
Dustin | http://supportforums.blackberry.com/t5/Adobe-AIR-Development/Avoiding-hardcoding-screen-sizes-in-AIR-apps/m-p/1909743 | CC-MAIN-2013-20 | refinedweb | 473 | 75.5 |
See also: IRC log
<pgroth> Scribe: Curt Tilmes
<pgroth>
<pgroth>
<tlebo> =1
<Curt> 0 (not present)
<tlebo> +1
<hook> +1
<ivan> (was not present)
+1
<tlebo> (that actually works, since I was the first vote...)
<pgroth> approved minutes of the 01 November 2012 telco
<Luc> approved: minutes of the 01 November 2012 telco
<Curt> pgroth: we are in great shape!
<Curt> pgroth: will discuss documents on rec. track
<Curt> pgroth: most issues closed or will be momentarily
<Curt> pgroth: need to follow w3c process and do due diligence
<Curt> pgroth: document everything clearly
<Curt> pgroth: CR period will focus on implementations
<Curt> pgroth: both finding other folks to implement as well as working on implementations ourselves
<Curt> pgroth: we must show that we implement these specs
<Curt> pgroth: need coverage of all the features
<Curt> pgroth: reach out to people, engage others, push notes out, FAQ, etc. for outreach to implementers
<Curt> pgroth: it has been a long hard slog to get here, need to keep up momentum and let people know what we've done
<Curt> Luc: we've done amazing work since the last meeting; there has been serious progress, now we need to finish
<Curt> Luc: need to promote the work that has been done
<Curt> GK: getting specs out is the start, we now hope the wider community will pick things up
<Paolo> sorry to be a pest: I think the phone mic goes to sleep even with short pauses so now it's all very on/off -- hard to follow. only continuous voices come across clean
<Curt> hook: this is a time to focus on implementations -- two serializations (PROV-O, PROV-XML) are each distinct encodings, distinct implementations
<Curt> hook: current definition is loose
<Curt> pgroth: in terms of implementation, we are looking for usage. A markup of a web site is an implementation
<Curt> pgroth: we are also looking for things that generate, consume, validate constraints, etc.
<Curt> pgroth: we will see people use PROV as the basis for other work
<Curt> pgroth: our exit criteria count data marked up, vocab. extensions, applications each as implementations
<Curt> GK: do extensions help us with CR exit criteria?
<Curt> pgroth: yes! similar to SKOS, we want to verify that people are using the work. That includes markup and extensions
<Curt> Luc: obviously, we are looking for applications to generate and consume provenance -- those really demonstrate interoperability
<pgroth> fyi :
<Curt> pgroth: status of outstanding issues
<Luc>
<ivan> issue-482?
<trackbot> ISSUE-482 -- [external question] bundle IDs on insertion, context -- pending review
<trackbot>
<Curt> pgroth: haven't received acknowledgement from external reviewer satra
<Luc>
<Curt> Luc: He has acked.
<zednik> Luc is breaking up over the audio
<Curt> Luc: there was a suggestion we should consider adding an example of bundles to FAQ
<Zakim> GK, you wanted to say the last example I saw was relating to the *previous* position on namespace prefixes
<Curt> GK: the last example dealt with previous situations without nested identifiers
<Curt> pgroth: we clarified the way it worked, he wanted examples of using prefixes properly and how not to use them
<Curt> pgroth: need an action to add examples to FAQ
<Curt> Luc: I will do it (the example from his message?)
<Luc>
<GK> ^^ not "without nested identifiers" but "without nested prefixes" - it's important to distinguish between these.
<Curt> Luc: the example he gave is valid, we need to explain why it is valid and add an example that is invalid
<pgroth> action Luc - add example of document/bundle to faq explaining validity
<trackbot> Created ACTION-122 - - add example of document/bundle to faq explaining validity [on Luc Moreau - due 2012-11-16].
<Curt> jcheney: agreed -- that's what we need to do.
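[Scribe note: a minimal PROV-N sketch of the kind of example under discussion, with hypothetical names. Under the agreed scoping rules, a prefix declared inside a bundle is local to that bundle, so the same prefix may denote different namespaces at the document level and inside a bundle:]

```
document
  prefix ex <http://example.org/>
  entity(ex:e1)                           // ex:e1 resolves to http://example.org/e1

  bundle ex:b1
    prefix ex <http://other.example.org/> // redeclaration, scoped to this bundle
    entity(ex:e2)                         // ex:e2 resolves to http://other.example.org/e2
  endBundle
endDocument
```

[An invalid counterpart, as Luc suggests adding to the FAQ, would be one that relies on a bundle-local prefix outside the bundle that declares it. The normative scoping rules are those in the PROV-N specification.]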
<Zakim> GK, you wanted to ask about id used for bundle and entity within bundle
<Curt> Luc: we can close the issue now, with the coming action
<Curt> GK: you can have a bundle with an identifier, and use the identifier inside the bundle, to give provenance of the bundle itself. Is that ok?
<Curt> pgroth: that's a separate issue
<Curt> pgroth: that wouldn't change the spec
<Curt> GK: I thought you (paul) thought that would be invalid
<Curt> pgroth: delay considering that until later
<Curt> Luc: that is perfectly valid, and has an example in the DM
<Curt> zednik: the FAQ could attempt to address that
<Curt> pgroth: issue-569
<Curt> Luc: pending review waiting for james' response, came back to simon yesterday. he is happy with the suggestion, can close now
<Curt> pgroth: issue-475, mention
<Curt> pgroth: let's consider that (mention) at the end of this session so we can discuss it
<Curt> pgroth: editor review DM a final time for cleanliness/etc.
<Curt> Luc: how should we acknowledge reviewers?
<Curt> ivan: they will get listed as well as listing the working group
<Curt> ivan: put the same list of reviewers in each document
<pgroth> ACTION: Luc editor check [recorded in]
<trackbot> Created ACTION-123 - Editor check [on Luc Moreau - due 2012-11-16].
<Curt> ivan: everything that needs to be changed has been changed?
<Curt> Luc: yes, except for final review, it is ready to go
<smiles> I'm trying to call in to the W3C bridge with code 7768 as said on the Wiki, but get "This pass code is not valid". Is there another code for today?
<GK> @smiles - that often happens to me … but usually works if I try again (i.e. re-enter the passcode).
<pgroth> we have now addressed all open issues (except mention) for prov-dm
<Luc>
<smiles> * yes, I've tried a few times. not sure what the problem is, but will keep trying!
<Curt> Luc: last week, we agreed we would change scoping of prefixes, haven't received any feedback
<Curt> Luc: would be nice to have a few more examples
<pgroth> ACTION: Luc prov-n editor check [recorded in]
<trackbot> Created ACTION-124 - Prov-n editor check [on Luc Moreau - due 2012-11-16].
<pgroth> smiles are you on
<pgroth> ?
<smiles> yes, but the sound keeps cutting in and out
<Curt> Luc: there is a typo in the current text
<Curt> Luc: all documents cross-reference each other, which URL should we use
<Curt> ivan: the dated URL
<Curt> ivan: it is a real pain, but they must always reference by the dated URI
<Curt> ivan: a global search/replace should take care of it.
<pgroth> ACTION: tlebo, jcheney, luc - check to see that all references refer to the dated documents (after a publication date is given) [recorded in]
<trackbot> Sorry, couldn't find tlebo,. You can review and register nicknames at <>.
<Curt> Luc: we can't refer to those until we get the publication date
<Curt> Luc: is there a way to define the reference prefix up front and reuse it?
<Curt> ivan: (redacted)
<pgroth> w?
<pgroth> note: we are happy with prov-n
<pgroth>
<tlebo>
<Curt> tlebo: issue-552, subclass, we did what they recommended
<Curt> tlebo: haven't heard back
<Curt> tlebo: we asked for a response on tuesday
<Curt> ivan: ok to close, we did what they suggested
<pgroth> ACTION: tlebo to add email link to the response page [recorded in]
<trackbot> Created ACTION-125 - Add email link to the response page [on Timothy Lebo - due 2012-11-16].
<tlebo>
<Curt> tlebo: he says terms are confusing, but his concern isn't clear
<Curt> tlebo: he expressed a concern, tim suggested an alternative approach, he hasn't responded to that
<Curt> laurent: wasInfluencedBy and wasInformedBy can get confused, there may be a better way to describe/depict their relationship
<Curt> tlebo: in the HTML is isn't as obvious which is the superproperty?
<Curt> laurent: yes, is isn't totally obvious in the HTML description of the ontology
<Zakim> tlebo, you wanted to ask how to make it "more top level" - it is already a superproperty
<Curt> Luc: we changed the superclass description in the DM since Ralph reviewed, it might be more clear now
<Curt> Luc: Could revise the HTML description to clarify further
<Curt> jcheney: agreed, it says what we want it to say, but we might want to make it clear right up front which is the superproperty for querying and that you ought to use the more specific terms if possible
<Curt> ivan: might want to add the clarifying diagram
<Curt> pgroth: the document is already large, we are talking about ways to better guide how people should use the standard, but not affecting the standard itself
<Curt> pgroth: that sort of material, patterns, etc. should be in the FAQ
<Curt> ivan: we need to make sure those clarifications aren't lost, maybe include in the primer? where would people want to find that sort of material
<Curt> pgroth: I'm happy to have that added to the primer
<Curt> pgroth: that type of material -- I haven't seen that specific image or writeup
<Zakim> GK, you wanted to say - adding to primer means it's fixed on publication
<Curt> Luc: tlebo should forward Laurent's material to the list to consider for adding to the primer
<Curt> GK: the primer is fixed on publication, maybe link it to somewhere more dynamic
<Curt> pgroth: I like the FAQ for this type of stuff
<Curt> ivan: For usage patterns, I agree with GK, they will change/evolve, but the diagram from Laurent is more fixed
<Curt> GK: agreed, the diagram is different
<pgroth> ACTION: tlebo add a comment to use more specific things through document [recorded in]
<trackbot> Created ACTION-126 - Add a comment to use more specific things through document [on Timothy Lebo - due 2012-11-16].
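For reference, the superproperty relationship Laurent and tlebo are discussing can be written in Turtle (a sketch; these subproperty axioms reflect PROV-O's design, in which wasInfluencedBy is the top-level influence property):

```turtle
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# wasInfluencedBy is the superproperty; the more specific
# terms should be preferred when they apply.
prov:wasInformedBy   rdfs:subPropertyOf prov:wasInfluencedBy .
prov:wasDerivedFrom  rdfs:subPropertyOf prov:wasInfluencedBy .
prov:wasAttributedTo rdfs:subPropertyOf prov:wasInfluencedBy .
```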
<Curt> tlebo: reassigned issue 592 to the primer
<pgroth>
<tlebo>
<Curt> GK: difficult to follow cross-references when the document is printed
<Curt> Luc: in the DM, we numbered everything and refer by number instead of just the static link
<Curt> Luc: it was difficult to put in all those
<Curt> tlebo: :-)
<Curt> ivan: now is the time to make those sorts of changes
<Curt> tlebo: to address that, we would have a number for everything, and a table with all the numbers to index the terms
<Curt> tlebo: it may be difficult to do all that and not break anything
<Curt> tlebo: it is a purely editorial issue
<Curt> tlebo: if we can get through CR without that, then address it prior to next phase
<Curt> GK: this may be just too much work to implement
<pgroth> accepted: ISSUE-461 is editorial, the group agrees that this is ok to go ahead with CR and may look to address in the period of PR
<tlebo>
<Curt> tlebo: need to change TTL example to exercise hadActivity
<Curt> tlebo: examples are considered editorial?
<Curt> ivan: yes, it is, but can it be done for CR?
<pgroth> ACTION: tlebo to add hadActivity example to prov-o [recorded in]
<trackbot> Created ACTION-127 - Add hadActivity example to prov-o [on Timothy Lebo - due 2012-11-16].
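A sketch of the kind of TTL example ACTION-127 asks for (the ex: identifiers are hypothetical; prov:hadActivity qualifies a derivation in PROV-O):

```turtle
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix ex:   <http://example.org/> .

# A derivation qualified with the activity that performed it.
ex:quote prov:wasDerivedFrom ex:article ;
    prov:qualifiedDerivation [
        a prov:Derivation ;
        prov:entity      ex:article ;
        prov:hadActivity ex:editing
    ] .
```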
<tlebo>
<Curt> pgroth: we removed all TriG?
<Curt> tlebo: there are a few remaining for 'mention'
<Curt> tlebo: reduced amount of TriG, and cited/described use of TriG
<Curt> ivan: clarify that all examples are informative
<Curt> ivan: must add that to the document
<Curt> ivan: then you can use TriG in examples and note that
<Curt> ivan: there may be a document from RDF group about TriG, and we could reference that later as an editorial change
<Curt> ivan: TriG reference must be informative, not normative
<Curt> ivan: it can reference it as a work in progress
<GK> -
<tlebo>
<TomDN>
<Curt> pgroth: closing the issue, tim will clarify that examples are informative
<pgroth> ACTION: tlebo to add a statement on informative and normative in prov-o [recorded in]
<trackbot> Created ACTION-128 - Add a statement on informative and normative in prov-o [on Timothy Lebo - due 2012-11-16].
<tlebo>
<Curt> tlebo: fully addressed, waiting for daniel to respond
<Curt> tlebo: closing issue-566
<ivan> issue-491?
<trackbot> ISSUE-491 -- [external] feedback on prov:agent explanation. -- pending review
<trackbot>
<Curt> tlebo: made some changes, Patrice likes it even less
<Curt> tlebo: doesn't like colloquial use of some terms and phrases
<Curt> tlebo: wants things expressed in logic terms
<Curt> pgroth: his phrasing would rewrite the document in a rule based form
<lebot> hello?
<Curt> ivan (and others): the proposed language is very convoluted for people to read, we shouldn't do it
sorry about the noise
<Curt> ivan: some of the wording could be better
<Zakim> GK, you wanted to suggest s/used by/used with/
<Curt> GK: change "used by" to "used with"
<Curt> ivan: yes, that may be a simple way to address some concerns
<Curt> pgroth: are these in many places?
<Curt> tlebo: I removed some of the objectionable language
<Curt> ivan: why was he even more upset?
<Curt> tlebo: we were reusing prov:AgentInfluence, but we changed our usage of that, with a better definition
<Curt> tlebo: we've addressed some of the expressed concerns
<Curt> tlebo: I think we've addressed it all
<Curt> pgroth: we don't want to use the proposed phrasing, I think this has been adequately addressed
<Curt> tlebo: closing issue 491
<Curt>
<Curt> Luc: Tim will address action 116 post-CR release, determine if it is doable
<Curt> pgroth: Tim will do an editor check of PROV-O
<pgroth> ACTION: tlebo editor check prov-o [recorded in]
<trackbot> Created ACTION-129 - Editor check prov-o [on Timothy Lebo - due 2012-11-16].
<pgroth> very happy with prov-o
<Curt> pgroth: All issues have been addressed, sent back to reviewer
<Curt> jcheney: he has had a week to consider our responses
<Curt> ivan: were any of the resolutions controversial?
<Curt> jcheney: there were a few common themes, some were simply typo/rewording
<pgroth> close ISSUE-587
<trackbot> ISSUE-587 Concerns about analogies to RDF blank nodes/semantics closed
<Curt> group: (we like tracker!)
<pgroth> close ISSUE-586
<trackbot> ISSUE-586 The description of 'toplevel bundle' as 'set of statements not appearing in a named bundle' is unclear closed
<pgroth> close ISSUE-582
<trackbot> ISSUE-582 'of their respective documents.' should be '... of their respective instances.' closed
<Curt> jcheney: some of the suggestions might be more appropriately addressed in the semantics document
<Curt> jcheney: they didn't fit the nature of the constraints document's goals
<Curt> ivan: maybe we didn't clarify the goals of the document?
<Curt> jcheney: I tried to elaborate purpose of document, that somewhat addresses that concern
<Curt> pgroth: current description of constraints document
<Curt> Luc: message in document is fairly clear what we intend for the document
<Curt> ivan: that description sounds ok, need to be clear that this is a precise way to check validity of PROV
<Curt> ivan: Antoine may be looking for semantics -- that isn't the goal of this document
<Curt> jcheney: that is how I have addressed the issues
<Curt> pgroth: add 1 sentence to description on constraints document -- this defines a precise way to validate provenance
<pgroth> This document defines how to precisely validate provenance documents.
<Curt> jcheney: will add that sentence
<Curt> pgroth: I read all the issue responses and thought they were good -- so did luc
<Curt> jcheney: issue-585, described why things are worded the way they are
<pgroth> close ISSUE-585
<trackbot> ISSUE-585 Suggestion to avoid discussing how to 'apply' constraints; clarify what it means to 'satisfy' constraints closed
<Curt> issue 584, the term merging was replaced with unification, which is more accurate
<pgroth> close ISSUE-584
<trackbot> ISSUE-584 The nonstandard/procedurally defined 'merging' operation on terms closed
<Curt> jcheney: issue 583, rewrote wording of equivalent instances
<pgroth> close ISSUE-583
<trackbot> ISSUE-583 Questions concerning what it means for applications to treat equivalent instances 'in the same way', particularly in bundles. closed
<Curt> jcheney: issue 581 wording around normalization/equivalence
<Curt> GK: equivalence is really observed behavior -- given the same situation, you should get the same provenance
<Curt> jcheney: I'll reword some of this and circulate for comment
<pgroth> ACTION: jcheney to add a bit of text around equivalence and remove normative SHOULD [recorded in]
<trackbot> Created ACTION-130 - Add a bit of text around equivalence and remove normative SHOULD [on James Cheney - due 2012-11-16].
<GK> ^^ Not "equivalence", but "treat in the same way" is what is observed/observable behaviour.
<Curt> issue 581, we agree we are not specifying the algorithm, will clarify
<pgroth> close ISSUE-581
<trackbot> ISSUE-581 Suggestion to avoid wording that 'almost requires' using normalization to implement constraints closed
<Curt> jcheney: issue 580, definitions for expanding compact language not needed; response -- yes, we do need to define how those things work
<pgroth> close ISSUE-580
<trackbot> ISSUE-580 Suggestion to drop definitions in section 4.1 since they are not needed if the semantics is defined more abstractly closed
<TomDN> issue-578?
<trackbot> ISSUE-578 -- Use of "equivalent" incompatible with common uses of the term in logic/mathematics -- pending review
<trackbot>
<Curt> jcheney: issue 578, we defined equivalence only on valid documents, not arbitrary documents
<Curt> jcheney: we need to consider equivalence for other scenarios beyond validity
<pgroth> close ISSUE-578
<trackbot> ISSUE-578 Use of "equivalent" incompatible with common uses of the term in logic/mathematics closed
<Curt> ivan: for the purpose of this document, our description is sufficient
<Curt> jcheney: yes, once we clarify the purpose of our document, the concern becomes somewhat moot
<TomDN> issue-577?
<trackbot> ISSUE-577 -- Terminology: valid vs. consistent -- pending review
<trackbot>
<Curt> issue 577, we use the word "valid" where logic uses "consistent"
<Curt> ivan: this document isn't meant for logicians
<pgroth> close ISSUE-577
<trackbot> ISSUE-577 Terminology: valid vs. consistent closed
<Curt> jcheney: we are using the words appropriate for our purpose
<pgroth> close ISSUE-576
<trackbot> ISSUE-576 logical definition and comments on prov-constratins closed
<Curt> issue 556, translating constraints to prov-o out of scope
<Curt> pgroth: that is a concern of implementers
<pgroth> close ISSUE-556
<trackbot> ISSUE-556 public comment: should qualfied and unqualified versions the same closed
<pgroth>
<pgroth> ACTION: jcheney editorial check on prov-constraints [recorded in]
<trackbot> Created ACTION-131 - Editorial check on prov-constraints [on James Cheney - due 2012-11-16].
<pgroth> ACTION: jcheney add response email to responses to public comments page [recorded in]
<trackbot> Created ACTION-132 - Add response email to responses to public comments page [on James Cheney - due 2012-11-16].
<pgroth> we are happy with constraints
<pgroth> 15 minute break
<pgroth> start at 11
<lebot> i added a comment to can I close it?
<pgroth> Scribe: Tim Lebo
<pgroth> starting again
<lebot> paul: we came to a simple definition of mention, from many before it.
<lebot> … connects Entity in one bundle to an Entity in another bundle. It's a kind of specialization
<lebot> … Luc's response to Graham's public comment
<lebot> … "at risk" is not appropriate for mention.
<lebot> … having "at risk" in CR - does not look good.
<lebot> … need to settle it now. Make it lean.
<lebot> ivan: at CR, "at risk" is one that the WG thinks it has an issue implementing. But mention is not an implementation issue, it's a design issue.
<lebot> … if design, then it is an abuse of "at risk"
<lebot> paul: the chairs do not want to abuse "at risk".
<lebot> … thus, include or exclude now.
<Luc> @lebot: can you use pgroth as handle?
<lebot> … we've spent a LOT of time on mention. we need to go from that work.
<lebot> pgroth: lets hear case against as it stands.
<lebot> … does anybody want it in?
<lebot> … who wants it out?
<lebot> … we'll decide in or now today.
<lebot> GK: debate has been going on for long time.
<lebot> … we can't conflate previous things with what it is now.
<lebot> … feel there is an attempt to introduce something which cannot be specified in RDF.
<lebot> … BUT the public objection is NOT ^^^
<lebot> … basically, I don't know what it is trying to say.
<lebot> … what does it mean?
<lebot> … what is new beyond what we already have?
<TomDN> (original email: )
<lebot> … my claim is that it does not add anything.
<lebot> pgroth: who wants, will use mention?
<lebot> jcheney: at last F2F we discussed this.
<lebot> … strong motivation in the ontology for the MentionOf relation to relate two entities.
<lebot> (asInBundle)
<lebot> … the idea is to translate mention of DM into two triples in RDF.
<lebot> … how to convert when round tripping DM → PROV-O → DM?
<lebot> … what if two mention triples?
<lebot> … you'll get confusion when coming back to DM.
<lebot> (The "limitation" is that you can only be asInBundle to one bundle)
<lebot> … seems like a misalignment in the serializations.
<Luc>
<lebot> … could be viewed as doing different things in PROV-O and DM.
<lebot> luc: we introduced the constraint the mention must be unique - so you can't have the confusion that jcheney suggests.
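A minimal Turtle sketch of the two-triple encoding under discussion (prefixes and identifiers are hypothetical; prov:mentionOf and prov:asInBundle are the PROV-O terms being debated):

```turtle
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix ex:   <http://example.org/> .

# ex:e2 is a mention (a kind of specialization) of ex:e1,
# as ex:e1 is described in the bundle ex:bundle1.
ex:e2 prov:mentionOf  ex:e1 ;
      prov:asInBundle ex:bundle1 .
```

The uniqueness constraint Luc refers to means ex:e2 can carry at most one such mentionOf/asInBundle pair, which avoids the ambiguity jcheney raises for round tripping.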
<lebot> lebot: I'm happy with it.
<Luc> specialization is not reflexive, so they must be different URIs
<lebot> lebot: when we're trying to interconnect descriptions of entities in others' bundles, it's a natural thing to do.
<lebot> ivan: do you use the same URI?
<lebot> lebot: you can do either, depending on what you want to do.
<lebot> Curt: mention is the only capability to reference into the bundle. You'll run into problems if you don't have it.
<lebot> TomDN: i support using mention of.
<lebot> … a lab with multiple documents and multiple people. You just want to mention it, not repeat the provenance.
<lebot> … it's interesting to provide your own view on the entity that you're using.
<lebot> pgroth: we have specialization and alternate of.
<Luc> In view of implementation phase, can we see who will make use of the mention construct in their implementation?
<lebot> … the key aspect of mention of is that you name the entity and the bundle in which the entity is described. The Bundle IS the specialization.
<lebot> … without mention, you can still link the entities, but you lose the ability to mention the bundle.
<lebot> +1
<lebot> +1
<lebot> +1
<lebot> Luc: who will implement it?
<lebot> TomDN: we will.
<lebot> hook: considered mentionOf, but used unique identifiers to link across. didn't use mentionOf
<lebot> … trying to link bundles. it was easier to not use mentionOf.
<lebot> hook: KISS philosophy.
<Zakim> lebot, you wanted to state that the system hook is using is one system, not multple
<lebot> lebot: mentionOf's power comes in when you don't have control over the entire system.
<Curt> +1 lebot
<lebot> hook: we should force people to use mentionOf to increase interoperability.
<lebot> pgroth: we can't force people to use it (and shouldn't)
<lebot> … we should offer it for people to use.
<lebot> hook: sounds like it doesn't hurt to leave it in, helps to connect.
<lebot> +1 hook
<lebot> +1 hook
<lebot> Luc: [not?] concerned with comments that Graham raises.
<lebot> … but the doubt is if it is really useful or not.
<lebot> … believe in stitching histories.
<lebot> … we need a construct for it.
<lebot> … BUT concerned if it is a subtype of specialization.
<lebot> … working to develop the use cases.
<lebot> … as a sub property of specialization, the lifetimes are maintained.
<lebot> … in the use case, the timeline constraint may not apply.
<TomDN> +q
<lebot> GK: not sure if it breaks specialization
<lebot> (+1 to GK)
<lebot> luc: unsure about making it a type of specialization.
<lebot> … we're stuck with keeping mentionOf as specialization (and not alternate)
<lebot> … if it's specialization, does it break?
<lebot> (-1 that it's broken as specialization. It's inherently specialization)
<lebot> TomDN: how does it break as specialization?
<lebot> … did we want the validity over different bundles?
<lebot> … at what point do we make a new entity?
<lebot> (+1 Tom)
<Paolo> I missed all of Tom's comment -- low voice
<Paolo> ok thanks
<lebot> pgroth: the question: do we have validity over different bundles
<lebot> TomDN: luc's problem goes away once the entity is in a different instance.
<lebot> … entity in a different instance, valid, same instance different bundle = invalid
<lebot> Luc: <example with e1 e2 and bundles>
<lebot> … generation and invalidation of both entities, specialization applies and must have a lifetime.
<lebot> TomDN: impossible to make valid if repeating the mention?
<lebot> pgroth: it doesn't make it invalid, but … (?)
<lebot> jcheney: inferences on uniqueness are flagged as at risk.
<lebot> … if something is at risk, we can decide to remove it w/o going to LC
<lebot> @luc, you're abusing mention of for the wrong use cases. (it appears)
<lebot> jcheney: is it possible to take out parts of the at risk?
<lebot> ivan: mention is a design feature, defined [as specialization]. it is a design element.
<lebot> … it is all or nothing.
<lebot> jcheney: we can remove it all. If we change it, then it's a design change.
<lebot> GK: can't you drop parts of the definition and not others, providing that the others are not changed?
<lebot> ivan: feature at risk, feature defined. Remove or keep it.
<lebot> … splitting hairs is sticky.
<lebot> Curt: I don't follow the issue. It DOES fit into specialization.
<lebot> … as a primary producer, I won't use mentionOf, but anyone who wants to augment my Entities needs mentionOf to do it.
<Luc> @tlebo, can you clarify why i am abusing it?
<lebot> … the third party needs it.
<lebot> @luc, I'm not clear on what you're trying to do, but it doesn't sound like mentionOf
<lebot> Curt: when you do your own provenance, you don't need it, but mentionOf lets you "reach into" someone else's bundle.
<lebot> jcheney: second order provenance and linking.
<lebot> … but it's also true for other things.
<lebot> … are we solving a specific problem and not the more general?
<lebot> … it's clear that there is a need, but is it justified?
<Curt> entity is pretty much our most general thing to refer to
<lebot> … I am still uncomfortable with mentionOf
<lebot> … if it was lightweight with no inferences, then fine. But we might get into trouble later.
<lebot> … as things are, it doesn't seem like we should kill it, but people might trip over it later.
<lebot> hook: the linking of bundles should be in the model, we should not rely on a serialization
<lebot> @hook how are they different?
<TomDN> +q
<lebot> pgroth: there are existing ways to annotate. Refer to things and annotate them.
<lebot> … open annotation
<lebot> … some let you point to named graphs.
<lebot> … well outside of our scope.
<lebot> … but those things are not for provenance.
<smiles> So mentionOf is just a way to reference a part of a document without reference to the serialisation format? Is mentionOf really to do with provenance apart from being arbitrarily restricted to PROV?
<lebot> … open annotation is not a standard, but is in w3c
<lebot> hook: having it formally in DM would uniformly manifest implementations in different encodings. we're not relying on serializations to do the linking.
<lebot> pgroth: right now, you can use RDF linking.
<lebot> TomDN: should we drop it and put it into a note?
<lebot> … "here is how to link" in FAQ...
<lebot> … we can change as we see fit.
<lebot> GK: in IETF there is an "experimental track"; mentionOf would fit in this.
<lebot> … best we can do is to put FAQ
<lebot> ivan: it is a nice idea.
<lebot> … we have notes, we'd just be adding one more.
<lebot> pgroth: if that's what we want to do, it'd go in AQ
<lebot> … we can't start a new note
<lebot> ivan: agree with graham that AQ is to locate provenance of a given resource.
<lebot> … that's different than mentionOf
<lebot> … it doesn't fit
<lebot> hook: how many use cases involve mentionOf?
<lebot> … for what we do, it would be useful.
<lebot> Curt: the key is not provenance expression/representation, it's for analysis.
<lebot> GK: how important is interoperability at the analysis level?
<lebot> hook: it is very important.
<lebot> … each bundle is handled by different institutions, gov entities.
<lebot> … interop is key here.
<pgroth> who just joined/
<pgroth> ?
<lebot> Curt: we have a lot of cases where data is processed, then next org processes. each uses their own bundles.
<lebot> … each needs a way to reference across those bundles.
<lebot> … seems that mentionOf provides a capability that will be needed at some point.
<lebot> Luc: jcheney, you'd be more comfortable to get rid of the inference?
<lebot> jcheney: the uniqueness constraint makes it align with PROV-O round tripping.
<lebot> … it's not clear that it buys you much.
<lebot> … you could just state the specialization.
<lebot> (I think the "you don't get anything" assumes that you "have it all" and does not consider the practicality of the problem)
<lebot> jcheney: not hearing strong objections, but nobody is giving specific uses for it (?)
<Paolo> +1 for unlinking MentionOf from Specialization (if I understand James correcty)
<lebot> jcheney: not worth rolling all of it back
<lebot> Luc: we didn't want to make it a top-level, that's where we started.
<lebot> jcheney: not worth blowing the whole thing up over.
<Luc> is there opposition to remove it?
<lebot> pgroth: straw poll on mentionOf
<lebot> (this will decide who I sit with at lunch, btw)
<lebot> SamCoppens: selective removal okay?
<lebot> pgroth: no, since it changes the spec too much.
<pgroth> straw poll: who objects to keeping mentionOf?
<GK> +1
<smiles> +1
0
<Paolo> 0
<ivan> 0
<khalidBelhajjame> 0
<pgroth> straw poll: who objects to removing mentionOf?
0
<GK> 0
<khalidBelhajjame> 0
<lebot> :-(
<TomDN> 0
<zednik> 0
<SamCoppens> 0
<hook> +1
<Paolo> 0
<lebot> GK: I would formally object in its current form.
<smiles> I would not formally object. I was indicating that I think it is better not to be in the spec in the straw poll.
<lebot> Curt: I think it's valuable, but I won't formally object.
<GK> Longer response, in IRC for lack of time:
<GK> - yes, there are valid use cases, strong motivation
<GK> - I don't recognise them in the mentionOf as described (my complaint) in a way that can't be done without mentionOf
<GK> - some of those use-cases don't map to present-day RDF semantics - I worry about this, as we'd end up building on sand if we try to impose these semantics
<GK> - not defining it now doesn't mean it can't be defined later
<Paolo> may be back later
<khalidBelhajjame> I may be back later
<lebot> tlebo: If GK's formal objection is the thing to scare away this construct, then I'd be willing to bring RPI's formal objection to dropping it.
<lebot> … but this is weighted by the fact that I'm exhausted with supporting this construct.
<lebot> ivan: formal objection is a HUGE thing.
<pgroth> start again in one hour
<smiles> OK thanks
<smiles> My objection was not formal
<pgroth> scribe: James Cheney
pgroth: 30 minutes on mention
... have formal objections changed?
<ivan> scribenick: jcheney
GK: after lunch discussion with tlebo
... thinks problem may be fixable with changes to descriptive text, but not sure yet
ivan: can we do it now?
gk: maybe not enough time
... can we proceed on assumption it will be fine?
luc: wants certainty
... can we take an hour and do it now?
GK: will look at it offline now.
pgroth: Graham will look at document for ~1hr, we move on to prov-xml, goal is to come back to CR vote today
[luc is chair]
Luc: prov-xml was reviewed over past week (James, Paul, Luc)
would like to decide on release as fpwd
zednik: document mostly content complete, adding bundles today
... should be finished in ~5min
... reviews identified typos & rephrasing, had some questions about design/descriptions
... discussion topic list to respond & discuss feedback
... most feedback has been incorporated
... all 3 said it was ok to proceed to fpwd
... currently addressing more complex identifier issues
curt: also thinks things are OK
smiles: wanted to point out comment that might have been missed
... delegation element in prov-xml: schema description is different from actual schema
... but also agree document is ready for release
<ivan>
zednik: will double check
pgroth: do we vote next or have content discussion?
Luc: discuss reviews and any tecnical issues first, then vote
<smiles> @zednik: the issue was that the activity was an option of actedOnBehalfOf in the schema, compulsory in the schema fragment in the HTML
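To make smiles' discrepancy concrete, a hedged XSD fragment of the reading where the schema makes the activity optional (type and element names here are assumed for illustration, not quoted from the actual PROV-XML schema):

```xml
<!-- Hypothetical fragment: activity optional via minOccurs="0",
     matching the schema rather than the compulsory HTML fragment. -->
<xs:complexType name="ActedOnBehalfOf">
  <xs:sequence>
    <xs:element name="delegate"    type="prov:IDRef"/>
    <xs:element name="responsible" type="prov:IDRef"/>
    <xs:element name="activity"    type="prov:IDRef" minOccurs="0"/>
  </xs:sequence>
  <xs:attribute ref="prov:id" use="optional"/>
</xs:complexType>
```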
pgroth: thinks its OK for FPWD, would like to discuss technical issues
curt: would like to discuss 572
<TomDN> issue-572?
<trackbot> ISSUE-572 -- What constraints should we have on ordering of elements within the main complexTypes? -- raised
<trackbot>
jcheney: mostly happy, can discuss offline
pgroth: also wanted to suggest signposting/context, is this intended before fpwd?
... meaning explanation of the style of schema being used (salami slice pattern, etc)
ivan: sounds good, helpful to reader
zednik: prov-xml group is discussing adding a design section, explain salami slice pattern, not sure if it will go in before fpwd
<pgroth> i wouldn't want it to delay fpwd
Luc: confirm I'm happy with document release, flagged some technical issues
... need to catch up on mailing list traffic, but OK with flagging as outstanding issues in text as notes
... to avoid giving impression that it is a final design
... design section sounds useful
... timetable to release: need not be ASAP, but would be good to sync with CR
... to give time to write section
<TomDN> +1 for synchronous release
pgroth: would like it to be released synchronously with CR/primer, etc.
... have gotten burned before by piecemeal release
... prov is the family, would like releasing as such
... no rush to get xml out, but there are minor things we can do to improve accessibility
ivan: we clearly don't have enough documents to publish, so let's add one
... owl WG had relatively short overview document published with rest
... otherwise family of documents becomes messy
Luc: not committed to it in charter extension, avoid overcommitment
ivan: together with CR release?
Luc: not enough time
<Curt> copy the intro from the DM
<pgroth> ACTION: pgroth to draft a first one page overview [recorded in]
<trackbot> Created ACTION-133 - Draft a first one page overview [on Paul Groth - due 2012-11-16].
pgroth: will try to draft 1 page, group will look at it. as curt says, this is already done in most documents
luc: can reuse presentation tutorial materials.
... informal poll to gauge positions on fpwd
... is there opposition to prov-xml fpwd release?
<Paolo> no objection
[crickets chirping]
<smiles> no objection
sorry
Luc: what do we want to finalize before fpwd?
pgroth: want 1 para about design + "warning, this is a fpwd, subject to change"
Luc: any other input?
... can we confirm prov-xml as short name?
<Luc> proposed: To release prov-xml as a first public working draft, after adding design overview and sign-posting issues under consideration, with prov-xml as short-name
<TomDN> +1
<ivan> +1
<pgroth> +1
<smiles> +1
<Curt> +1
<SamCoppens> +1
+1 UoE
<lebot> +1
<Luc> accepted: To release prov-xml as a first public working draft, after adding design overview and sign-posting issues under consideration, with prov-xml as short-name
Luc: now have time to discuss technical issues
<TomDN> issue-572?
<trackbot> ISSUE-572 -- What constraints should we have on ordering of elements within the main complexTypes? -- raised
<trackbot>
<ivan> issue-572?
<trackbot> ISSUE-572 -- What constraints should we have on ordering of elements within the main complexTypes? -- raised
<trackbot>
Curt: Mapping from PROV-N/PROV-DM into XML schema, decided to keep same order of sub-elements as in prov-dm
... Current rationale: attributes are ids
... ordering of content is static matching prov-n
... except for optional attributes which are unordered
<pgroth> wonder why there's no issue about sub typing?
Curt: could relax ordering, or require ordering of attributes
... Concern that ordering makes it easier for processing, but harder for generation
... unlike prov-n
jcheney: happy with the way it is, decreases tax on everyone to normalize
luc: had idea to require prov attributes to appear first, then non-prov
... use xsd:any for all the rest
... should make it easier to convert between xml and other PL embeddings
... with xml, thinking about serializations but also queries
... does order have impact?
jcheney: probably XQuery with unordered xpath axes is enough, so order probably not a big issue for queries
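A rough XSD shape for what Luc is proposing above (a sketch; element names assumed, not from the actual PROV-XML schema): fixed-order prov content first, then an open xs:any tail for non-prov elements.

```xml
<!-- Hypothetical: ordered prov content, then arbitrary extension elements. -->
<xs:complexType name="Entity">
  <xs:sequence>
    <xs:element ref="prov:label"    minOccurs="0" maxOccurs="unbounded"/>
    <xs:element ref="prov:type"     minOccurs="0" maxOccurs="unbounded"/>
    <xs:element ref="prov:location" minOccurs="0" maxOccurs="unbounded"/>
    <xs:any namespace="##other" processContents="lax"
            minOccurs="0" maxOccurs="unbounded"/>
  </xs:sequence>
  <xs:attribute ref="prov:id"/>
</xs:complexType>
```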
pgroth: not sure of issue
luc: orm will want to be able to find
prov:type
... so mapping will be challenging
jcheney: we don't need to solve this now necessarily
ivan: can ask for feedback
pgroth: automated generation tools are a use case, we should flag this for asking for feedback
<Zakim> pgroth, you wanted to say we should test
luc: issue remains open, but will be signposted
pgroth: wants to discuss subtyping
<Zakim> pgroth, you wanted to ask for about sub typing
pgroth: if you look at prov-xml, many subtypes are defined through use of prov:type
... in prov-o, a revision has a corresponding relation
... why can't xml / xsd do something similar
curt: also would like to do this
zednik: followed prov-n initially, but can explore and add in after fpwd. note in each section to explain this
pgroth: raise issue?
zednik: did look at subtyping early, but mainly entity and agent and it didn't seem to gain a lot since these subtypes don't have additional elements/attributes
... but relations may have a benefit
pgroth: in xml, you see agent but not person etc.
... writing xpath query to ask for people is easier if the element name is prov:person
zednik: would have to specialize complex type and add new toplevel element referencing it
... this should work, but hasn't been tried yet. may work for entity and agent subtypes too.
Luc: will have to add subtype and new elements. don't we want to allow use of person, etc. wherever an agent is allowed?
... but then haven't you fixed all the subclasses of entity/agent, forbidding extensions?
zednik: not familiar with extended types in xml, but should allow specialization / subtypes without using substitution groups
Luc: something to keep in mind when looking at revised design.
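What zednik describes — specializing the complex type and adding a new top-level element referencing it — could look like this (untested sketch; names assumed). The substitutionGroup attribute is what would let person appear wherever an agent element is allowed:

```xml
<!-- Hypothetical: a Person subtype usable wherever prov:agent is allowed. -->
<xs:complexType name="Person">
  <xs:complexContent>
    <xs:extension base="prov:Agent"/>
  </xs:complexContent>
</xs:complexType>
<xs:element name="person" type="prov:Person"
            substitutionGroup="prov:agent"/>
```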
<pgroth> did someone raise the issue?
zednik: suggest we mark the terms that use prov:type for subtyping as something that might change
<pgroth> issue: prov-xml subtyping needs to be marked in the document
<trackbot> Created ISSUE-595 - Prov-xml subtyping needs to be marked in the document ; please complete additional details at .
pgroth: whoa!
luc: next issue, identifiers/qnames
<Luc> entity(ex:0001)
luc: can write entities like this
ivan: this is why rdfa does not use qnames
luc: grammar accepts qualified names but xml schema requires qnames
ivan: [shrug] life sucks
luc: can define new type of strings that match this
... in prov toolbox, using it in non-validating mode so these are recognized as qualified names, but painful
zednik: should try to determine what is best for xml to use as identifier
... identifying scheme for prov-n makes sense in rdf, may not make sense in xml
... defining our own string subtype may not be best either
pgroth: agrees with stefan's approach. made prov-n open-ended for human consumption
... with xml, need to be more restrictive to remain compatible for tools, even if it constrains what you can use as ids
tlebo: rdf/xml has same problem,
<Luc> my concern is that people will generate xml that does not validate
pgroth: design for tooling
ivan: it is a choice to allow more liberal strings, but will not work well with tools
<lebot> +1 pgroth and zednik on letting prov-xml constrain, c.f. prov-o's "type" must be a Resource and not Literals, as prov-n permits.
pgroth: does qname resolve to uri? main
serializations will be xml, rdf/turtle
... we don't have to define in documents, but should say somewhere what subset of ids are interoperable across main formats.
... "don't do this"
luc: concerned people will generate xml
serializations that don't validate because of ids
... qnames are very restrictive
curt: seems ok to say "if you want to interoperate, do this"
hook: no xlinking
pgroth: shouldn't define our own ids. do people use something other than qnames?
laurent: people used to use urn, now uri/url
<zednik> +1 pgroth for determining what is best for ids from xml community, and use that
ivan: there are organizations whose internal
identification of items is similar, rdfa discussion began
because news organization wanted to use similar names
... rdfa avoided use of qnames
pgroth: also allowed in prov-o, prov-n
ivan: defining new id type worse because many xml tools assume id attribute is of a specific form (?)
Luc: we use prov:id, not toplevel id
ivan: some tools recognize/exploit attributes declared
jcheney: will ask ht
<Curt> This could be an explicit question for FPWD review too
luc: prov-dm uses qualified names as shortcut
for uri
... can reconstruct full uri
... not done in xml by default
<Paolo> I will have to go soon -- are you planning to discuss prov-dictionary next?
luc: we need to state the convention
<pgroth> @paolo yes
luc: plan: flag issue, have james ask henry thompson
zednik: wanted to add that we could put forth question + possible direction such as xsd:anyURI
luc: may lose some benefit of xml?
... congratulations to prov-xml team
luc: renamed collections to dictionaries, then
decided to remove from dm leaving lean collections
... decided to create note for dictionaries, starting with all text from older versions of prov-dm/prov-o
... but some work is needed. who will work on it?
<TomDN> +q
<Paolo> +q
TomDN: what is timetable?
luc: to be determined
TomDN: synchronous release?
Luc: no, later than cr release
... but before end of wg
... including time for iterations
Paolo: discussed earlier, and when we decided
on note, ownership was assigned to stian with paolo agreeing to
... but was involved in other documents so did not have time
... talked with stian and discussed timetable but this hasn't been realized
... plan to ask stian if interested, volunteer to help, otherwise try to pick up
... would still like to see it happen
... should be able to start spending time on it after holidays
Luc: can you really do it?
... in terms of bandwidth
Paolo: will have more in January, not before
<lebot> @Paolo , we all have more bandwidth later. Until we don't ;-)
Paolo: can make time for it
... don't think we're too far
<SamCoppens> Tom and I would volunteer to help with the note
<lebot> good point, it was carried to Last Call drafts :-/
Paolo: material in note is not starting from scratch
pgroth: timetable would like to see fpwd or new release on notes before holidays for all documents
<lebot> +1 to a FPWD for collections before xmas
pgroth: there on most things already (prov-aq,
prov-dc)
... collections needs editorial work beyond existing content
Luc: at f2f3 took out of rec track document,
no activity since then
... if someone volunteers to work on it before holidays, great, if not, we may not have time to finish it by march
tlebo: reinforcing paolo's comments: content
is from pre-last call
... can support with prov-o parts
paolo: will struggle between now and end of
year but can try to make time
... spike in teaching activity now
<pgroth> @paolo that's why we need something else
paolo: unlikely to find more than 1-2
days
... was assigned to stian, so begin by checking whether he still plans to do this
sam: tom and i will definitely help, could take lead if needed
<Paolo> excellent I would definitely help out
luc: sounds good!
pgroth: stian may be busy, so extra help would
be good; stian is a core implementor in taverna, & working
with open annotation
... implementations more important
<pgroth> ls
<lebot> +1 drink each to @SamCoppens and @TomDN this evening ;-)
scribe: NB: christmas is only ~6 weeks away
<lebot> @Paolo can we wrap our arms around the raw materials?
<Luc> ACTION: SamCoppens to draft a timetable for prov-dictionary for the next teleconference [recorded in]
<trackbot> Sorry, couldn't find SamCoppens. You can review and register nicknames at <>.
@tlebo
<Paolo> cool, have to go bye everyone
luc: completed prov-xml, prov-dictionary
... allocate 30-minutes to prov-sem?
<Luc> scribe: TomDN
<jcheney>
<pgroth> ACTION: TomDN draft a timetable for prov-dictionary for the next teleconference [recorded in]
<trackbot> Created ACTION-134 - draft a timetable for prov-dictionary for the next teleconference [on Tom De Nies - due 2012-11-16].
jcheney: Update on PROV-SEM.
<lebot> @SamCoppens
jcheney: Most of what's here:
is aligned with the LC docs
... With the CONSTRAINTS, we only enable people to track the constraints. But with SEM we could formalize all that more cleanly and acceptable for logics people
... It's kinda hard to write that stuff down in HTML, instead of in for example LaTeX
... There's an old Latex->HTML tool, but it's not conforming to the recent standards
... If I could write it as Latex, producing this note would be easier
Luc: What's your sense of timetable?
... And are there people who could help you?
jcheney: Help would be good.
... Now is a good time for me to do it.
... But time that I wanted to spend on this has gone to the constraints.
... I could definitely use people that can do the math markup
pgroth: Go ahead and focus on the content, and we can see if we can find people to make it look nice
ivan: it could be on the wiki after the WG closes. Then it has a URI and is read-only
jcheney: But is that OK for a formal Note?
ivan: no.
<zednik>
<lebot> @zednik what a great name for a tool.
Luc: Isn't there a tool at W3C to turn a wikipage into a note?
ivan: Sandro had some python tools, but I don't know whether that would work. You'd have to ask Sandro.
TomDN: I think the content is most important, to address comments about the semantics. let's focus on that first
jcheney: I think a lot of people thought it'd be nice to have this, so it's definitely worth doing. The feedback was useful, but not the main reason to produce the Note
pgroth: Conclusion: James keeps working on this in the way that's easiest for him, and then someone looks at the presentation stuff later.
Luc: Timetable?
jcheney: I need about a week (continuous) work
on this.
... The week of the holidays seems reasonable for a first draft
<jcheney>
jcheney: If you look at the end of the document, you'll see that I've already converted most stuff into the subset of LateX that Wiki supports.
<pgroth> 30 prov logic parsers
<pgroth> all independent implementations
<pgroth> we are happy
everyone: We are all happy
pgroth: Is there anyone on the phone that has comments on anything on the agenda?
Luc: Graham has thought about Mention.
GK: I think I have an explanation of it that
I'm OK with.
... I hope it aligns with what is meant in the document.
Luc: So you're not proposing a change of design, but a textual change?
GK: Yes, it's an explanatory change.
(Taking a break until 3:15 )
<smiles> OK, talk then
<laurent> @jcheney Instructions to export wiki pages to HTML used for SSN
(and we're back ! )
pgroth: Graham to propose editorial changes
<GK>
(who just joined? )
GK: Please see the link for the text regarding my suggestions
<jcheney> can someone resend link
GK: This is based on the description of Mention in PROV-DM
<SamCoppens>
<jcheney> @Sam thanks!
GK: I think this way, the examples could be done without TRiG
<pgroth> ack Luc
Luc: In your first 2 sentences, you talk about
the same entity using the same name, but with different
descriptions in different bundles
... However, we can't have the same name
... Why do you have to indicate that they have the same name? Why not just the same entities with different descriptions?
GK: Valid point, I was just working from a
specific use case
... It may not be necessary in the eventual descriptive text
Luc: are you introducing a new inference?
GK: I don't think so.
<lebot> I pause on "the descriptions may be based on observations of different specializations "
GK: To be clear, these are my thoughts on the matter, not something that should go directly into the description
<lebot> it seems to impose a specialization of an entity every time someone attempts to fix an aspect of the entity.
luc: Is "An application may have access to additional out of band information " there to explain the difference with /just/ a specialization?
GK: yes
<lebot> I pause on "about the specialization of e1 that is described in bundle b" since a specialization is not asserted - e1 is itself!
Luc: Example: ratings. If I rate something that lasts an hour fast, someone else might rate it differently
<lebot> I very much like "The mentionOf construct provides a way to introduce a new entity that is the basis for observations in a specified bundle"
GK: I think we're talking about the same thing
Luc: Do you want to add these inferences to the document?
GK: No, they are to help capture the essence of the text
Luc: Does this mean that you are now happy with mention? (If we do these edits in the text)
GK: I'd say yes, if my interpretation is what's meant in the document
Luc: Attempting to assess the changes to be made
<pgroth>
<lebot> I think the term "mentionedIn" is too broad from what we currently have: "asInBundle"
<jcheney> "introduce a new entity that is the basis" -> "relate an entity in the current instance to another one that is the basis..." ??
GK: I had trouble with "Some applications may want to interpret this entity e1 with respect to the descriptions found in the bundle b it occurs in."
Luc: Yes, it looks like we actually mean "The description of the entity in bundle b"
GK: also, "additional aspects"
tlebo: but "aspects" is central in the definitions of alternate and specialization
<lebot> Central to mention: "The primary author did not see fit to specialize, but the secondary consumer/author *does* see fit to specialize the entity".
Luc: We didn't want a formal definition of aspect
tlebo: the term "additional aspect" just refers to the specifying of the bundle. After that you can add whatever you want
GK: My problem is that it's focusing on the mention as the aspect
Luc: I want to know exactly which edits we want to make
GK: I was treating this as trying to capture
the same information as in the document
... as a replacement
smiles: I personally find the original text
clearer than Graham's
... To me, it doesn't seem to be about provenance, and not useful.
... I wouldn't formally object, but I wouldn't use it
... everything else in the document describes things in the past. But this doesn't.
... So it's not really provenance
+q
scribe: I don't see a problem with it being at risk. But since we want it in or out now, I would vote for out
<Curt> It allows us to tie additional information to provenance information
TomDN: But alternate and specialization technically don't describe things in the past, so why block mention for that reason?
smiles: because they do describe "this thing was alternate of this thing" in the past
TomDN: I think mention does that as well, just with a different name...
pgroth: What I'm worried about is leaving here
with a pseudo-agreement to have an editorial change, and then
later someone objects to it
... What Graham wrote seems like a different concept than what we have
... We need an answer from the WG to the question: "Is this construct worth delaying everything else?"
Luc: I think it's different from what's in the document, but essentially, you didn't change the bullets, or did you?
GK: I reordered them
... I said generalEntity: an identifier (supra) for an entity that appears in bundle (b);
... whereas you said: generalEntity: an identifier (supra) of the entity that is being mentioned.
... and: specificEntity: an identifier (infra) of the entity that is a mention of the general entity (supra);
... instead of: specificEntity: an identifier (infra) of an entity that is a specialization of (supra);
... I couldn't understand the original description, but mine is what I made from it after discussion
Luc: what about incompatibility with RDF semantics?
GK: that was part of the basis of my concern,
but not the essence
... I'm checking whether I can make lighter changes with the same effect
)."
tlebo: When we were comparing the bullets, I was thinking it would make sense to keep the current DM definition for bundle and specific entity, but use Graham's general entity
<lebot> generalEntity: an identifier (supra) for an entity that appears in bundle (b);
tlebo: If we get rid of this word "mentioned", then we can avoid some confusion
<lebot> generalEntity: an identifier (supra) of the entity that is being mentioned.
<lebot> bundle: an identifier (b) of a bundle that contains a description of supra and further constitutes one additional aspect presented by infra.
<lebot> ^^ wipe that :-)
<lebot> specificEntity: an identifier (infra) of the entity that is a mention of the general entity (supra);
<lebot> generalEntity: an identifier (supra) for an entity that appears in bundle (b);
<lebot> bundle: an identifier (b) of a bundle that contains a description of supra and further constitutes one additional aspect presented by infra.
Luc: I'm concerned that we're not progressing
ivan: I think the only way to move forward is
to drop it from the spec
... It's harsh, but realistic
SamCoppens: This seems to be about interpretation. Can't we just leave the description as such, but explain using Graham's example?
ivan: We are at the last minute
pgroth: We're not even arguing about a little
bit of text. This is a substantial change
... The goal of the DM was to have an intuitive, easy to understand model.
<lebot> wasInfluencedBy is confusing with wasInformed
<Zakim> lebot, you wanted to ask isn't this what FAQs are for? That's how we addressed the issues earlier today.
pgroth: Now, we agree on the structure, but not on the definition it seems
tlebo: isn't this what FAQs are for? That's how we addressed the issues earlier today.
pgroth: But the commenter from earlier today
wasn't a WG member, that had the chance to discuss with us for
a long time
... If it's not clear for Graham, how can we expect outsiders to get it?
Luc: yesterday, it seemed to me like there was
no support for the construct. But this morning it seemed there
was.
... But now we have to move to CR.
GK: I will back down from making a formal objection, after discussing it today
Luc: Still, at previous meetings, we agreed
that if there's no consensus, we would drop it.
... I say we just vote
smiles: I wanted to ask: what is the negative consequence of it being removed?
pgroth: You can't use it
... we lose some interoperability
<Curt> we have a 6pm res
jcheney: It might be good to state the pros
and cons
... pro: clear use case
... con: it's been controversial
+q
scribe: pro of removing: covering our euphemisms
<smiles> A note has the advantage that if a better way is found later, the DM would still stand complete without the note
tomDN: still in favor of creating a note. seems like the same amount of time, but without delaying CR
hook: Could we do something less strong than that?
Luc: like a wiki
ivan: You could take what's there, and put it
into an informative appendix
... as a guideline.
... But it wouldn't be in the standard ontology
<Curt> similarly, we would have to leave it out of the XML schema
pgroth: My worry with that is that it's
confusing.
... CR speaks with a clear voice
... An informative appendix does not
Hook: So do we provide ambiguous guidance or no guidance at all?
pgroth: Either crystal clear or not at all
ivan: Any member can do a member submission, but that's really the weakest form
<pgroth> propose: Keep mentionOf as part of PROV as is and not at risk
<smiles> -1
<lebot> +1
<Curt> +1
<jcheney> 0
<hook> +1
<GK> -1
<zednik> +1 (RPI)
+1
<SamCoppens> 0
<pgroth> resolved: mentionOf is removed from PROV rec track documents
+q
<Curt> as a note, what would be the effect on the OWL or XSD schema?
-q
<smiles> @pgroth yes, exactly
Luc: Simon, Graham, would you object to mention as is in a note?
Graham: no, I'd go -0 or support it
smiles: no, probably 0
ivan: so timetable for this hypothetical note?
pgroth: not together with CR. It's a
"new" note.
... Who would do this?
Luc: As editors. we should take out the text
from the recs, and put it into a document
... I'll take on this
<pgroth> ACTION: luc create a mention of document [recorded in]
<trackbot> Created ACTION-135 - Create a mention of document [on Luc Moreau - due 2012-11-16].
Hook: This will be a new note for the DM, but how far deep would the note go regarding the other documents?
Luc: A single, comprehensive document
ivan: What about the 2 extra terms in owl. Which namespace would that be?
<lebot> @hook, the "put it all together" approach is what we agreed to do for dictionary
Luc: same question for XML
<lebot> didn't we agree that the dictionary term URIs were "reserved" in our namespace?
pgroth: same solution as with the other notes
ivan: has to be made clear that these are not standard properties
pgroth: We have prov-aq.owl, prov-dc.owl, etc.
<hook> @lebot, thanks for clarifying.
pgroth: Eventually, we'll create a "super" owl file including everything, with clear commenting what is standard and what not
ivan: So it'll all be in the same namespace. And I am happy with that
<jcheney> should we formally close 475??
<pgroth> close ISSUE-475
<trackbot> ISSUE-475 Request to drop "mention" and related elements closed
<pgroth> ACTION: Luc to update public response on mention [recorded in]
<trackbot> Created ACTION-136 - Update public response on mention [on Luc Moreau - due 2012-11-16].
<pgroth> proposed: prov-dm, prov-o, prov-constraints, prov-n to be submitted as candidate recommendations as soon as all editorial actions are completed
<jcheney> what is the record for number of issues?
+1
<ivan> +1
<Curt> +1
<smiles> +1
<jcheney> +1
<GK> +1
<hook> +1
<SamCoppens> +1
<lebot> +1
<zednik> +1 (RPI)
<pgroth> +1 (VUA)
<Luc> +1 (Southampton)
<pgroth> accepted: prov-dm, prov-o, prov-constraints, prov-n to be submitted as candidate recommendations as soon as all editorial actions are completed
<smiles> Sorry, got to go now. Talk to you tomorrow
<GK>
Luc: It would be good to hear from the editors and set a time for a next release
GK: Until about a week ago there was no
progress.
... In the last week I started going through the issue list
... 25 are pending review
... There were 2 issues I'd like some feedback on
... One is link relations or full URIs
tlebo: If there's something I can edit in the document, I could settle my raised issues
GK: Just let me know what the changes are
ivan: rel="provenance" is something that isn't
defined by HTML yet
... if you use full URIs, you don't have that problem
pgroth: can you use those in the header of an HTTP request?
ivan: not sure
GK: I think it might work
ivan: Another option is RDFa
... prov:provenance
... Might be good to talk to the Linked Data Profile WG
... I am not familiar with all the details of their spec, but it makes sense to try and comply with their method
... Making it clear that we aren't talking about a REC
tlebo: think the proposed change (to put a full URI or prov: prefix in link/@rel) would actually fix the issue that I ran into in March when trying to use AQ in PROV-O HTML.
GK: So we basically agree to push ahead with
URIs
... Paul raised an issue about introducing roles of consumer and publisher
... I've taken that on board in the discovery section, so you may want to review.
pgroth: Locating provenance information section?
GK: yes
... We are also dropping the reference to POWDER
pgroth: Do we still want best practice in this document?
ivan: This also might be interesting to discuss with the LDP WG
Luc: Do want a discussion on bundle
identifiers? And how we access their content?
... When are we aiming for the next release?
GK: last time I checked, by the end of this
month
... at least with the outstanding issues resolved and ready for another round of review
Luc: So the end of the year would be
feasible?
... And do we synchronize with the family of specs?
ivan: Absolutely
pgroth: I would like an implementation of
AQ
... using the example corpus of provenance
... Also, the document should be cleaner
... (e.g. best practices inside the document)
... smaller would also be good
... We should aim for a release cycle by the end of the year
Luc: As we did with the DM, we can release an internal draft for review of specific people
GK: I'll give it a shot
... Question for the group: What do we do with the issues that have been there for a long time?
Luc: We should send out reminders
pgroth: I'll set a date for all the pending reviews
GK: sounds good
<Luc> ACTION: pgroth to organize closure of issues closed pending review [recorded in]
<trackbot> Created ACTION-137 - Organize closure of issues closed pending review [on Paul Groth - due 2012-11-16].
Luc: We're trying to close the ones created
before summer, specifically
... Anything specific (technical) that you'd like to discuss now?
GK: Not really
pgroth: I just need to respond to your responses
Luc: Do we want to say something about
dereferencing bundle identifiers to obtain the content of a
bundle?
... Currently, we don't have a mechanism for that
ivan: Intuitively, I'd say you GET a set of
provenance statements
... in some serialization
... depending on content negotiation
... (RDF or XML)
<Curt> and PROV-JSON!
jcheney: Naïvely, it seems that PROV-N and
PROV-XML define what a PROV document is, and that has a
name/identifier
... Are we saying that the URIs of the bundles in that document should be dereferencable?
lebot: I propose an alternative HTTP response: at least one triple would come back, saying that the type is prov:bundle
@lebot: (is that about right, Tim?)
<pgroth> that's actually how we do it the paq
Luc: What if the bundle name is not a URL, so
you can't dereference it
... We may have UUIDs...
<lebot> I think s/UUID/hash(graph)/ helps phrase the discussion better.
s/UUID/hash(graph)
<lebot> @TomDN No, @luc means UUID.
<lebot> VOID and DCAT handle this distinction with void:dataDump and dcat:distribution [ dcat:accessURL ]
ivan: Coming back to James's question. If
we're talking about an ID, do we mean a document or a
bundle?
... The file containing the bundles is conceptually different from the bundles
... I'd like to get the bundle in 1 place
<lebot> VOID and DCAT handle this distinction with ?bundle void:dataDump <THE-PROV-ASSERTIONS> and ?bundle dcat:distribution [ dcat:accessURL <THE-PROV-ASSERTIONS> ] .
pgroth: another way to put it is:How do you retrieve the description of an entity?
<lebot> solve the problem for Entity, you've solved the problem for Bundle.
pgroth: It might be out of scope, but we have
to look into that
... "Given the identifier of an entity, how do we get the provenance for that? "
<lebot> This sounds more difficult and less finished than "mention"...
ivan: My advice is to sit down with other WGs that specialize in that
<lebot> (but, not a CR...)
GK: there are many things we /could/ specify.
But we should focus on the simple stuff first
... So we start to sketch our own thoughts on the matter, and then go to other WGs
<Zakim> GK, you wanted to say there are many things we *could* specify, but there'a a question of how much we *should* specify - we want to guide developers to easy, simple options where
pgroth: To me, we should go and look what LDP
does
... Because, in a linked data context, all that stuff is already defined
... Do we want interoperability in this space?
... The linked data community is trying to tackle that, we don't have the manpower for it
... I want to focus on "Is the way we do it, the best, simplest, correct way to do it?"
Luc: conclusion: this issue is out of scope?
GK: We should just be careful about which route we go down on
Luc: So the editors will come up with a lightweight approach
pgroth: I think the best practice should be
separate
... That way the document becomes nice and small, and very clear
... and easy to implement
... and then all the bundle/SPARQL stuff separate
<Luc> proposed: PAQ editors to provide a light weight answer to ISSUE-596
<Luc> accepted: PAQ editors to provide a light weight answer to ISSUE-596
<pgroth> trackbot, end telcon
This is scribe.perl Revision: 1.137 of Date: 2012/09/20 20:19:01
Check for newer version at
Guessing input format: RRSAgent_Text_Format (score 1.00)
Succeeded: s/section/session/
Succeeded: s/?:/laurent:/
Succeeded: s/hood/hook/
Succeeded: s/ace/ack/
FAILED: s/atributes/attributes/
Succeeded: s/something/somebody/
FAILED: s/liek/like/
FAILED: s/same/same question for/
FAILED: s/UUID/hash(graph)/
Found Scribe: Curt Tilmes
Found Scribe: Tim Lebo
Found Scribe: James Cheney
Found ScribeNick: jcheney
Found Scribe: TomDN
Inferring ScribeNick: TomDN
Scribes: Curt Tilmes, Tim Lebo, James Cheney, TomDN
ScribeNicks: jcheney, TomDN
WARNING: Replacing list of attendees.
Old list: Paolo zednik laurent SamCoppens TomDN hook GK tlebo Luc Curt pgroth jcheney ivan stain smiles khalidBelhajjame [IPcaller]
New list: MIT531 [IPcaller] smiles Paolo
Default Present: MIT531, [IPcaller], smiles, Paolo
Present: MIT531 [IPcaller] smiles Paolo
WARNING: No meeting chair found! You should specify the meeting chair like this: <dbooth> Chair: dbooth
Got date from IRC log name: 09 Nov 2012
Guessing minutes URL:
People with action items: - add check email jcheney luc pgroth response samcoppens tlebo tomdn
[End of scribe.perl diagnostic output]
I've developed this short test/example code, in order to understand better how static methods work in Python.
class TestClass:
    def __init__(self, size):
        self.size = size

    def instance(self):
        print("regular instance method - with 'self'")

    @staticmethod
    def static():
        print("static instance method - with @staticmethod")

    def static_class():
        print("static class method")

a = TestClass(1000)
a.instance()
a.static()
TestClass.static_class()
Here is a post on static methods. In summary:
Regarding your questions:
- `self` is a convention; it pertains to the instance.
- To make a method static, either omit `self` as an argument or decorate the method with `@staticmethod`.
It may be more clear to see how these work when called with arguments. A modified example:
class TestClass: weight = 200 # class attr def __init__(self, size): self.size = size # instance attr def instance_mthd(self, val): print("Instance method, with 'self':", self.size*val) @classmethod def class_mthd(cls, val): print("Class method, with `cls`:", cls.weight*val) @staticmethod def static_mthd(val): print("Static method, with neither: ", val) a = TestClass(1000) a.instance_mthd(2) # Instance method, with 'self': 2000 TestClass.class_mthd(2) # Class method, with `cls`: 400 a.static_mthd(2) # Static method, with neither args: 2
Overall, you can think of each method in terms of access: if you need to access the instance or an instance component (e.g. an instance attribute), use an instance method, as it passes in `self`. Similarly, if you need to access the class, use a class method. If access to neither is important, you can use a static method. Notice in the example above, the same argument is passed to each method type, but access to instance and class attributes differs via `self` and `cls` respectively.
If we need to access the class within an instance method, we could use `self.__class__`:

...     def instance_mthd2(self, val):
...         print("Instance method, with `self` but class access:", self.__class__.weight*val)
...
a.instance_mthd2(2)
# Instance method, with `self` but class access: 400
I recommend watching Raymond Hettinger's talk Python's Class Development Toolkit, which elucidates the purpose for each method type clearly with examples. | https://codedump.io/share/DdZ2hRgjtBdl/1/what39s-the-point-of-staticmethod-in-python | CC-MAIN-2017-47 | refinedweb | 339 | 65.62 |
When a CA cert contains a nameConstraints extension, the names in all
subordinate certs are checked against those constraints as part of cert chain
validation. The code in NSS that does this is in function CERT_CompareNameSpace
and its subordinate functions, especially cert_CompareNameWithConstraints, both
in nss/lib/certdb/genname.c. The logic used in those functions, especially in
cert_CompareNameWithConstraints, is very different from what is specified in
RFC 3280.
The test for directoryNames may find matches where it should not. The tests for
all other types of names are very likely not to find matches where they should.
Depending on the type of constraint (permitted or excluded subtree), this may
result in false positive or negative outcomes for the validity of certs in a
chain.
The existing code has (at least) the following issues:
I1. Directory names match Attribute Types And Values (ATAVs, a.k.a., AVAs) in
any order, ignoring the hierarchical sequencing imposed by Relative
Distinguished Names. Thus C=US, ST=CA, O=AOL matches C=US, O=AOL, ST=CA
even when each of those 3 AVAs is in a separate RDN.
I2. constraints on DNSnames, RFC822names and URI hostnames all expect the
constraint GeneralName to be a regular expression with "*" as a wildcard.
No type of name constraint is defined to use regular expressions in this way,
so this logic is improper for all those types. In addition, URI hostname
parsing fails to strip away user names and port numbers prior to checking.
I3. IPAddress constraints are not recognized. Neither are OtherNames,
X400 Addresses, EDInames, and Registered OIDs.
I4. All unrecognized constraint GeneralName types lead to failure to match.
I5. When a nameConstraint contains a permitted subtree, then subordinate
certs may contain ONLY the types of names given in that permitted subtree.
This seems to necessitate some "wild card" definition for all types of
general names.
Open questions about name constraints as defined in RFC 3280:
Q1. Is the behavior in I5 above proper? Or should a name of a type not
found in the permitted subtree be considered not constrained by that subtree?
Q2. Consider a constraint containing a permitted subtree containing a base
GeneralName with a context-dependent tag (identifying the type of GeneralName)
but with a zero-length value, or (for directoryName) containing a value that
is a sequence of zero length.
Q2A. How is such a constraint interpreted?
Q2B. For what types (if any) is it considered a wildcard, matching all
GeneralNames of that type?
Q2C. What meanings does such a constraint have for other types of GeneralName?
Q3. For constraint subtree GeneralNames of types OtherName, X400name,
EDIPartyName and RegisteredID, which we do not use or understand, is it
permissible to "ignore" these types; that is, to never match them to any
name in an excluded subtree, and to always match them to any name of the
same type (including same OID for OtherName) for permitted subtrees?
Q4. For directoryName constraints, when an RDN in a cert's name contains
more AVAs than are found in the corresponding constraint's RDN, does the
name match the constraint or not? Example, using brackets to denote RDN
boundaries:
constraint name: [C=US] [ST=CA] [O=AOL]
cert name: [C=US] [ST=CA, L=Mountain View] [O=AOL] [OU=...
Does this name match this constraint?
PROPOSAL:
I will shortly attach a patch to this bug that proposes to resolve the above
questions as follows:
A1. Issue 5 above is presumed to be correct. When a nameConstraint contains a
permitted subtree, then subordinate certs may contain ONLY the types of names
given in that permitted subtree. Any name type for which no matching name
type exists in the permitted subtree is not permitted.
A2. Constraints with zero length DNSnames, RFC822names, or URI hostnames are
treated as wildcards, matching any and all names of their respective types.
A3. Constraint GeneralNames of types X400name, EDIPartyName and RegisteredID
are "ignored", not matching any names in excluded subtrees, matching all names
of the same type in permitted subtrees. For constraints of type OtherName,
in excluded subtrees they match no names, and in permitted subtrees, they match
any name with the same type-id OID.
Constraints of other types of GeneralNames, not defined in RFC 3280, always
cause a matching name type to fail the constraint, matching in excluded subtrees
and not matching in permitted subtrees.
A4. As long as all the AVAs in all the respective RDNs find matches in a cert
name, then that name is considered to match, even if the name contsins other
AVAs in those same RDNs. The example in Q5 above is considered to match,
regardless of whether the constraint is in the excluded or permitted subtree.
Your review of the proposed answers above and of the attachments is invited.
Addition to A3 in the above proposal. For IPAddress constraints, the
constraint's GeneralName actually contains an IP address and a subnet mask.
Wildcarding is accomplished using a zero subnet mask. So, IMO, no further
definition of a wildcard for IPAddress constraints is necessary.
Marking P1 for 3.9. Removing dependency on 208038 because the patches for this
bug and that do not overlap.
I decided to have a zero-length IPaddress constraint be a wildcard, too.
There is a cert "in the wild" that seems to expect this. See for an example.
Created attachment 124788 [details] [diff] [review]
patch implements above proposal.
I am requesting that this bug block moz 1.4. My rationalle is as follows.
I believe moz 1.4 will be the last release of mozilla to support encrypted
and/or signed email. This is because, AFAIK, neither firebird nor thunderbird
have any support for users' own personal certs (and private keys) and no such
support is planned. Therefore, the bugs in moz 1.4's secure email will be
around a long time. Taking this fix means moz 1.4's secure email will continue
to be useful even after such time as name constraints become commonplace in CA
certs (which I believe is forthcoming).
Comment on attachment 124788 [details] [diff] [review]
patch implements above proposal
None of my comments should block this patch from checking as is.
It would be nice if the table for compare DNSN2C included a 4th column which
explains which test accomplishes the desired result. Example:
.foo.bar.com foo.bar.com nomatch name->len < constraint< len
.foo.bar.com .foo.bar.com should be added to the table for completeness (the
DNS name in this case is invalid, but the code will match it: offset=0).
Comment on attachment 124788 [details] [diff] [review]
patch implements above proposal
I have the same question about the compareDNSN2C function.
It's not clear to me why we need to handle the case
.foo.bar.com no match
which fails the (name->data[offset - 1] == '.') + (constraint->data[0] == '.')
== 1
test, while many other invalid DNS names (for example,) would match. If we don't need to
handle that case, we can simplify the test to
(name->data[offset - 1] == '.') || (constraint->data[0] == '.').
re: comment 6. Both tests are necessary. We can either combine them in a way
that eliminates at least one invalid DNS name pattern, or in a way that
eliminates no invalid patterns. Is eliminating the one pattern objectionable?
I am working on another version of this patch that changes the meaning of the
value returned by cert_CompareNameWithConstraints from "The name does (or does
not) match one of the constraints" to "the name is (or is not) acceptable for
passing the name constraints validity test". This should simplify the logic
and reduce the number of return paths, which is desirable for the trunk. It
will not be any more or less correct than this existing patch, however, so if
patch 124788 is accepable, I'd like to check it in on the NSS 3.8 branch for moz
1.4.
Nelson wrote:
> Is eliminating the one pattern objectionable?
No, it's not objectionable. I was just wondering
why only that pattern is eliminated. When you eliminate
one pattern but not the others, it makes the code
reviewer wonder whether there is some significance
about that pattern that he missed.
Created attachment 125068 [details] [diff] [review]
prerequisite for patch above
The patch to genname.c in this bug adds a call function SECITEM_ItemsAreEqual.
That function is also vulnerable to a NULL pointer crash, if either item's
data pointer is NULL. This patch corrects that, as well as making that
function faster when item lengths are not equal. This patch is prerequisite
to the patch to genname.c
Created attachment 125073 [details] [diff] [review]
patch v2, from dff -u, pretty unreadable, since this is a rewrite
Like the first version,.
This patch incorporates review feedback. It also removes the temporary arenas
and calls to the copy functions, which were completely unneeded.
Checked in on NSS trunk, rev 1.12.
Created attachment 125074 [details]
patch v2, just the new source code, not a diff, easier to read
This is the new source code found in patch v2. This form should be easier to
read and review, since it is a rewrite, the diff -u form isn't that useful.
Note that attachment 125068 [details] [diff] [review] is still a prerequisite to patch v2, as it was to
patch v1.
Comment on attachment 125068 [details] [diff] [review]
prerequisite for patch above
r=wtc.
Should we fix SECITEM_CompareItem(), too?
Comment on attachment 125068 [details] [diff] [review]
prerequisite for patch above
>+ if (!a->data || !b->data) {
>+ /* avoid null pointer crash. */
>+ return (PRBool)(a->data == b->data);
>+ }
We may want to augment this with an assertion:
PORT_Assert(a->data && b->data);
because at that point both a->len and b->len
are nonzero and it is an error for a SECItem to
have a null 'data' and a nonzero 'len'. This
function has no way to report an error, hence
the suggestion of using an assertion.
Suggested patch for SECITEM_CompareItem:
Index: secitem.c
===================================================================
RCS file: /cvsroot/mozilla/security/nss/lib/util/secitem.c,v
retrieving revision 1.8
diff -u -5 -r1.8 secitem.c
--- secitem.c 6 Jun 2003 04:51:26 -0000 1.8
+++ secitem.c 6 Jun 2003 22:06:47 -0000
@@ -141,10 +141,15 @@
SECITEM_CompareItem(const SECItem *a, const SECItem *b)
{
unsigned m;
SECComparison rv;
+ if (!a || !a->len || !a->data)
+ return (!b || !b->len || !b->data) ? SECEqual : SECLessThan;
+ if (!b || !b->len || !b->data)
+ return SECGreaterThan;
+
m = ( ( a->len < b->len ) ? a->len : b->len );
rv = (SECComparison) PORT_Memcmp(a->data, b->data, m);
if (rv) {
return rv;
Comment on attachment 125073 [details] [diff] [review]
patch v2, from dff -u, pretty unreadable, since this is a rewrite
Please attach an incremental patch that addresses the case
of comparing a directory name with only email address
constraints. Then I will mark this patch review+.
Comment on attachment 125073 [details] [diff] [review]
patch v2, from dff -u, pretty unreadable, since this is a rewrite
Nelson explained to me why the patch correctly handles
the case of comparing a directory name with only email
address constraints.
r=wtc.
Nelson, is it possible for a directory name to have only
email AVAs? That is, all the AVAs of all the RDNs have
the tag SEC_OID_PKCS9_EMAIL_ADDRESS or SEC_OID_RFC1274_MAIL.
If this is possible, and if all the constraints are email
address constraints, then phase 2 may not handle this case
correctly. (All the name constraint checking has been
done in phase 1. Phase 2 should not alter the result.)
It is possible for a directory name to contain only emailaddress type attributes.
(I would say that a well formed directory address should not contain only those
attributes, but it is possible.)
It is possible for a name constraints extension to contain only GeneralNames of
type rfc822name. If a name constraints extension containes a "permitted
subtree" that consists only of rfc822Names, then according to points I5 and A1
above,
no directory name would be permitted, because no directoryname was defined in
the permitted subtree.
Is there some other case being overlooked?
Comment on attachment 125073 [details] [diff] [review]
patch v2, from dff -u, pretty unreadable, since this is a rewrite
Requesting Mozilla 1.4 approval. This is a rewrite of the
cert_CompareNameWithConstraints function.
The risk of this patch is low, as explained below.
1. Logic: the current code does not implement the RFC
correctly in many cases. The new code vastly improves
the conformance to the RFC.
2. Crashes: the current code already has the crash fixes
that we recently checked into the Mozilla 1.4 branch. The
new code takes the same precautions against crashes. So
the risk of crashes should be comparable.
The reward of this patch is that we will significantly improve
our handling of the "name constraints" certificate extensions,
from "very broken" to "mostly compliant". We will address
the remaining non-compliances in future patches.
Created attachment 125608 [details] [diff] [review]
incremental patch, to be applied on top of previous v2 patch
This patch makes a few corrections per discussion with an IETF PKIX luminary.
1. An email constraint consisting of a single dot (".") matches all email
boxes.
2. An empty IPAddress constraint is invalid and matches no IPAddress
generalName.
3. (the biggie). If the constraint's permitted subtree contains no names of a
certain type, then all names of that type are permitted (unless explicitly
excluded in the excluded subtree). This is a reversal of our previous policy.
Comment on attachment 125608 [details] [diff] [review]
incremental patch, to be applied on top of previous v2 patch
1. In compareRFC822N2C, should this be removed?
> if (!constraint->len)
> return SECSuccess;
Since "." is the wildcard, I am wondering if we still
need to treat a constraint of length zero as a wildcard.
2. In CERT_CompareNameSpace, we shouldn't dup the cert
because the caller of this function doesn't destroy the
cert.
>+ if (rv != SECSuccess) {
>+ return (PORT_GetError() == SEC_ERROR_EXTENSION_NOT_FOUND)
>+ ? NULL /* success, space is unconstrained. */
>+ : CERT_DupCertificate(cert); /* failure, some other error */
>+ }
Regarding comment 21, part 1, I think the answer depends on whether we want the
cert chain reported in bug 204555 to pass or not. If we want that chain to
pass the test, then we must interpret a zero-length RFC822 name constraint as
a wildcard.
Regarding comment 21, part 2. I think you've uncovered a cert reference leak,
yet another bug, in the original code.
When returning a non-NULL cert pointer, the pointer should either always be a
new cert reference, or never be a new cert reference. It should either always
or never be proper for the caller of this function to destroy that reference.
Prior to this change, the only way for this function to return a non-NULL cert
pointer was to return the value that is returned by CERT_FindCertByName, which
does bump the ref count on that cert. In other words, CERT_FindCertByName
returns a new cert reference, so I dup'ed the cert handle to ensure that a new
cert reference is returned in this path, too.
As you know, there is another bug filed about the problems with the
CERT_CompareNameSpace API. I think it is appropriate for this function to
remain consistent, returning a new cert reference for all non-null return
values, until that other bug is fixed. When that bug is fixed, there will
no longer be any need to return a new cert reference (since the value returned
will always be one of the arguments passed into the function). At that time,
it would be good to remove the DupCert call. Agreed?
Agreed. Should we fix the cert reference leak by adding
a CERT_DestroyCertificate call to the caller of
CERT_CompareNameSpace?
Comment on attachment 125073 [details] [diff] [review]
patch v2, from dff -u, pretty unreadable, since this is a rewrite
moving approval request forward.
This bug has been fixed in NSS on the trunk for NSS 3.9.
Has there been any negative feedback about this since it landed on the trunk?
How critical is this to get into Mozilla 1.4.1?
No negative feedback has come to me.
Considering the absence of secure email in *bird, I'd surely like to see the
last mozilla release to feature secure email to be as compliant as we can make it.
Comment on attachment 125073 [details] [diff] [review]
patch v2, from dff -u, pretty unreadable, since this is a rewrite
This is not going to make 1.4.1. Please re-request aproval after 1.4.1 ships
if you'd like to get this in for 1.4.2.
Nelson, is this pretty safe for 1.4.2?
Michael, I would not recommend that you try to pick and choose inidivual
patches to NSS for moz 1.4.2. This is because of interdependencies that
may exist bewteen patches in various files. A patch like this is likely to
be "safe" only if you take other patches on which it depends. I do not
recall, now, what patches this patch depends upon, and I do not know
what patches are already in the moz 1.4.x tree.
The NSS group produces various Betas and releases of NSS< and I could
recommend that you consider moving moz 1.4.2 to one of those well defined
points rather than cherry picking individual patches.
clearly wtc felt that this patch should go in 1.4...
Michael, it's been months since I requested 1.4.2 approval.
At this point I agree with Nelson that it is safer for
Mozilla 1.4.2 to upgrade to the next NSS 3.8.x stable release.
If you are interested, we can look into that. (NSS 3.8.3 is
very close to completion.)
Comment on attachment 125608 [details] [diff] [review]
incremental patch, to be applied on top of previous v2 patch
Removing review request. Patch was checked in 6 months ago.
Comment on attachment 125073 [details] [diff] [review]
patch v2, from dff -u, pretty unreadable, since this is a rewrite
We took NSS 3.9 for 1.4.2 | https://bugzilla.mozilla.org/show_bug.cgi?id=208047 | CC-MAIN-2016-44 | refinedweb | 3,005 | 64.1 |
Ask A Question
How Do i add an "Are You Sure?" button in my update controller
Hi
I want add a pop up button that says "Are you sure" when the user clicks on submit ? if it was a destroy mehod it would be something like this method: :delete, data: { confirm: "Are you sure?"} but not sure how to do it on the update method
def update if donation_plan.update(donation_plan_params) redirect_to :back, notice: "Thank you for your Gift Aid Donation" else render :show end end
or do i add this to the show page
= simple_form_for donation_plan, url: my_donations_path do |f| = f.input :giftaid, inline_label: " reclaim Gift Aid of 25p, in every £1 on past donations and all future donations I may make.", label: false %p= f.submit "Submit", class: "btn btn-primary"
I think its this
%p= f.submit "Submit", class: "btn btn-primary", data: { confirm: "Are you Sure"}
Yeah, data-confirm is just a generic attribute you can use on any clickable item, not just delete links. That should work fine. | https://gorails.com/forum/how-do-i-add-an-are-you-sure-button-in-my-update-controller | CC-MAIN-2020-40 | refinedweb | 173 | 71.65 |
File URL Namespace
File URL Namespace
This content is no longer actively maintained. It is provided as is, for anyone who may still be using these technologies, with no warranties or claims of accuracy with regard to the most recent product version or service release. ExOLEDB registers with the OLE DB RootBinder to support both a public and private namespace in local Exchange private and public stores. Both namespaces are prefixed by "". This is staticit cannot be changed by system administration or any other means. The public namespace is:
""
This is for all public folder databases, including the default MAPI public folder database. DomainName specifies the fully qualified domain name for the organization. TopLevelFolder specifies the top folder in a folder tree for a particular MDB. For example:
""
Folder trees outside of the default MAPI public folder are not visible to Microsoft® Outlook® 2000 and prior MAPI clients.
The private namespace is:
""
This namespace is for all private databases. The UserName string is the part of the address string that comes before the "@" of the user's proxy address. For example:
""
Directory Listings
The store will notify the ExOLEDB provider when a new database is created. Also, a directory listing at the "" level will return the list of organizations. A directory listing at the "" will return the list of top level folders for the specified domain or organization, including the static "MBX" string, which represents private mailboxes. A directory listing at the "" will return an empty listto preserve mailbox privacy. However, a directory listing of a specific user, such as "", will give a list of the top-level folders within the IPM sub-tree (for example, Inbox, Outbox, Deleted Items, and so on). You can change to any of these directories and continue to navigate down the folder hierarchy.
NON-IPM Sub-trees
Access to a user's NON-IPM sub-tree is via the URL "". The NON-IPM sub-tree is not visible in a directory listing of a user mailbox. The only way to access the NON-IPM sub-tree is to explicitly change the directory to the NON-IPM folder. After you bind to the NON-IPM folder, you can get a directory listing of the folders under the NON-IPM folder for a particular user.
Access to the top level of a public NON-IPM sub-tree is similar via the URL "". The behavior is the same as described for private databases.
Examples
Here are some additional examples for further clarification:
- Browse to
Result: List of top-level folders plus an MBX folder.
- Browse to
Result: List of folders under TopLevelFolder1.
- Browse to where DomainName doesn't exist
Result: MBX folder.
- Browse to where DomainName does or doesn't exist
Result: Empty directory.
- Browse to where DomainName doesn't exist
Result: Command fails.
- Browse to
Result: Lists the top-level folders in the NON-IPM-SUBTREE for this user. The NON-IPM-SUBTREE is not visible in a normal directory listing of //./backofficestorage/DomainName/MBX/UserName. You must specify the NON-IPM-SUBTREE explicitly to get its contents.
- Browse to
Result: Lists the top-level folders in the NON-IPM-SUBTREE for TopLevelFolder1. | https://msdn.microsoft.com/en-us/library/ms876443(v=exchg.65).aspx | CC-MAIN-2015-11 | refinedweb | 527 | 55.84 |
This signal is delivered to a process when its terminal size changes. The default action is to ignore the signal. Usually a process that cares about displaying information on the screen would use the resize information to redraw the screen and update its idea of how large the screen is.
Now, the real question here, is how do you know what the new size of the screen is? the answer is the TIOCGWINSZ ioctl(2).
an example:
#include <termios.h> #include <sys/ioctl.h> #include <fcntl.h> #include <stdio.h> int main(int argc,char **argv) { struct winsize ws; int fd=open("/dev/tty",O_RDWR); if (ioctl(fd,TIOCGWINSZ,&ws)!=0) { perror("ioctl(/dev/tty,TIOCGWINSZ)"); return 1; } printf("rows %i\n",ws.ws_row); printf("cols %i\n",ws.ws_col); return 0; }
If bash(1) (or zsh(1)...) is the terminal's controlling process when a terminal is resized, it will automatically update the $LINES and $COLUMNS variables.
My xterm sometimes dies if I resize it while IRC is running in it. Presumably, what's happening is that during the resize, the irc client tries to write to the bottom of the xterm and the signal from xterm doesn't get handled in time, causing my xterm (version 156) to "disappear" when the ncurses(3x)? library tries to write to a bit of the xterm that's no longer there.
lib/main.php:944: Notice: PageInfo: Cannot find action page | http://wiki.wlug.org.nz/SIGWINCH?action=PageInfo | CC-MAIN-2015-18 | refinedweb | 240 | 68.57 |
World's Best AI Learning Platform with profoundly Demanding Certification Programs
Designed by IITian's, only for AI Learners.
how to extraction feature from text data using Bag-of-Word model?
A bag-of-words model, or BoW for short, is a way of extracting features from text for use in modeling, such as with machine learning algorithms and to represent text as numerical feature vectors.
A bag-of-words is a representation of text that describes the occurrence of words within a document. It involves two things:
It is called a bag of words, because any information about the order or structure of words in the document is discarded. The model is only concerned with whether known words occur in the document, not where in the document.
The idea behind the bag-of-words model is quite simple and can be summarized as follows:
Implement Bag of Words using Python Keras
from keras.preprocessing.text import Tokenizer
text = [ 'There was a man', 'The man had a dog', 'The dog and the man walked', ]
Fit a Tokenizer on the text
model = Tokenizer() model.fit_on_texts(text)
Get.]]
Chat now for any query | https://www.insideaiml.com/questions/What-is-a-Bag-of-Words-Model-%3F-23 | CC-MAIN-2021-31 | refinedweb | 191 | 59.94 |
#include <hallo.h> * Wichert Akkerman [Fri, Jan 10 2003, 12:47:19PM]: > Previously Eduard Bloch wrote: > > NOTE: This is not another flamewar about CPU optimised packages, there > > are packages where the optimisation makes sense and Camms method > > (#120418) is already used, successfully. > > We have already indicated we want to use another method to do that. Where? I do not remember a such decission, nor can I find a consens from a productive discussion, nor do you provide any hints in the last message. So what's up? Gruss/Regards, Eduard. -- Microsoft: Where do you want to go today? Linux: Where do you want to go tomorrow? BSD: Are you guys coming or what? | https://lists.debian.org/deity/2003/01/msg00075.html | CC-MAIN-2014-10 | refinedweb | 113 | 64.3 |
Quick way to store db connection details out of sources
Project Description
Quick way to store db connection details out of sources.
import configdb import ooop ooop.OOOP( configdb.configdb(required=”user pwd dbname”) )
By default, attributes are taken from a YAML file at system defined user configuration location. From the key (profile) ‘default’. If the file is not there, it is created with a yaml file with null values. You have to fill them.
To change the profile you can use the ‘profile’ keyword or
DBCONFIG_PROFILE environ.
Release history Release notifications
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/configdb/ | CC-MAIN-2018-17 | refinedweb | 113 | 67.76 |
DHCP Service LoadBalancing Scheduler¶
Launchpad blueprint:
Problem Description¶
With the current Agent Management and Scheduler extensions we are not able to effectively distribute the DHCP namespaces when there are multiple Network Nodes available. Existing scheduler(Chance Scheduler) does not load balance the DHCP namespace properly across multiple Network Nodes based on the network load of DHCP agents.
Chance scheduler schedules DHCP namespaces unevenly, which is not suitable when large number of namespaces are created. Chance scheduler schedules around 90% of DHCP namespaces on a single Network Node and remaining 10% of namespaces are distributed across remaining Network Nodes.
This blueprint attempts to address this issue by proposing a new DHCP agent scheduler which will equally distribute the DHCP namespaces across multiple Network Nodes based on DHCP namespace count of the DHCP agent.
Proposed Change¶
In the Neutron server, we have written a DHCP agent scheduler which will keep track of the DHCP services running on the Network Nodes based on the “network_scheduler_driver” configuration of ‘neutron.conf’.
‘LeastNetworksScheduler’ type of DHCP scheduler will be triggered only if the “network_scheduler_driver” parameter is set to LeastNetworksScheduler. The default value for this flag is ‘ChanceScheduler’.
When the new network is created the DHCP agent scheduler will fetch the dhcp agents with minimum number of networks hosted on it. And schedules the newly created DHCP namespace service on those minimally loaded DHCP agents.
Here the dhcp agent load is decided based on the number of DHCP namespaces which are already created on the dhcp agent. ‘LeastNetworksScheduler’ will return as many number of minimally loaded DHCP agents as mentioned in ‘dhcp_agents_per_network’ configuration parameter.
In this implementation the DHCP namespace count is calculated based on report status messages received from DHCP agents.
Data Model Impact¶
A new column named “load” will be added in the “Agents” table. This column contains the Agent load based on the DHCP namespaces count which are hosted on that particular agent. The table will be sorted based on the load and least loaded top n Agents will be supplied for scheduler to host the DHCP namespaces.
Other Deployer Impact¶
This feature can be enabled by setting the flag network_scheduler_driver = LeastNetworksScheduler which is configurable from “neutron.conf”. If the flag is set to ChanceScheduler, none of this code will be executed.
Implementation¶
Work Items¶
- Highlevel tasks include:
Refactor the code of ChanceScheduler and introduce new class named LeastNetworksScheduler This activity includes moving the common methods to parent class named DHCPScheduler and inheriting those methods in ChanceScheduler and LeastNetworksScheduler classes.
LeastNetworksScheduler This class will have the implementation to get the dhcp agents which is hosting least number of networks. Subsequently it will schedule the respective DHCP namespace services on those minimally loaded DHCP agents.
Testing¶
The code will be covered with unit tests.
Documentation Impact¶
Current documentation will have to be enhanced to add the content specific to DHCP service load balancing scheduler.
User Documentation¶
Scheduler section of Openstack Configuration Reference document needs to be modified.
References¶
The code has been submitted for review at the below link | https://specs.openstack.org/openstack/neutron-specs/specs/kilo/dhcpservice-loadbalancing.html | CC-MAIN-2020-10 | refinedweb | 504 | 51.18 |
curl-library
Re: libcurl eas built whitout LIBSSH2
Date: Wed, 23 Apr 2008 09:14:01 -0700
Stephen Collyer wrote:
> OK, but AFAICS, by default, VS expects foo.lib to be an import library
> for foo.dll.
That's just not true at all. The name of the DLL is encoded into the
import library; the filename is totally irrelevant. For example, the
name of the import lib in the other curl binary packages is e.g.
libcurl_imp.lib or libcurldll.lib. You can name it whatever you want,
it doesn't matter.
> Right. However, a .lib can also be a static library, no ?
> Is there a tool to tell me whether or not a given .lib is
> an import or static library ?
Yes, these .lib files (both static libraries and import libraries) have
the same file format as an 'ar' archive -- in the GNU toolchain (i.e.
MinGW/Cygwin) they are named as libcurl.a and libcurl.dll.a to make this
explicit. You can tell an import library using 'nm', e.g.:
$ if nm -AP libcurl.a | grep -q ' I '; then echo "import library"; else
echo "static library"; fi
static library
I don't know what the MSVC equivalent is, probably dumpbin.
> Right, that's useful to know to those of us who are hard-of-Windows.
> Is it possible to generate an import library from a .def file and
> a .dll ? It sounds like there should be all the required info
> available.
You can create an import lib from a .def file using 'dlltool' (GNU
tools) or 'lib' (MSVC tools.) In fact you can link directly to a .dll
without even needing an import lib if you're using the GNU toolchain.
The DLL itself is not needed; the .def file lists all the exports of the
DLL, that's its entire function. You can create a .def file from an
existing .dll as well. The GNU toolchain has the 'pexports' command for
this, I don't know what the MSVC equivalent is.
Brian
Received on 2008-04-23 | https://curl.haxx.se/mail/lib-2008-04/0420.html | CC-MAIN-2019-26 | refinedweb | 342 | 86.81 |
Creating an archive of Urban Institute publications using the Box API and Python
At the Urban Institute, we’ve been working for more than 50 years to produce evidence-based economic and social policy research. A good portion of that research is available on our main website, urban.org, which launched in October 1996, but a significant portion pre-dates the website. Up to now, older research has been available only in paper form. When we moved our office to 500 L’Enfant Plaza in 2019, we cleared out a lot of paper. But in the spirit of Marie Kondo, we wanted to save the publications and papers that sparked joy. As part of that effort, our communications team and researchers scanned hundreds of these older publications and converted them into PDF documents.
The resulting archive project was a collaborative internal effort of the Tech and Data group and the communications group, with support from our library staff. Eventually, we scanned and archived hundreds of these PDF documents in a single archived Box folder, which was a big win, but given the nature of scanned PDF documents we needed a way to make these documents accessible and searchable by Urban researchers who might be interested in them. Also, we wanted to allow researchers to contribute artifacts to the archive going forward to ensure that publications we missed could be included at any time.
In this post, I’ll describe part of the technical effort to ensure the documents were easily searchable, where we leveraged the Box application programming interface (API) and Box Python SDK to access the documents stored in Urban’s Box instance, added metadata, and created a basic search interface. Finally, I’ll supply an example program that uses the Box API to read and write custom metadata, for those of you who may want to explore further.
Why use metadata?
First, why attach metadata to objects in a file system like Box? With good metadata, a collection of files that does not have readable text, such as old photos and images of scanned documents, goes from being difficult to search and classify to being easy to search and classify. For an example of useful metadata, I use Google Photos to save virtually all the pictures I take with my iPhone. My digital pictures contain a default set of metadata (in EXIF, or exchangeable image file format), including geographic location. So I can ask Google Photos to show me all the pictures I’ve taken in a certain place, or a list of all the places in which I’ve taken pictures.
Recently, I started scanning some old family slides. Those images don’t contain geolocation information, so there would be no way to find a scanned slide of the family trip to Yellowstone Park using location metadata — I would just have to look at each image and annotate each picture with metadata on location to enable such a search. In creating the Urban Archive, we had roughly the same situation — scanned PDFs but no reliable metadata (e.g., title, author, publication date, abstract, topic). Luckily, Box gives us the data structures we need to make our older, nondigital publications ready for search by adding metadata.
In another piece of good fortune, Urban still maintains the old databases that have all the data we need about the scanned publications. So much of the work in the project involved developing methods for attaching these metadata by matching each publication with a record in our older databases. The first step was to use optical character recognition (OCR) to translate the scanned text into machine-readable text. Then, using text processing tools such as SequenceMatcher in Python's difflib, we could (partially) automate the process of matching records with the scanned PDF files by looking at the (usually comprehensible) OCR'ed text inside the PDFs and comparing it with titles and other columns in the databases. SequenceMatcher is a powerful tool, and it allowed us to find the best "fuzzy" match between database fields such as title and authors and strings found in the PDF text.
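The matching step can be sketched with the standard library alone. Given a noisy title string pulled from a publication's OCR'ed text, we score it against every candidate title and keep the one with the highest similarity ratio (the database titles below are made up for illustration, not Urban's actual records):

```python
from difflib import SequenceMatcher

# Hypothetical titles pulled from the legacy publications database.
db_titles = [
    "Housing Discrimination in Urban America",
    "The Earned Income Tax Credit and Labor Supply",
    "Medicaid Expansion and State Budgets",
]

def best_match(ocr_title, candidates):
    """Return (ratio, title) for the candidate most similar to the OCR'ed title."""
    scored = [
        (SequenceMatcher(None, ocr_title.lower(), c.lower()).ratio(), c)
        for c in candidates
    ]
    return max(scored)  # tuples compare by ratio first, so the best score wins

# OCR output is noisy: dropped letters, stray characters, odd casing.
ratio, title = best_match("HOUSNG DlSCRIMINATION IN URBAN AMERlCA", db_titles)
```

In practice we scored several fields (title, authors, and so on) and accepted a match automatically only above a similarity threshold, routing everything else to manual review, which is why the automation was only partial.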
Once correlated with a record, we could easily write information like title, authors, abstract, topics, and publication date into the Box metadata, in a process I describe in the following section. Box then provides us a basic search on the metadata as part of its web interface. We then uploaded a copy of the metadata to our TDNet library service (many thanks to their helpful staff), which provided a search interface that is better suited for publications. This way, the Box metadata becomes the reference system for the Archive, and TDNet supplies a search interface that integrates with our Library website. As with finding the slides of our Yellowstone trip, I can now ask for all publications tagged with the topic “Housing Discrimination,” or whatever other keywords were tagged in our old databases, or any of the other associated metadata fields we had available.
Example program — reading and writing metadata with the Box API
I don’t want to describe everything that went into the archive effort, but I will summarize the efforts described above and share some sample code for reading and writing metadata in Box. The Box API is a powerful tool that allows you to manipulate metadata for Box folders and files, and it was just what we needed. And the Box Python SDK gives you a Pythonic way to use the API endpoints. Here is what we’ll cover:
- Getting started
- Creating a Box Metadata Template and applying it to a Box folder
- Entering metadata into the form for our documents
- Installing the Box SDK for Python
- Reading and writing metadata from a document in Box using methods supplied by the Python Box SDK package
Getting started
If you want to try out the code, you can use a Box instance you have access to, or you can sign up for a free Box account (10 GB limit at this writing) and set up test documents. The free account gives you all you need to create a Box app and a developer token for the app, which is what you need to use the API. And the nice thing about having your own account is that you are the administrator. Python will be fine from any platform: Mac, Windows, Linux, or a Jupyter Notebook environment like PythonAnywhere. You can also use the Box Postman collection, a great way to try out your API calls. Once you get something to work in Postman, you can use the Python code generated by Postman in your own program. The Box API documentation also provides example code for cURL, .NET, Python, Node, and iOS for any given API endpoint.
Creating a Box Metadata Template and applying it to a Box folder
To have metadata to work with, we first need to create a metadata template and then apply that template to a Box folder. You can find out how to do this at the link below. You need to either be an administrator in Box or ask another Box administrator to help you.
Customizing Metadata Templates
Once the metadata template is created, the metadata fields are available to apply to a folder in the Box interface, and you can enter or write metadata to a given file under that folder.
Entering metadata into the form for our documents
Here is a sample document we uploaded to the Box folder “Urban Institute Historical Publications.” The folder has had a metadata template applied to it, which means that any document we upload to that folder will have the custom metadata fields available for the document. Box also has default metadata — that is, properties for all files and folders. To provide unique identification, Box assigns all folders a folder ID, and all files a file ID.
For the example, we have a slimmed-down set of custom metadata fields: PubID (a unique ID number from our publication databases), title, authors, publication date, and abstract, all of which are text fields except publication date. In the screenshot here, we have filled in the template with data appropriate to the artifact. To be able to edit the metadata, you need to first click on the “M” button to display the metadata, and then click on the pencil icon to edit. A Save button (not shown) lets you save your entries. At this point, we can manually enter metadata for any publication document.
Now that we have a Box folder, at least one Box document, and some metadata, our next task will be to read the metadata for a file using Python and the Box SDK, and then write some new metadata using the API, instead of doing it manually. The reference for writing metadata is here: Box API endpoint for updating metadata on a file.
Installing the Box SDK for Python, and accessing your Box instance
Directions for this step are available here, but basically, you just need to use pip and give the following command:
pip install boxsdk
Or, on a platform like PythonAnywhere:
pip install --user boxsdk
Then you will have the many methods of the Box Python SDK available to you by importing the package with
from boxsdk import DevelopmentClient
To have a way to authenticate to Box, you need to first create a Box app, using the Developer Console. You can find out how to do that here.
To create a client connection to your Box instance, you need a Developer Token, which is available on your Box Developer Console. The Developer Token lasts for only an hour or so, but it’s easy to revoke or refresh as needed. For longer-term applications, authenticate using OAuth2.
Here is an overview of the program “run_metadata_example.py.” The Python code is in a Gist that appears below.
- Import pubsfun_examples, which contains two utility routines
get_my_fields(boxmeta): Return a dictionary of fields from the Box metadata
get_my_props(boxprops): Return a dictionary of basic Box properties from the Box metadata
- Connect to Box by calling DevelopmentClient, which prompts for the token
- For a given file and folder (supply the IDs), get the Box properties and the custom metadata using client.file and print them out, then write new metadata using .update
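The Gist with the full program is not reproduced in this extract, so here is a minimal sketch of what the two utility routines might look like. It assumes (as Box does) that a metadata instance is a dictionary in which system properties are prefixed with "$" and custom template fields are not; the function names come from the overview above, but the bodies are my reconstruction, and the template name in the commented SDK call is a placeholder.

```python
# Sketch of pubsfun_examples: split a Box metadata instance dict into
# custom template fields vs. built-in ($-prefixed) Box properties.

def get_my_fields(boxmeta):
    # Custom template fields, e.g. PubID, title, authors, ...
    return {k: v for k, v in boxmeta.items() if not k.startswith("$")}

def get_my_props(boxprops):
    # Built-in Box properties, e.g. $id, $type, $template, $scope
    return {k: v for k, v in boxprops.items() if k.startswith("$")}

# Hypothetical SDK usage (not run here; needs a Developer Token):
# from boxsdk import DevelopmentClient
# client = DevelopmentClient()
# md = client.file(file_id).metadata(scope="enterprise", template="publications").get()
# print(get_my_fields(md))

meta = {"$id": "123", "$template": "publications",
        "title": "An Archive", "PubID": "42"}
print(get_my_fields(meta))
```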
When you’re using the DevelopmentClient to connect, you will get a lot of information in JSON (JavaScript Object Notation) when you call the API endpoints. This is handy when you’re debugging or trying to find out the real names of metadata fields, the enterprise scope, and the template name. Once you have everything working and switch to an OAuth2 connection, your program output will be less verbose. Also, to write metadata to your files, you will need to authorize your Box app with the proper permissions. Details are available here:
To conclude
Box proved to be a great solution for us as a content management system for the Urban Archive. As we were already using Box as our cloud-based file system and TDNet for our library service, the project had no need for new software acquisition and had zero budget impact in that respect. The archive project has been a great learning experience for making use of the Box API, and we hope to use it in other areas where our content is unstructured and could benefit from being organized with good metadata.
Digital preservation of your content is worth thinking about. But don’t take my word for it; ask internet pioneer Vint Cerf. The Urban Archive project is counting on archival PDF documents to preserve our publications and artifacts, which is great as long as you think of the publications as computer-readable versions of print documents. Other publications we are making these days at Urban, however, are more dynamic. How do we preserve these as hyperlinks deteriorate, operating systems change, software is updated, and servers are retired? This will be our next big challenge.
Recommended Links
- Box Postman Collection:
- Work with metadata (in the Box API):
- Creating Metadata Templates in Box:
- Box API reference:
- Authentication setup for Box developers:
- Getting started with the Box Python SDK:
- Box Python SDK in GitHub:
- The Role of Archives in Digital Preservation:
Want to learn more? Sign up for the Data@Urban newsletter. | https://urban-institute.medium.com/creating-an-archive-of-urban-institute-publications-using-the-box-api-and-python-8f377fc1f195 | CC-MAIN-2022-27 | refinedweb | 2,070 | 55.58 |
SMTP relay.....Email Need to send by an application
Hello team,

I have a testing environment running Exchange 2007, which can send and receive only internally. There is no Internet connection in this setup, so external emails cannot be received and no one can send email externally. I would like to set up or configure an SMTP relay; I am looking for a program or application which can send email to the administrator or to any mailboxes. Is there any application or program available for free download? Could you please advise how to achieve this?

Example: a payroll application, an IT application, etc.: once the user finishes and submits, a mail should be triggered to the assigned mailbox.

Hope you all understand the requirement.
July 26th, 2010 12:44pm
Hi,

If you want to let some server/application relay through your Exchange 2007, you need to create a connector for that:

new-ReceiveConnector -Name 'Relay Connector' -Usage 'Custom' -Bindings '0.0.0.0:25' -RemoteIPRanges 'theserver/applicationip' -Server 'exchangeservername'

Get-ReceiveConnector -Identity 'Relay Connector' | Add-ADPermission -User "NT AUTHORITY\ANONYMOUS LOGON" -ExtendedRights "ms-Exch-SMTP-Accept-Any-Recipient"

Jonas Andersson MCTS: Microsoft Exchange Server 2007/2010 | MCITP: EMA 2007/2010 | MCSE/MCSA Blog:
Free Windows Admin Tool Kit Click here and download it now
July 26th, 2010 12:57pm
Hi Jonas. Thanks, but I really need a sample application.
July 26th, 2010 3:05pm
Hi,

A sample relay SMTP application (C#):

1. Create a Windows Application in VS2005.

2. Add a button object to the form. Double-click the button and input the following code:

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Text;
using System.Windows.Forms;
using System.Net.Mail;
using System.Net;

namespace Relay
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
        }

        private void button1_Click(object sender, EventArgs e)
        {
            try
            {
                // to
                MailMessage mailMsg = new MailMessage();
                mailMsg.To.Add("administrator@genli.lab"); // Send the message to genli.lab

                // from
                MailAddress mailadd = new MailAddress("administrator@hotmail.com");
                mailMsg.From = mailadd;
                mailMsg.Subject = "Hi, this is a relay email";
                mailMsg.Body = "Can you get this email";

                // Init SMTP client and send; thoml.gen.com is the relay SMTP server.
                SmtpClient smtpc = new SmtpClient("thoml.gen.com", 25);
                smtpc.Send(mailMsg);
            }
            catch (Exception ex)
            {
                Console.WriteLine("Exception caught in button1_Click");
                Console.WriteLine(ex.ToString());
            }
        }
    }
}

If there is anything unclear, please feel free to let me know. Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread. Thanks
Free Windows Admin Tool Kit Click here and download it now
July 28th, 2010 5:55am
Thanks Gen Lin. Here is my test application, which I created myself for email relay (SMTP). The application's purpose is for IT services: if any user in the organization has any IT-related problem, they should open the application below and log a complaint to the administrator. "User name" is the name of the user raising the complaint, "email address" is the email ID of that user, and "nature of the problem" is whatever problem the user is facing.

Example
User: John Martin
Email ID: martinj@xxx.com
Nature of the problem: Mouse not working
Command button: Submit to I.T

Once the user clicks "Submit to I.T", an email from martinj@xxx.com to administrator@xxx.com has to be sent with the nature of the problem, so that the administrator can act on it. Hope this gives you a better understanding.

1. I opened an Excel sheet (MS Excel 2007) and my design is like below:
User Name ------------ inserted text box
Email Address ---------- inserted text box
Nature of the Problem ---------- inserted text box
and lastly a COMMAND BUTTON ------> to send the email

I saved the Excel file as an HTM file (IT.HTM) and put it in a location on the hard drive. Then I opened IIS Manager on my Exchange Server 2007, created a new website and pointed it at the above IT.HTM. Everything is fine so far; I am able to browse the HTM file. But I am stuck on the following:
1. I am unable to type anything in the text boxes.
2. What macro or program should I write for the command button to send the email?
3. How do I achieve this?
4. Am I doing this in a complicated way? Is there a better approach? Please suggest.
August 8th, 2010 8:35am | http://www.networksteve.com/exchange/topic.php/SMTP_relay.....Email_Need_to_send_by_an_application/?TopicId=4171&Posts=3 | CC-MAIN-2019-13 | refinedweb | 781 | 59.8 |
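For what it's worth, the "submit triggers a mail" part can also be done with a short script relaying through the Exchange receive connector from the earlier reply. This is only a sketch: the addresses are the placeholders from the posts above, and the server name in the commented send step is an assumption.

```python
import smtplib
from email.message import EmailMessage

def build_ticket_mail(user_name, user_email, problem):
    # Compose the complaint mail as described: from the user,
    # to the administrator mailbox, with the nature of the problem.
    msg = EmailMessage()
    msg["From"] = user_email
    msg["To"] = "administrator@xxx.com"
    msg["Subject"] = "IT complaint from %s" % user_name
    msg.set_content("Nature of the problem: %s" % problem)
    return msg

msg = build_ticket_mail("John Martin", "martinj@xxx.com", "Mouse not working")
print(msg["Subject"])  # IT complaint from John Martin

# Relaying through the Exchange receive connector (not run here):
# with smtplib.SMTP("exchangeservername", 25) as smtp:
#     smtp.send_message(msg)
```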
Introduction
Since version 2019.2, InterSystems IRIS has provided their Native API for Python as a high-performance data access method. The Native API allows you to directly interact with the native IRIS data structure.
Globals
As InterSystems developers, you’re likely already familiar with the globals. We’ll review the basics in case you’d like a refresher, but feel free to skip ahead to the next section.
InterSystems IRIS uses globals to store the data. A global is a sparse array which consists of nodes that may or may not have a value and subnodes. The following is an abstract example of a global:

a
├── b = d
│   ├── e
│   └── f
└── c
    └── g = h
In this example, a is a root node, referred to as the global name. Every node has a node address that consists of the global name and one or multiple subscripts (names of subnodes). a has subscripts b and c; the node addresses of those nodes are a->b and a->c. The nodes a->b and a->c->g have a value (d and h), while the nodes a->b->e and a->b->f are valueless. The node a->b has subscripts e and f.
An in-depth description of this structure can be found in the InterSystems book "Using Globals".
Reading and Writing to the Global
The Native Python API allows direct reading and writing of data to the IRIS global. The irisnative package is available on GitHub — or if InterSystems IRIS is locally installed on your machine, you’ll find it in the dev/python subdirectory of your installation directory.

The irisnative.createConnection function lets you create a connection to IRIS, and the irisnative.createIris function gives you an object from this connection with which we can manipulate the global. This object has get and set methods to read/write from/to the global, and a kill method to delete a node and its subnodes. It also has an isDefined method, which returns 0 if the requested node does not exist; 1 if it has a value, but no descendants; 10 if it is valueless and has descendants; or 11 if it has a value and descendants.
import irisnative

conn = irisnative.createConnection("127.0.0.1", 51773, "USER", "<user>", "<password>")
iris = irisnative.createIris(conn)

iris.set("value", "root", "sub1", "sub2")  # sets "value" to root->sub1->sub2
print(iris.get("root", "sub1", "sub2"))
print(iris.isDefined("root", "sub1"))
iris.kill("root")
conn.close()
It also has an iterator method to loop over subnodes of a certain node. (Usage will be demonstrated in the next section.)
For a full description of each method, refer to the API documentation.
San Francisco GTFS Transit Data Files
Storing the data in global
The General Transit Feed Specification (GTFS) is a format for public transportation schedules and routes. Let’s see how we can use the IRIS Native API to work with San Francisco GTFS data from June 10, 2019.
First, we will store the information from the data files in IRIS global. (Not all files and columns will be used in this demo.) The files are in CSV format, where the first row shows the column names and all other rows contain the data. In Python, we will start with the necessary imports and establishing a connection to IRIS:
import csv
import irisnative

conn = irisnative.createConnection("127.0.0.1", 51773, "USER", "<user>", "<password>")
iris = irisnative.createIris(conn)
Based on the column names and data, we can construct a sensible tree structure for each file and use iris.set to store the data in the global.

Let’s start with the stops.txt file, which contains all public transport stops in the city. From this file, we will only use the stop_id and stop_name columns. We will store them in a global named stops within a tree structure with one layer of nodes, with the stop IDs as subscripts and the stop names as node values. So our structure looks like stops -> [stop_id]=[stop_name]. (For this article, I’ll use square brackets to denote when a subscript is not literal, but instead a value read from the data files.)
with open("stops.txt", "r") as csvfile:
    reader = csv.reader(csvfile)
    next(reader)  # Ignore column names
    # stops -> [stop_id]=[stop_name]
    for row in reader:
        iris.set(row[6], "stops", row[4])
csv.reader returns an iterator of lists that hold the comma-separated values. The first line contains the column names, so we will skip it with next(reader). We will use iris.set to set the stop name as the value of stops -> [stop_id].
Next is the routes.txt file, of which we will use the route_type, route_id, route_short_name and route_long_name columns. A sensible global structure is routes -> [route_type] -> [route_id] -> [route_short_name]=[route_long_name]. (The route type is 0 for a tram, 3 for a bus, and 5 for a cable car.) We can read the CSV file and put the data in the global in exactly the same way.
with open("routes.txt", "r") as csvfile:
    reader = csv.reader(csvfile)
    next(reader)  # Ignore column names
    # routes -> [route_type] -> [route_id] -> [route_short_name]=[route_long_name]
    for row in reader:
        iris.set(row[0], "routes", row[1], row[5], row[8])
Every route has trips, stored in trips.txt, of which we will use the route_id, direction_id, trip_headsign and trip_id columns. Trips are uniquely identified by their trip ID (which we will later see in the stop times file). Trips on one route can be separated into two groups based on their direction, and the directions have head signs associated with them. This leads to the tree structure trips -> [route_id] -> [direction_id]=[trip_headsign] -> [trip_id].

We need two iris.set calls here — one to set the value of the direction ID node, and one to create the valueless node of the trip ID.
with open("trips.txt", "r") as csvfile:
    reader = csv.reader(csvfile)
    next(reader)  # Ignore column names
    # trips -> [route_id] -> [direction_id]=[trip_headsign] -> [trip_id]
    for row in reader:
        iris.set(row[3], "trips", row[1], row[2])
        iris.set(None, "trips", row[1], row[2], row[6])
Lastly, we will read and store the stop times. They’re stored in stop_times.txt and we will use the stop_id, trip_id, stop_sequence and departure_time columns. A first option could involve using stoptimes -> [stop_id] -> [trip_id] -> [departure_time], or if we want to keep the stop sequence, stoptimes -> [stop_id] -> [trip_id] -> [stop_sequence]=[departure_time].
with open("stop_times.txt", "r") as csvfile:
    reader = csv.reader(csvfile)
    next(reader)  # Ignore column names
    # stoptimes -> [stop_id] -> [trip_id] -> [stop_sequence]=[departure_time]
    for row in reader:
        iris.set(row[2], "stoptimes", row[3], row[0], row[4])
Querying the Data Using the Native API
Next, our goal is to find all departure times for the stop with the given name.
First, we retrieve the stop ID from the given stop name, then we will use that ID to find the relevant times in the stop times.

The iris.iterator("stops") call lets us iterate over the subnodes of the stops root node. We want to iterate over the pairs of subscripts and values (to compare the values with the given name, and immediately know the subscript if they match), so we call .items() on the iterator, which sets the return type to (subscript, value) tuples. We can then iterate over all these tuples and find the right stop.
stop_name = "Silver Ave & Holyoke St"
iter = iris.iterator("stops").items()
stop_id = None
for item in iter:
    if item[1] == stop_name:
        stop_id = item[0]
        break
if stop_id is None:
    print("Stop not found.")
    import sys
    sys.exit()
It is worth noting that looking up a key by its value through iteration is not very efficient if there are a lot of nodes. One way to avoid this would be to have another array, where the subscripts are the stop names and the values are the IDs. The value --> key lookup would then consist of one query to this new array.
Alternatively, you could use the stop name as identifier everywhere in your code instead of the stop ID -- the stop name is unique as well.
As you can see, if we have a significant number of stops, this search can take a while; it is also known as a "full scan". But we can take advantage of globals and build an inverted array where the names are keys and the IDs are values.
iter = iris.iterator("stops").items()
for item in iter:
    iris.set(item[0], "stopnames", item[1])
Having the stopnames global, where the index is the name and the value is the ID, changes the code above that finds the stop_id by name into the following, which runs without a full-scan search:
stop_name = "Silver Ave & Holyoke St"
stop_id = iris.get("stopnames", stop_name)
if stop_id is None:
    print("Stop not found.")
    import sys
    sys.exit()
At this point, we can find the stop times. The subtree stoptimes -> [stop_id] has trip IDs as subnodes, which have the stop times as subnodes. We are not interested in the trip IDs — only the stop times — so we will iterate over all trip IDs and collect all stop times for each of them.
all_stop_times = set()
trips = iris.iterator("stoptimes", stop_id).subscripts()
for trip in trips:
    all_stop_times.update(iris.iterator("stoptimes", stop_id, trip).values())
We are not using .items() on the iterator here; instead we use .subscripts() and .values(), because the trip IDs are subscripts (without associated values), and on the bottom layer ([stop_sequence]=[departure_time]) we are only interested in the values, the departure times. The .update call adds all items from the iterator to our existing set. The set now contains all (unique) stop times:
for stop_time in sorted(all_stop_times):
    print(stop_time)
Let’s make it a little more complicated. Instead of finding all departure times for a stop, we will find only the departure times for a stop on a given route (both directions), where the route ID is given. The code to find the stop ID from the stop name can be kept in its entirety. Then, all trip IDs on the given route will be retrieved. These IDs are then used as an extra restriction when retrieving departure times.

The subtree of trips -> [route_id] is split into two directions, which have all trip IDs as subnodes. We can iterate over the directions as before, and add all of the directions’ subnodes to a set.
route = "14334"
selected_trips = set()
directions = iris.iterator("trips", route).subscripts()
for direction in directions:
    selected_trips.update(iris.iterator("trips", route, direction).subscripts())
As a next step, we want to find the values of all subnodes of stoptimes -> [stop_id] -> [trip_id] where [stop_id] is the retrieved stop ID and [trip_id] is any of the selected trip IDs. We iterate over the selected_trips set to find all relevant values:
all_stop_times = set()
for trip in selected_trips:
    all_stop_times.update(iris.iterator("stoptimes", stop_id, trip).values())

for stop_time in sorted(all_stop_times):
    print(stop_time)
A final example shows the usage of the isDefined function. We will expand on the previously written code: instead of hardcoding the route ID, the short name of a route is given, and the route ID has to be retrieved based on that. The nodes with the route names are on the bottom layer of the tree. The layer above contains the route IDs. We can iterate over all route types, then over all route IDs, and if the node routes -> [route_type] -> [route_id] -> [route_short_name] exists and has a value (isDefined returns 1), then we know that [route_id] is the ID we’re looking for.
route_short_name = "44"
route = None
types = iris.iterator("routes").subscripts()
for type in types:
    route_ids = iris.iterator("routes", type).subscripts()
    for route_id in route_ids:
        if iris.isDefined("routes", type, route_id, route_short_name) == 1:
            route = route_id
if route is None:
    print("No route found.")
    import sys
    sys.exit()
The code serves as a replacement for the hardcoded route = "14334" line.
When all IRIS operations are done, we can close the connection to the database:
conn.close()
Next Steps
We’ve covered how the native API for Python can be used to access the data structure of InterSystems IRIS, then applied to San Francisco public transport data. For a deeper dive into the API, you can visit the documentation. The native API is also available for Java, .NET and Node.js.
Great article.
Can Native API call ObjectScript class methods?
Thank you! Yes, it can:
Excellent article!
One quibble: it is actually possible to use fields like departure_time and stop_name as subscripts. Those fields will be imported as string subscripts, which can contain any character (including nonprinting characters). For example, the following code works fine, and automatically imports departure_time as a string subscript:
Thanks for your correction! I must have done something else wrong back then, as now it also works fine for me. I'll update the article accordingly. | https://community.intersystems.com/post/iris-native-api-python | CC-MAIN-2019-39 | refinedweb | 2,117 | 65.62 |
“AddSOAPHeaderBean” module
AddSOAPHeaderBean? Thinking?? Is it a custom adapter module?? Well, it is a standard SAP module, but unfortunately it is nowhere documented 🙁
1. What is the use?
As the module name indicates, we can add our own custom SOAP headers to incoming and outgoing messages in XI using this module.
2. How to use in channel?
Simply add AF_Modules/AddSOAPHeaderBean to the required channel as a Local Enterprise Bean type, with a module key entry (e.g., 0, ns, auth, etc.) as per your requirement.
3. What parameters it supports?
Mandatory parameter: namespace as key and its value (e.g., namespace)

Optional parameters: any textual key/value pairs. Each key becomes a SOAP header tag with the value maintained in the module configuration.

The static string 'ns' is prefixed to the module key, and the result (e.g., ns0 for module key 0) becomes the namespace prefix for each SOAP header added via this module.
e.g., if the module key is 0, the key is hdr1 and the value is 1234, then the below value will be added in the SOAP Header:

<SOAP:Header><ns0:hdr1 xmlns:ns0=''>1234</ns0:hdr1></SOAP:Header>
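The module's source is not published, so purely as an illustration of the naming convention above, here is a few lines of language-agnostic pseudo-logic (in Python; the function name is mine, not SAP's):

```python
def build_soap_header_tag(module_key, namespace, param_name, param_value):
    # The namespace prefix is the static string "ns" plus the module key,
    # e.g. module key "0" -> prefix "ns0".
    prefix = "ns" + module_key
    return "<%s:%s xmlns:%s='%s'>%s</%s:%s>" % (
        prefix, param_name, prefix, namespace, param_value, prefix, param_name)

print(build_soap_header_tag("0", "urn:example", "hdr1", "1234"))
# <ns0:hdr1 xmlns:ns0='urn:example'>1234</ns0:hdr1>
```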
4. Use cases
Case 1: module key 0, parameters namespace and hdr1 = 1234
Result:
<SOAP:Header>
<ns0:hdr1 xmlns:ns0=''>1234</ns0:hdr1>
</SOAP:Header>
Case 2 (tricky one): module key 0, parameters namespace and authenticationHdr = <UserName>Praveen Gujjeti</UserName><Password>exposedpasswordinID</Password>
Result:
<SOAP:Header>
<ns0:authenticationHdr xmlns:ns0=''>
<UserName>Praveen Gujjeti</UserName>
<Password>exposedpasswordinID</Password>
</ns0:authenticationHdr>
</SOAP:Header>
Case 3 (same as Case 2, but all header parameters can be hidden in the Integration Directory): module key 0, parameters namespace and pwd.authenticationHdr = <UserName>Praveen Gujjeti</UserName><Password>passwordNotExposedinID</Password>
Result:
<SOAP:Header>
<ns0:authenticationHdr xmlns:ns0=’‘>
<UserName>Praveen Gujjeti</UserName>
<Password>passwordNotExposedinID</Password>
</ns0:authenticationHdr>
</SOAP:Header>
5. What other alternative solutions are available to add custom SOAP headers?
Well, adding custom SOAP headers has been a frequent integration requirement in the XI world since XI 3.0. A Java/XSLT mapping that creates custom headers is one such option. Recently, this feature was also introduced with the AXIS HandlerBean module, and it also seems to be available with the WS adapter, with some limitations.
References for alternative solutions: how-to-remove-xi-headers-from-payload
**Note: Though this module is a standard SAP one, since it is not documented and released officially by SAP, developers should use it at their own risk.
Regards,
Praveen Gujjeti
Thank you Praveen! I used to see this module in the JNDI browser but there is no documentation. Now we can run some simple header addition without resorting to no soap mode.
Regards,
Mzrk
Indeed Mark 🙂
Best Regards,
Praveen Gujjeti
Hi Praveen,
I want to construct the below SOAP Header in SAP PO 7.4; will it be possible using this adapter module? Please advise.

Can we modify the namespace prefix from ns0 to wsse? I get the below error: SOAP: Response message contains an error XIAdapter/PARSING/ADAPTER.SOAP_EXCEPTION - soap fault: An error was discovered processing the <wsse:Security> header.
Hi Praveen,
Thank you so much for sharing this information.
I will try to implement in upcoming projects if it required!
I appreciate.
Regards,
Hari Suseelan
Thanks Hari
Hi Praveen,
You are welcome.
Keep sharing and rocking as usual.
All the best!
Regards,
Hari Suseelan
Hi Praveen,
Good to know about the hidden module via this blog ...keep blogging..
Regards
Rajesh
Thanks Rajesh.
Thank you. This information is very important for me; I did this with a Java mapping and checked "Do Not Use SOAP Envelope" on the receiver channel. Now I have a question: how can I set the parameter value dynamically?

The idea is to use a message mapping and set the parameter value from a UDF.
Thank you.
Hello Johan,
Sorry, I didn't understand your question completely. Can you please explain a little more..
As far as I understood, you are expecting a dynamic value for parameter value in module configuration. AFAIK, there is no standard way we can pass value dynamically to the module configuration parameters.
Best Regards,
Praveen Gujjeti
Thank you for you answer.
I want to set parameter value the module from UDF in message mapping
thank you.
Well, this is long overdue. I've spent countless hours on unsuccessful attempts at writing Java mappings, adapter modules, etc., only to be able to pass some custom SOAP header data, only to have it "ignored" by the SOAP adapter. It's kinda frustrating to see there was even a standard module for it, which is not even documented. Thanks Praveen for pointing it out, much appreciated!
Praveen, nice blog.
I have basic question, althought it is already answered here only but wanted to get some real time examples.
My question is: why do we need to add an additional SOAP header? What is the use of it?

Can anyone share an example to showcase the need for it?
--Divyesh
Hello Divyesh,
Some applications hosting webservices handle messages in a different way using the SOAP header, such as, for example, "WS-Security Username Authentication" as per this link: WS-Security Username Authentication.

Also, in general, using SOAP header information, one application hosting a webservice can process the message and then forward it to another application hosting the same or a different webservice for further processing. The same concept can be realized within a PI system; for example, SOAP header information gets modified during each pipeline stage of message processing.
Best Regards,
Praveen Gujjeti
Got it, Praveen.
Thanks for your explaination and examples.
--Divyesh
Hi Praveen,
From which PI version can we use AF_Modules/AddSOAPHeaderBean? We are using PI 7.0 SP 12, but we find we couldn't use it, because the communication channel monitoring gave the following error after we configured that parameter: "com.sap.engine.services.jndi.persistent.exceptions.NameNotFoundException: Object not found in lookup of AddSOAPHeaderBean."
Thanks.
Hi xinjiang li,
As far as I know, the AF_Modules/AddSOAPHeaderBean module is available from PI 7.11 onwards.
Regards,
Praveen Gujjeti
OK, thanks a lot.
Hi,
where I can see results like these?():
<SOAP:Header>
<ns0:authenticationHdr xmlns:ns0=''>
<UserName>Praveen Gujjeti</UserName>
<Password>passwordNotExposedinID</Password>
</ns0:authenticationHdr>
</SOAP:Header>
I have configured it but it is not working. I guess it's the namespace; I need to remove it. Any ideas?
Regards.
Hi Maximiliano Colman,
If you configure the AddSOAPHeaderBean module in the sender channel, then you should be able to check the added SOAP header in Moni.
And if you configure the module in the receiver channel, then you should be able to see the added headers in the receiving (target) application system, if it logs the incoming messages.
For testing purposes, you can also simulate receiver mode by creating a mock webservice in SOAP UI, sending a message with the module configuration in PI to the mock webservice endpoint, and then watching the SOAP headers in the SOAP UI mock service editor.
//BR,
Praveen Gujjeti
Thank you very much! I do not understand how I could live without knowing the Mock Services in SOAP UI!
Hi Praveen,
Thanks a lot for sharing this information. It will help us a lot.
Regards,
Partha
Hi Praveen,
I just created a demo as instructed in the article. However, I could not find the expected SOAP header in Moni.
The following is the configuration:
And in addition, in the msg log I could find that
Did I miss something important?
Regards,
Leon
Hi Leon,
In general, the default adapter module (in this case it is callSapAdapter) for any adapter channel should always be the last in the module configuration for all asynchronous communications. So, move the AddSOAPHeaderBean to be the first module in the channel configuration and activate.
** One general point for the SOAP sender adapter channel, since it has some limitations with standard SAP/custom adapter modules; check this: Can modules be added to the sender SOAP adapter?
Please let us know the outcome
Best Regards,
Praveen Gujjeti
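To make the ordering concrete, the Module tab of the channel would end up looking roughly like this (module keys and parameter values here are illustrative, pieced together from the names mentioned in this thread):

```
Processing Sequence
  1  AF_Modules/AddSOAPHeaderBean   Local Enterprise Bean   addHeader
  2  CallSapAdapter                 Local Enterprise Bean   0          <- default module stays last

Module Configuration
  addHeader   pwd.authenticationHdr   <UserName>user</UserName><Password>pwd</Password>
  addHeader   namespace               urn:example:authHeader
```

The key point is simply that the header-adding module runs before the default adapter module.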
Hi Praveen,
I just moved the AddSOAPHeaderBean to be the first module, tried it again, and I am happy to tell you that I could successfully find the additional data in the SOAP header.
Many many thanks.
Best Regards,
Leon Luo
Cheers 🙂 .. And thanks for the update
//BR,
Praveen Gujjeti
Good blog Praveen.
so there is no need to go for the Axis framework like the one below.
How to Configure AXIS Framework for Authentication Using the "wsse" Security Standard in SAP PI
I missed this blog, else I could have used it instead of going for the Axis framework 😉
Regards,
Muni.
Hi Praveen,
I have a use case where the receiver expects the "To" element in the header like:
<s:Envelope xmlns:
<s:Header>
<a:To s:</a:To>
</s:Header>
...
I wonder how to configure this using AF_Modules/AddSOAPHeaderBean?
I have tried almost everything. The problem is: how do I get the mustUnderstand attribute into the "To" element?
Any suggestions?
Best Regards,
Olaf Bertelsen
Thank you Praveen - nice blog.
Does anybody know if there is something similar the other way around: a getSOAPHeaderBean that fills XHeaderName1, which we could then access within user-defined functions?
Or do we have the source of that AddSOAPHeaderBean?
Thanks for any idea.
Best regards
Stefan
Thanks Stefan
Please wait a few days; you should have my new blog on the 'getSOAPHeaderBean' custom module. I will update the blog info here...
Best Regards,
Praveen Gujjeti
Hi Praveen,
I tried to add the module in the SOAP sender channel, but I am not getting the SOAP headers which I added as parameters in the ModuleConfig tab. Please have a look and suggest how to get the headers.
After adding the module, below is the log in msg monitor
Hi Praveen,
I want to construct the SOAP header below in SAP PO 7.4; will it be possible using this adapter module? Please advise:
Can we modify the namespace from ns0 to wsse? I get the below error: SOAP: Response message contains an error XIAdapter/PARSING/ADAPTER.SOAP_EXCEPTION - soap fault: An error was discovered processing the <wsse:Security> header.
I have tried below configuration in Module:
name: Security value: <UsernameToken><Username>2</Username><Password Type="">pass1234</Password></UsernameToken>
name : namespace value :
Hi Ravijeet,
Could you please tell me what solution you used to get this output? I have a similar requirement of Soap header with WSSE.
Thanks
Minal Bakore
Hi Minal,
I had used this module to insert the soap security header. It is working good in Production.
Thanks
Ravijeet
Hi Ravijeet,
Did you manually insert the soap security header under pwd.authenticationHdr ?
parameter name: pwd.authenticationHdr
Parameter value: <wsse:Security soap:mustUnderstand=\"1\" xmlns:soap=\"\" xmlns:wsse=\"\">
<wsse:UsernameToken wsu:Id=\"UsernameToken-1\" xmlns:wsu=\"\">
<wsse:Username>username</wsse:Username>
<wsse:Password Type=\"\">password</wsse:Password>
</wsse:UsernameToken>
</wsse:Security>
Appreciate your response.
Thanks a lot!
Hello Marc,
Can you please detail how you implemented WSSE security, as I am getting a fault exception when putting the tags in the namespace value.
Appreciate your response.
Thanks.
Sarbtej
Hello Sarbtej,
Did you already solve your issue?
Thanks,
Marc
Hello Ravijeet,
Can you please detail the steps to implement the SOAP header as per your requirement? I have tried to use the module, but I am getting an error as the fault was not recognized.
Thanks.
Sarbtej
Hi Praveen,
Very nicely explained. I tried to add these headers in my modules, but when I execute the same through SOAP UI I get the error HTTP/1.1 401 Unauthorized. My PI version is 7.40.
Below is the code for the header
<soapenv:Header>
<ns0:authenticationHdr xmlns:
<UserName>user</UserName>
<Password>pwd556</Password>
</ns0:authenticationHdr>
</soapenv:Header>
Also I tried
<soap:Header>
<ns0:authenticationHdr xmlns:
<UserName>user</UserName>
<Password>pwd556</Password>
</ns0:authenticationHdr>
</soap:Header>
Can you help me out?
Regards,
Alex
Hi Alex,
Sorry, I couldn't understand your requirement completely. Can you please explain your requirement in detail, and where exactly you are getting this error?
Sender (protocol ?) --> PI/PO --> Receiver (protocol ? )
Br,
Praveen
Hi Praveen,
Thanks for your response on this. My requirement was to allow a third-party system to use the WSDL without entering a username and password, which is done using the URL below, thanks to Karthik B.
http:// <host name> : <port name> /XISOAPAdapter/MessageServlet?senderParty= <name of the sender party> &senderService= <name of the sender service> &interface= <name of the interface>&receiverParty= <name of the receiver party> &receiverService= <name of the receiver service>&interfaceNamespace= <name of the interface namespace>&j_username=<UserName>&j_password=<Password>
Set Username and password
Regards,
Alex
Hi Alex,
I think this is not possible, since the SOAP MessageServlet on SAP PI/PO by default authenticates using the BASIC authentication mechanism. And BASIC authentication is a general HTTP header; we can't include this in a SOAP message as SOAP headers.
I remember that there is a way to bypass the SOAP adapter's BASIC authentication mechanism with a SOAP adapter setting in NWA (I don't remember the parameter). With this, any SOAP message (interface) can enter the system without needing any credentials.
a nice work around: Anonymous SOAP calls in SAP PI
Hope it is clear...
Thanks,
Praveen
Hi Praveen,
I am using AddSOAPHeaderBean to include custom fields to the SOAP header. I have a requirement where I need to add attribute to the element as below,
<ns1:async
here "async" is element and "expectReply" is attribute.
Can you please let me know how to handle attributes to the element using AddSOAPHeaderBean.
Regards,
Raj
Thanks for the blog Gujjeti.. 🙂
It helped me in our project.
Hello,
regarding the 1st solution:
According to the JavaDocs the ServletRequestWrapper throws an
IllegalArgumentException if you pass "null" as delegate, so this won't
work (I'll come back to that later though). However, given that you're
worried about NullPointerExceptions in case someone calls methods that
have been introduced in the Servlet 3.0 version release, I assume that
MyFaces isn't really concerned about those methods anyway. Otherwise
you'd probably override those methods? If I'm mistaken, please correct
me as some suggestions later on rely on this assumption.
regarding the 2nd solution:
Just ignoring the @Override annotation won't work as the respective
interfaces introduce dependencies to artifacts that are only available
in a Servlet 3.0 environment (for example, there's the startAsync()
method that returns an AsyncContext). If a class loader were to load
your request / response dummy class, he would now also have to load the
class AsyncContext as it's a dependency of your class itself, which
apparently the class loader cannot do in a Servlet 2.5 environment.
Given that I'd say you'll have to create two different dummy
implementations, one that implements the Servlet 2.5 ServletRequest
interface and one that implements the Servlet 3.0 ServletRequest (i.e.
the only thing that changes is the set of methods you have to
implement). However, now another problem arises as you can't just use
two different versions of the same API in a single build, i.e. there's
no way to tell the compiler that one class just implements the methods
in the Servlet 2.5 version whereas another class has to implement the
methods of the Servlet 3.0 version. Both versions have to be compilable
using the same Servlet API version and as the Servlet 2.5 API is just a
subset of the Servlet 3.0 API, both versions have to be compilable using
the Servlet 3.0 version.
The big issue is that we've now got a contradiction. If we want to
support a Servlet 3.0 environment, we'll have to use this version in our
build (again, Servlet 3.0 is if I'm not mistaken a superset of Servlet
2.5, that's the reason for that). However, the 2.5 version of the dummy
class cannot compile if one uses the 3.0 version for the actual build.
Maybe that sounds a little bit strange up until now, but hopefully now
it will get clearer: A 2.5 compatible implementation of the
ServletRequest interface must not implement the method "startAsync" as
it introduces an unsatisfiable dependency, but a 3.0 compatible build
environment requires any implementation to implement the method
"startAsync" (amongs others) as it is a part of the interface after all.
Hence I'm afraid but this solution just won't work either. Of course,
the third solution would probably work, but why bother about the
performance implications if there's another solution? :-)
I think the preferable solution is actually the first one. It's easy to
implement as we don't have to deal with the difference between the
Servlet 2.5 API and Servlet 3.0 API, but as I've already mentioned there
is the IllegalArgumentException issue that you just can't ignore either.
We just want to get rid of the null value somehow, so why not use a
dummy proxy instead? Note that there are no performance implications if
you override the wrapped methods anyway, i.e. in fact, the proxy won't
be called even once. Its sole purpose is to replace the "null", that's
it. It could look like the following:
///
public class DummyServletRequest extends ServletRequestWrapper {
public DummyServletRequest() {
super(Proxy.newProxyInstance(
DummyServletRequest.class.getClassLoader(),
new Class[] { ServletRequest.class },
new InvocationHandler() {
public Object invoke(Object proxy, Method m, Object[] args) {
throw new UnsupportedOperationException(...);
}
}
));
}
// --------- "Implement" the interface ServletRequest now!
public Object getAttribute(String name) {
// ...
}
// ...
}
\\\
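For anyone who wants to try the pattern outside a servlet container, here is a self-contained sketch of the same dummy-proxy idea against a plain stand-in interface (so it compiles without the Servlet API on the classpath; the class and interface names are mine):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class DummyProxyDemo {

    // Stand-in for ServletRequest; any interface works the same way.
    public interface Request {
        Object getAttribute(String name);
    }

    // The proxy's sole purpose is to be a non-null delegate:
    // every call on it fails loudly instead of returning garbage.
    public static Request makeDummy() {
        return (Request) Proxy.newProxyInstance(
            DummyProxyDemo.class.getClassLoader(),
            new Class[] { Request.class },
            new InvocationHandler() {
                public Object invoke(Object proxy, Method m, Object[] args) {
                    throw new UnsupportedOperationException(
                        "dummy delegate called: " + m.getName());
                }
            });
    }

    public static void main(String[] args) {
        Request dummy = makeDummy();
        try {
            dummy.getAttribute("foo");
        } catch (UnsupportedOperationException e) {
            System.out.println("threw as intended: " + e.getMessage());
        }
    }
}
```

The cast works because the proxy implements exactly the interfaces passed to newProxyInstance.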
Hope that helps. :-)
regards,
Bernhard Huemer
On 12/01/2009 09:48PM GMT, Michael Concini
I have a large number of files that begin with numbers (e.g., 10admin_boundary_x) and would like to rename the files so that they do not begin with a digit (e.g., admin_boundary_x). I am working with shapefiles (.shp, .shx, .dbf, etc.) and thought a Python script could save me some time.
I haven't quite figured it out yet, but here is the code I've got so far:
import os

#read in file from the directory
for filename in os.listdir("."):
    f = open(filename, "w")
    i = True
    while i:
        #if filename starts with a digit, lose first index by renaming and
        #try again
        while filename[0].isdigit():
            filename = filename[1:]
        os.rename(f, filename)
        i = False

print 'Any filename starting with a digit has been renamed.'
Thank you for your help!
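For what it's worth, a working version of the idea in modern Python might look like this (a sketch; strip_leading_digits and rename_all are names I've made up). Note that os.rename needs path strings, not the open file objects used above, and opening a file with "w" would truncate it:

```python
import os

def strip_leading_digits(name):
    """Return name without its leading digits, e.g. '10admin.shp' -> 'admin.shp'."""
    return name.lstrip("0123456789")

def rename_all(directory):
    """Rename every file in directory whose name starts with a digit."""
    renamed = []
    for filename in os.listdir(directory):
        new_name = strip_leading_digits(filename)
        # skip names that are all digits (they would become empty)
        if new_name and new_name != filename:
            os.rename(os.path.join(directory, filename),
                      os.path.join(directory, new_name))
            renamed.append((filename, new_name))
    return renamed
```

Calling rename_all(".") would rename 10admin_boundary_x.shp to admin_boundary_x.shp while leaving roads.shp untouched.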
Introduction: In this article I will explain, with an example, how to upload an image file through the FileUpload control and resize the uploaded image in ASP.NET, using both the C#.Net and VB.Net languages. In previous articles I explained how to create thumbnail, small and large versions of an uploaded image.
<table>
<tr>
<td>
<asp:FileUpload ID="FileUpload1" runat="server" />
</td>
</tr>
<tr>
<td>
<asp:Button ID="btnUpload" runat="server" Text="Upload" OnClick="btnUpload_Click" />
</td>
</tr>
</table>
C#.NET Code to upload and resize the images
protected void btnUpload_Click(object sender, EventArgs e)
{
if (FileUpload1.HasFile)
{
string img = string.Empty;
Bitmap bmpImg = null;
try
{
bmpImg = Resize_Image(FileUpload1.PostedFile.InputStream, 210, 130);
img = Server.MapPath("images/") + Guid.NewGuid().ToString() + ".png";
bmpImg.Save(img, ImageFormat.Jpeg);
}
catch (Exception ex)
{
Response.Write("Error occured: " + ex.Message.ToString());
}
finally
{
img = string.Empty;
bmpImg.Dispose();
}
}
}

VB.NET Code to upload and resize the images
- In the code behind file (.aspx.vb) write the code as:
First Import following namespaces:
Imports System.Drawing
Imports System.Drawing.Drawing2D
Imports System.Drawing.Imaging
Imports System.Drawing.Design
Imports System.IO
- Then write the code as:
Protected Sub btnUpload_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles btnUpload.Click
If FileUpload1.HasFile Then
Dim img As String = String.Empty
Dim bmpImg As Bitmap = Nothing
Try
bmpImg = Resize_Image(FileUpload1.PostedFile.InputStream, 210, 130)
img = Server.MapPath("images/") + Guid.NewGuid().ToString() + ".png"
bmpImg.Save(img, ImageFormat.Jpeg)
Catch ex As Exception
Response.Write("Error occured: " & ex.Message.ToString())
Finally
img = String.Empty
bmpImg.Dispose()
End Try
End If
End Sub

Private Function Resize_Image(ByVal stream As Stream, ByVal maxWidth As Integer, ByVal maxHeight As Integer) As Bitmap
Dim originalImage As New Bitmap(stream)
Dim newWidth As Integer = originalImage.Width
Dim newHeight As Integer = originalImage.Height
Dim aspectRatio As Double = CDbl(originalImage.Width) / CDbl(originalImage.Height)
If aspectRatio <= 1 AndAlso originalImage.Width > maxWidth Then
newWidth = maxWidth
newHeight = CInt(Math.Round(newWidth / aspectRatio))
ElseIf aspectRatio > 1 AndAlso originalImage.Height > maxHeight Then
newHeight = maxHeight
newWidth = CInt(Math.Round(newHeight * aspectRatio))
End If
Return New Bitmap(originalImage, newWidth, newHeight)
End Function
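For readers following the C# listing above, the Resize_Image helper can be written the same way; this is a sketch that simply mirrors the VB function:

```csharp
// C# counterpart of the VB Resize_Image (a sketch mirroring the logic above)
private Bitmap Resize_Image(Stream stream, int maxWidth, int maxHeight)
{
    Bitmap originalImage = new Bitmap(stream);
    int newWidth = originalImage.Width;
    int newHeight = originalImage.Height;
    double aspectRatio = (double)originalImage.Width / (double)originalImage.Height;

    if (aspectRatio <= 1 && originalImage.Width > maxWidth)
    {
        newWidth = maxWidth;
        newHeight = (int)Math.Round(newWidth / aspectRatio);
    }
    else if (aspectRatio > 1 && originalImage.Height > maxHeight)
    {
        newHeight = maxHeight;
        newWidth = (int)Math.Round(newHeight * aspectRatio);
    }
    return new Bitmap(originalImage, newWidth, newHeight);
}
```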
Note: Uploaded image will be stored in the “images” folder of the root directory in our case. You can change the folder name and location as per your requirement. Now you can display the resized image on your website.."
7 comments
Sometimes the image is not saved. Why?
This code is completely tested and working. Can you please mention the case where you are facing this issue?
Hello sir, I tried out this code and it works fine, but there is one problem: every image has its own width and height, so the aspect ratio we take from the image will differ for each image. I want to save images in my folder with a width of 960 and a height of 360; how can I do that with any kind of image, whether small or big? Could you help me, sir?
Sir, please write an article "upload video in asp.net by using c#"...
Hello umbreen sabir.. thanks for your suggestion.. I will create an article as per your requirement and publish it very soon.. so stay connected and keep reading ...:)
Thanks, nice article. But I need one help: I am trying to preview the image in an image div after saving, so I added the code imagebuttonname.ImageUrl=img; after the code bmpImg.Save(img, ImageFormat.Jpeg); but the image preview is not shown. Kindly help me, I'll be very thankful.
Very useful..
Introduction.
As you read this article, you will have an opportunity to explore and expand your knowledge of JavaScript and ASP.NET, learn about object-oriented JavaScript, and gain a better understanding of the relationship between ASP.NET Web controls, HTML, and JavaScript.
Adding the HTML Div and Table Stubs
The GridView will have a header. For the technique for pinning it, you could use the GridView’s actual header, but it is actually a little easier to use a clone. The reason is that if you pin the GridView’s actual header, you have to adjust its position differently when you have scrolled to the top (or you hide row 1), and you might have to account for a positioned header’s index when manipulating the data. For these reasons, I elected to use a stub div and table to mimic the fixed header.
Tip: Did you know that a table header is rendered as a <TR> with <TH> children instead of <TD> (cells) children? Did you also know that you can place the row containing the header (or <TH> cells) anywhere in a Table? You can.
To add the div and table stub, open your web page and add the following HTML just above the GridView’s tag. Here is the additional HTML followed by complete HTML/ASP for the sample form (in Listing 2).
<div id="fixedHeader" style="position: absolute"> <table id="header" style="position: absolute"> </table> </div>
Listing 2: The complete listing for the sample web page, showing the relationship of the GridView and the HTML that will be used to affix a header.
<%@ Page %>
<html>
<head>
<script type="text/javascript">
function Foo()
{
  alert("This is a simple JavaScript class");
}
var foo = new Foo();
</script>
</head>
<body>
<form id="form1" runat="server">
<div>
</div>
</form>
</body>
</html>
In the listing, the function Foo is treated like a class and the statement alert… basically plays the role of initialization (or construct) code. The statement var foo = new Foo(); ultimately creates an instance of Foo (upper case) and runs the initialization code.
You could move the alert statement to a function within Foo and invoke that function. This will give you a better sense of the object-oriented nature of JavaScript when used in this manner (see Listing 4).
Listing 4: Adding a member function to Foo.
<%@ Page %>
<html>
<head>
<script type="text/javascript">
function Foo()
{
  this.MemberFunc = memberFunc;
  function memberFunc()
  {
    alert("This is a simple JavaScript class");
  }
}
var foo = new Foo();
foo.MemberFunc();
</script>
</head>
<body>
<form id="form1" runat="server">
<div>
</div>
</form>
</body>
</html>
In the revision, Foo has a member function called memberFunc (notice the case) and makes it accessible externally by assigning it to this.MemberFunc. Without the this.member statement, the member is treated as inaccessible externally.
Cloning the Grid Header
To begin wrapping things up, you now can implement what I refer to as the PositionClass. The PositionClass will figure out where the grid is, where the header is, and keep track of where the header should be depending on the scroll position and the size of the window containing the grid. Listing 5 is a couple of hundred lines long, but a complete explanation is offered after the listing. To get started, add a new JavaScript file to your solution and put the code in Listing 5 in that file.
Listing 5: JavaScript code to manage and position a row containing header information.
// JScript File - by PTK
// Implements a fixed header for HTML tables (like a GridView,
// which renders an HTML table)
function PositionClass()
{
  this.Top = top;
  this.Left = left;
  this.Width = width;
  this.Height = height;
  this.ClientID = clientID;
  this.SavePosition = savePosition;
  this.Reposition = reposition;
  this.SetClientID = setClientID;

  var top, left, width, height, clientID;
  var head, caption;

  // the client ID of the table/gridview
  function setClientID(id)
  {
    clientID = id;
  }

  // determines the absolute position of X or Y--determined by
  // the getOffset function to handle control nesting
  function getAbsolutePosition(control, getOffset, adjustCaption)
  {
    var result = 0;
    while(control)
    {
      if(control.tagName)
        if((control.tagName == "TBODY") || (control.tagName == "TR"))
        {
          if(control.parentElement)
            control = control.parentElement;
          else
            break;
          continue;
        }
      if(control.style.position == "absolute") return result;
      result += getOffset(control);
      if(control.parentElement)
        control = control.parentElement;
      else
        break;
    }
    if(adjustCaption && caption && isAlignedTop(caption))
      result -= caption.clientHeight;
    return result;
  }

  // returns the x offset
  function getXOffset(control)
  {
    if(control.offsetLeft)
      return control.offsetLeft;
    else
      return 0;
  }

  // returns the y offset
  function getYOffset(control)
  {
    if(control.offsetTop)
      return control.offsetTop;
    else
      return 0;
  }

  // get the grid header row. It may not exist or there may be a
  // caption above it
  function getHeaderNode(grid)
  {
    if(!grid) return null;
    for(var i=0; i<grid.rows.length; i++)
    {
      var s = new String();
      if(grid.rows[i].childNodes.length > 0)
      {
        s = grid.rows[i].childNodes[0].tagName;
        if(s.toLowerCase() == "th")
          return grid.rows[i];
      }
    }
    return null;
  }

  // everything but "bottom" is top-aligned
  function isAlignedTop(theCaption)
  {
    if(!theCaption) return false;
    var tag = theCaption.align.toLowerCase();
    return (tag == "top") || (tag == "left") ||
           (tag == "right") || (tag == "");
  }

  // find the caption; we have to allow for a caption above the
  // header
  function getCaptionNode(grid)
  {
    if(!grid) return null;
    return grid.caption;
  }

  // stores the current position of the control
  function savePosition()
  {
    // debugger;
    // get the grid
    var grid = document.all[clientID];
    if(!grid) return;
    header.cellPadding = grid.cellPadding;
    header.cellSpacing = grid.cellSpacing;
    header.border = grid.border;
    header.bgColor = grid.bgColor;
    header.className = grid.className;

    caption = getCaptionNode(grid);
    if(caption && (caption.style.backgroundColor == ""))
    {
      // debugger;
      caption.style.backgroundColor = "white";
    }

    head = getHeaderNode(grid);
    if(head == null) return;
    head.style.visibility = "hidden";

    // get the header position
    top = getAbsolutePosition(head, getYOffset, true);
    left = getAbsolutePosition(head, getXOffset, false);
    width = head.clientWidth;
    height = head.clientHeight;

    var temp = head.cloneNode(true);
    // store sizes in style attributes
    for(var i=0; i<head.childNodes.length; i++)
    {
      temp.childNodes[i].style.width = head.childNodes[i].clientWidth;
      temp.childNodes[i].style.height = head.childNodes[i].clientHeight;
    }
    head = temp; // clone the header
    head.style.visibility = "visible";

    // insert the header (and caption) into our copy table
    if(caption && isAlignedTop(caption))
    {
      var newCaption = caption.cloneNode(true);
      if(header.caption)
        header.replaceChild(newCaption, header.caption);
      else
        header.appendChild(newCaption);
      caption.style.visibility = "hidden";
    }

    // place the new table header in the table-playing header
    var th = getHeaderNode(header);
    if(th)
      header.childNodes[0].replaceChild(head, th);
    else
      header.childNodes[0].appendChild(head);

    // fix the position of the div to overlap the gridview's header
    fixedHeader.style.posLeft = left;
    fixedHeader.style.left = left + "px";
    fixedHeader.style.posTop = top;
    fixedHeader.style.top = top + "px";
    fixedHeader.style.width = width + "px";
    fixedHeader.style.height = height + "px";
    fixedHeader.style.visibility = "visible";
    fixedHeader.style.zIndex = 0;
  }

  // repositions the control if necessary
  function reposition()
  {
    if(!document.all[clientID]) return;
    // added silent try..catch because masterpages can have other
    // things that scroll;
    // for instance we scroll the treeview that fires this event
    try
    {
      if(document.body.parentNode.scrollLeft > 0)
      {
        fixedHeader.style.posLeft = document.body.parentNode.scrollLeft;
        fixedHeader.style.left = document.body.parentNode.scrollLeft + "px";
      }
      else
      {
        fixedHeader.style.posLeft = document.body.parentNode.scrollLeft + left;
        fixedHeader.style.left = document.body.parentNode.scrollLeft + left + "px";
      }
      if(document.body.parentNode.scrollTop > 0)
      {
        fixedHeader.style.top = document.body.parentNode.scrollTop + "px";
        fixedHeader.style.posTop = document.body.parentNode.scrollTop;
      }
      else
      {
        fixedHeader.style.top = document.body.parentNode.scrollTop + top + "px";
        fixedHeader.style.posTop = document.body.parentNode.scrollTop + top;
      }
      fixedHeader.style.width = width + "px";
      fixedHeader.style.height = height + "px";
      fixedHeader.style.visibility = "visible";
    }
    catch(oException) { }
  }
}
The first ten lines or so beginning with this of var define public members and fields for the PositionClass. As you might expect, you need to store the position and instance of the header as well as the caption, if present. It is these items that you will need to resize (absolutely) if the grid scrolls or the window is scrolled or resized.
Any element that you want to be a public member you need to assign to a local-named element with the this keyword, for instance this.Width = width. Statements like the preceding will introduce an externally accessible member Width (note the capital W) and associate it with variable width (note the lowercase w). This relationship is similar to the field and property relationship in VB. (It is worth noting that JavaScript is case sensitive.)
Because you are defining controls such as a GridView in code-behind, and these controls will generate an HTML table with a specific client-id, it is convenient to store the actual client-id in the PositionClass. setClientID does this. In simple pages, you might hard code the client-id with the identifier given in the code-behind. For example, on a simple page a GridView control with the name GridView1 may actually get a client-id of “GridView1”. However, when you build more complicated pages with user controls, you are more likely to get a client-id that includes all of the names of parent controls concatenated together.
The getAbsolutePosition function determines the X or Y position of the header by adding all of the offsets of its parent controls together. The TBODY and TR parent controls are ignored, and this approach seems to yield the best result. The argument getOffset is actually a function argument. It is initialized with either of the functions getXOffset or getYOffset depending on which position—horizontal or vertical—you are resolving. The adjustCaption argument is used to determine whether you should adjust the vertical offset for a top-aligned caption.
The getHeaderNode function walks all of the HTML table rows—the grid rows—to find the row—TR control—containning the table headers—<TH> elements. It is worth noting that you actually could position the real table’s row containing table headers. That is, table headers, or any row for that matter, can be positioned absolutely with script regardless of its index in the table. This means that, instead of cloning the table header row and positioning the clone, you could position the original table header row. This is left as an exercise. The challenge with the latter approach is that the fixed table header will overlap row 1 when the grid is scrolled to the top-most row.
The isAligned function checks to see whether the table has a caption and its align property is top, left, right, or blank (“”). For all intents and purposes, any of these alignments mean you need to leave room for the caption when positioning a fixed header.
The savePosition function does all of the heavy lifting of storing the present position of the header, cloning it, duplicating its styles, and positioning the cloned-header correctly. Finally, reposition is called when the window (or panel, or anything that scrolls that the grid sits on) scrolls or is resized. This function figures out how far the container was scrolled and adjusts the fixed header accordingly.
Writing an Initialization Function in JavaScript
The final two steps are to write and initialize an invoke function that creates and locates the fixed header and inject a little JavaScript that ensures you have the correct client-id of the grid/table whose header you’d like to fix. Listing 6 shows a function—InitializeFixedHeader—that is not a member of the PositionClass but that will initialize an instance of the PositionClass and bind its behaviors to window events, including onload, onscroll, and onresize.
Listing 6: Initializing the fixed header class, PositionClass.
function InitializeFixedHeader(id)
{
  var pc = new PositionClass();
  pc.SetClientID(id);
  window.onload = pc.SavePosition;
  window.onscroll = pc.Reposition;
  window.onresize = function()
  {
    pc.SavePosition();
    pc.Reposition();
  }
}
Injecting the JavaScript Initialization Code
As you write more complicated web applications and reuse user controls, you will need to ensure that the client-id passed to the ID argument in Listing 6 is correct. This can be done easily by injecting some startup script from the code behind. The simple code in Listing 7 demonstrates how you can inject a call to InitializeFixedHeader with the GridView’s ClientID property, ensuring the JavaScript id matches the name ASP.NET gives the GridView on the client.
Listing 7: Injecting code to initialize the fixed header.
Protected Sub Page_Load(ByVal sender As Object, _
    ByVal e As System.EventArgs) Handles Me.Load
    Dim Script As String = _
        String.Format("InitializeFixedHeader('{0}');", _
        GridView1.ClientID)
    Page.ClientScript.RegisterStartupScript(Me.GetType(), _
        "fixedHeader", Script, True)
End Sub
Don’t use Response.Write to inject script any longer—if you ever did. The Document Type Definition (DTD) XHML 1.0—the new DTD standard for ASP.NET 2.0—gets wonky and you can get some subtle and annoying behaviors by injecting script with Response.Write.
Summary
This sample is long and by no means easy. JavaScript is harder to debug than VB, and it’s harder to code correctly because it is syntactically less forgiving. However, JavaScript can add some cool, client-side behaviors to your ASP.NET and VB solutions, so JavaScript (or VBScript) is worth learning. (You will also need JavaScript to make the most of Atlas/Ajax.)
In this article, you learned about object-oriented or object-based JavaScript. And, you learned how to fix a grid header no matter how complicated or nested your pages become. There are shorter techniques—including a previous one that I wrote. These shorter techniques only seem to work on fairly simple pages, though.
If you are interested in joining or sponsoring a .NET Users Group, check out. Glugnet is opening a users group branch in Flint, Michigan in August 2007. If you are interested in attending, check out the web site for updates.
By Paul Kimmel. [email protected]
Introduction: Driving the PM55L-048-HP69 With Arduino
In this instructable I will teach you how to wire your stepper to run it with the arduino board.
You will need:
- Arduino board (mine is uno)
- Breadboard and wires
- ULN2803 or ULN2003
- PM55L-048-HP69 stepper
- Power supply for stepper (I'm using 19V 1A, since this stepper works on 24V)
Step 1: Wiring
Using this tutorial, I was able to find the wiring of my stepper.
So, if you are using the example from the stepper library stepper_one Revolution
you have to wire this way:
A to digital pin 11
C to digital pin 10
B to digital pin 9
D to digital pin 8
I'm actually using the ULN2803, but the only difference is that the 2803 has more pins to work with.
Step 2: The Code
Since we are working with a 48-step stepper, you have to change the line const int stepsPerRevolution = 200;
to const int stepsPerRevolution = 48; if you don't make this change, it won't work.
-----------------------------------------------------------------------------------------------------------
#include <Stepper.h>

const int stepsPerRevolution = 48;

// initialize the stepper library on pins 8 through 11:
Stepper myStepper(stepsPerRevolution, 8, 9, 10, 11);

void setup() {
  // set the speed at 60 rpm:
  myStepper.setSpeed(60);
  // initialize the serial port:
  Serial.begin(9600);
}

void loop() {
  // step one revolution in one direction:
  Serial.println("clockwise");
  myStepper.step(stepsPerRevolution);
  delay(500);

  // step one revolution in the other direction:
  Serial.println("counterclockwise");
  myStepper.step(-stepsPerRevolution);
  delay(500);
}
-----------------------------------------------------------------------------------------------------------
Now have fun with the code. It worked well within 10 to 180 rpm.
Good luck, hope you like it.
10 Discussions
I'm using 12V and my ULN2003 heats up like crazy after 20 seconds of turning. Why?
Guys, this motor can be used with a 12V power supply and be used in a 3D printer. I have diagrams if anyone needs them. It's a unipolar motor that can be converted into a bipolar motor.
I don't know if you still follow this, but yes, can you send me the notes please?
pauloafcosta@yahoo.com.br
Thanks a lot buddy.
I have to look in my old files. What is the info that you need?
I'm interested too! If you could send me some info I would be thankful.
I'm trying to learn how to control stepper motors with a cnc shield v3 and a a4988 driver.
Thanks!
Any extra info about this motor interests me. :)
What the benefits to convert to bipolar.
Sorry, I got confused: your Fritzing diagram states different pinouts for the Arduino than you write later.
E.g., you said 'you have to wire this way: A to digital pin 8 - C to digital pin 9 - B to digital pin 10 - D to digital pin 11' but the illustration above states that it should go A-11, C-10, B-9, D-8. Which one is it? Thank you so much =D
I'm sorry for the mistake. By this time you have probably figured it out by yourself, but the difference is the way it will turn: clockwise or counterclockwise. Either way it's going to work.
Ooo that's so cool, awesome job sharing your skills! Welcome to instructables!
Thank you. I decided to share this because I couldn't find it on the internet, so I figured it out by myself.
By Christopher K. Fairbairn, Johannes Fahrenkrug, and Collin Ruffenach
Apple provides a class called NSXMLParser to iPhone developers. Developers use this class when parsing XML. This article from Objective-C Fundamentals shows that NSXMLParser provides delegate methods that handle every point of parsing for both XML- and DTD-based documents.
To save 40% on your next purchase use Promotional Code code40project when you check out at.
You may also be interested in…
Apple provides a class called NSXMLParser to iPhone developers.
Developers use this class when parsing XML. Several open source alternatives to
NSXMLParser
are available and used by many developers, but we are going to look at the delegate
methods of the standard Cocoa XML Parser.
There is no <NSXMLParser> protocol; you will receive no warning if you do not
declare this in the header of the application you are creating.
XML is a type of file that can hold data in a very structured
manner. As a quick introduction, XML uses the syntax of HTML to create unique
data structures. An example of an XML element that describes a person is shown
in listing 1.
<Author>
<name>Collin Ruffenach</name>
<age>23</age>
<gender>male</gender>
<Books>
<Book>
<title>Objective C for the iPhone</title>
<year>2010</year>
<level>intermediate</level>
</Book>
</Books>
</Author>
XML is a very common means of getting data from online sources
such as Twitter. XML is also used to facilitate the data required to run your
specific iPhone project. iPhone development relies heavily on PLISTS. These
files are really just XML.
DTD stands for Document Type Definition. This is a document that
would describe the structure of the XML that you are going to work with. The
document type definition for the XML in listing 1 would be:
<!ELEMENT Author (name, age, gender, books_list(book*))>
<!ELEMENT name (#PCDATA)>
<!ELEMENT age (#PCDATA)>
<!ELEMENT gender (#PCDATA)>
<!ELEMENT Book (title, year, level)>
<!ELEMENT title (#PCDATA)>
<!ELEMENT year (#PCDATA)>
<!ELEMENT level (#PCDATA)>
For some applications, examining the structure of the XML they
are receiving will change the manner in which the application parses. In this
case, we say that the XML will contain an element called Author. An Author will
be defined by a name, age, and gender, which will be simple strings. An author
will also have a list of Book elements. A Book is defined by a title year and
level that are all simple strings. This ensures that the NSXMLParser
knows what to do.
The majority of the time when you parse XML, you will be aware of
its structure when writing your parser class. For these instances, you will not
need to investigate the XML feeds DTD. An example of this would be the Twitter
XML feed for a timeline. We will assume we know the XML structure for our XML
and only implement the parsing functions of the NSXMLParser delegate to parse the Author
XML we have already looked at.
The first step when implementing NSXMLParser is to create a class that
will contain the parser object and implement its delegate methods. Let’s create
a new view-based project called Parser_Project and create a new NSObject subclass
called Parser. The only instance variables we are going to declare for the
Parser class are an NSXMLParser and an NSMutableString to help with parsing.
Make Parser.h look like the following.
#import <Foundation/Foundation.h>
@interface Parser : NSObject <NSXMLParserDelegate>
{
NSXMLParser *parser;
NSMutableString *element;
}
@end
We are going to need an XML file to parse. You can take
the XML in listing 1 and place it in a regular text file. Save the file as
Sample.xml and add it into the project. This will give us a local XML file that
we can reference to parse.
Now we need to fill in Parser.m. Parser.m will contain an
initialize and the implementation of the three most common NSXMLParser
Delegate methods. Let’s start with the initializer method and add the code
shown in listing 2 into Parser.m.
- (id)init {
    if((self = [super init])) {
        parser = [[NSXMLParser alloc]
            initWithContentsOfURL:[NSURL fileURLWithPath:[[NSBundle mainBundle]
            pathForResource:@"Sample" ofType:@"xml"]]];
        [parser setDelegate:self];
        [parser parse];
    }
    return self;
}
Here we are going to initialize our NSXMLParser parser using a file URL
pointing to our Sample.xml file that we imported into our project earlier. NSURL is a large
class with all sorts of initializers. In this case, we are telling it that we
will be providing a path to a file URL, or a local resource. With that done, we
tell the NSXMLParser
that this class will be the delegate of the parser and, finally, we tell the NSXMLParser we are
ready to parse by sending the parse message.
Once the parse method is called on the NSXMLParser, the parser
will begin to call its delegate methods. The parser reads down an XML file much
like Latin/English characters are read: left to right, top to bottom. While
there are many delegate methods, we will be focusing on three of them.
§ - (void)parser:(NSXMLParser *)parser didStartElement:(NSString *)elementName namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qualifiedName attributes:(NSDictionary *)attributeDict
While this method has a lot of parameters passed into it, it
is actually quite simple for our purposes. This method is called when an
element is seen starting; that is, any tag (between < and >) that
does not begin with a /. In this method we will first print the element we see
starting and we will clear our NSMutableString element. You will see upon
implementing the new methods that we use the element variable as a string that
we add to as delegate methods are called. The element variable is meant to hold
the value of only one XML element. So, when a new element is started, we make
sure to clear it out. Fill out the following, shown in listing 3, for this
delegate method.
- (void)parser:(NSXMLParser *)parser didStartElement:(NSString *)elementName namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qualifiedName attributes:(NSDictionary *)attributeDict {
    NSLog(@"Started Element %@", elementName);
    element = [NSMutableString string];
}
§ - (void)parser:(NSXMLParser *)parser didEndElement:(NSString *)elementName namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName
This method is called when an element is seen ending. This means that when a
closing tag (one containing a /) is reached, this method will be called. At that
point our NSMutableString element variable will be complete. We will simply
print out the value we have seen (see listing 4).
- (void)parser:(NSXMLParser *)parser didEndElement:(NSString *)elementName namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName {
    NSLog(@"Found an element named: %@ with a value of: %@", elementName, element);
}
§ - (void)parser:(NSXMLParser *)parser foundCharacters:(NSString *)string
This method is called when the parser sees anything between
an element’s beginning and ending. We will use this entry point as a way to
collect all the characters that are between an element's tags; this is done by
calling the appendString method on our NSMutableString every time this method
is called. By the time the didEndElement method is called, the NSMutableString
will be complete. In this method, we first make sure that we have initialized
our NSMutableString element and then we append the string we are provided,
shown in listing 5.
- (void)parser:(NSXMLParser *)parser foundCharacters:(NSString *)string {
    if(element == nil)
        element = [[NSMutableString alloc] init];
    [element appendString:string];
}
Now all that is left to do is create an instance of our Parser
and see it go. Go to Parser_ProjectAppDelegate.m and add the code shown in
listing 6 into the already existing method.
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
// Override point for customization after app launch
[window addSubview:viewController.view];
[window makeKeyAndVisible];
Parser *parser = [[Parser alloc] init];
return YES;
}
If you run the application and bring up the terminal window
(shift + apple + r), the output shown in listing 7 should be generated.
Parser_Project[57815:207] Started Element Author
Parser_Project[57815:207] Started Element name
Parser_Project[57815:207] Found an element named: name with a
value of: Collin Ruffenach
Parser_Project[57815:207] Started Element age
Parser_Project[57815:207] Found an element named: age with a
value of: 23
Parser_Project[57815:207] Started Element gender
Parser_Project[57815:207] Found an element named: gender with a
value of: male
Parser_Project[57815:207] Started Element Books
Parser_Project[57815:207] Started Element Book
Parser_Project[57815:207] Started Element title
Parser_Project[57815:207] Found an element named: title with a
value of: Objective C for the iPhone
Parser_Project[57815:207] Started Element year
Parser_Project[57815:207] Found an element named: year with a
value of: 2010
Parser_Project[57815:207] Started Element level
Parser_Project[57815:207] Found an element named: level with a
value of: intermediate
Parser_Project[57815:207] Found an element named: Book with a
value of: intermediate
Parser_Project[57815:207] Found an element named: Books with a
value of: intermediate
Parser_Project[57815:207] Found an element named: Author with a
value of: intermediate
You can see that using the NSXMLParser delegate methods we
successfully parsed all of the information in our XML file. From here, we could
create Objective-C objects to represent the XML and use it throughout our
application. XML processing is a vital part of most applications that get their
content from some kind of web source: Twitter clients, news clients, or YouTube clients.
Protocols are all over the place when developing for the iPhone.
They are one of the foundational design decisions for the majority of the classes
that Apple provides. With attentive coding, the usage of these protocols can
make your application efficient and error proof. Through a proper understanding
and implementation of the protocol design method, you can ensure a
well-designed application.
Here are some other Manning titles you might be
interested in:
iPhone and iPad in Action
Brandon Trebitowski, Christopher Allen, and Shannon Appelcline
iPhone in Practice
Bear P. Cahill
iPad in Practice
Paul Crawford
Last updated: August 27, 2011
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) | http://www.codeproject.com/Articles/248883/Objective-C-Fundamentals-NSXMLParser | CC-MAIN-2014-42 | refinedweb | 1,648 | 53.71 |
Exploring Spark DataSource V2 - Part 5 : Filter Push

This is the fifth blog in the series, where we discuss implementing filter push. You can read all the posts in the series here.
MySQL Data Source
To understand how to implement filter push, we will be using a MySQL data source rather than an in-memory data source. A MySQL data source is similar to our earlier in-memory data source, except that it reads the data from a MySQL database rather than an in-memory array. We will be using the JDBC API to read from MySQL. Below is the code in the Reader interface to set up an iterator and read data.
def next = {
  if(iterator == null) {
    val url = "jdbc:mysql://localhost/mysql"
    val user = "root"
    val password = "abc123"
    val properties = new java.util.Properties()
    properties.setProperty("user", user)
    properties.setProperty("password", password)
    val sparkSession = SparkSession.builder.getOrCreate()
    val df = sparkSession.read.jdbc(url, getQuery, properties)
    val rdd = df.rdd
    val partition = rdd.partitions(0)
    iterator = rdd.iterator(partition, org.apache.spark.TaskContext.get())
  }
  iterator.hasNext
}

def get = {
  iterator.next()
}
As you can see from the above code, we are using JDBC and the sparkSession.read.jdbc API to read the data. In our example, we are assuming all the data comes from a single partition. We will fix this in upcoming examples.

Once we set up the iterator, the get method just calls next on the iterator.
Filter Pushdown
In data sources, we often don't want to read the complete data from the source. In many cases, we will be analysing only a subset of the data. This is expressed as a filter in Spark SQL code.
In normal sources, to implement a filter, the complete data is brought to the Spark engine and then filtering is done. This is OK for sources such as a file source or an HDFS source. But for sources like relational databases this is very inefficient. These sources have the ability to filter data in the source itself, rather than bringing it all to Spark.
So in DataSource V2 there is a new API to specify that a source supports source-level filtering. This helps us reduce the amount of data transferred between the source and Spark.
Filter Push in Mysql Source
The below are the steps to add filter push support for the mysql data source.
1. Implement SupportsPushDownFilters Interface

We need to implement the SupportsPushDownFilters interface to indicate to the Spark engine that the source supports filter pushdown. This needs to be implemented by the DataSourceReader.
class SimpleMysqlDataSourceReader() extends DataSourceReader with SupportsPushDownFilters {
  var pushedFilters: Array[Filter] = Array[Filter]()

  def pushFilters(filters: Array[Filter]) = {
    println(filters.toList)
    pushedFilters = filters
    pushedFilters
  }
In the above code, we have implemented the interface. Then we have overridden the pushFilters method to capture the filters. In this code, we just remember the filters in a variable.
2. Implement Filter Pushdown in Mysql Query
Once we have captured the filters, we need to use them to create JDBC queries and push them down to the source. This is implemented in the DataReader.
class SimpleMysqlDataReader(pushedFilters: Array[Filter]) extends DataReader[Row] {
  val getQuery: String = {
    if(pushedFilters == null || pushedFilters.isEmpty)
      "(select user from user)a"
    else {
      pushedFilters(1) match {
        case filter: EqualTo =>
          val condition = s"${filter.attribute} = '${filter.value}'"
          s"(select user from user where $condition)a"
        case _ => "(select user from user)a"
      }
    }
  }
In the above code, the pushed filters are taken as class parameters. Once we have the filters available to us, we write a method which generates the query depending upon the filters. In the query, the column name and table name are hard-coded. This is done to simplify the overall code. In a real-world scenario these would be passed as options.

In the code, if there is no filter we just read all the data. But if there is a filter, we generate a table query which has a where condition. In our example, we only support equal-to, but you can support other operators also.

Also in the code, we are looking at the second filter (index 1 in pushedFilters). There is a reason for that; we will understand more when we see it in an example.
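The same idea extends beyond equal-to. As a sketch (this helper and its name toWhereClause are illustrative, not part of the original post), all the translatable filters can be folded into a single where clause, silently skipping anything the source cannot handle:

```scala
import org.apache.spark.sql.sources._

// Illustrative sketch: translate the pushed filters we understand
// into SQL conditions and skip the rest.
def toWhereClause(filters: Array[Filter]): String = {
  val conditions = filters.collect {
    case EqualTo(attr, value)     => s"$attr = '$value'"
    case GreaterThan(attr, value) => s"$attr > '$value'"
    case IsNotNull(attr)          => s"$attr is not null"
  }
  if (conditions.isEmpty) "" else conditions.mkString(" where ", " and ", "")
}

// toWhereClause(Array(IsNotNull("user"), EqualTo("user", "root")))
// builds " where user is not null and user = 'root'"
```

Since pushFilters in the example hands every filter back to Spark, Spark re-evaluates them after the scan anyway, so dropping an untranslatable filter here only costs extra data transfer, not correctness.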
You can access complete code on github.
Using Mysql Datasource with Filter Push
Once we have implemented filter push, we can test it with an example.
val simpleMysqlDf = sparkSession.read.format(
    "com.madhukaraphatak.examples.sparktwo.datasourcev2.simplemysql")
  .load()
simpleMysqlDf.filter("user=\"root\"").show()
In the above code, we read from our source and add a filter on user.
The above code prints the below result:
List(IsNotNull(user), EqualTo(user,root)) +----+ |user| +----+ |root| |root| |root| |root| +----+
The first line of the result shows the filters pushed to the source. As you can see, even though we specified only one filter in our Spark SQL code, Spark has pushed two of them. The reason is that Spark always checks for the rows where there are no nulls. This simplifies upstream code such as aggregations. The second filter is the one in which we are interested.
Once filtering is done, we see all the rows where the filter matches. You can verify whether the filter was pushed or not from the MySQL logs. The MySQL log should show a query like the one below. You may need to enable logging in MySQL.
119 Query SELECT `user` FROM (select user from user where user = 'root')a
The above line confirms that the actual source is receiving the query with the filter.
Conclusion
In this post, we have discussed how to implement filter pushdown in the DataSource V2 API. Implementing filter pushdown greatly reduces the data transfer between the source and the Spark engine, which in turn makes the overall data source more performant.
I'm a little confused by some behaviour I'm seeing with Firebase. I never used the old version, but I believe getRedirectResult is new since they joined forces with Google.
I have a SPA built with Vue.js and vue-router, using Firebase. There is a landing page, and then another view for which users can be logged in, or not. Login is done by redirect. When this second view is loaded, I check getRedirectResult in the vue-router 'activate' hook, and if there is a user, do some other stuff with the user information.
The problem proceeds thusly:
1. We are on second page. User logs in. getRedirectResult is called and finds a user. Yay.
2. User logs out. We are back on landing page.
3. We click a button that leads us to second page. getRedirectResult is called and finds the previous user. What?! No!
I can't find anything on whether I am missing something and need some kind of extra check in place, or to somehow forcibly refresh the page after logout so it forgets that the last user logged in, or if this would be considered a bug. Any assistance would be greatly appreciated!
getRedirectResult call on second page in vue component router 'activate' hook:
firebase.auth().getRedirectResult()
  .then((result) => {
    return result.user;
  }).then((user) => {
    // Do stuff.
  });
Update: Solved by doing a hard page refresh in the logout callback, as follows:
firebase.auth().signOut()
  .then(() => { window.location.href = '/'; });
Use a flag to detect if getRedirectResult() has been processed, i.e.:
firebase.auth().getRedirectResult()
  .then((result) => {
    return result.user;
  }).then((user) => {
    if (this.authRedirected) return; // Skip
    // Do stuff.
    this.authRedirected = true; // Mark redirected
  });
This drove me crazy. As you'll see here (), this is unfortunately intended behavior.
My approach was to avoid using getRedirectResult() entirely and achieve the same functionality by testing for the presence of an authenticated Firebase user (rather than waiting for the redirect callback). In Angular, you use AngularFire's authState observable. With this approach, when you signOut(), there's no issue with that lingering user in your client memory because it wasn't stored in getRedirectResult().
The concept is that you place a Route Guard on the login page. The Route Guard only lets you on to the login page if an authenticated user isn't present. At that point, you log in, and once that succeeds (signInWithRedirect() always takes a few seconds), the Firebase user data is loaded into the client, which triggers the Route Guard to block you from the login page and instead redirect you to the location of your choice.
For bonus points, if you want to preserve the returnUrl, store that string in local storage before firing signInWithRedirect(), and then retrieve it in the Route Guard when the redirect happens (and delete it from local storage).
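A rough sketch of that bonus step (the helper names here are invented, not from the original answer): the bookkeeping is just a save before the redirect and a destructive read afterwards. A plain object stands in for window.localStorage so the logic is easy to test.

```javascript
// Save the current route just before firing signInWithRedirect().
function saveReturnUrl(storage, url) {
  storage.returnUrl = url;
}

// Called from the route guard once an authenticated user appears:
// read the saved route (or a fallback) and clear it from storage.
function popReturnUrl(storage, fallback) {
  const url = storage.returnUrl || fallback;
  delete storage.returnUrl;
  return url;
}

// In a real app the storage object would be window.localStorage:
//   saveReturnUrl(window.localStorage, route.fullPath);
//   router.push(popReturnUrl(window.localStorage, '/profile'));
```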
Inspiration from this Firebase blog post:. Hopefully this can be applied conceptually to what you are doing in Vue.js.
If you're curious, here's what that Route Guard looks like in Angular 2:
import { Injectable } from '@angular/core';
import { AuthService } from './auth.service';
import { Router, CanActivate } from '@angular/router';
import { map } from 'rxjs/operators';

@Injectable({ providedIn: 'root' })
export class LoginGuardService implements CanActivate {
  constructor(
    private auth: AuthService,
    private router: Router
  ) { }

  canActivate() {
    return this.auth.firebaseUser$.pipe(map(user => {
      // The firebase user determines if user is logged in
      // If logged in, block the route with a redirect
      if (user) {
        console.log('User detected', user);
        const returnUrl = localStorage.getItem('returnUrl');
        // Route user to returnUrl, if none, go to profile
        if (returnUrl && returnUrl !== '/') {
          this.router.navigate([returnUrl]);
        } else {
          this.router.navigate(['profile']);
        }
        return false;
      }
      return true;
    }));
  }
}
I have worked with a large number of developers throughout my career and I can say that one area that causes a lot of confusion and can take a lot of time to fully understand is the concept of interfaces.
I believe the main reason for this is a large number of online tutorials overcomplicating the subject with very academic descriptions and complex examples. Interfaces are in general a simple concept; they can be easily understood with a good and simple example.
When describing what interfaces are and when they are useful I refer to the USB (Universal serial Bus) example.
USB is so widespread that it's common on all manner of electronic devices. This means it's easy to explain to developers, who use computers on a daily basis.
Say we have two devices that inherit the USB interface: a desktop PC and a laptop. As they have USB ports, I can connect a USB device such as a memory stick to these devices. However, my headphones do not inherit the USB interface, but rather the audio jack interface, so I cannot use USB with that device.
So how does this work in relation to C#? let’s look at some example code.
We have two classes, Desktop_PC and Laptop_PC, which both inherit the interface IUSB.
public interface IUSB
{
    bool USB_Connector { get; set; }
    bool USB_Power { get; set; }
    List<Byte> USB_Data_Transfer();
}
Here is our IUSB interface. Notice how we don't have any accessibility modifiers; it's just the return type and name. Also, it's a good convention to name your interfaces with a capital I at the beginning, hence IUSB.
Now, let’s have a look at our desktop_PC and Laptop_PC classes that inherit the USB Interface.
public class Desktop_PC : IUSB
{
    public string Manufacturer { get; set; }
    public int CaseHeight { get; set; }
    public bool USB_Connector { get; set; }
    public bool USB_Power { get; set; }

    public List<Byte> USB_Data_Transfer()
    {
        // USB_Data_Transfer Method Logic...
    }
}
public class Laptop_PC : IUSB
{
    public int ScreenSize { get; set; }
    public bool USB_Connector { get; set; }
    public bool USB_Power { get; set; }

    public List<Byte> USB_Data_Transfer()
    {
        // USB_Data_Transfer Method Logic...
    }
}
Notice that as long as a class inherits an interface and declares the interface's members, it can also have other members that are unique to the class. So Laptop_PC has ScreenSize, which is unique to the laptop, and the desktop has CaseHeight, which is unique to the desktop.
Let’s have a look at the class below. This is the MemoryStick class which in its non-default constructor takes any object of that inherits the IUSB interface, so this means I could pass Desktop_PC or Laptop_PC, or any class that inherits the IUSB interface. This is the main advantage of Interfaces; you can pass any object that inherits the interface. This could be a TV, Mouse, Phone, anything that has USB.
public class MemoryStick
{
    public double MemorySize { get; set; }
    public List<Byte> Data_Transfer { get; set; }

    public MemoryStick()
    {
        Data_Transfer = new List<byte>();
    }

    public MemoryStick(IUSB USBDevice) : this()
    {
        USBDevice.USB_Power = true;
        USBDevice.USB_Connector = true;
        foreach(var data in USBDevice.USB_Data_Transfer())
        {
            // start downloading data to the device....
        }
    }
}
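To see the benefit in action, here is a short hypothetical usage snippet (not from the article; the property values are made up). Both device types can be handed to MemoryStick because its constructor depends only on IUSB:

```csharp
// Hypothetical usage: both classes satisfy the IUSB parameter.
var desktop = new Desktop_PC { Manufacturer = "Acme", CaseHeight = 45 };
var laptop = new Laptop_PC { ScreenSize = 15 };

// MemoryStick neither knows nor cares which concrete device it gets.
var stickInDesktop = new MemoryStick(desktop);
var stickInLaptop = new MemoryStick(laptop);
```

If we later add a TV or phone class that inherits IUSB, it will work with MemoryStick without any change to the MemoryStick code.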
It’s also important to know that in C# a class can inherit multiple interfaces. The example below shows how I can modify Laptop_PC with an Audio jack interface.
public class Laptop_PC : IUSB, IAudioJack
{
    public int ScreenSize { get; set; }
    public bool USB_Connector { get; set; }
    public bool USB_Power { get; set; }

    public List<Byte> USB_Data_Transfer()
    {
        // USB_Data_Transfer Method Logic...
    }

    public bool Audio_Connector { get; set; }
    public List<Byte> Audio_Transfer { get; set; }
}

public interface IAudioJack
{
    bool Audio_Connector { get; set; }
}
Interfaces allow you to create more elegant code with much better reuse and readability. Try utilising interfaces in your projects where you have similar class objects; refactor these using an interface.
Making Login form's corners as smooth curves for appearance
This article shows how to make the corners of the login form into curves to improve its appearance. Recently I wrote some code to improve the appearance of the login screen for one of my Windows applications. This small change added value to the whole application, as it looks good. I hope this may help other developers add value to their applications too.
Only a few lines of code are needed to implement the logic.
Before going to the code, select an image and set it as the background for the login form (or whichever form you need) using the BackgroundImage property of the form.
Add the following code in the code-behind.
using System.Drawing.Drawing2D;
Include the above namespace, then add these methods:
public static GraphicsPath RoundRect(Rectangle rectangle, int roundRadius)
{
Rectangle innerRect = Rectangle.Inflate(rectangle, -roundRadius, -roundRadius);
GraphicsPath path = new GraphicsPath();
path.StartFigure();
path.AddArc(RoundBounds(innerRect.Right - 1, innerRect.Bottom - 1, roundRadius), 0, 90);
path.AddArc(RoundBounds(innerRect.Left, innerRect.Bottom - 1, roundRadius), 90, 90);
path.AddArc(RoundBounds(innerRect.Left, innerRect.Top, roundRadius), 180, 90);
path.AddArc(RoundBounds(innerRect.Right - 1, innerRect.Top, roundRadius), 270, 90);
path.CloseFigure();
return path;
}
private static Rectangle RoundBounds(int x, int y, int rounding)
{
return new Rectangle(x - rounding, y - rounding, 2 * rounding, 2 * rounding);
}
Place the above code in your form and call this function in the form's Load event.
GraphicsPath oPath = new GraphicsPath();
oPath = RoundRect(this.ClientRectangle, 15);
this.Region = new Region(oPath);
I hope this code snippet adds some polish to the appearance of your application.
Thanks, Marshal, for making the corners of the form curvy. But it will look better if we can make the curve smoother.
9.8. Signal handling
Two functions allow for asynchronous event handling to be provided.
A signal is a condition that may be reported during program
execution, and can be ignored, handled specially, or, as is the default,
used to terminate the program. One function sends signals, another is used
to determine how a signal will be processed. Many of the signals may be
generated by the underlying hardware or operating system as well as by means
of the signal-sending function
raise.
The signals are defined in the include file
<signal.h>.
SIGABRT
- Abnormal termination, such as instigated by the
abortfunction. (Abort.)
SIGFPE
- Erroneous arithmetic operation, such as divide by 0 or overflow. (Floating point exception.)
SIGILL
- An ‘invalid object program’ has been detected. This usually means that there is an illegal instruction in the program. (Illegal instruction.)
SIGINT
- Interactive attention signal; on interactive systems this is usually generated by typing some ‘break-in’ key at the terminal. (Interrupt.)
SIGSEGV
- Invalid storage access; most frequently caused by attempting to store some value in an object pointed to by a bad pointer. (Segment violation.)
SIGTERM
- Termination request made to the program. (Terminate.)
Some implementations may have additional signals available, over and above
this standard set. They will be given names that start
SIG, and
will have unique values, apart from the set above.
The function
signal allows you to specify the action taken on
receipt of a signal. Associated with each signal condition above, there is
a pointer to a function provided to handle this signal. The signal function
changes this pointer, and returns the original value. Thus the function is
defined as
#include <signal.h>
void (*signal(int sig, void (*func)(int)))(int);
That is to say,
signal is a function that returns a pointer
to another function. This second function takes a single int argument and
returns
void. The second argument to
signal is
similarly a pointer to a function returning
void which takes an
int argument.
Two special values may be used as the
func argument (the
signal-handling function),
SIG_DFL, the initial, default,
signal handler; and
SIG_IGN, which is used to ignore
a signal. The implementation sets the state of all signals to one or other
of these values at the start of the program.
If the call to
signal succeeds, the previous value of
func for the specified signal is returned. Otherwise,
SIG_ERR is returned and
errno is set.
When a signal event happens which is not being ignored, if the
associated func is a pointer to a function, first the equivalent of
signal(sig, SIG_DFL) is executed. This resets the signal
handler to the default action, which is to terminate the program. If
the signal was
SIGILL then this resetting is implementation
defined. Implementations may choose to ‘block’ further instances of
the signal instead of doing the resetting.
Next, a call is made to the signal-handling function. If that
function returns normally, then under most circumstances the
program will resume at the point where the event occurred. However, if the
value of
sig was
SIGFPE (a floating point
exception), or any implementation defined computational exception,
then the behaviour is undefined. The most usual thing to do in the handler
for
SIGFPE is to call one of the functions
abort,
exit, or
longjmp.
The following program fragment shows the use of signal to perform a tidy exit to a program on receipt of the interrupt or ‘interactive attention’ signal.
#include <stdio.h>
#include <stdlib.h>
#include <signal.h>

FILE *temp_file;

void leave(int sig);

main()
{
      (void) signal(SIGINT,leave);
      temp_file = fopen("tmp","w");
      for(;;) {
            /*
             * Do things....
             */
            printf("Ready...\n");
            (void)getchar();
      }
      /* can't get here ... */
      exit(EXIT_SUCCESS);
}

/*
 * on receipt of SIGINT, close tmp file
 * but beware - calling library functions from a
 * signal handler is not guaranteed to work in all
 * implementations.....
 * this is not a strictly conforming program
 */
void leave(int sig)
{
      fprintf(temp_file,"\nInterrupted..\n");
      fclose(temp_file);
      exit(sig);
}

Example 9.4
It is possible for a program to send signals to itself by means of the
raise function. This is defined as follows
include <signal.h> int raise (int sig);
The signal sig is sent to the program.
Raise returns zero if successful, non-zero otherwise. The
abort library function is essentially implementable as
follows:
#include <signal.h> void abort(void) { raise(SIGABRT); }
If a signal occurs for any reason other than calling abort or raise,
the signal-handling function may only call signal or assign a value to
a volatile static object of type
sig_atomic_t. The type
sig_atomic_t is declared in
<signal.h>.
It is the only type of object that can safely be modified as an atomic
entity, even in the presence of asynchronous interrupts. This is a very
onerous restriction imposed by the Standard, which, for example, invalidates
the
leave function in the example program above; although
the function would work correctly in some environments, it does not follow
the strict rules of the Standard. | http://publications.gbdirect.co.uk/c_book/chapter9/signal_handling.html | crawl-002 | refinedweb | 825 | 55.74 |
Testing Your Tests? Who Watches the Watchmen?
PHP Mutation Testing
No, no, it’s nothing like that. Mutation Testing ( or Mutant Analysis ) is a technique used to create and evaluate the quality of software tests. It consists of modifying the tests in very small ways. Each modified version is called a mutant and tests detect and reject mutants by causing the behavior of the original version to differ from the mutant. Mutations are bugs in our original code and analysis checks if our tests detect those bugs. In a nutshell, if a test still works after it’s mutated, it’s not a good test.
Mutation Testing with Humbug
Humbug is a mutation testing framework for PHP.
In order for Humbug to be able to generate code coverage, we will have to have XDebug installed and enabled on our machine. Then, we can install it as a global tool.
composer global require 'humbug/humbug'
After this, if we run the
humbug
command, we should be able to see some of our Humbug installation information and an error indicating that we don’t have a
humbug.json file.
Bootstrapping
Before we configure and use Humbug, we need a project that we can test. We will create a small PHP calculator package where we will run our unit and mutation tests.
Let’s create a
/Calculator folder. Inside it, let’s create our
/src and
/tests folders. Inside our
/src folder, we will have our application code; the
/tests folder will contain our unit tests. We will also need to use PHPUnit in our package. The best way to do that is using Composer. Let’s install PHPUnit using the following command:
composer global require phpunit/phpunit
Let’s create our Calculator. Inside the
/src folder, create a
Calculator.php file and add the following content:
<?php namespace package\Calculator; class Calculator { /** * BASIC OPERATIONS */ public function add($a1, $a2) { return $a1 + $a2; } public function subtract($a1, $a2) { return $a1 - $a2; } public function multiply($a1, $a2) { return $a1 * $a2; } public function divide($a1, $a2) { if ($a2 === 0) { return false; } return $a1 / $a2; } /* * PERCENTAGE */ //This will return $a1 percent of $a2 public function percentage($a1, $a2) { return ( $a1 / $a2 ) * 100; } /* * PI */ //Returns the value of pi public function pi() { return pi(); } /* * LOGARITHMIC */ //Returns the basic logarithm in base 10 public function log($a) { return log10($a); } }
It is a rather straightforward program. A simple calculator, with the basic arithmetic, percentage and logarithmic operations and a function to return the value of pi. Next, inside our
/tests folder, let’s create the unit tests for our calculator. If you need help with unit testing in PHP, check out this tutorial.
Create a CalculatorTest.php file and add the following:
<?php use package\Calculator\Calculator; class CalculatorTest extends PHPUnit_Framework_TestCase { public function testAdd() { $calculator = new Calculator(); $result = $calculator->add(2, 3); $this->assertEquals($result, 5); } public function testSubtract() { $calculator = new Calculator(); $result = $calculator->subtract(6, 3); $this->assertEquals($result, 3); } public function testMultiply() { $calculator = new Calculator(); $result = $calculator->multiply(6, 3); $this->assertEquals($result, 18); } public function testDivide() { $calculator = new Calculator(); $result = $calculator->divide(6, 3); $this->assertEquals($result, 2); } }
This will be our initial test stack. If we run the
phpunit, command we will see that it executes successfully, and our 4 tests and 4 assertions will pass. It is important that all of our tests are passing, otherwise, Humbug will fail.
Configuring Humbug
Humbug may either be configured manually, by creating a
humbug.json.dist file, or automatically, by running the command:
humbug configure
Running the command will ask us for answers to some questions:
- What source directories do you want to include?
In this one we will go with src/, the directory of our source code.
- Any directories you want to exclude from within your source directory?
May be useful in some cases, like an external vendor directory that we don’t want tested. It does not apply in our current case.
- Single test suite timeout in seconds.
Let’s go with 30 seconds on this one. It is probably too much, but we want to be sure everything has had enough time to run.
- Where do you want to store your text log?
humblog.txtcomes as default and we will leave it as that.
- Where do you want to store your json log (if you need it)?
The default comes empty but we will store it in
humblogjson.json.
- Generate “humblog.json.dist”?
This file will, when generated, contain all the configuration values we just supplied. We can edit it manually if we want to change something.
Using Humbug
Now that we have both our application running with tests and Humbug installed, let’s run Humbug and check the results.
humbug
The result should be close to this:
Interpreting Humbug results
The number of mutations created is just the number of small changes introduced by Humbug to test our tests.
A killed mutant (.) is a mutation that caused a test to fail. Don’t be confused, this is a positive result!
An escaped mutation (M) is a mutation where the test still passed. This is not a positive result, we should go back to our test and check what’s missing.
An uncovered mutation (S) is a mutation that occurs in a line not covered by a unit test.
Fatal errors (E) and timeouts (T) are mutations that created fatal errors and mutations that create infinite loops, respectively.
What about the metrics?
The Mutation Score Indicator indicates the percentage of generated mutations that were detected. We want to aim at 100%.
Mutation Code Coverage indicates the percentage of tests covered by mutations.
The Mutation Score Indicator gives you some idea of how effective the tests that do exist really are.
Analyzing our humbug log, we can see that we have 9 mutants not covered, and some really bad metrics. Take a look at the
humblogjson.json file. This file was generated automatically just like the
humblog.txt file, and contains much more detailed information on what failed, where and why. We haven’t tested our percentage, pi and logarithm functions. Also, we need to cover the case where we divide a number by 0. Let’s add some more tests to cover the missing situations:
public function testDivideByZero() { $calculator = new Calculator(); $result = $calculator->divide(6, 0); $this->assertFalse($result); } public function testPercentage() { $calculator = new Calculator(); $result = $calculator->percentage(2, 50); $this->assertEquals($result, 4); } public function testPi() { $calculator = new Calculator(); $result = $calculator->pi(); $this->assertEquals($result, pi()); } public function testLog() { $calculator = new Calculator(); $result = $calculator->log(10); $this->assertEquals($result, 1); }
This time around, 100% means that all mutations were killed and that we have full code coverage.
Downsides
The biggest downside of mutation testing, and by extension Humbug, is performance. Mutation testing is a slow process as it depends on a lot of factors like interplay between lines of code, number of tests, level of code coverage, and the performance of both code and tests. Humbug also does initial test runs, logging and code coverage, which add to the total duration.
Additionally, Humbug is PHPUnit specific, which can be a problem for those who are using other testing frameworks.
That said, Humbug is under active development and will continue to improve.
Conclusion
Humbug can be an important tool for maintaining your app’s longevity. As the complexity of your app increases, so does the complexity of your tests – and having them all at 100% all the time becomes incredibly important, particularly when dealing with enterprise ecosystems.
The code we used in this tutorial can be cloned here.
Have you used Humbug? Do you do mutation testing another way? Give us your thoughts on all this!_3<<
🤓 Ok. When did a code editor from Microsoft become kinda cool!?
Popular Books
Visual Studio Code: End-to-End Editing and Debugging Tools for Web Developers
Form Design Patterns
Jump Start Git, 2nd Edition | https://www.sitepoint.com/testing-your-tests-who-watches-the-watchmen/ | CC-MAIN-2020-24 | refinedweb | 1,318 | 55.64 |
Write a program to input a sentence and print the longest word or words in it, and the average length of words to the nearest whole number.
Example:
INPUT: The good Lord gave us a good meal.
OUTPUT: good Lord gave good meal
Average length: 3
Program:
import java.io.*; class Longest{ public static void main(String args[]) throws IOException{ InputStreamReader in = new InputStreamReader(System.in); BufferedReader br = new BufferedReader(in); System.out.print("Sentence: "); String s = br.readLine(); String word = new String(); int len = 0; int sum = 0; int count = 0; int avg = 0; s = s.replace(".", ""); s = s.replace(",", ""); s = s.replace("?", ""); s = s.trim(); s += " "; for(int i = 0; i < s.length(); i++){ char ch = s.charAt(i); if(ch == ' '){ if(len < word.length()) len = word.length(); sum += word.length(); word = new String(); count++; } else word += ch; } System.out.print("Longest words: "); for(int i = 0; i < s.length(); i++){ char ch = s.charAt(i); if(ch == ' '){ if(len == word.length()) System.out.print(word + "\t"); word = new String(); } else word += ch; } avg = (int)(Math.rint((double)sum / count)); System.out.println("\nAverage length: " + avg); } }
Sir,a program is given which states that:
WAP in Java to accept a paragraph containing n no of sentences where n>=1&& n<4. The words are to be separated with a single blank space and are in upper case.A sentence may be terminated either with a '.', '?' ,'!' only. Any other character may be ignored. Perform the following operations:
1) Accept the no.of sentences. If the number of sentences exceeds the limit, an appropriate error message must be displayed.
2)Find the number of words in the whole paragraph.
3) Display the words in ascending order of their frequency. Words with same frequency may appear in any order.
Eg:
Sample Input:
Enter number of sentences:1
Enter sentences:TO BE OR NOT TO BE.
Sample Output:
Total number of words:6
WORD FREQUENCY
OR 1
NOT 1
TO 2
BE 2 | https://www.happycompiler.com/class-11-longest-word-2013-final/ | CC-MAIN-2020-50 | refinedweb | 331 | 70.5 |
Fermat numbers have the form
Fermat numbers are prime if n = 0, 1, 2, 3, or 4. Nobody has confirmed that any other Fermat numbers are prime. Maybe there are only five Fermat primes and we’ve found all of them. But there might be infinitely many Fermat primes. Nobody knows.
There’s a specialized test for checking whether a Fermat number is prime, Pépin’s test. It says that for n ≥ 1, the Fermat number Fn is prime if and only if
We can verify fairly quickly that F1 through F4 are prime and that F5 through F14 are not with the following Python code.
def pepin(n): x = 2**(2**n - 1) y = 2*x return y == pow(3, x, y+1) for i in range(1, 15): print(pepin(i))
After that the algorithm gets noticeably slower.
We have an efficient algorithm for testing whether Fermat numbers are prime, efficient relative to the size of numbers involved. But the size of the Fermat numbers is growing exponentially. The number of digits in Fn is
So F14 has 4,933 digits, for example.
The Fermat numbers for n = 5 to 32 are known to be composite. Nobody knows whether F33 is prime. In principle you could find out by using Pépin’s test, but the number you’d need to test has 2,585,827,973 digits, and that will take a while. The problem is not so much that the exponent is so big but that the modulus is also very big.
The next post presents an analogous test for whether a Mersenne number is prime. | http://www.statsblogs.com/2018/11/27/searching-for-fermat-primes/ | CC-MAIN-2019-22 | refinedweb | 269 | 79.9 |
How to Think Like a Computer Scientist: Learning with Python 2nd Edition/Modules and files
Contents
Modules and files[edit]
Modules[edit][edit]
Classes will be discussed in later chapters, but for now we can use pydoc to see the functions and data contained within modules.
The keyword module contains a single function, iskeyword, which as its name suggests is a boolean function that returns True if a string passed to it is a keyword:
The data item, kwlist contains a list of all the current keywords in Python:
We encourage you to use pydoc to explore the extensive libraries that come with Python. There are so many treasures to discover!
Creating modules[edit]
All we need to create a module is a text file with a .py extension on the filename:
We can now use our module in both scripts and the Python shell. To do so, we must first import the module. There are two ways to do this:
and:
In the first example, remove_at is called just like the functions we have seen previously. In the second example the name of the module and a dot (.) are written before the function name.
Notice that in either case we do not include the .py file extension when importing. Python expects the file names of Python modules to end in .py, so the file extention is not included in the import statement.
The use of modules makes it possible to break up very large programs into managable sized parts, and to keep related parts together.
Namespaces[edit]
A namespace is a syntactic container which permits the same name to be used in different modules or functions (and as we will see soon, in classes and methods).
Each module determines its own namespace, so we can use the same name in multiple modules without causing an identification problem. operator[edit]
Variables defined inside a module are called attributes of the module. They are accessed by using the dot operator ( .). The question attribute of module1 and module2 are accessed using module1.question and module2.question.
Modules contain functions as well as attributes, and the dot operator is used to access them in the same way. seqtools.remove_at refers to the remove_at function in the seqtools module.
In Chapter 7 we introduced the find function from the string module. The string module contains many other useful functions:
You should use pydoc to browse the other functions and attributes in the string module.
String and list methods[edit]
As the Python language developed, most of functions from the string module have also been added as methods of string objects. A method acts much like a function, but the syntax for calling it is a bit different:
String methods are built into string objects, and they are invoked (called) by following the object with the dot operator and the method name.
We will be learning how to create our own objects with their own methods in later chapters. For now we will only be using methods that come with Python's built-in objects.
The dot operator can also be used to access built-in methods of list objects:[edit]
While a program is running, its data is stored in random access memory (RAM). RAM is fast and inexpensive, but it is also volatile, which means that when the program ends, or the computer shuts down, data in RAM disappears. To make data available the next time you turn on your computer and start your program, you have to write it to a non-volatile storage medium, such a hard drive, usb drive, or CD-RW.
Data on non-volatile storage media is stored in named locations on the media called files. By reading and writing files, programs can save information between program runs.
Working with files is a lot like working with a notebook. To use a notebook, you have to open it. When you're done, you have to close it. While the notebook is open, you can either write in it or read from it. In either case, you know where you are in the notebook. You can read the whole notebook in its natural order or you can skip around.
All of this applies to files as well. To open a file, you specify its name and indicate whether you want to read or write.
Opening a file creates a file object. In this example, the variable myfile refers to the new file object.[edit].
Directories[edit]
Files on non-volatile storage media are organized by a set of rules known as a file system. File systems are made up of files and directories, which are containers for both files and other directories.:
This example opens a file named words that resides in a directory named dict, which resides in share, which resides in usr, which resides in the top-level directory of the system, called /. It then reads in each line into a list using readlines, and prints out the first 5 elements from that list.
You cannot use / as part of a filename; it is reserved as a delimiter between directory and filenames.
The file /usr/share/dict/words should exist on unix based systems, and contains a list of words in alphabetical order.
Counting Letters[edit]v[edit]:
The results will be different on your machine of course.
The argv variable holds a list of strings read in from the command line when a Python script is run. These command line arguments can be used to pass information into a program at the same time it is invoked.
Running this program from the unix command prompt demonstrates how sys.argv works:
$ python demo_argv.py this and that 1 2 3 ['demo_argv.py', 'this', 'and', 'that', '1', '2', '3'] $
argv is a list of strings. Notice that the first element is the name of the program. Arguments are separated by white space, and separated into a list in the same way that string.split operates. If you want an argument with white space in it, use quotes:
$ python demo_argv.py "this and" that "1 2" 3 ['demo_argv.py', 'this and', 'that', '1 2', '3'] $
With argv we can write useful programs that take their input directly from the command line. For example, here is a program that finds the sum of a series of numbers:
In this program we use the from <module> import <attribute> style of importing, so argv is brought into the module's main namespace.
We can now run the program from the command prompt like this:
You are asked to write similar programs as exercises.
Glossary[edit]
Exercises[edit]
Complete the following:
- Start the pydoc server with the command pydoc -g at the command prompt.
- Click on the open browser button in the pydoc tk window.
- Find the calendar module and click on it.
While looking at the Functions section, try out the following in a Python shell:
Experiment with calendar.isleap. What does it expect as an argument? What does it return as a result? What kind of a function is this?
If you don't have Tkinter installed on your computer, then pydoc -g will return an error, since the graphics window that it opens requires Tkinter. An alternative is to start the web server directly:
$ pydoc -p 7464
This starts the pydoc web server on port 7464. Now point your web browser at:
and you will be able to browse the Python libraries installed on your system. Use this approach to start pydoc and take a look at the math module.
-?
- Use pydoc to investigate the copy module. What does deepcopy do? In which exercises from last chapter would deepcopy have come in handy?
Create a module named mymodule1.py. Add attributes myage set to your current age, and year set to the current year. Create another module named mymodule2.py. Add attributes myage set to 0, and year set to the year you were born. Now create a file named namespace_test.py. Import both of the modules above and write the following statement:When you will run namespace_test.py you will see either True or False as output depending on whether or not you've already had your birthday this year.
Add the following statement to mymodule1.py, mymodule2.py, and namespace_test.py from the previous exercise:'s have to say about namespaces?
- Use pydoc to find and test three other functions from the string module. Record your findings.
- Rewrite matrix_mult from the last chapter using what you have learned about list methods.
- The dir function, which we first saw in Chapter 7, prints out a list of the attributes of an object passed to it as an argument. In other words, dir returns the contents of the namespace of its argument. Use dir(str) and dir(list) to find at least three string and list methods which have not been introduced in the examples in the chapter. You should ignore anything that begins with double underscore (__) for the time being. Be sure to make detailed notes of your findings, including names of the new methods and examples of their use. ( hint: Print the docstring of a function you want to explore. For example, to find out how str.join works, print str.join.__doc__)
Give the Python interpreter's response to each of the following from a continuous interpreter session:
-:
Explain how this statement makes both using and testing this module convenient. What will be the value of __name__ when wordtools.py is imported from another module? What will it be when it is run as a main program? In which case will the doctests run? Now add bodies to each of the following functions to make the doctests pass:Save this module so you can use the tools it contains in your programs.
- unsorted_fruits.txt_ contains a list of 26 fruits, each one with a name that begins with a different letter of the alphabet. Write a program named sort_fruits.py that reads in the fruits from unsorted_fruits.txt and writes them out in alphabetical order to a file named sorted_fruits.txt.
Answer the following questions about countletters.py:
Explain in detail what the three lines do
does. What is the purpose of if counts[i]?
Write a program named mean.py that takes a sequence of numbers on the command line and returns the mean of their values.:
$ python mean.py 3 4 3.5 $ python mean.py 3 4 5 4.0 $ python mean.py 11 15 94.5 22 35.625A session of your program running on the same input should produce the same output as the sample session above.
Write a program named median.py that takes a sequence of numbers on the command line and returns the median of their values.:
$ python median.py 3 7 11 7 $ python median.py 19 85 121 85 $ python median.py 11 15 16 22 15.5A session of your program running on the same input should produce the same output as the sample session above.
Modify the countletters.py program so that it takes the file to open as a command line argument. How will you handle the naming of the output file? | http://en.wikibooks.org/wiki/How_to_Think_Like_a_Computer_Scientist:_Learning_with_Python_2nd_Edition/Modules_and_files | CC-MAIN-2014-23 | refinedweb | 1,869 | 74.29 |
I had previously followed the r/roguelikedev summer tutorial series, but never finished. I tried again this year (2020), using rot.js[1] this time, and finished. 🎉
Source code: game.html + roguelike-dev.js (no build step) - and on github[2].
My goal is not to implement my own features, but instead focus on only the features from the Python + tcod tutorial, implementing them in JavaScript + browser + rot.js instead. I’m willing to implement the data structures, code style, etc., differently from the tutorial to better fit JavaScript and my own coding style. However, every time I think of a game feature I’d like to add or do differently, I add it to a text file somewhere, and then I can come back to it after I finish the tutorial.
1 Drawing the ‘@’ symbol and moving it around
I’m going to follow the Roguelike Tutorial for Python[3] (2019 version, not the 2020 version, for various reasons) and adapt it for rot.js[4].
1.1. Setup
Part 0[5] of the tutorial covers setting up Python and libtcod. I’ll instead set up rot.js.
HTML:
<figure id="game"></figure>
<script src=""></script>
<script src="game.js"></script>
Javascript:
const display = new ROT.Display({width: 60, height: 25});
document.getElementById('game')
    .appendChild(display.getContainer());
display.draw(5, 4, '@');
Set the fontFamily property if you want to override the default browser monospace font. For example, fontFamily: "Roboto Mono".
A warning about my coding style: I follow “make it work” before “make the code nice”. That means I’ll use plenty of global variables and hacks at first, and clean up some of them later. Don’t look at my code as an example of how to structure a program “properly”.
1.2. Key input
Part 1 of the roguelike tutorial[6] covers setting up a screen and keyboard handler. I already set up the screen in the previous section so now I need to set up a keyboard handler. Unlike Python, we don’t write an event loop in the browser. The browser already is running an event loop, and we add event handlers to it.
There are various places to attach a keyboard event handler.
- The rot.js manual[7] suggests using an <input> element for event handlers. I decided not to do this. If that input ever loses focus, I don’t know how to get focus back. Clicking an input box is what I think of when I’m typing input text but not what I expect to do when moving around on the map.
- One rot.js tutorial[8] attaches the event handlers to the global window object. I decided not to do this. There are times when I am typing elsewhere on the page (typing into the site search box, or adding a comment, or maybe even in-game actions like typing my name in). I don’t want the game to treat those keystrokes as player actions.
I decided to instead make the game map focusable by adding the tabindex="1" attribute to its canvas. This way, clicking on the game map will give it keyboard focus. You can click away to add a comment and then come back to the game.
Javascript:
const canvas = display.getContainer();
canvas.addEventListener('keydown', handleKeyDown);
canvas.setAttribute('tabindex', "1");
canvas.focus();
The problem is that a canvas isn’t an obviously focusable element. What happens if it ever loses focus? I decided to add a message when the canvas loses focus:
HTML:
<figure>
    <div id="game"></div>
    <div id="focus-reminder">click game to focus</div>
</figure>
Javascript:
const focusReminder = document.getElementById('focus-reminder');
canvas.addEventListener('blur', () => {
    focusReminder.style.visibility = 'visible';
});
canvas.addEventListener('focus', () => {
    focusReminder.style.visibility = 'hidden';
});
CSS:
#game canvas {
    display: block;
    margin: auto;
}
The CSS is not self-explanatory. I use display: block because a <canvas> element is inline by default, and that means it has some extra space below it matching the extra space a line of text has below it to separate it from the next line below. I don’t want that so I change it from an inline element to a block element. I use margin: auto to center the canvas in the parent element.
Here’s what it looks like if it does not have focus:
The next thing I need is an event handler:
function handleKeyDown(event) {
    console.log('keydown', event);
}
I often start out with a console.log to make sure that a function is getting called.
What’s next for Part 1? I need to make arrow keys move the player around. I can’t do that yet, because I don’t have a player position.
1.3. Player movement
I need to keep track of the player position and then change it when a key is pressed.
let player = {x: 5, y: 4, ch: '@'};

function drawCharacter(character) {
    let {x, y, ch} = character;
    display.draw(x, y, ch);
}

function draw() {
    drawCharacter(player);
}

function handleKeyDown(event) {
    if (event.keyCode === ROT.KEYS.VK_RIGHT) { player.x++; }
    if (event.keyCode === ROT.KEYS.VK_LEFT) { player.x--; }
    if (event.keyCode === ROT.KEYS.VK_DOWN) { player.y++; }
    if (event.keyCode === ROT.KEYS.VK_UP) { player.y--; }
    draw();
}

draw();
Two problems:
- When using arrow keys, the page scrolls. I can fix this by adding event.preventDefault(). But if I do that, then browser hotkeys stop working. So I need to do something a little smarter. I’m going to prevent the default only if I handled the key.
- The @ character doesn’t get erased when I move. I need to either draw a space character over the old position, or I need to clear the game board and redraw everything. I’m going to redraw everything. I find it to be simpler and less error prone.
This would be a good time to mention that the rot.js interactive manual doesn’t cover all the functionality. You may also want to look at the non-interactive documentation[9] for a more complete list of methods. In this case, I looked at display/canvas→Canvas[10] to find the clear method.
Part 1[11] of the Python tutorial splits up keyboard handling into a function that generates an action and another function that performs the action. I’ll do the same.
function handleKeys(keyCode) {
    const actions = {
        [ROT.KEYS.VK_RIGHT]: () => ['move', +1, 0],
        [ROT.KEYS.VK_LEFT]: () => ['move', -1, 0],
        [ROT.KEYS.VK_DOWN]: () => ['move', 0, +1],
        [ROT.KEYS.VK_UP]: () => ['move', 0, -1],
    };
    let action = actions[keyCode];
    return action ? action() : undefined;
}

function handleKeyDown(event) {
    let action = handleKeys(event.keyCode);
    if (action) {
        if (action[0] === 'move') {
            let [_, dx, dy] = action;
            player.x += dx;
            player.y += dy;
            draw();
        } else {
            throw `unhandled action ${action}`;
        }
        event.preventDefault();
    }
}

function draw() {
    display.clear();
    drawCharacter(player);
}
Ok, that’s better. It only captures keys that are being used for the game, and leaves browser hotkeys alone. And it erases the screen before drawing a new frame.
What else is in Part 1 of the tutorial?
- fullscreen toggle
- press escape to quit
I’m going to skip these two.
Note: I later changed the code from using keyCode to using key. This is a newer browser feature, and it provides a string name of the key that was pressed, handling shifted keys too. For example, keyCode doesn’t distinguish between / and ? because they are on the same key, but key will be different.
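The post doesn’t show the updated code, but the change is likely mechanical. Here is a sketch of what the key-based handler might look like — my guess, not the post’s code; ArrowRight and friends are the standard KeyboardEvent.key names:

```javascript
// Sketch of handleKeys after switching to event.key. The action table
// is keyed on the string names the browser reports for arrow keys.
function handleKeys(key) {
    const actions = {
        ArrowRight: () => ['move', +1, 0],
        ArrowLeft: () => ['move', -1, 0],
        ArrowDown: () => ['move', 0, +1],
        ArrowUp: () => ['move', 0, -1],
    };
    let action = actions[key];
    return action ? action() : undefined;
}
```

The caller would then change from handleKeys(event.keyCode) to handleKeys(event.key).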
2 The generic Entity, the render functions, and the map
Part 2[12] of the tutorial covers entities. My design differs slightly from the tutorial:
- I include only “instance” data in the entity, such as position and health, but not “static” data such as its color.
- I include an entity type string instead. Normally this is “implicit” information in that each object belongs to a class. I prefer making game classes explicit.
- I also don’t put methods in this object. I’ve had too many methods that don’t “belong” in any one class, so I prefer to leave them as free functions. My goal is to have the object serializable as JSON.
- I give each entity an id. I find that useful in debugging. It may come in handy later for serialization or events or logging.
function createEntity(type, x, y) {
    let id = ++createEntity.id;
    return { id, type, x, y };
}
createEntity.id = 0;

let player = createEntity('player', 5, 4);
Here’s an example of how this design differs from the one in the Python tutorial:
function drawEntity(entity) {
    const visuals = {
        player: ['@', "hsl(60, 100%, 50%)"],
        troll: ['T', "hsl(120, 60%, 50%)"],
        orc: ['o', "hsl(100, 30%, 50%)"],
    };
    const [ch, fg, bg] = visuals[entity.type];
    display.draw(entity.x, entity.y, ch, fg, bg);
}
Instead of storing the character and the color in the object, I store a type in the object, and then store the character and color in a lookup table. There are some scenarios where I like this design better:
- if I want to change the appearance based on its status (bleeding, poisoned, etc.)
- if I want to show another map (perhaps a sonar view) where the visuals of each entity are different
- if I save the game, and then in the new version of the game I want to change colors
- if I want to switch from ascii to graphical tiles
Ok, cool, I have a way to make entities. Let’s make a second one:
let troll = createEntity('troll', 20, 10);
Now I have to modify the drawing function to draw it too:
function draw() {
    display.clear();
    drawEntity(player);
    drawEntity(troll);
}
Looks good. The player and monster have different appearances.
I can’t keep adding a variable for each entity. Part 2 of the Roguelike Tutorial converts the individual entity variables into a set of entities. I was going to use an array or a Set but decided to use a Map instead.
let entities = new Map();

function createEntity(type, x, y) {
    let id = ++createEntity.id;
    let entity = { id, type, x, y };
    entities.set(id, entity);
    return entity;
}
createEntity.id = 0;
Then when I draw them, I can loop over entities:
function draw() {
    display.clear();
    for (let entity of entities.values()) {
        drawEntity(entity);
    }
}
3 Generating a dungeon#
The second half of Part 2[13] creates a map data structure, and Part 3 generates a dungeon map. ROT.js includes dungeon map creation functions so I’ll use one of their algorithms. ROT will call a callback function for each map tile, 0 for walkable and 1 for wall. I’m going to store this data in a Map, indexed by a string
x,y. For example at position x=3, y=5, I’ll use a string key
"3,5".
function createMap(width, height) {
    let map = {
        width, height,
        tiles: new Map(),
        key(x, y) { return `${x},${y}`; },
        get(x, y) { return this.tiles.get(this.key(x, y)); },
        set(x, y, value) { this.tiles.set(this.key(x, y), value); },
    };
    const digger = new ROT.Map.Digger(width, height);
    digger.create((x, y, contents) => map.set(x, y, contents));
    return map;
}

let map = createMap(60, 25);
The next step is to draw the map. I want to draw it first, before the player or monsters.
function draw() {
    display.clear();
    for (let y = 0; y < map.height; y++) {
        for (let x = 0; x < map.width; x++) {
            if (map.get(x, y)) {
                display.draw(x, y, '⨉', "hsl(60, 10%, 40%)", "gray");
            } else {
                display.draw(x, y, '·', "hsl(60, 50%, 50%)", "black");
            }
        }
    }
    for (let entity of entities.values()) {
        drawEntity(entity);
    }
}
The final step is to make player movement not allow moving onto a wall. I modified the movement function to check if the map tile is
0. This is slightly different from checking that it’s not
1: checking for 0 also ensures I don’t walk off the map, where the values are
undefined.
function handleKeyDown(event) {
    let action = handleKeys(event.keyCode);
    if (action) {
        if (action[0] === 'move') {
            let [_, dx, dy] = action;
            let newX = player.x + dx,
                newY = player.y + dy;
            if (map.get(newX, newY) === 0) {
                player.x = newX;
                player.y = newY;
            }
            draw();
        } else {
            throw `unhandled action ${action}`;
        }
        event.preventDefault();
    }
}
The dungeon generation algorithm also generates a list of rooms and corridors. This might be useful later.
4 Field of view#
ROT.js includes two field of view algorithms[14]. The field of view library is fairly easy to use. The input callback lets it ask you “can you see through x,y?” and the output callback lets it tell you “there’s this much light at x,y”. I saved the results in a Map and used it for calculating the light level at any point. The Python tutorial doesn’t use the light level but maybe I’ll find something to do with it later.
const fov = new ROT.FOV.PreciseShadowcasting((x, y) => map.get(x, y) === 0);

function draw() {
    display.clear();
    let lightMap = new Map();
    fov.compute(player.x, player.y, 10, (x, y, r, visibility) => {
        lightMap.set(map.key(x, y), visibility);
    });
    const colors = {
        [false]: {[false]: "rgb(50, 50, 150)", [true]: "rgb(0, 0, 100)"},
        [true]: {[false]: "rgb(200, 180, 50)", [true]: "rgb(130, 110, 50)"},
    };
    for (let y = 0; y < map.height; y++) {
        for (let x = 0; x < map.width; x++) {
            let lit = lightMap.get(map.key(x, y)) > 0.0,
                wall = map.get(x, y) !== 0;
            let color = colors[lit][wall];
            display.draw(x, y, ' ', "black", color);
        }
    }
    for (let entity of entities.values()) {
        if (lightMap.get(map.key(entity.x, entity.y)) > 0.0) {
            drawEntity(entity);
        }
    }
}
Ok, this seems like it’s not too hard. Looks cool:
But there’s a problem: the entities (
@ and
T) are getting drawn with a black background color, not with the map background. In libtcod, I can set the background and foreground separately, so in the official tutorial the map sets the background and the entity sets the foreground and character. In ROT.js, I have to set all three at once.
I need to merge my drawing loops somehow.
I’m going to remove the
drawEntity() function and replace it with a lookup function. Instead of drawing to the screen it only tells the
draw() function what to draw.
/** return [char, fg, optional bg] for a given entity */
function entityGlyph(entityType) {
    const visuals = {
        player: ['@', "hsl(60, 100%, 70%)"],
        troll: ['T', "hsl(120, 60%, 30%)"],
        orc: ['o', "hsl(100, 30%, 40%)"],
    };
    return visuals[entityType];
}
Now the draw function has more logic, because it’s merging the entity glyph with the map background color:
function draw() {
    display.clear();
    let lightMap = new Map(); // map key to 0.0–1.0
    fov.compute(player.x, player.y, 10, (x, y, r, visibility) => {
        lightMap.set(map.key(x, y), visibility);
    });
    let glyphMap = new Map(); // map key to [char, fg, optional bg]
    for (let entity of entities.values()) {
        glyphMap.set(map.key(entity.x, entity.y), entityGlyph(entity.type));
    }
    const mapColors = {
        [false]: {[false]: "rgb(50, 50, 150)", [true]: "rgb(0, 0, 100)"},
        [true]: {[false]: "rgb(200, 180, 50)", [true]: "rgb(130, 110, 50)"},
    };
    for (let y = 0; y < map.height; y++) {
        for (let x = 0; x < map.width; x++) {
            let lit = lightMap.get(map.key(x, y)) > 0.0,
                wall = map.get(x, y) !== 0;
            let ch = ' ',
                fg = "black",
                bg = mapColors[lit][wall];
            let glyph = glyphMap.get(map.key(x, y));
            if (glyph) {
                ch = lit? glyph[0] : ch;
                fg = glyph[1];
                bg = glyph[2] || bg;
            }
            display.draw(x, y, ch, fg, bg);
        }
    }
}
Now the background colors behind entities look reasonable:
The background color comes from the map and the foreground color and character comes from the entity.
The next step is to implement the three states of the map:
- Unexplored: don’t show anything.
- Explored, but not currently visible: show in blue.
- Visible: show in yellow.
For this I’ll add a flag
explored to the map. It will start out false and become true if the tile is ever visible. I realized that my map object isn’t great. It has a
get and
set, but they return 0 for a floor and 1 for a wall. I also have other similar types of maps, like
lightMap and a
glyphMap.
I’m going to make a wrapper around 2d maps from (x,y) to any value:
function createMap(initializer) {
    function key(x, y) { return `${x},${y}`; }
    return {
        _values: new Map(),
        has(x, y) { return this._values.has(key(x, y)); },
        get(x, y) { return this._values.get(key(x, y)); },
        set(x, y, value) { this._values.set(key(x, y), value); },
        at(x, y) {
            let k = key(x, y);
            if (!this._values.has(k)) {
                this._values.set(k, initializer());
            }
            return this._values.get(k);
        },
    };
}
I replaced my game map data structure with the generic one:
function createTileMap(width, height) {
    let tileMap = createMap();
    const digger = new ROT.Map.Digger(width, height);
    digger.create((x, y, contents) =>
        tileMap.set(x, y, {
            walkable: contents === 0,
            wall: contents === 1,
            explored: false,
        })
    );
    return tileMap;
}
A note about data structure: I used to fall into a loop. I would put a lot of effort into the core data structures, figuring out class hierarchies, modules, extensibility, generics, patterns, etc. Then I would use it for a bit and realize something isn’t great. But I wouldn’t change it because I had put so much effort into it that it was really hard to justify throwing anything away.
These days I don’t start with the right data structures. Instead, I start with something and then plan to change it once I figure out what I want. I discover the best patterns while working on the project, instead of starting with the patterns and then making the project fit. Because I put so little effort into the initial code, it’s no big deal to throw it out and replace it with something better.
I changed the data structures for this project four times already, and it was still faster than if I had tried to figure out everything ahead of time. I’m optimizing for making it easy to make changes.
Now that I have a 2d sparse map data structure, I’ll reuse it for the light and glyph maps. While calculating the light map, I also update the
explored flag in the tile map. Another possible design would be to keep a separate
exploredMap instead of modifying the tile map; that would allow for multiple explored maps corresponding to different player characters. But this will do for now.
function computeLightMap(center, tileMap) {
    let lightMap = createMap(); // 0.0–1.0
    fov.compute(center.x, center.y, 10, (x, y, r, visibility) => {
        lightMap.set(x, y, visibility);
        if (visibility > 0.0) {
            if (tileMap.has(x, y))
                tileMap.get(x, y).explored = true;
        }
    });
    return lightMap;
}

function computeGlyphMap(entities) {
    let glyphMap = createMap(); // [char, fg, optional bg]
    for (let entity of entities.values()) {
        glyphMap.set(entity.x, entity.y, entityGlyph(entity.type));
    }
    return glyphMap;
}
Here’s the new
draw() function:
const mapColors = {
    [false]: {[false]: "rgb(50, 50, 150)", [true]: "rgb(0, 0, 100)"},
    [true]: {[false]: "rgb(200, 180, 50)", [true]: "rgb(130, 110, 50)"}
};

function draw() {
    display.clear();
    let lightMap = computeLightMap(player, tileMap);
    let glyphMap = computeGlyphMap(entities);
    for (let y = 0; y < HEIGHT; y++) {
        for (let x = 0; x < WIDTH; x++) {
            let tile = tileMap.get(x, y);
            if (!tile || !tile.explored) { continue; }
            let lit = lightMap.get(x, y) > 0.0;
            let ch = ' ',
                fg = "black",
                bg = mapColors[lit][tile.wall];
            let glyph = glyphMap.get(x, y);
            if (glyph) {
                ch = lit? glyph[0] : ch;
                fg = glyph[1];
                bg = glyph[2] || bg;
            }
            display.draw(x, y, ch, fg, bg);
        }
    }
}
And hey, it works!
5 Placing enemies and kicking them (harmlessly)#
Part 5 of the Python tutorial adds monsters to rooms.
One of the things the Python tutorial uses is the Python
randint() function. ROT.js’s manual[15] shows that it has
getUniform(), which I can wrap to make a
randint() function. However if you dig deeper, ROT.js actually has the randint function[16], called
getUniformInt(). There seem to be a lot of things that aren’t covered in the manual.
I made a shortcut for it:
const randint = ROT.RNG.getUniformInt.bind(ROT.RNG);
and then used it for the monster creating function:
function createMonsters(room, maxMonstersPerRoom) {
    let numMonsters = randint(0, maxMonstersPerRoom);
    for (let i = 0; i < numMonsters; i++) {
        let x = randint(room.getLeft(), room.getRight()),
            y = randint(room.getTop(), room.getBottom());
        if (!entityAt(x, y)) {
            let type = randint(0, 3) === 0? 'troll' : 'orc';
            createEntity(type, x, y);
        }
    }
}
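The entityAt() helper isn’t shown above; a minimal version over the global entities Map might look like this (my sketch, not necessarily the post’s exact code):

```javascript
// Sketch: return the first entity occupying (x, y), or undefined if the
// tile is empty. Assumes the global `entities` Map from earlier.
function entityAt(x, y) {
    for (let entity of entities.values()) {
        if (entity.x === x && entity.y === y) {
            return entity;
        }
    }
    return undefined;
}
```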
But what is a room? The ROT.js dungeon digger records room objects in addition to tiles. I stored these in the
tileMap for now.
function createTileMap(width, height) {
    let tileMap = createMap();
    const digger = new ROT.Map.Digger(width, height);
    digger.create(…);
    tileMap.rooms = digger.getRooms();
    tileMap.corridors = digger.getCorridors();
    return tileMap;
}
and then used them to make monsters in each room:
for (let room of tileMap.rooms) { createMonsters(room, 3); }
Cool, it works! (Note: I disabled FOV for this screenshot)
Or … does it? Why are they all orcs?! I thought there must be a bug in my code, but no, it’s just random luck. If I change the seed I get both trolls and orcs.
The next step is that they add a
blocks flag to each Entity. I decided to make that a property of the entity type.
const ENTITY_PROPERTIES = {
    player: {blocks: true, visuals: ['@', "hsl(60, 100%, 70%)"]},
    troll: {blocks: true, visuals: ['T', "hsl(120, 60%, 30%)"]},
    orc: {blocks: true, visuals: ['o', "hsl(100, 30%, 40%)"]},
};
As I mentioned earlier, I’ll often do something and then change how it works later. I’m replacing the
entityGlyph() function with this table.
I modified the
handleKeyDown() function to check if there’s already an entity there:
…
let newX = player.x + dx,
    newY = player.y + dy;
if (tileMap.get(newX, newY).walkable) {
    let target = entityAt(newX, newY);
    if (target && ENTITY_PROPERTIES[target.type].blocks) {
        console.log(`You kick the ${target.type} in the shins, much to its annoyance!`);
        // TODO: draw this to the screen
    } else {
        player.x = newX;
        player.y = newY;
    }
}
…
I tested this and it worked. Moving into a monster prints a message to the console.
The next section in the Python tutorial sets up a state
PLAYER_TURN and
ENEMY_TURN. I didn’t like the way it worked, because it ignores player keypresses during the enemy turn. I don’t quite know what I want to do about it.
I think for now I’ll have the enemies move after each player move. I moved the above code into its own function:
function enemiesMove() {
    for (let entity of entities.values()) {
        if (entity !== player) {
            console.log(`The ${entity.type} ponders the meaning of its existence.`);
        }
    }
}
Since my random number generator produced all orcs, I get a lot of console messages:
The orc ponders the meaning of its existence.
Great! Before I move on to the next part of the tutorial, I wanted to add a way to see the messages under the game screen.
5.1. Console#
I updated the UI to have an extra div for messages, and put the instructions box below it. This is covered in the Python tutorial part 7, but I implemented it earlier:
<figure>
  <div id="game"></div>
  <pre id="messages"></pre>
  <div id="instructions"></div>
</figure>
I gave it some style:
#messages {
    box-sizing: border-box;
    font-size: 0.8em;
    height: 6em; /* see explanation below */
    line-height: 1.0;
    background: black;
    color: white;
    margin: 0;
    padding: 0.5em 1em;
    text-align: left;
}
The size calculation was a little tricky. I want the height to be 5 lines tall. A line is typically line-height times font-size. I set the line height to 1.0, so it seems like the height should be 5 * 1.0 * 0.8em = 4em. But it’s not! Inside the <pre>’s css, em is relative to the <pre>’s own font size — except in font-size: 0.8em itself, which is relative to the parent <figure>’s font size. So five lines are really 5 * 1.0 * 1em = 5em. Plus, with box-sizing: border-box I need to include the size of the padding. The top and bottom padding are 0.5em each, so the total height of the box is 6em.
Ok, and here’s the Javascript to print a line of text to the message area:
function print(message) {
    const MAX_LINES = 5;
    let messages = document.querySelector("#messages");
    let lines = messages.textContent.split("\n");
    lines.push(message);
    while (lines.length > MAX_LINES) { lines.shift(); }
    messages.textContent = lines.join("\n");
}
And here’s the updated code for the instructions box, which used to hide/show “Click game to focus” but now replaces that text with “Arrow keys to move”:
function setupKeyboardHandler(display, handler) {
    const canvas = display.getContainer();
    const instructions = document.getElementById('instructions');
    canvas.setAttribute('tabindex', "1");
    canvas.addEventListener('keydown', handleKeyDown);
    canvas.addEventListener('blur', () => {
        instructions.textContent = "Click game for keyboard focus";
    });
    canvas.addEventListener('focus', () => {
        instructions.textContent = "Arrow keys to move";
    });
    canvas.focus();
}
Here’s what it looks like:
Back to the Python tutorial.
6 Doing (and taking) some damage#
Part 6[17] of the Python roguelike tutorial adds a “fighter” component with hp, max_hp, defense, power, and an “ai” component that tells the monster how to move.
This is the part of the tutorial where the real game logic starts. My goal is to implement the features from the Python tutorial, but not necessarily with the same code structure.
My own coding style is to prefer separating “static” from “instance” data and functions. Static properties are part of the game. Instance properties are part of running the game. For example, the troll’s color is a static property of the troll. The color is decided before any trolls exist. It is the same for all trolls, but it might be different if I change the program code. A position is an instance property of the troll. It is different for each troll, but it’ll be the same if I load the save file in a new version of the game.
I also prefer to use “free functions” (static) and “plain old data” (instance) instead of combining them into classes. It makes things like persistence and multiplayer easier for me to implement. I’m going to do this not only for the entity but also the fighter and ai components.
For the entities, I put the static properties into a
ENTITY_PROPERTIES lookup table, and I used Javascript prototype inheritance to attach it to each entity.
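The wiring might look like this sketch (the getter names are my assumption; entity_prototype and ENTITY_PROPERTIES are the names that appear elsewhere in this post, and the id counter and entity registry from earlier are omitted for brevity):

```javascript
// Sketch: static data lives in ENTITY_PROPERTIES (assumed global from
// earlier); each entity instance reaches it through its prototype, so
// JSON.stringify(entity) captures only the instance data.
const entity_prototype = {
    get blocks() { return ENTITY_PROPERTIES[this.type].blocks; },
    get visuals() { return ENTITY_PROPERTIES[this.type].visuals; },
};

function createEntity(type, x, y) {
    let entity = Object.create(entity_prototype);
    Object.assign(entity, { type, x, y });
    return entity;
}
```

The payoff is that serialization stays trivial: the prototype’s getters never end up in the JSON.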
For the fighter and AI, I decided to look ahead in the tutorial to see how they will get used.
- For the fighter, I was planning to treat defense, power, and max_hp as static properties, and hp as a per-object property, but it looks like defense, power, and max_hp all become computed properties, so maybe this is a premature design decision. The attack() function seems to be the same for all entities so I think I’ll leave it out of the entity/component (in an ECS it’d be a “system”). I’m seeing no particular value in making this a separate component right now.
- For the AI, it’s either a basic monster that moves towards the player, or a confused monster that moves randomly for some number of turns and then reverts to the previous AI behavior. I think I could treat that as a function, but functions don’t work well with persistence, so I’m going to treat it as data that has a type field, like I did with entities.
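The confused-AI-as-data idea might look like this (a sketch with my own names, not the tutorial’s classes):

```javascript
// Sketch: AI behavior stored as plain data with a `type` field, so it
// serializes to JSON. A confused AI wraps the previous AI and reverts
// to it after its turns run out.
function makeConfusedAI(previousAI, turns) {
    return { type: 'confused', previous: previousAI, turns };
}

function tickAI(entity) {
    if (entity.ai.type === 'confused') {
        // move randomly here (not shown), then count down
        if (--entity.ai.turns <= 0) {
            entity.ai = entity.ai.previous; // revert to prior behavior
        }
    }
    // 'basic' AI: move toward the player (not shown)
}
```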
I’m going to see how far I can get without adding an
owner pointer. I strongly prefer not to have it, because it leads to a circularity in the data, which will make persistence more complicated. In an ECS this circularity would be broken by storing an entity id instead of a pointer to the entity object, but I’m not using an ECS here.
Here’s an example of a top level function that’s not a method of either the attacker or defender:
function attack(attacker, defender) {
    let damage = attacker.power - defender.defense;
    if (damage > 0) {
        takeDamage(defender, damage);
        print(`${attacker.type} attacks ${defender.type} for ${damage} hit points.`);
    } else {
        print(`${attacker.type} attacks ${defender.type} but does no damage.`);
    }
}
I added the fighter and ai components, but not in the same way the Python tutorial implemented them. I’ll refactor later.
Then I implemented rendering order as a static property of each entity.
Then I worked on handling dead bodies. I changed their
type to
corpse to get them to switch all static properties (
blocks, character, color, render order) at once.
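A sketch of what that might look like (my own version of takeDamage, the function attack() calls above):

```javascript
// Sketch: on death, swapping the entity's type to 'corpse' changes all
// of its static properties (glyph, blocks, render order) in one step,
// because those are looked up by type.
function takeDamage(entity, amount) {
    entity.hp -= amount;
    if (entity.hp <= 0) {
        entity.hp = 0;
        entity.dead = true;
        entity.type = 'corpse';
    }
}
```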
It’s starting to come together!
7 Creating the Interface#
7.1. Health bar#
Part 7[18] of the Python tutorial covers three topics:
- health bar
- message area
- entity info on mouseover
I already implemented the message area earlier. It’s in HTML instead of in ROT.js. I’m going to implement the health bar in HTML as well.
7.2. Message area, part 2#
I also improved the message area:

- added color by switching from <pre> to a <div> with <div> children, each with a css color: … set
- got word-wrap for free by making this switch!
- added indentation by applying css margin-left: 1em; text-indent: -1em; so that a message that wraps will get subsequent lines indented
- added scrolling by setting css overflow-x: hidden; overflow-y: scroll but hid the scrollbar using scrollbar-width: none (Firefox supports the standard[19]) and ::-webkit-scrollbar { width: 0 } (Chrome and Safari naturally have their own non-standard approach)
- added spacing between messages with margin-bottom: 0.25em so that the line spacing within messages is smaller than between messages
Version 2 of the Python tutorial also adds a way to scroll the message area, integrated into its event loop. The browser gives me that for free, although it’s mouse based and not keyboard based like the Python version. Version 2 also adds message stacking, but I didn’t implement that either.
I think it wouldn’t be hard to add color within lines. I did that with my DOS games[20] and liked the effect. But for now I’m sticking to implementing the features from version 1 of the Python tutorial.
7.3. Mouse info#
The Python tutorial fits the mouse handling into the main event loop but the browser has its own event loop, so I deviated from the tutorial’s approach.
First question is: where do I want to display this information? I decided to use CSS grid to display it over the message console. Here’s the HTML:
<div id="message-area">
  <div id="messages"></div>
  <div id="message-overlay"></div>
</div>
Normally this would be laid out with the message console (
#messages) first and then the next div afterwards. But with CSS grid I put them in the same spot:
#message-area {
    margin: auto;
    display: grid;
    grid-template-areas: "all";
    height: 8em;
    font-family: monospace;
    font-size: 0.8em;
    line-height: 1.0;
}
#messages {
    grid-area: all;
    …
}
#message-overlay {
    grid-area: all;
    z-index: 2; /* workaround for chrome */
    opacity: 0.0;
    background: hsl(200, 50%, 30%);
    color: white;
    white-space: pre-wrap;
}
#message-overlay.visible {
    opacity: 1.0;
}
Then in the code, I set the opacity to 1 if there’s text and 0 if not:
const setOverlayMessage = (() => {
    let area = document.querySelector("#message-overlay");
    return function(text) {
        area.textContent = text;
        area.setAttribute("class", text? "visible" : "");
    };
})();
This works nicely! Except on Chrome, where it works initially but then fails later for reasons I couldn’t figure out. By adding
z-index: 2 to the overlay, it worked, but I don’t understand why.
The next step is to attach a
mousemove listener to the
<canvas>. I didn’t integrate into the main game loop, but instead have this running independently.
Not mentioned in the ROT.js interactive manual, ROT.js’s display class has a useful method
eventToPosition, documented here[21]. It takes a mouse event and gives us back a grid tile location. If it returns
[-1, -1] it was out of range (which seems like it should never happen, but does).
function allEntitiesAt(x, y) {
    return Array.from(entities.values())
        .filter(e => e.x === x && e.y === y);
}

function handleMousemove(event) {
    let [x, y] = display.eventToPosition(event); // returns -1, -1 for out of bounds
    let entities = allEntitiesAt(x, y);
    let text = entities.map(e => e.name).join("\n");
    setOverlayMessage(text);
}

function handleMouseout(event) {
    setOverlayMessage("");
}

const canvas = display.getContainer();
canvas.addEventListener('mousemove', handleMousemove);
canvas.addEventListener('mouseout', handleMouseout);
Hooray, it works!
This feature was relatively easy to implement.
But … this broke the message area scrolling! The problem is that the scroll mouse event gets sent to the overlay, even when its opacity is 0.0. I fixed this by adding CSS:
#message-overlay { … pointer-events: none; }
While I was at it, I also added a fade-out effect using
transition: all 0.3s to
#message-overlay and
transition: all 0s to
#message-overlay.visible. When making the overlay visible, it will have no transition (0s) but when making the overlay invisible it will use a quick transition (0.3s).
8 Items and Inventory#
Part 8 of the Python tutorial covers items and inventory.
8.1. Items and locations#
I decided to use a different representation than they use. To pick up an item, they remove the entity from the global
entities array, and then ignore the
x and
y fields on it.
- instead of entities having x and y, they have location, which can be either {x:int, y:int} if the entity is on the map, or {carried:id, slot:int} if the entity is in someone’s inventory.
- instead of inventory having capacity:int and items:Array<object>, the inventory is a fixed-length Array<null|id>, with the length being the capacity.
In the Python tutorial, if you pick up a potion of healing, then pick up a potion of confusion, they will be assigned keys a and b. If you drop the first potion, then the potion of confusion changes from b to a. I like the Brogue approach, where an item keeps its key, so I represented the inventory as a fixed size array where each slot can contain an item or null.
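A sketch of that representation (my own helper names; the item ids are hypothetical):

```javascript
// Sketch: a fixed-size inventory where each slot holds an entity id or
// null. Slot indices map to stable letters, Brogue-style: dropping the
// item in slot 0 doesn't change the letters of the other items.
function createInventory(capacity) {
    return new Array(capacity).fill(null);
}

function firstFreeSlot(inventory) {
    let slot = inventory.indexOf(null);
    return slot >= 0 ? slot : undefined; // undefined = inventory full
}

function slotLetter(slot) {
    return String.fromCharCode('a'.charCodeAt(0) + slot);
}
```

So if the healing potion takes slot 0 (a) and the confusion potion takes slot 1 (b), dropping the healing potion leaves the confusion potion at b.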
The logic for this is slightly tricky because there are two systems that have to be kept in sync. To pick up an item, I need to update both the item’s data and the carrier’s data:
- item A: {carried: E.id, slot: P}
- entity E: {inventory: […, A.id, …]} where that id is in position P
And to drop the item I need to update both again:
- item A: {x: x, y: y}
- entity E: {inventory: […, null, …]}
What happens if the entity is consumed? I haven’t decided yet.
I wanted to encapsulate the logic for keeping this data in sync, so I wrote a function:
function moveEntityTo(entity, location) {
    if (entity.location.carried !== undefined) {
        let {carried, slot} = entity.location;
        let carrier = entities.get(carried);
        if (carrier.inventory[slot] !== entity.id)
            throw `invalid: inventory slot ${slot} contains ${carrier.inventory[slot]} but should contain ${entity.id}`;
        carrier.inventory[slot] = null;
    }
    entity.location = location;
    if (entity.location.carried !== undefined) {
        let {carried, slot} = entity.location;
        let carrier = entities.get(carried);
        if (carrier.inventory === undefined)
            throw `invalid: moving to an entity without inventory`;
        if (carrier.inventory[slot] !== null)
            throw `invalid: inventory already contains an item ${carrier.inventory[slot]} in slot ${slot}`;
        carrier.inventory[slot] = entity.id;
    }
    // TODO: add constraints for at most one (player|monster) and at most one (item) in any {x, y}
}
This logic is tricky enough that I want to make sure I have plenty of assertions in there!
8.2. Inventory actions#
Part 8 also introduces an inventory UI. Since the browser already provides multiple UI elements, I wanted to try using them instead of creating my own as the Python version is forced to do. I created a new overlay <div id="inventory-overlay"> and hid it when not in use.
I had hoped to try using the focused element status from the browser to keep track of whether the main game had focus or the inventory dialog had focus, but that ended up being more complicated than I wanted to tackle right now. The main problem is that the user may change the focus with the browser controls (mouse or keyboard) instead of going through the game’s controls and then I need to handle that somehow.
The inventory UI gets used for two actions: use and drop. When invoking the action, I need to remember which action I’ll perform when the inventory item is selected. This also affects the text the player sees at the top of the dialog box.
I decided the way to remember the action is to make two separate inventory overlays. I can apply different styling to them, but have them share common code.
As with the message area, I needed to use CSS
pointer-events: none on these inventory screens so that clicking on them will give the underlying
<canvas> focus.
I’ve been sharing less code as I go along, partly because the refactorings made it harder to show the changes, but also because I’m spending more time on the code and less on this explanation of it.
9 Ranged scrolls and targeting#
Part 9[22] adds new items:
- a lightning scroll that will attack the nearest visible enemy
- a fireball scroll that lets you click on an enemy to attack
- a confuse scroll that lets you click on an enemy to confuse
This complicates the game state some more, because we need to remember which item is being cast. I think I can do this with the browser’s event system. I’ll add an overlay during spell casting, and attach event handlers to it that remember the item, and then remove the event handlers and the overlay when the item is used or the action is cancelled.
There’s a little glitch with this idea: because I had been using
pointer-events: none, mouse events don’t go to the targeting overlay. Easy fix: set
pointer-events: auto. But now when clicking on the targeting overlay, the game loses focus! For now my workaround is to re-focus the game canvas after receiving a click on the targeting overlay, but I think this isn’t a great solution.
10 Saving and loading#
The Python tutorial (v1) has complex objects that they save using the
shelve module. In my project I’ve tried to keep the objects simple enough to fit into JSON. Did I succeed? Let’s look at the world data:
- tileMap
- entities
- player (this is one entry from entities)
- ROT.js’s RNG state
How about UI state?
- the message log
- whether the “use item” screen is up
- whether the “drop item” screen is up
- whether the “targeting” overlay is up for fireball and confusion
- whether the “mouseover” overlay is up
- the current mouseover position and text
I don’t want to save these. Or maybe I should save the message log like the tutorial does. I’d have to change the message log code to allow saving; I’ll do that later.
How about constants?
- the size of the map
- the colors of the monsters
- whether entities block, or are an item
For now I’ll assume the map size doesn’t change. I would eventually want to change it so that the map size can be anything, and what’s displayed on screen is the portion of the map near the player. The other things are safe to change.
Did I keep the data JSON-compatible? The answer is no! I ended up using the Map class for both the tile map and the entities, and JSON.stringify doesn’t serialize a Map’s contents. I decided to change my tile map to use plain objects instead:
function createMap() {
    function key(x, y) { return `${x},${y}`; }
    return {
        _values: {},
        has(x, y) { return this._values[key(x, y)] !== undefined; },
        get(x, y) { return this._values[key(x, y)] !== undefined; }, // eslint-disable-line
        set(x, y, value) { this._values[key(x, y)] = value; },
    };
}
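With plain objects throughout, the round trip is just JSON.stringify / JSON.parse plus an Object.assign to rehydrate the methods. A quick sketch (repeating createMap so the snippet stands alone):

```javascript
// Sketch: JSON round trip for the plain-object tile map.
function createMap() {
    function key(x, y) { return `${x},${y}`; }
    return {
        _values: {},
        has(x, y) { return this._values[key(x, y)] !== undefined; },
        get(x, y) { return this._values[key(x, y)]; },
        set(x, y, value) { this._values[key(x, y)] = value; },
    };
}

let original = createMap();
original.set(3, 5, { walkable: true, explored: false });

// The methods are lost by JSON, so rebuild a fresh map and copy the
// serialized _values back in.
let restored = Object.assign(createMap(), JSON.parse(JSON.stringify(original)));
```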
Working with globals is tricky, and I ended up with a bug that I would’ve never caught in the current state of the game: createEntity.id stores the next entity id to use, and at first I wasn’t saving and restoring it. Nothing reads it once the map is created, so the game seemed fine, but a restored game could eventually mint duplicate entity ids — a tricky bug to catch. Ugh. Even though I thought I was doing everything to make serialization easy, I missed that. I’d like to say this taught me my lesson, and that I’ll do things better next time, but I will probably have to learn this lesson a few more times before it sticks.
Despite the few bugs I ran into, I think this went pretty well. Here’s the code for saving:
function serializeGlobalState() {
    const saved = {
        entities: Array.from(entities),
        nextEntityId: createEntity.id,
        playerId: player.id,
        tileMap: tileMap,
        rngState: ROT.RNG.getState(),
    };
    return JSON.stringify(saved);
}
I construct a single JSON object with all the globals I want to save.
Here’s deserialization:
function deserializeGlobalState(json) {
    const reattachEntityPrototype = entry =>
        [entry[0], Object.assign(Object.create(entity_prototype), entry[1])];
    const saved = JSON.parse(json);
    entities = new Map(saved.entities.map(reattachEntityPrototype));
    createEntity.id = saved.nextEntityId;
    player = entities.get(saved.playerId);
    Object.assign(tileMap, saved.tileMap);
    ROT.RNG.setState(saved.rngState);
}
I restore all the globals I saved earlier, but I have to fix up the entities. I had been using Javascript prototype inheritance to attach the in-world data (location, health, etc.) with the static data (color, blocks or not, etc.). The JSON serialization saves the in-world data but I need to reattach the static data after deserializing. That’s what the
reattachEntityPrototype function does.
Now I can save to a string and restore from a string. The Python tutorial saves the data to a file on exit, and offers a key to create a new game or to load the game from the file. I am running this in a browser, and there’s no exiting, so I’m going to do things differently. Either:
- I could save to localStorage every turn, and load from it when opening the page. There’d also be a key to start a new game.
- I could have a key to save and a key to load, and I could rely on reloading the page to start a new game.
I decided on 2. The Python tutorial uses A for new game, B to load, C to save. But this is because they have a game menu at the beginning. I don’t have a game menu so I’ll use S to save, R to restore.
It all works nicely! Except…
I didn’t serialize the messages. I thought it’d be no big deal but it doesn’t look right. My
print() function directly writes to the page. I changed
print() to append to a global
messages array:
const MAX_MESSAGE_LINES = 100;
let messages = []; // [text, className]

function print(message, className) {
    messages.push([message, className]);
    messages.splice(0, messages.length - MAX_MESSAGE_LINES);
    drawMessages();
}
And then I have a separate function to draw that array to the page:
function drawMessages() {
    let messageBox = document.querySelector("#messages");
    // If there are more messages than there are <div>s, add some
    while (messageBox.children.length < messages.length) {
        messageBox.appendChild(document.createElement('div'));
    }
    // Remove any extra <div>s
    while (messages.length < messageBox.children.length) {
        messageBox.removeChild(messageBox.lastChild);
    }
    // Update the <div>s to have the right message text and color
    for (let line = 0; line < messages.length; line++) {
        let div = messageBox.children[line];
        div.textContent = messages[line][0];
        div.setAttribute('class', messages[line][1]);
    }
    // Scroll to the bottom
    messageBox.scrollTop = messageBox.scrollHeight;
}
Much better!
The Python tutorial does a lot of other things to restructure the code but I didn’t feel the need to do any of that.
11 Delving into the Dungeon#
Part 11 covers several things:
- Stairs
- Dungeon levels
- Experience points
- Player levels
- Upgrade menu when leveling up
- Character stats screen
11.1. Stairs#
The Python tutorial has a render order: actor > item > corpse > stairs. That means a corpse will block the stairs if you are in the room, but you can see the stairs once you step outside the field of view. I switched stairs and corpse.
Because of the way I draw things, if there’s a monster on top of the stairs, the monster is the one that gets drawn. But if the monster isn’t in the field of view, that spot will be hidden. And that means the stairs will be hidden too. This is a bug, but I’ll have to redesign some of my drawing code to fix it. I added it to my list of things to do, but I’m not going to work on it right now. It seems like a rare situation. I like to accumulate several related bugs before some code redesign so that I can tackle all those problems together.
Another possible issue is that there can be two items in one tile, and no way to access them separately.
11.2. Dungeon levels#
I had hard-coded the player’s starting location to work with the initial map for one particular seed. That doesn’t work when generating new maps, so I checked the Python tutorial to see what they did. They start the player in the first generated room. That sounds like a good solution to me. I placed the player first because the monster generator will avoid placing another monster on top of a blocked tile.
The Python tutorial discards all entities other than the player when changing dungeon levels. I’m representing my entities a little differently. The global entities set also contains entities that are in a creature’s inventory. A location can either be {x: y:} for something on the map, or {carried: slot:} for something carried by someone. When changing dungeon levels I discarded all entities with {x: y:} except the player.
A different design would be to change entities to {x: y: z:}. Then I wouldn’t discard any entities when changing levels. This would allow going back to a previous level. To avoid scope creep I’m limiting features to what’s in the Python tutorial, so I added this to my “future ideas” list.
11.3. Experience points#
This may be a little tricky, because I deviated from the Python tutorial a few weeks ago. They had returned an object which described the message to print, whereas I had printed it directly. But now they’re returning additional information like the experience points gained. Where will that fit into my code?
I think if I keep going with this project I’d want to refactor this into an event system and not use the approach used by the tutorial. But for now I’m going to do the simplest thing that could possibly work. I’ll only allow the player to gain xp.
11.4. Player levels#
The tutorial uses a Level component with level_up_base set to 200 and level_up_factor set to 150. I don’t see myself varying these per instance, so I’m going to express this instead as a function that calculates the xp needed to reach a specific level. The tutorial has current_xp reset every time you gain a level. I’m going to have it always go up.
The formula is encoded in this code:
@property
def experience_to_next_level(self):
    return self.level_up_base + self.current_level * self.level_up_factor
Let’s see what that looks like:

    current level    xp to next level    total xp
    1                350                 350
    2                500                 850
    3                650                 1500
    4                800                 2300
    5                950                 3250
Is there a compact formula for the total xp column? Yes: it simplifies to 200 * N + 150 * (N * (N+1)) / 2, where N is the current level. When we get past that amount, it’s time to level up.
function xpForLevel(level) {
    return 200 * level + 150 * (level * (level + 1)) / 2;
}
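As a standalone sanity check (not game code), the closed form agrees with summing the tutorial’s per-level requirement of 200 + 150 * level:

```javascript
// Verify the closed form against the tutorial's incremental formula:
// xp to get from level n to n+1 is level_up_base + n * level_up_factor,
// i.e. 200 + 150 * n. Summing that from level 1 up to n should match
// xpForLevel(n). (Definition repeated here so the check runs standalone.)
function xpForLevel(level) {
    return 200 * level + 150 * (level * (level + 1)) / 2;
}

let total = 0;
for (let n = 1; n <= 5; n++) {
    total += 200 + 150 * n;  // xp needed to go from level n to n+1
    console.log(`level ${n + 1} needs ${total} total xp`,
                total === xpForLevel(n) ? '(matches)' : '(MISMATCH)');
}
```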
Now the player can level up with this code:
function gainXp(entity, amount) {
    if (!entity.xp) { return; }  // this entity doesn't gain experience
    entity.xp += amount;
    if (entity.id !== player.id) { throw `XP for non-player not implemented`; }
    print(`You gain ${amount} experience points.`, 'info');
    while (entity.xp > xpForLevel(entity.level)) {
        entity.level += 1;
        print(`Your battle skills grow stronger! You reached level ${entity.level}!`, 'warning');
        // TODO: let player choose an upgrade
    }
}
11.5. Level upgrade#
After leveling up the Python tutorial immediately brings up a menu where you can choose an upgrade: constitution, strength, or agility. I’m going to implement this menu with an overlay, just as I did with inventory. This might be a good time to generalize the inventory menu code to work as any menu, like the Python tutorial has already done, but it wasn’t immediately obvious how I should do that so I’m writing a new function to handle level upgrades, and will then figure out a general pattern later.
There’s a logic bug in my code: if I leveled up twice in one combat action, I only get to pick an upgrade once. Killing a monster won’t trigger this bug, but using fireball can kill several trolls at once, and that would be enough to trigger this bug. It seems like it won’t happen often, so I’ve added this to the list of things to look at later. One way to handle this would be to have another field on the player to indicate how many upgrades I have remaining.
11.6. Character stats screen#
The character screen shows level, xp, xp to next level, max hp, attack, defense. In the Python tutorial it’s formatted inside the game screen, but I’m going to use the browser’s layout system to put it in an overlay, like I have done with inventory and upgrades.
I like it.
12 Increasing Difficulty#
I’m going to keep following the tutorial but I’m losing interest at this point. I think the problem is that I’m following the tutorial in part to complete something, but I’m not especially interested in the game itself. The difficulty levels are about the game design rather than the implementation techniques I want to learn.
The Python tutorial adds the functions random_choice_from_dict and random_choice_index. I can instead use the poorly documented ROT.RNG.getWeighted[23] function. If you’re using rot.js, I urge you to browse the autogenerated documentation[24] because it contains useful things that aren’t described in the manual.
The tutorial also adds a function:

def from_dungeon_level(table, dungeon_level):
    for (value, level) in reversed(table):
        if dungeon_level >= level:
            return value
    return 0
It uses a table [[2, 1], [3, 4], [5, 6]] to mean:
- at level 1, return 2
- at level 4, return 3
- at level 6, return 5
I decided to invert these tables. I like to think of input then output, so I changed this table to [[1, 2], [4, 3], [6, 5]] and wrote this code instead:
function evaluateStepFunction(table, x) {
    let candidates = table.filter(xy => x >= xy[0]);
    return candidates.length > 0 ? candidates[candidates.length - 1][1] : 0;
}
If I were using lodash, I’d use findLastIndex[25]: _.findLastIndex(table, xy => x >= xy[0]).
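Using the inverted table from above, the step function behaves like this. Note that it assumes the table is sorted by ascending threshold; a quick standalone check:

```javascript
// evaluateStepFunction treats the table as [threshold, value] pairs,
// sorted by ascending threshold, and returns the value for the last
// threshold that x has reached (0 if x is below all of them).
function evaluateStepFunction(table, x) {
    let candidates = table.filter(xy => x >= xy[0]);
    return candidates.length > 0 ? candidates[candidates.length - 1][1] : 0;
}

const trollChance = [[1, 2], [4, 3], [6, 5]];
console.log(evaluateStepFunction(trollChance, 1));  // 2  (at level 1, return 2)
console.log(evaluateStepFunction(trollChance, 5));  // 3  (still in the level-4 band)
console.log(evaluateStepFunction(trollChance, 6));  // 5
console.log(evaluateStepFunction(trollChance, 0));  // 0  (below every threshold)
```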
This section of the tutorial was pretty short.
13 Gearing up#
Ok, this is back to some interesting stuff. How do I want to support “equipment” separate from “inventory”? I’m going to use a slightly different model from what the Python tutorial uses.
13.1. Equipment#
Their model: equipment is a subset of inventory. An item is in your inventory and can be equipped or unequipped. If you drop an item from inventory, you also unequip it. If you already had an item equipped when you equip a new one, you unequip the old one.
My model: equipment is another location, like inventory. An item is in your inventory or in your equipment. If you drop an item from inventory, it is not unequipped because inventory items are not equipped.
But now I need a way to unequip things into your inventory. And what if the inventory is full?
Partway through, the Python tutorial gives the player some initial equipment. I’m going to take this further and say that the player has all equipment slots filled. Always. When you equip a new item from your inventory, it swaps position with the item that you already had equipped there. This solves the problem with unequipping into a full inventory, because you can only unequip by equipping something else.
I initially extended my moveEntityTo function to work with equipment but then decided to write a separate one, swapEquipment, which performs the swap more simply than two moves and ensures that the equipment slot is always full.
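The post doesn’t show swapEquipment itself; here’s a sketch of what such a swap might look like. The field names (equipment_slot, owner.equipment, the two location shapes) follow the descriptions in the next section, but the exact code is my guess:

```javascript
// Hypothetical sketch of swapEquipment -- the post doesn't show its code.
// Assumes inventory items have location {carried: ownerId, slot: s}, equipped
// items have location {equipped_by: ownerId, slot: s}, and owner.equipment[slot]
// holds the equipped item's id. `entities` is the global id -> entity Map.
function swapEquipment(owner, inventoryItem) {
    const equipSlot = inventoryItem.equipment_slot;
    const invSlot = inventoryItem.location.slot;
    const equippedItem = entities.get(owner.equipment[equipSlot]);

    // One swap instead of two moves: the equipment slot is never left empty.
    equippedItem.location = { carried: owner.id, slot: invSlot };
    inventoryItem.location = { equipped_by: owner.id, slot: equipSlot };
    owner.equipment[equipSlot] = inventoryItem.id;
}
```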
13.2. Initial equipment#
Equipment has to have the properties item: true, equipment_slot: X. When equipped it will have location: {equipped_by: entity, slot: X}. Those X’s have to be the same (equipment can only be put into a valid slot).
Before, I had moveEntityTo to handle all the invariants, but now I don’t have that for equipment, so I need to carefully construct the initial equipment for the player:

- construct the equipment as an entity dagger
- player.equipment[main_hand_slot] should contain dagger.id
- dagger.location should be {equipped_by: player.id, slot: main_hand_slot}
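Spelled out as code, those three invariants might look like this — a sketch; createEntity’s real signature isn’t shown in the post, so the call here is a guess:

```javascript
// Sketch of constructing the player's starting equipment by hand, keeping
// all three invariants in sync. The createEntity call and slot name are
// assumptions; the post doesn't show this code.
const main_hand_slot = 'main_hand';

function givePlayerInitialEquipment(player) {
    const dagger = createEntity('dagger', null);            // 1. construct the entity
    player.equipment = player.equipment || {};
    player.equipment[main_hand_slot] = dagger.id;           // 2. slot holds the id
    dagger.location = { equipped_by: player.id,             // 3. back-pointer agrees
                        slot: main_hand_slot };
    return dagger;
}
```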
13.3. Bonuses#
I ended up with these fields, where F is max_hp, power, or defense:

- bonus_F on the item
- base_F on an actor
- increased_F on an actor is a computed property summing the bonuses from equipment
- effective_F on an actor is a computed property summing the base and increased values
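I don’t know exactly how the post implements the computed properties; in JavaScript they can be plain getters. A sketch with F = power, where equippedItems is a stand-in for however the actor’s equipment is looked up:

```javascript
// Sketch of the F-field pattern using getters, for F = power.
// equippedItems() is a stand-in for however equipped entities are fetched.
function makeActor(basePower, equippedItems) {
    return {
        base_power: basePower,
        get increased_power() {   // sum of bonus_power over equipment
            return equippedItems().reduce((sum, item) => sum + (item.bonus_power || 0), 0);
        },
        get effective_power() {   // base + increased
            return this.base_power + this.increased_power;
        },
    };
}

const items = [{ bonus_power: 2 }, { bonus_power: 1 }, {}];
const actor = makeActor(5, () => items);
console.log(actor.increased_power);  // 3
console.log(actor.effective_power);  // 8
```

Because these are getters, swapping equipment automatically changes the effective stats — there’s no cached value to invalidate.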
The character screen shows the base and increased values:
14 Conclusion#
This was a fun project. I had tried previously with Rust, but learning Rust and learning to write a roguelike game was tackling two things when I should’ve tackled only one at a time. This time went better.
My code is nowhere near as nicely structured as the Python code, but this is often my style. I tend to write big files with lots of functionality. In this case everything fit into one file of around 1050 lines. The Python tutorial v1 is around 1800 lines but it’s better structured.
I don’t think I’ll use any of this code in a future project, but the code itself is not the valuable part of following this tutorial. I learned the design patterns and the pieces that went into this type of game.
15 Future#
After I finished, I added one more thing: I changed the instructions line from one line of static text to one line of html, constructed to match where you are in the game. If you are standing over stairs, it will tell you [>] stairs. If you are standing over an item, it will tell you [g]et item. The problem is that the logic for determining whether something is a valid action is now in three places: the keyboard handler, the instructions, and the actual action handler. The instructions need to know things like which overlay is displaying. I think this would be a useful place to have some type of abstraction for the logic saying which actions are valid.
Other things I considered but didn’t get around to:
[ ] make the light level vary more smoothly than visible/invisible
[ ] make all the keyboard controls work on touch screens
[ ] add keyboard controls for targeting and looking at the map
[ ] add mouseover feedback when targeting, such as showing the blast radius for the fireball spell
[ ] distinguish permanent and temporary messages, so for example “you can’t move in that direction” could be a temporary message that shows up but doesn’t stay in the message log
[ ] automatically pick up items like in brogue
[ ] ensure only one item is on a tile; right now you can drop multiple items on a tile, and there’s no way to choose which to pick up
[ ] extend player vs enemies to factions so that anyone can attack anyone not in their faction
[ ] some way to start a new game without reloading the page
[ ] some way to delete your saved game to free up 90k of browser storage
[ ] a scrolling map so you always see yourself in the center
[ ] fix bug: if a monster is over the stairs, and you walk out of view, the stairs should be visible but are not
[ ] fix bug: if you level up twice in one combat hit (such as from a fireball hitting many enemies), you only get to upgrade attributes once
[ ] change the x,y location system to have x,y,z, and then have stairs that go back up so everything on previous levels is preserved
[ ] have monsters do something on their own
[ ] have corpses decay after some time
[ ] have monsters drop loot
[ ] try changing to a wasd+e+f scheme to see how that feels
Examining Random Eventsby Owen Densmore
07/29/2002
Complexity Note One: On the Wave-Particle Duality of Random Events
Author's note: This is the first in a series of short articles on complex adaptive systems and how they are beginning to be applied to computing, mainly in the areas of robustness, self organization, and peer (local knowledge) systems. This field is quickly showing us that the way ants discover near-optimal short paths to food can help run a data center.
I always think of random numbers as "uniform;" sort of an uninteresting fog of no structure; the wave spread out across the numeric spectrum. Let's experiment with that idea. First, let's grab a bunch of random numbers between 0 and 1 and plot them. (I'll use Java, because later I'm going to use NetLogo, a Java application, to look at other random features.)
public class RanTest {
    public static void main (String args[]) {
        for (int i=0; i<10000; i++) {
            double r1 = Math.random();
            double r2 = Math.random();
            System.out.println(r1 + " " + r2);
        }
    }
}
We'll compile and run this and then use gnuplot to create an image:

javac RanTest.java; java RanTest > random
gnuplot << EOF
set terminal png color
set output "random.png"
set size square .5,.5
plot "random" t "" with dots
EOF
Figure 1. Plot of random points.
Figure 1 shows the "wave" aspect of random events ... a uniform spread across the spectrum of available values.
But now let's look at using this random generator to tease out the surprising structure. We'll take a one-dimensional "random walk" by flipping a coin, so to speak ... asking the random number generator to give a random 0 or 1. On 0, we go right one step; on 1, we go left one step. We do this a number of times, and notice how far from the initial point we wander over time. And we repeat this with several "walkers" (agents).
To make this easier to see, I'll use a nifty agent simulation system called NetLogo. I stack up 251 red walkers on top of each other, and have them paint where they've been in yellow. Figure 2 is the picture the first 80 of them make. (See the Homework section of this article to download the file used to create the simulation and run it yourself.)
Figure 2. First 80 walkers.
Recall that the red dot is the current position of the walker, and the yellow is where she has been. Averaging all of the positions, including the +/- sign, we get something near 0 (-4.4, to be exact). We also get lots of walkers at 0. The surprise, however, is just how far the walkers wander away from the point of origin -- one power walker all the way to 200. To be precise, we average the distances (absolute value of location) and find that the walkers wander away proportional to roughly the square root of the number of steps taken. Here's an explanation of the square root proportionality. Figure 3 below is a chart showing the average of all 251 walkers over time, in red, with the theoretical distance drawn in black. The histogram of all of the walkers is shown in Figure 4.
Figure 3. Average of all walkers over time.
Figure 4. Histogram of all of the walkers.
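The same experiment is easy to reproduce outside NetLogo. Here is a sketch in JavaScript (rather than the article's Java) that estimates the average distance from the origin after n coin-flip steps; for a fair plus/minus-one walk the mean distance approaches sqrt(2n/pi), which matches the "roughly square root of the number of steps" behavior described above:

```javascript
// Monte Carlo estimate of the mean distance |position| after nSteps
// coin-flip steps, averaged over many walkers. For a fair +/-1 walk the
// expected distance approaches sqrt(2 * nSteps / pi).
function meanWalkDistance(walkers, nSteps) {
    let total = 0;
    for (let w = 0; w < walkers; w++) {
        let pos = 0;
        for (let s = 0; s < nSteps; s++) {
            pos += Math.random() < 0.5 ? -1 : 1;   // flip a coin, step left or right
        }
        total += Math.abs(pos);                    // distance from the origin
    }
    return total / walkers;
}

const nSteps = 10000;
const observed = meanWalkDistance(2000, nSteps);
const theory = Math.sqrt(2 * nSteps / Math.PI);    // about 79.8 for 10000 steps
console.log(`observed ${observed.toFixed(1)}, theory ${theory.toFixed(1)}`);
```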
This experiment exhibits both the wave nature of random events (lots of values spread over all possibilities), and the particle nature (a surprisingly ordered tendency to wander that is proportional to the square root of the number of steps).
This is fanciful, I admit, but I hope to convince you in the next couple of articles that this teasing of structure out of randomness is extremely important to new directions in computing. From the Small Worlds problem to the mixed strategy game optimization, randomness leads from unordered to ordered solutions.
Why should you care? Because computing is starting to use the techniques of complex adaptive systems to build extremely robust systems, composed of many independent but communicating components. The lifelike behavior they display is often best understood by teasing out the "particle nature" of their random behavior. This is the story of "complexity."
Homework: Download NetLogo and run several of the included models. Modify one to do something new and interesting. Download the walk.nlogo file used in this article and use it as an input file to NetLogo, and then run the simulation.
Further Reading: Mitchell Waldrop's Complexity is a history of the early days in the field, and describes the birth of the Santa Fe Institute.
The title for Gary William Flake's book, The Computational Beauty of Nature, sounds a bit odd, but it's one of the best how-to books, and includes lots of code samples.
Owen Densmore is an independent researcher/scientist who hopes to get billions of small devices to work well together, while hopefully not annoying the folks that depend on them.
Return to the O'Reilly Network.
| http://www.oreillynet.com/pub/a/network/2002/07/29/densmore.html | crawl-002 | refinedweb | 838 | 64.3 |
@namespaced Pretty much all 4208 boards. 4206 (very first XRs produced) are not affected but did have the bricking issue.
Posts made by Rado
- RE: XR constantly losing app connection?
- RE: 16 blinks
@snosurfer Why did you have to pay to replace the motor? Wasn’t it under warranty?
- RE: XR constantly losing app connection?
@MotoGPTy I sent in my XR to have the Bluetooth issue fixed under warranty. I didn’t pay anything. It is improved but not as good as my V1 or Plus.
- RE: Ride without the app?
@surferdude You don’t need an app or a phone for the Onewheel to work.
- RE: 21 Blinks, OW not charging / turning on
@tomfoolery The app doesn’t connect to the board unless it’s plugged in and charging or it’s turned on.
- RE: Wobble at higher speeds
40 miles is not that much. Just keep riding. You’ll get better and feel more comfortable after you’ve ridden a few hundred miles.
- RE: Who (other than FM) can repair/replace the battery?
You could have someone build a new battery but you can only get the BMS through FM unfortunately.
- RE: Miles on my new OW
Out of the three OWs I own only one has the correct mileage. One had to get the BMS replaced and the other had to have the controller replaced. After receiving them back from repair both had different mileages. They didn’t just reset to zero.
- RE: 21 Blinks, OW not charging / turning on
@makkobu That could be the the cause. Sometimes the BMS just goes bad though. I’ve seen it quite a bit on the forums.
- RE: 21 Blinks, OW not charging / turning on
Your battery voltage is reading 0.0. This is not normal. You either have a bad BMS or a bad battery. You will most likely need to send it in to FM to get repaired. Call them, don’t use their online ticket system. | https://community.onewheel.com/user/rado | CC-MAIN-2019-39 | refinedweb | 331 | 85.69 |
Odoo Help
Correctly search in one2many specific row
Hi, in product template I have defined one2many field to get something like this:
PRODUCT: A
APPLICATION DATA
BRAND MODEL INIT YEAR FINAL YEAR
nissan sentra 2000 2005
mazda aveo 1998 2004
The aim of this structure is to be able to search by model and year. Using the search box and typing 'sentra' as the Model parameter, I get product A; this is great. Right after this, I type 1998 as the 'Search year' parameter and I also get product A, because 1998 falls within the range in row 2. This is wrong according to my requirement.

Is it possible, after typing the model parameter, to search the year only in the rows where the desired model appears?
Some piece of code:
<field string="Model" name="application_data_product_template_ids"
       filter_domain="[('application_data_product_template_ids.model', 'ilike', self)]"/>
<field name="date_search"/>

def _search_year(self, cr, uid, obj, name, args, context):
    x = [('application_data_product_template_ids.date_beg', '<=', args[0][2]),
         ('application_data_product_template_ids.dateend', '>=', args[0][2])]
    res = self.search(cr, uid, x, context=context)
    return [('id', 'in', res)]

'date_search': fields.function(lambda self: self, string='Search year',
                               type='integer', fnct_search=_search_year),
Please, give some suggestions!!
Thanks!
Creating Event Log Messages for a Document Library in Microsoft Windows SharePoint Services
Nigel Bridport
Microsoft Corporation
October 2003
Applies to:
Microsoft® Windows® SharePoint™ Services
Microsoft Office SharePoint Portal Server 2003
Summary:. (13 printed pages)
Download CreatingEventsforaDocumentLibrarySample.exe.
Contents
Overview
Event Settings
Enabling Events for Document Libraries
Creating a Microsoft Visual Studio .NET 2003 Project for an Event
Associating Event Code with a Document Library
Conclusion
Appendix A-Sample Installation
Appendix B-Sample Event Code
Overview
Microsoft® Windows® SharePoint™ Services provides document library events that enable you to build upon the Windows SharePoint Services platform. You can create managed code assemblies that define handlers for these events and then bind the handlers to document libraries. The event handlers call into the object model (OM) to access the configuration and content databases directly or they invoke external services, using Windows SharePoint Services as a user interface for other systems. By defining the event handlers and enabling event logging for a specific document library, you can view event messages by using the Microsoft Event Viewer.
The following table describes the events for document libraries provided by Windows SharePoint Services for which you can enable logging.
Table 1. Document library events
In the context of Windows SharePoint Services, an event handler is a .NET class that implements the IListEventSink interface, whose single method, OnEvent, is used within the handler. The SPListEvent object contains information about an event that occurs. You can identify the type of the event from the Type property. You can use the Site property to access the object model of the Microsoft.SharePoint namespace within the handler.
You must install the managed assemblies that define an event handler to the Global Assembly Cache (GAC) or the appropriate virtual server BIN directory.
Note Additionally, in a server farm configuration, each front-end Web server must have the managed assembly installed.
To deploy an event handler on a server, you must enable event handling on the Virtual Server General Settings page in SharePoint Central Administration.
Note You must be a member of the local Administrators group or the SharePoint Administrators group to enable event handling for a document library.
Event Settings
The metadata for a document library binds the class of an event handler to the document library by means of the properties specified in the following table. You can specify these properties on the Document Library Advanced Settings page for the document library, or through code that sets the EventSinkAssembly, EventSinkClass and EventSinkData properties of the SPList object.
Table 2. Possible event settings
After you install the event handler assembly, on the Document Library Advanced Settings page for the document library, you must specify the strong name of the assembly in the Assembly Name box, in the following format:
Assembly_Name, Version=Version, Culture=Culture, PublicKeyToken=Public_Key_Token
You can identify these values by browsing to the default GAC location (%windir%\assembly) in Microsoft Windows Explorer. For example, by default, the strong name for the Windows SharePoint Services assembly is as follows:
Microsoft.SharePoint, Version=11.0.0.0, Culture=Neutral, PublicKeyToken=71e9bce111e9429c
The value that you specify for the Class Name box must be the complete, case-sensitive name of a class defined in the specified assembly that implements the IListEventSink interface. For example, the full name of the class in the following example is as follows:
WSSEventSink.EventSink
Enabling Events for Document Libraries
By default, Windows SharePoint Services does not enable events for a document library. Follow the steps below to enable events for a document library. Events are enabled on a per virtual server basis. Therefore, when you enable events, you enable them for all sites created on the specified virtual server.
To enable events on a virtual server
- On the server running Windows SharePoint Services, click Start, point to Administrative Tools, and then click SharePoint Central Administration. See Figure 1.
Note The default Central Administration page is determined according to which SharePoint product or technology is installed on the computer. If necessary, in the Link pane, click Windows SharePoint Services to go to the Windows SharePoint Services Central Administration page:
Figure 1. The Central Administration page for Windows SharePoint Services
- In the Virtual Server Configuration section, click Configure virtual server settings.
- On the Virtual server list, click the virtual server with which you want to work, for example, click Default Web Site.
- On the Virtual Server Settings page, in the Virtual Server Management section, click Virtual server general settings.
- On the Virtual Server General Settings page, in the Event Handlers section, in the Event handlers are box, select On, and then click OK.
The virtual server that is hosting the SharePoint site is now enabled for events.
Creating a Microsoft Visual Studio .NET 2003 Project for an Event
This section describes how to create a custom event handler.
Important This section assumes that Microsoft Visual Studio® .NET 2003 is installed on the server running Windows SharePoint Services. If you want to develop the Event Handler on a computer that does not have Windows SharePoint Services installed, you will need access to a server running Windows SharePoint Services. On the server computer, copy Microsoft.SharePoint.dll from the <System Drive>:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\60\ISAPI folder and transfer it to a folder on the computer where you will be developing the event handler
To create a custom event handler in Visual Studio .NET 2003
- On the File menu, point to New and then click Project.
- In the Project Types window, click Visual Basic Projects and then in the Templates window, click Class Library.
- In the Name field, type WSSEventSink, and then click OK.
- From the Solution Explorer, click Class.vb and in the Properties window, under Misc, change the File Name to EventSink.vb.
The next step is to add a reference to Microsoft.SharePoint.dll.
- To do so, on the Project menu, click Add Reference. In the .NET list, click Microsoft.SharePoint.dll and then click OK. If this computer does not have Windows SharePoint Services installed, then on the .NET tab, click Browse and then navigate to the folder where you copied the Microsoft.SharePoint.DLL to as described previously. Select the .DLL and click Open.
- Copy the sample code from Appendix B and paste it in the code window for EventSink.vb, replacing any default code already present in the project file and then click Save.
- On the Tools menu, click Create GUID to create a GUID.
- Click 4. Registry Format (ie. {xxxxxxx-xxxx. . . xxxx}), click Copy, and then click Exit.
- In Solution Explorer, click AssemblyInfo.vb to open the code window for AssemblyInfo.vb.
- In the code window for AssemblyInfo.vb, locate the assembly GUID entry and replace the string with the GUID you copied.
- In AssemblyInfo.vb, locate the AssemblyVersion attribute and set a version number.
For example, <Assembly: AssemblyVersion("1.0.0.1")>
- Save the project file.
Next, you need to strong-name the project before using it with Windows SharePoint Services:
- On your development computer, click Start, point to All Programs, point to Microsoft Visual Studio .NET 2003, point to Visual Studio .NET Tools, and then click Visual Studio .NET 2003 Command Prompt.
- Type the following: sn.exe -k c:\keypair.snk
Note You can change the path to whatever is required.
- In the event sink project open the AssemblyInfo.vb file and add the following line to the end of the module
<Assembly: AssemblyKeyFile("c:\keypair.snk")>
- You can now compile the project. To do so, on the Build menu, click Build Solution.
- Verify and resolve any build errors and save any changes. Rebuild the solution if necessary.
- The next step is to copy the new .dll to the Global Assembly Cache (GAC) on your server running Windows SharePoint Services. To do this, browse to the location of the project on your development computer and copy the WSSEventSink.dll file to the GAC on the Windows SharePoint Services server, located at %windir%\assembly (the default GAC location on Windows Server 2003 and Windows XP).
- Note the value of the Public Key Token once you have copied the assembly to the GAC as this value is required when associating the code with a document library in Windows SharePoint Services. To find the Public Key Token for the managed assembly, open the GAC in Windows Explorer and observe the Public Key Token column.
You have now successfully created an event handler for use with Windows SharePoint Services.
Note For a complete version of this code sample, download CreatingEventsforaDocumentLibrarySample.exe.
Associating Event Code with a Document Library
After creating your custom event handler, you must associate it with the relevant document library where you wish the code to execute. This step also acts as the SafeControl entry for Windows SharePoint Services; therefore you do not need to modify other SharePoint files, such as the Web.config.
To associate event code
- On your SharePoint site, browse to the document library that you want to associate with this event code.
- In the link bar, click Modify settings and columns.
- In the General Settings section, click Change advanced settings.
- If you are using the sample managed assembly provided by this article, then in the Event Handler section, in the Assembly Name text box, type the following:
WSSEventSink, Version=1.0.0.2, Culture=neutral, PublicKeyToken=25a7f6c72becfbbb
Note If you have created your own code, then you will need to update the Version and PublicKeyToken values with the ones relevant for your managed assembly.
- In the Class Name text box, type the following:
WSSEventSink.EventSink
- Click OK.
You have now associated the custom event handler code with the document library.
Conclusion
This article provides an overview of how to enable event log messages for a document library in Microsoft Windows SharePoint Services. It provides in-depth instructions on how to create an event handler by using Microsoft Visual Studio .NET 2003 and provides a code sample for use in your own deployment of Microsoft SharePoint Products and Technologies. To see an example result set from the code, see Appendix A—Sample Installation.
Appendix A—Sample Installation
The following are sample events based on the custom event code used in this paper. To open the event viewer, click Start, point to Administrative Tools and then click Event Viewer. The following figure shows the default view of the Event Viewer.
Figure 2. Default view of the event viewer
The following figure shows the properties of a sample event.
Figure 3. Sample event
The following example shows a complete description of the event:
Event occurred in <Shared Documents>
Event type was <Insert>
Item that caused the event was <Shared Documents/sample.doc>
Item field details are as follows ->
Field name = <Created Date>  Value = <08/07/2003 12:00:58>
Field name = <Created By>  Value = <1; user_name\administrator>
Field name = <Last Modified>  Value = <08/07/2003 12:00:58>
Field name = <Modified By>  Value = <1; user_name\administrator>
Field name = <Approval Status>  Value = <0>
Field name = <Approver Comments>  Value = <>
Field name = <URL Path>  Value = </sites/site_name/Shared Documents/Steps to enable events for sample.doc>
Field name = <URL Dir Name>  Value = <11; sites/site_name/Shared Documents>
Field name = <Modified>  Value = <08/07/2003 12:00:58>
Field name = <Created>  Value = <08/07/2003 12:00:58>
Field name = <File Size>  Value = <193024>
Field name = <File System Object Type>  Value = <0>
Field name = <ID of the User who has the item checked out>  Value = <11; >
Field name = <Name>  Value = <Steps to enable events for sample.doc>
Field name = <Virus Status>  Value = <11; 193024>
Field name = <Checked Out To>  Value = <11; >
Field name = <Checked Out To>  Value = <11; >
Field name = <Document Modified By>  Value = <user_name\administrator>
Field name = <Document Created By>  Value = <user_name\administrator>
Field name = <File Type>  Value = <doc>
Field name = <HTML File Type>  Value = <>
Field name = <Source Url>  Value = <>
Field name = <Shared File Index>  Value = <>
Field name = <Name>  Value = <Steps to enable events for sample.doc>
Field name = <Name>  Value = <Steps to enable events for sample.doc>
Field name = <Select>  Value = <11>
Field name = <Select>  Value = <11>
Field name = <Edit>  Value = <>
Field name = <Type>  Value = <doc>
Field name = <Server-based Relative URL>  Value = </sites/site_name/Shared Documents/sample.doc>
Field name = <Encoded Absolute URL>  Value = <>
Field name = <Name>  Value = <Steps to enable events for sample.doc>
Field name = <File Size>  Value = <193024>
Field name = <InstanceID>  Value = <>
Field name = <Title>  Value = <Steps to enable events for sample.doc>
Appendix B—Sample Event Code
The following is sample code available for use when creating custom event handlers. The code is written in Microsoft Visual Basic® .NET using Microsoft Visual Studio .NET 2003.
Option Explicit On

Imports System
Imports System.Diagnostics
Imports System.IO
Imports Microsoft.SharePoint

Public Class EventSink : Implements IListEventSink

    Public Sub OnEvent(ByVal listEvent As Microsoft.SharePoint.SPListEvent) _
        Implements Microsoft.SharePoint.IListEventSink.OnEvent

        On Error Resume Next

        Dim SharePointWeb As SPWeb = listEvent.Site.OpenWeb()
        Dim SharePointEventItem As SPFile = SharePointWeb.GetFile(listEvent.UrlAfter)
        Dim oItem As SPListItem = SharePointEventItem.Item
        Dim oField As SPField
        Dim oFields As SPFieldCollection
        Dim sLog As String

        'Check to make sure that we actually have the event item!
        If SharePointEventItem Is Nothing Then
            EventLog.WriteEntry("Event Log Test", "Cannot retrieve event item", _
                EventLogEntryType.Information, listEvent.Type)
            Exit Sub
        End If

        'Get the fields collection for the event item
        oFields = oItem.Fields

        'Inform the user of some top-level information, such as the source of the event
        'NOTE: this assignment was partially lost in the original listing; using the
        'list title is a reconstruction consistent with the sample output in Appendix A.
        sLog = "Event occurred in <" + listEvent.Title + ">" + vbCrLf
        sLog = sLog + "Event type was <" + listEvent.Type.ToString + ">" + vbCrLf + vbCrLf

        'The delete event carries no useful information to log in the fields collection
        If (Len(listEvent.UrlAfter.ToString) > 1) Then
            sLog = sLog + "Item that caused the event was <" + listEvent.UrlAfter.ToString + ">" + vbCrLf + vbCrLf
            sLog = sLog + "Item field details are as follows ->" + vbCrLf + vbCrLf

            'Iterate through the item's fields and detail them
            For Each oField In oFields
                sLog = sLog + "Field name = <" + oField.Title.ToString + "> " + vbCrLf + _
                    vbTab + "Value = <" + oItem(oField.Title.ToString) + ">" + vbCrLf
            Next
        End If

        'Write the collected details to the event log
        'NOTE: this call was also lost in the original listing; it mirrors the
        'error-handling call above.
        EventLog.WriteEntry("Event Log Test", sLog, EventLogEntryType.Information, listEvent.Type)

        'NOTE: the Select statement itself was missing from the original listing
        Select Case listEvent.Type
            Case SPListEventType.CheckIn
                'Perform necessary actions for the CheckIn event
            Case SPListEventType.CheckOut
                'Perform necessary actions for the CheckOut event
            Case SPListEventType.Copy
                'Perform necessary actions for the Copy event
            Case SPListEventType.Delete
                'Perform necessary actions for the Delete event
            Case SPListEventType.Insert
                'Perform necessary actions for the Insert event
            Case SPListEventType.Invalid
                'Perform necessary actions for the Invalid event
            Case SPListEventType.Move
                'Perform necessary actions for the Move event
            Case SPListEventType.UncheckOut
                'Perform necessary actions for the UnCheckOut event
            Case SPListEventType.Update
                'Perform necessary actions for the Update event
        End Select

    End Sub 'OnEvent

End Class 'EventSink
ol.mousewheel
Hi,
Do you know how it works? I tried with the helper… but nothing happens. Thanks for your help.
you can download it here :
OK, thanks a lot. Oli Larkin says: "No… jit.window reports mouse wheel messages in Max 6, which meant I didn't need an extra object."
I've tried to use this; I can read the value of the wheel (speed) but I can't manage to use it… I connect the result to a flonum… and nothing…
Hi there. I know that ol.mousewheel doesn’t work on the Mac anymore, but would like to let you know that I’ve tried it on Windows 7 (Pro 64) with Max 6.1.5 (32 bit) and it’s working fine there…
On a related note, now I really need a solution on the Mac side… is there any cross-platform solution to get the mouse-wheel information? It's really unfortunate that such a basic and useful feature isn't supported by [mousestate] or any other built-in object (with the exception of jit.window). We can connect Arduinos, Kinects and Wiimotes, but can't get access to the mouse wheel or its additional buttons (and no, [hi] doesn't count, as the keyboard and mouse are not available on the Windows side…).
Just in case somebody is looking for a mousewheel solution for any kind of ui object:
jnativehook implemented as an mxj Java external works well on Windows 10 and Max/MSP 7
Hello Julius,
I think it's a very useful solution that you've shared with us, thank you a lot!
Do you or maybe the other MMJ users know how to implement this in a patch? As a beginner I really do not know where to start…
Looking forward to getting this working.
Thanks for your input,
All the best,
Alfredo
hi Alfredo
1) Download "JNativeHook-2.0.3.zip" from this link: "". Unzip it; you will get a .jar file.
2) Compile the Java source code in the following post. You will get a .class file.
3) Now you have to tell Max/MSP where to find the .jar file and the .class file. This means editing the "max.java.config.txt" file, adding the paths to both files to it. This .txt file can be found in "C:\Program Files\Cycling ’74\Max 7\resources\packages\max-mxj\java-classes". Which lines to edit is described in the .txt file itself.
4) Restart MAX/MSP
5) Create an mxj object and enter "globalMousewheel" as its first argument. Send this object a "start" message and it will output the mouse-wheel data.
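The entries added to max.java.config.txt in step 3 take one path per directive. A sketch with placeholder Windows paths (substitute the folders that actually hold your compiled .class file and the jnativehook jar):

```
max.dynamic.class.dir C:\MaxJava\classes
max.dynamic.jar.dir C:\MaxJava\jars
```

With these in place, Max scans those directories for mxj classes and jars at startup, which is why step 4 (restarting Max) is needed.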
Hope this helps.
julius
import java.util.logging.Level;
import java.util.logging.Logger;
import org.jnativehook.GlobalScreen;
import org.jnativehook.NativeHookException;
import org.jnativehook.mouse.NativeMouseWheelEvent;
import org.jnativehook.mouse.NativeMouseWheelListener;
import com.cycling74.max.Atom;
import com.cycling74.max.MaxObject;
public class globalMousewheel extends MaxObject implements NativeMouseWheelListener{
public globalMousewheel(Atom[] args) {
//declareAttribute("autostart", null, "autoStart");
declareIO(1, 1);
}
@Override
public void nativeMouseWheelMoved(NativeMouseWheelEvent e) {
outlet(0, e.getWheelRotation());
}
public void start(){
// Get the logger for "org.jnativehook" and set the level to warning.
// Logger logger = Logger.getLogger(GlobalScreen.class.getPackage().getName());
// logger.setLevel(Level.WARNING);
//no log message please
final Logger logger = Logger.getLogger(GlobalScreen.class.getPackage().getName());
logger.setLevel(Level.OFF);
//start NativeHook
try {
GlobalScreen.registerNativeHook();
}
catch (NativeHookException ex) {
post("There was a problem registering the native hook.");
post(ex.getMessage());
}
//start the listener
GlobalScreen.addNativeMouseWheelListener(this);
}
public void stop(){
//stop the listener
GlobalScreen.removeNativeMouseWheelListener(this);
//Stop the nativeHook
try {
GlobalScreen.unregisterNativeHook();
}
catch (NativeHookException ex) {
post("There was a problem unregistering the native hook.");
post(ex.getMessage());
}
}
}
Thank you a lot, Julius!
I’m on mac an did try to compile the code above, but get some errors at the end.
After check the code online I have same result:
I’m on Mac, does it have to do with it? Or else?
Thank you again,
All the best,
Alfredo
hi Alfredo
I think the best way to make this work is to set up Eclipse for Max/MSP development as described here:
After this, put the jnativehook.jar in the lib folder of your Eclipse project and add it to the Java build path. (Google: eclipse add external jar to build path)
I think the reason why you get the error messages is that you named the .java file something other than the class name "globalMousewheel". In Java the class name has to be the same as the filename. Even once you have corrected this, you will get further error messages: have a look at the "import" statements at the beginning of the code. Several classes that are stored in the max.jar and the nativehook.jar are imported. When compiling the code, the compiler needs to know where to find these jar files. That's the reason why you have to put them on the build path in Eclipse.
Please let me know how much Java experience you have. If you have little or no Java experience I will post other, more detailed instructions.
julius
Hello Julius,
my Java experience is 0+, so I absolutely have to figure things out… and after a few hours I could compile "something"; please take a look at the attachment.
I’ve finished steps below:
1) Installed Eclipse (eclipse-java-mars-2-macosx-cocoa-x86_64 and jdk-7u79-macosx-x64)
2) Made a new Java project with the title "JNativeHook" (JavaSE-1.7 execution environment)
3) Unpacked JNativeHook-2.0.3.zip, took only the jnativehook-2.0.3.jar from the package and put it in the "lib" folder of the relevant project's workspace
4) Added jitter.jar and max.jar from the Max 6 directory to the same folder
5) Added those 3 files to the "Referenced Libraries" by "Add to Build Path" function in Eclipse
6) Made a new class file with name "globalMousewheel" in the "JNativeHook/src" source folder with modifiers "Public"
7) Pasted your complete script into the globalMousewheel.java tab of Eclipse and compiled the .class
8) Added the lines max.dynamic.class.dir /Users/itsprobablyme/Documents/workspace/JNativeHook/bin and max.dynamic.jar.dir /Users/itsprobablyme/Documents/workspace/JNativeHook/lib to max.java.config.txt and saved the file.
9) restarted Max and added a new [mxj globalMousewheel]
10) Got an error:
MXJ System CLASSPATH:
/Applications/Max 6.1/Cycling ’74/java/lib/jitter.jar
/Applications/Max 6.1/Cycling ’74/java/lib/jode-1.1.2-pre-embedded.jar
/Applications/Max 6.1/Cycling ’74/java/lib/max.jar
MXJClassloader CLASSPATH:
/Applications/Max 6.1/Cycling ’74/java/classes/
/Users/itsprobablyme/Documents/workspace/JNativeHook/lib/jitter.jar
/Users/itsprobablyme/Documents/workspace/JNativeHook/lib/jnativehook-2.0.3.jar
/Users/itsprobablyme/Documents/workspace/JNativeHook/lib/max.jar
/Users/itsprobablyme/Documents/workspace/JNativeHook/bin
Jitter initialized
Jitter Java support installed
java.lang.UnsupportedClassVersionError: globalMousewheel : Unsupported major.minor version 51.0
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:637)
at java.lang.ClassLoader.defineClass(ClassLoader.java:621)
at com.cycling74.max.MXJClassLoaderImpl.doLoadClass(MXJClassLoaderImpl.java:119)
at com.cycling74.max.MXJClassLoader.loadClazz(MXJClassLoader.java:88)
Could not load class ‘globalMousewheel’
I'm on Mac 10.9.5 with Max 6.1 / Java 1.8.0_91
What would you suggest next?
Thank you for the patience,
All the best,
Alfredo
hi Alfredo
Max is currently bound to Java 1.6: for OS X the maximum supported Java version is 1.6 (update 65). This means Max can't handle mxj classes compiled with Java 1.7 or later. Read about this here:
It is quite easy in Eclipse to set up your project to use a specific Java version: right-click your nativehook project, then Properties > Java > Compiler, and choose the 1.6 Apple Java version.
Let me know if this did the trick
julius
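An aside that is not from the thread: the "Unsupported major.minor version 51.0" in the error above is the class-file version (50 = Java 6, 51 = Java 7, 52 = Java 8), and you can read those bytes directly to check what your compiler actually produced. A stdlib-only sketch (the class name ClassVersion is mine):

```java
import java.io.DataInputStream;
import java.io.InputStream;

public class ClassVersion {
    public static void main(String[] args) throws Exception {
        // Inspect this program's own compiled .class file from the classpath.
        InputStream raw = ClassVersion.class.getResourceAsStream("/ClassVersion.class");
        DataInputStream in = new DataInputStream(raw);
        int magic = in.readInt();            // every class file starts with 0xCAFEBABE
        int minor = in.readUnsignedShort();
        int major = in.readUnsignedShort();  // 50 = Java 6, 51 = Java 7, 52 = Java 8
        in.close();
        System.out.println("magic=" + Integer.toHexString(magic)
                + " major=" + major + " minor=" + minor);
    }
}
```

A class compiled for Java 6 reports major 50; if globalMousewheel.class reports 51 or higher, Max 6 on OS X will refuse to load it.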
hi Alfredo
Super that you achieved this.
One more thing: when you use the "globalMousewheel" object in a patch, make sure it has a "closebang" object with a message object ("stop") connected to it. That way NativeHook will shut down when you close your patch; otherwise you will have a running ghost process in your system.
julius
Forums > MaxMSP | https://cycling74.com/forums/topic/ol-mousewheel/ | CC-MAIN-2017-09 | refinedweb | 1,375 | 51.55 |
Records the inputs and outputs of scripts
Project description
Plumbium is a Python package for wrapping scripts so that their inputs and outputs are preserved in a consistent way and results are recorded.
Example
from plumbium import call, record, pipeline
Requirements
Plumbium is tested with Python v2.7 and 3.
- Issue Tracker: github.com/jstutters/plumbium/issues
- Source Code: github.com/jstutters/plumbium
Support
If you are having problems, please let me know by submitting an issue in the tracker.