18th-century bookmarklet
May 24, 2010 at 1:10 PM by Dr. Drang

Following up on this post, in which I tweaked my homemade Twitter client to display certain tweets in an old-fashioned font, I decided to create a JavaScript bookmarklet that does the same when reading tweets in a regular web browser. This opens up 18th-century rendering to everyone.

The bookmarklet uses JavaScript to show @DrSamuelJohnson's tweets in the IM Fell English font, a webfont that Google has made available. It should work in any browser. Just drag this link into your bookmarks bar; you can change the name to whatever you like. If you follow @DrSamuelJohnson, clicking the bookmarklet when you're at twitter.com will change the font of his posts. You don't need to have IM Fell English on your computer; it's downloaded as part of the page, just like a photo or screenshot.

You can, of course, read the bookmarklet to see how it works, but because it's URL-encoded, there are lots of %22s and %20s in there that make reading hard. Here's the unencoded source:

    $("head link:last").after('<link href="" rel="stylesheet" type="text/css">');
    $("li.u-DrSamuelJohnson span.entry-content").css({"font-family": "IM Fell English", "font-size": "120%"});

Pretty simple. The first line sticks a link to the webfont into the <head> of the page. The second line grabs all the @DrSamuelJohnson tweets (list items assigned the u-DrSamuelJohnson class) and changes the font. Only the tweet text itself is changed; the meta information is left alone. The bookmarklet uses jQuery, but there's no need to link to the jQuery library, as Twitter already uses it.

It would be easy to extend the bookmarklet to change anybody's font. If you follow me, please don't change my tweets to Comic Sans; my feelings are easily hurt.

This isn't the solution I really want, because the user has to click the bookmarklet. It would be better if the JavaScript were executed automatically after the page loads.
That would probably require Greasemonkey (for Firefox) or GreaseKit (for Safari), neither of which I have any experience with. Maybe it's time to look into them.

Update 5/24/10
Well, that wasn't too hard, for Safari anyway. Here's a userscript for GreaseKit that does the font change automatically:

    // ==UserScript==
    // @name C18th
    // @namespace
    // @description Renders @DrSamuelJohnson's tweets in IM Fell English.
    // @include *
    // ==/UserScript==
    $("head link:last").after('<link href="" rel="stylesheet" type="text/css">');
    $("li.u-DrSamuelJohnson span.entry-content").css({"font-family": "IM Fell English", "font-size": "120%"});
    $("li.u-DrSamuelJohnson.latest-status span.entry-content").css({"font-family": "IM Fell English", "font-size": "175%"});

You'll note that, in addition to the metadata at the top, I've added a third line of JavaScript. This line makes the text of @DrSamuelJohnson's latest tweet appear in a larger font when I visit his Twitter page, as is conventional.

If you have GreaseKit installed for Safari, click this link to install and activate the userscript. I have it set up to work only with sites that have "twitter.com" in their URL.

This should work for Firefox/Greasemonkey, too, but it doesn't. In fact, even the simple bookmarklet doesn't work for me in Firefox. I don't know why, and since I don't use Firefox I don't care enough to pursue it. If you know the answer, tell me in the comments or an email.
http://leancrew.com/all-this/2010/05/18th-century-bookmarklet/
Functional thinking: Functional features in Groovy, Part 1
Treasures lurking in Groovy

Confession: I never want to work in a non-garbage-collected language again. I paid my dues in languages like C++ for too many years, and I don't want to surrender the conveniences of modern languages. That's the story of how software development progresses. We build layers of abstraction to handle (and hide) mundane details. As the capabilities of computers have grown, we've offloaded more tasks to languages and runtimes. As recently as a decade ago, developers shunned interpreted languages as too slow for production applications, but they are common now. Many of the features of functional languages were prohibitively slow a decade ago but make perfect sense now because they optimize developer time and effort.

Many of the features I cover in this article series show how functional languages and frameworks handle mundane details. However, you don't have to go to a functional language to start reaping benefits from functional constructs. In this installment and the next, I'll show how some functional programming has already crept into Groovy.

Groovy's functional-ish lists

Groovy significantly augments the Java collection libraries, including adding functional constructs. The first favor Groovy does for you is provide a different perspective on lists, which seems trivial at first but offers some interesting benefits.

Seeing lists differently

If your background is primarily in C or C-like languages (including Java), you probably conceptualize lists as indexed collections. This perspective makes it easy to iterate over a collection, even when you don't explicitly use the index, as shown in the Groovy code in Listing 1:

Listing 1.
List traversal using (hidden) indexes

    def perfectNumbers = [6, 28, 496, 8128]

    def iterateList(listOfNums) {
        listOfNums.each { n ->
            println "${n}"
        }
    }
    iterateList(perfectNumbers)

Groovy also includes an eachWithIndex() iterator, which provides the index as a parameter to the code block for cases in which explicit access is necessary. Even though I don't use an index in the iterateList() method in Listing 1, I still think of the list as an ordered collection of slots, as shown in Figure 1:

Figure 1. Lists as indexed slots

Many functional languages have a slightly different perspective on lists, and fortunately Groovy shares this perspective. Instead of thinking of a list as indexed slots, think of it as a combination of the first element in the list (the head) plus the remainder of the list (the tail), as shown in Figure 2:

Figure 2. A list as its head and tail

Thinking about a list as head and tail allows me to iterate through it using recursion, as shown in Listing 2:

Listing 2. List traversal using recursion

    def recurseList(listOfNums) {
        if (listOfNums.size() == 0) return;
        println "${listOfNums.head()}"
        recurseList(listOfNums.tail())
    }
    recurseList(perfectNumbers)

In the recurseList() method in Listing 2, I first check to see whether the list passed as the parameter has no elements in it. If that's the case, I'm done and can return. If not, I print the first element in the list, available via Groovy's head() method, and then recursively call the recurseList() method on the remainder of the list.

Recursion has technical limits built into the platform (see Related topics), so this isn't a panacea. But it should be safe for lists that contain a small number of items. I'm more interested in investigating the impact on the structure of the code, in anticipation of the day when the limits ease or disappear. Given the shortcomings, the benefit of the recursive version may not be immediately obvious. To make it more so, consider the problem of filtering a list.
In Listing 3, I show an example of a filtering method that accepts a list and a predicate (a boolean test) to determine whether an item belongs in the list:

Listing 3. Imperative filtering with Groovy

    def filter(list, p) {
        def new_list = []
        list.each { i ->
            if (p(i)) new_list << i
        }
        new_list
    }

    modBy2 = { n -> n % 2 == 0 }
    l = filter(1..20, modBy2)

The code in Listing 3 is straightforward: I create a holder variable for the elements that I want to keep, iterate over the list, check each element with the inclusion predicate, and return the list of filtered items. When I call filter(), I supply a code block specifying the filtering criteria.

Consider a recursive implementation of the filter method from Listing 3, shown in Listing 4:

Listing 4. Recursive filtering with Groovy

    def filter(list, p) {
        if (list.size() == 0) return list
        if (p(list.head()))
            [] + list.head() + filter(list.tail(), p)
        else
            filter(list.tail(), p)
    }

    l = filter(1..20, { n -> n % 2 == 0 })

In the filter() method in Listing 4, I first check the size of the passed list and return it if it has no elements. Otherwise, I check the head of the list against my filtering predicate; if it passes, I add it to the list (with an initial empty list to make sure that I always return the correct type); otherwise, I recursively filter the tail.

The difference between Listing 3 and Listing 4 highlights an important question: who's minding the state? In the imperative version, I am. I must create a new variable named new_list, I must add things to it, and I must return it when I'm done. In the recursive version, the language manages the return value, building it up on the stack as the recursive return for each method invocation. Notice that every exit route of the filter() method in Listing 4 is a return call, which builds up the intermediate value on the stack. Although not as dramatic a life improvement as garbage collection, this does illustrate an important trend in programming languages: offloading moving parts.
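For comparison, the same head/tail recursive filter can be sketched in Python (this sketch is my illustration, not part of the original article; the names are my own):

```python
def filter_rec(lst, pred):
    """Recursively filter lst, keeping elements for which pred is true.

    Mirrors the recursive Groovy filter: the 'head' is lst[0], the 'tail'
    is lst[1:], and the result is built up by the recursive returns rather
    than in a mutable accumulator variable.
    """
    if not lst:                                  # empty list: nothing to filter
        return []
    head, tail = lst[0], lst[1:]
    if pred(head):
        return [head] + filter_rec(tail, pred)   # keep head, recurse on tail
    return filter_rec(tail, pred)                # drop head, recurse on tail

evens = filter_rec(list(range(1, 21)), lambda n: n % 2 == 0)
# evens == [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
```

As in the Groovy version, no intermediate list is ever mutated by the caller; the runtime assembles the result on the stack.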
If I'm never allowed to touch the intermediate results of the list, I cannot introduce bugs in the way that I interact with it. This perspective shift about lists allows you to explore other aspects, such as a list's size and scope.

Lazy lists in Groovy

One of the common features of functional languages is the lazy list: a list whose contents are generated only as you need them. Lazy lists allow you to defer initialization of expensive resources until you absolutely need them. They also allow the creation of infinite sequences: lists that have no upper bound. If you aren't required to say up front how big the list could be, you can let it be as big as it needs to be. First, I'll show you an example of using a lazy list in Groovy in Listing 5, and then I'll show you the implementation:

Listing 5. Using lazy lists in Groovy

    def prepend(val, closure) { new LazyList(val, closure) }

    def integers(n) { prepend(n, { integers(n + 1) }) }

    @Test
    public void lazy_list_acts_like_a_list() {
        def naturalNumbers = integers(1)
        assertEquals('1 2 3 4 5 6 7 8 9 10',
            naturalNumbers.getHead(10).join(' '))
        def evenNumbers = naturalNumbers.filter { it % 2 == 0 }
        assertEquals('2 4 6 8 10 12 14 16 18 20',
            evenNumbers.getHead(10).join(' '))
    }

The first method in Listing 5, prepend(), creates a new LazyList, allowing you to prepend values. The next method, integers(), returns a list of integers using the prepend() method. The two parameters I send to the prepend() method are the initial value of the list and a code block that generates the next value. The integers() method acts like a factory that returns the lazy list of integers with a value at the front and a way to calculate additional values in the rear. To retrieve values from the list, I call the getHead() method, which returns the requested number of values from the top of the list. In Listing 5, naturalNumbers is a lazy sequence of all integers.
To get some of them, I call the getHead() method, specifying how many integers I want. As the assertion indicates, I receive a list of the first 10 natural numbers. Using the filter() method, I retrieve a lazy list of even numbers and call the getHead() method to fetch the first 10 even numbers. The implementation of LazyList appears in Listing 6:

Listing 6. LazyList implementation

    class LazyList {
        private head, tail

        LazyList(head, tail) {
            this.head = head
            this.tail = tail
        }

        def LazyList getTail() { tail ? tail() : null }

        def List getHead(n) {
            def valuesFromHead = []
            def current = this
            n.times {
                valuesFromHead << current.head
                current = current.tail
            }
            valuesFromHead
        }

        def LazyList filter(Closure p) {
            if (p(head))
                p.owner.prepend(head, { getTail().filter(p) })
            else
                getTail().filter(p)
        }
    }

A lazy list holds a head and tail, specified in the constructor. The getTail() method ensures that tail isn't null and executes it. The getHead() method gathers the elements to return one at a time, pulling the existing element off the head of the list and asking the tail to generate a new value. The call to n.times {} performs this operation for the number of elements requested, and the method returns the harvested values. The filter() method in Listing 6 uses the same recursive approach as Listing 4 but implements it as part of the list rather than as a stand-alone function.

Lazy lists exist in Java as well (see Related topics) but are much easier to implement in languages that have functional features. Lazy lists work great in situations in which resources are expensive to generate, such as getting lists of perfect numbers.

Lazy list of perfect numbers

If you've been following this article series, you're well aware of my favorite guinea-pig code: finding perfect numbers (see "Thinking functionally, Part 1"). One of the shortcomings of all the implementations so far is the need to specify the number for classification.
Instead, I want a version that returns a lazy list of perfect numbers. Toward that goal, I've written a highly functional, very compact perfect-number finder that supports lazy lists, shown in Listing 7:

Listing 7. Pared-down version of the number classifier, including the nextPerfectNumberFrom() method

    class NumberClassifier {
        static def factorsOf(number) {
            (1..number).findAll { i -> number % i == 0 }
        }

        static def isPerfect(number) {
            factorsOf(number).inject(0, { i, j -> i + j }) == 2 * number
        }

        static def nextPerfectNumberFrom(n) {
            while (!isPerfect(++n)) ;
            n
        }
    }

If the code in the factorsOf() and isPerfect() methods seems obscure, you can see the derivation of those methods in the last installment. The new method, nextPerfectNumberFrom(), uses the isPerfect() method to find the next perfect number beyond the number passed as the parameter. This method call will take a long time to execute even for small values (especially given how unoptimized this code is); there just aren't that many perfect numbers.

Using this new version of NumberClassifier, I can create a lazy list of perfect numbers, as shown in Listing 8:

Listing 8. Lazily initialized list of perfect numbers

    def perfectNumbers(n) {
        prepend(n, { perfectNumbers(nextPerfectNumberFrom(n)) })
    }

    @Test
    public void infinite_perfect_number_sequence() {
        def perfectNumbers = perfectNumbers(nextPerfectNumberFrom(1))
        assertEquals([6, 28, 496], perfectNumbers.getHead(3))
    }

Using the prepend() method I defined in Listing 5, I construct a list of perfect numbers with the initial value as the head and a closure block that knows how to calculate the next perfect number as the tail. I initialize my list with the first perfect number after 1 (using a static import so that I can call my NumberClassifier.nextPerfectNumberFrom() method more easily), then I ask my list to return the first three perfect numbers. Calculating new perfect numbers is expensive, so I would rather do it as little as possible.
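The same laziness falls out of Python generators almost for free. Here is a sketch of an infinite perfect-number sequence (my illustration, not from the original article; the function names are my own):

```python
from itertools import islice

def is_perfect(number):
    # A number is perfect when the sum of its divisors (including itself)
    # equals twice the number, matching the isPerfect() test in Listing 7.
    return sum(i for i in range(1, number + 1) if number % i == 0) == 2 * number

def perfect_numbers():
    """Infinite, lazily evaluated sequence of perfect numbers."""
    n = 1
    while True:
        n += 1
        if is_perfect(n):
            yield n   # each value is computed only when the consumer asks

# islice pulls just the first three values; nothing beyond 496 is ever checked.
first_three = list(islice(perfect_numbers(), 3))
# first_three == [6, 28, 496]
```

Because the generator suspends between yields, the expensive classification work happens only on demand, exactly the property the LazyList class provides in Groovy.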
By building it as a lazy list, I can defer the calculations until the optimum time.

It is more difficult to think about infinite sequences if your abstraction of "list" is "numbered slots." Thinking of a list as the "first element" and the "rest of the list" encourages you to think of the elements in the list rather than the structure, which in turn allows you to think about things like lazy lists.

Conclusion

One of the ways that developers make quantum leaps in productivity is by building effective abstractions to hide details. We would never get anywhere if we were still coding with ones and zeros. One of the appealing aspects of functional languages is the attempt to abstract more details away from developers. Modern dynamic languages on the JVM already give you many of these features. In this installment, I showed how just shifting your perspective a little on how lists are constructed allows the language to manage state during iteration. Also, when you think of lists as "head" and "tail," you can build things like lazy lists and infinite sequences. In the next installment, you'll see how Groovy metaprogramming can make your programs more functional, and lets you augment third-party functional libraries to make them work better in Groovy.

Related topics
- The Productive Programmer (Neal Ford, O'Reilly Media, 2008): Neal Ford's book discusses tools and practices that help you improve your coding efficiency.
- An Excursion in Java Recursion: Recursion in Java has well-known limits, including some pointed out in this blog.
- Find limit of recursion: Find the recursion depth on your platform with these tests from RosettaCode.
https://www.ibm.com/developerworks/java/library/j-ft7/index.html?ca=drs-
#include <CV977Busy.h>

class CV977Busy : public CBusy {
public:
    CV977Busy(uint32_t base, unsigned crate = 0);
    CV977Busy(CCAENV977& module);
    virtual void GoBusy();
    virtual void GoClear();
};

Busy objects control external hardware that indicates when the readout software is unable to respond to a new trigger. The CAEN V977 module achieves this by using its ability to copy input signals to its outputs in a latched manner. It is legal to use a single module as both a trigger module and a busy module. The following signals are used by the module:

CV977Busy(uint32_t base, unsigned crate = 0);
Constructor that uses the base address and VME crate number to describe the module.

CV977Busy(CCAENV977& module);
Constructor that operates with an existing CCAENV977 module.

virtual void GoBusy();
Called by the framework to pulse the going-busy output.

virtual void GoClear();
Called by the framework to clear the busy output.

CBusy(3sbsReadout), CV977Trigger(3sbsReadout)
http://docs.nscl.msu.edu/daq/newsite/nscldaq-11.2/r79864.html
As for the spam issues that go along with email list usage - it really depends upon your particulars: your mail server, the application you use, the recipients' mail servers, and so on. Reaching people at home is becoming increasingly harder, and I don't foresee this getting any easier. Reaching business customers, however, hasn't changed much.

Is it worth the investment? I still think it is, but with the caveats I mentioned earlier. You build the list and you nurture the relationship. This does two things. One, it helps eliminate the anonymity of your business by giving you the opportunity to display your personality. Second, it offers your customers a way to contact you in return and gives them more of a sense that you're a real business, and through the consistent and regular use of email, you establish a virtual identity in their minds. I'm relying on the old advertising rule of thumb here, the one that says it takes a potential customer seeing your ad 7-10 times before they will act on it.

...and as all the others have said (quite rightly too), building up and managing your own customer database as a marketing channel is one of the best ways to approach email marketing, because you can be sure of the lead quality and also that they're interested in what you're selling.

Is buying an email list worth it? Maybe; it depends what you expect that list to do for you. Bought-in email lists tend to work well as limited-use tools to sell products or build your customer database; they work less well if you just merge them with your existing customer data before you've confirmed the interest of the contact/prospect. To get the most from a list you'd want to target people who've shown an interest in your area previously and who are in the right demographic - people who're more likely to register or buy something from you, therefore becoming part of your customer database, which is a very valuable marketing asset in its own right.
Aren't all email lists rubbish? No, a good, well-managed email list can be tailored to meet your needs and offer a good response rate, low bounce rates, and very few spam problems, but it will cost you more to use. You'll note I said "use" and not "buy" - most businesses that sell quality email lists only lease their data for a set purpose; they don't sell it outright. They normally won't release their data outside the company, and those that do will only release it to trusted third parties - this ensures they have complete control over how and when their data is used, which helps keep their list responsive, as the people on it don't get overloaded with junk.

Unless you're dealing with your own customer data, IMHO the best approach to legitimate email marketing is to pay a reputable agency or list broker to manage the campaign, which allows you to treat it as you would a regular marketing exercise. Why? Well, for starters, unless you're heavily into legitimate email broadcasting they're probably going to have a better broadcast setup than you, meaning more emails get through and fewer get bounced or encounter technical issues. Quality email lists used to offer some kind of compensation or refund for verified bounces, but only if they're coming from a reputable source. Being a client has its advantages too: you have a knowledgeable point of contact for your questions, their familiarity with the email lists available on the market should ensure you get the best results possible for your campaign, and they should be able to offer advice on response rates based on previous campaigns. It also means that if something does go wrong you're not left "holding the bag", because all the email comes from their servers and points back to their servers (so they can track response rates), although with 100% opt-in data that shouldn't be a problem.
- Tony

Using seed names in a list works a little like this: the people compiling the list strike a side agreement with some of the contacts - in return for some freebie or other, the contact agrees to report back on any offers they receive as a result of being on that list. All the data owners have to do is compare their list of "allowed" users with the responses from their seed names - if they find someone using the list without permission, or using the list outside the terms of their license, they send in the legal department armed with a copy of the license. How do they know for sure? Well, if you managed to hit multiple seed names, and those people have been given some unique identifier by the data owner (such as a typo in their name), then the only way you could have done that is by using the list they provided to you.

- Tony

Someone could have received or stolen the name from ANYWHERE. The disk could have been lost... hell, it could be a postal worker who used the address. It's awful tough to prove, I'd imagine.

Well, let's say they've given their "plant" data some unique contact information (for example, giving Mr and Mrs Smith's house a vanity name when it's never had one, using a fictional middle name for Mr Jones, or even adding a completely fictitious person who doesn't exist), and for the sake of argument the seed names have been inserted really well, so you can't spot them. At this point you've got data which is unique to a certain source or company, and they could argue that there's no other way for you to have gotten that data except through them - Mr and Mrs Smith don't really have a tacky house name and Mr Jones doesn't really have a middle name, so they couldn't appear elsewhere, and then there's that fictitious person who only exists in the data from one company.
Hitting a small number of seeded names might not raise any alarm bells, but if you managed to hit a large portion of the seeded names present (complete with unique elements), then it'd be pretty obvious someone was using that data, because the odds of those specific people randomly getting the same piece of marketing from the same company are going to be really, really low. What if the data "leaks"? Well, there's nothing to stop a seed name from being unique to a certain client or campaign, so they could identify the source of the leak by looking at the marketing to see who's using it now, and then ask all manner of sticky questions.

- Tony

ps. You're right that it's not an exact science, but that's why most companies who value their information generally don't let it out of their sight, and even then there would probably still be seed names present "just in case".

You're right that it's not an exact science

Actually, direct marketing companies have been doing this for years. They have a large seed list, and the names are inserted with a code that identifies the mailing it was rented to. No one is going to go through a couple of 10K names to look for them. In physical mailing lists "HannaK152HY Smith" looks a bit odd and still works, but on an email list? That's like the norm. Besides that, many of the "good" companies you can try renting from won't even let you get your hands on the list itself. You would send them the creative, which they would then mail for you.

- Josh

[edited by: lorax at 7:55 pm (utc) on Jan. 28, 2005] [edit reason] removed URL [/edit]

CITYADMIN... that's all you paid for THAT many addresses? ...just $30? A sure sign that the list is worthless. Looking for a good list is like looking for a good girl/boyfriend: you don't want one that is cheap and has been used by everyone on the block. And if you do decide to shack up, don't come crying if you end up with some *ehm* problems.
;) With direct mail, you normally do get the names, and they are heavily seeded, so it would be impossible to be sure you had removed the seeded names if you wanted to try to use the list again without permission. DM companies keep very close tabs on the mailings. Email is both easier and harder. Easier, because you can easily put in thousands of seed names that are virtually undetectable. Harder, because spammers use programs that generate infinite numbers of letter/number combinations for email spamming, and so could possibly mail a seed email without ever knowing a list exists. You can't really do that with seeded physical addresses.

Emails are much easier to send than postal mail, so most of the time a list provider or broker will simply mail the list for you. Very good providers and brokers will provide a guarantee of return (i.e., 1% of the list will respond) so that you know you are not getting junk.

Either way, I would caution anyone who is thinking about renting a list to be very careful. Over the past few years, a few companies have been prosecuted for spamming when in fact they had either rented a bad list or someone with a bad list emailed on their behalf. The government considers you liable, even if the "guy told you it was a real opt-in list". Spamming is now a crime, and the government is taking it very seriously.

Spammer gets 9 years [webmasterworld.com]
Spammers ordered to pay $1 billion [webmasterworld.com]
https://www.webmasterworld.com/forum22/3372.htm
I have gone through a lot of similar questions on Stack Overflow, like this one. Each one has a different perspective on deploying a React app, but this question is for those who use redux, react-router, and react-router-redux. It took me a lot of time to figure out the proper way to host a single-page application, so I am aggregating all the steps here.

I have written the answer below, which contains the steps to deploy a react-redux app to S3. There is a video tutorial for deploying a React app to S3. Deploying the app as such to S3 is easy, but we need to make some changes in our React app. Since the process is difficult to follow, I am summarising it as steps. Here it goes:

Add a bucket policy so that everyone can access the React app. Click the created bucket -> Properties -> Permissions, then click Edit bucket policy. Here is an example bucket policy:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "Allow Public Access to All Objects",
          "Effect": "Allow",
          "Principal": "*",
          "Action": "s3:GetObject",
          "Resource": "arn:aws:s3:::<YOUR_BUCKET_NAME>/*"
        }
      ]
    }

If you want logs, create another bucket and set its name under the Logs section in Properties.

You need to have an index.html (the starting point of your app), an error.html, and all the public assets (the browserified React app and other CSS, JS, and image files) in a folder. Make sure you have relative references to all these public files in index.html.

Now, navigate to Properties -> Static website hosting. Enable the website hosting option and add index.html and error.html to the respective fields. On saving, you will receive the endpoint.

Finally, your react-redux app is deployed. All your React routes will work properly, but when you reload, S3 will redirect to that specific route. As no such routes are actually defined on S3, it renders error.html. We need to add some redirection rules under static website hosting.
So, when a 404 occurs, S3 needs to redirect to index.html, but this can only be done by adding a prefix like #!. Now, when you reload the page with any React route, instead of going to error.html, the same URL will be loaded but with a #! prefix. Click Edit redirection rules and add the following code:

    <RoutingRules>
      <RoutingRule>
        <Condition>
          <HttpErrorCodeReturnedEquals>404</HttpErrorCodeReturnedEquals>
        </Condition>
        <Redirect>
          <HostName> [YOUR_ENDPOINT] </HostName>
          <ReplaceKeyPrefixWith>#!/</ReplaceKeyPrefixWith>
        </Redirect>
      </RoutingRule>
    </RoutingRules>

In React, we need to remove that prefix and push the route back as the original route. Here is the change you need to make in the React app. I have a main.js file, which is the starting point of the React app. Add a listener to browserHistory like this:

    browserHistory.listen(location => {
      const path = (/#!(\/.*)$/.exec(location.hash) || [])[1];
      if (path) {
        history.replace(path);
      }
    });

The above code will remove the prefix and push the history to the correct route (HTML5 browser history). For eg: something like this will change to. Doing this makes react-router understand and change the route accordingly.
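To see concretely what the listener's regular expression does, here is the same pattern exercised in Python (an illustration only; the app itself does this in JavaScript, and the function name here is my own):

```python
import re

def strip_prefix(url_hash):
    r"""Extract the real route from a hash like '#!/some/route'.

    Mirrors the JavaScript expression (/#!(\/.*)$/.exec(location.hash) || [])[1]:
    it returns the captured path when the '#!' prefix is present, None otherwise.
    """
    match = re.search(r'#!(/.*)$', url_hash)
    return match.group(1) if match else None

# After the S3 redirect, location.hash holds the prefixed route:
print(strip_prefix('#!/users/profile'))  # -> /users/profile
# A plain hash (no redirect happened) yields no path:
print(strip_prefix('#section-2'))        # -> None
```

The listener then hands the extracted path to history.replace(), so react-router sees the original route.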
Here is my main.js file for better understanding:

    import React from 'react';
    import ReactDOM from 'react-dom';
    import { createStore, compose, applyMiddleware } from 'redux';
    import { Provider } from 'react-redux';
    import { Router, Route, IndexRoute, browserHistory, useRouterHistory } from 'react-router';
    import { syncHistoryWithStore } from 'react-router-redux';
    import createLogger from 'redux-logger';
    import thunk from 'redux-thunk';
    import reducers from './reducers'; // your combined root reducer

    const logger = createLogger();
    const createStoreWithMiddleware = applyMiddleware(thunk, logger)(createStore);
    const store = createStoreWithMiddleware(reducers);
    const history = syncHistoryWithStore(browserHistory, store);

    browserHistory.listen(location => {
      const path = (/#!(\/.*)$/.exec(location.hash) || [])[1];
      if (path) {
        history.replace(path);
      }
    });

So uploading the browserified or webpacked React app (the app.js file) will do what's needed. Since I found it difficult to gather all this material, I have aggregated it as steps.
https://codedump.io/share/XtWOR8fcVsw9/1/deploying-react-redux-app-in-aws-s3
In this series of blog posts, I'll be writing Python code on the Intel Galileo platform. I'll be using the Grove kit, but you can also use a breadboard instead. There are two libraries available on the Intel Galileo for developing applications written in Python.

As is customary in the programming world, the very first program has to be "hello world", and in the embedded world that means blinking an LED.

The above syntax is for accessing the remote files (the file that we want to create and edit has to be on the Galileo).

To run the code, type the following in the Galileo console:

If you have connected the LED to D5, you should now see the LED blink. Now that you know what the code does, it is time to see how it does it. The first thing is to bring in the "mraa" module. This library lets us control the GPIO. Next we import the "time" module, which provides the "sleep" method used for the blinking effect.

    import mraa   # For accessing the GPIO
    import time   # For sleeping between blinks

Next, we create an instance of the D5 GPIO pin, which provides us with the methods to control the selected GPIO pin. In this example we will be using the methods "dir" and "write".

    LED_GPIO = 5                    # we are using the D5 pin
    blinkLed = mraa.Gpio(LED_GPIO)  # Get the LED pin object

Set the direction of the GPIO port we want to control to "OUT", since we want to output the voltage rather than input it. The voltage that we output is applied across the LED, and this controls the state of that LED.

    blinkLed.dir(mraa.DIR_OUT)      # Set the direction as output

Then, in an infinite loop, the LED is turned on and off depending on the state of the flag "ledState", which is toggled every cycle. To turn the LED on or off, the "write" method is used: writing "1" applies a high logic level at the selected pin, turning the LED on, and writing "0" turns the LED off.
ledState = False              # Track whether the LED is currently on

while True:
    if ledState == False:     # LED is off, turn it on
        blinkLed.write(1)
        ledState = True       # LED is on
    else:                     # LED is on, turn it off
        blinkLed.write(0)
        ledState = False
    time.sleep(1)             # Pause between toggles for a visible blink
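The mraa library is only present on the board itself, so a quick way to sanity-check the toggle logic on a desktop is to substitute a stub for mraa.Gpio. The FakeGpio class and blink() helper below are hypothetical test scaffolding, not part of mraa:

```python
# Hypothetical stand-in for mraa.Gpio, to exercise the blink logic off-board
class FakeGpio:
    def __init__(self, pin):
        self.pin = pin
        self.writes = []      # record every value written to the pin

    def dir(self, direction):
        self.direction = direction

    def write(self, value):
        self.writes.append(value)

def blink(gpio, cycles):
    # Same toggle logic as the Galileo example, bounded for testing
    led_state = False
    for _ in range(cycles):
        if not led_state:
            gpio.write(1)     # LED off -> turn it on
            led_state = True
        else:
            gpio.write(0)     # LED on -> turn it off
            led_state = False

led = FakeGpio(5)
led.dir("out")
blink(led, 4)
print(led.writes)             # [1, 0, 1, 0]
```

Swapping FakeGpio for mraa.Gpio (and adding the time.sleep delay back) gives the on-board version.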
https://navinbhaskar.wordpress.com/2015/03/20/python-on-intel-galileoedison-part-1/
CC-MAIN-2017-34
refinedweb
390
78.59
When working in interactive mode, you sometimes need to be reminded what names you've given to objects. The dir() function, which is built into interactive mode, lists the names (such as names of data objects, module names, and function names) that are stored in the interactive mode's namespace at any particular point in your coding session. (Namespace is a Python term for a list of names that a particular part of a program knows about.)

Tip: You can also use the dir() function to examine the contents of modules.

Examining the namespace

The following example shows what happens when you start Python's interactive mode (so you have not defined anything yet), use dir() to see what is defined, and then give a value to a name and use dir() again:

% python
Python 2.5b1 (r25b1:47038M, Jun 20 2006, 16:17:55)
>>> dir()
['__builtins__', '__doc__', '__name__']
>>> too_many_cats = 3
>>> dir()
['__builtins__', '__doc__', '__name__', 'too_many_cats']

After you give a value the name too_many_cats, the namespace remembers that name and gives you the value if you ask for it.
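The same behaviour can be checked in a script rather than at the prompt (at script level dir() also reports extra names such as __file__, so we only test for the one name we add; the variables before and after are mine):

```python
# dir() reflects the names bound in the current namespace at call time
before = dir()
too_many_cats = 3
after = dir()

print('too_many_cats' in before)  # False: dir() ran before the assignment
print('too_many_cats' in after)   # True: the namespace now knows the name
```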
https://www.pythonstudio.us/tutorial-2/examining-names.html
CC-MAIN-2019-51
refinedweb
185
58.25
Suppose we have a string s whose characters are sorted, and we also have a number k; we have to find the length of the longest substring such that every character in it occurs at least k times. So, if the input is like s = "aabccddeeffghij" and k = 2, then the output will be 8, as the longest such substring here is "ccddeeff": every character in it occurs at least 2 times.

To solve this, we will follow these steps − count the occurrences of every character; scan the string, accumulating characters whose total count is at least k; whenever a character with fewer than k occurrences is met, recursively solve the block accumulated so far and start a new block; the answer is the length of the longest valid block found.

Let us see the following implementation to get a better understanding −

from collections import Counter

class Solution:
    def solve(self, s, k):
        def rc(lst):
            c = Counter(lst)
            acc = []
            ans = 0
            valid = True
            for x in lst:
                if c[x] < k:
                    # x can never be inside a valid substring: recurse on
                    # the block accumulated so far and start a new block
                    valid = False
                    ans = max(ans, rc(acc))
                    acc = []
                else:
                    acc.append(x)
            if valid:
                return len(acc)
            ans = max(ans, rc(acc))
            return ans
        return rc(list(s))

ob = Solution()
s = "aabccddeeffghij"
k = 2
print(ob.solve(s, k))

Input

"aabccddeeffghij", 2

Output

8
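The same idea is often written as a plain divide-and-conquer function: any character occurring fewer than k times can never appear in a valid substring, so we can split the string on it and recurse on the pieces. A compact sketch (the helper name longest_substring is mine):

```python
from collections import Counter

def longest_substring(s, k):
    # Split on the first character occurring fewer than k times;
    # if every character occurs at least k times, the whole string is valid.
    cnt = Counter(s)
    for ch in s:
        if cnt[ch] < k:
            return max(longest_substring(part, k) for part in s.split(ch))
    return len(s)

print(longest_substring("aabccddeeffghij", 2))  # 8
```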
https://www.tutorialspoint.com/program-to-find-length-of-longest-substring-with-character-count-of-at-least-k-in-python
CC-MAIN-2021-49
refinedweb
159
59.37
Support for OASIS WS-SX standards in Metro

We provide support for the OASIS WS-SX standards WS-SecurityPolicy 1.2, WS-SecureConversation 1.3 and WS-Trust 1.3 in the current build of Metro. This will be included in the upcoming Metro 1.2 release as EA features. No Netbeans tooling support is available yet. However, one can manually modify the WSDL and configuration produced from Netbeans to produce a WS-SX based service and STS. This applies to all the existing security scenarios using previous versions of WS-Trust and WS-SecureConversation.

1. Create a service secured with WS-SX: First create a service with Netbeans using an IssuedToken from an STS and/or secure conversation for the security. Then make the following changes to the service WSDL: 1) Change all the occurrences of the WS-SecurityPolicy namespace from "" to "". The change must also apply to the IncludeToken attribute. 2) Change all the occurrences of the WS-Trust namespace from "" to "". This mainly applies to the element in the RequestSecurityTokenTemplate in the IssuedToken policy assertion and to what is used for Action. 3) Change the policy assertion Trust10 to Trust13.

2. Create an STS of the WS-SX version: First create an STS using Netbeans. (See also my blog entry for creating a custom STS.) Then follow steps 1), 2) and 3) above to make the namespace and policy assertion changes.

3. Using WS-Policy 1.5 with WS-SX: With the service and STS produced from 1 and 2, WS-Policy 1.2 is used. One may also use the standard WS-Policy 1.5 from the W3C with WS-SX support. For this: 1) Change all the occurrences of the WS-Policy namespace from "" to "". 2) Using addressing metadata: remove the policy assertion UsingAddressing.
And then add the following assertion instead to enable Addressing:

<wsam:Addressing>
    <wsp:Policy>
        <wsam:AnonymousResponses />
    </wsp:Policy>
</wsam:Addressing>

where xmlns:wsp="" xmlns:wsam="". Also change the prefix for Action to wsam (e.g. wsam:Action=""). This mainly addresses the use of WS-SX for existing features. There are also some new features introduced for Metro 1.2 which will be described in subsequent blogs. We will also provide samples with WS-SX in the current WSIT workspace.

Posted at 10:55PM Feb 03, 2008 by jiandongg in Sun | Comments[2]

Our team is enjoying this new standard support added in Metro. We are writing security assertions in our service contract, conforming to the OASIS WS-SecurityPolicy 1.2 specification. We have followed the above recommendations, replacing all namespace references with the new OASIS namespace. All works fine: the old assertions work with the new namespace. Then, reading the OASIS specification, we added a new element <sp:HashPassword /> under the UsernameToken assertion. But this doesn't work. We see the following exception at startup time:

WARNING: SP0100: Policy assertion Assertion[com.sun.xml.ws.policy.sourcemodel.DefaultPolicyAssertionCreator$DefaultPolicyAssertion] {
    assertion data {
        namespace = ''
        prefix = 'sp'
        local name = 'HashPassword'
        value = 'null'
        optional = 'false'
        ignorable = 'false'
        no attributes
    }
    no parameters
    no nested policy
} is not supported under UsernameToken assertion.

The question is: is the HashPassword element supported? If not, how can we receive a DigestPasswordRequest in our Handler? This object is mandatory for our task, because we must use it for other business controls. Thanks in advance

Posted by Michele Di Noia on February 14, 2008 at 07:17 AM PST #

HashPassword is not supported in Metro currently. It is in our plan. This is actually possible now that GlassFish (the SailFin release) has support for PasswordDigestAuthentication.
If you need this feature earlier, please file an RFE here: Posted by Jiandong Guo on February 14, 2008 at 09:27 AM PST #
http://blogs.sun.com/trustjdg/entry/oasis_ws_sx_support_in
crawl-001
refinedweb
661
50.33
Maximum amount of score that can be decreased from a graph

Suppose there is a weighted, undirected graph that has n vertices and m edges. The score of the graph is defined as the sum of all the edge weights in the graph. The edge weights can be negative, and if such edges are removed the score of the graph increases. We have to make the score of the graph minimum by removing edges while keeping the graph connected, and find out the maximum amount by which the score can be decreased. The graph is given in an array 'edges', where each element is of the form {weight, {vertex1, vertex2}}.

So, if the input is like n = 5, m = 6, edges = {{2, {1, 2}}, {2, {1, 3}}, {1, {2, 3}}, {3, {2, 4}}, {2, {2, 5}}, {1, {3, 5}}}, then the output will be 4. If we remove edges (1, 2) and (2, 5) from the graph, the total decrease in score will be 4 and the graph will stay connected.

To solve this, we will follow these steps −

cnum := 0
Define an array par of size: 100.
Define an array dim of size: 100.
Define a function make(), this will take v,
   par[v] := v
   dim[v] := 1
Define a function find(), this will take v,
   if par[v] is same as v, then:
      return v
   return par[v] = find(par[v])
Define a function unify(), this will take a, b,
   a := find(a)
   b := find(b)
   if a is not equal to b, then:
      (decrease cnum by 1)
      if dim[a] > dim[b], then:
         swap values of (a, b)
      par[a] := b
      dim[b] := dim[b] + dim[a]
cnum := n
sort the array edges based on edge weights
for initialize i := 1, when i <= n, update (increase i by 1), do:
   make(i)
res := 0
for each edge in edges, do:
   a := first vertex of edge
   b := second vertex of edge
   weight := weight of edge
   if find(a) is same as find(b), then:
      if weight >= 0, then:
         res := res + 1 * weight
      Ignore following part, skip to the next iteration
   if cnum is same as 1, then:
      if weight >= 0, then:
         res := res + 1 * weight
   Otherwise
      unify(a, b)
return res

Example

Let us see the following implementation to get a better understanding −

#include <bits/stdc++.h>
using namespace std;

int cnum = 0;
int par[100];
int dim[100];

void make(int v){
   par[v] = v;
   dim[v] = 1;
}
int find(int v){
   if(par[v] == v)
      return v;
   return par[v] = find(par[v]);
}
void unify(int a, int b){
   a = find(a);
   b = find(b);
   if(a != b){
      cnum--;
      if(dim[a] > dim[b]){
         swap(a, b);
      }
      par[a] = b;
      dim[b] += dim[a];
   }
}
int solve(int n, int m, vector <pair <int, pair<int,int>>> edges){
   cnum = n;
   sort(edges.begin(), edges.end());
   for(int i = 1; i <= n; i++)
      make(i);
   int res = 0;
   for(auto &edge : edges){
      int a = edge.second.first;
      int b = edge.second.second;
      int weight = edge.first;
      if(find(a) == find(b)) {
         if(weight >= 0)
            res += 1 * weight;
         continue;
      }
      if(cnum == 1){
         if(weight >= 0)
            res += 1 * weight;
      } else {
         unify(a, b);
      }
   }
   return res;
}
int main() {
   int n = 5, m = 6;
   vector <pair<int, pair<int,int>>> edges = {{2, {1, 2}}, {2, {1, 3}}, {1, {2, 3}}, {3, {2, 4}}, {2, {2, 5}}, {1, {3, 5}}};
   cout << solve(n, m, edges);
   return 0;
}

Input

5, 6, {{2, {1, 2}}, {2, {1, 3}}, {1, {2, 3}}, {3, {2, 4}}, {2, {2, 5}}, {1, {3, 5}}}

Output

4
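The union-find logic above can be checked with a compact Python sketch: process edges in increasing weight order, keep an edge only if it joins two components, and add every redundant non-negative weight to the removable total (function name max_score_decrease is mine; the redundant cnum == 1 branch is folded into the same-root check):

```python
def max_score_decrease(n, edges):
    # edges: list of (weight, u, v) with vertices numbered 1..n
    parent = list(range(n + 1))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    removed = 0
    for w, u, v in sorted(edges):          # cheapest edges first
        ru, rv = find(u), find(v)
        if ru == rv:                       # already connected: edge is removable
            if w >= 0:
                removed += w
        else:
            parent[ru] = rv                # keep the edge for connectivity
    return removed

edges = [(2, 1, 2), (2, 1, 3), (1, 2, 3), (3, 2, 4), (2, 2, 5), (1, 3, 5)]
print(max_score_decrease(5, edges))        # 4
```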
https://www.tutorialspoint.com/cplusplus-program-to-find-out-the-maximum-amount-of-score-that-can-be-decreased-from-a-graph
CC-MAIN-2022-33
refinedweb
818
51.99
Here is the compile error I am hitting:

rm -f renderedge.o
xlc -c -O -D__STR31__ -DNDEBUG -I. -I../include -I../mi -I../../../include/fonts -I../fb -I../hw/kdrive -I../../../include/extensions -I../../../exports/include/X11 -I../../../include/fonts -I../Xext -I../../.. -I../../../exports/include -DSYSV -DAIXV3 -DAIXV4 -DAIXV5 -D_ALL_SOURCE -DSHAPE -DXKB -DLBX -DXAPPGROUP -DXCSECURITY -DTOGCUP -DDPMSExtension -DPIXPRIV -DRENDER -DRANDR -DXFIXES -DDAMAGE -DCOMPOSITE -DXEVIE -D_IBM_LFT -DNDEBUG -DFUNCPROTO=15 renderedge.c
"renderedge.c", line 47.9: 1506-213 (S) Macro name div cannot be redefined.
"renderedge.c", line 47.9: 1506-358 (I) "div" is defined on line 634 of /usr/include/stdlib.h.
"renderedge.c", line 59.51: 1506-068 (S) Operation between types "struct div_t" and "long" is not allowed.
make: The error code from the last command is 1.
Stop.

And here is a simple patch to fix the problem:

*** xc/programs/Xserver/render/renderedge.c.orig Fri Aug 6 18:42:10 2004
--- xc/programs/Xserver/render/renderedge.c Mon Aug 16 10:42:56 2004
***************
*** 44,49 ****
--- 44,52 ----
      return (i | f);
  }

+ #ifdef div
+ #undef div
+ #endif
  #define div(a,b) ((a) >= 0 ? (a) / (b) : -((-(a) + (b) - 1) / (b)))

/* Redefining to _div to avoid conflict with stdlib.h on AIX.

Patch checked in. Closing.
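The macro in the patch implements division that rounds toward negative infinity (floor division) rather than C's round-toward-zero truncation. Its arithmetic can be transcribed and checked in Python (function name c_div is mine):

```python
# Transcription of: ((a) >= 0 ? (a)/(b) : -((-(a) + (b) - 1)/(b)))
# For the operands used here, Python's // matches C's integer division.
def c_div(a, b):
    if a >= 0:
        return a // b              # non-negative: truncation equals floor
    return -((-a + b - 1) // b)    # negative: round toward negative infinity

print(c_div(7, 2))    # 3
print(c_div(-7, 2))   # -4 (floor; plain C truncation would give -3)
```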
https://bugs.freedesktop.org/show_bug.cgi?id=1103
CC-MAIN-2021-04
refinedweb
235
61.73
XML::Compile::Cache - Cache compiled XML translators

XML::Compile::Cache extends XML::Compile::Schema, which in turn extends XML::Compile.

Extends "DESCRIPTION" in XML::Compile::Schema. Extends "METHODS" in XML::Compile::Schema. Extends "Constructors" in XML::Compile::Schema.

When true, you may call the reader or writer with types which were not registered with declare(). In that case, the reader or writer may also get options passed for the compiler, as long as they are consistent over each use of the type.

Options added to both READERs and WRITERs. Options which are passed with declare() and opts_readers or opts_writers will overrule these. See addCompileOptions().

[0.993] Take all the prefixes defined in the $node, an XML::LibXML::Element. This is not recursive: only those defined at the top $node.

Lookup a prefix definition. This returns a HASH with namespace info. Lookup the preferred prefix for the $uri.

Return the prefixes table. The $params are deprecated since [0.995], see addPrefixes().

[0.99] You may provide global compile options with new(opts_rw), opts_readers and opts_writers, but also later using this method.

Inherited, see "Compilers" in XML::Compile::Schema. Inherited, see "Compilers" in XML::Compile. Inherited, see "Compilers" in XML::Compile.

example:

    my $schema = XML::Compile::Cache->new(\@xsd,
        prefixes => [ gml => $GML_NAMESPACE ] );
    my $data   = $schema->reader('gml:members')->($xml);

    my $getmem = $schema->reader('gml:members');
    my $data   = $getmem->($xml);

Inherited, see "Compilers" in XML::Compile::Schema.

This module is part of the XML-Compile-Cache distribution version 1.02.
http://search.cpan.org/~markov/XML-Compile-Cache-1.02/lib/XML/Compile/Cache.pod
CC-MAIN-2014-23
refinedweb
244
52.56
UnboundLocalError when accessing location.is_authorised() inside function

I can't reproduce this. Have you tried restarting (i.e. force-quitting) Pythonista? This sort of thing can happen when using threads or button actions that use scripts that have been imported, when you run one script, then another, so that globals and user modules get cleared... You may want to post a complete example where this occurs.

If I'm understanding the error message correctly, this doesn't have anything to do with missing globals. If you try to access a global that doesn't exist, you get a NameError. An UnboundLocalError means that you're trying to access a local variable that is currently unassigned.

Short explanation about Python locals: whether a variable in a function is local or global is determined statically at "compile time". If a variable is assigned to in the function, it is always local, even before the assignment. For example, the following code will always produce an UnboundLocalError:

var = "global"

def fun():
    print(var)  # raises UnboundLocalError("local variable 'var' referenced before assignment")
    var = "local"  # var is assigned to in the function, which makes it local - the global variable var is hidden

fun()

My guess is that you're accidentally using location as a variable name later on in the function. Something like this:

import location

def get_direction():
    if location.is_authorized():
        location = location.get_location()
        # ... do stuff with the location ...
    else:
        print("Location access not allowed!")

Because you're assigning to location, every use of location in the function looks up the local variable location - when that happens before it is assigned, you get an UnboundLocalError like the one you posted. The fix for this is to choose a different name for the local variable, so it doesn't conflict with any globals that you want to use.
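A self-contained demonstration of both the failure and the fix (function names broken and fixed are mine):

```python
var = "global"

def broken():
    value = var      # UnboundLocalError: the assignment below makes var local
    var = "local"
    return value

def fixed():
    value = var      # no assignment to var in this function: the global is visible
    return value

try:
    broken()
    outcome = "no error"
except UnboundLocalError:
    outcome = "UnboundLocalError"

print(outcome)       # UnboundLocalError
print(fixed())       # global
```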
https://forum.omz-software.com/topic/4385/unboundlocalerror-when-accessing-location-is_authorised-inside-function
CC-MAIN-2019-13
refinedweb
298
55.54
Devel::TraceCalls - Track calls to subs, classes and object instances

    ## From the command line
    perl -d:TraceCalls=Subs,foo,bar script.pl

    ## Quick & dirty via use
    use Devel::TraceCalls { Package => "Foo" };

    ## Procedural
    use Devel::TraceCalls;
    trace_calls qw( foo bar Foo::bar );

    ## Explicitly named subs
    trace_calls {
        Subs => [qw( foo bar Foo::bar )],
        ...options...
    };

    trace_calls {
        Package => "Foo",    ## All subs in this package
        ...options...
    };

    trace_calls {            ## Just these subs
        Package => "Foo",    ## Optional
        Subs    => [qw( foo bar )],
        ...options...
    };

    trace_calls $object;     ## Just track this instance

    trace_calls {
        Objects => [ $obj1, $obj2 ],  ## Just track these instances
        ...options...
    };

    ... time passes, sub calls happen ...

    my @calls = $t1->calls;  ## retrieve what happened

    ## Object oriented
    my $t = Devel::TraceCalls->new( ...parameters... );
    undef $t;                ## disable tracing

    ## Emitting additional messages:
    use Devel::TraceCalls qw( emit_trace_message );
    emit_trace_message( "ouch!" );

ALPHA CODE ALERT. This module may change before "official" release.

Devel::TraceCalls allows subroutine calls to be tracked on a per-subroutine, per-package, per-class, or per object instance basis. This can be quite useful when trying to figure out how some poor thing is being misused in a program you don't fully understand.

Devel::TraceCalls works on subroutines and classes by installing wrapper subroutines, and on objects by temporarily reblessing the objects in to specialized subclasses with "shim" methods. Such objects are reblessed back when the tracker is DESTROYed.

The default action is to log the calls to STDERR. Passing in a PreCall or PostCall option disables this default behavior; you can reenable it by manually setting LogTo => \*STDERR.

There are 4 ways to specify what to trace.

    trace_calls "foo", "bar";  ## trace to STDOUT.

    trace_calls {
        Subs => [ "foo", "bar" ],
        ...options...
    };

The first form enables tracking with all Capture options enabled (other than CaptureSelf, which has no effect when capturing plain subs). The second allows you to control the options.

    trace_calls {
        Package => "My::Module",
        ...options...
    };

    # Multiple package names
    trace_calls {
        Package => [ "My::Module", "Another::Module" ],
        ...options...
    };

    trace_calls {
        Package => "My::Module",
        Subs    => [ "foo", "bar" ],
        ...options...
    };

This allows you to provide a package prefix for subroutine names to be tracked. If no "Subs" option is provided, all subroutines in the package will be tracked. This does not examine @ISA like the Class and Objects (covered next) techniques do.

    trace_calls {
        Class => "My::Class",
        ...options...
    };

    trace_calls {
        Class => "My::Class",
        Subs  => [ "foo", "bar" ],
        ...options...
    };

This allows tracking of method calls (or things that look like method calls) for a class and its base classes. The $self ($_[0]) will not be captured in Args (see "Data Capture Format"), but may be captured in Self if CaptureSelf is enabled.

Devel::TraceCalls can't differentiate between $obj->foo( ... ) and foo( $obj, ... ), which can lead to extra calls being tracked if the latter form is used. The good news is that this means that idioms like:

    $meth = $obj->can( "foo" );
    $meth->( $obj, ... ) if $meth;

are captured.

If a Subs parameter is provided, only the named methods will be tracked. Otherwise all subs in the class and in all parent classes are tracked.

    trace_calls $obj1, $obj2;

    trace_calls {
        Objects => [ $obj1, $obj2 ],
        ...options...
    };

    trace_calls {
        Objects => [ $obj1, $obj2 ],
        Subs    => [ "foo", "bar" ],
        ...options...
    };

This allows tracking of method calls (or things that look like method calls) for specific instances. The $self ($_[0]) will not be captured in Args, but may be captured in Self if CaptureSelf is enabled. The first form ( trace_calls $obj, ... ) enables all capture options, including CaptureSelf.
    use constant _tracing => defined $Devel::TraceCalls::VERSION;

    BEGIN {
        eval "use Devel::TraceCalls qw( emit_trace_message )" if _tracing;
    }

    emit_trace_message( "hi!" ) if _tracing;

Using the constant _tracing allows expressions like emit_trace_message(...) if _tracing; to be optimized away at compile time, resulting in little or no performance penalty.

There are several options that may be passed in the HASH ref style parameters in addition to the Package, Subs, Objects and Class settings covered above.

    LogTo => \*FOO,
    LogTo => \@array,
    LogTo => undef,

Setting this to a filehandle causes tracing messages to be emitted to that filehandle. This is set to STDERR by default if no PreCall or PostCall intercepts are given. It may be set to undef to suppress tracing if you need to. Setting this to an ARRAY reference allows call data to be captured, see below for more details.

This is not supported yet, the API will be changing. But it allows you some small control over how the parameters list gets traced when LogTo points to a filehandle.

Setting this causes the call stack to be logged.

    PreCall => \&sub_to_call_before_calling_the_target,

A reference to a subroutine to call before calling the target sub. This will be passed a reference to the data captured before the call and a reference to the options passed in when defining the trace point (this does not contain the Package, Subs, Objects and Class settings). The parameters are:

    ( $trace_point, $captured_data, $params )

    PostCall => \&sub_to_call_after_calling_the_target,

A reference to a subroutine to call after calling the target sub. This will be passed a reference to the data captured before and after the call and a reference to the options passed in when defining the trace point (this does not contain the Package, Subs, Objects and Class settings). The parameters are:

    ( $trace_point, $captured_data, $params )

TODO

    Wrapper => \&sub_to_delegate_the_target_call_to,

A reference to a subroutine that will be called instead of calling the target sub. The parameters are:

    ( $code_ref, $trace_point, $captured_data, $params )

These options affect the data captured in the Calls array (see "The Calls ARRAY") and passed to the PreCall and PostCall handlers. Options may be added to the hash refs passed to trace_calls. Here are the options and their default values (all defaults chosen to minimize overhead):

    CaptureStack       => 0,
    CaptureCallTimes   => 0,
    CaptureReturnTimes => 0,
    CaptureSelf        => 0,
    CaptureArgs        => 0,
    CaptureResult      => 0,
    CaptureAll         => 0,  ## Shorthand for setting all of the others

If CaptureStack is true, the

    StackCaptureDepth => 1_000_000,

option controls the maximum number of stack frames that will be captured. Set this to "1" to capture just a single stack frame (equiv. to caller 0).

The LogTo option can be used to log all data to an array instead of to a filehandle by passing it an array reference:

    LogTo => \@data,

When passing in an array to capture call data (by using the Calls option), the elements will look like:

    {
        Name       => "SubName",
        Self       => "$obj",
        CallTime   => $seconds,  ## A float if Time::HiRes installed
        ReturnTime => $seconds,  ## A float if Time::HiRes installed
        TraceDepth => $count,    ## How deeply nested the trace is.
        WantArray  => $wantarray_result,
        Result     => [ "c" ],   ## Dumped with Data::Dumper, if need be
        Exception  => "$@",
        Args => [
            "foo",           ## A scalar was passed
            "{ a => 'b' }",  ## A HASH (dumped with Data::Dumper)
            ...
        ],
        Stack => [
            [ ... ],  ## Results of caller(0).
            ....      ## More frames if requested
        ],
    }

NOTE: Many of these fields are optional and off by default. See the "OPTIONS" section for details. Tracing (via the LogTo parameter) enables several Capture options regardless of the passed-in settings. Result is an array of 0 or more elements.
It will always be empty if the sub was called in void context ( WantArray => undef ). Note that Self, Args and Result are converted to strings to avoid keeping references that might prevent things from being destroyed in a timely manner. Data::Dumper is used for Args and Result, plain stringification is used for Self.

    Devel::TraceCalls::hide_package;
    Devel::TraceCalls::hide_package $pkg;

Tells Devel::TraceCalls to ignore stack frames with caller eq $pkg. The caller's package is used by default. This is useful when overloading require().

    Devel::TraceCalls::unhide_package;
    Devel::TraceCalls::unhide_package $pkg;

Undoes the last hide_package. These calls nest, so

    Devel::TraceCalls::hide_package;
    Devel::TraceCalls::hide_package;
    Devel::TraceCalls::unhide_package;

leaves the caller's package hidden.

Sometimes it's nice to see what you're missing. This can be helpful if you want to be sure that all the methods of a class are being logged for all instances, for instance. Set the environment variable SHOWSKIPPED to "yes" or call show_skipped_trace_points to enable or disable this. To enable:

    Devel::TraceCalls::set_show_skipped_trace_points;
    Devel::TraceCalls::set_show_skipped_trace_points( 1 );

To disable:

    Devel::TraceCalls::set_show_skipped_trace_points( 0 );

Calling the subroutine overrides the environment variable.

To show the call stack in the log at each trace point, set the environment variable SHOWSTACK to "yes" or call show_stack to enable or disable this. To enable:

    Devel::TraceCalls::set_show_stack;
    Devel::TraceCalls::set_show_stack( 1 );

To disable:

    Devel::TraceCalls::set_show_stack( 0 );

Calling the subroutine overrides the environment variable.

The object oriented interface provides more flexibility than the other APIs. A tracer will remove all of its trace points when it is deleted, and you can add (and someday, remove) trace points from a running tracer. Someday you'll also be able to enable and disable tracers.

    my $t = Devel::TraceCalls->new(
        ...any params you might pass to trace_calls...
    );

    $t->add_trace_points(
        ...any params you might pass to trace_calls...
    );

Add trace points to an existing tracer. Trace points for subs that already have trace points will be ignored (we can add an option to enable this; send me a patch or contact me if need be).

The main advantage of the Devel:: namespace is that the perl -d:Foo ... syntax is pretty handy. Other modules which use this might want to be in the Devel:: namespace. The only trick is avoiding calling Devel::TraceCalls' import() routine when you do this (unless you want to for some reason). To do this, you can either carefully avoid placing Devel::TraceCalls in your Devel::* module's @ISA hierarchy or make sure that your module's import() method is called instead of Devel::TraceCalls'. If you do this, you'll need to have a sub DB::DB defined, because Devel::TraceCalls' won't be. See the source and the Devel::TraceSAX module for details.

Massive. Devel::TraceCalls is a debugging aid and is designed to provide a lot of detail and flexibility. This comes at a price, namely overhead. One of the side effects of this overhead is that Devel::TraceCalls is useless as a profiling tool, since a function that calls a number of other functions, all of them being traced, will see all of the overhead of Devel::TraceCalls in its elapsed time. This could be worked around, but it is outside the scope of this module; see Devel::DProf for profiling needs.

require statements can result in classes being traced.

There are several minor limitations. Exports a subroutine by default. Do a use Devel::TraceCalls (); to suppress that.

If perl has optimized away constant functions, well, there is no call to trace.

Because a wrapper subroutine gets installed in place of the original subroutine, anything that has cached a reference (with code like $foo = \&foo or $foo = Bar->can( "foo" )) will bypass the tracing.
If a subroutine reference is taken while tracing is enabled and then used after tracing is disabled, it will refer to the wrapper subroutine that no longer has something to wrap. Devel::TraceCalls does not pass these through in that case, but it could. The import-based use Devel::TraceCalls { ... } feature relies on a CHECK subroutine, which is not present on older perls. See perlmod for details. Doesn't warn if you point it at an empty class, or if you pass no subs. This is because you might be passing in a possibly empty list. Check the return value's subs method to count up how many overrides occurred. See Devel::TraceMethods and Aspect::Trace for similar functionality. Merlyn also suggested using Class::Prototyped to implement the instance subclassing, but it seems too simple to do without incurring a prerequisite module. A miscellany of tricky modules like Sub::Versive, Hook::LexWrap, and Sub::Uplevel. Devel::DProf for profiling, Devel::TraceSAX for an example of a client module. Barrie Slaymaker <barries@slaysys.com> Maintainer from version 0.04 is Cosimo Streppone <cosimo@cpan.org> You may use this module under the terms of the Artistic License or the GPL, any version.
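The wrapper-subroutine idea the module describes is language-agnostic; here is a minimal Python analogue of installing a wrapper in place of the original sub and recording each call (the names trace and calls are mine, loosely mirroring the Calls array fields):

```python
# Minimal analogue of the wrapper technique: replace a function with a
# wrapper that records the call and result, then delegates to the original.
calls = []

def trace(fn):
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        calls.append({"Name": fn.__name__, "Args": args, "Result": result})
        return result
    return wrapper

def add(a, b):
    return a + b

add = trace(add)   # install the wrapper in place of the original
print(add(2, 3))   # 5
print(calls)       # [{'Name': 'add', 'Args': (2, 3), 'Result': 5}]
```

This also illustrates the caveat above: code that cached a reference to the original add before the wrapper was installed would bypass the tracing.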
http://search.cpan.org/~cosimo/Devel-TraceCalls-0.04/lib/Devel/TraceCalls.pm
CC-MAIN-2016-36
refinedweb
1,945
56.96
Talk:GroundTruth

Wishlist

Conditional text for labels (1)

I would like to write a rule which says: if this feature has a name, use it for the label; otherwise use some fixed text. This could be useful e.g. for churches, hospitals, chemists etc. The syntax could be something like this:

name?"Hospital"

which means: if name is set, use it; otherwise use the fixed string "Hospital". Ideally this rule can be used in conjunction with the +++ operator.

Conditional text for labels (2)

Instead of the previous wish you could satisfy this one instead: every rule should have two label columns. The first one should read "label" and the second one "default label" (or similar texts). The first column ("label") should behave exactly as it behaves now: it should add the feature label in the MP file. The second column ("default label") should be a static text (no variable substitution, just a plain static string) which gets added as "string1" in the TYP file: this string is used for default labels when the feature does not have a label. As an added bonus, support "string2", "string3" and "string4" to support up to 4 different languages (see the cGPSmapper manual, section 12.3.4). This wish should be simpler than the previous one, but could make old rules incompatible with a new version.

Arithmetic for selectors

Currently we can check whether a key exists and has some values. I would like to test for inequality with number constants, e.g.

population<100000 and population>=10000

This could be useful e.g. for cities, where one could write different rules with respect to population, or for roads with respect to maxspeed.

- I've been looking for a good (and quick) expression parsing .NET library ever since I started working on Kosmos. When I find one, I'll include this and the feature you describe below --Breki 18:02, 13 March 2009 (UTC)

Geographic-computable selectors

I would like to introduce a new namespace for selectors, similar to relation:, which could be named geo:.
New selectors within this namespace could be geo:length, geo:area and so on. These selectors should be computed at run-time from the feature itself using some algorithms: e.g. geo:length could compute the way's length. This could be useful combined with arithmetic to write a selector like this:

geo:area<=1000

One could write different rules for landuse, lakes, rivers, etc.

Improved handling of TYP files

Currently GroundTruth requires at least a color for each area rule. This is not strictly needed in TYP files: every section, except drawOrder, is optional; if a specific section is missing, the GPS unit / MapSource uses the default style (first sentence of section 8.2 of the cGPSmapper manual). Currently I am not able to use default Garmin area styles because GroundTruth requires at least a colour. I have not checked whether a similar problem currently applies to lines. I know that points work fine: GroundTruth (correctly) does not need a custom icon.

Downloading large map viewports

If the viewport is large, downloading is cancelled and the line "<error>Query limit of 1,000,000 elements reached</error>" ends the OSM file, which is corrupted and useless as a result. It's surely not a problem of GroundTruth but of the server, but maybe there can be a workaround? Unmapped 23:33, 8 November 2009 (UTC)

- This error is reported by OSMXAPI since it limits how much data can be downloaded in one go - the idea is to prevent overloading of OSMXAPI servers. The only option I see is to split the download into smaller map areas and then merge them back, but I will leave this to users themselves, for several reasons:

- This workaround would effectively circumvent a protection mechanism that I think should not be circumvented, since it is in our common interest to have responsive OSMXAPI servers.
- There are other ways to get OSM data, for example using planet extracts (you can cut them to the needed area using osmosis)
- There are other, better tools for splitting and merging OSM data (again osmosis)

I realize this is not very user-friendly, but then again GroundTruth in itself is not a very user-friendly tool - it requires a certain level of manual effort from users. --Breki 13:15, 11 November 2009 (UTC)

- OK, I see. I will try to find a way. And maybe the server admins will think about a dynamic limit, as the maximum zoom level I can download is 10, which is not really a large area. Unmapped 20:03, 11 November 2009 (UTC)

License?

Could you please describe the license of GroundTruth? The license/ directory contains the Apache License, the MIT License, the GNU LGPL, some "Shared Source License", and the GNU GPL. If I understand correctly they apply to various parts used by the application. Are they compatible? What is the license of GroundTruth as a whole (GNU GPL, I guess)? --Lutz.horn 13:33, 12 March 2009 (UTC)

- The licenses you mention are for 3rd-party software used by GroundTruth. BTW: SharpZipLib's GPL has an exception which effectively means GroundTruth is not bound to the GNU GPL under the terms of its usage. I haven't yet decided on the license (I know, it's silly), but it will probably be BSD/MIT. The next release of GroundTruth (which should be soon) will have the license and the source code available. --Breki 18:06, 12 March 2009 (UTC)

Publishing maps made by GroundTruth

Do I see this correctly, that it is a violation of copyright if one publishes a map created by GroundTruth (except if one buys one of the expensive cGPSmapper licenses)? cGPSmapper Free forbids commercial use of the maps produced with it; on the other hand, any map based on openstreetmap.org must be distributable also commercially!
Or would it be enough to offer the used map data for the map as an additional download? --Extremecarver 16:12, 11 November 2009 (UTC)

- The last time I checked cGPSmapper's license (for the free version), there was no restriction on publishing maps for non-commercial purposes. What do you mean by "any map based on openstreetmap.org must be distributable also commercially"? --Breki 22:21, 11 November 2009 (UTC)
- cGPSmapper Free has always been for non-commercial purposes only - "Download Free cGPSmapper version 0099a for Windows | Download exe only (version 0099a) - This is the basic version of the program which allows to create maps which are compatible with Garmin GPS receivers - no needs to pay for other versions if you want just to create a basic map! Now the Free version supports creation of basic marine maps and/or use of extended types - This version is solely for non commercial use only! Be sure to read the terms of use before downloading this software. By downloading you agree with the terms of use."
- "(3) use of the cGPSmapper Software for preparing maps for sale or license to a third party, or (4) use of the cGPSmapper Software to provide any service to an external organisation for which payment is received."
- Even if the map author does not intend to make money from the maps, as far as I understand you are then not allowed to pass on any maps, as openstreetmap data must be passed on under the same license (CC-BY-SA 2.0), and that license allows commercial use of the map data or products thereof as long as you leave the map data under the same license. Using cGPSmapper Free is therefore, to my understanding, incompatible with the openstreetmap CC-BY-SA 2.0 license (and also the new ODbL license).
- "Share Alike — If you alter, transform, or build upon this work, you may distribute the resulting work only under the same or similar license to this one." (CC-BY-SA 2.0)
- Therefore clearly using cGPSmapper is not possible if you ever plan to give OSM maps made by cGPSmapper to a third person. --Extremecarver 18:25, 12 November 2009 (UTC)
- Yes, you may be right, unfortunately. Looks like I'm going to have to pull down my maps of Slovenia and add some license warnings in the Wiki. --Breki 14:14, 13 November 2009 (UTC)
- Well, the right to "commercialise" OSM data is the reason why we all have to go to a lot of effort with the contour lines as separate downloads/layers... You could of course ask Stan if he makes an explicit exception for maps based on openstreetmap data. --Extremecarver 16:24, 13 November 2009 (UTC)
- I guess he would answer with "buy a commercial version of cGPSmapper" :). Maybe I'll ask him nevertheless. As for contours: I think they should be separate in any case: contour maps don't change and it's wasteful to keep regenerating them and force people to re-download them every time OSM data changes. --Breki 17:26, 14 November 2009 (UTC)
- Well, as I wrote in a comment to you quite some time ago, I don't understand why use cGPSmapper at all. mkgmap still has some flaws (e.g. address search not yet fully functional on the GPS, though it does work well in MapSource) and some problems with routing over tile borders, but many other features that even the commercial $2800 version of cGPSmapper is missing. So I cannot see the point in using cGPSmapper anyhow. There is nothing except ESRI support that the free version of cGPSmapper can do that mkgmap can't do (well, some TYP file rendering stuff, but ati.land or MapTk do it much better and have no such license restrictions - ati.land is even open source).
- I used mkgmap for some time but was frustrated with certain things.
I used cGPSmapper because I couldn't find any other way to generate IMG and TYP files from some kind of text input file (like .MP). Basically, as a user I want to specify a single rules file and let the tool do all the work for me, not combine 2-3 separate tools manually - it's just too fiddly. --Breki 19:40, 14 November 2009 (UTC)

- Well, you could add the code to generate TYP files into mkgmap - shouldn't be too difficult. However, I don't think MP does OSM justice. It is too limited. Actually, Garmin IMG can support much more than what you can consider clean MP. Also cGPSmapper doesn't correct many OSM data bugs (at least when I tried it), and I had cGPSmapper producing maps that crashed my GPS or locked MapSource. mkgmap now more or less corrects all those bugs. Also, I see no reason at all why you should not directly convert OSM to *.img. Converting to MP has no scripts nearly as capable as the mkgmap style file.
- Also GroundTruth is severely limited compared to what you can do with mkgmap style files (overlays, the continue function, conditional rules, ...).
- The reason for not converting directly to .img is very simple: it's a closed-source format and I don't have the time to do reverse engineering, since this is not my main project. As for GT limitations: since it shares code with Kosmos, any new features added to Kosmos will also be shared with GT. And anyway, I'm not sure it's that limited compared to mkgmap. Maybe you should take a look at GT's hiking rules and make a simple estimate of how much time an ordinary user would need to prepare similar maps using mkgmap + ati.land + ... I developed GT primarily for my own needs and I'm fairly satisfied with the produced maps. When I worked with mkgmap, it took me much more time to produce such maps (and BTW, back then it didn't have conditional rules) and the look and feel on the Garmin unit wasn't very good; it was difficult to distinguish map features.
--Breki 06:49, 15 November 2009 (UTC)

- I contacted Stan/Marcin of the cGPSmapper Mapcenter(2) about the openstreetmap-based maps that they are offering for download. They will change their license regarding Mapcenter 2 to allow dual licensing with CC-BY-SA 2.0 for openstreetmap-based maps. I have now also asked whether they will do the same for maps created with cGPSmapper Free. The contact has been very professional and forthcoming. I'll update this here if they change their terms for the cGPSmapper free/personal/shareware version too. --Extremecarver 15:34, 18 November 2009 (UTC)

Multiple relations

Hi there, GroundTruth is an excellent tool! I like the possibilities based on the rules very much. I have just one problem: if a way is a member of multiple relations, it is recognised only as a member of the first relation in the order of the OSM file. For example, a way is a member of an ncn-network bicycle route and of an rcn-network route. GroundTruth recognises it just as a member of an ncn route or as a member of an rcn route. If it is recognised as a member of an rcn route, it will not be recognised as a member of an ncn route too. If I want to highlight the ncn routes, this way will be missed.

- Hmmm, good question. I haven't really tested this scenario. I'll add it to the "todo" list for the next release, thanks for the report --Breki 17:58, 13 March 2009 (UTC)

Run under Linux

Because I didn't find anything on this question, I tried to run it on a Debian Squeeze, with GT version 1.7.702.14 (not tested with other versions). It ran without any problem. I just ran it with this command line:

mono GroundTruth.exe <command> <options>

I don't use GT to make maps (so this is not tested), but to generate OSM contours from the NASA SRTM data. I didn't know where to post this, so I am writing it on the discussion page. --Lolo 32 09:12, 15 January 2010 (UTC)

Problems

I use GroundTruth to generate hiking maps. It's a great tool, but unfortunately it stopped working for my country (Poland) this weekend.
I tried to compile the newest map and go map some more, but instead I got this nasty bug in the log:

2010-04-27 18:52:53,647 DEBUG [1] GroundTruth.Engine.ProgramRunner - Running program .\cpreview.exe ('"E:\Varia\GPS\GroundTruth-1.8.740.17\Drivin\Temp\455_pv.txt"') for the maximum duration of 60 minutes
2010-04-27 18:52:53,647 DEBUG [1] GroundTruth.Engine.ProgramRunner - Setting working directory to 'E:\Varia\GPS\GroundTruth-1.8.740.17\Drivin\Temp'
2010-04-27 18:52:54,101 DEBUG [5] GroundTruth.Engine.ProgramRunner - [exec] *******************************************************************************
2010-04-27 18:52:54,101 DEBUG [5] GroundTruth.Engine.ProgramRunner - [exec] cGPSMapper home page:
2010-04-27 18:52:54,101 DEBUG [5] GroundTruth.Engine.ProgramRunner - [exec] *******************************************************************************
2010-04-27 18:52:54,101 DEBUG [5] GroundTruth.Engine.ProgramRunner - [exec] Preview & index builder for cGPSmapper - 6.2
2010-04-27 18:52:54,101 DEBUG [5] GroundTruth.Engine.ProgramRunner - [exec]
2010-04-27 18:52:54,101 DEBUG [5] GroundTruth.Engine.ProgramRunner - [exec] Indexer compatible with cgpsmapper0098 version only
2010-04-27 18:52:54,147 DEBUG [5] GroundTruth.Engine.ProgramRunner - [exec] Codepage set to: 1250
2010-04-27 18:52:54,147 DEBUG [5] GroundTruth.Engine.ProgramRunner - [exec] TDBProcessing: 00235020.img
2010-04-27 18:52:54,398 DEBUG [5] GroundTruth.Engine.ProgramRunner - [exec] trying to read after file
2010-04-27 18:52:54,398 DEBUG [5] GroundTruth.Engine.ProgramRunner - [exec] Fatal error reading IMG file - please report to cgpsmapper@gmail.com
2010-04-27 18:52:54,429 DEBUG [3] GroundTruth.Engine.ProgramRunner - [exec]
2010-04-27 18:52:54,429 DEBUG [5] GroundTruth.Engine.ProgramRunner - [exec]
2010-04-27 18:52:54,491 ERROR [1] GroundTruth.ConsoleApp - ERROR: System.ArgumentException: Map making failed.
at GroundTruth.Engine.ProgramRunner.RunExternalProgram(String programExePath, String workingDirectory, String commandLineFormat, Object[] args)
at GroundTruth.Engine.MapMaker.RunCPreview(String commandLineFormat, Object[] args)
at GroundTruth.Engine.Tasks.GeneratePreviewAndTdbFilesTask.Execute(ITaskRunner taskRunner)
at GroundTruth.Engine.MapMaker.Run()
at GroundTruth.MapMakingCommand.Execute(IEnumerable`1 args)

This is confusing, because it looks like the IMG file is damaged, yet no error was shown earlier, I'm able to use the IMG files to create a mapset that works in MapSource with external tools, and I can open the IMG file with no problems at all. To add some spice, the Czech Republic gets compiled with no problem. The workflow for me looks like this: download data from Geofabrik, split with mkgmap's tile splitter, and compile with GroundTruth. The error first occurred on Sunday and has been present ever since. Jaszczur666 19:45, 28 April 2010 (UTC)

- Hi, this looks like a problem with cGPSmapper's cpreview.exe tool (it is the one which is reporting the "Fatal error reading IMG file" error). Have you switched to a new version of cGPSmapper lately, or has this started to happen just because of the new OSM file? --Breki 15:26, 30 April 2010 (UTC)

I thought for a while and realized that I had recently updated all binaries to the 0100b version. After playing with older versions and finally keeping the new cGPSmapper while downgrading cpreview to 096a, GroundTruth works again. I thought that the bug you spotted in March last year was fixed in version 0100b. It seems that it is not fixed, at least not fully. I used the binary version for new processors. My machine is a quad-core Phenom, so I think it's "new". Jaszczur666 11:58, 1 May 2010 (UTC)
http://wiki.openstreetmap.org/wiki/Talk:GroundTruth
In regard to: Re: DDD configure breaks on solaris 8, Melissa Terwilliger...:

>Here is the log as requested.
>
>configure:7944: checking for XOpenDisplay in -lX11
>configure:7966: c++ -o conftest -g -O2 -fpermissive -isystem
>/usr/openwin/include -L/usr/lib/sparcv9 -R/usr/lib/sparcv9
>conftest.C -lX11 -lSM -lICE -lsocket -lnsl -lSM -lICE -lsocket -lnsl
>1>&5
>/usr/openwin/lib/libdga.so.1: could not read symbols: Invalid operation
>collect2: ld returned 1 exit status
>configure: failed program was:
>#line 7952 "configure"
>#include "confdefs.h"
>/* Override any gcc2 internal prototype to avoid an error. */
>#ifdef __cplusplus
>extern "C"
>#endif
>/* We use char because int might match the return type of a gcc2
> builtin and then its argument prototype would still apply. */
>char XOpenDisplay();
>
>int main() {
>XOpenDisplay()
>; return 0; }
>
>
>I don't know why it's looking in /usr/openwin/lib/ because I am specifying
>--x-libraries=/usr/lib/sparcv9

Well, that's definitely not the way to do it. For any compiler that supports both ABIs (ILP32 on 32-bit Solaris and LP64 on 64-bit Solaris), there's a compiler option to select which ABI you want to generate code for. That option *alone* should tell your compiler (and hopefully it will be passed to your linker too, if the linker is used) which variant library directory to search.

So, don't try to force c++ to read from the v9 directories; tell it what ABI you want. I'm not sure what c++/gcc flag does this, though it is certainly documented in the gcc/c++ info documentation, and very likely begins with `-m'.

>I do have another copy of the file in /usr/openwin/lib/sparcv9/libdga.so.1
>Is there a way to tell it just to grab this file from the other folder? Is
>this the file that the --x-includes would refer to?

No. --x-includes specifies where to find your header files.
--x-libraries specifies where your X11 libraries are hiding, *but* for OSes that support multiple ABIs, the development toolchain will quietly modify the directories that are searched for libraries *if* you tell it you want to generate libraries and binaries for the non-default ABI.

Don't include the `v9' directories in any of the flags you set up in your environment (LDFLAGS most likely), and don't include the v9 directories in any of the options you pass to configure (e.g. don't say --x-libraries=/usr/openwin/lib/v9). *Do* find the option (in the gcc/c++ info docs) to make gcc/c++ compile for the LP64 ABI, and add that to your CFLAGS and CXXFLAGS environment variables before running configure. I'm guessing it's something like -march=64 or -march=v9 or -mabi=v9a, but those are just guesses. Once you know the options, you would do something like

CFLAGS='-mabi=whatever' CXXFLAGS='-mabi=whatever' ./configure <more stuff>

Tim
--
Tim Mooney address@hidden
Information Technology Services (701) 231-1076 (Voice)
Room 242-J6, IACC Building (701) 231-8541 (Fax)
North Dakota State University, Fargo, ND 58105-5164
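For what it's worth, on current GCC the ABI-selection flag being described here is -m64 (with -m32 for ILP32) in the SPARC options; a minimal sketch of the suggested invocation, with the configure arguments shown as placeholders:

```shell
# Sketch of the suggested approach: select the LP64 ABI with the
# compiler flag (-m64 in modern GCC) instead of pointing --x-libraries
# at the sparcv9 variant directories.
export CFLAGS='-m64'
export CXXFLAGS='-m64'
# ./configure --x-includes=/usr/openwin/include --x-libraries=/usr/openwin/lib
echo "$CFLAGS $CXXFLAGS"
```

With the flag in CFLAGS/CXXFLAGS, the toolchain itself picks the matching 64-bit library directories, which is exactly the behaviour Tim describes.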
http://lists.gnu.org/archive/html/ddd/2003-05/msg00044.html
Subject: Re: [boost] why the output is different?
From: Steven Watanabe (watanabesj_at_[hidden])
Date: 2009-04-09 11:55:56

AMDG

Dongfei Yin wrote:
> I wrote some code like this:
>
> #include <iostream>
>
> using namespace std;
>
> class Base1
> {
> public:
>     typedef Base1 base;
> };
>
> class Base2
> {
>     typedef Base2 base;
> public:
> };
>
> class A
>     : public Base1
>     , public Base2
> {
> public:
>     A()
>     {
>         base::f();
>     }
> };
>
> The output depends on the order of the base classes, which means the
> "typedef" in a base class is unreliable.
> The compiler gives no warning.
> I tried this code with both VS2005 and VS2008; the results are the same.

I don't think this should actually compile, since base is ambiguous.
(gcc doesn't compile it)

In Christ,
Steven Watanabe

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
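A self-contained variant (with hypothetical f() members added, since the quoted snippet omits them) shows both the ambiguity and the standard fix of qualifying the injected name:

```cpp
// Both bases inject a member typedef named 'base' into A, so unqualified
// lookup of 'base' inside A finds two candidates and is ill-formed,
// which is the ambiguity Steven describes. Qualifying the name through
// one base resolves it.
struct Base1 {
    typedef Base1 base;
    int f() { return 1; }
};

struct Base2 {
    typedef Base2 base;
    int f() { return 2; }
};

struct A : Base1, Base2 {
    int which;
    // A() : which(base::f()) {}      // error: 'base' is ambiguous
    A() : which(Base1::base::f()) {}  // OK: explicitly picks Base1's typedef
};
```

Conforming compilers reject the unqualified form at the point of use; MSVC of that era accepted it and silently picked one base, which is why the poster saw order-dependent output.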
https://lists.boost.org/Archives/boost/2009/04/150630.php
On Friday, Aug 8, 2003, at 11:36 Europe/London, Danny Angus wrote:

> Henri wrote:
>
>> The first thing I'm looking for from Geronimo is not a working J2EE
>> server, but a redistributable version of:
>>
>> javamail.jar
>
> [...]

One major gotcha, for example, is that the JavaMail API doesn't provide a JavaBean view of a message. That is, you can do ${message.subject} but not ${message.content}, since the getContent and setContent methods take different argument types (which buggers up Introspection):

Object getContent()
void setContent(Part part)

It also has some decidedly odd interfaces, such as

void setFrom()

as well as

void setFrom(Address address)

> [...]

I concur. JavaMail is fairly fundamentally broken (as described above) and what's needed is a better implementation. However, that doesn't preclude a set of JavaMail-esque wrappers to facilitate the interfaces for those that are expecting them, but the preference would be to provide a cleaner implementation.

Mailet support would be great, and one thing I don't think any other J2EE server supports is a mail-request-based protocol, to allow (for example) JMS to be sent over the top of SMTP, or to trigger on incoming mails. Anyone know why the Mailet API didn't subclass Servlet, BTW? I'd have thought that

public class Mailet extends GenericServlet {
    public void service(SMTPRequest request, SMTPResponse response)
        throws ServletException, IOException {
    }
}

would have been a really good way to handle incoming SMTP messages (viewing the Request/Response pair as a 'channel' rather than a one-off HTTP input/output).

Alex.
http://mail-archives.apache.org/mod_mbox/geronimo-dev/200308.mbox/%3C37A594FE-C99E-11D7-B192-0003934D3EA4@ioshq.com%3E
Great video! Can you explain more about page.resources? Where is that coming from, and how does it contain the user info for that method? Thanks

It's the resource for the current page that you're viewing inside Administrate. Just their naming convention, since the admin is generic for any models you may have.

Very cool! Looking forward to seeing how to build this from scratch! Thanks, Chris!

Excellent! Awesome tutorial!

Personally I found this plugin to be a much simpler alternative when adding this feature a few months ago, but both are very good nonetheless:...

There's a bunch of great options like this. Cool thing about Pretender is that it can work with anything, but the nice part of devise_masquerade is that it handles all the controllers and routes for you.

Great episode; it will be nice to have another episode with JWT and **without** Devise.

Authenticating as another user via an admin console is a really nice idea. It may save a lot of time for QA. You inspired me to try something like this in one of my projects. But there is a bit different situation:

1. I have a separate model for admin console users.
2. The admin console is running on a separate domain (this is the same Rails app with one common DB, though).

Apparently in this case I'll have to implement a custom solution instead of using the devise_masquerade gem. Here's my idea:

- An authenticated admin clicks a link in the admin console to sign in to the primary application as some specific User.
- The application creates an authentication token and saves it to the DB. Something like this tuple: `AuthRequest.create(secret_token, target_user_id, token_expiration_time)` (assuming we have an AuthRequest model to keep authentication requests).
- After the token is persisted, the admin console redirects the admin to the primary application, using a full URL with the other domain name. `secret_token` should be one of the parameters for this request.
- The primary application validates the secret token and authenticates the current user with the associated User record.
Like so (the code is simplified):

``` ruby
class AuthRequestController
  # Assuming this is an action that is supposed to handle admin console redirects
  def authenticate
    auth_request = AuthRequest.find(params[:id])
    auth_request.destroy! # Eliminating the authentication request record
  end
end
```

But maybe there are more straightforward ways to do this. I'll be grateful if you share your opinion.

That seems like a pretty decent solution cross-domain. Since you're sharing the database between the two, you can verify the token is only allowed for the user it was generated for, and your expiration can be something like 30 seconds so that the chance of that token leaking is very small. You can also scope that AuthRequestsController to only allow admin users to access it, so you get the same security around these tokens that devise masquerade does when it's only accessible from the admin. Sounds like that'll work pretty nicely.

You can actually use < pre >< code > tags for syntax highlighting :)...

Hello Chris, I submitted a transcript for this episode, please review it so I can earn a free month. Thank you :)

Hi Chris, firstly, thanks a lot for your videos! They're so valuable! My first question is related to using masquerade together with the friendly_id gem. I noticed masquerade_path(@user) is redirecting to /users/masquerade/chris, for instance. If I hardcode the user id - like in /users/masquerade/8 - it works. Any insight on how I can make it work properly?

Also, and even more important: if any user tries to open this URI, even if he's not an admin, he's able to access other users' accounts \o/ Won't that happen in your application as well?

Since this is an administrative thing, you could explicitly pass in the user id like this: masquerade_path(@user.id), which should always put the numerical ID in, or you could take a look at overriding the masquerade query to use the friendly.find that is required for friendly_id lookups.
I'd probably just pass in the ID explicitly since it's only accessible to admins. And you can make sure this is accessible only to admins by doing this if you're using CanCan, or by putting your own before_action in the overridden controller to authorize only admins:...

I don't think I mentioned authorizing that URL in the episode like I should have. That's an important piece!
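The token handshake sketched earlier in the thread could look something like this in plain Ruby (a hypothetical AuthRequest class, not the real gem or an ActiveRecord model; the attribute names are assumptions):

```ruby
require 'securerandom'

# Hypothetical sketch of the cross-domain handshake described above:
# the admin app persists a short-lived, single-use token, redirects with
# it, and the primary app looks it up, checks expiry, and destroys it.
class AuthRequest
  attr_reader :secret_token, :target_user_id, :expires_at

  def initialize(target_user_id, ttl_seconds: 30)
    @secret_token   = SecureRandom.urlsafe_base64(32) # unguessable token
    @target_user_id = target_user_id
    @expires_at     = Time.now + ttl_seconds          # keep the window small
  end

  # The primary application accepts the token only while it is fresh.
  def usable?(now = Time.now)
    now < @expires_at
  end
end
```

On the receiving side you would look the record up by token, check usable?, sign the target user in, and destroy the record so the token cannot be replayed.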
https://gorails.com/forum/0-common-features-devise-masquerade-as-another-user
I want to use a .py file like a config file. Using the {...} notation I can define my settings as a dictionary, but a plain dict() does not preserve the order in which the entries were written. My question: is it possible to override the {...} notation so that I get an OrderedDict() instead of a dict()? I was hoping that rebinding the name (dict = OrderedDict) would do it, so that

dict = OrderedDict

dictname = {
    'B key': 'value1',
    'A key': 'value2',
    'C key': 'value3'
}

print dictname.items()

would reliably print

[('B key', 'value1'), ('A key', 'value2'), ('C key', 'value3')]

but the dictionary literal never calls the name dict, so the rebinding has no effect.

To literally get what you are asking for, you have to fiddle with the syntax tree of your file. I don't think it is advisable to do so, but I couldn't resist the temptation to try. So here we go.

First, we create a module with a function my_execfile() that works like the built-in execfile(), except that all occurrences of dictionary displays, e.g. {3: 4, "a": 2}, are replaced by explicit calls to the dict() constructor, e.g. dict([(3, 4), ('a', 2)]). (Of course we could directly replace them by calls to collections.OrderedDict(), but we don't want to be too intrusive.) Here's the code:

import ast

class DictDisplayTransformer(ast.NodeTransformer):
    def visit_Dict(self, node):
        self.generic_visit(node)
        list_node = ast.List(
            [ast.copy_location(ast.Tuple(list(x), ast.Load()), x[0])
             for x in zip(node.keys, node.values)],
            ast.Load())
        name_node = ast.Name("dict", ast.Load())
        new_node = ast.Call(ast.copy_location(name_node, node),
                            [ast.copy_location(list_node, node)],
                            [], None, None)
        return ast.copy_location(new_node, node)

def my_execfile(filename, globals=None, locals=None):
    if globals is None:
        globals = {}
    if locals is None:
        locals = globals
    node = ast.parse(open(filename).read())
    transformed = DictDisplayTransformer().visit(node)
    exec compile(transformed, filename, "exec") in globals, locals

With this modification in place, we can modify the behaviour of dictionary displays by overwriting dict.
Here is an example:

# test.py
from collections import OrderedDict

print {3: 4, "a": 2}

dict = OrderedDict

print {3: 4, "a": 2}

Now we can run this file using my_execfile("test.py"), yielding the output

{'a': 2, 3: 4}
OrderedDict([(3, 4), ('a', 2)])

Note that for simplicity, the above code doesn't touch dictionary comprehensions, which should be transformed to generator expressions passed to the dict() constructor. You'd need to add a visit_DictComp() method to the DictDisplayTransformer class. Given the above example code, this should be straight-forward. Again, I don't recommend this kind of messing around with the language semantics.

Did you have a look at the ConfigParser module?
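The answer above targets Python 2 (execfile, print statements, the five-argument ast.Call). A hedged Python 3 sketch of the same transformation follows — note that in CPython 3.7+ plain dict displays already preserve insertion order, so this is mostly of historical interest (and it ignores ** unpacking inside displays):

```python
import ast
from collections import OrderedDict

class DictDisplayTransformer(ast.NodeTransformer):
    """Rewrite every dict display {k: v, ...} into dict([(k, v), ...])."""
    def visit_Dict(self, node):
        self.generic_visit(node)
        pairs = ast.List(
            [ast.Tuple([k, v], ast.Load())
             for k, v in zip(node.keys, node.values)],
            ast.Load())
        # The call goes through the *name* 'dict', so rebinding dict works.
        call = ast.Call(ast.Name("dict", ast.Load()), [pairs], [])
        return ast.copy_location(call, node)

def exec_with_dict_hook(source, globals_=None):
    """Like exec(), but dict displays go through whatever 'dict' is bound to."""
    if globals_ is None:
        globals_ = {}
    tree = DictDisplayTransformer().visit(ast.parse(source))
    ast.fix_missing_locations(tree)  # new nodes need line/column info
    exec(compile(tree, "<config>", "exec"), globals_)
    return globals_

ns = exec_with_dict_hook(
    "dict = OrderedDict\ncfg = {'b': 1, 'a': 2}",
    {"OrderedDict": OrderedDict})
```

After running, ns['cfg'] is an OrderedDict whose keys appear in source order, because the rewritten literal now calls the rebound name dict.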
https://codedump.io/share/FpLDVOEtqAdB/1/override-the--notation-so-i-get-an-ordereddict-instead-of-a-dict
Unable to get closing daily values for SPY

I'm attempting to capture closing prices for the day in an IB data feed, requested at bt.TimeFrame.Days and resampled to the same. As discussed in other threads, my goal is to capture the closing bar for the day and enter or exit ES based on evaluation of indicator values driven by the SPY data. I've set sessionend= as shown below, yet I am not seeing the bar in my strategy at the 16:00 closing time. Suggestions would be appreciated.

Setting up the data feed as follows:

# SPY Live data timeframe resampled to 1 Day
data1 = ibstore.getdata(dataname=args.live_spy,
                        backfill_from=bfdata1,
                        timeframe=bt.TimeFrame.Days,
                        compression=1,
                        sessionend=dt.time(16, 0))
cerebro.resampledata(data1, name="SPY-daily",
                     timeframe=bt.TimeFrame.Days, compression=1)

Using the following code in the Strategy's next() to capture the bar:

# We only care about ticks on the Daily SPY
if not len(self.data_spy) > self.len_data_spy:
    return
else:
    import pdb; pdb.set_trace()
    self.len_data_spy = len(self.data_spy)

At the breakpoint above, I can see the following data:

(Pdb) self.data_spy.sessionend
0.625
(Pdb) self.data_spy.DateTime
6
(Pdb) self.data_spy.buflen()
4219
(Pdb) self.data_spy.contractdetails.m_tradingHours
'20170111:0400-2000;20170112:0400-2000'
(Pdb) self.data_spy.contractdetails.m_timeZoneId
'EST'
(Pdb) self.data_spy.contractdetails.m_liquidHours
'20170111:0930-1600;20170112:0930-1600'

This seems bound to fail:

if not len(self.data_spy) > self.len_data_spy:
    return

When the len of your data is >= 1, the not turns that to False (consider it 0 for the comparison) and it will never be larger than something which is already >= 1.

While that logic might not be very intuitive to look at, it does accomplish the goal, since I would want to return if it is False. I think one possible bug there is that I could miss ticks if I set the self.len_data_spy counter equal to len(self.data_spy). I've since changed this to be a += 1 counter, but I think the issue may still remain.
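For reference, a standalone check of how that condition actually parses: in Python, not has lower precedence than the comparison operators, so the expression is equivalent to not (len(...) > counter):

```python
# 'not' binds more loosely than '>', so 'not len(data) > seen'
# means 'not (len(data) > seen)': return when no NEW bar has arrived.
data = [1, 2, 3]  # stand-in for a data feed with len() == 3

for seen in (0, 2, 3):
    implicit = not len(data) > seen
    explicit = not (len(data) > seen)
    assert implicit == explicit  # identical for every counter value

# With seen == 3 (no new bar) the guard fires; with seen == 2 it does not.
assert (not len(data) > 3) is True
assert (not len(data) > 2) is False
```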
Will see today when we hit the close.

It seems it is going to be always False and never execute the return.

Example. Assume the initialization self.len_data_spy = 0 (not shown above):

- As soon as len(self.data_spy) > 0, then not len(self.data_spy) -> False
- And False > self.len_data_spy evaluates to False and you don't return
- self.len_data_spy is updated and for sure contains a number > 0
- And you repeat the cycle (not the initialization) with the same consequences

You are mixing Minutes (for ES) and Days (for the SPY) and, according to your narrative, making decisions based on the daily data while operating on the minute data. It is therefore assumed you have an indicator and/or lines operation on SPY, which means next will first be called when len(self.data_spy) > 0 evaluates to True, because the indicator/operation has increased the minimum period. (And if this doesn't hold true, then something is buggy in the platform.)

It may be that you have some other initialization value or the reasoning is incorrect, but it really seems like the logic won't actually do anything.

I have market data ticking through next() on both a 1-minute interval for ES and a daily interval for SPY. My goals are:

- Ignore ticks coming through every minute for ES
- Only see the tick for SPY at the close of RTH, which is 16:00 EST

Is there a way to see if the tick that is coming through next() is for a particular data source? That would allow me to return if the tick is for self.data_es. Otherwise, my only option here is to see if len(self.data_spy) is larger than my counter and, if not, return. So the following is False unless the tick is on self.data_spy:

if not len(self.data_spy) > self.len_data_spy:
    # could be written: if self.len_data_spy == len(self.data_spy)
    return  # ignore self.data_es tick
...do something...
self.len_data_spy += 1

The above logic seems to be OK, except that it lets through one ES tick at startup. I'll come back with more detail about what I am doing in the strategy if I fail to get this closing bar in the next :30 minutes.

Is there a way to see if the tick that is coming through next() is for a particular data source? That would allow me to return if the tick is for self.data_es

As explained, by checking if the len of a data feed has increased. When two or more data feeds are time-aligned, a single next call will let you evaluate a change in both data feeds at the same time.

Still not getting this closing tick on self.data_spy. I've included most of the Strategy code below to see if there is something I am doing wrong here. One thought is that the SPY data is backfilled from static data that has no time information beyond the date. Not sure if that could impact what the IB feed that is supplementing it is storing regarding time. Also, the data in the static backfill is daily data and is being augmented with daily-timeframe data from IB. Does that show the time for the 16:00 close? I guess the next debugging approach might be to break in the debugger at a specific time for any tick to see what we have. I am open to other ideas as to how to sort this out. I've snipped out the code that I do not think is relevant. Happy to provide that detail if it is needed.

class SampleStrategy(bt.Strategy):
    params = (
        ('live', False),
        ('maperiod', 200),
    )

    def log(self, txt, dt=None):
        ...

    def __init__(self):
        self.datastatus = False
        self.data_es = self.data0
        self.data_spy = self.data1

        # Add a MovingAverageSimple indicator based on SPY
        self.sma = btind.MovingAverageSimple(self.data_spy,
                                             period=self.p.maperiod)

    def start(self):
        self.len_data_spy = 0

    def notify_data(self, data, status, *args, **kwargs):
        if status == data.LIVE:
            self.datastatus = True

    def notify_store(self, msg, *args, **kwargs):
        ...

    def notify_order(self, order):
        ...

    def notify_trade(self, trade):
        ...

    def next(self):
        # We only care about ticks on the Daily SPY
        if len(self.data_spy) == self.len_data_spy:
            return
        elif self.len_data_spy == 0:
            self.len_data_spy = len(self.data_spy) - 1

        if self.order:
            return  # if an order is active, no new orders are allowed

        if self.p.live and not self.datastatus:
            return  # if running live and no live data, return

        if self.position:  # position is long or short
            if self.position.size < 0 and self.signal_exit_short:
                self.order = self.close(data=self.data_es)
                self.log('CLOSE: BUY TO COVER')
            elif self.position.size > 0 and self.signal_exit_long:
                self.order = self.close(data=self.data_es)
                self.log('CLOSE: SELL TO COVER')
            else:
                self.log('NO TRADE EXIT')

        if not self.position:  # position is flat
            if self.signal_entry_long:
                self.order = self.buy(data=self.data_es)
                self.log('OPEN: BUY LONG')
            elif self.signal_entry_short:
                self.order = self.sell(data=self.data_es)
                self.log('OPEN: SELL SHORT')
            else:
                self.log('NO TRADE ENTRY')

        self.len_data_spy += 1

- backtrader administrators

In another thread it was recommended to use replaydata, because it will continuously give you the current daily bar.
It should be something like this:

data0 = ibstore.getdata('ES')
cerebro.resampledata(data0, timeframe=bt.TimeFrame.Minutes, compression=1)

data1 = ibstore.getdata('SPY')
cerebro.replaydata(data1, timeframe=bt.TimeFrame.Days, compression=1)

In the strategy:

def next(self):
    if self.data0.datetime.time() >= datetime.time(16, 0):
        # session has come to the end
        if self.data1.close[0] == MAGICAL_NUMBER:
            # buying data0 which is ES, but check done on data1 which is SPY
            self.buy(data=self.data0)

Rationale:

- replaydata will give you every tick of the data (SPY in this case) but in a daily bar which is slowly being constructed
- Because ES keeps ticking at minute level, once it has reached (or gone over) the end of session of SPY you can put your buying logic in place

Note: the time in this line needs to be adjusted to the local time in which ES is (information available in m_contractDetails):

if self.data0.datetime.time() >= datetime.time(16, 0):  # session has come to the end

or, as an alternative, tell the code to give you the time in the EST (aka US/Eastern) timezone. For that timezone the end of the session is for sure 16:00:

import pytz
EST = pytz.timezone('US/Eastern')
...

def next(self):
    ...
    if self.data0.datetime.time(tz=EST) >= datetime.time(16, 0):
        # session has come to the end
        ...

Is it also necessary to first call .resampledata() on data1 in your example because it is an IB feed, or is it enough to use .replaydata() instead?

(Pdb) self.data_spy.sessionend
0.7291666666666667
(Pdb) self.data_spy.sessionstart
0.0
(Pdb) self.data_spy.datetime.time()
datetime.time(19, 0)
(Pdb) self.data_spy.datetime.date()
datetime.date(2017, 1, 11)

Looking at ibtest.py I think I have answered my question: the .replaydata() is in place of the .resampledata(). However, I am unable to get this to run. Continually erroring out when starting the system.
Remember that this is also the data source for which I am using backfill_from to backfill from local static data, since these indicators need several years of data. Not sure if that could be a factor.

File "backtrader/strategy.py", line 296, in _next
    super(Strategy, self)._next()
File "backtrader/lineiterator.py", line 236, in _next
    clock_len = self._clk_update()
File "backtrader/strategy.py", line 285, in _clk_update
    newdlens = [len(d) for d in self.datas]
File "backtrader/strategy.py", line 285, in <listcomp>
    newdlens = [len(d) for d in self.datas]
File "backtrader/lineseries.py", line 432, in __len__
    return len(self.lines)
File "backtrader/lineseries.py", line 199, in __len__
    return len(self.lines[0])
ValueError: __len__() should return >= 0

Just to add another data point here: in the debugger, when running with .resampledata(), self.data_spy.datetime.time(tz=EST) always reports dt.time(19, 0). I've added a break in the debugger today to trigger if it reports something other than dt.time(19, 0). Will start looking at what might be happening when running .replaydata().

Finding the following: I have 3 data feeds configured and available in self.datas. At the point in the code where this is failing, self.datas[0] has no size and the call to len fails.
The code:

def _clk_update(self):
    if self._oldsync:
        clk_len = super(Strategy, self)._clk_update()
        self.lines.datetime[0] = max(d.datetime[0]
                                     for d in self.datas if len(d))
        return clk_len

    import pdb; pdb.set_trace()
    newdlens = [len(d) for d in self.datas]
    if any(nl > l for l, nl in zip(self._dlens, newdlens)):
        self.forward()

    self.lines.datetime[0] = max(d.datetime[0]
                                 for d in self.datas if len(d))
    self._dlens = newdlens

    return len(self)

Debugger output:

> /home/inmate/.virtualenvs/backtrader3/lib/python3.4/site-packages/backtrader/strategy.py(286)_clk_update()
-> newdlens = [len(d) for d in self.datas]
(Pdb) self._oldsync
False
(Pdb) self.data_spy
<backtrader.feeds.ibdata.IBData object at 0x810dcfb70>
(Pdb) self.datas
[<backtrader.feeds.ibdata.IBData object at 0x810dcf3c8>, <backtrader.feeds.ibdata.IBData object at 0x810dcfb70>, <backtrader.feeds.ibdata.IBData object at 0x810dd6320>]
(Pdb) len(self.datas)
3
(Pdb) self.datas[0]._name
'ES-minutes'
(Pdb) self.datas[1]._name
'SPY-daily'
(Pdb) self.datas[2]._name
'ES-daily'
(Pdb) len(self.datas[0])
*** ValueError: __len__() should return >= 0
(Pdb) len(self.datas[1])
1
(Pdb) len(self.datas[2])
1

Code to set up the feed that is failing the size check:

# ES Futures Live data timeframe resampled to 1 Minute
data0 = ibstore.getdata(dataname=args.live_es, fromdate=fetchfrom,
                        timeframe=bt.TimeFrame.Minutes, compression=1)
cerebro.resampledata(data0, name="ES-minutes",
                     timeframe=bt.TimeFrame.Minutes, compression=1)

Removing fromdate= gets past that error. But then the next error... If I change to use only .replaydata() for all of these feeds, and set exactbars < 1, I can avoid the above crash. exactbars set to 1 causes a crash in linebuffer.py.

I now find that what I am getting from these replayed feeds, now using the data names I have assigned self.data0 and self.data1 to, is minute data. What am I missing?
Is it also necessary to first call .resampledata() on data1 in your example because it is an IB feed, or is it enough to use .replaydata() instead?

Either resampledata or replaydata. They do similar but different things. See the docs for Data Replay.

The platform tries not to be too intelligent. time(19, 0) for your assets (which seem to be in EST) is time(24, 0) (or time(0, 0)) in UTC during the winter time. sessionend will be used by the platform as a hint as to when intraday data has gone over the session, to put that extra intraday data in the next bar.

@RandyT There are only some insights as to what's actually running. For example: up until today you had 2 data feeds, suddenly there are 3. And which value fetchfrom actually has may play a role, since it seems to affect what happens when you have it in place and when you don't.

With regards to replaydata and the timeframe/compression you get:

- A replayed data feed will tick very often, but with the same length, until the boundary of the timeframe/compression pair is met. That means that for a single minute, it may tick 60 times (1 per second). The len(self.datax) value remains constant until you move to the next minute.
- You are seeing the construction of a 1-minute bar replayed. That's why the idea above was to use it in combination with the 1-minute resampled data for the ES, to make sure that you see the final values of the daily bar of the SPY.

Since you seem to be stretching the limits of the platform and no real data feeds run during the weekend, it will give time to prepare a sample and see if some of your reports can be duly reproduced.

I added a third data feed to give me some daily ES data to do position size calculations. I had been using the SPY for this, but ultimately I want to use ES. fromdate is specifying a 7-hour retrieval start time in an attempt to reduce the startup/backfill times. Calculated as shown below.
Seemed to work as expected with .resampledata() but immediately failed when changing to .replaydata() for these feeds.

fetchfrom = (dt.datetime.now() - timedelta(hours=7))

With some of the changes made today to avoid the crashes, I managed to get to a point where I could run and could print values for self.data_spy (data1) through the day based on timestamps of the ticks, but discovered that rather than the values building on the daily bar for self.data_spy, it instead was giving me minute data. I will attempt to put together a simpler version over the weekend that will demonstrate some of these issues. Thanks again for your help with this.

- backtrader administrators

Side note following from all of the above: sessionend is currently not used to find out the end of a daily bar. The rationale behind this:

- Many real markets keep on delivering ticks after the sessionend. Example: even if the official closing time of the Eurostoxx50 future is 22:00 CET, the reality is that it will not close until around 22:05 CET, because of the end-of-day auction which takes place. Some platforms deliver that tick later in historical data, integrated in the last tick at 22:00 CET, and some others deliver an extra single tick, usually 5 minutes later (the 5 minutes is a rule of thumb, because it does actually change).
- This is the same as when you consider the different out-of-RTH periods for products like ES.

A resampled daily bar could be returned earlier by considering the sessionend, but any extra ticks would have to be discarded (or put into the next bar). Of course, the end user could set the sessionend to a time of its choosing to balance when to return the bar and when to start discarding values. Replayed bars, on the other hand, are constantly returned, hence the recommendation to use them combined with a time check.

@randyt - please see this announcement with regards to synchronizing the resampling of daily bars with the end of the session.
This should avoid the need to use replaydata.

Great explanation in the announcement you made. I'll give this change a try. One point that I want to make sure is not lost is that on Friday, while looking at the data being returned by the replay to the Daily timeframe, I was seeing minute data being reported for OHLC in bars captured at specific times. I could see this because I was comparing the values to the charts seen in IB for minute data. It was not updating these values from the day, but instead was reporting them for the live data that had been replayed to the Daily value. This issue was also seen in the values that my indicators were reporting, which should have been calculated based on the Daily timeframe. Not sure if you looked at this in your work this weekend.

I am going to revert back to .resampledata() for this system and will give it a try tomorrow. Seems I should be able to do the following in next():

if self.data_spy.datetime.time(tz=EST) != dt.time(16, 0):
    return
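A side note on the timestamps that kept coming up above: a daily bar stamped at midnight UTC displays as 19:00 US/Eastern during winter (UTC-5), which matches the dt.time(19, 0) seen in the debugger sessions. The offset arithmetic can be verified without backtrader or IB at all; this sketch uses the standard library's zoneinfo instead of the pytz import shown earlier, purely so it runs on its own:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

est = ZoneInfo("US/Eastern")

# A daily bar stamped at midnight UTC on 2017-01-12 (winter, so UTC-5)...
bar_utc = datetime(2017, 1, 12, 0, 0, tzinfo=timezone.utc)

# ...shows up as 19:00 on the previous calendar day in US/Eastern,
# matching the (Pdb) output quoted in the thread.
bar_est = bar_utc.astimezone(est)
print(bar_est.date(), bar_est.time())  # 2017-01-11 19:00:00
```

This is why comparing against a fixed local wall-clock time (the tz=EST approach suggested by the administrators) is less fragile than comparing the raw bar timestamp.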
https://community.backtrader.com/topic/133/unable-to-get-closing-daily-values-for-spy
CC-MAIN-2017-30
refinedweb
2,885
59.5
package findCpu;

class whatIsCpu {
    int numberOfCores;
    int numberOfThreads;
    String name;
    double freq;
    String cache;
    int uniqueID;

    public whatIsCpu(String name, int numberOfCores, int numberOfThreads,
                     double freq, String cache, int uniqueID) {
        this.name = name;
        this.numberOfCores = numberOfCores;
        this.numberOfThreads = numberOfThreads;
        this.freq = freq;
        this.cache = cache;
        this.uniqueID = uniqueID;

        System.out.println("Info:\nCores: " + numberOfCores
                + "\nThreads: " + numberOfThreads
                + "\nFrequency: " + freq
                + "\nCache: " + cache);

        if (uniqueID == 91826) {
            System.out.println(" ");
            System.out.println("CPU DETECTED!");
            System.out.println("CPU = 'i5 6600k'!");
            System.out.println(" ");
        }
        if (uniqueID == 471235) {
            System.out.println(" ");
            System.out.println("CPU DETECTED!");
            System.out.println("CPU = 'Xeon E5-2697V4'!");
            System.out.println(" ");
        }
        if (uniqueID != 471235 || uniqueID != 91826) {
            System.out.println("CPU not found. Are you sure you specified correctly?");
            System.out.println(" ");
        }
    }
}

public class FindingCpu {
    public static void main(String[] args) {
        new whatIsCpu("i5 6600k", 4, 4, 3.50, "6 MB SmartCache", 91826);
        new whatIsCpu("Xeon E5-2697V4", 18, 36, 3.60, "45 MB SmartCache", 471235);
    }
}

When I run it, it outputs the 'CPU DETECTED' bit and also says 'CPU not found', even though I told it to only output 'CPU not found' if the uniqueID is not (!=) 471235 or 91826, the unique IDs of the only CPUs I put in there. There's obviously a flaw in my code but I can't find where, and it'd be great if I could know for future reference and ease. I've moved the if (uniqueID != 471235, etc.) check to the top of the constructor, below the this.<whatever> assignments; I've also put it inside one of the other if blocks' brackets, and all that does is make it say it once instead of twice. Anyone help? Thanks
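The scrape ends before any replies, but the flaw is in the condition itself: uniqueID != 471235 || uniqueID != 91826 is true for every possible value, since no single number can equal both constants at once, so one side of the OR always holds. The intended check needs && (logical AND). A quick demonstration of the boolean logic (in Python rather than Java, purely to keep the snippet self-contained and runnable):

```python
KNOWN_IDS = (91826, 471235)

def not_found_buggy(unique_id):
    # Mirrors the Java condition: uniqueID != 471235 || uniqueID != 91826
    return unique_id != 471235 or unique_id != 91826

def not_found_fixed(unique_id):
    # De Morgan-corrected version: the ID matches neither known CPU
    return unique_id != 471235 and unique_id != 91826

# The buggy check fires even for the known IDs, which is exactly
# why "CPU not found" prints right after "CPU DETECTED!".
print([not_found_buggy(i) for i in KNOWN_IDS])  # [True, True]
print([not_found_fixed(i) for i in KNOWN_IDS])  # [False, False]
print(not_found_fixed(12345))                   # True
```

In the Java source, replacing || with && in the last if statement gives the behavior the poster describes wanting.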
http://www.dreamincode.net/forums/topic/409000-cant-find-a-problem-with-my-code-need-help/
Hi, I'm currently having a little trouble with my app. When my app is launched I have a splash screen and the Android status bar is black; after a few seconds it turns blue and loads MainActivity. I have found some interesting information on Stack Overflow to force the color, but without success. In SplashScreenActivity.cs I have this code:

[Activity(Label = "StatusBar.Android", Icon = "@drawable/icon", MainLauncher = true,
          ConfigurationChanges = ConfigChanges.ScreenSize | ConfigChanges.Orientation,
          Theme = "@style/SplashScreen")]
public class SplashActivity : Activity
{
    protected override void OnCreate(Bundle bundle)
    {
        base.OnCreate(bundle);
        StartActivity(typeof(MainActivity));
    }
}

I also have a styles.xml in my values-v21 folder with these elements:

<item name="colorPrimary">@color/colorPrimary</item>
<item name="colorPrimaryDark">@color/colorPrimaryDark</item>
<item name="colorAccent">@color/colorAccent</item>

This refers to my color.xml file (in the same folder) containing this:

<resources>
    <color name="colorPrimary">#0d4680</color>
    <color name="colorPrimaryDark">#0d4680</color>
    <color name="colorAccent">#0d4680</color>
</resources>

Can I do something to have my color set at the start of the splash screen, when I click on my app icon, and not after it has loaded? The third image is the first screen of the LinkedIn app; I don't know if it is native or not, but it seems to be possible. Thanks for your help.

Answers

A status bar in a splash screen? ... Post the code of "@style/WsmVisitorReception.SplashScreen".

Actually I keep the Android status bar on the first screen. The only lines in the style are the three indicated in my previous post.

Does someone have an idea?

Bumping this a little.

Bumping again; anyone have an idea?

Hi, thanks for your answer. I've tried your idea but the result doesn't change: the app launches with the status bar in black and turns to the chosen color after about 1 second. I have opened a ticket on the Xamarin.Forms GitHub.
Here is the link if someone is interested: I'm still open to suggestions if someone has an idea.
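A note not found in the thread itself: on Android 5.0+ (API 21), the status bar color shown while the splash activity is on screen comes from the theme that activity launches with. The colorPrimaryDark item shown above only takes effect once a theme containing it is applied, which would be whatever theme MainActivity uses, and that would explain the black-then-blue switch. A hedged sketch of the usual fix, reusing the poster's color name; the parent theme here is an assumption, since the thread never shows the full SplashScreen style:

```xml
<!-- values-v21/styles.xml: give the splash theme its own status bar color -->
<style name="SplashScreen" parent="Theme.AppCompat.NoActionBar">
    <item name="android:statusBarColor">@color/colorPrimaryDark</item>
    <!-- any existing windowBackground/splash drawable items stay as they are -->
</style>
```

With this in place, the bar should be blue from the first frame of the splash activity rather than only after MainActivity loads.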
https://forums.xamarin.com/discussion/comment/311596/
Use apps to keep customers excited about your Facebook page.

Simply having a presence on Facebook is a good start for your small business when you want to reach your customers online. Adding a custom app to your Facebook business page offers you an opportunity to keep customers engaged and interested in the content you have to offer. It also gives you a chance to reach beyond your immediate customers to their entire network. Facebook apps can range from very simple Web pages to full-fledged applications.

Design Your App

Before you rush onto Facebook to set up your app, decide exactly what you want it to do. Define three to five measurable goals for your app. For example, your objective might be to increase your number of followers by 100 per week or to increase online sales of your product by 10 percent. Once you've set some goals, decide how the app will achieve them. An app designed to increase your number of followers should have a strong social element that encourages visitors to share it with their friends; an app designed to increase sales could also have a social element, but its primary focus should be on the product itself.

Use the Developer App

Visit the Developer app on Facebook to create a blank app (link in Resources). A dialog box will open, prompting you to give your app a name and a namespace. The namespace is a unique identifier that will become part of your app's URL. For example, if your app's namespace is "my-great-app," then the app's URL would be. Check the option for Web hosting if you need that feature. Input the requested information and click "Continue."

Import to Facebook

Facebook provides specific functionality that you can integrate into your custom app via the Developer API, but it does not actually host apps. You'll develop and host your app code on your own Web server and import the app into Facebook via a URL. Enter this information on the Settings screen under the "App on Facebook" tab.
For example, if your app is located at, you would enter that URL into the "Canvas URL" text box. Facebook will automatically display whatever is at the URL when a user launches your app.

Request Permissions

By default, Facebook provides some basic information about users who launch Facebook apps. If your app needs more than this limited information, such as access to personal details or the ability to post to users' walls, each user will have to authorize it. Keep in mind that the more access your app requests, the fewer people will allow it. Request as little access as possible for your app to fulfill its goals.
https://smallbusiness.chron.com/create-facebook-applications-beginners-46671.html
How to install Python client libraries for remote access to a Machine Learning Server

Machine Learning Server includes open-source and Microsoft-specific Python packages for modeling, training, and scoring data for statistical and predictive analytics. For classic client-server configurations, where multiple clients connect to and use a remote Machine Learning Server, installing the same Python client libraries on a local workstation enables you to write and run script locally and then push execution to the remote server where data resides. This is referred to as a remote compute context, which is operative when you call Python functions from libraries that exist on both client and server environments.

A remote server can be either of the following server products: Client workstations can be Windows or Linux. Microsoft Python packages common to both client and server systems include the following:

This article describes how to install a Python interpreter (Anaconda) and Microsoft's Python packages locally on a client machine. Once installed, you can use all of the Python modules in Anaconda, Microsoft's packages, and any third-party packages that are Python 3.5 compliant. For a remote compute context, you can only call the Python functions from packages in the above list.

Check package versions

While not required, it's a good idea to cross-check package versions so that you can match versions on the server with those on the client. On a server with restricted access, you might need an administrator to get this information for you.

Install Python libraries on Windows

Download the installation shell script from (or use for the 9.2 release). The script installs Anaconda 4.2.0, which includes Python 3.5.2, along with all packages listed previously.

Open a PowerShell window with elevated administrator permissions (right-click, Run as administrator). Go to the folder in which you downloaded the installer and run the script.
Add the -InstallFolder command-line argument to specify a folder location for the libraries. For example:

cd {{download-directory}}
.\Install-PyForMLS.ps1 -InstallFolder "C:\path-to-python-for-mls"

Installation takes some time to complete. You can monitor progress in the PowerShell window. When setup is finished, you have a complete set of packages. For example, if you specified C:\mspythonlibs as the folder name, you would find the packages at C:\mspythonlibs\Lib\site-packages.

The installation script does not modify the PATH environment variable on your computer, so the new Python interpreter and modules you just installed are not automatically available to your tools. For help on linking the Python interpreter and libraries to tools, see Link Python tools and IDEs, replacing the MLS server paths with the path you defined on your workstation. For example, for a Python project in Visual Studio, your custom environment would specify C:\mspythonlibs, C:\mspythonlibs\python.exe and C:\mspythonlibs\pythonw.exe for Prefix path, Interpreter path, and Windowed interpreter, respectively.

Offline install

Download the .cab files used for offline installation and place them in your %TEMP% directory. You can type %TEMP% in a Run command to get the exact location, but it is usually a user directory such as C:\Users\<your-user-name>\AppData\Local\Temp. After copying the files, run the PowerShell script using the same syntax as an online install. The script knows to look in the temp directory for the files it needs.

Test local package installation

As a verification step, call functions from the revoscalepy package and from scikit, included in Anaconda. If you get a "module not found" error for any of the instructions below, verify you are loading the Python interpreter from the right location. If using Visual Studio, confirm that you selected the custom environment pointing the prefix and interpreter paths to the correct location.
Note

On Windows, depending on how you run the script, you might see this message: "Express Edition will continue to be enforced". Express edition is one of the free SQL Server editions. This message is telling you that the client libraries are licensed under the Express edition. Limits on this edition are the same as Standard: in-memory data sets and 2-core processing. Remote servers typically run higher editions not subjected to the same memory and processing limits. When you push the compute context to a remote server, you work under the full capabilities of that system.

Create some data to work with. This example loads the iris data set using scikit:

from sklearn import datasets
import pandas as pd

iris = datasets.load_iris()
df = pd.DataFrame(iris.data, columns=iris.feature_names)

Print out the dataset. You should see a 4-column table with measurements for sepal length, sepal width, petal length, and petal width:

print(df)

Load revoscalepy and calculate a statistical summary for data in one of the columns. Print the output to view mean, standard deviation, and other measures.
from revoscalepy import rx_summary

summary = rx_summary("petal length (cm)", df)
print(summary)

Results

Call: rx_summary(, by_group_out_file = None, summary_stats = ['Mean', 'StdDev', 'Min', 'Max', 'ValidObs', 'MissingObs'], by_term = True, pweights = None, fweights = None, row_selection = None, transforms = None, transform_objects = None, transform_function = None, transform_variables = None, transform_packages = None, overwrite = False, use_sparse_cube = False, remove_zero_counts = False, blocks_per_read = 1, rows_per_block = 100000, report_progress = None, verbose = 0, compute_context = <revoscalepy.computecontext.RxLocalSeq.RxLocalSeq object at 0x000002B7EBEBCDA0>)

Summary Statistics Results for: petal length (cm)
Number of valid observations: 150.0
Number of missing observations: 0.0

Name               Mean      StdDev   Min  Max  ValidObs  MissingObs
petal length (cm)  3.758667  1.76442  1.0  6.9  150.0     0.0

Next steps

Now that you have installed local client libraries and verified function calls, try the following walkthroughs to learn how to use the libraries locally and remotely when connected to resident data stores.

- Quickstart: Create a linear regression model in a local compute context
- How to use revoscalepy in a Spark compute context
- How to use revoscalepy in a SQL Server compute context

Remote access to a SQL Server is enabled by an administrator who has configured ports and protocols, enabled remote connections, and assigned user logins. Check with your administrator to get a valid connection string when using a remote compute context to SQL Server.
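One extra check the article does not include: the per-column figures that rx_summary printed can be reproduced with pandas alone, which helps separate "revoscalepy is misconfigured" from "the data is wrong" when the verification step fails. pandas' std() defaults to the sample standard deviation (ddof=1):

```python
from sklearn import datasets
import pandas as pd

iris = datasets.load_iris()
df = pd.DataFrame(iris.data, columns=iris.feature_names)

col = df["petal length (cm)"]
stats = {
    "Mean": col.mean(),
    "StdDev": col.std(),            # ddof=1, the sample standard deviation
    "Min": col.min(),
    "Max": col.max(),
    "ValidObs": int(col.notna().sum()),
    "MissingObs": int(col.isna().sum()),
}
# Values should agree with the rx_summary output above, up to rounding:
# Mean around 3.76, StdDev around 1.76, Min 1.0, Max 6.9, 150 valid rows.
print(stats)
```

If these numbers match the rx_summary table but revoscalepy raises errors, the problem lies in the library linkage rather than the data.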
https://docs.microsoft.com/ja-jp/machine-learning-server/install/python-libraries-interpreter
This is the mail archive of the glibc-bugs@sourceware.org mailing list for the glibc project.

--- Comment #12 from Alexandre Oliva <aoliva at sourceware dot org> ---

Given all the information I got so far, it looks like this will fix it:

diff --git a/elf/dl-tls.c b/elf/dl-tls.c
index 17567ad..60f4c1d 100644
--- a/elf/dl-tls.c
+++ b/elf/dl-tls.c
@@ -538,6 +538,10 @@ _dl_allocate_tls_init (void *result)
 # error "Either TLS_TCB_AT_TP or TLS_DTV_AT_TP must be defined"
 #endif

+      /* Set up the DTV entry.  The simplified __tls_get_addr that
+         some platforms use in static programs requires it.  */
+      dtv[map->l_tls_modid].pointer.val = dest;
+
       /* Copy the initialization image and clear the BSS part.  */
       memset (__mempcpy (dest, map->l_tls_initimage,
                          map->l_tls_initimage_size), '\0',

--
You are receiving this mail because:
You are on the CC list for the bug.
https://sourceware.org/legacy-ml/glibc-bugs/2016-09/msg00143.html
In the past few weeks, I've been playing around with some third-party Web APIs for Text Analytics, mainly for some side projects. This article is a short write-up of my experience with the Dandelion API.

Notice: I'm not affiliated with dandelion.eu and I'm not a paying customer, I'm simply using their basic (i.e. free) plan which is, at the moment, more than enough for my toy examples.

Quick Overview on the Dandelion API

The Dandelion API has a set of endpoints for different text analytics tasks. In particular, they offer semantic analysis features for:

- Entity Extraction
- Text Similarity
- Text Classification
- Language Detection
- Sentiment Analysis

As my attention was mainly on entity extraction and sentiment analysis, I'll focus this article on the two related endpoints. The basic (free) plan for Dandelion comes with a rate of 1,000 units/day (or approx. 30,000 units/month). Different endpoints have a different unit cost, i.e. the entity extraction and sentiment analysis cost 1 unit per request, while the text similarity costs 3 units per request. If you need to pass a URL or HTML instead of plain text, you'll need to add an extra unit. The API is optimised for short text, so if you're passing more than 4,000 characters, you'll be billed extra units accordingly.

Getting started

In order to test the Dandelion API, I've downloaded some tweets using the Twitter Stream API. You can have a look at a previous article to see how to get data from Twitter with Python. As NASA recently found evidence of water on Mars, that's one of the hot topics on social media at the moment, so let's have a look at a couple of tweets:

- So what you're saying is we just found water on Mars.... But we can't make an iPhone charger that won't break after three weeks?
- NASA found water on Mars while Chelsea fans are still struggling to find their team in the league table

(Not trying to be funny with Apple/Chelsea fans here; I was trying to collect data to compare iPhone vs Android and some London football teams, but the water-on-Mars topic got all the attention.)

The Dandelion API also provides a Python client, but the use of the API is so simple that we can directly use a library like requests to communicate with the endpoints. If it's not installed yet, you can simply use pip:

pip install requests

Entity Extraction

Assuming you've signed up for the service, you will have an application key and an application ID. You will need them to query the service. The docs also provide all the references for the available parameters, the URI to query and the response format. App ID and key are passed via the parameters $app_id and $app_key respectively (mind the initial $ symbol).

import requests
import json

DANDELION_APP_ID = 'YOUR-APP-ID'
DANDELION_APP_KEY = 'YOUR-APP-KEY'

ENTITY_URL = ''

def get_entities(text, confidence=0.1, lang='en'):
    payload = {
        '$app_id': DANDELION_APP_ID,
        '$app_key': DANDELION_APP_KEY,
        'text': text,
        'confidence': confidence,
        'lang': lang,
        'social.hashtag': True,
        'social.mention': True
    }
    response = requests.get(ENTITY_URL, params=payload)
    return response.json()

def print_entities(data):
    for annotation in data['annotations']:
        print("Entity found: %s" % annotation['spot'])

if __name__ == '__main__':
    query = "So what you're saying is we just found water on Mars.... But we can't make an iPhone charger that won't break after three weeks?"
    response = get_entities(query)
    print(json.dumps(response, indent=4))

This will produce the pretty-printed JSON response from the Dandelion API.
In particular, let's have a look at the annotations:

{
    "annotations": [
        {
            "label": "Water on Mars",
            "end": 51,
            "id": 21857752,
            "start": 38,
            "spot": "water on Mars",
            "uri": "",
            "title": "Water on Mars",
            "confidence": 0.8435
        },
        {
            "label": "IPhone",
            "end": 82,
            "id": 8841749,
            "start": 76,
            "spot": "iPhone",
            "uri": "",
            "title": "IPhone",
            "confidence": 0.799
        }
    ],
    /* more JSON output here */
}

Interesting to see that "water on Mars" is one of the entities (rather than just "water" and "Mars" as separate entities). Both entities are linked to their Wikipedia page, and both come with a high level of confidence. It would be even more interesting to see a different granularity for entity extraction, as in this case there is an explicit mention of one specific aspect of the iPhone (the battery charger).

The code snippet above also defines a print_entities() function, which you can use in place of the print statement if you want to print out only the entity references. Keep in mind that the attribute spot will contain the text as it appears in the original input. The other attributes of the output are pretty much self-explanatory, but you can check out the docs for further details.

If we run the same code using the Chelsea-related tweet above, we can find the following entities:

{
    "annotations": [
        {
            "uri": "",
            "title": "NASA",
            "spot": "NASA",
            "id": 18426568,
            "end": 4,
            "confidence": 0.8525,
            "start": 0,
            "label": "NASA"
        },
        {
            "uri": "",
            "title": "Water on Mars",
            "spot": "water on Mars",
            "id": 21857752,
            "end": 24,
            "confidence": 0.8844,
            "start": 11,
            "label": "Water on Mars"
        },
        {
            "uri": ".",
            "title": "Chelsea F.C.",
            "spot": "Chelsea",
            "id": 7473,
            "end": 38,
            "confidence": 0.8007,
            "start": 31,
            "label": "Chelsea"
        }
    ],
    /* more JSON output here */
}

Overall, it looks quite interesting.

Sentiment Analysis

Sentiment Analysis is not an easy task, especially when performed on tweets (very little context, informal language, sarcasm, etc.).
Let’s try to use the Sentiment Analysis API with the same tweets: import requests import json DANDELION_APP_ID = 'YOUR-APP-ID' DANDELION_APP_KEY = 'YOUR-APP-KEY' SENTIMENT_URL = '' def get_sentiment(text, lang='en'): payload = { '$app_id': DANDELION_APP_ID, '$app_key': DANDELION_APP_KEY, 'text': text, 'lang': lang } response = requests.get(SENTIMENT_URL, params=payload) return response.json() if __name__ == '__main__': query = "So what you're saying is we just found water on Mars.... But we can't make an iPhone charger that won't break after three weeks?" response = get_sentiment(query) print(json.dumps(response, indent=4)) This will print the following output: { "sentiment": { "score": -0.7, "type": "negative" }, /* more JSON output here */ } The “sentiment” attribute will give us a score (from -1, totally negative, to 1, totally positive), and a type, which is one between positive, negative and neutral. The main limitation here is not identifying explicitely the object of the sentiment. Even if we cross-reference the entities extracted in the previous paragraph, how can we programmatically link the negative sentiment with one of them? Is the negative sentiment related to finding water on Mars, or on the iPhone? As mentioned in the previous paragraph, there is also an explicit mention to the battery charger, which is not capture by the APIs and which is the target of the sentiment for this example. The Chelsea tweet above will also produce a negative score. After downloading some more data looking for some positive tweets, I found this: Nothing feels better than finishing a client job that you’re super happy with. Today is a good day. The output for the Sentiment Analysis API: { "sentiment": { "score": 0.7333333333333334, "type": "positive" }, /* more JSON output here */ } Well, this one was probably very explicit. Summary Using a third-party API can be as easy as writing a couple of lines in Python, or it can be a major pain. 
I think the short examples here showcase that the "easy" in the title is well motivated. It's worth noting that this article is not a proper review of the Dandelion API; it's more like a short diary entry of my experiments, so what I'm reporting here is not a rigorous evaluation. Anyway, the feeling is quite positive for the Entity Extraction API. I did some tests also using hashtags with some acronyms, and the API was able to correctly point me to the related entity. Occasionally there are some pieces of text labelled as entities that are completely out of scope. This happens mostly with some movie (or song, or album) titles appearing verbatim in the text, probably labelled because of the little context you have in Twitter's 140 characters.

On the Sentiment Analysis side, I think providing only one aggregated score for the whole text sometimes doesn't give the full picture. While it makes sense in some sentiment classification tasks (e.g. movie reviews, product reviews, etc.), we have seen more and more work on aspect-based sentiment analysis, which is what provides the right level of granularity to understand more deeply what the users are saying. As I mentioned already, this is anyway not trivial.

Overall, I had some fun playing with this API and I think the authors did a good job in keeping it simple to use.
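As a practical footnote (my own addition, not part of the original write-up): the entity and sentiment responses shown earlier can be collated into a single record per tweet. This helper operates on already-fetched JSON of the shapes printed above, so it needs no network access or API keys, and it deliberately does not attempt the aspect-level attribution flagged above as an open problem:

```python
def summarise(entity_response, sentiment_response):
    """Pair extracted entity spots with the document-level sentiment."""
    entities = [a['spot'] for a in entity_response.get('annotations', [])]
    sentiment = sentiment_response.get('sentiment', {})
    return {
        'entities': entities,
        'score': sentiment.get('score'),
        'type': sentiment.get('type'),
    }

# Using the responses shown earlier for the iPhone-charger tweet:
entity_resp = {"annotations": [{"spot": "water on Mars"}, {"spot": "iPhone"}]}
sentiment_resp = {"sentiment": {"score": -0.7, "type": "negative"}}
print(summarise(entity_resp, sentiment_resp))
# {'entities': ['water on Mars', 'iPhone'], 'score': -0.7, 'type': 'negative'}
```

In a real pipeline the two dicts would come from get_entities() and get_sentiment() defined above; the .get() defaults keep the helper from raising when an endpoint returns no annotations.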
https://marcobonzanini.com/2015/09/29/easy-text-analytics-with-the-dandelion-api-and-python/
5.4. Wrapping a C library in Python with ctypes

Wrapping a C library in Python allows us to leverage existing C code or to implement a critical part of the code in a fast language such as C.

It is relatively easy to use externally-compiled libraries with Python. The first possibility is to call a command-line executable with an os.system() command, but this method does not extend to compiled libraries. A more powerful method consists of using a native Python module called ctypes. This module allows us to call functions defined in a compiled library (written in C) from Python. The ctypes module takes care of the data type conversions between C and Python. In addition, the numpy.ctypeslib module provides facilities to use NumPy arrays wherever data buffers are used in the external library.

In this example, we will rewrite the code of the Mandelbrot fractal in C, compile it in a shared library, and call it from Python.

Getting ready

The code of this recipe is written for Unix systems and has been tested on Ubuntu. It can be adapted to other systems with minor changes. A C compiler is required. You will find all compiler-related instructions in this chapter's introduction.

How to do it...

First, we write and compile the Mandelbrot example in C. Then, we access it from Python using ctypes.

1. Let's write the code of the Mandelbrot fractal in C:

%%writefile mandelbrot.c
#include "stdio.h"
#include "stdlib.h"

void mandelbrot(int size, int iterations, int *col)
{
    // Variable declarations.
    int i, j, n, index;
    double cx, cy;
    double z0, z1, z0_tmp, z0_2, z1_2;
    // Loop within the grid.
    for (i = 0; i < size; i++)
    {
        cy = -1.5 + (double)i / size * 3;
        for (j = 0; j < size; j++)
        {
            // We initialize the loop of the system.
            cx = -2.0 + (double)j / size * 3;
            index = i * size + j;
            // Let's run the system.
            z0 = 0.0;
            z1 = 0.0;
            for (n = 0; n < iterations; n++)
            {
                z0_2 = z0 * z0;
                z1_2 = z1 * z1;
                if (z0_2 + z1_2 <= 100)
                {
                    // Update the system.
                    z0_tmp = z0_2 - z1_2 + cx;
                    z1 = 2 * z0 * z1 + cy;
                    z0 = z0_tmp;
                    col[index] = n;
                }
                else
                {
                    break;
                }
            }
        }
    }
}

2. Now, let's compile this C source file with gcc into a mandelbrot.so dynamic library:

!!gcc -shared -Wl,-soname,mandelbrot \
    -o mandelbrot.so \
    -fPIC mandelbrot.c

3. Let's access the library with ctypes:

import ctypes
lib = ctypes.CDLL('mandelbrot.so')
mandelbrot = lib.mandelbrot

4. NumPy and ctypes allow us to wrap the C function defined in the library:

from numpy.ctypeslib import ndpointer

# Define the types of the output and arguments of
# this function.
mandelbrot.restype = None
mandelbrot.argtypes = [ctypes.c_int,
                       ctypes.c_int,
                       ndpointer(ctypes.c_int),
                       ]

5. To use this function, we first need to initialize an empty array and pass it as an argument to the mandelbrot() wrapper function:

import numpy as np

# We initialize an empty array.
size = 400
iterations = 100
col = np.empty((size, size), dtype=np.int32)
# We execute the C function, which will update
# the array.
mandelbrot(size, iterations, col)

6. Let's show the result:

import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
fig, ax = plt.subplots(1, 1, figsize=(10, 10))
ax.imshow(np.log(col), cmap=plt.cm.hot)
ax.set_axis_off()

7. How fast is this function?

%timeit mandelbrot(size, iterations, col)
28.9 ms ± 73.1 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

The wrapped C version is slightly faster than the Numba version in the first recipe of this chapter.

How it works...

The mandelbrot() function accepts as arguments:

- The size of the col buffer (the col value is the last iteration where the corresponding point is within a disc around the origin)
- The number of iterations
- A pointer to the buffer of integers

The mandelbrot() C function does not return any value; rather, it updates the buffer that was passed by reference to the function (it is a pointer). To wrap this function in Python, we need to declare the types of the input arguments.
The ctypes module defines constants for the different data types. In addition, the numpy.ctypeslib.ndpointer() function lets us use a NumPy array wherever a pointer is expected in the C function. The data type given as an argument to ndpointer() needs to correspond to the NumPy data type of the array passed to the function.

Once the function has been correctly wrapped, it can be called as if it were a standard Python function. Here, the initially empty NumPy array is filled with the Mandelbrot fractal after the call to mandelbrot().

There's more...

An alternative to ctypes is cffi (), which may be a bit faster and more convenient to use. You can also refer to.

See also

- Accelerating pure Python code with Numba and just-in-time compilation
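The restype/argtypes declaration pattern from step 4 works with any shared library already present on the system, not just libraries we compile ourselves. Here is a minimal stand-alone sketch (not part of the recipe) applying the same technique to labs() from the C standard library; it assumes a Unix system, where ctypes.CDLL(None) exposes the symbols of the running process, including libc:

```python
import ctypes

# Load the symbols of the current process; on Linux and macOS this
# includes the C standard library (an assumption of this sketch).
libc = ctypes.CDLL(None)

# Declare the C signature, exactly as we did for mandelbrot():
#     long labs(long x);
libc.labs.restype = ctypes.c_long
libc.labs.argtypes = [ctypes.c_long]

print(libc.labs(-42))  # prints 42
```

Without the argtypes declaration, ctypes would still guess reasonable conversions for small integers, but declaring the signature explicitly catches mistakes early and is required for pointer arguments such as the ndpointer() used above.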
https://ipython-books.github.io/54-wrapping-a-c-library-in-python-with-ctypes/
Differences between Input and Output

Input is any information that a program requires in order to execute, and it can be provided to a program in many forms. Some programs use a visual component such as a dialog box to accept and return the character string typed by the user. Other programs, such as word processors, get some of their input from a file stored on the computer's hard disk drive or another device.

Output is any information that the program must present to the user. When we execute a program on our computer, the information we see on the screen is output.

To perform input/output operations in Java, the java.io package contains most of the classes needed. A stream can be defined as a sequence of data. There are two types of streams available:

- InputStream: used to read data from a source.
- OutputStream: used to write data to a destination.

In this article we will discuss the ways of performing input and output in Java, with both the console and files:

- Console Input
- Console Output
- File Input
- File Output

Console input

Console input is any input that is entered in the console window, as opposed to being typed into a dialog or other interactive window.

Note: The console window is launched automatically when we run a (non-GUI) Java program.

When the user types information in the console, a number of Java I/O classes take care of handling the input/output. The java.io and java.util packages contain most of the classes used for I/O purposes. The following two classes are the most important ones, widely used in Java programs:

- java.io.InputStream: stores information about the connection between an input device and the program.
- java.util.Scanner: used to read the input available from an InputStream object.
The Scanner class has a method called nextLine that returns a line of text as typed by the user. For console input, the Scanner class takes an instance of an InputStream object as an argument.

Steps to use the Scanner class for console input:

1. Use the System.in object to create a Scanner object.

Scanner scanner = new Scanner( System.in );

2. Display a prompt to the user to type some data.

System.out.print( "Type some data for the program: " );

3. Use the Scanner object to read a line of text from the user.

String input = scanner.nextLine();

4. Display the input received from the user.

System.out.println( "input = " + input );

To use the Scanner class in Java we must import java.util.Scanner.

Listing 1. Console input using class Scanner: a Java program that demonstrates console-based input and output.

import java.util.Scanner;

public class ScannerInputClass
{
    private static Scanner scanner = new Scanner( System.in );

    public static void main ( String [] args )
    {
        // Prompt the user to type some data
        System.out.print( "Type some data: " );

        // Read a line of text from the user.
        String input = scanner.nextLine();

        // Display the input back to the user.
        System.out.println( "input = " + input );
    } // end main method
} // end ScannerInputClass class

Output:

Type some data: Test Console
input = Test Console

Explanation of Listing 1: Here we use the Scanner class to take input from the user and display the output. The nextLine method of the Scanner class returns the input as a String object. Suppose the user types "111": nextLine will return a String object, not an integer. On receiving the String object, the user will have to convert it to an integer using parseInt, like so:

1. Get a String of characters that is in an integer format, e.g., "123".

String input = scanner.nextLine();

2. Use the Integer class to parse the string of characters into an integer.
int number = Integer.parseInt( input );

But the Scanner class has other ways to do the same thing. Instead of converting the String object into an integer using parseInt, it has another method, nextInt, that returns an integer directly. nextInt() reads the next available input as an int value.

int number = scanner.nextInt(); // from the console input example above

There are several other methods of the Scanner class that can be used to read double and float values too.

Console Output

What we see in the console can be termed console output. We have used System.out.print(...) and System.out.println(...) statements for displaying simple text messages to the user. The System.out object is an instance of the PrintStream class, which is a type of OutputStream. A stream object is used to store the information needed to connect a computer program to an input or output device. The PrintStream class adds functionality to output streams: it extends the OutputStream class and contains definitions for all of the versions of the print and println methods that we use to display information.

Console output in Java is very easy because the print and println methods work with any type of data. The java.lang.System class creates three different I/O streams automatically for us when our application begins execution. Each of these streams is public and static, so we can access them directly without having to create an instance of the System class:

- System.in: an InputStream object used for console input.
- System.out: a PrintStream object (a type of OutputStream) used to display information on the screen.
- System.err: also an instance of the PrintStream class, available for displaying information (typically error messages) on the computer screen.
For example, if the following variables are defined,

int x = 3;
double rate = 5.5;

they can all be printed using print or println as follows:

System.out.print( "x = " + x );
System.out.println( rate );

We can also print other types of data, including other objects, using the print and println methods.

File Input

As we have discussed, data can be read from a variety of different sources, including data files stored on devices such as hard disk drives. To read from external files, these files need to be opened and a Scanner attached to the file object. The process is actually very similar to console input.

Note: The main difference is that for file input the Scanner is created from a File object instead of an InputStream object.

As we know, the java.io package contains many classes that perform input/output in Java. The File class is needed to do input/output operations on files: java.io.File stores information about a file on a computer drive.

The basic thing that must be taken into consideration is exception handling. With file input, the file we want to read may not be present on the disk, so we must inform the compiler that we are calling a method that may cause a checked exception to occur. Creating a Scanner object from a File object may cause a FileNotFoundException.

Here's some code that reads a text file.

Listing 2.
File input using class File

import java.util.Scanner;
import java.io.*;

public class FileInputClass
{
    public static void main ( String [] args ) throws FileNotFoundException
    {
        Scanner scanner = new Scanner( System.in );

        System.out.print( "Enter the filename: " );
        String fileName = scanner.nextLine();

        File file = new File( fileName );

        if ( file.exists() )
        {
            Scanner inFile = new Scanner( file );
            int lineNum = 0;

            while ( inFile.hasNext() )
            {
                String line = inFile.nextLine(); // read the next line

                // Output the line read to the screen for the user
                System.out.println( ++lineNum + ": " + line );
            }

            // When we're done reading the file,
            // close the Scanner object attached to the file
            inFile.close();
        }
    }
}

Note:

- Don't forget to add import java.io.* (as well as java.util.* for the Scanner class).
- Close the File object when you're done reading data from the file.

File Output

Writing data to a file is similar to writing data to the screen. You open a file for writing and then print to that file any data that you would like to store there. You must remember to close the file or risk having some data not be written and saved to the file. We will use java.io.PrintStream for file output.

When you intend to write data to a file, you should consider what the appropriate action is if the file already exists. The exists method of the File class will return true if the file already exists.

Here's an example of file output:

Listing 3. File output using PrintStream

import java.io.*;

public class FileOutputClass
{
    public static void main ( String [] args ) throws FileNotFoundException
    {
        // Create a PrintStream attached to a file named "output.txt".
        // This will overwrite the file if it already exists
        PrintStream ps = new PrintStream( "output.txt" );

        // Buffer some data to write to the file (doesn't actually write until flush)
        ps.print( "Some test data that will be written when flush is called." );

        // Flush all buffered data to the file.
        ps.flush();

        // Buffer some more data.
        ps.println( "Another set of data" );

        // Close the file (by closing the PrintStream).
        // Also flushes any remaining buffered output.
        ps.close();
    }
}

Conclusion

Input and output handling in Java revolves around two major packages: java.io and java.util. Together they contain almost all the classes necessary to handle input/output operations in Java. The examples above demonstrate the basic techniques for performing input and output operations in Java.
http://mrbool.com/learn-about-java-input-output/30085
#include <itkPNGImageIO.h>

Inheritance diagram for itk::PNGImageIO:

Reimplemented from itk::ImageIOBase.

Definition at line 40 of file itkPNGImageIO.h.

Standard class typedefs. Definition at line 38 of file itkPNGImageIO.h. Definition at line 39 of file itkPNGImageIO.h.

Set/Get the level of compression for the output images. Range 0-9; 0 = none, 9 = maximum.

Determines the level of compression for written files. Range 0-9; 0 = none, 9 = maximum; default = 4. Definition at line 107 of file itkPNGImageIO.h.

Set whether compression should be used for writing; the value is false by default. Definition at line 103 of file itkPNGImageIO.h.
http://www.itk.org/Doxygen16/html/classitk_1_1PNGImageIO.html
eduardocorral + 33 comments

For Java 7/8, change the boilerplate code to this:

public static void main(String[] args) throws IOException {
    try (BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(System.in))) {
        int q = Integer.parseInt(bufferedReader.readLine().trim());
        List<int[]> queries = new ArrayList<>(q);
        Pattern p = Pattern.compile("^(\\d+)\\s+(\\d+)\\s*$");
        for (int i = 0; i < q; i++) {
            int[] query = new int[2];
            Matcher m = p.matcher(bufferedReader.readLine());
            if (m.matches()) {
                query[0] = Integer.parseInt(m.group(1));
                query[1] = Integer.parseInt(m.group(2));
                queries.add(query);
            }
        }
        List<Integer> ans = freqQuery(queries);
        try (BufferedWriter bufferedWriter = new BufferedWriter(new FileWriter(System.getenv("OUTPUT_PATH")))) {
            bufferedWriter.write(
                ans.stream()
                   .map(Object::toString)
                   .collect(joining("\n")) + "\n");
        }
    }
}

and note each query is now an int[2] instead of a List. The method to implement is then

static List<Integer> freqQuery(List<int[]> queries)

Reduced the run time in inputs 9 and 13 from ~1.7s (i7-4710MQ CPU @ 2.50GHz) to ~0.8s, and all cases pass now.

fabriziocucci + 2 comments

I can confirm that changing the boilerplate code to this made all tests that were failing due to timeout pass. Thanks!

Wanna_Be_The_Guy + 0 comments

Can also confirm that changing my (Java 8) boilerplate code to this made all tests that were failing due to timeout (test cases 9-15) pass. Thank you so much, and shame on HackerRank for providing bad boilerplate code!

abhijeetgupta23 + 7 comments

I did this but tests 10, 12 and 13 still fail due to timeout.
Please help, my solution:

static List<Integer> freqQuery(List<int[]> queries) {
    HashMap<Integer, Integer> hash = new HashMap<Integer, Integer>();
    List<Integer> retList = new ArrayList<Integer>();
    for (int i = 0; i < queries.size(); i++) {
        int op = queries.get(i)[0];
        int num = queries.get(i)[1];
        if (op == 1) {
            if (!hash.containsKey(num))
                hash.put(num, 1);
            else
                hash.put(num, hash.get(num) + 1);
        } else if (op == 2) {
            if (hash.containsKey(num)) {
                if (hash.get(num) <= 1)
                    hash.remove(num);
                else
                    hash.put(num, hash.get(num) - 1);
            }
        } else if (op == 3) {
            if (hash.containsValue(num)) {
                retList.add(1);
            } else
                retList.add(0);
        }
    }
    return retList;
}

toria_gibbs + 2 comments

Same! The new boilerplate fixed 9, 11 but 10, 12, 13 are still failing. I even tried taking OP's suggestion a step further and changed the boilerplate to int[q][2] (instead of just List<int[2]>) and it still times out. Bummer.

ferlenbusch + 1 comment

I kept adjusting my code, and got my solution to O(n), where n = number of queries, and it still didn't pass test case 10 due to timeout. Cut out a bunch of fat from the boilerplate, and finally got everything to pass with my solution. Here's what I ended up with for my boilerplate:

[] query = bufferedReader.readLine().split(" ");
    queries[i][0] = Integer.parseInt(query[0]);
    queries[i][1] = Integer.parseInt(query[1]);
}

List<Integer> ans = freqQuery(queries);

try (BufferedWriter bufferedWriter = new BufferedWriter(
        new FileWriter(System.getenv("OUTPUT_PATH")))) {
    bufferedWriter.write(ans.stream().map(Object::toString)
            .collect(joining("\n")) + "\n");
}
}
}

The solution method does need to take in int[][] instead of List<List<Integer>>.

avinash_mv + 1 comment

Thanks, finally this worked for me. I used two maps, one to track values and another to track frequencies. I don't know why HackerRank adds boilerplate code if it's so bad.

tuliobraga + 4 comments

HashMap.containsValue is O(n) time complexity - more expensive than needed.
In my solution, I use both a counter HashMap, to know how many times a specific number appears, and a frequency HashMap, to count how many numbers appear a specific number of times. Then, for op==3 I just check if the frequency is greater than zero. All tests passed after changing the boilerplate.

saurabh32gupta + 0 comments

He is talking about map.containsValue, NOT containsKey. So you basically have to iterate through the map to know whether the desired value is present in the map or not. And that's O(n), where n is the number of keys present in the map.

gasper_senk + 3 comments

This solution works:

static List<Integer> freqQuery(List<int[]> queries) {
    final Map<Integer, Integer> valueToFreq = new HashMap<>();
    final Map<Integer, Integer> freqToOccurrence = new HashMap<>();
    final List<Integer> frequencies = new ArrayList<>();
    int key;
    int value;
    Integer oldFreq;
    Integer newFreq;
    Integer oldOccurrence;
    Integer newOccurrence;
    for (int[] query : queries) {
        key = query[0];
        value = query[1];
        if (key == 3) {
            if (value == 0) {
                frequencies.add(1);
            }
            frequencies.add(freqToOccurrence.get(value) == null ? 0 : 1);
        } else {
            oldFreq = valueToFreq.get(value);
            oldFreq = oldFreq == null ? 0 : oldFreq;
            oldOccurrence = freqToOccurrence.get(oldFreq);
            oldOccurrence = oldOccurrence == null ? 0 : oldOccurrence;
            if (key == 1) {
                newFreq = oldFreq + 1;
            } else {
                newFreq = oldFreq - 1;
            }
            newOccurrence = freqToOccurrence.get(newFreq);
            newOccurrence = newOccurrence == null ? 0 : newOccurrence;
            if (newFreq < 1) {
                valueToFreq.remove(value);
            } else {
                valueToFreq.put(value, newFreq);
            }
            if ((oldOccurrence - 1) < 1) {
                freqToOccurrence.remove(oldFreq);
            } else {
                freqToOccurrence.put(oldFreq, oldOccurrence - 1);
            }
            freqToOccurrence.put(newFreq, newOccurrence + 1);
        }
    }
    return frequencies;
}

ciandiarmuidgar1 + 0 comments

The following should be removed:

if (value == 0) {
    frequencies.add(1);
}

In the constraints section it mentions that "z" is >= 1.
Also, if 0 were possible you would be adding a 1 and then a 0 from your next statement.

aburduk + 0 comments

This isn't an issue with the boilerplate. Your code fails for cases where there is a large number of op 3 queries that don't find a matching value. containsValue is probably O(n) since the value list is not sorted. Imagine your hash had >100k unique entries, all with a frequency of one. Each op 3 query would have to do 100k tests if the value was two. This would be true even if every call to op 3 was testing for the same exact value each time! That is going to be really, really slow - especially if you do all op 3 queries after all op 1 and op 2 queries. The hash table is not changing and we're passing the same values over and over, and yet this code does an O(n) lookup every time. If you want it to be fast you need a reverse lookup from frequency value to list of operands.

saurabh32gupta + 0 comments

You are iterating the map to find whether the required frequency is present or not. Try maintaining a frequency-to-values map, so you won't have to iterate the map.
See the code below:

static List<Integer> freqQuery(List<int[]> queries) {
    List<Integer> result = new ArrayList<>();
    Map<Integer, Integer> operationsMap = new HashMap<>();
    Map<Integer, Set<Integer>> frequencyMap = new HashMap<>();
    for (int[] query : queries) {
        switch (query[0]) {
            case 1:
                Integer m = operationsMap.get(query[1]);
                if (m == null) {
                    operationsMap.put(query[1], 1);
                    Set<Integer> sampleMap = frequencyMap.get(1);
                    if (sampleMap != null)
                        sampleMap.add(query[1]);
                    else
                        frequencyMap.put(1, new HashSet<Integer>() {
                            {
                                add(query[1]);
                            }
                        });
                } else {
                    operationsMap.put(query[1], ++m);
                    frequencyMap.get(m - 1).remove(query[1]);
                    Set<Integer> sampleMap = frequencyMap.get(m);
                    if (sampleMap != null)
                        sampleMap.add(query[1]);
                    else
                        frequencyMap.put(m, new HashSet<Integer>() {
                            {
                                add(query[1]);
                            }
                        });
                }
                break;
            case 2:
                Integer n = operationsMap.get(query[1]);
                if (n == null)
                    break;
                else if (n == 1) {
                    operationsMap.remove(query[1]);
                    frequencyMap.get(1).remove(query[1]);
                } else {
                    operationsMap.put(query[1], --n);
                    frequencyMap.get(n + 1).remove(query[1]);
                    Set<Integer> sampleMap = frequencyMap.get(n);
                    if (sampleMap != null)
                        sampleMap.add(query[1]);
                    else
                        frequencyMap.put(n, new HashSet<Integer>() {
                            {
                                add(query[1]);
                            }
                        });
                }
                break;
            case 3:
                result.add((frequencyMap.get(query[1]) == null
                        || frequencyMap.get(query[1]).isEmpty()) ? 0 : 1);
                break;
        }
    }
    return result;
}

mimibar + 2 comments

Thanks! Now tests 9-13 pass without "Terminated due to timeout" status with Java 8.

frazmohammed + 1 comment

#include <iostream>
#include <fstream>
#include <vector>
#include <map>
#include <string>
#include <cstdlib>

using namespace std;

string ltrim(const string &);
string rtrim(const string &);
vector<string> split(const string &);

// Complete the freqQuery function below.
vector<int> freqQuery(vector<vector<int>> queries) {
    map<int, int> m;
    map<int, int> m1;
    vector<int> v;
    for (int i = 0; i < queries.size(); i++) {
        if (queries[i][0] == 1) {
            m[queries[i][1]]++;
            m1[m[queries[i][1]]]++;
        } else if (queries[i][0] == 2) {
            auto f = m.find(queries[i][1]);
            if (f != m.end()) {
                m[queries[i][1]]--;
                m1[m[queries[i][1]]]--;
            }
        } else if (queries[i][0] == 3) {
            int flag = 0;
            auto f = m1.find(queries[i][1]);
            if (f != m1.end()) {
                if (f->second > 0)
                    flag = 1;
            }
            if (flag == 0)
                v.push_back(0);
            else
                v.push_back(1);
        }
    }
    return v;
}

int main() {
    ofstream fout(getenv("OUTPUT_PATH"));

    string q_temp;
    getline(cin, q_temp);
    int q = stoi(ltrim(rtrim(q_temp)));

    vector<vector<int>> queries(q);
    for (int i = 0; i < q; i++) {
        queries[i].resize(2);

        string queries_row_temp_temp;
        getline(cin, queries_row_temp_temp);
        vector<string> queries_row_temp = split(rtrim(queries_row_temp_temp));

        for (int j = 0; j < 2; j++) {
            int queries_row_item = stoi(queries_row_temp[j]);
            queries[i][j] = queries_row_item;
        }
    }

    vector<int> ans = freqQuery(queries);

    for (int i = 0; i < ans.size(); i++) {
        fout << ans[i];
        if (i != ans.size() - 1) {
            fout << "\n";
        }
    }
    fout << "\n";

    return 0;
}

vector<string> split(const string &str) {
    vector<string> tokens;
    string::size_type start = 0;
    string::size_type end = 0;
    while ((end = str.find(" ", start)) != string::npos) {
        tokens.push_back(str.substr(start, end - start));
        start = end + 1;
    }
    tokens.push_back(str.substr(start));
    return tokens;
}

sgupta4_be16 + 0 comments

I am getting a compilation error in main after changing my boilerplate code:

Solution.java:63: error: ')' expected
        .map(Object::toString)
        ^
Solution.java:63: error: ';' expected
        .map(Object::toString)
        ^
Solution.java:63: error: not a statement
        .map(Object::toString)
        ^
Solution.java:63: error: ';' expected
        .map(Object::toString)
        ^
Solution.java:65: error: not a statement
        + "\n");
        ^
Solution.java:65: error: ';' expected
        + "\n");
        ^

psdiesel03 + 0 comments

Good catch! I was hoping something like this was the case.
I was pretty confident my implementation was O(1) for each of the query operations, but was still getting timeout errors for cases 9-13. After fixing the boilerplate code, all cases are passing. Thanks!

coder_ravikapoor + 0 comments

Very true. Saved further debugging time! I was wondering why it was still failing even though my algorithm is quite efficient.

krayvanova + 0 comments

I suppose the result depends on server state. I had to rewrite the output code and add an inverse map to pass all tests.

polo_dez11 + 0 comments

Thank you. I didn't know why I had timeout problems, but your change solved my problem. TY.

daum_sebastian + 0 comments

Same for me. My solution is O(n); this new boilerplate made me pass the tests immediately.

nish2575 + 0 comments

I was surprised that when I changed the boilerplate to use an ArrayList presized to q, with elements that were array lists of size 2, I still couldn't beat the timeout. If anybody gets this to work with ArrayList or List as the entries, I would be curious to see what they did in the boilerplate. Maybe the above would work with a compiled pattern but an array list of array lists... Maybe it's the time lost simply to allocate a wrapped Integer instead of a primitive.

WittyNotes + 0 comments

Arrrrgh... changed the function to take an int[], AND had it return an int[] (by counting the number of queries on input, and then passing that into the function) and am still timing out on 12 and 13. Stripped down my code so it now uses only a single non-primitive structure (one Map to keep track of the frequency of each number), and it's still not passing. I'll have to come back to this one.

rishabhagarwal19 + 0 comments

My solution was showing WA for test cases 9-13. Changing the boilerplate code fixed it. Thanks a lot @eduardocorral

alex_fotland + 0 comments

Thank you! I was just coming to the discussion to complain about how tests are timing out on my O(n) solution and it's not obvious how to optimize it to pass.
ocarlsen + 0 comments

lynndang138 + 0 comments

I also confirm that changing the boilerplate code to this made all tests that were failing due to timeout pass. Thank you so much!

sriyashree906 + 1 comment

[deleted]

saurabh32gupta + 0 comments

Test cases 10-13 were failing, but after changing the boilerplate code to the above code they passed. Thanks, brother!

feelermorses + 0 comments

Bless you. Your code executes 2-3 times faster on my machine and I finally passed the tests.

yuntunghsieh + 0 comments

There are two more things that have to be done before passing all test cases:

- Use String.split() instead of a pattern match.
- Use a 2D array instead of List<int[]> to store all queries.

[] qs = bufferedReader.readLine().split(" ");
    queries[i][0] = Integer.parseInt(qs[0]);
    queries[i][1] = Integer.parseInt(qs[1]);
}

List<Integer> ans = freqQuery(queries);

try (BufferedWriter bufferedWriter = new BufferedWriter(new FileWriter(System.getenv("OUTPUT_PATH")))) {
    bufferedWriter.write(
        ans.stream()
           .map(Object::toString)
           .collect(joining("\n")) + "\n");
}
}
}

naomi_nguyen7 + 17 comments

My Python solution. Let me know if there's a shorter way:

def freqQuery(queries):
    freq = Counter()
    cnt = Counter()
    arr = []
    for q in queries:
        if q[0] == 1:
            cnt[freq[q[1]]] -= 1
            freq[q[1]] += 1
            cnt[freq[q[1]]] += 1
        elif q[0] == 2:
            if freq[q[1]] > 0:
                cnt[freq[q[1]]] -= 1
                freq[q[1]] -= 1
                cnt[freq[q[1]]] += 1
        else:
            if cnt[q[1]] > 0:
                arr.append(1)
            else:
                arr.append(0)
    return arr

tieros + 1 comment

Nice way to use Counter, I didn't know you could use it like that. But I tested it with 1,000,000 queries against defaultdict and defaultdict is around 15-20% faster. I guess both of them use a lot of memory, but as a dictionaries-and-hashmaps problem that's not so important here.

naomi_nguyen7 + 1 comment

Interesting. Do you know why defaultdict is faster?
From what I understand, defaultdict is very similar to Counter except that if we look up a missing key, the defaultdict will include that key in its key set while Counter does not, which means defaultdict costs more operations and thus more time? (I'm a total newbie to computer science, so excuse my naive question.)

tieros + 1 comment

Actually I'm not sure why, but I have tested dictionary types a few times in competitions to save a few seconds, and defaultdict has always been fastest. The weird thing is that your use of Counter matches defaultdict, but according to python.org it matches dict. Here is one of the test links I found; some others say the speed difference is bigger.

max_waser + 1 comment

Very clever! I had trouble looking up whether a particular frequency existed without timing out; this is (in my opinion) a very elegant solution!

naomi_nguyen7 + 1 comment

Thank you! My first time sharing a solution (as well as earning compliments from it).

gramhagen + 1 comment

Nice work! I ended up with something almost identical, though I used defaultdicts with sets:

def freqQuery(queries):
    results = []
    lookup = dict()
    freqs = defaultdict(set)
    for command, value in queries:
        freq = lookup.get(value, 0)
        if command == 1:
            lookup[value] = freq + 1
            freqs[freq].discard(value)
            freqs[freq + 1].add(value)
        elif command == 2:
            lookup[value] = max(0, freq - 1)
            freqs[freq].discard(value)
            freqs[freq - 1].add(value)
        elif command == 3:
            results.append(1 if freqs[value] else 0)
    return results

kevinbrewer04 + 1 comment

Great job! I'm gonna remember the double hash trick for the future!
like: #!/bin/python import os from collections import Counter def freqQuery(queries): freq = Counter() cnt = Counter() result = [] for action, value in queries: if action == 1: cnt[ freq[value] ] -= 1 freq[value] += 1 cnt[ freq[value] ] += 1 elif action == 2: if freq[value] > 0: cnt[ freq[value] ] -= 1 freq[value] -= 1 cnt[ freq[value] ] += 1 else: result.append(1 if cnt[value] > 0 else 0) return result if __name__ == '__main__': queries = [] for _ in xrange(int(raw_input().strip())): queries.append(map(int, raw_input().rstrip().split())) ans = freqQuery(queries) with open(os.environ['OUTPUT_PATH'], 'w') as fptr: fptr.write('\n'.join(map(str, ans))) fptr.write('\n') tarun29061990 + 1 comment Actually my logic is same, test cases 7,8 and 11 are failing hendraeffendi96 + 0 comments my wrong answer also failing in 7, 8 and 11 i got correct answer after i found that i forgot to skip delete query if the number total is zero JedJulliard + 0 comments Hi Naomi, great solution! i understand the method is called double hashing. I tried solving the problem with single hash but faced timeout issues in several cases. could you explain why double hashing resolves the issue? sheetansh + 1 comment gettings tle in 10,12,13. any suggestions? def freqQuery(queries): dic = dict() res =[] for q, n in queries: if q == 1: try: dic[n] += 1 except Exception: dic[n] = 1 elif q == 2: try: dic[n] -= 1 if (dic[n] == 0): dic.pop(n) except Exception: continue; else: if n in dic.values(): res.append(1) else: res.append(0) return res kosta_tsaf + 0 comments "if n in dic.values()" can start to take too much time if you are doing it too often. Try to store frequencies somewhere instead of searching through your array of values everytime you get a q==3. srinivas135 + 1 comment can you explain the question why you need to use two dictionaries naomi_nguyen7 + 0 comments Hi, one dict is to keep track of the frequency of each elements (key: element, value: frequency). 
The other keeps track of the frequency of the frequency (key: frequency, value: number of elements that have that frequency). Hope it helps!

hubrando + 1 comment

My question is why is this better than using a defaultdict and a Counter? my solution is as follows:

    def freqQuery(queries):
        occ_dict = defaultdict(int)
        result_array = []
        for op in queries:
            if op[0] == 1:
                occ_dict[op[1]] += 1
            elif op[0] == 2:
                if occ_dict[op[1]] > 0:
                    occ_dict[op[1]] -= 1
            elif op[0] == 3:
                occ_counter = Counter(occ_dict)
                if occ_counter[op[1]]:
                    result_array.append(1)
                else:
                    result_array.append(0)
        return result_array

Any help would be appreciated, thanks!

naomi_nguyen7 + 0 comments

Hi, do you notice that each time the query starts with 3, it has to count occ_dict all over again? In my code, when the query starts with 3, it just has to search for it; no additional counting is required

c_m_s_gan + 0 comments

Not shorter, but this is a passable solution without the use of Counter from collections:

    def freqQuery(queries):
        a1 = dict()  # Keep track of the number of times each queried number occurs in the array
        a2 = dict()  # [0]*len(queries)  # Keep track of how many numbers occur once, twice, etc.
        out = []
        for (op, num) in queries:
            if op == 1:
                if not (num in a1):
                    a1[num] = 0
                if not (a1[num] in a2):
                    a2[a1[num]] = 1
                a2[a1[num]] -= 1
                a1[num] += 1
                if not (a1[num] in a2):
                    a2[a1[num]] = 0
                a2[a1[num]] += 1
            if (op == 2) & (num in a1):
                a2[a1[num]] -= 1
                a1[num] -= 1
                a2[a1[num]] += 1
                if a1[num] <= 0:
                    a1.pop(num)
            if (op == 3) & (num in a2):
                out.append(int(a2[num] > 0))  # out.append(a2[num])
            elif op == 3:
                out.append(0)
        return out

developersanjeev + 5 comments

Used the simple idea that search in keys of a map takes O(1) time and search in values of a map takes O(n) time.
first map will store <element, frequency>
second map will store <frequency, frequencyCount>

    #include <bits/stdc++.h>
    using namespace std;

    int main() {
        ios_base::sync_with_stdio(false);
        cin.tie(0);
        int nq;
        cin >> nq;
        // first will contain <element, frequency> pairs
        map<int, int> first;
        // second will contain <frequency, frequencyCount> pairs
        map<int, int> second;
        for (int i = 0; i < nq; i++) {
            int a, b;
            cin >> a >> b;
            if (a == 1) {
                // Insert b into first map.
                // Update the frequencies in second map.
                int elem = first[b];  // elem = current frequency of element b.
                if (elem > 0) {
                    // b was already present.
                    second[elem]--;
                }
                // Add b: increase frequency of b
                first[b]++;
                // Update the count of the new frequency in second map
                second[first[b]]++;
            } else if (a == 2) {
                // Remove b
                int temp = first[b];  // temp = current frequency of element b
                if (temp > 0) {
                    // b is present
                    second[temp]--;      // Update frequency count
                    first[b]--;          // decrease element frequency
                    second[first[b]]++;  // Update frequency count
                }
            } else {
                // check whether any element has frequency b
                int res = second[b];
                if (res > 0) {
                    cout << 1 << endl;
                } else {
                    cout << 0 << endl;
                }
            }
        }
        return 0;
    }

baymaxlim2204 + 1 comment

What is frequency count for?

developersanjeev + 1 comment

we just needed to store frequencies of elements in such a way that we can perform search efficiently. So, to store them in keys, we just store their frequency count (frequency of frequencies) in values. so, frequency count is just used so the actual element frequencies can be stored in keys.

gnemlock + 1 comment

I do not understand how you account for using frequencies as keys, given that keys have to be unique, and frequencies are not always unique.

gnemlock + 0 comments

I ended up creating a frequency dictionary to store the frequency as a key, and a HashSet of values. I did this because the key needs to be unique, and in most cases, using a frequency dictionary of integers alone would fail.
If you follow this path, always remember to remove the value from the current frequency hash set before you move it onto the next, whenever you add or subtract. Don't worry about removing the entry if there are no more values in a certain frequency's HashSet. Just check that the HashSet has a Count greater than 0 when asserting that the frequency can be found.

apps_yoon + 4 comments

Is anyone else having timeout issues for test cases 9-13 for Swift? Even with the two-hashmap solution.

jamesbigmac + 0 comments

I'm getting a timeout for those no matter what in Java. I'm not sure I can optimize this solution much more and I'm starting to think it's not my fault that I'm timing out.

howard_hw_lee + 2 comments

Same here. 2 HashMaps so no .contains() or anything. In fact no function calls aside from append() are used. Still no go. No one on the leaderboard got full points in Swift.

jemdev + 3 comments

armoredblimp + 0 comments

I also independently came up with the dual Dictionary hashmap approach and am failing with timeouts on cases 9-11. I even added optimizations such as pre-allocating dictionary size to the maximum potential sizes, and automatically rejecting out-of-bounds frequency checks.

kboitmanis + 2 comments

The test cases #9 and #11 have z == 0, which contradicts the input constraints.

Sergey_Dyshko + 2 comments

Agree! Sent an edit suggestion to the admins.

billchenxi + 1 comment

how to send a suggestion?

germanmarianogo1 + 3 comments

My solution using Java 8 Streams

    // Complete the freqQuery function below.
    static List<Integer> freqQuery(List<List<Integer>> queries) {
        List<Integer> result = new ArrayList<>();
        List<Integer> data = new ArrayList<>();
        HashMap<Integer, Function<Integer, Boolean>> mapa = new HashMap<>();
        mapa.put(1, o -> data.add(o));
        mapa.put(2, o -> data.remove(o));
        mapa.put(3, o -> {
            Map<Integer, Long> counts = data.stream()
                .collect(Collectors.groupingBy(e -> e, Collectors.counting()));
            return result.add(counts.containsValue(new Long(o)) ?
                1 : 0);
        });
        queries.forEach(integers -> {
            mapa.get(integers.get(0)).apply(integers.get(1));
        });
        return result;
    }

fynx_gloire + 0 comments

Can you explain this problem, I cannot even understand what is being asked. What does he mean by x, y, and z, I only see 2 values, an x and y?? Does he mean add x and delete y? Add and delete from where?

ivanduka + 0 comments

JavaScript solution (passes all tests):

    const func = arr => {
      const result = [];
      const hash = {};
      const freq = [];
      for (let i = 0; i < arr.length; i += 1) {
        const [action, value] = arr[i];
        const initValue = hash[value] || 0;
        if (action === 1) {
          hash[value] = initValue + 1;
          freq[initValue] = (freq[initValue] || 0) - 1;
          freq[initValue + 1] = (freq[initValue + 1] || 0) + 1;
        }
        if (action === 2 && initValue > 0) {
          hash[value] = initValue - 1;
          freq[initValue - 1] += 1;
          freq[initValue] -= 1;
        }
        if (action === 3) result.push(freq[value] > 0 ? 1 : 0);
      }
      return result;
    };

The idea is that we track 3 things:

1. A dictionary with all added/removed values as keys and the number of occurrences as values
2. A frequency table, updated as we go
3. The result that we return in the end

The obvious solution would be to keep only the dictionary. However, in case 3 we would then have to traverse the whole dictionary every time to check whether there is a key whose value equals the search value. It works fine on the initial tests but fails on the main set of tests because traversal of the whole dictionary is expensive.

OK, so we need to keep an extra list of frequencies. I used a simple array where the index is the number of occurrences; for example, [0,2,3] means that there are 2 values which occur once and 3 values that occur twice (the dictionary in this case would be something like {12: 1, 14: 2}). This way, in case 3 I only need to check freq[number of occurrences].

nico_dubus + 0 comments

I was timing out on a few of the test cases, even using two hashmaps.
I tried the optimization of boilerplate code proposed here, and was still timing out on 10 which, to be fair, has a million operations. So to really turbocharge it, you can pass a BufferedReader to your freqQuery function, and rather than pass in an int array[q][2], undertake your operations as you read them line by line, so you only do a single pass. My code - if there's another optimization I missed, please feel free to point it out to me.

    import java.io.*;
    import java.math.*;
    import java.security.*;
    import java.text.*;
    import java.util.*;
    import java.util.concurrent.*;
    import java.util.function.*;
    import java.util.regex.*;
    import java.util.stream.*;
    import static java.util.stream.Collectors.joining;
    import static java.util.stream.Collectors.toList;

    public class Solution {

        // Complete the freqQuery function below.
        static List<Integer> freqQuery(BufferedReader bufferedReader, int q) throws IOException {
            HashMap<Integer, Integer> valuesToCounts = new HashMap<>();
            HashMap<Integer, Set<Integer>> countsToValues = new HashMap<>();
            ArrayList<Integer> results = new ArrayList<>();
            int size = q;
            for (int i = 0; i < q; i++) {
                String[] query = bufferedReader.readLine().split(" ");
                int operation = Integer.parseInt(query[0]);
                int number = Integer.parseInt(query[1]);
                int oldCount = valuesToCounts.getOrDefault(number, 0);
                int newCount;
                if (operation == 1) {
                    newCount = oldCount + 1;
                    valuesToCounts.put(number, newCount);
                    countsToValues.getOrDefault(oldCount, Collections.emptySet()).remove(number);
                    countsToValues.computeIfAbsent(newCount, k -> new HashSet<>()).add(number);
                } else if (operation == 2) {
                    newCount = (oldCount > 1) ? oldCount - 1 : 0;
                    valuesToCounts.put(number, newCount);
                    countsToValues.getOrDefault(oldCount, Collections.emptySet()).remove(number);
                    if (newCount > 0)
                        countsToValues.computeIfAbsent(newCount, k -> new HashSet<>()).add(number);
                } else if (operation == 3) {
                    if (number > size) results.add(0);
                    else {
                        results.add((number == 0 || countsToValues.getOrDefault(number, Collections.emptySet()).size() > 0) ?
                            1 : 0);
                    }
                }
            }
            return results;
        }

        public static void main(String[] args) throws IOException {
            try (BufferedReader bufferedReader = new BufferedReader(
                    new InputStreamReader(System.in))) {
                int q = Integer.parseInt(bufferedReader.readLine().trim());
                List<Integer> ans = freqQuery(bufferedReader, q);
                try (BufferedWriter bufferedWriter = new BufferedWriter(
                        new FileWriter(System.getenv("OUTPUT_PATH")))) {
                    bufferedWriter.write(ans.stream().map(Object::toString)
                        .collect(joining("\n")) + "\n");
                }
            }
        }
    }

WittyNotes + 0 comments

Great googly moogly. Here's a list of the things I had to do to avoid the timeout on cases 11-13.

- Change the boilerplate code to use no non-primitive storage.
- Change the function to the following signature: static int[] freqQuery(int[][] queries, int numChecks, int numAdds). Passing in the number of checks ('3'-queries) allows the function to return an int[]. Passing in the number of additions ('1'-queries) allows some space optimization by letting the function use a shorter int array to track frequencies, rather than a map.
- Reduce ALL non-primitive storage in the function to a single map.
- In the final print in main(), use a StringBuilder and a single print statement, rather than printing in the loop.

Hope that helps people who are as stuck as I was!

galante_jayson + 2 comments

Here is my solution. Not very elegant, but it still manages to pass all the tests.
    vector<int> freqQuery(vector<vector<int>> queries) {
        // initialize two maps where:
        // freq has for key the frequency z and for value the number of datas having this frequency
        // datas has for key the data inserted and for value its frequency
        std::map<int, int> freq;
        std::map<int, int> datas;
        // vector of the final output
        std::vector<int> output;
        for (auto &q : queries) {
            switch (q[0]) {
            case 1:
                // incrementing the frequency of the data
                datas[q[1]]++;
                // increment the number of datas having this frequency
                freq[datas[q[1]]]++;
                // decrement the number of datas having the old frequency of the treated data
                freq[datas[q[1]] - 1]--;
                // we cannot allow a negative number in our values
                if (freq[datas[q[1]] - 1] < 0) {
                    freq[datas[q[1]] - 1] = 0;
                }
                break;
            case 2:
                // case 2 is like case 1 but the incrementation and decrementation are reversed
                datas[q[1]]--;
                freq[datas[q[1]]]++;
                freq[datas[q[1]] + 1]--;
                if (freq[datas[q[1]] + 1] < 0) {
                    freq[datas[q[1]] + 1] = 0;
                }
                if (datas[q[1]] < 0) {
                    datas[q[1]] = 0;
                }
                break;
            case 3:
                // check whether this frequency exists in our map and at least one data has this frequency
                if (freq.find(q[1]) != freq.end() && freq[q[1]] > 0) {
                    output.push_back(1);
                } else {
                    output.push_back(0);
                }
                break;
            }
        }
        return output;
    }

baymaxlim2204 + 0 comments

I used this logic for my Java 8 solution but failed 6/15 test cases. Could you help me check it out?

    // Complete the freqQuery function below.
    static List<Integer> freqQuery(List<int[]> queries) {
        Map<Integer, Integer> countMap = new HashMap<Integer, Integer>();
        Map<Integer, Integer> freqMap = new HashMap<Integer, Integer>();
        List<Integer> answer = new ArrayList();
        for (int[] i : queries) {
            switch (i[0]) {
            case 1:
                countMap.compute(i[1], (k, v) -> (v == null) ? 1 : v + 1);
                freqMap.compute(countMap.get(i[1]), (k, v) -> (v == null) ? 1 : v + 1);
                freqMap.compute(countMap.get(i[1] - 1), (k, v) -> (v == null || v - 1 == 0) ? null : v - 1);
                break;
            case 2:
                countMap.compute(i[1], (k, v) -> (v == null || v - 1 == 0) ? null : v - 1);
                freqMap.compute(countMap.get(i[1]), (k, v) -> (v == null) ? 1 : v + 1);
                freqMap.compute(countMap.get(i[1] + 1), (k, v) -> (v == null || v - 1 == 0) ? null : v - 1);
                break;
            case 3:
                if (freqMap.containsKey(i[1]) && freqMap.get(i[1]) != null) {
                    answer.add(1);
                } else {
                    answer.add(0);
                }
            }
        }
        return answer;
    }
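Pulling together the "double hash" idea that recurs throughout this thread, here is one minimal Python sketch (the names freq_query, counts, and of_freq are illustrative, not taken from any single post above). Each query type is handled with plain dictionary reads and writes, so a type-3 check never scans the values of the count dictionary, which is the O(n) step behind the timeouts discussed above:

```python
from collections import Counter

def freq_query(queries):
    """Sketch of the thread's two-dictionary ('double hash') technique.

    counts[x]  -- how many times value x is currently present
    of_freq[f] -- how many distinct values currently occur exactly f times
    """
    counts = Counter()
    of_freq = Counter()
    out = []
    for op, x in queries:
        if op == 1:                      # insert x
            of_freq[counts[x]] -= 1
            counts[x] += 1
            of_freq[counts[x]] += 1
        elif op == 2 and counts[x] > 0:  # delete one occurrence of x, if present
            of_freq[counts[x]] -= 1
            counts[x] -= 1
            of_freq[counts[x]] += 1
        elif op == 3:                    # is any value present exactly x times?
            out.append(1 if of_freq[x] > 0 else 0)
    return out
```

For example, freq_query([(1, 5), (1, 6), (3, 2), (1, 10), (1, 10), (1, 6), (2, 5), (3, 2)]) returns [0, 1]: no value occurs twice at the first check, but 10 and 6 both do at the second. Counter is used only for its return-0-on-missing-key behavior; a defaultdict(int), as several posters prefer, works identically here.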
https://www.hackerrank.com/challenges/frequency-queries/forum
Introduction

To get the most out of Standard C++ [1], we must rethink the way we write C++ programs. An approach to such a "rethink" is to consider how C++ can be learned (and taught). What design and programming techniques do we want to emphasize? What subsets of the language do we want to learn first? What subsets of the language do we want to emphasize in real code?

This article compares a few simple C++ programs, written in a modern style using the standard library, to traditional C-style solutions. It argues briefly that lessons from these simple examples are relevant to large programs. More generally, this article argues for a use of C++ as a higher-level language that relies on abstraction to provide elegance without loss of efficiency compared to lower-level styles.

We want our programs to be easy to write, correct, maintainable, and acceptably efficient. It follows that we ought to use C++ and any other programming language in ways that most closely approximate this ideal. It is my conjecture that the C++ community has yet to internalize the facilities offered by Standard C++; major improvements relative to the ideal can be obtained by reconsidering our style of C++ use. This article focuses on the styles of programming that the facilities offered by Standard C++ support, not the facilities themselves.

The key to major improvements is a reduction of the size and complexity of the code we write through the use of libraries. Below, I demonstrate and quantify these reductions for a couple of simple examples such as might be part of an introductory C++ course. By reducing size and complexity, we reduce development time, ease maintenance, and decrease the cost of testing. Importantly, we also simplify the task of learning C++. For toy programs and for students who program only to get a good grade in a nonessential course, this simplification would be sufficient. However, for professional programmers efficiency is a major issue.
Only if efficiency isn't sacrificed can we expect our programming styles to scale to the data volumes and real-time requirements of modern services and businesses. Consequently, I present measurements that demonstrate that the reduction in complexity can be obtained without loss of efficiency. Finally, I discuss the implications of this view on approaches to learning and teaching C++.

Complexity

Consider a fairly typical second exercise in using a programming language:

    write a prompt "Please enter your name"
    read the name
    write out "Hello <name>"

In Standard C++, the obvious solution is:

    #include <iostream>  // get standard I/O facilities
    #include <string>    // get standard string facilities

    int main()
    {
        // gain access to standard library
        using namespace std;

        cout << "Please enter your name: \n";
        string name;
        cin >> name;
        cout << "Hello " << name << '\n';
    }

For a real novice, we need to explain the "scaffolding." What is main()? What does #include mean? What does using do? In addition, we need to understand all the "small" conventions, such as what \n does, where semicolons are needed, etc. However, the main part of the program is conceptually simple and differs only notationally from the problem statement. We have to learn the notation, but doing so is relatively simple: string is a string, cout is output, << is the operator we use to write to output, etc.

To compare, consider a traditional C-style solution [Note 1]:

    #include <stdio.h>  // get standard I/O facilities

    int main()
    {
        const int max = 20;  // maximum name length is 19
        char name[max];

        printf("Please enter your name: \n");
        // read characters into name
        scanf("%s", name);
        printf("Hello %s\n", name);
        return 0;
    }

Objectively, the main logic here is slightly, but only slightly, more complicated than the C++-style version because we have to explain about arrays and the magic %s. The main problem is that this simple C-style solution is shoddy.
If someone enters a first name that is longer than the magic number 19 (the stated number 20 minus one for a C-style string's null termination), the program is corrupted. It can be argued that this kind of shoddiness is harmless as long as a proper solution is presented "later on." However, that line of argument is at best "acceptable" rather than "good." Ideally, a novice user isn't presented with a program that brittle.

What would a C-style program that behaved as reasonably as the C++-style one look like? As a first attempt, we could simply prevent the array overflow by using scanf in a more appropriate manner:

    #include <stdio.h>  // get standard I/O facilities

    int main()
    {
        const int max = 20;
        char name[max];

        printf("Please enter your first name: \n");
        scanf("%19s", name);  // read at most 19 chars
        printf("Hello %s\n", name);
        return 0;
    }

There is no standard way of directly using the symbolic form of the buffer size, max, in the scanf format string, so I had to use the integer literal. That is bad style and a maintenance hazard. The expert-level alternative is not one I'd care to explain to novices:

    char fmt[10];  // create a format string: plain %s can overflow
    sprintf(fmt, "%%%ds", max-1);
    // read at most max-1 characters into name
    scanf(fmt, name);

Furthermore, this program throws "surplus" characters away. What we want is for the string to expand to cope with the input.
To achieve that, we have to descend to a lower level of abstraction and deal with individual characters:

    #include <stdio.h>
    #include <ctype.h>
    #include <stdlib.h>

    void quit()
    {
        // write error message and quit
        fprintf(stderr, "memory exhausted\n");
        exit(1);
    }

    int main()
    {
        int max = 20;
        // allocate buffer:
        char* name = (char*) malloc(max);
        if (name == 0) quit();

        printf("Please enter your first name: \n");

        // skip leading whitespace
        while (true) {
            int c = getchar();
            if (c == EOF) break;  // end of file
            if (!isspace(c)) {
                ungetc(c, stdin);
                break;
            }
        }

        int i = 0;
        while (true) {
            int c = getchar();
            if (c == '\n' || c == EOF) {
                // at end, add terminating zero
                name[i] = 0;
                break;
            }
            name[i] = c;
            if (i == max-1) {
                // buffer full
                max = max+max;
                name = (char*) realloc(name, max);
                if (name == 0) quit();
            }
            i++;
        }

        printf("Hello %s\n", name);
        free(name);  // release memory
        return 0;
    }

Compared to the previous versions, this version seems rather complex. I feel a bit bad adding the code for skipping whitespace because I didn't explicitly require that in the original problem statement. However, skipping initial whitespace is the norm, and the other versions of the program skip whitespace.

One could argue that this example isn't all that bad. Most experienced C and C++ programmers would in a real program probably (hopefully?) have written something equivalent in the first place. We might even argue that if you couldn't write that program, you shouldn't be a professional programmer. However, consider the added conceptual load on a novice. This variant uses seven different standard library functions, deals with character-level input in a rather detailed manner, uses pointers, and explicitly deals with free store.

To use realloc while staying portable, I had to use malloc (rather than new). This brings the issues of sizes and casts [Note 2] into the picture. It is not obvious what is the best way to handle the possibility of memory exhaustion in a small program like this.
Here, I simply did something obvious to avoid the discussion going off on another tangent. Someone using the C-style approach would have to carefully consider which approach would form a good basis for further teaching and eventual use.

To summarize, to solve the original simple problem, I had to introduce loops, tests, storage sizes, pointers, casts, and explicit free-store management in addition to whatever a solution to the problem inherently needs. This style is also full of opportunity for errors. Thanks to long experience, I didn't make any of the obvious off-by-one or allocation errors. Having primarily worked with stream I/O for a while, I initially made the classical beginner's error of reading into a char (rather than into an int) and forgetting to check for EOF. In the absence of something like the C++ standard library, it is no wonder that many teachers stick with the "shoddy" solution and postpone these issues until later. Unfortunately, many students simply note that the shoddy style is "good enough" and quicker to write than the (non-C++ style) alternatives. Thus they acquire a habit that is hard to break and leave a trail of buggy code behind.

This last C-style program is 41 lines compared to 10 lines for its functionally equivalent C++-style program. Excluding "scaffolding," the difference is 30 lines vs. 4. Importantly, the C++-style lines are also shorter and inherently easier to understand. The number and complexity of concepts needed to be explained for the C++-style and C-style versions are harder to measure objectively, but I suggest a 10-to-1 advantage for the C++-style version.

Efficiency

Efficiency is not an issue in a trivial program like the one above. For such programs, simplicity and (type) safety is what matters. However, real systems often consist of parts in which efficiency is essential. For such systems, the question becomes "can we afford a higher level of abstraction?"
Consider a simple example of the kind of activity that occurs in programs where efficiency matters:

    read an unknown number of elements
    do something to each element
    do something with all elements

The simplest specific example I can think of is a program to find the mean and median of a sequence of double-precision floating-point numbers read from input. A conventional C-style solution would be:

    // C-style solution:
    #include <stdlib.h>
    #include <stdio.h>

    // comparison function for use by qsort()
    int compare(const void* p, const void* q)
    {
        register double p0 = *(double*)p;
        register double q0 = *(double*)q;
        if (p0 > q0) return 1;
        if (p0 < q0) return -1;
        return 0;
    }

    void quit()
    {
        fprintf(stderr, "memory exhausted\n");
        exit(1);
    }

    int main(int argc, char* argv[])
    {
        int res = 1000;  // initial allocation
        char* file = argv[2];

        double* buf = (double*) malloc(sizeof(double)*res);
        if (buf == 0) quit();

        double median = 0;
        double mean = 0;
        int n = 0;

        FILE* fin = fopen(file, "r");  // open file for reading
        double d;
        while (fscanf(fin, "%lg", &d) == 1) {
            if (n == res) {
                res += res;
                buf = (double*) realloc(buf, sizeof(double)*res);
                if (buf == 0) quit();
            }
            buf[n++] = d;
            mean = (n == 1) ? d : mean + (d-mean)/n;  // prone to rounding errors
        }

        qsort(buf, n, sizeof(double), compare);

        if (n) {
            int mid = n/2;
            median = (n%2) ? buf[mid] : (buf[mid-1] + buf[mid])/2;
        }

        printf("number of elements=%d, median=%g, mean=%g\n", n, median, mean);

        free(buf);
    }

To compare, here is an idiomatic C++ solution:

    // Solution using the Standard C++ library:
    #include <iostream>
    #include <vector>
    #include <fstream>
    #include <algorithm>
    using namespace std;

    int main(int argc, char* argv[])
    {
        char* file = argv[2];

        vector<double> buf;
        double median = 0;
        double mean = 0;

        fstream fin(file, ios::in);
        double d;
        while (fin >> d) {
            buf.push_back(d);
            mean = (buf.size() == 1) ? d : mean + (d-mean)/buf.size();
        }

        sort(buf.begin(), buf.end());

        if (buf.size()) {
            int mid = buf.size()/2;
            median = (buf.size() % 2) ?
                buf[mid] : (buf[mid-1] + buf[mid])/2;
        }

        cout << "number of elements=" << buf.size()
             << ", median=" << median << ", mean=" << mean << '\n';
    }

The size difference is less dramatic than in the previous example (43 vs. 24 non-blank lines). Excluding irreducible common elements such as the declaration of main() and the calculation of the median (13 lines), the difference is 20 lines vs. 11. The critical input-and-store loop and the sort are both significantly shorter in the C++-style program (nine vs. four lines for the read-and-store loop, and nine lines vs. one line for the sort). More importantly, the logic contained in each line is far simpler in the C++ version and therefore far easier to get right.

Again, memory management is implicit in the C++-style program; a vector grows as needed when elements are added using push_back. In the C-style program, memory management is explicit using realloc. Basically, the vector constructor and push_back in the C++-style program do what malloc, realloc, and the code tracking the size of allocated memory do in the C-style program. In the C++-style program, I rely on exception handling to report memory exhaustion. In the C-style program, I added explicit tests to avoid the possibility of memory corruption.

Not surprisingly, the C++ version was easier to get right. I constructed this C++-style version from the C-style version by cut-and-paste. I forgot to include <algorithm>; I left n in place rather than using buf.size twice; and my compiler didn't support the local using-directive, so I had to move it outside main. On the other hand, after fixing these four errors, the program ran correctly the first time.

To a novice, qsort is "odd." Why do you have to give the number of elements? (Because the array doesn't know it.) Why do you have to give the size of a double? (Because qsort doesn't know that it is sorting doubles.) Why do you have to write that ugly function to compare doubles?
(Because qsort needs a pointer to a function, because it doesn't know the type of the elements that it is sorting.) Why does qsort's comparison function take const void* arguments rather than char* arguments? (Because qsort can sort based on non-string values.) What is a void* and what does it mean for it to be const? ("Eh, hmmm, we'll get to that later.") Explaining this to a novice without getting a blank stare of wonderment over the complexity of the answer is not easy. Explaining sort(v.begin(), v.end()) is comparatively easy: "Plain sort(v) would have been simpler in this case, but sometimes we want to sort part of a container, so it's more general to specify the beginning and end of what we want to sort."

To compare efficiencies, I first determined how much input was needed to make an efficiency comparison meaningful. For 50,000 numbers the programs ran in less than half a second each, so I chose to compare runs with 500,000 and 5,000,000 input values. The results appear in Table 1. The key numbers are the ratios; a ratio larger than one means that the C++-style version is faster.

Comparisons of languages, libraries, and programming styles are notoriously tricky, so please do not draw sweeping conclusions from these simple tests. The numbers are averages of several runs on an otherwise quiet machine. The variance between different runs of an example was less than 1 percent. I also ran strictly ISO C conforming versions of the C-style programs. As expected, there were no performance differences between those programs and their C-style C++ equivalents.

I had expected the C++-style program to be only slightly faster. Checking other C++ implementations, I found a surprising variance in the results. In some cases, the C-style version even outperformed the C++-style version for small data sets. However, the point of this example is that a higher level of abstraction and a better protection against errors can be affordable given current technology.
The implementation I used is widely available and cheap, not a research toy. Implementations that claim higher performance are also available. It is not unusual to find people willing to pay a factor of 3, 10, or even 50 for convenience and better protection against errors. Getting these benefits together with a doubling or quadrupling of speed is spectacular. These figures should be the minimum that a C++ library vendor would be willing to settle for.

To get a better idea of where the time was spent, I ran a few additional tests (see Table 2). Naturally, "read" simply reads the data and "read&sort" reads the data and sorts it but doesn't produce output. To get a better feel for the cost of input, "generate" produces random numbers rather than reading.

From other examples and other implementations, I had expected C++ stream I/O to be somewhat slower than stdio. That was actually the case for a previous version of this program, which used cin rather than a file stream. It appears that on some C++ implementations, file I/O is much faster than cin. The reason is at least partly poor handling of the tie between cin and cout. However, these numbers demonstrate that C++-style I/O can be as efficient as C-style I/O.

Changing the programs to read and sort integers instead of floating-point values did not change the relative performance, though it was nice to note that making that change was much simpler in the C++-style program (two edits as compared to twelve for the C-style program). That is a good omen for maintainability.

The differences in the "generate" tests reflect a difference in allocation costs. A vector plus push_back ought to be exactly as fast as an array plus malloc/free, but it wasn't. The reason appears to be a failure to optimize away calls of initializers that do nothing. Fortunately, the cost of allocation is (always) dwarfed by the cost of the input that caused the need for the allocation. As expected, sort was noticeably faster than qsort.
The main reason is that sort inlines its comparison operations whereas qsort must call a function.

It is hard to choose an example to illustrate efficiency issues. One comment I had from a colleague was that reading and comparing numbers wasn't realistic; I should read and sort strings. So I tried this program:

    #include <vector>
    #include <fstream>
    #include <algorithm>
    #include <iterator>
    #include <string>
    using namespace std;

    int main(int argc, char* argv[])
    {
        char* file = argv[2];   // input file name
        char* ofile = argv[3];  // output file name

        vector<string> buf;

        fstream fin(file, ios::in);
        string d;
        while (getline(fin, d)) buf.push_back(d);

        sort(buf.begin(), buf.end());

        fstream fout(ofile, ios::out);
        copy(buf.begin(), buf.end(), ostream_iterator<string>(fout, "\n"));
    }

I transcribed this into C and experimented a bit to optimize the reading of characters. The C++-style code performs well even against hand-optimized C-style code that eliminates the copying of strings. For small amounts of output there is no significant difference, and for larger amounts of data, sort again beats qsort because of its better inlining (see Table 3). I used two million strings because I didn't have enough main memory to cope with five million strings without paging. To get an idea of where time was spent, I also ran the program with the sort omitted (see Table 4). The strings were relatively short (seven characters on average).

Note that string is a perfectly ordinary user-defined type that just happens to be part of the standard library. What we can do efficiently and elegantly with a string, we can do efficiently and elegantly with many other user-defined types.

Why do I discuss efficiency in the context of programming style and teaching? The styles and techniques we teach must scale to real-world problems. C++ is among other things intended for large-scale systems and systems with efficiency constraints.
Consequently, I consider it unacceptable to teach C++ in a way that leads people to use styles and techniques that are effective for toy programs only; that would lead people to failure and to abandon what was taught. The measurements above demonstrate that a C++ style relying heavily on generic programming and concrete types to provide simple and type-safe code can be efficient compared to traditional C styles. Similar results have been obtained for object-oriented styles.

It is a significant problem that the performance of different implementations of the standard library differs dramatically. For a programmer who wants to rely on standard libraries (or widely distributed libraries that are not part of the standard), it is often important that a programming style that delivers good performance on one system give at least acceptable performance on another. I was appalled to find examples where my test programs ran twice as fast in the C++ style compared to the C style on one system and only half as fast on another. Programmers should not have to accept a variability of a factor of four between systems.

As far as I can tell, this variability is not caused by fundamental reasons, so consistency should be achievable without heroic efforts from the library implementers. Better optimized libraries may be the easiest way to improve both the perceived and actual performance of Standard C++. Compiler implementers work hard to eliminate minor performance penalties compared with other compilers. I conjecture that the scope for improvements is larger in the standard library implementations.

Clearly, the simplicity of the C++-style solutions above compared to the C-style solutions was made possible by the C++ standard library. Does that make the comparison unrealistic or unfair? I don't think so. One of the key aspects of C++ is its ability to support libraries that are both elegant and efficient.
The advantages demonstrated for the simple examples hold for every application area where elegant and efficient libraries exist or could exist. The challenge to the C++ community is to extend the areas where these benefits are available to ordinary programmers. That is, we must design and implement elegant and efficient libraries for many more application areas, and we must make these libraries widely available.

Learning C++

Even for the professional programmer, it is impossible to first learn a whole programming language and then try to use it. A programming language is learned in part by trying out its facilities for small examples. Consequently, we always learn a language by mastering a series of subsets. The real question is not "should I learn a subset first?" but "which subset should I learn first?" A "C first" approach delays the introduction of C++'s higher-level facilities, forcing the student to face many technical difficulties to express anything interesting. The examples in the previous two sections illustrate this point. C++'s better support of libraries, better notational support, and better type checking are decisive against a "C first" approach. However, note that my suggested alternative isn't "pure Object-Oriented Programming first"; I consider that the other extreme. For programming novices, learning a programming language should support the learning of effective programming techniques. For experienced programmers who are novices at C++, the learning should focus on how effective programming techniques are expressed in C++ and on techniques that are new to the programmer. For experienced programmers, the greatest pitfall is often to concentrate on using C++ to express what was effective in some other language. The emphasis for both novices and experienced programmers should be on concepts and techniques. The syntactic and semantic details of C++ are secondary to an understanding of the design and programming techniques that C++ supports.
Teaching is best done by starting from well-chosen concrete examples and proceeding towards the more general and more abstract. This is the way children learn and it is the way most of us grasp new ideas. Language features should always be presented in the context of their use. Otherwise, the programmer's focus shifts from the production of systems to delight over technical obscurities. Focusing on language technical details can be fun but it is not effective education. On the other hand, treating programming as merely the handmaiden of analysis and design doesn't work either. The approach of postponing actual discussion of code until every high-level and engineering topic has been thoroughly presented has been a costly mistake for many. That approach drives people away from programming and leads many to seriously underestimate the intellectual challenge in the creation of production-quality code. The extreme opposite to the "design first" approach is to get a C++ compiler and start coding. When encountering a problem, point and click to see what the online help has to offer. The problem with this approach is that it is completely biased towards the understanding of individual features and facilities. General concepts and techniques are not easily learned this way. For experienced programmers, this approach has the added problem of reinforcing the tendency to think in a previous language while using C++ syntax and library functions. For the novice, the result is a lot of if-then-else code mixed with code snippets inserted using cut-and-paste from vendor-supplied examples. Often the purpose of the inserted code is obscure to the novice and the method by which it achieves its effect completely beyond comprehension. This is the case even for clever people. This "poking around approach" can be most useful as an adjunct to good teaching or a solid textbook, but on its own it is a recipe for disaster. 
To sum up, I recommend an approach that
- proceeds from the concrete to the abstract,
- presents language features in the context of the programming and design techniques that they exist to support,
- presents code relying on relatively high-level libraries before going into the lower-level details (necessary to build those libraries),
- avoids techniques that do not scale to real-world applications,
- presents common and useful techniques and features before details, and
- focuses on concepts and techniques (rather than language features).

I don't consider this approach particularly novel or revolutionary. Mostly, I see it as common sense. However, common sense often gets lost in heated discussions about more specific topics, such as whether C should be learned before C++, whether you must write Smalltalk to really understand Object-Oriented programming, whether you must start learning programming in a pure-OO fashion (whatever that means), and whether a thorough understanding of the software development process is necessary before trying to write code. Fortunately, the C++ community has had some experience with approaches that meet my criteria. My favorite approach is to start teaching the basic language concepts, such as variables, declarations, and loops, together with a good library. The library is essential to enable students to concentrate on programming rather than the intricacies of, say, C-style strings. I recommend the use of the C++ standard libraries or a subset of those. This is the approach taken by the Computer Science Advanced Placement course taught in American high schools [2]. A more advanced version of that approach aimed at experienced programmers has also proved successful; for example, see [3]. A weakness of these specific approaches is the absence of a simple graphics library and graphical user interfaces early on. This could (easily?) be compensated for by a very simple interface to commercial libraries.
By "very simple," I mean usable by students on day two of a C++ course. However, no such simple graphics and graphical user interface C++ library is widely available. After the initial teaching/learning that relies on libraries, a course can proceed in a variety of ways based on the needs and interests of the students. At some point, the messier and lower-level features of C++ will have to be examined. One way of teaching/learning about pointers, casting, allocation, etc. is to examine the implementation of the classes used to learn the basics. For example, the implementations of string, vector, and list classes are excellent contexts for discussions of language facilities from the C subset of C++ that are best left out of the first part of a course. Classes such as vector and string, which manage variable amounts of data, require the use of free store and pointers in their implementation. Before introducing those features, classes that don't require them (concrete classes such as a Date, a Point, and a Complex type) can be used to introduce the basics of class implementation. I tend to present abstract classes and class hierarchies after the discussion of containers and the implementation of containers, but there are many alternatives here. The actual ordering of topics should depend on the libraries used. For example, a course using a graphics library relying on class hierarchies will have to explain the basics of polymorphism and the definition of derived classes relatively early. Finally, please remember there is no one right way to learn and teach C++ and its associated design and programming techniques. The aims and backgrounds of students differ, and so do the backgrounds and experience of their teachers and textbook writers.

Summary

We want our C++ programs to be easy to write, correct, maintainable, and acceptably efficient. To do that, we must design and program at a higher level of abstraction than has typically been done with C and early C++.
Through the use of libraries, this ideal of easy-to-write, correct, maintainable, and acceptably efficient programs is achievable without loss of efficiency compared to lower-level styles. Thus, work on more libraries, on more consistent implementation of widely-used libraries (such as the standard library), and on making libraries more widely available can yield great benefits to the C++ community. Education must play a major role in this move to cleaner and higher-level programming styles. The C++ community doesn't need another generation of programmers who by default use the lowest level of language and library facilities available out of misplaced fear of inefficiencies. Experienced C++ programmers as well as C++ novices must learn to use Standard C++ as a new and higher-level language as a matter of course, and descend to lower levels of abstraction only where absolutely necessary. Using Standard C++ as a glorified C or a glorified C with Classes would only waste the opportunities offered by Standard C++.

Acknowledgements

Thanks to Chuck Allison for suggesting that I write an article on learning Standard C++. Thanks to Andrew Koenig and Mike Yang for constructive comments on earlier drafts. My examples were compiled using Cygnus' EGCS 1.1 and run on a Sun Ultrasparc 10. The programs I used can be found on my homepages.

Notes

[1] For aesthetic reasons, I use C++-style symbolic constants and C++-style // comments. To get strictly conforming ISO C programs, use #define and /* */ comments.
[2] I know that C allows this to be written without explicit casts. However, that is done at the cost of allowing unsafe implicit conversion of a void* to an arbitrary pointer type. Consequently, C++ requires that cast.

References

[1] X3 Secretariat. Standard - The C++ Language. ISO/IEC 14882:1998(E). Information Technology Council (NCITS). Washington, DC, USA.
[2] Susan Horwitz. Addison-Wesley's Review for the Computer Science AP Exam in C++ (Addison-Wesley, 1999). ISBN 0-201-35755-0.
[3] Andrew Koenig and Barbara Moo.
"Teaching Standard C++," Parts 1-4, Journal of Object-Oriented Programming, Vol 11 (8,9) 1998 and Vol 12 (1,2) 1999. [4] Bjarne Stroustrup. The C++ Programming language (Third Edition) (Addison-Wesley, 1997). ISBN 0-201-88954-4. Bjarne Stroustrup is the designer and original implementer of C++. He is the author of The C++ Programming Language and The Design and Evolution of C++. His research interests include distributed systems, operating systems, simulation, design, and programming. He is an AT&T Fellow and the head of AT&T Lab's Large-scale Programming Research department. He is actively involved in the ANSI/ISO standardization of C++. He is a recipient of the 1993 ACM Grace Murray Hopper award and an ACM fellow.
While loops in C#

While loops are used to repeat a section of code while a specified condition evaluates to true. For example, a user could keep being asked to enter a password while the password they are providing is incorrect. When the password they provide is correct, the loop will end. The video below explains how to use while loops in C#. You can also scroll down to see the sample code.

Sample code

The sample C# code for a solution and project called MyApp is shown below. In this program, a variable called myNumber is created which is initially given an integer value of 1. A while loop is created which checks if the value of myNumber is less than 10. While the value of myNumber is less than 10, the value is displayed to the user and then increased by 1, each time the loop repeats. Each repetition of a loop is called an iteration. Try the code below in your own program.

    using System;

    namespace MyApp
    {
        class MainClass
        {
            public static void Main (string[] args)
            {
                Console.WriteLine ("Hello World!");
                int myNumber = 1;
                while (myNumber < 10)
                {
                    Console.WriteLine (myNumber);
                    myNumber++;
                }
                Console.ReadLine ();
            }
        }
    }

Do while loops in PHP

In the previous tutorial, we looked at how while loops can be used to test a condition before running a loop. While that test condition evaluates to true, the loop will continue running. The while loop tests a condition before the loop runs and will not run the loop if the condition evaluates to false. On the other hand, do while loops check the condition after the loop has already been executed. The loop will always run at least once, even if the condition evaluates to false. The do while loop syntax is split into two parts: the 'do' part and the 'while' part.
The 'do' part tells the loop what code to run and the 'while' part specifies the condition that will be tested. The 'while' part comes after the 'do' part. Do while loops do not have an in-built counter, but you can include a counter in the loop. Watch the video below and then scroll down to see the sample code.

While loops in PHP

In this tutorial you will learn how to use a while loop in PHP. A while loop's syntax is slightly different to a for loop's. A while loop will test a condition and will repeat a section of code inside the loop while that test condition evaluates to true. A while loop always tests the condition before running the code inside the loop (if the condition evaluates to true). Unlike for loops, while loops do not have an inbuilt counter, but you can include your own counter variable if you want to use one. While loops and for loops can often be used for the same purpose or to achieve the same goal. However, in different situations, one type of loop may be better than the other. For example, a for loop may be more efficient for going through each element in an array and, when written, may also express the statement in a clearer way. Watch the video below and then scroll down to see the sample code for a PHP while loop.

Loops (iteration) in C#

There are two main types of loops you can use in C# to repeat sections of code. These are called the while loop and the for loop.

while loop

The while loop is the easiest type of loop to use for repetition of code. The basic syntax is as follows:

    while (<condition>)
    {
        // do something
    }

It looks very similar to an if statement. However, an if statement only runs the code it contains once. A while loop will run the code that it contains over and over again until the specified condition evaluates to false. Any code inside the { and } brackets will run inside the loop.
Consider a while loop in C# that uses a count variable. In such an example, the count variable is initially set to 0. The loop will check if the count variable is less than 10. If it is less than 10, it will add 1 to the count variable. This will keep repeating until the condition evaluates to false, when the count variable's value is no longer less than 10. When this occurs, the loop will end and the value of the count variable will be displayed (on the last line of the code, which is outside of the loop). It is important that a condition be specified that will allow the loop to end, otherwise the loop will never end! This is known as an infinite loop.

for loop

The for loop is a little more complex than the while loop, but at its simplest level it is very easy to set up. The syntax looks like this:

    for (<initialise counter>; <condition>; <increment the counter>)
    {
        // do something
    }

Semi-colons separate three important components of the for loop inside the ( and ) brackets. The first part is where a counter is initialised, for example int i=0. The second part is where the condition is specified, for example i<10. The third part is how much to increment the counter by each time the loop runs (each iteration), for example i++ would increment the counter by 1. for loops are great for using as counters to repeat a section of code a certain number of times. They are also great for repeating operations on each item in an array (looping through an array) or each character in a string.

While loops

A while loop basically runs a piece of code while a test condition evaluates to true. A while loop can repeat a set of instructions (code) over and over again until the test condition evaluates to false and the loop breaks. The rest of the program will then continue running. Watch the video below, which explains a few different ways that you can use while loops in a program.
You can also view it on YouTube. In one example, a string is used instead: the loop checks if the password is correct and then displays a message if it is correct. However, there is a problem with that code...can you spot it? Scroll down to find out what is wrong with the code. The problem is that the condition for the while loop will always evaluate to true, and so the loop will never stop running! This is called an infinite loop. The password variable is set to "potato" and, while the password stored is equal to "potato", the loop will run again and again. But there is no way to enter a different password...unless we allow for user input! One way of allowing for user input is to use a JavaScript prompt. With a prompt, the while loop is set up a little differently. Tip: You can also use the break command to end a loop if a condition has evaluated to true. You could use an if statement inside a while loop and then use the break; command to break the loop.

Next tutorial: Do..While loops
2.2 LESSON

Lesson Overview

How does object-oriented programming (OOP) work within the Microsoft .NET Framework? In this lesson, you will explore:
- Polymorphism
- Using interfaces in application design

Anticipatory Set

1. What are the basic objects that are part of the game? For each object, list a few actions that the object does in the game.

OOP is a programming paradigm or model in which classes and objects work together to create a program or application. It enables developers to think about a problem in a way that is similar to how we look at the real world: many objects that interact with each other. The fundamental building blocks of object-oriented programs are classes and objects. You can create multiple houses from one blueprint and multiple batches of cookies from one recipe. Each house (or each batch of cookies) is referred to as an instance. Classes and objects are not the same thing; a class is not an object, just as a recipe is not a batch of cookies.

Inheritance

Inheritance is a technique in which a class inherits the attributes and behaviors of another class. It allows the developer to reuse, extend, and modify an existing class. Example: The Aircraft class might define the basic characteristics and behaviors of a flying vehicle. Methods in that class might include Fly(), TakeOff(), and Land(). The Helicopter class could add a method called Hover(), which might replace or override the TakeOff() method so that takeoffs are vertical.

Inheritance Vocabulary

The class whose members are inherited is called the base class. The class that inherits those members is called the derived class. In our previous example, Aircraft is the base class; Helicopter is the derived class. Some developers refer to a base class as a superclass or a parent class; a derived class could then be called a subclass or a child class.
We can test the validity of an inheritance relationship with the so-called Is-A test. A helicopter is an aircraft, so that is an appropriate use of inheritance. However, the Car class should not inherit from Aircraft, because "a car is an aircraft" doesn't make sense.

In Microsoft C#, inheritance is established using the colon (:):

    public class Helicopter : Aircraft
    {
    }

In Microsoft Visual Basic, inheritance is indicated with the Inherits keyword:

    Public Class Helicopter
        Inherits Aircraft
    End Class

Polymorphism

Polymorphism is a language feature that allows a derived class to be used interchangeably with its base class. In our example, a Helicopter can be treated like an Aircraft when the developer writes code. If you developed a JumboJet class, it could also be used like an Aircraft. Polymorphism is especially useful when working with collections, such as lists or arrays. A collection of Aircraft could be populated with Helicopter instances and JumboJet instances, making the code much more flexible.

Interfaces

An interface defines a set of properties, methods, and events, but it does not provide any implementation. It is essentially a contract that dictates how any class implementing the interface must be implemented. Consider a power outlet in the wall, which is similar to an interface: it dictates that any appliance that wants to use that power must follow some basic rules (how many prongs are required, the shape and spacing of those prongs, and so on). Any class that implements an interface but does not include the specified members (properties, methods, and events) will not compile. An object cannot be instantiated (constructed) from an interface.

Lesson Review
Ruby 2.7 — Enumerable#tally

Christmas has come and passed, 2.6 has been released, and now it's time to mercilessly hawk the releases page for 2.7 features so we can start our fun little annual tradition of blogging about upcoming features. Typically this means another December release, but there have been cases of methods making it in earlier if they're merged to trunk this early in the year. This round? We have the new method Enumerable#tally!

The Short Version

tally counts things:

    [1, 1, 2].tally
    # => { 1 => 2, 2 => 1 }

    [1, 1, 2].map(&:even?).tally
    # => { false => 2, true => 1 }

Examples

The example used in Ruby's official test code is:

    [1, 2, 2, 3].tally
    # => { 1 => 1, 2 => 2, 3 => 1 }

Without a block, tally works by counting the occurrences of each element in an Enumerable type. If we apply that to a list of another type it might be a bit clearer:

    %w(foo foo bar foo baz foo).tally
    # => { "foo" => 4, "bar" => 1, "baz" => 1 }

Currently tally_by has not been accepted into core, so to tally by a function you would instead use map first:

    %w(foo foo bar foo baz foo).map { |s| s[0] }.tally
    # => { "f" => 4, "b" => 2 }

There's discussion happening at the moment on accepting this feature, which would make the above syntax:

    %w(foo foo bar foo baz foo).tally_by { |s| s[0] }
    # => { "f" => 4, "b" => 2 }

Why Use It?

If you've been using Ruby, chances are you've used code something like one of these lines to do the same thing tally is doing above:

    list.group_by { |v| v.something }.transform_values(&:size)
    list.group_by { |v| v.something }.map { |k, vs| [k, vs.size] }.to_h
    list.group_by { |v| v.something }.to_h { |k, vs| [k, vs.size] }
    list.each_with_object(Hash.new(0)) { |v, h| h[v.something] += 1 }

There are likely several more variants of this, but those are a few of the more common ones you might see around. This is a nicety method to abbreviate a very common idiom in the Ruby language, and a very welcome one.

Vanilla Ruby Equivalent

What does this method do?
Well, if we were to implement it in plain Ruby it might look a bit like this:

    module Enumerable
      def tally_by(&function)
        function ||= -> v { v }

        each_with_object(Hash.new(0)) do |value, hash|
          hash[function.call(value)] += 1
        end
      end

      def tally
        tally_by(&:itself)
      end
    end

In the case of no provided function, it would effectively be tallying by itself, or rather an identity function. An identity function is a function that returns what it was given. If you give it 1, it returns 1. If you give it true, it returns true. Ruby also uses this concept in a method called itself. This article will not go into great depth on what the above code does; Part Five of "Reducing Enumerable" covers this code in more detail.

The Source Code

Nobu recently committed a patch to Ruby core to add this method: "enum.c (enum_tally): new methods Enumerable#tally, which group and count elements of the collection. [Feature #11076]" It was accepted by the Ruby core team under the name tally.

Tally?

Let's start with what the word means: "A tally is a record of amounts or numbers which you keep changing and adding to as the activity which affects it progresses."

Now where did this name come from? Originally the name count_by was proposed, but the name was rejected as it differed from count, which has a different return type and behavior. On a car ride back from the Tahoe area and RailsCamp West, we (myself, David, Stephanie, and Shannon) were discussing potential alternate names to try and propose, to see if the feature could get in under a different name. David had proposed tally, and formally suggested it. It looks like the name stuck, and the code's been merged into trunk. I'd presented a talk at a few conferences, and decided to mention tally_by instead of count_by in one section of my RubyConf talk.
The written version is Part Five of "Reducing Enumerable". Just a bit of interesting backstory.

Wrapping Up

2.7 is on its way; let's see what it'll bring! I'm looking forward to seeing where Ruby goes from here.
The general recommendation is for developers to have their own virtual machine (or physical machine) that contains a complete SharePoint installation. There's a variant where the SQL Server is shared, but it will still be several SharePoint farms with a single server in each. The problem with these is that you end up with multiple farms to manage. This can work well for the first deployment but will be difficult for incremental releases. It can also work well for developers working for different production environments. However, when developers are working for a single production environment, other requirements will come in. Developers already had those requirements without impacting anyone else. After living with multiple farms using shared resources, I implemented a single farm at a customer. While there are identifiable caveats, it also brings a lot of value while diminishing overall time spent on maintenance and on bugs due to non-synchronized environments.

For WCM, you will have two types of development: 1) content-related and 2) pure development artefacts. The first is easy to handle: your web designer can simply work like before and it will generally not impact anyone else; in fact, it's beneficial since page layouts will be updated for everyone at the same time. The only drawback is that I've often seen web designers doing check-ins or publishing layouts to test them. This isn't optimal for anyone, as it uses more resources on SharePoint, and if the layout has a display issue, everyone will have it. Therefore, some assistance (how-to's) should be planned to ensure that web designers are working optimally for everyone. For development artefacts, it can be a little bit more complicated depending on the type of artefact. For example, if you are developing a custom SharePoint Field, results will be inconsistent if you add a new field instance to a content type that is used by everyone.
In the same fashion as the previous case, working on a separate site collection (or at least a separate content type) might be a more manageable solution for the few times when you are developing those artefacts. In fact, for most cases, it's the visibility that is more important than the actual change. The reason for this is that a single developer working on a custom Field or a Web Part will only work on his development machine, for obvious reasons. If the farm, web application, and site collection are shared, then we may have to create an unused sub-site for testing purposes during development. While this may seem cumbersome, at least on "paper", it's actually quite simple and is much simpler to manage than multiple environments. This allows for fewer requirements on the development workstations, and resources are shared: the database/disk/backups footprint is lessened.

At the next level, you can have the bulk of your development done in a single shared farm, but this doesn't mean you cannot have one or two other farms for specific development artefacts if they prove too cumbersome. For example, you may want to have a specific farm (with content) used for various special operations such as testing new CUs/Service Packs, 3rd-party tools and solutions, etc. All without affecting your standard integrated testing environment; it's still easier to maintain two farms than eight to ten. I had a scenario where we created a development farm for support. Its purpose was to test anything else that wasn't in the standard development line. So far, I've had good success with the single-farm topology. There is some added governance to be done while implementing, primarily for developers and web designers, but this ensures that you have a good definition of what type of work is being done and which one you wish to support. On a daily basis, developers didn't have a negative impact due to improper updates by someone else.
On a longer basis, management was much easier, especially with content updates and patches: we are now 100% positive that everyone sees the same thing. While this may not be ideal for some situations, it certainly proved to work well for WCM development.

On June 1st, I started the Microsoft Certified Master (MCM): SharePoint journey with 16 other candidates from across the world. I can proudly say today that I successfully met all requirements to pass the first 'RTM' SharePoint Master rotation. This has been an incredible experience and by far the best professional training I've ever attended in nearly 14 years of development experience. It's challenging, it's tough, it's time-consuming; but it's also extremely rewarding, valuable, and worth the investment (both personal/time and financial). While you will learn a lot about SharePoint (especially how much you didn't know to begin with), you will also meet several other experts with whom you will share a connection: you have been selected through a rigorous review as a candidate for SharePoint Master. This stays. This is true value over time. I'm also very proud to be among the first Canadians to become an MCM SharePoint (I had a fellow Canadian candidate on the RTM rotation) and will be able to share this experience with both French and English audiences.

There were discussions on the value of going for MCM SharePoint 2007 while knowing that SharePoint 2010 is coming out; the first rotation of SharePoint MCM 2010 is scheduled for next year. Personally, I'm very happy to have done this one for several reasons: (1) There are a lot of customers with 2007 today and more will be on SharePoint before 2010 comes out; (2) We will have to migrate to 2010, which means a good knowledge of 2007 to start with; (3) There is a lot to learn on SharePoint 2007 in only 3 weeks, and 2010 will also be 3 weeks with even more to learn!
(4) It was announced that there will be an upgrade path from MCM SharePoint 2007 to 2010. If you are wondering whether the cost is worth it, especially for non-Microsoft employees: I haven't heard a single partner mention that they didn't get their investment's worth. If you compare this to any other training program, it will still cost several thousands for 3 weeks and it won't begin to compare. You have access to several worldwide experts on the product. You have access to incredible hardware to test whatever you need on SharePoint (multi-farm/domains included!). You get to not only learn from the best, but also share with them. Also, discounts and new dates have been announced on the master's blog. As an ending comment: be prepared, both technically, mentally, and physically. Don't assume you will breeze through it; you won't, and nobody did. You will get out of it what you put into it, and be very proud of it. Hope to see more of you in the MCM community soon!

There are a lot of discussions and articles on SharePoint Variations containing good arguments on why the feature isn't all it's prepped up to be. This will not be a post to debate whether they are useful or not, but merely how to make them more useful than what I see in the field. First of all, when discussing variations with a customer, he's likely to be sold on the 'multilingual' bullet from a PowerPoint slide. I sometimes hear that the feature should do a complete translation automatically (nothing less :)). The first thing about Variations is to know when to recommend them and when they aren't useful enough. After that, the next thing is to have a good discussion with the customer about what it means to use variations. Here are a few quick and simple facts on Variations. While this should make sense, the default language is usually also the one with the most content updates. For example, you set 'English' as the source label, and you create two labels, 'French' and 'Spanish'.
For the 'Go-Live', you create all your content in English and it is copied over to the other 2 labels. You have translators go over and fix your other labels. Within a couple of days (if not before the Go-Live), you start updating content in your English label and authors start complaining in the other 2 labels that their localized content is 'reverted back to English'. What happens is that the French version 1.0 of a page got bumped by a v1.1 in English. While the author can edit the page, go back in history, and revert to the 1.0, he will have to re-approve/publish the page as a 2.0. This is very annoying indeed. Ask your customers if they really want to translate their complete web site all the time. This is usually not the case. They may want their press releases but not all news, as news is more targeted. They may want their products, or maybe only a segment of products (i.e.: corporate products versus localized offerings). Well, that one's a simple fact: variations are only for publishing pages. In my experience, while some documents are clearly desired in a few languages, all the documents in the sites are unlikely to require translation. Also, I often see more of a need for a 'document set' where you want to see all languages available for a document from any site (i.e.: show me the French and English document in both the French and English site). It's not. Simple. A human is required as the glue in between. The feature does provide a way to export a label (or a section), and there are 3rd-party products that can take that export and provide a more comfortable system for translators to work in, but there's still a human involved. Well, that's true, but SP2 fixed it by providing an STSADM command to create labels. The problem was that it took too long for the web process to finish the creation and you were left hanging. That's simply not true. You can author as much as you want in all labels.
The worst that can happen is if you create a site or a page in a sub-label, and then you try to create content with the same name in the source variation. What will happen is that content from the source will not be pushed down to that sub-label only. This isn't necessarily bad, actually, and the variation log will tell you the issue and you can resolve it if necessary. Well, if you want to think outside the box, you can surely come up with some workflows or custom events to do the work. Is it worth it? Maybe if you really need all the content to be brought to all languages only one time, but hardly. In my opinion, you usually need roughly 30% of your content in all languages, give or take. But this works even if you have 100%! The source variation is simply a 'creator'. If you need content for all labels, that's the place to go. If you need to update a page's content with something major for all labels, that's the place to go, and then you'll have to update all labels. But when you want to fix a typo or add some comments in one language, that's not the place to go. So in our previous sample, we had 3 labels required: English, French, and Spanish. Instead of picking English as the source, create a label called 'Source' or 'GlobalContent' (or something that makes the most sense to your authors). This 'Source' will be in English, but your visitors will not go there. In order to do that, you can pick a label language that isn't yours (i.e.: if you are in the US only, pick English UK as your source, and then English US for the English sub-label). The other way is to customize the Variation Root Landing logic (). Alright, so now you have 4 variations instead: Source (English UK), English US, French, and Spanish. You will probably have few Contributors in your source variation, maybe only Hierarchy Managers. You would then set appropriate permissions in your labels only. Remember, only create or update content in the 'Source' when it's global.
When it's not, authors should go to their respective label and simply create it there. This is totally safe and supported. This way, you will rarely erase 'French' or 'Spanish' content with 'English' at each update. If you define your information architecture and governance right, and you explain the variation process accordingly to your customer and content authors, this will probably be the simplest solution for variations and should cover most of your customer's needs.

I was helping a customer yesterday who had an unexplained issue in production: a Web Part was behaving as if it had rolled back its code. As it turns out, the following happened: Debugging this obviously took some time, until we could get access to the production server to download the DLL from the GAC and disassemble it, but it shows how bad a practice it is to have manually updated DLLs in the GAC that were originally deployed from a WSP. Unfortunately, Best Practices go only so far: we recommend using WSPs, and sometimes we understand the benefits but not consequences such as this one. There may be other operations that could redeploy solutions, such as adding a farm server or installing a CU or SP, but I didn't confirm this. So all in all, if you have to manually update a deployed-through-WSP DLL in the GAC, for whatever (political) reasons (which is usually because of permissions or rigid processes), plan to update the WSP as soon as possible.

A few months ago, I came across this issue where the number of returned items in a query wasn't as expected. Context with normal queries: to give some context, let's say you have a home page with several article sections which are in turn served by Content By Query Web Parts.
Your queries will either target a specific article sub-site or use metadata, with a query along the lines of:

SELECT TOP 5 Article.Title FROM Article-Site-URL [WHERE MetadataA='value'] ORDER BY LastUpdated DESC

This will show the last 5 updated articles from that site, filtered by metadata if necessary. With daily data being added to all article categories, your home page is designed to display 5 articles in each category. Usually no less, but certainly no more than 5 items.

Context with audiences: now suppose you have an authenticated portal and you'd like such an article section with targeted content. For example, the articles are grouped by a logical business meaning but they are targeted to different groups of users. You will use the same Content By Query and check "Filter results by audiences" to show only the articles for that user's audiences. You would expect the "Top 5" to work correctly. Unfortunately, the query resembles something more like:

SELECT Article.Title FROM (SELECT TOP 5 Article.Title FROM Article-Site-URL ORDER BY LastUpdated DESC) WHERE Article.TargetAudiences CONTAINS (Users.AudiencesGUIDs)

If you look closely at the query, it does a standard TOP 5 query on the content without an audience check. This first query may return 5 items, but after the audience filtering, anywhere from 0 to 5 items will be displayed to the user. The query should rather be something like:

SELECT TOP 5 Article.Title FROM Article-Site-URL WHERE Article.TargetAudiences CONTAINS (Users.AudiencesGUIDs) ORDER BY LastUpdated DESC

**Note: the queries aren't actually like this in the APIs; I simplified them for explanation.**

For a customer using audiences, this can be a show-stopper as you cannot effectively plan your home page queries. The only workaround was to augment the limit of items returned, but since you cannot effectively plan how many you will need, it will break the home page's layout by either showing too many articles or not enough.

Fix!
Originally, this was apparently by design and wouldn't be updated until the next release (Office 14). Fortunately, I'm very happy to have received news that it will be released in an upcoming Cumulative Update, along with the fix for multiple audiences explained here.

We came across this issue lately where performance was degrading gradually and we couldn't figure out exactly why. What we noticed was that our Page Output Caching, which should last 2 hours, wasn't always lasting that long. The WCM Intranet portal is used by several thousand authenticated users (through Active Directory using NTLM) every day. We have targeted and secured content throughout the site. Page Output Caching is enabled for maximum efficiency (and definitely required for a large audience), along with the other caching mechanisms such as Object Caching (which has been augmented) and application caching. Unfortunately, in testing environments, we couldn't originally reproduce the problem as it seemed to be random in production. Also, while we had most of the production content, we hadn't noticed one critical component: one of the links in the top navigation bar was targeted, as planned, but to multiple audiences instead of a single audience. Now, this doesn't seem like a problem; after all, authors are able to do this and we cannot block it. Unfortunately, it seems that as soon as a member of one of those audiences logs in to the site, it will flush all page output caches for that page. So whenever a page renders a targeted Web Part or a targeted link, if that targeting contains multiple audiences, all caches for that page will be flushed. We are looking at finding a solution with the product and support teams and I will let you know when I have a change of status. As a workaround, for now, you have to use a single audience to target content.
You have 2 choices: compile an audience for that need, or, if it's targeted to multiple Active Directory groups, create a MOSS group that contains all those AD groups and target the content on the MOSS group.

Hi everyone, and sorry for not posting for several months. My 'Draft' list keeps piling up but I didn't take the time to unpile it much. Anyhow, I came across this problem again last week with the out-of-the-box Content Editor Web Part (CEWP). Basically, if you use the rich HTML text editing capability, all URLs are absolute. I came across a similar issue last year while using the HtmlField and Content Deployment, where the URLs were still pointing to the authoring web site. The CEWP was also known to have the same issue. Back then, we had fixed the general issue with a quick HttpModule installing an HttpFilter which was essentially rewriting URLs. In terms of architecture, I didn't like the idea, but it was a quick fix done in the early hours. It did the job. Since then, we removed Content Deployment so we didn't have the issue anymore, and thus removed the HttpModule. While adding new Alternate Access Mapping URLs to the web site, we noticed some link errors, with only the CEWP this time. I looked at past support cases to see if the issue had been resolved in any Cumulative Update (up to December 2008), but no, it's not. Apparently, the fix involves so many changes in the source code that it's too high a risk. As of right now, it's not scheduled to be fixed. The official solutions were: While I'd definitely recommend the RadEditor control, it wasn't fixing my currently existing pages that use the CEWP. My next thought was back to the HttpModule, but then I remembered looking at ASP.NET Control Adapters. An ASP.NET Control Adapter is basically a way to modify the Render of a control. It's a little bit like inheritance, but only for Render, AND it works with sealed classes such as the ContentEditorWebPart class.
Since I was a bit rusty with Control Adapters, I typed in a search for "ContentEditorWebPart ASP.NET Control Adapter" and came across this post from Waldek Mastykarz. He lays out most of the essentials, but he strips all absolute URLs, and this was impractical in my scenario. In most scenarios, you'll want to strip a specific list of URLs. In the case of Content Deployment, you'll want to supply the list of AAMs from the authoring environment (if it's in the same farm, you could do this programmatically, much like what I'm doing in this code, but supply a different URL to the SPWebApplication.Lookup method; if it's a different farm, supply the list in a config file). In the case of a single site with multiple AAMs, this code will work great:

using System;
using System.Collections.Generic;
using System.Text;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.Adapters;
using System.IO;
using System.Text.RegularExpressions;
using Microsoft.SharePoint.Administration;

namespace MaximeBBlog
{
    public class ContentEditorWebPartAdapter : ControlAdapter
    {
        protected override void Render(System.Web.UI.HtmlTextWriter writer)
        {
            // Render the adapted control into a buffer first
            StringBuilder sb = new StringBuilder();
            HtmlTextWriter htw = new HtmlTextWriter(new StringWriter(sb));
            base.Render(htw);
            string output = sb.ToString();

            // Replace every alternate access mapping URL with a relative root
            List<Uri> alternateUrls = GetAlternateUrls();
            foreach (Uri alternateUrl in alternateUrls)
            {
                output = output.Replace(alternateUrl.ToString(), "/");
            }

            writer.Write(output);
        }

        private List<Uri> GetAlternateUrls()
        {
            // Cache the AAM list for 12 hours to avoid repeated lookups
            List<Uri> alternateUrls = (List<Uri>)HttpContext.Current.Cache["AlternateUrls"];
            if (alternateUrls == null)
            {
                alternateUrls = new List<Uri>();

                SPWebApplication webApp = SPWebApplication.Lookup(System.Web.HttpContext.Current.Request.Url);
                foreach (SPAlternateUrl alternateUrl in webApp.AlternateUrls)
                {
                    alternateUrls.Add(alternateUrl.Uri);
                }

                HttpContext.Current.Cache.Add("AlternateUrls", alternateUrls, null, DateTime.Now.AddHours(12), System.Web.Caching.Cache.NoSlidingExpiration, System.Web.Caching.CacheItemPriority.Normal, null);
            }

            return alternateUrls;
        }
    }
}

In a nutshell, I'm doing the following: In terms of performance, it's negligible, and it won't be reprocessed if you have Page Output Caching enabled. SPWebApplication.Lookup is also very fast, and I'm keeping the list in cache. As for the architecture, it ensures you are only stripping the correct URLs for only that control. You'll notice that there is no mention of the ContentEditorWebPart so far, and that's because we aren't inheriting this Web Part. That's where the Control Adapter magic starts, and you could even attach it to multiple controls if there's a need! To bind it, you need to add (or update, if you have one already) a compat.browser file in the App_Browsers folder of your web site. This file requires the following:

<browsers>
  <browser refID="Default">
    <controlAdapters>
      <adapter controlType="Microsoft.SharePoint.WebPartPages.ContentEditorWebPart" adapterType="MaximeBBlog.ContentEditorWebPartAdapter, MaximeBBlog, Version=1.0.0.0, Culture=neutral, PublicKeyToken=febc2e2cb2c3d564" />
    </controlAdapters>
  </browser>
</browsers>

Note: All you need to update is the namespace in both the code and the compat.browser file to match your own, as well as signing the assembly with your own key. You'll notice that the DLL is signed; this allows me to deploy it in the GAC (through a WSP, for example) and NOT require Full trust. I'm still using WSS_Minimal, in fact. There is NO requirement for a SafeControl entry in the Web.Config either. Once your DLL is in the GAC and the compat.browser file is in the App_Browsers directory of your web site, that web site will render the ContentEditorWebPart through your class.
Note: if your application domain is already loaded, you may need to recycle it for the update to start working, especially if the page was already in the page output cache. Thanks again to Waldek for his early start on the issue. Cheers!

In a customer deployment, we had several custom Site Definitions to create (because they have variations) and we started noticing that the list of sites available on the Create Page is sorted in an odd way. In fact, it uses neither the creation order, the configuration ID, nor the title (which is okay, since it's multilingual). Using Reflector, I noticed that the page was using the GetAvailableWebTemplates() method of the SPWeb class to get its list. I tried sorting the list the way I wanted and then using SetAvailableWebTemplate(), but the list was still being returned sorted the original way. Digging deeper with Reflector, I noticed that it uses the web property "__WebTemplates", and that this returns a list of 'All' web templates as well as web templates by LCID. Here's the method I created to sort the web's site creation list. You need to pass an SPWeb, a flag to determine whether you want to be recursive, and the LCID of your web. In my case, I only had All + 1033 in English, and All + 1036 in French. If you have multiple languages available in the same site, you may need to update the code a little bit.
private void SortWebTemplates(SPWeb web, bool recursive, uint lcid)
{
    SPWebTemplateCollection webTemplates = web.GetAvailableWebTemplates(lcid);
    StringBuilder sb = new StringBuilder();

    // Insertion-sort the templates by title
    Collection<SPWebTemplate> collection = new Collection<SPWebTemplate>();
    foreach (SPWebTemplate template in webTemplates)
    {
        bool itemAdded = false;

        for (int i = 0; i < collection.Count; i++)
        {
            if (template.Title.CompareTo(collection[i].Title) < 0)
            {
                collection.Insert(i, template);
                itemAdded = true;
                break;
            }
        }

        if (!itemAdded)
            collection.Add(template);
    }

    // Rebuild the "__WebTemplates" property XML in sorted order,
    // once for 'all' and once for the given LCID
    sb.Append("<webtemplates><lcid id=\"all\">");
    foreach (SPWebTemplate webTemplate in collection)
    {
        sb.Append("<webtemplate name=\"" + webTemplate.Name + "\" />");
    }
    sb.Append("</lcid><lcid id=\"" + lcid.ToString() + "\">");
    foreach (SPWebTemplate webTemplate in collection)
    {
        sb.Append("<webtemplate name=\"" + webTemplate.Name + "\" />");
    }
    sb.Append("</lcid></webtemplates>");

    web.AllProperties["__WebTemplates"] = sb.ToString();
    web.Update();

    if (recursive && web.Webs.Count > 0)
    {
        foreach (SPWeb subWeb in web.Webs)
        {
            SortWebTemplates(subWeb, recursive, lcid);
            subWeb.Dispose();
        }
    }
}

Maxime

Anyone who has worked with the BlobCache has wanted to clear it at some point (especially when developing or debugging issues). The BlobCache is the caching mechanism that keeps a local copy of large files. For example, your pictures, styles, and scripts can be cached on each front-end. This reduces SQL traffic. The only visual way to clear the BlobCache is to navigate to and click on "Force this server to reset its disk based cache". Unfortunately, this only works for the server you are on, which is very interesting in a load-balanced scenario (you'll need to use a hosts file, point to each server in turn, and navigate to the web site).
Then, if you go about reading on Object Caching at, you will notice that there is a note on the BlobCache where you can execute the following command to clear it: "stsadm -o setproperty -propertyname blobcacheflushcount -propertyvalue 11 -url". The note is unclear on whether this should work for all servers or just one, but it works for all servers... Unfortunately, it only works once! The reason is simple: the BlobCacheFlushCount has to be incremented each time. Now this isn't very practical if you want to script it! If you look at Disk Based Caching for Binary Large Objects, you'll notice it mentions you can simply check "Force all servers in the farm to reset their disk based cache". This would be great, but where is that checkbox??? If you look at the ObjectCacheSettings.aspx file (in Layouts), you will indeed find such a checkbox. If you use Reflector to open the class behind the page, you'll notice that, in the OnLoad method, the checkbox's Visible property is set to False! Isn't that practical... On the other hand, by using Reflector, you can see that the available checkbox calls a FlushCache() method from the BlobCache class, but that one is internal. You can also notice that the invisible checkbox's code is available and is actually quite simple. It does what the STSADM command executes, but it increments the value.
Here's the code that allows you to clear the BlobCache on demand and for all servers:

SPSite site = new SPSite("");
string s = "0";

if (site.WebApplication.Properties.ContainsKey("blobcacheflushcount") && site.WebApplication.Properties["blobcacheflushcount"] != null)
    s = site.WebApplication.Properties["blobcacheflushcount"] as string;

site.WebApplication.Properties["blobcacheflushcount"] = (int.Parse(s, CultureInfo.InvariantCulture) + 1).ToString(CultureInfo.InvariantCulture);
site.WebApplication.Update();
site.Dispose();

I've also created a small Windows application that does just that, or you can create your own custom STSADM extension to do the same thing. You can download the application here:

Update: I'm also happy to mention that Sean McDonough created a Feature, deployed through a WSP, that does the same thing with a friendlier web interface. It can be found on Codeplex at this address:.

We have been having this error for quite some time, and very randomly. It's not that the exception happens often; it's that when it does happen, it breaks that Content Deployment Job until you delete it (along with the path). As it turns out, reading through the May 20th WSS fix, there was the same error, but in what seems to be another situation. Basically, the error occurs with Quick Deployments. So we tried to break the environment and found out how: What the article doesn't mention is that if you have the following: As for the WSS + MOSS May 20th patches, they are still private and I do not recommend installing a private hotfix in production unless every other option has run out. When they become public, we'll test them here to see if they also fix the other job types.

For the past few weeks, I have been on parental leave (we just had our first baby girl) and unable to write much. However, it seems that there were some issues with Content Deployment at work.
Actually, the installation of SP1 and Blackout had been scheduled for a while, but it didn't resolve as many issues as we had hoped. As such, we were still plagued with some "Object Reference Not Set" and "Primary Key constraint violation..." errors during the Export phase. By calling support, my team was directed to another hotfix (950279) that had been released just after Blackout and we hadn't noticed. This is the 3rd hotfix that I remember seeing that fixes "Primary Key constraint violation...". What you need to know about these errors is that they are mostly generic. Also, it's not by exporting the content and importing it in your dev environment that you will be able to reproduce the issue. The error is Content Deployment specific (I didn't check which database hosts the Content Deployment jobs and paths, but it's likely to be the Configuration database, or otherwise Central Administration's). By deleting the Jobs & Paths, you will likely remove the issue that causes the Export to fail. It seems that issues that were there before the upgrade were still there after. Also, the thing to know is that, after deleting/recreating the jobs & paths, the next incremental deployment will not have any knowledge of what has been deployed previously and will deploy everything. For some reason, the first incremental gave us some issues where we had several 'file already exists with the same name' errors. I went ahead and deleted the Jobs/Paths again, did a Full Content Deployment once, and all other deployments have been Incremental and running fine. So unless you have no problem installing hotfixes (private and/or public), I would try this simple no-risk step before installing anything in production. While, in some situations, it may only be a temporary solution, it may buy you enough time to test the hotfix deployment in your environments and hopefully wait for a public fix or even a Service Pack.
I ran into some more Content Deployment issues lately and I was going through the manifest (see for more examples) with some difficulty. First of all, let me give you some quick background. When you use Content Deployment, it creates a series of CAB files that contain a file called manifest.xml. This is the file that drives the show. Unfortunately, especially with Incremental deployments, the file is deleted at the end of the deployment, even if it failed. Basically, right after the "transporting" phase, the "importing" phase kicks in. This is the best time to go, with Windows Explorer, to the destination server and make a copy of the Content Deployment's files (in the last modified folder). If you open ExportedFiles.cab (not ExportedFiles1.cab or any other cab file), you will be able to get the Manifest.xml file. All this tool requires is that file, as well as the last number of imported objects that you have in the Content Deployment report. Type in the same number and it will show you the next object that was being deployed when the import failed. As you can see, I simply get the right object and create an XML document with only that object. You can then navigate it as you would in Internet Explorer. The 2nd tab contains the text of the object only, and you can do a 'select all / copy' if you need to. Note that the manifest location and number of objects imported are saved in the application configuration so that you don't have to enter them every time. Last, the 'Solution Helper' tab is the one that I will update over time (or you can send me manifest samples with solutions so that I can test + develop + document the solution in the tab for others). The first example of this is an issue I had in an environment where all columns were being deleted and recreated. The Incremental deployment was failing with a message stating that you cannot delete a column that's in use.
When the Solution Helper sees that the object is a 'DeploymentFieldTemplate' (a field), it will attempt to get all the document libraries that use the field (according to the manifest) and show them there. (In our case, it turned out that a RetractSolution + AddSolution had occurred and was deleting + recreating the same columns. A Full Content Deployment will 'fix' the error.) While it's in no way a complete solution, I hope it may help you see which objects are failing and give you ideas on what to fix. You can download it here:

As you probably know if you read my blog, we like to automate our WCM portal deployments as much as we can. One of the things we do is update the Publishing Navigation to hide some sites/pages or simply re-order them correctly. Unfortunately, when you have variations and you want to automate this for all languages, you have to specify some kind of "sleep" time so that it waits for the Variation Timer Job to kick in and create the site in all labels. Obviously, the sleep time period is guesswork and is different on all servers (even on the same one!). During the first attempts, our sleep timer was rather low and we noticed that if we tried opening a site in a target variation/label while the variation process was doing its work, the process simply stopped! I tried restarting the OWSTIMER service, rebooting the server, waiting a day... but no luck, it simply wasn't picking up again. The Timer Job definitions were still 'running' every minute; they simply didn't check whether there were any updates to propagate. Unfortunately, so far, I haven't found a way to fix it and, to be honest, we simply delayed running our updates on all labels to a later stage. While I did take a look at trying to 'kick-start' the process, I didn't go very deep, and if I have the issue again, I'll try Gary Lapointe's variation relationship fix (which is VERY interesting and practical even if you don't have the issue I outlined here).
If that doesn't work, I'll go see if the relationships are created anyway but it's the Timer Job that cannot read them anymore. I'll update the post when I have the time to reproduce the error and debug it.
http://blogs.msdn.com/maximeb/default.aspx
README

useMediaQuery and useElementQuery

These React hooks make it easy for you to match with your media queries.

Get it: npm install use-element-query --save

What?

Given things you want to do with your media queries...

const queries = {
    '(max-width: 299px)': { thing: 'A' },
    '(min-width: 300px) and (max-width: 599px)': { thing: 'B' },
    '(min-width: 600px)': { thing: 'C' }
}

... you can ask for a list of things that match!

function ResponsiveComponent() {
    const things = useMediaQuery(queries)
    // server side: = []
    // client side: = [{ thing: 'A' }]
    //          or: = [{ thing: 'B' }]
    //          or: = [{ thing: 'C' }]
    return (
        <div>
            Things that match:
            <pre>{JSON.stringify(things, null, 4)}</pre>
        </div>
    )
}

As the output is an array, you can have multiple matching queries at one time. Also, you can use whatever you want as your "thing": it can be a string, object, array, React component, CSS styles, callback function...

useMediaQuery works with window.matchMedia so you can use any query that works with it.

What about useElementQuery?

This is an extension that creates an <object data="about:blank" type="text/html" /> element inside a container element that is determined via ref. A position that is not static is required, as the object element is made as large as the container using position: absolute. This is a trick from 2013 used for element resize detection. This library, however, instead uses the matchMedia of the object, which then allows media queries for a viewport that is the same size as the container element!
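The README doesn't show how the matching itself works. As a rough sketch (an illustration only, not this library's actual source), the core of a useMediaQuery-style hook can be reduced to a pure function over matchMedia results:

```javascript
// Sketch of the matching logic behind a useMediaQuery-style hook
// (an illustration, NOT this library's actual source).
//
// Given a map of media queries to arbitrary values, return the values
// whose query currently matches according to a predicate. In a browser
// the predicate would be `q => window.matchMedia(q).matches`; the hook
// would additionally subscribe to each MediaQueryList's change events
// and re-render the component when a match flips.
function computeMatches(queries, matches) {
  return Object.keys(queries)
    .filter(matches)
    .map((query) => queries[query]);
}

// Example with a fake predicate standing in for window.matchMedia,
// simulating a viewport that is between 300px and 599px wide:
const queries = {
  '(max-width: 299px)': { thing: 'A' },
  '(min-width: 300px) and (max-width: 599px)': { thing: 'B' },
  '(min-width: 600px)': { thing: 'C' },
};
const fakeViewport = (q) => q.includes('(min-width: 300px)');
console.log(computeMatches(queries, fakeViewport)); // [{ thing: 'B' }]
```

On the server there is no window.matchMedia at all, which is why the hook's result starts as an empty array until the client mounts.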
Usage

import React, { useMemo } from 'react'
import { useElementQuery } from 'use-element-query'

// define this outside the component to keep the same reference
// or if you need this to be dynamic: useMemo
const queries = {
    '(max-width: 299px)': 'small',
    '(min-width: 300px) and (max-width: 599px)': 'medium',
    '(min-width: 600px)': 'large'
}

function ResponsiveComponent() {
    // third value is always an array with all matching items
    const [ref, baseStyle, [size = 'default']] = useElementQuery(queries)
    // for demo purposes set a width that is smaller than the viewport width
    const style = useMemo(() => ({ width: '50%', ...baseStyle }), [baseStyle])
    // ref and baseStyle are always passed to the container
    return (
        <div ref={ref} style={style}>
            <h2>Size is <code>{size}</code></h2>
        </div>
    )
}

Can useElementQuery be used with Styled Components?

Yes, check out this CodeSandbox! You will end up with a bit more boilerplate code than with the :container() syntax that is used by solutions like Styled Container Query, but this library is much smaller yet also supports changing your render path entirely depending on available container width; although one could argue you should always keep the HTML/DOM the same and only change the appearance via CSS.

Caveats

While this trick is performant, there are some memory consumption issues, so you should avoid using this on very many elements at the same time. Also, an object element is added to the DOM and this may have some effect on your CSS if you need :first-child or :last-child on the container. The current minimalist design tries to make usage as easy as possible and thus the object is added without React: if this proves to be problematic it might be necessary to change the API at some point. Circularity issues are shared with all solutions that do container queries these days, and are the reason there are container query libraries instead of a CSS standard.
Motivation

I got annoyed at work by how hard it was to calculate "content width" in CSS, especially with a dynamic sidebar and complex product lists. So originally this was a quick weekend project to get something usable that gives a CSS / matchMedia style of syntax via a React hook. I didn't find any other solution that would keep the familiar media query syntax, or that would allow anything to be used as a result value from a query. Also, I do not know of any other solution that has used matchMedia from an <object data="about:blank" type="text/html" />. Container queries and element queries have long been in discussion, but ultimately it seems they might never actually be implemented despite numerous proposals, polyfills, and other JavaScript based implementations.

Alternatives for Container Queries

Minzipped sizes are as of 2019-11-28.

- React Container Query (6.5 kB)
- React Element Query (8.7 kB)
- Styled Container Query (5.6 kB)
- ZeeCoder: React Container Query (16.1 kB)

Note about the status of the repo

I really should upgrade to the latest Babel and friends, instead of copy-pasting from old projects. Packages are a bit old. Also, I should take the trouble to learn Storybook and make use of it.
https://www.skypack.dev/view/use-element-query
CC-MAIN-2022-21
refinedweb
758
57.3
Issues in mapping objects from java to flex - using flex4
jigarinu Oct 13, 2010 7:24 AM

Hi, I have a class in java which I want to send to flex4 using BlazeDS as middleware. There are a few issues that I am facing:

- When sending the object across (java to flex), properties with a boolean data type having the value true get converted to properties with the value false. Even after setting the value to true, it still arrives as false on the flex side. Can't understand why this is happening.
- When sending a list of objects containing properties with a boolean data type, the objects on the flex side do not show those properties at all. As if there were no boolean properties in the object.
- Last but not least, when sending List<ContractFilterVO> contractFilterVOs to flex using a remote call, the result typecast to ArrayCollection does not show the contained objects as ContractFilterVOs but as plain default Objects, though all the properties are sent, except the boolean ones mentioned in the above points. Basically it is not able to typecast the objects in the ArrayCollection, but the same objects get typecast correctly when sent individually.

In all the above points I am using Remote Service through BlazeDS for connectivity with Java. I have done a lot of this stuff in Flex 3 but am doing it for the first time in flex 4; is there anything that Flex 4 needs specifically? Below is the pasted code for reference purposes.
Flex Object

package com.vo {
	[RemoteClass(alias="com.vo.ContractFilterVO")]
	public class ContractFilterVO {
		public function ContractFilterVO() {
		}
		public var contractCode:String;
		public var contractDescription:String;
		public var isIndexation:Boolean;
		public var isAdditional:Boolean;
	}
}

/**
 * Remote part of code
 */
var remoteObject:RemoteObject = new RemoteObject();
remoteObject.destination = "testService";
remoteObject.addEventListener(ResultEvent.RESULT, handleResult); // note: the constant is RESULT, not Result

public function handleResult(event:ResultEvent):void {
	var contractFilterVOs:ArrayCollection = event.result as ArrayCollection; // points 2 & 3 problem, if a list is sent from java
	var contractFilterVO:ContractFilterVO = event.result as ContractFilterVO; // point 1 problem, if only a single object of type ContractFilterVO is sent from java
}

Java Object

// note: Java declares the package as "package com.vo;" (no braces)
// and the constructor without the "function" keyword
package com.vo;

public class ContractFilterVO implements Serializable {
	private static final long serialVersionUID = 8067201720546217193L;
	private String contractCode;
	private String contractDescription;
	private Boolean isIndexation;
	private Boolean isAdditional;

	public ContractFilterVO() {
	}
}

I don't understand what is wrong in my code on either side; it looks syntactically right. It would be great if anyone could help me point out my mistake here. Waiting for the right solutions...
Thanks and Regards,
Jigar

1. Re: Issues in mapping objects from java to flex - using flex4
JeffreyGong Oct 13, 2010 7:58 AM (in response to jigarinu)
1 person found this helpful

Hi Jigar,
You need public getters and setters in your Java com.vo.ContractFilterVO. I had the same problem as item 3 in your list sometimes in Flex 4. I solved this by putting something like new ContractFilterVO() somewhere before calling the RemoteObject. Hope this will help you.
Jeffrey

2.
Re: Issues in mapping objects from java to flex - using flex4
jigarinu Oct 13, 2010 10:25 PM (in response to JeffreyGong)

Hi Jeffrey,
Thanks for your reply. It did solve my query at point 3 as well as point 2, where the objects in the ArrayCollection were not getting converted and the boolean properties did not appear when a list of objects was received. And hey, I did have public functions for the properties defined in the java class, I just forgot to mention it here in the post, sorry for that.
The solution you gave was right, but what if I have a VO which has multiple Lists of objects coming from Java? Then I would have to create an instance of each type of object on the flex side; this is too tedious, isn't it? Is there any better solution out there?
And Jeffrey, do you have some tricks up your sleeve for the Boolean issue that I am facing in point 1? Still struggling with this one...
Anyone out there would be more than welcome to point out my mistake, if any, and provide tips/tricks or solutions...
Thanks again to Jeffrey... Waiting for more solutions soon...
Thanks and Regards,
Jigar

3. Re: Issues in mapping objects from java to flex - using flex4
JeffreyGong Oct 14, 2010 11:57 AM (in response to jigarinu)

Hi Jigar,
Glad you solved points 2 and 3! Which is almost like fixing a bug of Flex 4 from the client's side, although tedious.
In your Flex VO, you may need to add [Bindable], like:

package com.vo {
	[RemoteClass(alias="com.vo.ContractFilterVO")]
	[Bindable]
	public class ContractFilterVO {
	........

And in your Java VO, could you get "private Boolean isIndexation;" compiled? I think it should be "boolean", all lowercase, in Java.
Hope you are lucky again.
Jeffrey
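Jeffrey's two pointers (public getters/setters, and boolean versus Boolean) can be combined into one sketch of the Java VO. This is an illustration under assumptions, not a verified fix for this exact BlazeDS setup: the accessor names follow the usual JavaBean convention that BlazeDS bean serialization relies on, and the property-naming comment about "is"-prefixed fields is a common gotcha that could explain point 1:

```java
import java.io.Serializable;

// Sketch (unverified against this thread's BlazeDS configuration): a VO whose
// bean properties line up with the Flex-side names. BlazeDS serializes through
// public getters/setters, so private fields without accessors are never sent.
public class ContractFilterVO implements Serializable {
    private static final long serialVersionUID = 8067201720546217193L;

    private String contractCode;
    private String contractDescription;
    // primitive boolean (not the Boolean wrapper), as suggested in the thread;
    // a null Boolean would coerce to false in an AS3 Boolean anyway
    private boolean isIndexation;
    private boolean isAdditional;

    public ContractFilterVO() {
    }

    public String getContractCode() { return contractCode; }
    public void setContractCode(String contractCode) { this.contractCode = contractCode; }

    public String getContractDescription() { return contractDescription; }
    public void setContractDescription(String contractDescription) { this.contractDescription = contractDescription; }

    // getIsIndexation() keeps the bean property named "isIndexation", matching
    // the Flex var. A getter named isIndexation() would expose the property as
    // "indexation" instead, so the Flex field isIndexation would never be set
    // and would stay at its default value of false.
    public boolean getIsIndexation() { return isIndexation; }
    public void setIsIndexation(boolean isIndexation) { this.isIndexation = isIndexation; }

    public boolean getIsAdditional() { return isAdditional; }
    public void setIsAdditional(boolean isAdditional) { this.isAdditional = isAdditional; }
}
```

With accessors named this way, the [RemoteClass] VO from the question should map one-to-one, including the two boolean flags.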
https://forums.adobe.com/thread/737709
CC-MAIN-2017-51
refinedweb
816
53.71
I would really appreciate it if someone could test this and see if they can reproduce the problem I am having. I am working with ArcGIS 10.5 and PyCharm Community Edition 2017.3.3 (the most recent update). I have built a script in PyCharm, and I run it inside of PyCharm. I have PyCharm set to the ArcGIS interpreter (C:\Python27\ArcGIS10.5\python.exe) so I have access to the arcpy library. I have taken a snippet of my code and pasted it below. I have attached a zip file with a shapefile for you to run it with (you'll have to change the directory of the output shapefile on the crop_intersect line). If you run it in "Run" mode, it should work. If you run it in "Debug" mode WITHOUT any breakpoints, it should also work. Now... place a breakpoint at the qh_buffer line. Run it in "debug" mode and step over that line when you get to it. It should break, and most likely it will give you error 000735 about invalid parameters/invalid input. Next, remove that breakpoint. Place a new breakpoint further down, on the crop_intersect line. Run it in "debug" mode and step over that line when you get to it. It should run successfully to that point and then break, giving you error 000732: Input Features: Dataset #1; #2 does not exist or is not supported. I have checked and rechecked that these errors are bogus: the datasets are being created and the tools are written with correct syntax, so the parameters are valid. I know this is a PyCharm IDE problem because I rolled back to a previous version of PyCharm and can run my script successfully in debug mode with breakpoints, without throwing illegitimate errors. It also clearly is an IDE problem because there is no reason it should work in "run" mode and in "debug" mode without breakpoints, and be able to successfully run through a line that previously caused an error after setting a new breakpoint further down the script, but not be able to run successfully through debug mode with breakpoints. It is related to the breakpoints somehow.
Can anyone reproduce this so I can be 100% sure it's IDE related and that it has nothing to do with my computer's configuration? Evidence suggests that the problem lies in how the debugger interacts with the arcpy library, but I have no idea what the problem is beyond that. If anybody has any ideas, I'd love to hear them.

import arcpy, sys

arcpy.env.overwriteOutput = True
svyPtFC = sys.argv[1]

select_query = '"FID" = 9'
qhPolys = arcpy.Select_analysis(svyPtFC, 'in_memory/qhPolys', select_query)
qh_buffer = arcpy.Buffer_analysis(qhPolys, 'in_memory/qh_buffer', '50 Meters')

cropFID = '"FID" = 1'
cropPoly = arcpy.Select_analysis(svyPtFC, 'in_memory/cropPoly', cropFID)

crop_intersect = arcpy.Intersect_analysis([[cropPoly, 1], [qh_buffer, 2]], r'C:\Users\xxx\GIS_Testing\crp_int.shp')

feature_count = arcpy.GetCount_management(crop_intersect)
print feature_count

I haven't had time to test/check, but it seems like you have already isolated the issue, and the issue isn't ArcGIS. Have you reached out to JetBrains, either their technical support or the PyCharm forum?
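Since the failures only appear when breakpoints are set, one pragmatic interim workaround (a speculative sketch, not verified against this exact bug) is to detect whether a trace function such as PyCharm's debugger is installed via sys.gettrace(), and route the intermediate datasets to an on-disk scratch folder instead of the in_memory workspace in that case. The helper below is plain Python; the names and the idea of calling e.g. scratch('qhPolys') in place of 'in_memory/qhPolys' in the arcpy lines are assumptions for illustration:

```python
import os
import sys
import tempfile

def debugger_attached():
    """True when a trace function (such as PyCharm's debugger) is installed."""
    return sys.gettrace() is not None

def make_scratch_namer(disk_dir=None):
    """Return a function mapping a dataset name to either an in_memory path
    or an on-disk path, depending on whether a debugger is attached."""
    if disk_dir is None:
        disk_dir = tempfile.mkdtemp(prefix='arcpy_scratch_')
    def scratch(name):
        if debugger_attached():
            # hypothetical workaround: avoid the in_memory workspace
            # while the debugger is attached
            return os.path.join(disk_dir, name + '.shp')
        return 'in_memory/' + name
    return scratch

scratch = make_scratch_namer()
print(scratch('qhPolys'))  # 'in_memory/qhPolys' when no debugger is attached
```

This keeps the fast in_memory behaviour in normal runs while writing inspectable shapefiles during debug sessions, at the cost of slower I/O under the debugger.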
https://community.esri.com/thread/208549-can-someone-reproduce-this-issue-pycharmarcpy-errors
CC-MAIN-2018-22
refinedweb
520
65.62
This project shows how to control a MKR Relay Shield using a MKR GSM 1400 and the Short Message Service (SMS); this solution works with a plain SIM with no data plan and is suitable for applications where the GSM network has poor coverage.

What You Need

The project uses a MKR GSM 1400, the antenna, a battery pack, a mobile phone, one SIM card, two LEDs, two 220 ohm resistors, a breadboard, cables, and a MKR Relay Shield.

- The MKR GSM 1400 executes the sketch and manages the incoming and outgoing SMSs;
- The antenna and battery pack are used, respectively, to allow the connection to the cellular network with a good signal and to power the device when other power supplies are not available;
- The MKR Relay Proto Shield is a board that includes two relays and is made for MKR format boards. It is used to switch loads and voltages usually not manageable with solid state solutions (MOSFETs). In this project it just switches LEDs;
- The mobile phone is required to send and receive the text messages;
- The SIM card is required to access the GSM network and allow network operation.

How It Works

This project uses the SMS capabilities of the GSM network to receive some text that is parsed and used to toggle the two relays. For security reasons, the sketch checks the number of the sender, and this information must be stored in the arduino_secrets.h file. We don't know the status of the relays and therefore the sketch uses a "toggle" approach, where each SMS received with 1 or 2 as text toggles the status of the corresponding relay. To let you know the result of the operation, an SMS is sent back with the message "Relay <number>, state: <0 or 1>". Looking at the history of the messages you should be able to keep track of the relays' status.
The Sketch

The first code section includes the libraries required by the application; MKRGSM provides all the GSM connection functionality, available through the GSMClient, GPRS and GSM objects, while the GSM_SMS header imports the APIs through which the sketch manages the SMSs sent or received by the MKR GSM 1400:

#include <MKRGSM.h>

GSM gsmAccess;
GSM_SMS sms;

After the include section, we provide the two lines of code that refer to your secret PIN number and incoming number. With this syntax the web editor generates a new tab called Secrets where you input the two constants:

char PINNUMBER[] = SECRET_PINNUMBER;
String sender = SECRET_YOUR_NUMBER;

The setup section initializes all the objects used by the sketch; the GSM connection sequence that registers the board on the network occupies most of this section.

void setup() {
  // initialize serial communications and wait for port to open:
  Serial.begin(9600);
  while (!Serial) {
    ; // wait for serial port to connect. Needed for native USB port only
  }

  Serial.println("SMS Messages Receiver");

  // connection state
  bool connected = false;

  // Start GSM connection
  while (!connected) {
    if (gsmAccess.begin(PINNUMBER) == GSM_READY) {
      connected = true;
    } else {
      Serial.println("Not connected");
      delay(1000);
    }
  }

  Serial.println("GSM initialized");
  Serial.println("Waiting for messages");
}

The last code section is the loop function where the basic control logic is executed. In our case it simply checks if an SMS is available; if true, it gets the SMS from the SIM and checks the text. If it is from the right sender it decodes the first character and switches the relay specified by the user. To confirm the operation it sends an SMS with the status of the switched relay. It is important to delete the received SMS from the SIM card, to avoid filling it up and no longer being able to receive SMSs. The sketch uses the ASCII codes of the digits 1 and 2 (49 and 50) in the switch ... case structure.
char senderNumber[20]; // buffer for the sender's number (declared globally; missing from the excerpt above)

void loop() {
  int c;
  String texmsg = "";

  // If any SMSs are available
  if (sms.available()) {
    Serial.println("Message received from:");

    // Get remote number
    sms.remoteNumber(senderNumber, 20);
    Serial.println(senderNumber);

    if (String(senderNumber) == sender) {
      // An example of message disposal
      // Any messages starting with # should be discarded
      if (sms.peek() == '#') {
        Serial.println("Discarded SMS");
        sms.flush();
      }

      c = sms.read();
      switch (c) {
        case 49: // ASCII '1'
          digitalWrite(1, !digitalRead(1));
          texmsg = "Relay 1, state: " + String(digitalRead(1));
          sms.beginSMS(senderNumber);
          sms.print(texmsg);
          sms.endSMS();
          break;
        case 50: // ASCII '2'
          digitalWrite(2, !digitalRead(2));
          texmsg = "Relay 2, state: " + String(digitalRead(2));
          sms.beginSMS(senderNumber);
          sms.print(texmsg);
          sms.endSMS();
          break;
        default:
          break;
      }

      Serial.println("\nEND OF MESSAGE");

      // Delete message from modem memory
      sms.flush();
      Serial.println("MESSAGE DELETED");
    } else {
      sms.flush();
      Serial.println("MESSAGE DELETED");
    }
  }

  delay(1000);
}

How to Use It

Double check that your phone number is properly stored in the Secrets tab, then upload the sketch and open the Serial Monitor to verify that the module properly attaches to the network. From a simple mobile phone, send an SMS with just "1" or "2" to the mobile number associated with the SIM inserted in the MKR GSM 1400 board. Within a few seconds you should see the LED turning on or off, depending on its state before the SMS was sent. Together with the change of state, the sketch sends back a message containing the current state of the toggled LED.
https://create.arduino.cc/projecthub/Arduino_Genuino/control-two-relays-with-an-sms-7c0eb2
CC-MAIN-2019-43
refinedweb
864
61.26
Search - "package" - - I got my reward stress ball about a week ago, but it vanished within minutes of opening the package 😧 Today I found out who had taken it.17 - Python: let me manage those packages for you. Node: here's the whole post office. You're welcome. c: Write the packages yourself. Luarocks: What the fuck is a package13 - - Merch package signed by dfox himself :D Maybe I should keep that part, should we ever blow up like Facebook hehehe28 - Notepad++: An update package is available, do you want to download it? Me: Maybe next time Notepad++: Sure, that's what they all say4 -17 - Just released my JS devRant API wrapper. It has support for posting, viewing, voting and much more. If you are interested here is the NPM package: - - - Boss: We need a new functionality to record company names for now. Me: Ok. (This will be a quick one) (few mins later) Me: Ok, adding/editing/deleting company names.. done. I also added "date recorded" field, just in case we need it. Boss: Ok, thanks. (~20 mins later) Boss: We also need a functionality for the users which has "this" permission to be able to "request" for a company registration. We need to add fields to record the contact person, email, phone, etc.. Once a "request" has been submitted, "this" person-in-charge has to get a notification on the dashboard. And the requesters, should get a notification that they have a pending request sent. Once the registration is done, the requester has to be notified. Me: 👀6 - Wow, my girlfriend has been really efficient wrapping and organising the presents under the tree... You could say she’s a Swift Package Manager!!3 - "What the hell, you got some custom tuning package?" No man, it's called DevRant, it's awesome, spread the love!12 - Guys try installing Termux. Its Linux for Android.
It comes with its own version of Apt package manager. Using this opens up alot of possibilities ;)11 - Brought two laptops: Linux and windows Forgot charger for windows. Graphics not working properly in linux Three hours of package installation, documentation diving, package removal, driver fuckery and other assorted fun I can now play Minecraft.4 - I just released a package that lets you use Fluent Design's Acrylic material in react if anyone is interested 🙂 I love this effects, looks really cool if you ask me 😃 - I am leaving my job from a very big company with very high paying package for doing my own startup. 1 month notice period on.10 - While updating a remote production server, accidentally uninstalled a package that was required for openssh to work. That was fun to recover... 😐1 - 3 off the most dreadful things to do as a developer 1) Documentation 2) Testing (Multiple device/browser) 3) Wearing formal clothes8 - - I've - [14 dependencies] "Geez this project has a lot of dependencies. I know, I should use a package manager!" [15 dependencies]4 - - So a friend of me bought 200 rubber ducks for 90 bucks. He's now waiting for a 14Kg package to arrive.7 - - I just hate Eclipse with passion. 
Stopped using it when I couldn't even get it's package manager work without crashing it.11 - QUALITY MATTERS* *typo is a part of good quality They just focused on quality of the Product (maybe) and not on its package.4 - Finally installed Ubuntu and successfully configured the wifi setting, any package I should install?31 - - Greatest thing I've done with my life: alias fucking="sudo apt-get" Now FUCKING INSTALL THAT PACKAGE!!!3 - - As a Java developer, I'm disappointed that GitHub does not offer free protected and package-protected repos.4 - - To all Linux Wizards out there: You should create an alias to your package manager called 'installman' to praise the grand master.10 - - You know it's 2016 when you have to download a package manager to download a package manager to... :S There are a shitload of them.5 - - Another package from NY? More stickers? - No. It's a ball this time! People are starting to look weird at me over here...1 - - When you finally finish your app and have it ready for first release, but when you package it, the app instantly closes when you open it.2 - - You just have to use jQuery. And the whole jQuery UI package, because we need to animate the background color of that single div.8 - Mojang released the minecraft official launcher for Gnu/linux, and they fucking have an Arch package! That’s right motherfuckers: AN ARCH PACKAGE!5 - The moment when you learn Java for ~6 months at the university and someone in the back asks: "whats a integer?"...😅😐5 - - !rant For all those who dislike they way you install things on Windows and would prefer the cmd simplicity that linux provides I give you: It's "Winux" or "Lindows" from now on...😉12 - - Package not found Add dependency Package not found WTF is wrong with you IntelliJ?! ... 5 Minutes later ... Oops, wrong module 😥2 - OSX: `brew install package` Linux: `apt-get install package` Windows: "took hours to figure out but this is how I did it" - forum c. 
20031 - Today's archiviements: publish my first Python package I made it to control my Mipow BT Le lightbulb, package name python-mipow1 - sudo pacman -S [package] Sudo: password for algo: *Types y to accept package install* Password incorrect, try again3 - - - So I finally decided to get a theme for sublime (And other packages). I'm loving it. Post your IDE/Text editors or whatever you use to code.32 - - When did it become OK to make a software installer package 33.8GB. I miss the days when 3MB was a big executable.10 - - const nsfw = require('nsfw'); //Now that's a sexy name for an npm package - When dev who insists on using their own vbox rather the officially maintained vagrant package asks you to debug a non-code issue...2 - - I finally signed my severance package deal today! I think this work relationship couldn't have had any happier ending than this. So many years of unhappiness to be replaced by a new job and brighter opportunities next year. I am so looking forward to it! - - - - - KALI FOR THE LOVE OF GOD CAN YOU NOT BREAK YOUR BOOT PACKAGE FOR 24 FUCKING HOURS the initrd isn't at all valid and the vmlinuz package is 0 bytes.32 - #DevOps life: Reading package lists... Done Calculating upgrade... Done 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. ... my day is done here. - Woooooo! Someone ran a cleanup script that deleted all the RPM package our dev Oracle databases use. Glad I'm not that person. - - I was talking with some of my co-workers about the rise of all these package managers (and one I came across for Windows), and this thought occurred in my silly head: - Check out my web calculator designed with Google Material Design 😀😀 - - - So apparently Package Manager Console adds 6 hours to the boot time of Visual Studio... I was wondering why it took so long to start up. - Definition of C++: super fast language with lots of problems. 
Definition of Rust: C++ without all the problems, an awsome package manager and community made of real people!11 - I released a new npm package! I’m actually kind of proud of this one. - - - - - - - I just wrote, "We don't use braces for if in this package". For if in! What the hell am I speaking!1 - - - - - I want to ask for my devrant stickers but I'm worried the South African postal service will just lose the package or steal it themselves6 - Finding motivation to work on stuff i dont like, because its sometimes part of the package. I'm still having problems with that, but its getting there!1 - - DRINKING GAME. Start package installation in any NodeJS based project. Drink every time you see a deprecation notice.4 - Constructors, generics, collections, package versioning, immutability, syntactic sugar, option types? Meh. Unused imports? NEVER! #golang - - - Homebrew eats shit. It is easily the slowest and least effective package manager I have ever used on the command line. It feels like software that was great in 2006, but hasn't changed since then.9 - - - - semicolon.java out now!... Credits to @sharktits for the contribution.3 - - - - Why python can't into proper dependency management? I Node.js we use npm. Modules are downloaded per project and packaging is easy. In Java we use maven/gradle. Never been so easy to build and download libraries and package your project. But in Python? No, it's not easy. You have to use virtualenv first so pip/anaconda won't download globally, then you must write setyp.py in a million different ways. Packaging and distribution to clients? Good luck with that.21 - - If your a developer please learn to program. There is no reason that this should ever be a package - - It's disgusting how a package like compress-commons can have 1.3 million weekly downloads yet no documentation whatsoever and not even any relevant comments in the code. Honestly fuck the javascript ecosystem3 - Lets create new project!!!! 
Oh our favorite package is 5 major released ahead. It completely another piece of technology after 7 years of development. Lets use 7 years old package!!! - Wish installing a Vim plugin were as simple as installing an Npm package. Yet another time when I try to learn Vim. :|4 - - Guys, I have recently made a python package which is published on PyPI. I am a very junior python lover 😍6 - When life gives you lemons and everything is going well, you'd best be prepared because you haven't gotten the whole package.1 - Thinks of some cool npm package to build. Thinks of a cool name. Goes to npm to check if the name is available. Finds the exact package instead ;-;2 - If the package.json is longer than your actual code this maybe shouldnt be a package. Just saying - Why TF to I have to authenticate with GPR just to download a *PUBLIC* package?? Why not just use Gradle's *BUILT IN* GitHub package loader? What's the point of this service?!??5 - Can someone give me the name of this beautiful holographic IDE ? I hope it's compatible with linux and have a debian package3 - Developer problems: 6 different package managers to keep up to date. Gem Pip Npm Emacs Homebrew Aptitude Good thing bash scripts and cron jobs exist5 - - - - - THEFUCK!! guys try this package in linux if you are feeling pissed off of typos github.com/nvbn/thefuck2 - - So I actually prefer npm to most other package managers (with the exception of go's package handling). Like you need to look no further than to pip's hell of package management, to start appreciating how clean npm is. ***Shots fired***6 - - The only way for me to learn a new language/framework is by building stuff with it, or heck writing shitty blog posts. So here's one on Elixir - - When you find colleagues still using the Java Calendar class despite having moved to Java 8 and the java.time package long ago. - I wonder if NPM prevents allowing a package to be it's own dependency. 
If not I have important trolling business to tend to.1 - when you spend half a day trying to figure out why your aws api calls aren't working and it turns out you're using a deprecated package -. - *Lists packages with name "weechat" 1 result *Tries to start package* *Says package is not installed* Wat6 - - /* * This question isn't about diminishing other operating systems ! * I'm just have no idea. 😫 */ Is there a big difference on the amount of packages on pacman, apt-get and yum. Can I use either of them and be sure that most applications are available ?3 - - - So, after day 3 of Rust programming I have an observation to make: Rust package management (and conflicts that arise) is not very good. To the rusters out there, am I wrong? Is there something to make it better - Package Installer on android needs to show something other than just the progress bar. Even a basic log like windows installers that say, "copying this, extracting this, done..." If it affects the minimalism of the interface, they could try doing what Tor browser does- swipe to see a log. It just feels heartbreaking to wait 5 long minutes for it to process on this tortoise device, and then get, "app not installed." with an OK button. :( Like, whyyyy? There should be a "THAT'S NOT OK" button. Is there any magisk module for this? Or some other tweak?5 - - - - aside from the fact that i have to install the basic utilities package to use "add-apt-repository" and that it runs kinda slowly, i'm loving eOS <35 - Fucking Java 8 java.time package. I want to kill the damn developers of this useless piece of shit code.1 - - I wish python had a better package and environment manager. Everybody has their own way around this, but nothing without bugs or 10 steps to install a package in the env. it just sucks...4 - So php7 deprecated all support for mysql none of my previous projects are working on localhost.Moreover,Ubuntu removed the php5 package from it's latest version . 
FML.17 - I just released my first NPM package that is actually functional and used in a private project (...) and I have to say, the quality of debug tooling for Node is abysmal. I spent 4 hours just on Webpack's "Field browser doesn't contain a valid alias configuration" error which simply means "package not found", and then getting Rollup to output a working compiled javascript _and_ a d.ts was its own day-long ordeal.4 - - - I tried to install a simple Python package today for a simple 10 line program and spent over an hour installing dependencies and reading documentation.1 - - Removed a package from my web app yesterday, turns out the package also removes Bootstrap as it's a dependency. TF?! Who removes Bootstrap with an insignificant plugin? Why?!3 - Imagine an annual $50k+ enterprise software package that didn't distinguish between a null and an empty string in valuing critical data. Not noticed for years - wtf?3 - * package-lock.json * merge conflict ME: fuck fuck fuck, C-s I-Search: HEAD ME: this shit is much i can't handle it, fuck ME: rm package-lock.json ; npm install1 - when your IT policy means that you have to submit a tech request for each nuget package you know you're in for a long day - Created an affiliate tracker / split test tracker / campaign tracker for my Laravel project in 1.5 days. Not bad, not bad. Now, should I offer it on github? Seems like I might be kicking myself in the balls if I did. On the one hand, I don't have a lot of time atm, on the other, I'd love to meet fellow programmers who seek out and would want this, and perhaps contribute. Could lead to some great partnerships down the line.. Anyone have experience with this? Did it take a lot more time than you thought, did you meet other programmers and ended up collaborating on future projects? Curious.. - - - So I had someone question why their system was broken after they installed my software and all of its dependencies to /usr by hand... ._. 
- Why do package maintainers stop answering and go silent?! I've waited more than two weeks for acceptance of my PR, the maintainers haven't been active, and I've even notified them of my worries.. But so far no activity. Why the fuck does a package like date-fns have maintainers that don't answer? Furthermore, I can see one of them making private contributions on GitHub.. I need this package to help another package finish my project 😭
- Everyone here seems to be hating Java, but being a Java dev I really like it! Java is a solid benchmark; sure, there could be better languages and alternative ways to do stuff, but Java will always be that complete package with a real sense of object orientation.
- Installing a composer package is like trying to get a kid to sleep while the upstairs neighbors have a party.
- When I search desperately for a missing package that prevents me from compiling some Python package on an old FreeBSD system, and it turns out the answer on SO was my own answer.
- !rant Guys, will a 150kb minified JS package for the frontend slow my site down? How large is too large for a frontend script?
- The API that powers the devRant bot has been abstracted and fleshed out into its own package...
- The more I use snaps on Linux, the more I feel like we came full circle. An installable package (snap) contains a bundle of all dependencies and installs them in an isolated system path to avoid version incompatibilities. Snaps can have some sort of install-time configuration, and they create links to a handful of entry points rather than adding all executables to the path. In short, they do the exact same thing Windows installers have been doing for the past 30 years.
- In my previous post I wrote about a dumb NPM package I'd made and asked for naming suggestions. Finally found one that sticks. Let me present you SHIET. Debug your error in no time.
- Integrated the vterm package in Emacs; I'm glad to have true-and-efficient terminal emulation inside Emacs.
- How the fu k do I remove this shit?! I can't find the fucking package name. Big hint: IT'S NOT FIRE!!!
- woman(package) definition by Emacs: "browse UN*X manual pages `wo (without) man'"; "woman is a built-in package." Emacs, pls... :D
- > Build project > 13 Errors: Could not find reference > Uninstall and reinstall NuGet package > Build project > 158 Errors: Could not find reference
- sudo apt-get install -y php7.1 E: unable to locate package php7.0 E: couldn't find any package by regex php7.1
- The official guide to installing TensorFlow (Python) on Windows is BS. The Stack Overflow solution only adds the package, but it wouldn't work. Time to build it myself...
- So, I have worked with two different package managers, sbt and npm, and I don't get them. Take npm: I install a module, that module has its own package.json and npm runs for that too, and it drops it into the same node_modules directory, just like the modules of my main application. Sbt constantly gives me warnings about possibly binary-incompatible versions, and I have to exclude packages and hope that they work together nicely. I don't get it. This is how I would build a package manager; please tell me if I am just naive or why it is not done like this. 1. All packages get their own namespace, consisting of the namespace and the version, in a global folder. 2. If a submodule has installed a package in version 1.5, it is symlinked to that package. If another submodule needs 2.5, it is symlinked to the 2.5 version. 3. If I want to minimise the build, I can try to override the packages to see if they still work when they are all given the same version. Seems pretty simple to me. So why the fuck are those package managers constantly loading everything into a global namespace, not differentiating it by version?
- Installed chocolatey for the first time because hey, a package manager for Windows sounds cool. Turns out it automatically installs random-ass Windows updates. Fuck that noise.
- Yesterday I was so high. Today when I opened my data science project notebook, I saw that I had used the seaborn package without importing it... 😂😂😂
- Would be interested to hear if anyone is using HTTP/2 and on which framework. Anyone have experience with Python: what is the community package compatibility like? Thanks!
- Just published a Python package the other day. It's nothing more than a simple script, but it's nice, and I really hope it will be useful to someone.
- WTF Debian? No MySQL server or client in your package lists? Ah I see, it's just not stable enough...
- When you have to refactor the whole microservice to fix package structuring so it can be modular... why can't people do it correctly the first time?
- Am I the only person in this world that hates package managers? I can live perfectly well without any of them.
- I was watching a video by Karl Smallwood about the size of Superman's package when I noticed the YouTube algorithm decided that only three videos were similar. It just seemed strange.
- Use Swift Package Manager, they said. It'll be great, they said. "Note that at this time the Package Manager has no support for iOS, watchOS, or tvOS platforms."
- Just published my first npm package. A Mongoose-like interface for Firebase.
- Developers: We can install and build a package without any errors easily. NPM exists: Surprise MF's.
- What is the difference between Armageddon and the Apocalypse? Not sure which npm package to install.
- How I envision the package maintainer for gstreamer, every time they're getting ready to push updates, knowing that the end user will have to spend the next 35 minutes in front of their bash console, watching each package build...
- Am I the only one thinking that the maatwebsite/laravel-excel package is poorly documented? Trying to make it work for the Excel file reading I have to do: 4 attempts (each about 6h) and shit's not working as intended. Poor examples; the code itself is just.. not connecting dem dots, m9. Just had to let it out of my system.
- It really sucks that there isn't a de facto package/dependency manager for C++. Conan, cppan, hunter, cget: lots of projects trying to fill this gap, yet none seems mature enough -.-
- Finally published a demo package on npm to learn how it works. Like it: "npm loves you" 😜 I'll publish my own package in 2-3 days. Tip: it seems you can't delete an npm package, you can only unpublish it.
- Finally got my Hacktoberfest care package!! Thought it'd never arrive... Anyone know a good way to get more stickers? Or personal preferences as to where to buy them?
- !rant Just installed Atom to try out. Has a decent package ecosystem. Just found vim-mode-plus. Do you believe in love at first sight? 😉
- I ordered hardware like 24 days ago. The shipping company lost the package, and I've just got confirmation that to see my money I have to wait another 20+ days. FML
- Using canary versions of Android Studio is fun, but I hate it when you have to download the entire package again and again because of some new fixes :/
- Interesting thing: when I google "timer js" I see no package for timer.js yet? Like timer.start(thing), timer.end(thing). Should I make it? 🤔😁
- How should I name an NPM package which works as a console log for errors but throws the user to a Stack Overflow page with the error message included in the link? Found a meme here at devRant in which this idea was presented, haha.
- If anyone complains one more time about "Windows is built upon a DLL hell", I will challenge this specific anyone to implement React into an existing PHP project. Installing matching package versions via npm is the real struggle, especially if you decide to be a Node psycho who's delivering his React code via webpack. *projectile vomiting in a straight beam of acid vomit* Wasted a complete day of my life dealing with Facebook's naughty shit...
- Just discovered the H parameter in the float package after two days of dealing with LaTeX figures... I need a break.
- Last week I started creating a small npm package, and I literally don't know how to create it. Please check it for issues and let me know.
- I just released version 2.0 of my UI package, a Laravel Livewire & Bootstrap 5 UI starter kit. This package is a modernized version of the old laravel/ui package for developers who prefer using Bootstrap 5 and full-page Livewire components to build their projects. It also comes with a few features to boost your development speed even more. GitHub: Demo Video:
- We are dependent on dependency injection and package management... which also comes with a lot of dependencies installed by another package manager.
- This "binaryextensions" NPM package is a fraud (not to be confused with "binary-extensions"!): it contains a single JSON array of purportedly "all binary extensions", reaches 700k downloads a week, yet only lists 13 binary extensions (...). This is a huge danger to security, especially if it's being used in production environments for input checking. For comparison, here is a much more robust version of a repo with the same goal (...)
- Does anyone have any experience with KeepSolidVPN? $150 for a lifetime package seems like an excellent deal.
- Creating a script that switches a global node_module package version (because we have legacy projects mixed with normal ones).
- I don't understand front end vs back end. Like synaptic vs apt-get: you're still manipulating your package system, right?
- That second or so when installing a package and GDebi is checking whether the dependencies are satisfiable.
- Is there a way to tell NuGet Package Manager to not install/run a package in certain environments? Just like flags on NPM. Thanks in advance.
- Which extension do you recommend for VS Code that has most of the features of the SFTP package by Will Bond for Sublime Text?
- Okay, now I messed with my package manager and I don't know where things went wrong. This CLI command: error. That CLI command: error. What the hell?
- Just released my new Laravel CRUD package. Hope someone likes it.
- I need to stop messing with all the new vim plugin package managers. The plugins themselves were already distraction enough. ;_;
- Discussion forum software: what is the most stable and secure, as well as regularly updated, package out there?
- SonarQube just flagged my package name as duplicated code because I have a second class in the same package with the same package name -.-
- !rant Hey, God bless the pamac devs. I'm not constantly trying to queue packages for install when others are already installing now, since it's all locked until the first install is finished. Felt compelled to write about this, idk.
- Thought the package-lock.json file wasn't working. Turns out it wasn't being copied in the Dockerfile. :/
- I recently released my latest UI, Auth, & CRUD scaffolding package called Laravel Livewire UI. This package provides Laravel Livewire & Bootstrap UI, Auth, & CRUD scaffolding commands to make your development speed blazing fast. With it, you can generate full, preconfigured Bootstrap UI, complete Auth scaffolding including password resets and profile updating, and Create, Read, Update, and Delete operations. The CRUD files even have searching, sorting, and filtering. This package also comes with full PWA capabilities. Demo Video:... Github Repo:... Thanks for your time.
- I just launched my new UI package. bastinald/ui allows you to create web apps using Laravel Livewire + Bootstrap 5 in record time. Thanks for checking it out.
- Turns out that neglecting 590 updates on Apricity Linux may result in a broken package manager when you let the system update. Who knew!
- I just released another UI, Auth, & CRUD scaffolding/starter kit package. This is similar to my last package, but I've put everything inside one package. This makes it easier for me to integrate different features, as well as maintain it. This package has a bunch of improvements and some new features. Video:... Repo: Thanks for checking it out. Hopefully someone finds it useful.
- For the Angular devs: should I turn this into an NPM package? It can be a little tedious to set up. I think most would rather have a plug-and-play package.
- I'm having a hell of a time figuring out how to use the Atom editor with my website; somebody please direct me to the right package!!
- So I've been working on a Python package for quite some time. It allows for multithreaded/multiprocessed downloads and various control commands as well. It is not complete but can be used successfully in scripts, barring some exceptions. I was hoping that the kind people here at devRant would help me better the package and contribute towards it as well. It is also a great opportunity for newbies to learn and develop new insight about the package...
https://devrant.com/search?term=package
Chapter 9. Drip: A Stream-Based Storage System

• Middleware for batch processing
• Wiki system storage and full-text search system
• Twitter timeline archiving and its bot framework
• Memos while using irb

Is this still too vague? Starting with the next section, I'll introduce how to use Drip by comparing it with other familiar data structures: Queue as a process coordination structure and Hash as object storage.

9.2 Drip Compared to Queue

Let's first compare Drip with Queue to understand how process coordination works differently. We use the Queue class that comes with Ruby. Queue is a FIFO buffer in which you can put any object as an element. You use push to add an object and pop to take it out. Multiple threads can pop at the same time, but an element goes to only one thread; the same element never goes to multiple threads with pop. If you pop against an empty Queue, then pop blocks, and when a new object is added, it reaches only the one thread whose pop acquired it.

Drip has an equivalent method called read. read returns elements newer than the specified cursor. However, Drip doesn't delete the elements. When multiple threads read with the same cursor, Drip returns the same elements. Both Drip and Queue can wait for the arrival of a new element. The key difference is whether the element is consumed: Queue#pop consumes the element, but Drip#read doesn't, which means multiple readers can read the same element repeatedly. In Rinda, if you lose a tuple because of an application bug or system crash, the entire system could go down. In Drip, you never need to worry about the loss of an element. Let's see how this works in Drip with code.

Basic Operations with Read and Write Methods

You can use two methods to compare with Queue.
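Before turning to those two methods, the consuming behavior of Queue described above is easy to see in a short, standalone sketch. This is plain Ruby with no Drip involved; it only demonstrates the Queue side of the comparison:

```ruby
queue = Queue.new
queue.push(:job)

# pop removes the element: only one consumer ever receives it.
element = queue.pop
puts element.inspect   # :job
puts queue.empty?      # true: the element was consumed

# pop on an empty Queue normally blocks until a push arrives.
# With non_block=true it raises ThreadError instead, which makes
# the "empty and would block" state easy to observe in a script.
begin
  queue.pop(true)
rescue ThreadError
  puts 'queue is empty; a plain pop here would block'
end
```

Drip's read, shown in the irb sessions later in this section, behaves differently at exactly this point: the element stays in place, so any number of readers can replay it.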
Drip#write(obj, *tags)

Drip#write adds an element to Drip. This is the only operation that changes the state of Drip. It stores the element obj and returns the key that you use to retrieve the object. You can also specify tags to make object access easier, which I'll explain in Using Tags, on page 187.

Another method is read.

Drip#read(key, n=1, at_least=1, timeout=nil)

This is the most basic operation for browsing Drip. key is a cursor, and the method returns up to n key-and-value pairs added later than the requested key. You can configure it to block until at_least elements have arrived, with an optional timeout. In short, you can specify "Give me n elements, but wait until at_least elements are there."

Installing and Starting Drip

Oops, we haven't installed Drip yet. Drip depends on an external library called RBTree. If you use gem, it should install the dependency as well.

gem install drip

Next, let's start the Drip server. Drip uses a plain-text file as its default secondary storage. To create a Drip object, you need to specify a directory. The next script creates a Drip instance and serves it via dRuby.

drip_s.rb
require 'drip'
require 'drb'

class Drip
  def quit
    Thread.new do
      synchronize do |key|
        exit(0)
      end
    end
  end
end

drip = Drip.new('drip_dir')
DRb.start_service('druby://localhost:54321', drip)
DRb.thread.join

The quit method terminates the process via RMI. The script waits until Drip isn't writing to any secondary storage, using synchronize (see Locking Single Resources with Mutex, on page 88 for more detail).

Start Drip like this:

% ruby drip_s.rb

It won't print out anything; it simply runs as a server.

MyDrip

I prepared a single-user Drip server called MyDrip.
This works only for POSIX-compliant operating systems (such as Mac OS X), but it's very handy. It creates a .drip storage directory under your home directory and communicates over a Unix domain socket. Since this is just a normal Unix domain socket, you can restrict permission and ownership using the file system. Unlike TCP, a Unix socket is handy because you can have your own socket file descriptor on your own path, and you don't have to worry about port conflicts with other users.

To use MyDrip, you need to require my_drip (my_drip.rb comes with the Drip gem, so you don't have to download the file yourself). Let's invoke the server.

# terminal 1
% irb -r my_drip --prompt simple
>> MyDrip.invoke
=> 51252
>> MyDrip.class
=> DRb::DRbObject

MyDrip is actually a DRbObject pointing to the fixed Drip server port, but it also has a special invoke method. MyDrip.invoke forks a new process and starts a Drip daemon if necessary. If your own MyDrip server is already running, it finishes without doing anything. Use MyDrip.quit when you want to stop MyDrip.

MyDrip is a convenient daemon for storing objects while running irb. In my environment, I always have MyDrip up and running to archive my Twitter timeline. I also use it to take notes or as middleware for a bot. I always require my_drip so that I can write a memo to MyDrip while running irb. You can insert the following line in .irbrc to include it by default:

require 'my_drip'

Going forward, we'll use Drip for most of the exercises. If you can't use MyDrip in your environment, you can create the following client:

drip_d.rb
require 'drb/drb'
MyDrip = DRbObject.new_with_uri('druby://localhost:54321')

You can use drip_d.rb and drip_s.rb as an alternative to MyDrip.

Peeking at Ruby Internals Through Fixnum

Speaking of the Fixnum class on a 64-bit machine, let's find out the range of Fixnum. First let's find the largest Fixnum. Let's start from 63 and make it smaller.
(2 ** 63).class  #=> Bignum
(2 ** 62).class  #=> Bignum
(2 ** 61).class  #=> Fixnum

It looks like the border between Bignum and Fixnum is somewhere between 2 ** 62 and 2 ** 61. Let's try it with 2 ** 62 - 1.

(2 ** 62 - 1).class  #=> Fixnum

Found it! 2 ** 62 - 1 is the biggest number you can express with Fixnum. Let's convert this into Time using Drip's key generation rule.

Time.at(*(2 ** 62 - 1).divmod(1000000))  #=> 148108-07-06 23:00:27 +0900

How about the smallest Fixnum? As you may have guessed, it is -(2 ** 62). This is equivalent to a 63-bit signed integer, not 64-bit. Fixnum in Ruby has a close relationship with object representation. Let's find out the object_id of an integer.

0.object_id     #=> 1
1.object_id     #=> 3
2.object_id     #=> 5
(-1).object_id  #=> -1
(-2).object_id  #=> -3

The object_id of Fixnum n is always 2 * n + 1. Inside Ruby, objects are identified by a pointer-width value. Most such values point to an allocated area, but it wouldn't be efficient to allocate memory for every Fixnum. To avoid that inefficiency, Ruby treats a value as an integer if its lowest bit is 1. Because this rule takes up 1 bit, the range of Fixnum is a 63-bit signed integer rather than 64-bit. (It is a 31-bit signed integer on a 32-bit machine.) There are also objects with a special object_id. Here's the list:

[false, true, nil].collect {|x| x.object_id}  #=> [0, 2, 4]

And that's our quick look into the world of Ruby internals.

Comparing with Queue Again

Let's experiment while MyDrip (or the equivalent drip_s.rb) is up and running. Let's add two new objects using the write method. As explained earlier, write is the only method that changes the state of Drip. write returns the key associated with the added element. The key is an integer generated from a timestamp (usec); it will be a Fixnum on a 64-bit machine.
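As a self-contained check of the key rule, Drip's time_to_key (its implementation is quoted in the How Key Works section of this chapter) can be exercised in plain Ruby. The collision bump below is a simplified stand-in for what the server does internally, and the sample timestamp is one of the write results that appear in this chapter's transcripts:

```ruby
# Drip's key rule, as quoted in the How Key Works section:
# microseconds since the epoch.
def time_to_key(time)
  time.tv_sec * 1000000 + time.tv_usec
end

# One of the keys returned by MyDrip.write in the transcripts.
t = Time.at(1312541947, 966187)
key = time_to_key(t)                        # => 1312541947966187

# The key converts back to a Time, exactly as in the Fixnum experiment.
restored = Time.at(*key.divmod(1000000))

# Simplified sketch of the collision rule: a second write within the
# same microsecond gets bumped one past the last issued key.
last = key
next_key = [time_to_key(restored), last + 1].max
```

The round trip works because divmod(1000000) splits the key back into its tv_sec and tv_usec parts.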
# terminal 2
% irb -r my_drip --prompt simple
>> MyDrip.write('Hello')
=> 1312541947966187
>> MyDrip.write('World')
=> 1312541977245158

Next, let's read data from Drip.

# terminal 3
% irb -r my_drip --prompt simple
>> MyDrip.read(0, 1)
=> [[1312541947966187, "Hello"]]

read is a method that reads n elements since the specified cursor, and it returns an array of key and value pairs. To read elements in order, you can move the cursor as follows:

>> k = 0
=> 0
>> k, v = MyDrip.read(k, 1)[0]
=> [1312541947966187, "Hello"]
>> k, v = MyDrip.read(k, 1)[0]
=> [1312541977245158, "World"]

So far, you've read two elements. Let's try to read one more.

>> k, v = MyDrip.read(k, 1)[0]

It will block, since there are no elements newer than k. If you add a new element from terminal 2, it will unblock and read the object.

# terminal 2
>> MyDrip.write('Hello, Again')
=> 1312542657718320

>> k, v = MyDrip.read(k, 1)[0]
=> [1312542657718320, "Hello, Again"]

How did it go? Were you able to simulate the waiting operation? Let's increase the number of listeners and start reading from 0.

# terminal 4
% irb -r my_drip --prompt simple
>> k = 0
=> 0
>> k, v = MyDrip.read(k, 1)[0]
=> [1312541947966187, "Hello"]
>> k, v = MyDrip.read(k, 1)[0]
=> [1312541977245158, "World"]
>> k, v = MyDrip.read(k, 1)[0]
=> [1312542657718320, "Hello, Again"]

You should be able to read the same elements. Unlike Queue, Drip doesn't consume elements, so you can keep reading the same information. Instead, you need to specify where to read from every time you request. Let's try restarting MyDrip. The quit method terminates the process when no one is writing. Call invoke to restart. MyDrip.invoke may take a while to start up if the log size is big.

# terminal 1
>> MyDrip.quit
=> #
>> MyDrip.invoke
=> 61470

Let's call the read method to check whether you recovered the previous state.
# terminal 1
>> MyDrip.read(0, 3)
=> [[1312541947966187, "Hello"], [1312541977245158, "World"],
    [1312542657718320, "Hello, Again"]]

Phew, looks like it's working fine. Let's recap what we've learned so far. Drip is similar to Queue in that you can retrieve data in time order and wait for new data to arrive. It's different in that the data does not decrease: you can read the same elements from different processes, and the same process can read the same element again and again. You may have experienced that batch operations tend to stop often, both while you develop them and while they run in a production environment. With Drip you can work around this, because you can restart from the middle as many times as you need. So far, we've seen the two basic operations, write and read, in comparison with Queue.

9.3 Drip Compared to Hash

In this section, you'll learn more advanced usage of Drip by comparing it to a KVS, or Hash.

Using Tags

Drip#write allows you to store an object with tags. The tags must be instances of String, and you can specify multiple tags for one object. You can then read with tag names, which lets you retrieve objects easily. By leveraging these tags, you can simulate the behavior of Hash with Drip.

Let's treat tags as Hash keys. "Write with tags" in Drip is equivalent to "set a value to a key" in Hash, and "read the latest value with the given tag" is equivalent to reading a value from Hash with the given key. Since "the latest value" in Drip is equivalent to a value in Hash, the values older than the latest make Drip similar to a Hash with version history.

Accessing Tags with head and read_tag Methods

In this section, we'll be using the head and read_tag methods.

Drip#head(n=1, tag=nil)

head returns an array of the latest n elements. When you specify a tag, it returns the latest n elements that have the specified tag.
head doesn't block, even if Drip has fewer than n elements; it only views the latest n elements.

Drip#read_tag(key, tag, n=1, at_least=1, timeout=nil)

read_tag is similar to read, but it allows you to specify a tag. It only reads elements with the specified tag. If there aren't at_least elements newer than the specified key, it will block until enough elements arrive. This lets you wait until elements with certain tags are stored.

Experimenting with Tags

Let's emulate the behavior of Hash using head and read_tag. We'll keep using the MyDrip we invoked earlier. First, let's set a value. This is how you usually set a value in a Hash:

hash['seki.age'] = 29

And here is the equivalent operation using Drip. You write the value 29 with the tag seki.age.

>> MyDrip.write(29, 'seki.age')
=> 1313358208178481

Let's use head to retrieve the value. Here is the command to take the latest element with a seki.age tag.

>> MyDrip.head(1, 'seki.age')
=> [[1313358208178481, 29, "seki.age"]]

The element consists of [key, value, tags] as an array. If you're interested only in reading values, you can assign key and value into different variables as follows:

>> k, v = MyDrip.head(1, 'seki.age')[0]
=> [[1313358208178481, 29, "seki.age"]]
>> v
=> 29

Let's reset the value. Here is the equivalent operation in Hash:

hash['seki.age'] = 49

To change the value of seki.age to 49 in Drip, you do exactly the same as before: you write 49 with the tag seki.age. Let's check the value with head.

>> MyDrip.write(49, 'seki.age')
=> 1313358584380683
>> MyDrip.head(1, 'seki.age')
=> [[1313358584380683, 49, "seki.age"]]

You can check the version history by retrieving the older values. Let's use head to take the last ten versions.
>> MyDrip.head(10, 'seki.age')
=> [[1313358208178481, 29, "seki.age"], [1313358584380683, 49, "seki.age"]]

We asked for ten elements, but it returned an array with only two, because that's all Drip has for the seki.age tag. Multiple results are ordered from older to newer. What happens if you try to read a nonexistent tag (a missing key, in Hash terms)?

>> MyDrip.head(1, 'sora_h.age')
=> []

It returns an empty array, and it doesn't block either: head is a nonblocking operation and returns an empty array if there are no matches. If you want to wait for a new element with a specific tag, you should use read_tag.

>> MyDrip.read_tag(0, 'sora_h.age')

It now blocks. Let's set the value from a different terminal.

>> MyDrip.write(12, 'sora_h.age')
=> 1313359385886937

This unblocks the read_tag and returns the value that you just set.

>> MyDrip.read_tag(0, 'sora_h.age')
=> [[1313359385886937, 12, "sora_h.age"]]

Let's recap again. In this section, we saw that with tags we can simulate the basic operations of setting and reading values in a Hash. The differences are as follows:

• You can't remove an element.
• It keeps a history of values.
• There are no keys or each methods.

You can't remove an element like you do in Hash, but you can work around this by adding nil or another special object that represents the deleted status. As a side effect of not being able to remove elements, you can see the entire history of changes. I didn't create keys and each methods on purpose. It's easy to create them; I created them once but deleted them later, so there are no such APIs in Drip at this moment. To implement keys, you would need to collect all elements first, and that won't scale when the number of elements becomes very big. I assume this is why many distributed hash tables don't have keys. There are also some similarities with TupleSpace. You can wait for new elements or their changes with read_tag.
This is a limited version of the read pattern matching in Rinda's TupleSpace: you can wait until elements with certain tags arrive. This pattern matching is a lot weaker than Rinda's, but I expect it to be enough for the majority of applications. When I created Drip, I tried to make the specification narrower than Rinda's so that it's simple enough to optimize. Rinda represents an in-memory, Ruby-like luxurious world, whereas Drip represents a simple process coordination mechanism with persistency in mind. To verify my design expectations, we need a lot more concrete applications.

In the previous two sections, we explored Drip in comparison with Queue and Hash. You can represent some interesting data structures using this simple append-only stream. You can stream the world using Drip, because you can traverse most of these data structures one element at a time.

9.4 Browsing Data with Key

In this section, we will learn multiple ways to browse the data stored in Drip. In Drip, all elements are stored in the order they were added, so browsing data in Drip is like time traveling. Most browsing APIs take a cursor as an argument. Let's first see how keys are constructed and then see how to browse the data.

How Key Works

Drip#write returns a key that corresponds to the element you stored. Keys are incrementing integers, and a newly created key is always bigger than the older ones. Here's the current implementation of generating a key:

def time_to_key(time)
  time.tv_sec * 1000000 + time.tv_usec
end

Keys are integers generated from a timestamp. On a 64-bit machine, a key will be a Fixnum. The smallest unit of the key is the μsec (microsecond), so keys would collide if more than one element were written within the same μsec. When this happens, the new key is incremented one past the latest key:

# "last" is the last (and the largest) key
key = [time_to_key(at), last + 1].max

Zero becomes the oldest key.
Specify this number as the key when you want to retrieve the oldest element.

Browsing the Timeline

So far, we've tried the read, read_tag, and head methods for browsing. There are other APIs:

read, read_tag, newer: browsing to the future
head, older: browsing to the past

In Drip, you can travel the timeline forward and backward using these APIs. You can even skip elements by using tags wisely. In this section, you'll find out how to seek to certain elements using tags and then browse in order. The following pseudocode takes out four elements at a time. k is the cursor. You can browse elements in order by repeatedly passing the key of the last element as the cursor.

while true
  ary = drip.read(k, 4, 1)
  ...
  k = ary[-1][0]
end

To emulate the preceding code, we'll manually replicate the operation using irb. Is your MyDrip up and running? We also use MyDrip for this experiment. Let's write some test data into Drip.

# terminal 1
% irb -r my_drip --prompt simple
>> MyDrip.write('sentinel', 'test1')
=> 1313573767321912
>> MyDrip.write(:orange, 'test1=orange')
=> 1313573806023712
>> MyDrip.write(:orange, 'test1=orange')
=> 1313573808504784
>> MyDrip.write(:blue, 'test1=blue')
=> 1313573823137557
>> MyDrip.write(:green, 'test1=green')
=> 1313573835145049
>> MyDrip.write(:orange, 'test1=orange')
=> 1313573840760815
>> MyDrip.write(:orange, 'test1=orange')
=> 1313573842988144
>> MyDrip.write(:green, 'test1=green')
=> 1313573844392779

The first element acts as an anchor to mark the time we started this experiment. Then we wrote objects in the order orange, orange, blue, green, orange, orange, and green, with tags corresponding to each color.

# terminal 2
% irb -r my_drip --prompt simple
>> k, = MyDrip.head(1, 'test1')[0]
=> [1313573767321912, "sentinel", "test1"]
>> k
=> 1313573767321912

We first got the key of the anchor element with the "test1" tag.
This is the starting point of this experiment. It's also a good idea to fetch this element with the fetch method. Then we read four elements after the anchor.

>> ary = MyDrip.read(k, 4)
=> [[1313573806023712, :orange, "test1=orange"],
    [1313573808504784, :orange, "test1=orange"],
    [1313573823137557, :blue, "test1=blue"],
    [1313573835145049, :green, "test1=green"]]

Were you able to read as expected? Let's update the cursor and read the next four elements.
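The cursor loop from the earlier pseudocode can be made concrete without a running MyDrip. The following is a toy, in-memory stand-in that mimics only the write/read/head semantics described in this chapter; it is a hypothetical sketch, not the real Drip, which persists to disk, supports blocking reads, and is served over dRuby:

```ruby
# A toy, in-memory stand-in for Drip, sketching the semantics described
# in this chapter: an append-only list of [key, value, *tags] elements.
class ToyDrip
  def initialize
    @ary = []    # elements in write order, oldest first
    @last = 0
  end

  def write(obj, *tags)
    key = [now_usec, @last + 1].max   # same collision rule as Drip
    @last = key
    @ary << [key, obj, *tags]
    key
  end

  # Up to n elements newer than the cursor `key` (no blocking here).
  def read(key, n = 1)
    @ary.select { |k, _| k > key }.first(n)
  end

  # Latest n elements, optionally restricted to a tag.
  def head(n = 1, tag = nil)
    matched = tag ? @ary.select { |e| e[2..-1].include?(tag) } : @ary
    matched.last(n)
  end

  private

  def now_usec
    t = Time.now
    t.tv_sec * 1000000 + t.tv_usec
  end
end

drip = ToyDrip.new
%w[orange orange blue green].each { |c| drip.write(c.to_sym, "test1=#{c}") }

# The cursor loop from the pseudocode, two elements at a time:
k = 0
loop do
  ary = drip.read(k, 2)
  break if ary.empty?
  ary.each { |key, value, tag| puts "#{key}: #{value} (#{tag})" }
  k = ary[-1][0]   # advance the cursor to the last key read
end
```

Running the same loop again from cursor 0 replays the same elements, which is exactly the Queue-versus-Drip difference from Section 9.2.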
https://toc.123doc.org/document/2692928-drip-a-stream-based-storage-system.htm
I'm guessing you meant winscp.

re: AAALoadFirstExtensions dependency: sounds good, although personally I would still have both.

atexit.register: thanks, it seems to work well. I don't know if it works if Sublime crashes though. You can always kill the winscp.com processes through the task manager, but I'd like to avoid that.

I use WinSCP 4.2.3; by default it installs to "C:\Program Files\WinSCP3". Maybe it's because I upgraded from WinSCP 3.xx in the past? There's an option in WinSCP to store the settings in an INI file instead of the registry. You can find it in Preferences -> Storage -> INI file. I suppose it shouldn't be too hard to add some code to search the registry as well.

re: winscp.com: I'm not sure how to make it work with winscp.exe. It spawns a new console window. According to, you can copy winscp.com to a portable installation and it should work.

The package is now on BitBucket: Downloading files works now. Just select a file; it'll download it into a temp folder and open it in Sublime. When you close the connection, all of the temp files will be deleted. Changes are not propagated back to the server for now; I'm debating using "put" or "keepuptodate".

Unfortunately I'll have to go with the 'put' route. The problem with 'keepuptodate' is that it doesn't return until we're finished, so you can only edit one file at a time (actually, you can edit more than one file at a time, as long as they are in the same remote directory). If you want to edit two files in two different folders at the same time, you're out of luck. Fortunately, Sublime has 'onSave' events, so we can still achieve the same functionality using 'put'.

Made a few important changes:
- file changes are now uploaded to the server
- if the INI file is not found, the registry is used instead
and smaller changes:
- reconnects on session timeout
- better handling of different FTP LIST output

So the base functionality is there.
You can connect to a server, get a file, modify it, and each time you save it, it'll be uploaded back automatically. I've only done VERY LIMITED testing so be careful.

Changes: It will now ask for a password if the password is not stored, instead of hanging.

Question: does anyone know how to kill a subprocess (in Windows, with Python 2.5)? subprocess.Popen.terminate() doesn't exist prior to 2.6. I tried this (found on the web) but it doesn't seem to work:

import ctypes
PROCESS_TERMINATE = 1
handle = ctypes.windll.kernel32.OpenProcess(PROCESS_TERMINATE, False, pid)
ctypes.windll.kernel32.TerminateProcess(handle, -1)
ctypes.windll.kernel32.CloseHandle(handle)

this is really great stuff! i havent really tested this plugin but so far it looks good. although i work with svn most of the time, there are older projects which are edited live (with winscp), so this plugin comes in very handy. one feature that would be very cool: set a subfolder of the remote server as a project mount point, so we can use sublime's project management features... ps: it is ingenious that it relies on winscp, which is by far the best sftp client for windows that i know of.

Sounds cool... I'll try to do something like that after the basic functionality is properly tested.

@sublimator: thanks!

hm, i nearly got it working myself... i just added a recursive function to iterate through all subdirectories (please bear with me, first time i do something with python):
i just added a recursive function to iterate through all subdirectories (please bear with me, it's the first time i do something with python):

    def get_folder_list(self, folder, wait=1):
        files = self.recursive_get_folder_list(folder)
        # rewrite / to \ so we can use the sublime directory filter stuff
        renamed = []
        for singleFile in files:
            renamed += [singleFile.replace("/", "\\")]
        yield renamed

    def recursive_get_folder_list(self, folder, wait=1):
        entries = self.winscp.listDirectory(folder)
        files = []
        dirs = []
        for line in entries:
            try:
                info = parse.parse_ftp_list_line(line.rstrip())
            except:
                continue
            if not info:
                continue
            name = info.name
            isDir = info.try_cwd
            if name in ['.', '..']:
                continue
            if isDir:
                dirs += [folder + '/' + name]
            else:
                files += [folder + '/' + name]
        # recursively iterate through subdirectories
        for subDir in dirs:
            files.extend(self.recursive_get_folder_list(subDir))
        # yielding so scheduled.threaded works
        return sorted(files)

i have to replace / with \ so i can use sublime's directory filter stuff ("\foo index.html"), but it breaks everything else. is there any way that i can get / working?

python has a lib for that: os. look more at os.path.normpath(path), see docs.python.org/library/os.path.html

I'm trying to get this plugin to work but can't seem to. I have AAA and WinSCP installed. I can use WinSCP just fine. When I open Sublime though and hit F12 nothing happens. Any ideas what I'm doing wrong? Thanks.

EDIT: All is well now. My problem was fixed in the latest push.

when i try to go inside a subfolder under the host, i got this error

Does your FTP have file names that have special characters or non-english characters (i.e. 'ö')?

yes
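On the earlier question about killing a subprocess on Python 2.5/Windows: one approach (a sketch, not tested on every Windows/Python combination) is to branch on platform, keeping the ctypes recipe for Windows and plain os.kill elsewhere:

```python
import os
import signal
import subprocess
import sys

def kill_process(proc):
    """Terminate a Popen without Popen.terminate() (which needs Python 2.6+)."""
    if sys.platform == "win32":
        import ctypes
        PROCESS_TERMINATE = 1
        # Open a fresh handle with the TERMINATE access right and kill it.
        handle = ctypes.windll.kernel32.OpenProcess(
            PROCESS_TERMINATE, False, proc.pid)
        ctypes.windll.kernel32.TerminateProcess(handle, -1)
        ctypes.windll.kernel32.CloseHandle(handle)
    else:
        os.kill(proc.pid, signal.SIGTERM)

# Demo: spawn a sleeping child and kill it immediately.
proc = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
kill_process(proc)
proc.wait()
print("exit code:", proc.returncode)
```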
https://forum.sublimetext.com/t/winscp-integration/392/6
Important: This migration guide was originally released in February 2008 and written for migration to the 2008-01-01 WSDL version. In April 2009, Amazon SQS released a new WSDL version (2009-02-01). If you haven't yet migrated to the 2008-01-01 WSDL version, we recommend you migrate directly to the 2009-02-01 version. This guide has been updated for migration to the 2009-02-01 version.

Audience

This migration guide is intended for all current Amazon SQS users who are using the 2006-04-01 or 2007-05-01 WSDL versions. Any users of those WSDL versions who want to continue using Amazon SQS after November 6, 2009, must migrate to version 2008-01-01 or 2009-02-01.

To ensure that you have sufficient time to adapt your applications, Amazon SQS will continue to accept requests from WSDL versions 2007-05-01 and 2006-04-01 until November 6, 2009, but only if you have sent a message to one of your queues during the period of April 6 to May 6, 2009. If you haven't sent a message to one of your queues during that period, your access to the 2006-04-01 and 2007-05-01 WSDLs will be discontinued immediately as of May 6, 2009. Further details and timing are covered in the subsequent sections.

Why the Change?

We took an in-depth look at how our customers were using Amazon SQS and explored whether there were opportunities to further lower costs for them. We found that our customers were primarily interested in whether we could reduce the overall cost to use Amazon SQS, especially in conjunction with Amazon EC2. There are several factors that impact the cost of Amazon SQS, including the size of the message, the amount of time the message is in the queue, and the overall number of API calls to the service. We reengineered how we administer the service internally, deprecated some Amazon SQS actions, and adjusted some built-in limits so that we could reduce the cost to operate the service. We are passing these savings on to our customers.
We have changed the pricing model so that the price follows the costs of the service. Specifically, we eliminated the per-message cost and introduced a small request-based charge. To get the benefit of the new price, you must use WSDL version 2008-01-01 or later.

New Price for Amazon SQS

This section describes the differences between the old and new price models.

Previous Price (for previous WSDL versions)
Messages: $0.10 per 1,000 messages sent ($0.0001 per message sent)

New Price (for the 2008-01-01 WSDL version and later)
Requests: $0.01 per 10,000 Amazon SQS requests ($0.000001 per request)

Data Transfer
Data transfer rates are unchanged. However, because many customers want to use Amazon SQS in conjunction with Amazon EC2, all data transferred between Amazon EC2 and Amazon SQS is free of charge.

Tips for Reducing Your Costs

Because the previous price model was message-based and not request-based, some Amazon SQS users designed their applications to poll for new messages at a very high frequency. With the new price model, you're charged $0.000001 for each request. We therefore recommend that when you update your application to use the 2008-01-01 or 2009-02-01 version, you also evaluate your application's polling efficiency. If a large percentage of your ReceiveMessage requests return no messages, then you're probably polling too often.

To reduce the chance that you'll receive a message again that you've already processed, we recommend you promptly delete your messages after you're done processing them. Also, set the visibility timeout for the queue or message to a time sufficient to cover the processing and deletion of your messages.

Timing and Transition

New customers who sign up before May 6, 2009, can use any of the WSDL versions and will be charged according to the pricing model associated with the particular WSDL being used. After May 6, 2009, only the 2008-01-01 and 2009-02-01 versions are available to new customers.
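One way to act on the polling advice above is to back off while the queue is empty, so that a quiet queue generates far fewer billable empty ReceiveMessage requests. A sketch, with a hypothetical receive_message callable standing in for the actual SQS call:

```python
import time

def poll_with_backoff(receive_message, handle, polls, min_delay=0.01, max_delay=1.0):
    """Poll a queue, backing off while it's empty to cut per-request charges."""
    delay = min_delay
    for _ in range(polls):
        messages = receive_message()      # stand-in for an SQS ReceiveMessage call
        if messages:
            for msg in messages:
                handle(msg)
            delay = min_delay             # traffic is flowing: poll quickly again
        else:
            time.sleep(delay)             # empty poll: wait longer before retrying
            delay = min(delay * 2, max_delay)
    return delay

# Demo with a canned sequence of poll results:
seen = []
feed = iter([[], [], ["job-1"], [], []])
poll_with_backoff(lambda: next(feed), seen.append, polls=5)
print(seen)   # ['job-1']
```

In a real consumer the loop would run indefinitely; the polls argument here just makes the sketch finite.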
Amazon SQS customers using the old WSDL versions have until November 6, 2009 to migrate to the 2008-01-01 or 2009-02-01 WSDL version. However, you must send a message to one of your 2006-04-01 or 2007-05-01 queues during the period of April 6 to May 6, 2009, in order to have that full migration period. If you have not sent a message to your queues during the April 6 to May 6 period, your access to the 2006-04-01 and 2007-05-01 WSDLs will be discontinued immediately on May 6, 2009.

To ensure you have sufficient time to adapt your applications, Amazon SQS will accept requests using the previous WSDL versions during the migration period. Requests using the new WSDL version will be charged according to the new pricing model, and requests using the previous WSDL versions will continue to be charged according to the previous pricing model.

After November 6, 2009, Amazon SQS will only accept requests using version 2008-01-01 or 2009-02-01. At that point, your queues created with the previous WSDL versions will no longer be accessible with any WSDL version. Queues created with the previous WSDL versions cannot be accessed with version 2008-01-01 or 2009-02-01, and queues created with version 2008-01-01 or 2009-02-01 cannot be accessed with the previous versions. During the migration period, you can use the previous WSDL versions to send requests to queues created with previous WSDL versions, and you can send requests using the new versions to queues created with the new versions.

Important: Queue names are unique for each Amazon SQS user, regardless of the WSDL version. In other words, you have a single namespace for your queues, regardless of which WSDL you use to create the queue. You must not create a queue with the 2008-01-01 or 2009-02-01 version that has the same name as a queue you created with a previous WSDL version. However, when you request ListQueues, the WSDL version you use for the request affects which queues are included in the response.
Responses to a ListQueues request using either the 2006-04-01 or 2007-05-01 WSDL include only the queues created with either of those WSDL versions, but not queues created using the 2008-01-01 or 2009-02-01 WSDL. Likewise, responses to a ListQueues request using the 2008-01-01 or 2009-02-01 WSDL include only queues created with the 2008-01-01 or 2009-02-01 WSDL version. We did this because you can't perform 2008-01-01 or 2009-02-01 operations on a queue created with a previous WSDL version (and vice versa), and it might cause problems for applications to receive names of queues they can't act upon.

Changes to the AWS Account Activity Page

Starting in February 2008, your AWS Account Activity page began displaying information about requests you sent to the previous WSDL versions and requests you sent to the new versions. The following table lists the items that are displayed in the Amazon SQS section of the AWS Account Activity page.

Changes to the Usage Report

In February 2008, the usage parameters that you can choose to generate the AWS Usage Report changed. The following table lists the options you now have for Amazon SQS.

Summary of Changes

The technical changes in version 2009-02-01 include the following categories:

- New limits
- Deprecated actions
- Other changes

The following table lists the changes to Amazon SQS comparing the old WSDL versions with the 2009-02-01 version (note that most of the changes occurred originally in the 2008-01-01 WSDL; they're still applicable in the 2009-02-01 WSDL). Subsequent sections give more information about each change.

Details of the Changes

Message Retention Time

In version 2009-02-01, the message retention time is now 4 days instead of 15 days. Amazon SQS now automatically deletes any messages that have been in your queues longer than 4 days.

Message Size

In version 2009-02-01, the maximum message size is now 8 KB instead of 256 KB for both Query and SOAP requests.
If you need to send messages to the queue that are larger than 8 KB, we recommend you split the information into separate messages. Alternately, you could use Amazon Simple Storage Service or Amazon SimpleDB to hold the information and include the pointer to that information in the Amazon SQS message. If you send a message that is larger than 8 KB to the queue, you receive a MessageTooLong error (HTTP status code 400).

Number of Messages Received

When you request ReceiveMessage, you can specify the number of messages you'd like to receive, and Amazon SQS returns that number or fewer, depending on how many are available to be received from the queue at that moment. To make it clear that the number you've specified is the maximum number of messages Amazon SQS returns, we've changed the parameter name from NumberOfMessages to MaxNumberOfMessages. Also, we've changed the maximum value for this parameter from 256 to 10. The following table shows a Query request for ReceiveMessage with the old API and with the 2009-02-01 version.

Queues with No Activity

We reserve the right to delete a queue without notification if one of the following actions hasn't been performed against the queue for more than 30 consecutive days: SendMessage, ReceiveMessage, DeleteMessage, GetQueueAttributes, and SetQueueAttributes.

Visibility Timeout

Each queue has a default visibility timeout of 30 seconds. You can change that value for the entire queue, and you can set the visibility timeout for messages when you receive them without affecting the queue's visibility timeout. In version 2009-02-01, the maximum visibility timeout that you can specify for a queue or message has changed from 24 hours (86400 seconds) to 12 hours (43200 seconds). You can use the ChangeMessageVisibility action to extend the message's timeout; however, the total visibility timeout for the message can't exceed 12 hours. For example, if the default visibility timeout is 1 hour, you can extend it by a maximum of 11 hours.
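The two new caps described above (MaxNumberOfMessages at most 10, visibility timeout at most 43200 seconds) are easy to validate client-side before sending a request. A sketch; the parameter names follow the Query API, but the helper itself is hypothetical, not part of any SDK:

```python
def receive_params(max_messages, visibility_timeout=None):
    """Build ReceiveMessage query parameters for the 2009-02-01 API (a sketch)."""
    if not 1 <= max_messages <= 10:                  # cap lowered from 256 to 10
        raise ValueError("MaxNumberOfMessages must be between 1 and 10")
    params = {
        "Action": "ReceiveMessage",
        "Version": "2009-02-01",
        "MaxNumberOfMessages": str(max_messages),
    }
    if visibility_timeout is not None:
        if not 0 <= visibility_timeout <= 43200:     # 12 hours, down from 24
            raise ValueError("VisibilityTimeout must be 0-43200 seconds")
        params["VisibilityTimeout"] = str(visibility_timeout)
    return params

print(receive_params(5, visibility_timeout=120))
```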
SetVisibilityTimeout and GetVisibilityTimeout

With the 2007-05-01 version of Amazon SQS, the SetVisibilityTimeout and GetVisibilityTimeout actions were deprecated in favor of the new SetQueueAttributes and GetQueueAttributes actions. In version 2009-02-01, SetVisibilityTimeout and GetVisibilityTimeout are no longer available.

PeekMessage

In version 2009-02-01, the PeekMessage action has been removed. Typically, Amazon SQS users have used PeekMessage to debug small systems, specifically to confirm a message was successfully sent to the queue or deleted from the queue. To do this with version 2009-02-01, you can log the message ID and receipt handle for your messages and correlate them to confirm when a message has been received and deleted. For more information about the receipt handle, see New Receipt Handle.

Access Control

In version 2009-02-01, the access control functionality has changed. The actions AddGrant and RemoveGrant have been replaced with the new actions AddPermission and RemovePermission. The ListGrants functionality has been replaced by the GetQueueAttributes action; you can get a list of the queue's permissions by getting the queue's attributes (specifically the attribute for the access control policy for the queue). The functionality has also been expanded, so you can specifically deny someone access to a queue, or restrict access based on conditions such as the date and time, the IP address of the requester, or whether the requester is using SSL.

The permissions you can grant have changed. You can now grant only these permissions: SendMessage, ReceiveMessage, DeleteMessage, ChangeMessageVisibility, GetQueueAttributes, or * (which is the sum of all the preceding permissions). You can no longer grant someone full control permission on a queue (only you as the owner can have full control). If you're using access control with the old APIs, there's no automatic migration of your grants to the new queues you create with the 2009-02-01 version.
To manually migrate, you need to get the AWS account number for each person you want to grant permission to (the new access control system uses the AWS account number as the grantee's identifier instead of the e-mail address), and then recreate each permission on your new 2009-02-01 queues.

The new access control system also allows you to share a queue you own with anonymous users. Anonymous requests don't require an AWS Access Key ID or signature. Keep in mind that you as the queue owner are still responsible for all costs associated with requests and messages from anyone you share your queue with (be it anonymous or authenticated users).

New Receipt Handle

In previous API versions, when you sent a message to the queue, you received a message ID for that message. When you deleted the message, you had to provide the message ID. With version 2009-02-01, you still receive the message ID when you send the message to the queue. However, you also get a receipt handle when you receive that message from the queue later. This handle is a different identifier from the message ID. When you delete the message, you must provide the receipt handle instead of the message ID. Following is an example receipt handle.

MbZj6wDWli+JvwwJaBV+3dcjk2YW2vA3+STFFljTM8tJJg6HRG6PYSasuWXPJB+CwLj1FjgXUv1uSj1gUGAWV76FU/WeR4mq2OKpEGYWbnLmpRCJVAyeMjeU5ZBdtcQ+QEauMZc8ZRv37sIW2iJKq3M9MFx1YvV11A2x/KSbkJ0=

Note: The receipt handle is associated with the act of receiving, not with the message itself. If you receive a particular message more than once, you get a different receipt handle each time you receive it. The message ID stays the same, however, and is returned each time you receive the message.

The following table shows a Query request for DeleteMessage with the old API and with the 2009-02-01 version. A consequence of this change is that you must always receive a given message before you can delete it.
With previous APIs, you could recall a message from the queue (send a message and then delete it without ever receiving it); you can no longer do this.

New MD5 Digests

In version 2009-02-01, the response for both SendMessage and ReceiveMessage includes an MD5 digest of the non-URL-encoded message body string. You can use the digests to confirm that the message was not corrupted or changed during transmission to and from the queue. For information about MD5, go to.

The following table shows the old and new response for SendMessage. For more information about the ResponseMetadata element, see Change to Response Structure. The next table shows the old and new response for ReceiveMessage. The following table gives details about the child elements.

Note: You can also optionally receive two attributes about the message: who sent it and when the message was put into the queue. For more information, see New Attributes.

Deprecation of ForceDeletion with DeleteQueue

With previous API versions, when you requested DeleteQueue, the queue wasn't deleted if there were messages in it (instead you got an AWS.SimpleQueueService.NonEmptyQueue error). However, you could specify the ForceDeletion parameter to override that behavior, and your queue was deleted even if there were messages in it. In version 2009-02-01, we've changed how DeleteQueue works so that your queue is deleted regardless of whether it's empty. The ForceDeletion parameter is no longer available.

The consequence of this change is that you can no longer use DeleteQueue to determine if a queue is empty. Instead, we recommend that you use the GetQueueAttributes action to retrieve the ApproximateNumberOfMessages for the queue. Because of the distributed architecture of Amazon SQS, we do not have an exact count of the number of messages in a queue; the value we provide is our best guess. In most cases it should be close to the actual number of messages in the queue, but you should not rely on the count being precise.
However, if this value remains at zero over a period of time, you can assume the queue is empty.

Unavailability of the REST API

With version 2009-02-01, the REST API is not available. The Query API is similar to the REST API and includes all the same functionality. The following example shows a CreateQueue request with the Query API.

?Action=CreateQueue
&QueueName=queue2
&AWSAccessKeyId=0GS7573JW74RZM612K0AEXAMPLE
&Version=2009-02-01
&Expires=2009-02-02T12:00:00Z
&Signature=Dqlp3Sd6ljTUA9Uf6SGtEExwUQEXAMPLE
&SignatureVersion=2
&SignatureMethod=HmacSHA256

For comparison, here is an example REST request to create a queue.

POST /?QueueName=queue2 HTTP/1.1
Host: queue.amazonaws.com
Authorization: 0GS7573JW74RZM612K0AEXAMPLE:Esdki4ll9ogJUV3Ad6tH1QEgpMREXAMPLE
Content-Type: text/plain
AWS-Version: 2007-05-01
Date: Mon, 21 May 2007 21:12:00 GMT

Following are the main differences you should be aware of when moving from REST to Query:

- Query uses only GET or POST, uses an Action parameter to indicate the action to perform, and doesn't require the use of specific HTTP headers.
- The authentication information normally held in the Authorization header for REST requests is split and passed in two parameters in the Query request: one for the Access Key ID and one for the signature.
- With Query, the signature is based on a different string. For REST, the string is formed from the request headers. For Query, the string is formed from the query parameters. Instructions on how to form the string and encode it are in the Amazon Simple Queue Service Developer Guide.
- Both Query and REST require a date in the request, but the required format is different. Query uses one of the ISO 8601 formats (e.g., 2009-04-08T21:12:00Z), whereas REST uses RFC 1123 (e.g., Wed, 08 Apr 2009 21:12:00 GMT).
- With REST, when you send or receive messages from a queue, you append /back or /front to the end of the queue URL corresponding to whether you're sending or receiving messages.
With Query, this is not necessary. Amazon SQS returns the same errors for Query that it did for REST.

Change to Format of Queue URL

In version 2009-02-01, the format of the queue URL has changed. It now contains your AWS account number. For example, if your account number is 123456789012, and you have a queue named queue1, the queue URL is. We recommend that you always store the entire queue URL as it is returned to you by SQS when you create the queue. Don't build the queue URL from its separate components each time you need to specify the queue URL in a request; we could change the components that make up the queue URL, as we have done with the 2009-02-01 version.

Change to CreateQueue

With previous API versions, if you tried to create a queue that had the name of an existing queue, and you also provided a visibility timeout value different from that of the existing queue, you didn't get an error. The request succeeded and the queue URL of the existing queue was returned to you. In version 2009-02-01, this same request returns the error AWS.SimpleQueueService.QueueNameExists (HTTP status code 400).

Change to Response Structure

In version 2009-02-01, the structure of the response has changed. Both successful responses and error responses have new structures.

Successful Responses

In version 2009-02-01, if the action returns action-specific data, the main response element has a child element named ActionNameResult. For example, the CreateQueue action returns a queue URL, and therefore it has a CreateQueueResult element that wraps the queue URL data in the response. See the following examples. The Amazon SQS schema shows the exact structure of the response for each action.

Also, with previous API versions, a successful response included a ResponseStatus element. In version 2009-02-01, this element has been replaced with a ResponseMetadata element, which has different contents.
The following table shows an example of a successful CreateQueue response with the old API and with the 2009-02-01 version.

Error Responses

With previous API versions, an error response had a Response element. In version 2009-02-01, this element has been replaced with an ErrorResponse element, which has different contents. The following table gives details about the child elements of Error. The following table shows an example error response with the old API and with the 2009-02-01 version.

Change to Attributes Structure

The SetQueueAttributes action and the GetQueueAttributes action let you set and get the available attributes for queues. With previous API versions, when you called GetQueueAttributes, you could specify the particular attribute to get using the Attribute parameter. In version 2009-02-01, this parameter's name has changed to AttributeName. The following table compares a Query request using the old API with version 2009-02-01.

Also in version 2009-02-01, the structure for attribute data has changed slightly. With the old API, you used an AttributedValue data structure to set an attribute's value, and Amazon SQS used that same structure when returning the queue's attributes in a GetQueueAttributes response. In version 2009-02-01, that data structure is called Attribute, and it has different child elements than the AttributedValue element. The following table compares the data structure in the old API with version 2009-02-01. The following table compares Query requests for SetQueueAttributes with the old API and with version 2009-02-01.

In version 2009-02-01, the structure of the response for GetQueueAttributes has changed. Also, there are additional attributes you can get (for more information, see New Attributes). The following table shows an example response for GetQueueAttributes with the old API and with the 2009-02-01 version.
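Returning to the MD5 digests described earlier: a client can verify the digest returned in MD5OfMessageBody against the body it actually sent or received. A minimal sketch (not the official SDK; the helper name is made up):

```python
import hashlib

def body_matches_digest(body, md5_hex):
    """Compare a message body with the MD5 digest SQS returns.

    SQS computes the digest over the non-URL-encoded message body string.
    """
    return hashlib.md5(body.encode("utf-8")).hexdigest() == md5_hex.lower()

# Using the well-known RFC 1321 test vector for md5("abc"):
print(body_matches_digest("abc", "900150983CD24FB0D6963F7D28E17F72"))  # True
```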
New Attributes

As of version 2009-02-01, you can get two new attributes related to the queue's creation and modification times: CreatedTimestamp and LastModifiedTimestamp (both in UNIX milliseconds timestamp format). You can also set and get the Policy attribute, which is a JSON document listing the access control permissions on the queue.

You can also get two new attributes related to the message: SenderId (which returns the AWS account number of the person who sent the message, or the IP address if the request was anonymous) and SentTimestamp (when the message was put into the queue, in UNIX milliseconds timestamp format). The following is an example ReceiveMessage request that gets the two attributes.

?Action=ReceiveMessage
&MaxNumberOfMessages=5
&VisibilityTimeout=15
&AttributeName.1=SenderId
&AttributeName.2=SentTimestamp
&AWSAccessKeyId=0GS7553JW74RRM612K02EXAMPLE
&Version=2009-02-01
&Expires=2009-02-02T12:00:00Z
&Signature=Dddr3Sd6ljTUA9Uf6SGt8hl2UQEXAMPLE
&SignatureVersion=2
&SignatureMethod=HmacSHA256

The following is part of an example response.

    </Message>
  </ReceiveMessageResult>
  <ResponseMetadata>
    <RequestId>b6633655-283d-45b4-aee4-4e84e0ae6afa</RequestId>
  </ResponseMetadata>
</ReceiveMessageResponse>

Changes to Request Authentication

Query Requests

In version 2009-02-01, signature version 0 is no longer supported for Query requests, and signature version 1 is deprecated. Signature version 2 is the recommended version. In your Query requests, you must set the parameter SignatureVersion to 2, and set the new SignatureMethod parameter to either HmacSHA1 or HmacSHA256 to indicate the particular HMAC-SHA algorithm you're using. The Amazon Simple Queue Service Developer Guide has all the details about how to form signatures using signature version 2.

SOAP Requests

You must now use HTTPS when sending all SOAP requests.
Also, you must now provide the authentication information in the SOAP header, using the namespace. It is no longer accepted in the SOAP body. The authentication information includes elements for AWSAccessKeyId, Timestamp, and Signature. Following is an example request showing the information in the SOAP header.

<?xml version="1.0"?>
<soap:Envelope xmlns:
  <soap:Header xmlns:
    <aws:AWSAccessKeyId>1D9FVRAYCP1VJS767E02EXAMPLE</aws:AWSAccessKeyId>
    <aws:Timestamp>2008-02-02T12:00:00Z</aws:Timestamp>
    <aws:Signature>SZf1CHmQnrZbsrC13hCZS061ywsEXAMPLE</aws:Signature>
  </soap:Header>
  ...
</soap:Envelope>

Base Service Name for the WSDL

In the 2009-02-01 WSDL, the base name of the service has changed. Specifically, QueueService has changed to QueueServiceBinding and MessageQueue has changed to MessageQueueBinding. Because the generated code from the 2009-02-01 WSDL uses these new names, you'll need to change any code you have that uses the generated code.
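As a worked example of the signature version 2 process recommended above: sort the query parameters by byte order, build the string to sign from the HTTP verb, host, path, and canonical query string, then HMAC it with the secret key and Base64-encode the result. This is a sketch with made-up parameter values; the Developer Guide has the normative encoding rules.

```python
import base64
import hashlib
import hmac
import urllib.parse

def sign_v2(secret_key, params, method="GET",
            host="queue.amazonaws.com", path="/"):
    """Compute an AWS signature version 2 (HmacSHA256) for a Query request."""
    # Sort parameters by byte order and percent-encode per RFC 3986
    # (only A-Za-z0-9, '-', '_', '.', '~' are left unencoded).
    canonical = "&".join(
        "%s=%s" % (urllib.parse.quote(k, safe="-_.~"),
                   urllib.parse.quote(v, safe="-_.~"))
        for k, v in sorted(params.items()))
    string_to_sign = "\n".join([method, host, path, canonical])
    digest = hmac.new(secret_key.encode("utf-8"),
                      string_to_sign.encode("utf-8"),
                      hashlib.sha256).digest()
    return base64.b64encode(digest).decode("ascii")

print(sign_v2("secretkeyEXAMPLE",
              {"Action": "CreateQueue",
               "QueueName": "queue2",
               "Version": "2009-02-01"}))
```

Remember the Signature value itself must be URL-encoded when it is placed into the request query string.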
http://aws.amazon.com/articles/1148?jiveRedirect=1
Hi,

Curious about running Jess on Android, I first read up and found the issues with JavaBeans on Android. Geez, yuck!

I wondered if there was already a legal alternative implementation of those beans, maybe from Harmony, Classpath, or OpenJDK.

GNU Classpath is GPL but gives a linking exception, so a port of java.beans to a new namespace might work.

Harmony is Apache licensed, so you can link it with commercial software as long as you give attribution. And there is already a port here:

OpenJDK is the most mainstream open implementation, backed by Oracle among others. It has the same GNU Classpath exception on linking.

OpenJDK seems like the best bet to me. Not sure how best to proceed, but as a developer myself I would like to volunteer to:

1. Port OpenJDK's java.beans
2. Find as many test suites as possible utilizing java.beans to include here to test it.
3. Put it on github or something.
4. Possibly test out migrating Jess source code (I would need to get a license).
5. Test out Jess on it on a pc.
6. Test out Jess on it on android.

Not sure whether other folks are interested in this or not, but if so please reply so we can coordinate our efforts.

Best wishes,

--
Grant Rettke | ACM, AMA, COG, IEEE
gret...@acm.org | Wisdom begins in wonder.
((λ (x) (x x)) (λ (x) (x x)))

--------------------------------------------------------------------
To unsubscribe, send the words 'unsubscribe jess-users y...@address.com'
in the BODY of a message to majord...@sandia.gov, NOT to the list
(use your own address!) List problems? Notify owner-jess-us...@sandia.gov.
--------------------------------------------------------------------
http://www.mail-archive.com/jess-users@sandia.gov/msg11958.html
External modules

- Some useful modules
- Installing modules
- Testing a module

Some useful modules

- grapefruit - working with colors in different color modes, creating gradients, etc.
- markdown - generating html from Markdown sources
- some other module - what the module can be used for

Installing modules

Installing with pip
…

Installing with setup scripts
…

Installing with git clone
…

Installing manually with a .pth file

Create a simple text file in a code editor, containing the path to the root folder where the module lives, and save it with a .pth extension, e.g. myModule.pth. And that's it. This method creates a reference to the folder in which the module lives.

Testing a module

To see if a module is installed, just try to import it in the environment you wish to work in:

    import myModule

If no error is raised, you're good to go.

- include other useful external packages
- expand installation instructions
- include markdown, grapefruit, etc. examples?
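The import check above can also be done without running the module's top-level code, which is handy when a module has side effects at import time. A sketch using the standard library (the module names are placeholders):

```python
import importlib.util

def module_available(name):
    """Return True if `name` is importable in the current environment."""
    return importlib.util.find_spec(name) is not None

print(module_available("json"))      # stdlib module, always present: True
print(module_available("myModule"))  # placeholder name from the text above
```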
https://doc.robofont.com/documentation/building-tools/python/external-modules/
Xalan-for-xslt2 - Implement XSLT 2 for Xalan GSoC 2011 Project - Implement XSLT 2 for XalanApache Xalan-Java is a powful XSLT processor for XML transforming based on W3C recommendation XSL Transformations (XSLT) Version 1.0 and XML Path Language (XPath) Version 1.0, XSL Transformations (XSLT) Version 2.0 specification was published on 23 January 2007,this new version give us more powful functions, new dat model and serialization etc. In my daily work, i use XSLT and XPath to parse XML and change XML document to another format, i did this with Xalan.As we know,Xalan support XSLT version 1.0 and XPath version 1.0 only.But my XML parse job is so complex, and usually, i must use some XPath v2.0 functions to finish my parse job. So, i encountered trouble. Then I Google keywords with Xalan and XSLT 2.0,i found many developers hope that Xalan based XSLT 2.0 could become reality besides me. At the same time, i discuss this topic with open source developers in mail list, so many guys are interested in implementing complete XSTL 2.0 specification for Xalan. We discuss this project deeply, and i got many good ideas and advises from the discussion. And also they told me Xalan lost its most active contributors a few years ago due to time demands from Real Jobs, and development slowed down as a result,but they love to see Xalan tackle it.Due to XSLT 2.0 and XPath 2.0 are large specifications that I imagine could take the developers years to implement correctly,anyway it need some guy get the ball rolling, i want to be this guy. Compared with XSLT 1.0, XSLT 2.0 and XPath 2.0 have some important new characteristic. New features in XPath 2.0: Sequences, in XPath2.0 everything is sequences 2. For expression,in XPath 2.0,we can use for expression to iterator every item in the sequence, compute expression value for each item, at last return the sequence result by connect all the expression values. 3. 
Condition expression, XPath 2.0's condition expression can be used to compute different value on the basis of the condition's true or false value. 4. Limited expression,limited expression in XPath 2.0 can be used to judge that whether every item in the sequence satisfy appointed condition, its value is always true of false. There are two kinds of limited expression: some expression and every expression. 5. Datatype,XPath 2.0's datetype is based on XML Schema, it supports all the basic built-in datatype,such as xs:integer,xs:string and xs:date ect. 6. Date and time, XSLT 1.0 has not date and time datatype, it must use string to represent date and time. Due to XPath 2.0 system is based on XML schema, so there is date and time data type in XPath 2.0. 7. More functions support,XSLT 2.0 specification has more powful functions definition.Functions in XPath 2.0 is defined in a special recommedation "XQuery 1.0 and XPath 2.0 Functions and Operations", functions in this recommendation specification belong to namespace "", this namespace is bound with prefix "fn". It includes these serveral types: Constructor Functions Functions and Operators on Numerics Functions on Strings Functions on anyURI Functions and Operators on Boolean Values Functions and Operators on Durations, Dates and Times Functions Related to QNames Operators on base64Binary and hexBinary Operators on NOTATION Functions and Operators on Nodes Functions and Operators on Sequences Context Functions There are also many new features in XSLT 2.0: Group, basic group syntax,group sort,group-adjacent and group-starting-with etc. 2. Connotive document node, or we can call it temporary tree 3. Element result-document, we can use to generate multi output file in XPath 2.0 4. Improvement of element value-of 5. Char mapping,XSLT 2.0 supply more flexible solution for us to handler specific characters in XML such as '<' and '&' 6. 
Custom stylesheet functions: in XSLT 2.0, we can define our own functions in a stylesheet.
7. Other new features.

In order to implement the full XSLT 2.0 specification, some moderately basic data structure changes are required to track schema types and to handle sequences which aren't simply nodesets. I must also implement all the new functions defined in XSLT 2.0. Fortunately, there is an open source XPath 2.0 processor () at Eclipse, PsychoPath. It is already being used by Xerces-J for XSD 1.1 assertion support. Why re-invent the wheel if we can leverage something that already exists? PsychoPath is already XML Schema aware, leverages Xerces-J () and passes about 99.9% of the XPath 2.0 test suite. It is currently undergoing testing against the official W3C test suite. So, in order to implement the full XSLT 2.0 specification for Xalan, there are two steps: 1. Merge the Xalan code with the PsychoPath code. This will help us a lot; once this job is done, Xalan will have implemented all the XPath 2.0 features. There are some issues to resolve during this merging process, for example: PsychoPath is not "stre
7.13. Implementing Knight's Tour. In this section we will look at two algorithms that implement a depth first search. The first algorithm we will look at directly solves the knight's tour problem by explicitly forbidding a node to be visited more than once. The second implementation is more general, but allows nodes to be visited more than once as the tree is constructed. The second version is used in subsequent sections to develop additional graph algorithms. The knightTour function takes four parameters: n, the current depth in the search tree; path, a list of vertices visited up to this point; u, the vertex in the graph we wish to explore; and limit, the number of nodes in the path. The knightTour function is recursive. When the knightTour function is called, it first checks the base case condition. If we have a path that contains 64 vertices, we return from knightTour with a status of True, indicating that we have found a successful tour. If the path is not long enough, we continue to explore one level deeper by choosing a new vertex to explore and calling knightTour recursively for that vertex. DFS also uses colors to keep track of which vertices in the graph have been visited. Unvisited vertices are colored white, and visited vertices are colored gray. If all neighbors of a particular vertex have been explored and we have not yet reached our goal length of 64 vertices, we have reached a dead end. When we reach a dead end we must backtrack. Backtracking happens when we return from knightTour with a status of False. In the breadth first search we used a queue to keep track of which vertex to visit next. Since depth first search is recursive, we are implicitly using a stack to help us with our backtracking. When we return from a call to knightTour with a status of False, in line 11, we remain inside the while loop and look at the next vertex in nbrList.
Listing 3

from pythonds.graphs import Graph, Vertex

def knightTour(n, path, u, limit):
    u.setColor('gray')
    path.append(u)
    if n < limit:
        nbrList = list(u.getConnections())
        i = 0
        done = False
        while i < len(nbrList) and not done:
            if nbrList[i].getColor() == 'white':
                done = knightTour(n + 1, path, nbrList[i], limit)
            i = i + 1
        if not done:  # prepare to backtrack
            path.pop()
            u.setColor('white')
    else:
        done = True
    return done

Let's look at a simple example of knightTour in action. You can refer to the figures below to follow the steps of the search. For this example we will assume that the call to the getConnections method on line 6 orders the nodes in alphabetical order. We begin by calling knightTour(0,path,A,6). knightTour starts with node A (Figure 3). The nodes adjacent to A are B and D. Since B is before D alphabetically, DFS selects B to expand next, as shown in Figure 4. Exploring B happens when knightTour is called recursively. B is adjacent to C and D, so knightTour elects to explore C next. However, as you can see in Figure 5, node C is a dead end with no adjacent white nodes. At this point we change the color of node C back to white. The call to knightTour returns a value of False. The return from the recursive call effectively backtracks the search to vertex B (see Figure 6). The next vertex on the list to explore is vertex D, so knightTour makes a recursive call moving to node D (see Figure 7). From vertex D on, knightTour can continue to make recursive calls until we get to node C again (see Figure 8, Figure 9, and Figure 10). However, this time when we get to node C the test n < limit fails so we know that we have exhausted all the nodes in the graph. At this point we can return True to indicate that we have made a successful tour of the graph. When we return the list, path has the values [A,B,D,E,F,C], which is the order we need to traverse the graph to visit each node exactly once.
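The walkthrough above can also be reproduced without the pythonds package. In the sketch below, the adjacency lists are an assumption chosen so the search matches the figures (C is a dead end when reached from B), and membership in path stands in for the white/gray coloring:

```python
# Self-contained sketch of the same depth first search with backtracking.
# The directed adjacency lists are hypothetical, chosen so the search
# reproduces the tour in the figures: A, B, D, E, F, C.

def tour(graph, path, u, limit):
    path.append(u)                      # "color u gray"
    if len(path) == limit:              # base case: every node is on the path
        return True
    for v in sorted(graph[u]):          # explore neighbors alphabetically
        if v not in path:               # v is still "white"
            if tour(graph, path, v, limit):
                return True
    path.pop()                          # dead end: backtrack, re-whiten u
    return False

example = {                             # assumed edges matching the figures
    'A': ['B', 'D'], 'B': ['C', 'D'], 'C': [],
    'D': ['E'], 'E': ['F'], 'F': ['C'],
}

path = []
found = tour(example, path, 'A', 6)
print(found, path)                      # True ['A', 'B', 'D', 'E', 'F', 'C']
```

The first attempt A, B, C dead-ends and is popped off the path, exactly as in Figures 5 and 6, before the search succeeds through D.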
Figure 3: Start with node A
Figure 4: Explore B
Figure 5: Node C is a dead end
Figure 6: Backtrack to B
Figure 7: Explore D
Figure 8: Explore E
Figure 9: Explore F
Figure 10: Finish

Figure 11 shows you what a complete tour around an eight-by-eight board looks like. There are many possible tours; some are symmetric. With some modification you can make circular tours that start and end at the same square.

Figure 11: A Complete Tour of the Board
Long gone are the days when software was built using a fixed Waterfall approach of establishing product requirements, designing software architecture, coding an implementation, verifying results and finally entering maintenance mode. Nearly all software projects these days have adopted agile methodologies that take an iterative approach, in which there are no permanent requirements and an incremental, evolutionary product lifecycle is the standard. Unlike most software application upgrade paths, which simply replace existing application files, deploying database structural changes is hard, because the data in the tables cannot be thrown away when a new structure is deployed. Harder still are changes requiring both structural and data changes. Another problem is maintaining the change logs of specific software versions and associated database versions: there is no easy way of documenting which software versions depend on which database schema version. Therefore, based on principles set forth by evolutionary database design, Continuous Database (CD) is the latest SuperOffice process for applying incremental changes to the SuperOffice database in a continuous way. This new process is how both SuperOffice and third parties can continuously update a database schema to reflect ever-changing business requirements, without ever having to depend on a hardcoded, fixed system again. Before diving into details, let's first establish a point of reference. You are going to start off by interacting with a SuperOffice database that is either pre-version 8.1, version 8.1 or post-version 8.1, and one that either does or does not have third-party tables. Under those assumptions, you are most likely to encounter one or more of the following scenarios: With respect to both #1 and #2, we recommend using ServerSetup to upgrade customer installations.
This will upgrade both the customer's installation and database to the latest continuous database process. If customers do not upgrade to SuperOffice 8.1, the only available option for third-party tables is continued use of the legacy Dictionary SDK to create new or modify existing third-party tables. With respect to #3 and #4, all third parties must come to accept, understand and adopt the continuous database processes, as the remainder of this article presents. So how does SuperOffice isolate itself from unpredictable database changes? From version 8.1, SuperOffice creates an in-memory model of the database from the one stored in the DatabaseModel table. The model is a direct representation of the tables that physically exist in the database. The database model also contains a list of the latest schema changes that have been applied to the database, as a list of dictionary steps. CD defines database variations in two dimensions: individual features refer to step names, and each feature, or step name, has individual steps for each version of the feature. Teams can work in parallel on their features and produce steps in numbered series. SuperOffice CRM demonstrates multiple features being developed in parallel. Even though this is not exposed for partner customizations, the concept of features being developed in parallel is verified in the same way as if partners were developing customizations in different versions in parallel. The current version of SuperOffice CRM Online consists of 9 different features with 35+ steps. For each table, field, index and relation definition in the database model, there is a reference to the dictionary step responsible for its creation, as well as the last step to update it. This is useful for tracing artifact changes and origin. A dictionary step is responsible for defining a list of schema changes and optionally importing priming data.
Schema changes are actions such as a new table, new fields in an existing table, new indexes and many more. Each dictionary step is uniquely identified by its name and step number combination. While the name is generally associated with a product name or feature, the step number is usually equal to an iteration. The step number indicates the order in which each dictionary step is applied to the database model, to ensure all necessary changes are present and accounted for. The dictionary step description should give a general description of what changes are performed by the dictionary step. Let's look at an example. Suppose a vendor called Uno creates a dictionary step that adds a string field to the contact table that is to be 25 characters in length. As seen in the figure below, the initial dictionary step number to perform that action is defined as having a StepNumber set to 1. Next, suppose Uno decides to change the string field to support 255 characters in length. The third party must then define a new dictionary step that sets the StepNumber to 2. A second example is when there are two third-party integrations that make database schema changes. In addition to the previously mentioned Uno, a third party called Duo comes along and adds a field to the contact table. Duo's dictionary step must be uniquely named, and its step number is then 1. The dictionary step state property indicates whether the dictionary step is in an "InDevelopment" or "Released" state. Note that third parties must respect each other and only change tables and fields they themselves have created. Third parties should also do their best to prevent field naming collisions and use a suitable prefix for their tables and fields.
Since dictionary steps contain actions such as “Add field” or “Add table”, applying them means making that change – simultaneously – to the database model and the physical database. The steps themselves are not stored in the database (only their names and numbers, for tracking purposes). A dictionary step can only be applied once; steps with the same name are applied strictly consecutively (no gaps); each chain of steps that share a name has to start with step 1. The result of applying steps is a changed database schema, and a corresponding DatabaseModel that describes the changed schema, and thus the code can know what the database now looks like. Uninstalling a DictionaryStepInfo from the model is accomplished by creating a DictionaryStep with the StepNumber set to Integer.MaxValue, or 2147483647. It's your responsibility to implement the class and completely remove all structural changes every made by your dictionary step. So far, the explanation of a dictionary step has only included the concept of what it is and how it contributes towards smooth evolutionary database design. So how are they defined? On one hand, there is the definition of the dictionary step and on the other there is the implementation. In terms of API dependencies, third-parties must create a .NET assembly that references two SuperOffice assemblies for Continuous Database: Third-parties must create a class decorated with a DictionaryAttribute attribute, and the class must inherit from the base class DictionaryStep. Both the DictionaryAttribute and DictionaryStep base class are defined in SuperOffice.CD.DSL.dll. A third-party is expected to override and implement at least one of three primary methods: CustomPriming: performs unique priming data actions or data transformations after ImpFileNames is complete, using direct SQL statements. Below is an example DictionaryStep that overrides the Structure and ImpFileNames. 
using SuperOffice.CD.DSL.V1.StepModel;
using System.Collections.Generic;

namespace SuperOffice.DevNetCddLib.DictionarySteps
{
    [Dictionary("DevNetChat", 1, ReleaseState.Released)]
    public class ChatRoom1 : DictionaryStep
    {
        public override void Structure()
        {
            CreateTable("DN_ChatRoom", "Contains chatroom settings")
                .TableProperties.Replication(ReplicationFlags.Down | ReplicationFlags.Up | ReplicationFlags.Prototype)
                .TableProperties.CodeGeneration(MDOFlags.None, HDBFlags.None, UdefFlags.None, SentryFlags.None)
                .AddString("Name", "Name of the chatroom", 75, true)
                .AddString("Topic", "Description of room content", 255, false)
                .AddEnum<DNRoomStatus>("RoomStatus", "Determines if chat room is open or closed")
                ;
        }

        /// <summary>
        /// Return the hard-coded list of standard IMP files for a new 8.1 installation.
        /// </summary>
        /// <returns></returns>
        public override List<string> ImpFileNames()
        {
            // these are the .IMP files
            return new List<string>
            {
                @"I_ChatRoom.imp",
            };
        }
    }

    [DbEnum("Value for field 'RoomStatus' in table 'DN_ChatRoom'.", Layer.Core)]
    public enum DNRoomStatus : short
    {
        [DbEnumMember("Set chat room status to closed.")]
        Closed = 0,

        [DbEnumMember("Set chat room status to open")]
        Open = 1,
    }
}

Only implement the methods that have actual content, i.e. do not create empty overrides, as that leads to degraded performance during upgrades. The DictionaryStep is conceptually a pipeline of three stages: Structure, ImpFileNames and CustomPriming. While none of the methods are required, each routine presents an opportunity to make database changes. Whether physical schema changes, priming data or data transformations in the database, actions done in the pipeline are a means to ensure an agile and evolutionary database design. Structure is where all database table, field and index changes must be made. There are several intuitive methods in the base class that make such actions easy; they are explored more thoroughly in the next section.
Once a table is created and populating the table with priming data is desired, the class need only implement ImpFileNames. ImpFileNames returns a list of IMP file names to the Continuous Database library, which iterates over the list and begins to import data. When more flexibility is required, such as calculated fields or transforming data from existing tables, the dictionary step should override the CustomPriming method. Methods declared in the base class, ExecuteSelect and ExecuteSql, can issue SQL statements to the database to perform queries or actions, respectively. However, these methods should be used sparingly. Issuing SQL statements using these methods is covered in a later section. Finally, it's important to understand that all schema changes should be done in a small, compact and isolated manner. Changes should be done in such a way that makes them easy to understand and manage. Evolutionary database design benefits from small incremental changes. Once a dictionary step is committed to the database, it is final. It cannot be undone. The only way to change the last dictionary step is to create a new dictionary step that makes yet another change to the database! Changing the database schema is facilitated by three methods in the base class: CreateTable, ModifyTable and DropTable. Each method is a fluent interface, and therefore easy to read and easy to use.
This simple example illustrates this point:

public override void Structure()
{
    CreateTable("Movie", "Movie table for movie buffs")
        .AddString("Name", "English name of the movie", 255, notNull: true)
        .AddString("Description", "Description of the movie", 255, notNull: false)
        .AddBool("Rated", "Has this movie been rated?")
        .AddBlob("MoviePreviewImage", "Compressed image string", notNull: false)
        ;

    ModifyTable("Movie")
        .AddString("Genres", "The style or category of the movie", 255)
        .AddInt("Rating", "The rating for this movie")
        ;

    ModifyTable("Movie")
        .ModifyField("Description", maxLength: 1024)
        ;

    DropTable("Movie");
}

Each fluent method expects a minimal set of required parameters while permitting several named parameters to override field defaults, such as notNull: false. In addition to fluent methods for tables, there are fluent methods that make it easy to add and modify fields of all supported field types. There are also handy macro methods to create common field patterns, such as AddRegisteredUpdated(). AddRegisteredUpdated adds five fields: registered, registered_associate_id, updated, updated_associate_id and updatedCount, all with standard settings.

Table methods:
- Legacy: Access methods to set legacy parameters. Please do not use unless you know what you are doing.
- TableProperties: Access methods to set table properties that influence the generated code (rather than the database table itself).
- CodeGeneration: Set code-generation flags for MDO, HDB (table objects), Udef and Sentry functionality.
- Replication: Set replication/Travel flags. Please note that Travel is no longer being actively developed; do not create new products depending on it.
- HasVisibleFor: Set that records in this table have access control via corresponding records in VisibleFor. See ITableBuilderBase below.
- ModifyField: Modifies the properties of a field definition.
- DeleteField: Delete a field. Be careful, this is a drastic thing to do and breaks backwards compatibility, so you need to be sure no old code is going to run against this database. Any indexes that contain this field will be deleted automatically, both in the database model and in the physical database. Physical indexes will be detected and deleted even if they were made outside of this product, for instance by DBAs and optimizers.
- DeleteIndex: Delete the index that spans the given fields, if it exists. Nothing bad happens if no such index exists.
- AddBool: Add a boolean field, generally represented by a 16-bit integer in the database.
- AddBlob: Add a BLOB (Binary Large Object) field, up to 1GB in size.
- AddDateTime: Add a datetime field. An optional parameter is used to indicate UTC.
- AddDouble: Add a double floating-point field.
- AddEnum: Add an enum field, represented in the database by a 16-bit integer. The enum reference causes strongly typed properties in TableInfo and Row objects. See below for declaring enums.
- AddForeignKey: Add a foreign key field.
- AddId: Add a 32-bit integer field that is to be understood as a row ID. Use AddForeignKey to add foreign keys, and use AddRecordId/AddTableNumber to set up universal relations.
- AddInt: Add a 32-bit signed integer field.
- AddRecordId: Add a record number, a foreign key that forms the second half of a universal relation.
- AddString: Add a string field. See the maxLength parameter for important notes.
- AddTableNumber: Add a table number, represented by a 16-bit integer in the database. The value is understood to be a table number corresponding to a table as defined in the sequence table; such numbers are constant over the lifetime of a database but can vary between databases. The table number forms one half of a universal relation.
- AddUShort: Add a 16-bit unsigned integer field.

Index methods:
- AddIndex: Add an index over one or more columns. A particular column name list (order matters) can only occur once.
- AddFulltextIndex: Add a fulltext index over one or more columns. A table can have at most one fulltext index; later calls to this method simply add fields to any existing fulltext index. Note that fulltext indexing only happens on SQL Server with the feature installed; otherwise no action is taken.
- AddUniqueIndex: Add a unique index over one or more columns. A column name list (order matters) can only occur once.

Field methods:
- SetHash: Set special hash flags for a field; a field can be a member of a row hash, or can be the hash itself.
- SetPrivacy: Set privacy tags, which can be used to ensure compliance with privacy rules.
- SetSearch: Set special search flags on a field.

Macro methods:
- AddMDO: Add four fields that make up the standard start of a list table: name, rank, tooltip, deleted.
- AddRegisteredUpdated: Add five fields: registered, registered_associate_id, updated, updated_associate_id and updatedCount, with all standard options. Used on most tables and populated automatically by the NetServer database layer.

The ITableRemover interface contains only one method, DropTable, which accepts the name of the table to be dropped. This permanently removes the table, so be careful: this is a drastic action that breaks backwards compatibility, and you need to make sure no old code runs against this database. Using the AddEnum method, a field in the database can be connected to an enumeration type in the code. Here it is important to understand that such an enum will exist in two versions. To define the enum so that it can be used in a dictionary step, it needs to be declared in the same assembly as the dictionary step. The enum itself is decorated using the DbEnumAttribute, and each member is decorated with DbEnumMember.
For instance:

[DbEnum("Value for field 'private' in table 'appointment'.", Layer.Core)]
public enum AppointmentPrivate : short
{
    [DbEnumMember("This appointment can be read by anyone")]
    Public = 0,

    [DbEnumMember("This appointment can only be read/seen by the owner")]
    PrivateUser = 1,

    [DbEnumMember("This appointment can only be read by members of the owners group")]
    PrivateGroup = 2,
}

This enum is what we call the model enum. Once the enum xyz is declared, it can be used in an AddEnum<xyz>() method call within the Structure() method of a dictionary step. This will cause the creation of a 16-bit integer field in the database, and code generated for the TableInfo and the Row object will have strongly-typed references to it. However, that generated NetServer code does not reference the enum declared in the dictionary step – it references a generated version of the same enum in the NetServer generated code! The reason for this is simple: the enum in NetServer must be known and available to all NetServer code, whether the dictionary step class is present or not. In a customer installation, the assembly containing the dictionary step is only present in the DevNet Toolbox while the database is being upgraded and is never required to be in the NetServer installations. Thus, a new NetServer-native enum is generated together with the TableInfo and Row code. This one is the NetServer enum. While the model enum lives in the same assembly as the dictionary steps, it is not a member of any particular step and is generally declared directly in the namespace. If it needs to be changed (for instance, adding new values), then just change it, generate new NetServer code from it and release that. Changing an enum does not need a dictionary step, and is in fact not connected to a step. The only connection between steps and enums happens when an AddEnum<> method call declares a new database field to be of an enum kind.
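The core idea, a 16-bit integer column exposed to code as a strongly typed enum, can be illustrated in a few lines. This sketch mirrors the AppointmentPrivate example above in Python rather than C#, purely for illustration:

```python
# Illustrative sketch: the database column stores a small integer, while
# generated code works with a strongly typed enum mapped onto that value.
from enum import IntEnum

class AppointmentPrivate(IntEnum):       # mirrors the model enum above
    Public = 0
    PrivateUser = 1
    PrivateGroup = 2

raw = 2                                  # value as read from the database column
print(AppointmentPrivate(raw).name)      # PrivateGroup
print(int(AppointmentPrivate.Public))    # 0, value as written to the column
```

Round-tripping through the integer value is what keeps the stored data stable even when new enum members are added later.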
Under the covers, a call to CreateTable, DropTable or ModifyTable generates a BuildCommand, which is an objectified representation of the actions they define. For example, CreateTable creates a BuilderAddTableCommand instance, DropTable creates a BuilderRemoveTableCommand instance and ModifyTable creates a BuilderModifyTableCommand instance. BuildCommands are applied towards the database after the Structure method is called. IMP files are a means to populate a table with priming data once the DictionaryStep has completed performing any schema changes defined in the Structure method. The sole purpose of ImpFileNames() is to return a list of file names containing data that Continuous Database will discover and import.

public override List<string> ImpFileNames()
{
    return new List<string> { @"abc.imp" };
}

Dictionary steps can provide IMP files in one of two ways: as physical files on disk, or as embedded resources in the assembly. If using physical files, understand that the resource resolver will begin in the executing directory and then look for a folder called ImpFiles; it expects to find your IMP files inside this folder. That said, embedded resource files are the preferred method, and must be structured in the following way. To ensure your IMP files are embedded as a resource, select the file to display the IMP file properties, then set the Build Action property to Embedded Resource. When compiled, the embedded resource file is structured in the following format:

ProjectName.ImpFiles.DictionaryName._StepNumber.Files.Filename.imp

In a de-compiler, the embedded resource will appear in the assembly's Resources folder. While embedded resource files are the preferred way to supply priming data, it's also possible to discover IMP files that exist in the current directory in which the application applying the step resides. The advantage of embedded resources is easy deployment: there is only one file, the compiled assembly, that is needed, and thus the chance of deployment failures due to missing IMP files is eliminated.
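The naming convention above is mechanical enough to express as a one-line formatter. The sketch below is illustrative only; the project and dictionary names are made-up examples, not SuperOffice identifiers:

```python
# Illustrative sketch of the embedded-resource naming convention:
# ProjectName.ImpFiles.DictionaryName._StepNumber.Files.Filename.imp

def embedded_resource_name(project, dictionary, step_number, filename):
    # Dots replace folder separators when the C# compiler names resources.
    return f"{project}.ImpFiles.{dictionary}._{step_number}.Files.{filename}"

name = embedded_resource_name("Uno", "UnoChat", 1, "I_ChatRoom.imp")
print(name)   # Uno.ImpFiles.UnoChat._1.Files.I_ChatRoom.imp
```

Keeping the folder layout in the project identical to this dotted path is what lets the resource resolver locate the file at upgrade time.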
If you are using Visual Basic, or some .NET language other than C#, be aware that the compiler might not embed resources exactly as shown above. The Visual Basic compiler ignores the folders when naming the embedded resource. Below is an image of a Visual Basic project with an embedded resource using the same folder structure as shown above. Visual Basic project folder structure: When compiled, the Visual Basic compiler writes the resources differently than the C# compiler. Compiled Visual Basic resources: To solve this, if you are using Visual Basic projects to manage your dictionary steps, you must change the filename so that it is written in such a way that Continuous Database can discover it. By changing the name of the file, regardless of the folder structure, the embedded resource will be named accordingly. As seen in the following image, the folder structure doesn't matter and the filename matches what the Continuous Database resource resolver is able to find. As stated earlier, we believe embedded resources are the best way to manage your import files. Tables can be populated with language-specific data from IMP files that are placed in language-specific folders. The folder structure must be the same as before, where IMP files are placed in a folder called Files, only now there must be a language-code-named subfolder where the preferred IMP files reside. The language-specific file structure becomes:

ProjectName.ImpFiles.DictionaryName._StepNumber.Files.LangCode.Filename.imp

Looking at the image with language folders, you can see two language-specific folders with two IMP files in each, one for Danish and one for US English. During execution, the pipeline determines which file is used based on which language was selected during the installation of SuperOffice. During an installation, or an upgrade to SuperOffice 8.1, SuperOffice inserts the preferred language as a record in the ProductVersion table.
The ProductVersion record (columns ownerName, codeName, version and productVersion_id) looks like this: ownerName = SuperOffice, codeName = databaselanguage, version = US. With this information, the underlying database management routine knows which folder the correct language-specific data comes from. When ImpFileNames returns a list of IMP filenames, the preferred language is used to first search for a folder with a matching name that contains a file with that name. Files with a matching name in the language folder take precedence. Therefore, if an IMP file with the same name resides in both the language folder and the root Files folder, only the one in the language folder is used. IMP files are tab-delimited data files used to populate tables with priming data. There are several configuration options that partners can leverage to control priming data in their applications. These files are conceptually broken into two components: a header section and a data section.

Line 1: ;This is a comment line, describe your table, intentions, etc.
Line 2: [UnoRoom]
Line 3: Truncate_Table
Line 4: ;room_id nam creatr Registered regby updated updatedby updated count
Line 5: 0 Room1 0 0 0 0 0 0 0
Line 6: 0 Room2 0 0 0 0 0 0 0
Line 7: 0 Room3
Line 8: 0 Room4 0 0 0 0 0 0 0
Line 9: 0 Room5

The header section contains the table element and optional functions that perform operations such as truncating a table or field. Header elements you are likely to see in IMP files are:

- [TABLENAME]: Table declaration, before all functions and data lines.
- TRUNCATE_TABLE: Removes all rows from a table.
- TRUNCATE_FIELD: Removes specific rows from a table.
- SET_AUTODATEUPDATE_OFF: Turns off setting the date-time on fields named 'registered'.
- SET_BUILTIN: Turns on setting fields named "builtin" to 1.

Except for a comment line, which is a line that begins with a semi-colon, the table name must always be declared first.
Header elements you are likely to use in your own IMP files are the table declaration and the truncate functions; the other functions in the list above are primarily reserved for the default priming data used during SuperOffice installations and upgrades. In cases where you need to truncate an entire table or just certain rows, IMP files support Truncate_Table and Truncate_Field. Both functions apply to the previously declared table. For example:

[abc]
Truncate_Table

Truncate_Table will delete all rows in the table named abc. This directly translates into the SQL statement:

TRUNCATE TABLE 'abc';

When Truncate_Field is used, it only deletes the rows where the criteria match. The format is a tab-delimited line in the IMP file that declares the table on one line, followed by the function and then its parameters:

[Table]
Truncate_Field \t columnName \t ColumnValue

A demonstration of how that looks is seen in this example:

[abc]
truncate_field xyz 2

This translates into the following SQL statement:

DELETE FROM abc WHERE abc.xyz = 2;

Multiple calls to the same function must be specified on new lines:

[Table]
truncate_field columnName 2
truncate_field columnName 10
truncate_field columnName 12

Other truncate functions are used exclusively by SuperOffice:

- TRUNCATE_BUILTIN: DELETE FROM TableName WHERE TableName.isBuiltIn = 1
- TRUNCATE_BUILTIN_FIELD: DELETE FROM TableName WHERE TableName.FieldName = ColumnValue AND TableName.isBuiltIn = 1

Recall the simple IMP file shown earlier containing a table named UnoRoom. The first line is a comment, followed by the table declaration in square brackets. The truncate_table function on Line 3 is an instruction to the priming engine to truncate the existing table prior to importing the data that follows. Line 4 is another comment line that describes the table structure. Lines 5 through 9 are the actual row data that is imported into the UnoRoom table. Row data lines whose first column is 0 instruct the priming engine to automatically generate the sequence id values and insert them.
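The mapping from truncate directives to SQL can be sketched in a few lines. This is an illustrative model of the translation described above, not the SuperOffice priming engine:

```python
# Illustrative sketch: translate the IMP truncate directives into the SQL
# statements given in the text, for a previously declared [Table].

def truncate_sql(table, directive, column=None, value=None):
    d = directive.lower()               # directives are case-insensitive here
    if d == "truncate_table":
        return f"TRUNCATE TABLE '{table}';"
    if d == "truncate_field":
        return f"DELETE FROM {table} WHERE {table}.{column} = {value};"
    raise ValueError(f"unknown directive: {directive}")

print(truncate_sql("abc", "Truncate_Table"))
# TRUNCATE TABLE 'abc';
print(truncate_sql("abc", "truncate_field", "xyz", 2))
# DELETE FROM abc WHERE abc.xyz = 2;
```

Each repeated truncate_field line in an IMP file would simply produce one more DELETE statement against the same table.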
Columns named registered are auto-populated with the current datetime by default; this can be switched off with the SET_AUTODATEUPDATE_OFF function.

While you can let the priming engine assign id values automatically, there may be cases where it's preferable to hardcode the id values instead. In that case, you can simply type the desired id values directly inline. The id values do not have to form an ordered sequence. Also, notice how Lines 7 and 9 of the earlier example contain tab-delimited null values. This is completely legal, and the priming engine will insert default values based on the field data type.

Line 1: ;This is a comment line, describe your table, intentions, etc.
Line 2: [UnoRoom]
Line 3: ;room_id nm creatr Registered regby updated updatedby updated count
Line 4: 1 Room1 0 0 0 0 0 0 0
Line 5: 2 Room2 0 0 0 0 0 0 0
Line 6: 3 Room3 0 0 0 0 0 0 0
Line 7: 4 Room4 0 0 0 0 0 0 0
Line 8: 5 Room5 0 0 0 0 0 0 0

In the simple example above, the column data is imported into the UnoRoom table, and the rows are assigned the id values defined inline. This is useful when you need to reference these rows by id in other IMP files. Below is an example that does just that: it hardcodes the id values defined in the room IMP file above.

Line 1: [UnoGroup]
Line 2: ;group_id nm rm_id Registered regby updated updatedby updated count
Line 3: 0 Grp1 1 0 0 0 0 0 0
Line 4: 0 Grp2 1 0 0 0 0 0 0
Line 5: 0 Grp3 2 0 0 0 0 0 0
Line 6: 0 Grp4 2 0 0 0 0 0 0
Line 7: 0 Grp5 3 0 0 0 0 0 0

There are additional options for handling id references, using variables declared with the pound symbol (#). The following IMP files declare three tables: unogroup, unoroom, and a relations table called unogrouprooms. Both the group and room tables use variables in place of assigned id values, and the grouprooms table then uses the variables to populate the table with their actual values.
Line 1: [UnoGroup]
Line 2: ;group_id name Registered regby updated updatedby updated count
Line 3: #GRP1 Grp1 0 0 0 0 0 0
Line 4: #GRP2 Grp2 0 0 0 0 0 0
Line 5: #GRP3 Grp3 0 0 0 0 0 0
Line 6: #GRP4 Grp4 0 0 0 0 0 0
Line 7: #GRP5 Grp5 0 0 0 0 0 0

Line 1: [UnoRoom]
Line 2: ;rm_id nm creatr Registered regby updated updatedby updated count
Line 3: #RM1 Room1 0 0 0 0 0 0 0
Line 4: #RM2 Room2 0 0 0 0 0 0 0
Line 5: #RM3 Room3 0 0 0 0 0 0 0
Line 6: #RM4 Room4 0 0 0 0 0 0 0
Line 7: #RM5 Room5 0 0 0 0 0 0 0

Line 1: [UnoGroupRooms]
Line 2: ;grouproom_id group_id room_id
Line 3: 0 #GRP1 #RM1
Line 4: 0 #GRP2 #RM2
Line 5: 0 #GRP3 #RM3
Line 6: 0 #GRP4 #RM4
Line 7: 0 #GRP5 #RM5

It's important to note that variables must be declared and resolved before they can be referenced. While primarily intended for referencing primary keys, they can also be used to reference foreign key columns of type int, short, and long.

CustomPriming is the third and final method executed during the DictionaryStep pipeline, and is used to make data transformations that are not otherwise supported. To support complex data transformations, the base class exposes two helper methods for performing SQL actions against the database: ExecuteSql and ExecuteSelect.

While IMP files should be used for the bulk of priming data, CustomPriming is for circumstances where you need to run raw SQL against existing tables. In those cases, ExecuteSql is there to help execute non-select SQL statements.

Let's begin with a simple Insert statement example. In the following code, two datetime variables are declared for use as parameters to the Insert statement.
Then the ExecuteSql method is invoked with two parameters: the SQL statement performing the INSERT, and the anonymous type containing the parameter values.

public override void CustomPriming()
{
    var utcNow = DateTime.UtcNow;
    var never = DateTime.MinValue;

    ExecuteSql(@"INSERT INTO {abc} (
        {abc.abc_id}, {abc.xyz}, {abc.registered}, {abc.registered_associate_id},
        {abc.updated}, {abc.updated_associate_id}, {abc.updatedCount} )
        VALUES ( {@abc_id}, {@xyz}, {@registered}, {@registered_associate_id},
        {@updated}, {@updated_associate_id}, {@updatedCount} )",
        new
        {
            abc_id = 1,
            xyz = "A String",
            registered = utcNow,
            registered_associate_id = 0,
            updated = never,
            updated_associate_id = 0,
            updatedCount = 0
        });
}

Three representations of curly braces are used, to define the table ({abc}), the fields ({abc.fieldName}), and the parameter values ({@parameterName}). Continuous Database will look up the table and field names, do quoting, case conversion, and anything else that might be needed to make valid SQL, and then properly encode the parameters.

Please always use parameterized values. Doing so avoids two major hazards: formatting problems (particularly dates!) and the possibility of SQL injection. As a rule, a dictionary step should never depend on externally supplied values, but even an update from one field to another that does not use parameterization could still expose us to SQL injection from values latent in the database. There is no excuse for SQL injection caused by lack of parameterization.

The helper method GetNextId("tableName") is useful when you want to get a table's next id value from the sequence table.
public override void CustomPriming()
{
    var utcNow = DateTime.UtcNow;
    var never = DateTime.MinValue;
    var nextIdValue = GetNextId("abc");

    ExecuteSql(@"INSERT INTO {abc} (
        {abc.abc_id}, {abc.xyz}, {abc.registered}, {abc.registered_associate_id},
        {abc.updated}, {abc.updated_associate_id}, {abc.updatedCount} )
        VALUES ( {@id}, {@xyz}, {@registered}, 0, {@updated}, 0, 0 )",
        new
        {
            id = nextIdValue,
            xyz = "A String",
            registered = utcNow,
            updated = never
        });
}

Another useful trick is to use $nextId to automatically obtain the next id value from the sequence table:

public override void CustomPriming()
{
    var utcNow = DateTime.UtcNow;
    var never = DateTime.MinValue;

    ExecuteSql(@"INSERT INTO {abc} (
        {abc.abc_id}, {abc.xyz}, {abc.registered}, {abc.registered_associate_id},
        {abc.updated}, {abc.updated_associate_id}, {abc.updatedCount} )
        VALUES ( {abc.$nextId}, {@xyz}, {@registered}, 0, {@updated}, 0, 0 )",
        new
        {
            xyz = "A String",
            registered = utcNow,
            updated = never
        });
}

The following example demonstrates how to use explicit types as parameters. This is a convenient way to bundle all the parameters in a predefined way that can be reused by multiple DictionarySteps. The explicit type can use public or private fields or properties that map to parameters; field and property names are matched case-sensitively. ExplicitParams is a class that contains five fields:

private class ExplicitParams
{
    internal int id;
    string xyz = "A string";
    int zero = 0;
    DateTime utcNow = DateTime.UtcNow;
    DateTime never = DateTime.MinValue;
}

All fields in this example are prepopulated except id, which is set with an object initializer when the class is instantiated, as seen in the following example.

public override void CustomPriming()
{
    var nextIdValue = GetNextId("abc");

    ExecuteSql(@"INSERT INTO {abc} (
        {abc.abc_id}, {abc.xyz}, {abc.registered}, {abc.registered_associate_id},
        {abc.updated}, {abc.updated_associate_id}, {abc.updatedCount} )
        VALUES ( {@id}, {@xyz}, {@utcNow}, 0, {@never}, 0, 0 )",
        new ExplicitParams { id = nextIdValue });
}

Remember that a DictionaryStep assembly should be self-contained, with no external referenced dependencies. Therefore, do not place explicit types for SQL parameters in external libraries.
One more common scenario is when new columns are introduced and the pre-existing data must be migrated or transformed in some way. The following code snippet adds a new field to an existing table. The CustomPriming method then executes an UPDATE statement that transfers the data from the old field into the new field and sets the updated field.

public override void Structure()
{
    ModifyTable("abc")
        .AddString("def", "Description", 100, notNull: false)
        ;
}

public override void CustomPriming()
{
    ExecuteSql(@"UPDATE {abc}
        SET {abc.def} = {abc.xyz}, {abc.updated} = {@now}",
        new { now = DateTime.Now });
}

ExecuteSql is a great way to migrate data when the data is known. However, for scenarios where you don't know the data, or the needed data is in the database, you use ExecuteSelect.

When there is data in the database that needs to be obtained during the DictionaryStep pipeline, ExecuteSelect is there to execute a query and return the result in a DataTable. ExecuteSelect accepts two parameters: the SQL statement to execute, and optional parameters. It returns a standard DataTable object that is disconnected from the database.

A useful place for ExecuteSelect is in the ImpFileNames method, to first check whether priming data already exists, and only prime the table, by returning the name of the IMP file, when it doesn't contain any data.

public override List<string> ImpFileNames()
{
    var abcData = ExecuteSelect(@"SELECT {abc.abc_id} FROM {abc}");
    if (abcData.Rows.Count == 0)
    {
        return new List<string> { @"abc.imp" };
    }
    else
        return new List<string>();
}

The formatting of the SQL statements must use the same structure as with the ExecuteSql method. When query criteria parameters are needed, use the second parameter to pass in an explicit or anonymous type with the fields or properties that contain the values.
var sql = @"SELECT {abc.abc_id} FROM {abc} WHERE {abc.abc_id} = {@criteria}";
var abcData = ExecuteSelect(sql, new { criteria = 123 });

That's all there is to it!

SuperOffice DevNet provides tooling to help create a DictionaryStep from an existing third-party table. Please refer to the DevNet Toolbox for more information.

We have covered a conceptual overview of what SuperOffice Continuous Database is, what DictionarySteps are and how they are used, and then dived down into all the methods available in the DictionaryStep pipeline and base class.

Evolutionary database design is the way forward. The implementation developed, tested, and used by SuperOffice in its own environment has proven to be a very useful and powerful tool for ensuring database integrity, flexibility, and longevity. We believe that once you begin to leverage it, you will be impressed and assured that SuperOffice Continuous Database is the correct decision and direction for living in an agile world.
Hi everyone, I am having a small problem as follows, please help me: implement, using LLVM/Clang, a tool that counts the number of memory operations executed in a given function (recording reads and writes of fields and array elements) of C programs. Here is an example:

    int* test(int* b, int* c)
    {
        ...
        for (int i=0; i<10; i++) {
            a[i] = b[i] + c[i];
        }
        return a;
    }

After execution of the whole program, the output of the instrumentation should return something like this for each method: "function test: reads = 20; writes = 10". Thank you very much!

Are you interested in counting access to any objects or only particular objects? If only particular objects: you can use C++ to create a class behaving like your array, but override operators to count usage. The rest of your program could be kept in C as it is, usually without modifications.

> you can use C++ to create a class behaving
> like your array, but override operators to count usage.

That's nothing to do with instrumentation: instrumenting code does not require changing the sources.

... ok, but it would answer his question "how many reads/writes?"

> ok, but it would answer his question "how many reads/writes?"

But this was not the question :-) Apart from that, the number of reads/writes can be determined by just looking at the function: "function test: reads = 20; writes = 10". The question is: how to change Clang so that it can determine the number of reads/writes for any input (program), either by statically analyzing the input or by instrumenting the generated code without changing its semantics. In general, you don't even know the input to clang as you are writing a specific analysis or transformation; the given source is just sample code to explain the issue w.r.t. changing clang. Static analysis alone will not help you:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int a[5] = { 0, };  // does this count as a "write" as well?
                        // :-)
    int b[5] = { 1, 2, 3, 4, 5 };
    int c[5] = { 6, 7, 8, 9, 0 };

    int main()
    {
       int i, n;
       time_t t;

       /* Initialize random number generator */
       srand((unsigned) time(&t));

       /* pick a number */
       n = rand() % 5;

       /* fill a - partially */
       for( i = 0 ; i < n ; i++ ) {
          a[i] = b[i] + c[i];  // <-- now, how many times is this executed?
       }

       return(0);
    }
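The operator-overloading idea suggested above can be sketched in Python rather than C++ (names and data here are made up; this is a stand-in for a counting wrapper class, not LLVM instrumentation): indexing operators are overridden so every element read and write is tallied.

```python
class CountingArray:
    """Array-like wrapper that counts element reads and writes."""
    def __init__(self, data):
        self.data = list(data)
        self.reads = 0
        self.writes = 0

    def __getitem__(self, i):   # every a[i] read goes through here
        self.reads += 1
        return self.data[i]

    def __setitem__(self, i, v):  # every a[i] = ... write goes through here
        self.writes += 1
        self.data[i] = v

# mirror the original example: a[i] = b[i] + c[i] over 10 iterations
a = CountingArray([0] * 10)
b = CountingArray(range(10))
c = CountingArray(range(10))
for i in range(10):
    a[i] = b[i] + c[i]

print(b.reads + c.reads, a.writes)  # 20 10
```

Each loop iteration performs two reads (one from b, one from c) and one write to a, reproducing the "reads = 20; writes = 10" count asked for, without any compiler changes.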
13 September 2011 20:32 [Source: ICIS news]

LONDON (ICIS)--Solvay has decided to “squeeze out” remaining stakeholders who did not tender shares or convertible bonds as part of its takeover of Rhodia, the Belgian chemicals producer said on Tuesday.

Solvay said the respective Rhodia shareholders will receive a cash payment of €31.60/share, corresponding to the price Solvay paid for shares that were tendered. Meanwhile, the remaining holders of Rhodia’s convertible bonds (known as OCEANEs) will be squeezed out at a price of €52.35/OCEANE, Solvay said.

On 31 August, Solvay announced that Rhodia shareholders and bondholders had overwhelmingly accepted its €3.4bn ($4.6bn) offer from April for the French specialty chemicals firm.

Rhodia shares and the OCEANEs will be delisted from trading on French financial markets on 16 September, the date on which the squeeze-out will be implemented, Solvay said.

($1 = €0.73)
Odoo Help

Inherit Method Get Code and Name with 1 Field [Closed]

I have a field kd_provinsi, and I want the value of this field to show both the code and the name from fed.state, for example: [BDG] Bandung.

My code.py:

'kd_provinsi': fields.many2one('res.country.state', 'Province', domain="[('name','<>','code')]", required=True),

Can I inherit a method in res.country.state? I have used this method, but it doesn't work:

def prov_get(self, cr, uid, ids, context=None):
    if context is None:
        context = {}
    res = []
    for record in self.browse(cr, uid, ids, context=context):
        tit = "[%s] %s" % (record.name, record.code)
        res.append((record.id, tit))
    return res

Anyone, please help me.

The name_get ORM method needs to be used:

def name_get(self, cr, uid, ids, context=None):
    if not ids:
        return []
    if isinstance(ids, (int, long)):
        ids = [ids]
    reads = self.read(cr, uid, ids, ['name', 'code'], context=context)
    res = []
    for record in reads:
        name = record['name']
        if record['code']:
            name = record['code'] + ' ' + name
        res.append((record['id'], name))
    return res

Also, check this question:
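For illustration only (plain Python outside Odoo, with made-up records), the "[CODE] Name" display format the question asks for amounts to building a list of (id, display string) pairs, which is exactly the shape name_get returns:

```python
def display_name(code, name):
    # hypothetical helper mimicking a name_get-style override:
    # prepend the bracketed code when one exists
    return "[%s] %s" % (code, name) if code else name

# made-up stand-ins for res.country.state rows: (id, code, name)
records = [(1, "BDG", "Bandung"), (2, None, "Jakarta")]
res = [(rid, display_name(code, name)) for rid, code, name in records]
print(res)  # [(1, '[BDG] Bandung'), (2, 'Jakarta')]
```

Note the asker's prov_get formats "[%s] %s" % (record.name, record.code), which would print the name inside the brackets; swapping the arguments, as above, yields the desired "[BDG] Bandung".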
CodeMirror Spell Checker

Spell checking so simple, you can set it up in 60 seconds. It will highlight any misspelled words in light red. Works great in conjunction with other CodeMirror modes, like GitHub Flavored Markdown. Inspired by fantastic work done by Wes Cossick. It works only in Electron apps since it depends on Node.js.

Install

npm install @inkdropapp/codemirror-spell-checker --save

Quick start

Once CodeMirror is installed and loaded, first provide CodeMirror Spell Checker with the correct CodeMirror function. Then, just set the primary mode to "spell-checker" and the backdrop mode to your desired mode. Be sure to load/require overlay.min.js if you haven't already.

import SpellChecker from '@inkdropapp/codemirror-spell-checker'
SpellChecker(CodeMirror)

CodeMirror.fromTextArea(document.getElementById("textarea"), {
  mode: "spell-checker",
  backdrop: "gfm", // Your desired mode
  spellCheckLang: "en_US"
});

That's it!

Other languages

Some languages are tested and bundled in the data directory. Other languages are supported by downloading the dictionary files from titoBouzout/Dictionaries. To use another language instead of en_US, just provide a language name listed in the above repository, like so:

CodeMirror.fromTextArea(document.getElementById("textarea"), {
  mode: "spell-checker",
  backdrop: "gfm",
  spellCheckLang: "en_AU"
});

Customizing

You can customize the misspelled word appearance by updating the CSS. All misspelled words will have the .cm-spell-error class.

.CodeMirror .cm-spell-error {
  /* Your styling here */
}
Good Day Learners! In our previous tutorial, we discussed the Python unittest module. Today we will look into Python socket programming by example. We will create Python socket server and client applications.

Python Socket Programming

To understand Python socket programming, we need to know about three topics: the socket server, the socket client, and the socket itself.

So, what is a server? Well, a server is software that waits for client requests and serves or processes them accordingly. A client, on the other hand, is the requester of this service. A client program requests some resources from the server, and the server responds to that request.

A socket is the endpoint of a bidirectional communications channel between server and client. Sockets may communicate within a process, between processes on the same machine, or between processes on different machines. For any communication with a remote program, we have to connect through a socket port.

The main objective of this socket programming tutorial is to introduce how the socket server and client communicate with each other. You will also learn how to write a Python socket server program.

Python Socket Example

We have said earlier that a socket client requests some resources from the socket server, and the server responds to that request. So we will design both the server and client model so that each can communicate with the other. The steps can be summarized like this:

- The Python socket server program executes first and waits for any request.
- The Python socket client program initiates the conversation.
- The server program then responds to client requests accordingly.
- The client program terminates if the user enters a "bye" message. The server program also terminates when the client program does; this is optional, and we can keep the server program running indefinitely or terminate it with some specific command in a client request.
See the below Python socket server example code; the comments will help you understand it.

import socket


def server_program():
    # get the hostname
    host = socket.gethostname()
    port = 5000  # initiate port no above 1024

    server_socket = socket.socket()  # get instance
    # look closely. The bind() function takes tuple as argument
    server_socket.bind((host, port))  # bind host address and port together

    # configure how many client the server can listen simultaneously
    server_socket.listen(2)
    conn, address = server_socket.accept()  # accept new connection
    print("Connection from: " + str(address))
    while True:
        # receive data stream. it won't accept data packet greater than 1024 bytes
        data = conn.recv(1024).decode()
        if not data:
            # if data is not received break
            break
        print("from connected user: " + str(data))
        data = input(' -> ')
        conn.send(data.encode())  # send data to the client

    conn.close()  # close the connection


if __name__ == '__main__':
    server_program()

So our Python socket server is running on port 5000 and it will wait for client requests. If you want the server not to quit when the client connection is closed, just remove the if condition and the break statement. A Python while loop is used to run the server program indefinitely and keep waiting for client requests.

Python Socket Client

We will save the Python socket client program as socket_client.py. This program is similar to the server program, except for the binding. The main difference is that the server program needs to bind the host address and port address together. See the below Python socket client example code; the comments will help you understand it.
import socket


def client_program():
    host = socket.gethostname()  # as both code is running on same pc
    port = 5000  # socket server port number

    client_socket = socket.socket()  # instantiate
    client_socket.connect((host, port))  # connect to the server

    message = input(" -> ")  # take input

    while message.lower().strip() != 'bye':
        client_socket.send(message.encode())  # send message
        data = client_socket.recv(1024).decode()  # receive response

        print('Received from server: ' + data)  # show in terminal

        message = input(" -> ")  # again take input

    client_socket.close()  # close the connection


if __name__ == '__main__':
    client_program()

Python Socket Programming Output

To see the output, first run the socket server program, then run the socket client program. After that, type something from the client program, then write a reply from the server program. Finally, type "bye" from the client program to terminate both programs. The short video below shows how it worked on my test run of the socket server and client example programs.
It works at localhost nicely… Thank you friend File “C:/Users/hp/AppData/Local/Programs/Python/Python37/socket_client.py”, line 25, in client_program() File “C:/Users/hp/AppData/Local/Programs/Python/Python37/socket_client.py”, line 9, in client_program client_socket.connect((host, port)) # connect to the server ConnectionRefusedError: [WinError 10061] No connection could be made because the target machine actively refused it I am getting this error how do we chat with different persons should we give client program to them and ask to run. and i would host the server in my pc offcourse USE in server program host=’ ‘ and in client program host=”ip address of server machine” host = socket.gethostname() AttributeError: ‘module’ object has no attribute ‘gethostname’ im getting this error.. your server code is wrong you have to use different names to send the data data = conn.recv(1024).decode() if not data: # if data is not received break break print(“from connected user: ” + str(data)) message = input(‘ -> ‘) #use different name at the place of data conn.send(message.encode()) # send data to the client Good Job Great little example, thanks!
POSIX::RT::SharedMem version 0.08

use POSIX::RT::SharedMem qw/shared_open/;
shared_open my $map, '/some_file', '>+', size => 1024, perms => oct(777);

Map the shared memory object $name into $map. For portable use, a shared memory object should be identified by a name of the form '/somename'; that is, a string consisting of an initial slash, followed by one or more characters, none of which are slashes. $mode determines the read/write mode; it works the same as in open and map_file. Beyond that, it can take four named arguments:

size: Determines the size of the map. If the map has write permissions and the file is smaller than the given size, it will be lengthened. Defaults to the length of the file, and fails if that is zero. It is mandatory when using mode > or +>.

perms: Determines the permissions with which the file is created (if $mode is '+>'). Default is 0600.

offset: Determines the offset in the file that is mapped. Default is 0.

flags: Extra flags that are used when opening the shared memory object (e.g. O_EXCL).

It returns a filehandle that can be used with stat, chmod, and chown. You should not assume you can read or write directly from it.

Remove the shared memory object $name from the namespace. Note that while the shared memory object can't be opened anymore after this, its contents are not removed until all processes have closed it.

Leon Timmermans <leont@cpan.org>

This software is copyright (c) 2010 by Leon Timmermans. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.
2.2. Linear Algebra¶

Now that you can store and manipulate data, let's briefly review the subset of basic linear algebra that you will need to understand most of the models. We will introduce all the basic concepts, the corresponding mathematical notation, and their realization in code, all in one place. If you are already confident in your basic linear algebra, feel free to skim through or skip this chapter.

In [1]:

from mxnet import nd

2.2.1. Scalars¶

If you never studied linear algebra or machine learning, you are probably used to working with one number at a time, and you know how to do basic things like add them together or multiply them. For example, in Palo Alto, the temperature is \(52\) degrees Fahrenheit. Formally, we call these values \(scalars\). If you wanted to convert this value to Celsius (using the metric system's more sensible unit of temperature measurement), you would evaluate the expression \(c = (f - 32) * 5/9\), setting \(f\) to \(52\). In this equation, each of the terms \(32\), \(5\), and \(9\) is a scalar value. The placeholders \(c\) and \(f\) that we use are called variables and they represent unknown scalar values.

In mathematical notation, we represent scalars with ordinary lower-cased letters (\(x\), \(y\), \(z\)). We also denote the space of all scalars as \(\mathcal{R}\). For expedience, we are going to punt a bit on what precisely a space is, but for now, remember that if you want to say that \(x\) is a scalar, you can simply say \(x \in \mathcal{R}\). The symbol \(\in\) can be pronounced "in" and just denotes membership in a set.

In MXNet, we work with scalars by creating NDArrays with just one element. In this snippet, we instantiate two scalars and perform some familiar arithmetic operations with them, such as addition, multiplication, division and exponentiation.

In [2]:

x = nd.array([3.0])
y = nd.array([2.0])
print('x + y = ', x + y)
print('x * y = ', x * y)
print('x / y = ', x / y)
print('x ** y = ', nd.power(x,y))

x + y =  [5.]
<NDArray 1 @cpu(0)>
x * y =  [6.]
<NDArray 1 @cpu(0)>
x / y =  [1.5]
<NDArray 1 @cpu(0)>
x ** y =  [9.]
<NDArray 1 @cpu(0)>

We can convert any NDArray to a Python float by calling its asscalar method. Note that this is typically a bad idea. While you are doing this, NDArray has to stop doing anything else in order to hand the result and the process control back to Python. And unfortunately, Python is not very good at doing things in parallel. So avoid sprinkling this operation liberally throughout your code or your networks will take a long time to train.

In [3]:

x.asscalar()

Out[3]:

3.0

2.2.2. Vectors¶

You can think of a vector as simply a list of numbers, for example [1.0, 3.0, 4.0, 2.0]. Each of the numbers in the vector consists of a single scalar value. We call these values the entries or components of the vector. Often, we are interested in vectors whose values hold some real-world significance. For example, if we are studying the risk that loans default, we might associate each applicant with a vector whose components correspond to their income, length of employment, number of previous defaults, etc. If we were studying the risk of heart attacks hospital patients potentially face, we might represent each patient with a vector whose components capture their most recent vital signs, cholesterol levels, minutes of exercise per day, etc. In math notation, we will usually denote vectors as bold-faced, lower-cased letters (\(\mathbf{u}\), \(\mathbf{v}\), \(\mathbf{w})\). In MXNet, we work with vectors via 1D NDArrays with an arbitrary number of components.

In [4]:

x = nd.arange(4)
print('x = ', x)

x =  [0. 1. 2. 3.]
<NDArray 4 @cpu(0)>

We can refer to any element of a vector by using a subscript. For example, we can refer to the \(4\)th element of \(\mathbf{u}\) by \(u_4\). Note that the element \(u_4\) is a scalar, so we do not bold-face the font when referring to it. In code, we access any element \(i\) by indexing into the NDArray.

In [5]:

x[3]

Out[5]:

[3.]
<NDArray 1 @cpu(0)>

2.2.3. Length, dimensionality and shape¶

Let's revisit some concepts from the previous section. Every vector has a length; if we want to say that a vector \(\mathbf{x}\) consists of \(n\) real-valued scalars, we can express that as \(\mathbf{x} \in \mathcal{R}^n\). The length of a vector is commonly called its \(dimension\). As with an ordinary Python array, we can access the length of an NDArray by calling Python's in-built len() function. We can also access a vector's length via its .shape attribute. The shape is a tuple that lists the dimensionality of the NDArray along each of its axes. Because a vector can only be indexed along one axis, its shape has just one element.

In [6]:

x.shape

Out[6]:

(4,)

Note that the word dimension is overloaded and this tends to confuse people. Some use the dimensionality of a vector to refer to its length (the number of components). However, some use the word dimensionality to refer to the number of axes that an array has. In this sense, a scalar would have \(0\) dimensions and a vector would have \(1\) dimension. To avoid confusion, when we say 2D array or 3D array, we mean an array with 2 or 3 axes respectively. But if we say :math:`n`-dimensional vector, we mean a vector of length :math:`n`.

In [7]:

a = 2
x = nd.array([1,2,3])
y = nd.array([10,20,30])
print(a * x)
print(a * x + y)

[2. 4. 6.]
<NDArray 3 @cpu(0)>

[12. 24. 36.]
<NDArray 3 @cpu(0)>

2.2.4. Matrices¶

Just as vectors generalize scalars from order \(0\) to order \(1\), matrices generalize vectors from \(1D\) to \(2D\). Matrices, which we'll typically denote with capital letters (\(A\), \(B\), \(C\)), are represented in code as arrays with 2 axes. Visually, we can draw a matrix as a table, where each entry \(a_{ij}\) belongs to the \(i\)-th row and \(j\)-th column. We can create a matrix with \(n\) rows and \(m\) columns in MXNet by specifying a shape with two components (n,m) when calling any of our favorite functions for instantiating an ndarray, such as ones or zeros.

In [8]:

A = nd.arange(20).reshape((5,4))
print(A)

[[ 0.  1.  2.  3.]
 [ 4.  5.  6.  7.]
 [ 8.  9. 10. 11.]
 [12. 13. 14. 15.]
 [16. 17. 18.
19.]]
<NDArray 5x4 @cpu(0)>

Matrices are useful data structures: they allow us to organize data that has different modalities of variation. For example, rows in our matrix might correspond to different patients, while columns might correspond to different attributes.

We can access the scalar elements \(a_{ij}\) of a matrix \(A\) by specifying the indices for the row (\(i\)) and column (\(j\)) respectively. Leaving them blank via a : takes all elements along the respective dimension (as seen in the previous section). We can transpose the matrix through T. That is, if \(B = A^T\), then \(b_{ij} = a_{ji}\) for any \(i\) and \(j\).

In [9]:

print(A.T)

[[ 0.  4.  8. 12. 16.]
 [ 1.  5.  9. 13. 17.]
 [ 2.  6. 10. 14. 18.]
 [ 3.  7. 11. 15. 19.]]
<NDArray 4x5 @cpu(0)>

2.2.5. Tensors¶

Just as vectors generalize scalars, and matrices generalize vectors, we can actually build data structures with even more axes. Tensors give us a generic way of discussing arrays with an arbitrary number of axes. Vectors, for example, are first-order tensors, and matrices are second-order tensors. Using tensors will become more important when we start working with images, which arrive as 3D data structures, with axes corresponding to the height, width, and the three (RGB) color channels. But in this chapter, we're going to skip this part and make sure you know the basics.

In [10]:

X = nd.arange(24).reshape((2, 3, 4))
print('X.shape =', X.shape)
print('X =', X)

X.shape = (2, 3, 4)
X =
[[[ 0.  1.  2.  3.]
  [ 4.  5.  6.  7.]
  [ 8.  9. 10. 11.]]

 [[12. 13. 14. 15.]
  [16. 17. 18. 19.]
  [20. 21. 22. 23.]]]
<NDArray 2x3x4 @cpu(0)>

2.2.6. Basic properties of tensor arithmetic¶

Scalars, vectors, matrices, and tensors of any order have some nice properties that we will often rely on. For example, as you might have noticed from the definition of an element-wise operation, given operands with the same shape, the result of any element-wise operation is a tensor of that same shape.
Another convenient property is that for all tensors, multiplication by a scalar produces a tensor of the same shape. In math, given two tensors \(X\) and \(Y\) with the same shape, \(\alpha X + Y\) has the same shape (numerical mathematicians call this the AXPY operation). In [11]: a = 2 x = nd.ones(3) y = nd.zeros(3) print(x.shape) print(y.shape) print((a * x).shape) print((a * x + y).shape) (3,) (3,) (3,) (3,) Shape is not the only property preserved under addition and multiplication by a scalar. These operations also preserve membership in a vector space. But we will postpone this discussion for the second half of this chapter because it is not critical to getting your first models up and running. 2.2.7. Sums and means¶ The next more sophisticated thing we can do with arbitrary tensors is to calculate the sum of their elements. In mathematical notation, we express sums using the \(\sum\) symbol. To express the sum of the elements in a vector \(\mathbf{u}\) of length \(d\), we can write \(\sum_{i=1}^d u_i\). In code, we can just call nd.sum(). In [12]: print(x) print(nd.sum(x)) [1. 1. 1.] <NDArray 3 @cpu(0)> [3.] <NDArray 1 @cpu(0)> We can similarly express sums over the elements of tensors of arbitrary shape. For example, the sum of the elements of an \(m \times n\) matrix \(A\) could be written \(\sum_{i=1}^{m} \sum_{j=1}^{n} a_{ij}\). In [13]: print(A) print(nd.sum(A)) [[ 0. 1. 2. 3.] [ 4. 5. 6. 7.] [ 8. 9. 10. 11.] [12. 13. 14. 15.] [16. 17. 18. 19.]] <NDArray 5x4 @cpu(0)> [190.] <NDArray 1 @cpu(0)> A related quantity is the mean, which is also called the average. We calculate the mean by dividing the sum by the total number of elements. With mathematical notation, we could write the average over a vector \(\mathbf{u}\) as \(\frac{1}{d} \sum_{i=1}^{d} u_i\) and the average over a matrix \(A\) as \(\frac{1}{n \cdot m} \sum_{i=1}^{m} \sum_{j=1}^{n} a_{ij}\).
In code, we could just call nd.mean() on tensors of arbitrary shape: In [14]: print(nd.mean(A)) print(nd.sum(A) / A.size) [9.5] <NDArray 1 @cpu(0)> [9.5] <NDArray 1 @cpu(0)> 2.2.8. Dot products¶ So far, we have only performed element-wise operations, sums and averages. And if this was all we could do, linear algebra probably would not deserve its own chapter. However, one of the most fundamental operations is the dot product. Given two vectors \(\mathbf{u}\) and \(\mathbf{v}\), the dot product \(\mathbf{u}^T \mathbf{v}\) is a sum over the products of the corresponding elements: \(\mathbf{u}^T \mathbf{v} = \sum_{i=1}^{d} u_i \cdot v_i\). In [15]: x = nd.arange(4) y = nd.ones(4) print(x, y, nd.dot(x, y)) [0. 1. 2. 3.] <NDArray 4 @cpu(0)> [1. 1. 1. 1.] <NDArray 4 @cpu(0)> [6.] <NDArray 1 @cpu(0)> Note that we can express the dot product of two vectors nd.dot(x, y) equivalently by performing an element-wise multiplication and then a sum: In [16]: nd.sum(x * y) Out[16]: [6.] <NDArray 1 @cpu(0)> Dot products are useful in a wide range of contexts. For example, given a set of weights \(\mathbf{w}\), the weighted sum of some values \({u}\) could be expressed as the dot product \(\mathbf{u}^T \mathbf{w}\). When the weights are non-negative and sum to one \(\left(\sum_{i=1}^{d} {w_i} = 1\right)\), the dot product expresses a weighted average. When two vectors each have length one (we will discuss what length means below in the section on norms), dot products can also capture the cosine of the angle between them. 2.2.9. Matrix-vector products¶ Now that we know how to calculate dot products we can begin to understand matrix-vector products. Let’s start off by visualizing a matrix \(A\) and a column vector \(\mathbf{x}\). We can visualize the matrix in terms of its row vectors where each \(\mathbf{a}^T_{i} \in \mathbb{R}^{m}\) is a row vector representing the \(i\)-th row of the matrix \(A\). 
Then the matrix vector product \(\mathbf{y} = A\mathbf{x}\) is simply a column vector \(\mathbf{y} \in \mathbb{R}^n\) where each entry \(y_i\) is the dot product \(\mathbf{a}^T_i \mathbf{x}\). So you can think of multiplication by a matrix \(A\in \mathbb{R}^{n \times m}\) as a transformation that projects vectors from \(\mathbb{R}^{m}\) to \(\mathbb{R}^{n}\). These transformations turn out to be remarkably useful. For example, we can represent rotations as multiplications by a square matrix. As we will see in subsequent chapters, we can also use matrix-vector products to describe the calculations of each layer in a neural network. Expressing matrix-vector products in code with ndarray, we use the same nd.dot() function as for dot products. When we call nd.dot(A, x) with a matrix A and a vector x, MXNet knows to perform a matrix-vector product. Note that the column dimension of A must be the same as the dimension of x. In [17]: nd.dot(A, x) Out[17]: [ 14. 38. 62. 86. 110.] <NDArray 5 @cpu(0)> 2.2.10. Matrix-matrix multiplication¶ If you have gotten the hang of dot products and matrix-vector multiplication, then matrix-matrix multiplications should be pretty straightforward. Say we have two matrices, \(A \in \mathbb{R}^{n \times k}\) and \(B \in \mathbb{R}^{k \times m}\): To produce the matrix product \(C = AB\), it’s easiest to think of \(A\) in terms of its row vectors and \(B\) in terms of its column vectors: Note here that each row vector \(\mathbf{a}^T_{i}\) lies in \(\mathbb{R}^k\) and that each column vector \(\mathbf{b}_j\) also lies in \(\mathbb{R}^k\). Then to produce the matrix product \(C \in \mathbb{R}^{n \times m}\) we simply compute each entry \(c_{ij}\) as the dot product \(\mathbf{a}^T_i \mathbf{b}_j\). You can think of the matrix-matrix multiplication \(AB\) as simply performing \(m\) matrix-vector products and stitching the results together to form an \(n \times m\) matrix. 
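Before reaching for nd.dot(), the entry-by-entry recipe above can be spelled out in plain Python. This is my own sketch (not the book's code), reusing the same \(A\) = arange(20).reshape(5, 4) and \(x = [0, 1, 2, 3]\) from the examples above:

```python
def matmul(A, B):
    """C[i][j] is the dot product of row i of A with column j of B."""
    n, k, m = len(A), len(B), len(B[0])
    return [[sum(A[i][p] * B[p][j] for p in range(k)) for j in range(m)]
            for i in range(n)]

A = [[4 * i + j for j in range(4)] for i in range(5)]  # arange(20).reshape(5, 4)
x = [0, 1, 2, 3]

# A matrix-vector product is the special case where x is treated as a 4x1 matrix.
y = [row[0] for row in matmul(A, [[v] for v in x])]
print(y)  # [14, 38, 62, 86, 110], matching the nd.dot(A, x) output above
```

The triple loop makes the cost visible: computing \(C \in \mathbb{R}^{n \times m}\) takes \(n \cdot m\) dot products of length \(k\).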
Just as with ordinary dot products and matrix-vector products, we can compute matrix-matrix products in MXNet by using nd.dot(). In [18]: B = nd.ones(shape=(4, 3)) nd.dot(A, B) Out[18]: [[ 6. 6. 6.] [22. 22. 22.] [38. 38. 38.] [54. 54. 54.] [70. 70. 70.]] <NDArray 5x3 @cpu(0)> 2.2.11. Norms¶ Before we can start implementing models, there is one last concept we are going to introduce. Some of the most useful operators in linear algebra are norms. Informally, they tell us how big a vector or matrix is. We represent norms with the notation \(\|\cdot\|\). The \(\cdot\) in this expression is just a placeholder. For example, we would represent the norm of a vector \(\mathbf{x}\) or matrix \(A\) as \(\|\mathbf{x}\|\) or \(\|A\|\), respectively. All norms must satisfy a handful of properties: - \(\|\alpha A\| = |\alpha| \|A\|\) - \(\|A + B\| \leq \|A\| + \|B\|\) - \(\|A\| \geq 0\) - If \(\forall {i,j}, a_{ij} = 0\), then \(\|A\|=0\) To put it in words, the first rule says that if we scale all the components of a matrix or vector by a constant factor \(\alpha\), its norm also scales by the absolute value of the same constant factor. The second rule is the familiar triangle inequality. The third rule simply says that the norm must be non-negative. That makes sense, in most contexts the smallest size for anything is 0. The final rule basically says that the smallest norm is achieved by a matrix or vector consisting of all zeros. It is possible to define a norm that gives zero norm to nonzero matrices, but you cannot give nonzero norm to zero matrices. That may seem like a mouthful, but if you digest it then you probably have grepped the important concepts here. If you remember Euclidean distances (think Pythagoras’ theorem) from grade school, then non-negativity and the triangle inequality might ring a bell. You might notice that norms sound a lot like measures of distance. In fact, the Euclidean distance \(\sqrt{x_1^2 + \cdots + x_n^2}\) is a norm. 
Specifically it is the \(\ell_2\)-norm. An analogous computation, performed over the entries of a matrix, e.g. \(\sqrt{\sum_{i,j} a_{ij}^2}\), is called the Frobenius norm. More often, in machine learning we work with the squared \(\ell_2\) norm (notated \(\ell_2^2\)). We also commonly work with the \(\ell_1\) norm. The \(\ell_1\) norm is simply the sum of the absolute values. It has the convenient property of placing less emphasis on outliers. To calculate the \(\ell_2\) norm, we can just call nd.norm(). In [19]: nd.norm(x) Out[19]: [3.7416573] <NDArray 1 @cpu(0)> To calculate the L1-norm we can simply perform the absolute value and then sum over the elements. In [20]: nd.sum(nd.abs(x)) Out[20]: [6.] <NDArray 1 @cpu(0)> 2.2.12. Norms and objectives¶ While we do not want to get too far ahead of ourselves, we do want you to anticipate why these concepts are useful. In machine learning, objectives, perhaps the most important component of a machine learning algorithm (besides the data itself), are often expressed as norms. 2.2.13. Intermediate linear algebra¶ If you have made it this far, and understand everything that we have covered, then honestly, you are ready to begin modeling. If you are feeling antsy, this is a perfectly reasonable place to move on. You already know nearly all of the linear algebra required to implement a number of practically useful models, and you can always circle back when you want to learn more. But there is a lot more to linear algebra, even as concerns machine learning. At some point, if you plan to make a career in machine learning, you will need to know more than what we have covered so far. In the rest of this chapter, we introduce some useful, more advanced concepts. 2.2.13.1. Basic vector properties¶ Vectors are useful beyond being data structures to carry numbers. In addition to reading and writing values to the components of a vector, and performing some useful mathematical operations, we can analyze vectors in some interesting ways.
One important concept is the notion of a vector space. Here are the conditions that make a vector space: - Additive axioms (we assume that x,y,z are all vectors): \(x+y = y+x\) and \((x+y)+z = x+(y+z)\) and \(0+x = x+0 = x\) and \((-x) + x = x + (-x) = 0\). - Multiplicative axioms (we assume that x is a vector and a, b are scalars): \(0 \cdot x = 0\) and \(1 \cdot x = x\) and \((a b) x = a (b x)\). - Distributive axioms (we assume that x and y are vectors and a, b are scalars): \(a(x+y) = ax + ay\) and \((a+b)x = ax +bx\). 2.2.13.2. Special matrices¶ There are a number of special matrices that we will use throughout this tutorial. Let’s look at them in a bit of detail: - Symmetric Matrix These are matrices where the entries below and above the diagonal are the same. In other words, we have that \(M^\top = M\). An example of such matrices are those that describe pairwise distances, i.e. \(M_{ij} = \|x_i - x_j\|\). Likewise, the Facebook friendship graph can be written as a symmetric matrix where \(M_{ij} = 1\) if \(i\) and \(j\) are friends and \(M_{ij} = 0\) if they are not. Note that the Twitter graph is asymmetric - \(M_{ij} = 1\), i.e. \(i\) following \(j\) does not imply that \(M_{ji} = 1\), i.e. \(j\) following \(i\). - Antisymmetric Matrix These matrices satisfy \(M^\top = -M\). Note that any square matrix can always be decomposed into a symmetric and into an antisymmetric matrix by using \(M = \frac{1}{2}(M + M^\top) + \frac{1}{2}(M - M^\top)\). - Diagonally Dominant Matrix These are matrices where the off-diagonal elements are small relative to the main diagonal elements. In particular we have that \(M_{ii} \geq \sum_{j \neq i} M_{ij}\) and \(M_{ii} \geq \sum_{j \neq i} M_{ji}\). If a matrix has this property, we can often approximate \(M\) by its diagonal. This is often expressed as \(\mathrm{diag}(M)\). - Positive Definite Matrix These are matrices that have the nice property where \(x^\top M x > 0\) whenever \(x \neq 0\). 
Intuitively, they are a generalization of the squared norm of a vector \(\|x\|^2 = x^\top x\). It is easy to check that whenever \(M = A^\top A\), this holds, since then \(x^\top M x = x^\top A^\top A x = \|A x\|^2\). There is a somewhat more profound theorem which states that all positive definite matrices can be written in this form. 2.2.14. Summary¶ In just a few pages (or one Jupyter notebook) we have taught you all the linear algebra you will need to understand a good chunk of neural networks. Of course there is a lot more to linear algebra, and a lot of that math is useful for machine learning, but since we will introduce the rest much later on, we will wrap up this chapter here. If you are eager to learn more about linear algebra, here are some of our favorite resources on the topic - For a solid primer on basics, check out Gilbert Strang’s book Introduction to Linear Algebra - Zico Kolter’s Linear Algebra Review and Reference
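As a closing exercise, the symmetric/antisymmetric decomposition \(M = \frac{1}{2}(M + M^\top) + \frac{1}{2}(M - M^\top)\) from the special-matrices list above can be verified numerically. This is my own pure-Python check, not part of the original notebook:

```python
# Verify M = (M + M^T)/2 + (M - M^T)/2 for a small 3x3 example,
# and that the two parts are symmetric and antisymmetric respectively.
M = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]

def transpose(X):
    return [list(col) for col in zip(*X)]

T = transpose(M)
sym = [[(M[i][j] + T[i][j]) / 2 for j in range(3)] for i in range(3)]
anti = [[(M[i][j] - T[i][j]) / 2 for j in range(3)] for i in range(3)]

assert sym == transpose(sym)                                   # symmetric part
assert anti == [[-v for v in row] for row in transpose(anti)]  # antisymmetric part
recomposed = [[sym[i][j] + anti[i][j] for j in range(3)] for i in range(3)]
assert recomposed == [[float(v) for v in row] for row in M]
print("decomposition verified")
```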
https://d2l.ai/chapter_crashcourse/linear-algebra.html
CC-MAIN-2019-18
refinedweb
3,737
64.91
Ever wondered about a quick way to tell what some document is focusing on? What is its main topic? Let me give you this simple trick. List the unique words mentioned in the document, and then check how many times each word has been mentioned (its frequency). This would give you an indication of what the document is mainly about. But that wouldn't work easily by hand, so we need some automated process, don't we? Yes, an automated process will make this much easier. Based on the frequency of words, let's guess which of my tutorials this text was extracted from. Let the game begin! Regular Expressions Since we are going to apply a pattern in our game, we need to use regular expressions (regex). If "regular expressions" is a new term to you, you can find a nice definition on Wikipedia. If you want to know more about regular expressions before moving ahead with this tutorial, you can see my other tutorial Regular Expressions In Python, and come back again to continue this tutorial. Building the Program Let's work step by step on building this game. The first thing we want to do is to store the text file in a string variable. document_text = open('test.txt', 'r') text_string = document_text.read() Now, in order to make applying our regular expression easier, let's turn all the letters in our document into lower case letters, using the lower() function, as follows: text_string = document_text.read().lower() Let's write a regular expression that returns all the words with a number of characters in the range [3-15]. Starting from 3 will help in avoiding words that we may not be interested in counting, like if, of, in, etc., and words with a length larger than 15 are unlikely to be correct words. The regular expression for such a pattern looks as follows: \b[a-z]{3,15}\b \b is related to the word boundary. For more information on the word boundary, you can check this tutorial.
The above regular expression can be written as follows: match_pattern = re.search(r'\b[a-z]{3,15}\b', text_string) Since we want to walk through multiple words in the document, rather than stop at the first match, we can use the findall function instead: match_pattern = re.findall(r'\b[a-z]{3,15}\b', text_string) At this point, we want to find the frequency of each word in the document. The suitable concept to use here is Python's dictionaries, since we need key-value pairs, where the key is the word and the value represents the number of times the word appeared in the document. Assuming we have declared an empty dictionary frequency = { }, the above paragraph would look as follows: for word in match_pattern: count = frequency.get(word,0) frequency[word] = count + 1 We can now see our keys using: frequency_list = frequency.keys() Finally, in order to get the word and its frequency (number of times it appeared in the text file), we can do the following: for words in frequency_list: print(words, frequency[words]) Let's put the program together in the next section, and see what the output looks like. Putting It All Together Having discussed the program step by step, let's now see how the program looks:

import re

frequency = {}
document_text = open('test.txt', 'r')
text_string = document_text.read().lower()
match_pattern = re.findall(r'\b[a-z]{3,15}\b', text_string)

for word in match_pattern:
    count = frequency.get(word, 0)
    frequency[word] = count + 1

frequency_list = frequency.keys()

for words in frequency_list:
    print(words, frequency[words])

If you run the program, you should get something like the following: Let's come back to our game. Going through the word frequencies, what do you think the test file (with content from my other Python tutorial) was talking about? (Hint: check the word with the maximum frequency).
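As a side note that is not part of the original tutorial: Python's standard library can build the same frequency table in a couple of lines with collections.Counter. Shown here on an inline string, since I do not have the tutorial's test.txt:

```python
import re
from collections import Counter

text = "Python is fun and Python is readable".lower()
# Same pattern as above: words of 3 to 15 lowercase letters.
words = re.findall(r'\b[a-z]{3,15}\b', text)
frequency = Counter(words)

print(frequency.most_common(1))  # [('python', 2)]
print('is' in frequency)         # False -- two-letter words are filtered out
```

Counter.most_common() also answers the game's hint question directly, returning words sorted from highest to lowest frequency.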
https://code.tutsplus.com/tutorials/counting-word-frequency-in-a-file-using-python--cms-25965
CC-MAIN-2020-16
refinedweb
606
61.97
Get even more from the Asset Store Greetings from the Asset Store team! We had a great time at our booth at Unite Boston, meeting so many of you and demonstrating some of the coolest packages our publishers have to offer. We also launched three new features that will make the Asset Store experience more efficient and helpful for everyone. Big improvements for search You can now search in the store by combining filters, including price, last update date, and size of package. You can also search within categories. Watch this quick intro video or just go try it for yourself. New notification system Notifications keep you updated on your favorite products. You can modify your notification settings at any time, by logging in to your account (top right corner). You will be notified when: - A package you bought or downloaded is updated to a new version - A publisher replies to one of your reviews - A package on your wish list goes on sale Baring it all We made our Asset Store roadmap public! You can now see what features are coming up for both publishers and customers in the next 6-9 months, and where we are investing effort and research beyond that time frame. Finally, a big thanks to our publishers who showed off their awesome wares at Unite Boston: Stephan Bouchard Fungus Quantum Theory Entertainment Invert Game Studios Cinema Suite RUST LTD Opsive Persistant (PopCornFx) ProCore LaneMax Happy developing! 25 Comments. Comments are closed. Hannes, October 5, 2015 at 11:23 pm: Great changes! Really improves usability. Would be nice if we could also choose to get our notifications as e-mails, and the possibility to subscribe to get daily e-mails about the 24 hour sale. I’ve already missed many deals and sales of wishlist items that I would have been interested in. As a side note, I would also love to be able to subscribe to get Unity Blog posts as e-mails.
Hannes, October 6, 2015 at 12:22 am: Forgot to say: Would also be nice if we could see the resolutions of textures when browsing the Package Contents before buying an asset. koblavi, October 6, 2015 at 10:58 am: +1 FOR SUBSCRIBING TO BLOG POSTS laurent, October 3, 2015 at 6:48 pm: I don’t think filtering by price is a good idea, it will level quality to the bottom. I would not have bought what I purchased, hitting only freebies. BitBarrel, October 4, 2015 at 9:29 am: I don’t think assets are filtered by price by default, that is up to the user whether they want to use that or not. Personally I think it can be useful, especially if you want to find high quality stuff. Just filter with highest price on top. That is assuming that you get what you pay for, of course. Jack, October 7, 2015 at 11:24 pm: I often will buy assets at $5 or less that I know I am very unlikely to use after sorting from lowest to highest price, just because I like the assets. That is a suggestion that would hurt asset store sales. BitBarrel, October 1, 2015 at 7:09 am: It would also be nice if you update the Asset Store submission guidelines page with all undocumented requirements, such as the need to show your prior credentials. BitBarrel, October 1, 2015 at 7:05 am: Next phase would be to staff up. I mean, waiting for more than one month to get an Asset Store submission reviewed is a bit slow. Also you never reply to emails. Jack, September 30, 2015 at 6:12 pm: What I’d really, really like is for assets that had any scripting to be encapsulated in the selling kiosk's own namespace, preferably a namespace assigned to the kiosk on creation of their account. e.g. Phase 1: I create a Unity Asset store under the name of goat. Unity then assigns my kiosk the name of goat. Phase 2: I upload a free asset under the kiosk name of goat. Therefore Unity requires all art and code in the asset to be under a folder by the name of goat.
It requires all code under that folder to be encapsulated in the namespace goat.* It is optional but desirable for the asset creator, in this case goat, to further organize all the assets they might give away or sell in their goat folder logically, to help the end user effectively utilize those assets. Phase 2a: Optional, but it probably makes further sense for each asset *.unitypackage offered in the asset store to be kept in the Unity folder // because we know that mostly doesn’t matter to the Unity editor or compilers, it is just being helpful to the end users. Phase 3: People stop being frustrated by assets defining a class and so on multiple times and breaking compilation. They stop being frustrated trying to find where that asset is located. Jack, September 30, 2015 at 6:18 pm: It deleted part of my comment because of greater/less than redirection symbols, but anyway Phase 2a: keep individual unitypackages in their own folder under the kiosk owner’s name: Assets/kioskname/unitypackagename Maybe all that is already planned. Seneral, September 30, 2015 at 6:01 pm: Very cool, just remove the notification when an item on the wishlist is on sale when we already own it;) I tend to leave my assets on the wish list when I finally buy them… Seneral, September 30, 2015 at 6:51 pm: Additionally, the Package contents window is kinda useless when packages get bigger… Maybe make it resizable and make it load everything at once, or even better add the ability to collapse folders, so we can get a feel for the folder structure:) thomask, October 2, 2015 at 3:46 pm: Assets are removed from the wish list when purchased. Seneral, October 3, 2015 at 8:08 am: Such a case made me write this comment, after all;) Then maybe it’s a single case for me… Michael, October 16, 2015 at 6:25 am: I love it. Dejan, September 30, 2015 at 12:03 am: Would be nice to have a filter for the supported platforms, too. Makkar, September 29, 2015 at 3:03 pm: I already bought a whole lot of assets, mainly on the daily deals.
I don’t know all of them by heart, I mean I don’t know every feature they have. So I have many times thought that searching for keywords among my bought assets would be fine to check if I already have an asset for my needs. Also, it would be nice to see on the asset listing page which assets are already purchased; perhaps a small icon in the corner would be enough. AAK, September 29, 2015 at 2:46 pm: what about supporting download from outside Unity (by using a download accelerator) and support for download resume … Cayl, September 30, 2015 at 3:59 pm: I second you on those ones. It seems from the roadmap that support for download resume and pause is coming in H1 of 2016, as well as a filter for supported platforms. But unfortunately nothing on direct download from the browser. Matteo, September 29, 2015 at 2:20 pm: Will buyers be notified about the possibility to upgrade the asset? What about upgrades already set up? Will there be some notifications about their availability? thomask, October 2, 2015 at 3:39 pm: Currently customers will not be notified of a paid major version upgrade. It is on our list to do. I doubt that we will notify about existing upgrades. It might be part of allowing publishers to push out notifications, which is on the road map. Cem, September 29, 2015 at 2:19 pm: Please, please, pleeeeeease sort the reviews by most recent to oldest! thomask, October 2, 2015 at 3:40 pm: The reviews are already sorted by most recent to oldest. First the 3 most helpful reviews are shown sorted by helpfulness. Choose “Show all reviews” and all reviews are shown sorted by age. Chris O'Shea, September 29, 2015 at 2:16 pm: Great work on the search. I’d love it if there was a way to see with your project which Assets you have installed and what versions of those assets. One way to do it could be a settings file, so if you import via the asset store it logs that asset and the version into a file.
Obviously if people buy assets/plugins outside of the store this won’t work, but you wouldn’t mind :)
https://blogs.unity3d.com/pt/2015/09/29/unite-2/
CC-MAIN-2019-26
refinedweb
1,439
67.28
Diving deeper into the Java transient modifier Last week I published an article to help you understand how references work in Java. It was well received, and I got a lot of constructive feedback. That is why I love the software community. Today I want to present another article, diving into a feature that is not widely used: the transient modifier. Personally, when I started using it I recall I was able to quickly grasp the theoretical aspect, although applying it was a different matter. Let’s take a closer look. The transient modifier A transient modifier applied to a field tells Java that this attribute should be excluded when the object is serialized. When the object is deserialized, the field will be initialized with its default value (typically null for a reference type, or zero/false for a primitive type). You will agree with me: the theory is quite easy, but we initially fail to see the practical aspect. Where should we apply a transient modifier? When will it be useful? It is hard to come up with a practical example unless you have used it before. Like a dog chasing its tail, we fail to find a use case and therefore we cannot apply practice to the theory. My intention with this article is to help you break this vicious cycle. Let’s check a few practical examples. Think of a User object. Among its properties, this User contains login, email and password. When the data is serialized and transmitted over the network, we can think of a few security reasons why we would not want to send the password field along with the rest of the object. In this case, marking the field as transient solves this security problem. How would this look in code?
@Data
@NoArgsConstructor
@AllArgsConstructor
public class User implements Serializable {
    private static final long serialVersionUID = 123456789L;
    private String login;
    private String email;
    private transient String password;

    public void printInfo() {
        System.out.println("Login is: " + login);
        System.out.println("Email is: " + email);
        System.out.println("Password is: " + password);
    }
}
Note that this object implements the interface Serializable, which is compulsory when you intend to serialize an object. If this interface is not implemented, you will receive a NotSerializableException. Note as well the declared field serialVersionUID. If you use any of the major Development Environments or Eclipse, it will generally be created automatically. If you now serialize and then deserialize an object of type User, the password field will be null afterwards, since it has been marked as transient. See the annotations @Data, @NoArgsConstructor and @AllArgsConstructor? They are provided by Lombok, a Java library that makes things easier. Although in 2016 Lombok is not as useful as it was before (now languages like Kotlin generate setters and getters automatically, and you can do it with two clicks in any major Development Environment and Eclipse), I still like to use it in certain domains to keep a clean collection of Domain Models. There is another use case I can think of for the transient modifier: when one field is derived from another. In that case, we can make our code more efficient by making the derived field transient.
Let’s take a look at this piece of code:
@Data
@NoArgsConstructor
@AllArgsConstructor
public class GalleryImage implements Serializable {
    private static final long serialVersionUID = 123456789L;
    private Image image;
    private transient Image thumbnailImage;

    private void generateThumbnail() {
        // This method will derive the thumbnail from the main image
    }

    private void readObject(ObjectInputStream inputStream) throws IOException, ClassNotFoundException {
        inputStream.defaultReadObject();
        generateThumbnail();
    }
}
In this case, the class contains a main image and a thumbnailImage field. The latter is derived from the former. Making thumbnailImage transient makes our code more efficient: a field that can be derived from another one is not transmitted when the object is serialized; instead, it is regenerated after deserialization in readObject(). A minor point at the end of the article: as you can imagine, marking a field both transient and static does not make much sense, because static fields belong to the class rather than to any object and are not serialized in the first place. Summary: use transient when an object contains sensitive data that you do not want to transmit, or when it contains data that you can derive from other elements. static fields are never serialized. I write my thoughts about Software Engineering and life in general in my Twitter account. If you have liked this article or it did help you, feel free to share and/or leave a comment. This is the currency that fuels amateur writers.
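To see the transient behavior end to end, here is a minimal round-trip sketch. It is my own illustration, written in plain Java without Lombok; the class is simplified and its fields are left package-private only to keep the example short:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class TransientDemo {
    static class User implements Serializable {
        private static final long serialVersionUID = 1L;
        String login;
        transient String password;

        User(String login, String password) {
            this.login = login;
            this.password = password;
        }
    }

    public static void main(String[] args) throws Exception {
        User original = new User("alice", "s3cret");

        // Serialize the user to an in-memory byte stream.
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(original);
        }

        // Deserialize it again: the transient field comes back as null.
        User restored;
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            restored = (User) in.readObject();
        }
        System.out.println("login=" + restored.login
                + " password=" + restored.password);
    }
}
```

Running it prints login=alice password=null: the non-transient field survives the round trip while the transient one is reset to its default.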
https://medium.com/google-developer-experts/diving-deeper-into-the-java-transient-modifier-3b16eff68f42
CC-MAIN-2017-09
refinedweb
774
53.61
In this quick tip tutorial, you'll learn how to use application logging support in your Android applications for diagnostic purposes. This quick tip shows you the steps to incorporate logging support into your applications and then use the LogCat logging utility to monitor your application’s log output, either on the emulator or a device that is attached to Eclipse via the debugger. This skill is invaluable for debugging issues, even when great debuggers are available for stepping through code. Step 1: Create an Android Application Begin by creating an Android project. Implement your Android application as normal. Once you've set up your Android project, you are ready to proceed with this quick tip. Step 2: Logging Options for Android Applications The Android SDK includes a useful logging utility class called android.util.Log. Logging messages are categorized by severity (and verbosity), with errors being the most severe, then warnings, informational messages, debug messages and verbose messages being the least severe. Each type of logging message has its own method. Simply call the method and a log message is created. The message types, and their related method calls are: - The Log.e() method is used to log errors. - The Log.w() method is used to log warnings. - The Log.i() method is used to log informational messages. - The Log.d() method is used to log debug messages. - The Log.v() method is used to log verbose messages. - The Log.wtf() method is used to log terrible failures that should never happen. ("WTF" stands for "What a Terrible Failure!" of course.) The first parameter of each Log method is a string called a tag. It’s common practice to define the tag as a constant in your class. You will often find that the tag is defined as the class in which the Log statement occurs. This is a reasonable convention, but anything identifiable or useful to you will do. Now anytime you use a Log method, you supply this tag.
An informational logging message might look like this: Log.i(TAG, "I am logging something informational!"); You can also pass a Throwable object, usually an Exception, that will allow the Log to print a stack trace or other useful information. try { // ... } catch (Exception exception) { Log.e(TAG, "Received an exception", exception); } NOTE: Calling the Log.wtf() method will always print a stack trace and may cause the process to end with an error message. It is really intended only for extreme errors. For standard logging of exceptions, we recommend using the Log.e() method. The Log.wtf() method is available only in Android 2.2 or later. The rest are available in all versions of Android. Step 3: Adding Log Support to an Activity Class Now let’s add some logging to your Activity class. First, add the appropriate import statement for the logging class android.util.Log. Next, declare a logging tag for use within your class (or whole application); in this case, we call this variable DEBUG_TAG. Finally, add logging method calls wherever you want logging output. For example, you might add an informational log message within the onCreate() method of your Activity class. Below is some sample code that illustrates how all these steps come together:
package com.mamlambo.simpleapp;

import android.app.Activity;
import android.os.Bundle;
import android.util.Log;

public class MySimpleAppActivity extends Activity {
    private static final String DEBUG_TAG = "MySimpleAppLogging";

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
        Log.i(DEBUG_TAG, "Info about MySimpleAppActivity.");
    }
}
Step 4: Monitoring Application Log Output – The Easy Way You can use the LogCat utility from within Eclipse to view the log output from an emulator or device. LogCat is integrated into Eclipse using the Android Development plug-in. You’ll find the LogCat panel integrated into the DDMS and Debug perspectives of Eclipse.
Step 5: Monitoring Application Log Output – Creating Log Filters

As you can see, the basic LogCat logging output includes log data from lots of different sources. For debugging purposes, it can be useful to filter the output down to only the tags for your specific application. This way, you can focus on your application's log output amidst all the clutter.

You can use the LogCat utility from within Eclipse to filter your log messages to the tag string you supplied for your application. To add a new filter, click the green plus sign button in the LogCat pane of Eclipse. Name the filter (perhaps using the tag name) and fill in the tag you want to filter on. A new tab is created in LogCat that shows only log messages that contain this tag. You can also create filters that display items by severity level.

Performance Considerations with Logging

Logging output degrades application performance; excessive use can result in a noticeably slower application. At minimum, debug and verbose logging should be used only for development purposes and removed before release. It's also a good idea to review any other logging output before publication as well.

Conclusion

Logging is a very handy debugging and diagnostic technique used by developers. Use the logging class provided as part of the Android SDK to log important information about your application to LogCat, but make sure you review your application's logging implementation prior to publication, as logging has performance drawbacks.

The authors can be reached at [email protected], via their blog at androidbook.blogspot.com, and on Twitter @androidwireless.
http://code.tutsplus.com/tutorials/logcat_android-sdk--mobile-4578
Hi, this is my first post. I am having a problem with loops. I am reading through South-Western's Introduction to C++, 3rd ed. In chapter 6, activity 6-2, it tells you to make a program that asks for a series of integers one at a time, stopping when you enter 0. When you enter 0, the program takes the numbers you put in and finds the average, the largest number you put in, the smallest number you put in, and the range of the numbers you put in. I got it to do all of that, but for some reason it won't print to the screen. I think I just have my statements in the wrong order or something. I'm using Dev-C++ and here is my source:

Code:
#include <iostream>
using namespace std;

int main()
{
    int i, lar, sma, non;
    float av;
    av=0.0;
    non=0;
    sma=2147483647;
    lar=-2147483648;
    do
    {
        cin >> i;
        cout << "You Entered " << i << endl;
        av=av+i;
        non+=1;
        if(i>lar) {lar=i;}
        if(i<sma) {sma=i;}
        if(i=0)
        {
            cout << "Average = " << av/non << endl;
            cout << "Largest = " << lar << endl;
            cout << "Smallest = " << sma << endl;
            cout << "Range = " << lar-sma << endl;
            break;
        }
    } while(1);
    return 0;
}

Can anyone help me out? I've been messing around with it for an hour and decided to ask for help.
http://cboard.cprogramming.com/cplusplus-programming/84496-do-while-loops.html
@{ Html.Syncfusion().Accordion("myAccordion") .TargetContentId("AccordionContents").Render(); }

I am a programmer who has to deliver, and I am looking for tools that help me become more productive instead of costing me hours to find out how to use them. So please put some effort into good documentation; the documentation that is available now is of very poor quality.

Hi Ruurd,

Thanks for contacting Syncfusion support. We have updated incident 114617, which was created by you, with a response to this reported query. We kindly request that you follow that incident for further assistance.

Thanks,
Gurunathan

Hi Robbie,

Sorry for the late reply. We have already updated the response for the ApplicationController issue in the corresponding incident. For your information, please find the details below:

We have inherited the ApplicationController for internal purposes in the sample browser. We suggest you use the following code snippet to inherit the controller:

<code>
public class HomeController : Controller
{
}
</code>

Please let us know if you require further assistance on this.

Thanks,
Gurunathan
https://www.syncfusion.com/forums/113970/examples-are-not-working
RFC 2748 - The COPS (Common Open Policy Service) Protocol

Network Working Group                                     D. Durham, Ed.
Request for Comments: 2748                                         Intel
Category: Standards Track                                      J. Boyle
                                                                Level 3
                                                               R. Cohen
                                                                  Cisco
                                                              S. Herzog
                                                              IPHighway
                                                               R. Rajan
                                                                   AT&T
                                                              A. Sastry
                                                                  Cisco
                                                           January 2000

The COPS (Common Open Policy Service) Protocol

Table of Contents

1. Introduction
1.1 Basic Model
2. The Protocol
2.1 Common Header
2.2 COPS Specific Object Formats
2.2.1 Handle Object (Handle)
2.2.2 Context Object (Context)
2.2.3 In-Interface Object (IN-Int)
2.2.4 Out-Interface Object (OUT-Int)
2.2.5 Reason Object (Reason)
2.2.6 Decision Object (Decision)
2.2.7 LPDP Decision Object (LPDPDecision)
2.2.8 Error Object (Error)
2.2.9 Client Specific Information Object (ClientSI)
2.2.10 Keep-Alive Timer Object (KATimer)
2.2.11 PEP Identification Object (PEPID)
2.2.12 Report-Type Object (Report-Type)
2.2.13 PDP Redirect Address (PDPRedirAddr)
2.2.14 Last PDP Address (LastPDPAddr)
2.2.15 Accounting Timer Object (AcctTimer)
2.2.16 Message Integrity Object (Integrity)
2.3 Communication
2.4 Client Handle Usage
2.5 Synchronization Behavior
3. Message Content
3.1 Request (REQ) PEP -> PDP
3.2 Decision (DEC) PDP -> PEP
3.3 Report State (RPT) PEP -> PDP
3.4 Delete Request State (DRQ) PEP -> PDP
3.5 Synchronize State Request (SSQ) PDP -> PEP
3.6 Client-Open (OPN) PEP -> PDP
3.7 Client-Accept (CAT) PDP -> PEP
3.8 Client-Close (CC) PEP -> PDP, PDP -> PEP
3.9 Keep-Alive (KA) PEP -> PDP, PDP -> PEP
3.10 Synchronize State Complete (SSC) PEP -> PDP
4. Common Operation
4.1 Security and Sequence Number Negotiation
4.2 Key Maintenance
4.3 PEP Initialization
4.4 Outsourcing Operations
4.5 Configuration Operations
4.6 Keep-Alive Operations
4.7 PEP/PDP Close
5. Security Considerations
6. IANA Considerations
7. References
8. Author Information and Acknowledgments
9. Full Copyright Statement

1. Introduction

This document describes a simple query and response protocol that can be used to exchange policy information between a policy server (Policy Decision Point or PDP) and its clients (Policy Enforcement Points or PEPs). One example of a policy client is an RSVP router that must exercise policy-based admission control over RSVP usage [RSVP]. We assume that at least one policy server exists in each controlled administrative domain. The basic model of interaction between a policy server and its clients is compatible with the framework document for policy based admission control [WRK].

A chief objective of this policy control protocol is to begin with a simple but extensible design. The main characteristics of the COPS protocol include:

1. The protocol employs a client/server model where the PEP sends requests, updates, and deletes to the remote PDP and the PDP returns decisions back to the PEP.

2. The protocol uses TCP as its transport protocol for reliable exchange of messages between policy clients and a server. Therefore, no additional mechanisms are necessary for reliable communication between a server and its clients.

3. The protocol is extensible in that it is designed to leverage off self-identifying objects and can support diverse client specific information without requiring modifications to the COPS protocol itself. The protocol was created for the general administration, configuration, and enforcement of policies.

4. COPS provides message level security for authentication, replay protection, and message integrity. COPS can also reuse existing protocols for security such as IPSEC [IPSEC] or TLS to authenticate and secure the channel between the PEP and the PDP.

5. The protocol is stateful in two main aspects: (1) Request/Decision state is shared between client and server and (2) State from various events (Request/Decision pairs) may be inter-associated.
By (1) we mean that requests from the client PEP are installed or remembered by the remote PDP until they are explicitly deleted by the PEP. At the same time, Decisions from the remote PDP can be generated asynchronously at any time for a currently installed request state. By (2) we mean that the server may respond to new queries differently because of previously installed Request/Decision state(s) that are related.

6. Additionally, the protocol is stateful in that it allows the server to push configuration information to the client, and then allows the server to remove such state from the client when it is no longer applicable.

1.1 Basic Model

              +----------------+
              |                |
              |  Network Node  |              Policy Server
              |                |
              |   +-----+      |     COPS      +-----+
              |   | PEP |<-----|-------------->| PDP |
              |   +-----+      |               +-----+
              |      ^         |
              |      |         |
              |      \-->+-----+
              |          | LPDP|
              |          +-----+
              |                |
              +----------------+

              Figure 1: A COPS illustration.

Figure 1 illustrates the layout of various policy components in a typical COPS example (taken from [WRK]). Here, COPS is used to communicate policy information between a Policy Enforcement Point (PEP) and a remote Policy Decision Point (PDP) within the context of a particular type of client. The optional Local Policy Decision Point (LPDP) can be used by the device to make local policy decisions in the absence of a PDP.

It is assumed that each participating policy client is functionally consistent with a PEP [WRK]. The PEP may communicate with a policy server (herein referred to as a remote PDP [WRK]) to obtain policy decisions or directives.

The PEP is responsible for initiating a persistent TCP connection to a PDP. The PEP uses this TCP connection to send requests to and receive decisions from the remote PDP. Communication between the PEP and remote PDP is mainly in the form of a stateful request/decision exchange, though the remote PDP may occasionally send unsolicited decisions to the PEP to force changes in previously approved request states.
The PEP also has the capacity to report to the remote PDP that it has successfully completed performing the PDP's decision locally, useful for accounting and monitoring purposes. The PEP is responsible for notifying the PDP when a request state has changed on the PEP. Finally, the PEP is responsible for the deletion of any state that is no longer applicable due to events at the client or decisions issued by the server. When the PEP sends a configuration request, it expects the PDP to continuously send named units of configuration data to the PEP via decision messages as applicable for the configuration request. When a unit of named configuration data is successfully installed on the PEP, the PEP should send a report message to the PDP confirming the installation. The server may then update or remove the named configuration information via a new decision message. When the PDP sends a decision to remove named configuration data from the PEP, the PEP will delete the specified configuration and send a report message to the PDP as confirmation. The policy protocol is designed to communicate self-identifying objects which contain the data necessary for identifying request states, establishing the context for a request, identifying the type of request, referencing previously installed requests, relaying policy decisions, reporting errors, providing message integrity, and transferring client specific/namespace information. To distinguish between different kinds of clients, the type of client is identified in each message. Different types of clients may have different client specific data and may require different kinds of policy decisions. It is expected that each new client-type will have a corresponding usage draft specifying the specifics of its interaction with this policy protocol. The context of each request corresponds to the type of event that triggered it. 
The COPS Context object identifies the type of request and message (if applicable) that triggered a policy event via its message type and request type fields. COPS identifies three types of outsourcing events: (1) the arrival of an incoming message (2) allocation of local resources, and (3) the forwarding of an outgoing message. Each of these events may require different decisions to be made. The content of a COPS request/decision message depends on the context. A fourth type of request is useful for types of clients that wish to receive configuration information from the PDP. This allows a PEP to issue a configuration request for a specific named device or module that requires configuration information to be installed. The PEP may also have the capability to make a local policy decision via its Local Policy Decision Point (LPDP) [WRK], however, the PDP remains the authoritative decision point at all times. This means that the relevant local decision information must be relayed to the PDP. That is, the PDP must be granted access to all relevant information to make a final policy decision. To facilitate this functionality, the PEP must send its local decision information to the remote PDP via an LPDP decision object. The PEP must then abide by the PDP's decision as it is absolute. Finally, fault tolerance is a required capability for this protocol, particularly due to the fact it is associated with the security and service management of distributed network devices. Fault tolerance can be achieved by having both the PEP and remote PDP constantly verify their connection to each other via keep-alive messages. When a failure is detected, the PEP must try to reconnect to the remote PDP or attempt to connect to a backup/alternative PDP. While disconnected, the PEP should revert to making local decisions. 
Once a connection is reestablished, the PEP is expected to notify the PDP of any deleted state or new events that passed local admission control after the connection was lost. Additionally, the remote PDP may request that all the PEP's internal state be resynchronized (all previously installed requests are to be reissued). After failure and before the new connection is fully functional, disruption of service can be minimized if the PEP caches previously communicated decisions and continues to use them for some limited amount of time. Sections 2.3 and 2.5 detail COPS mechanisms for achieving reliability.

2. The Protocol

This section describes the message formats and objects exchanged between the PEP and remote PDP.

2.1 Common Header

Each COPS message consists of the COPS header followed by a number of typed objects.

            0              1              2              3
     +--------------+--------------+--------------+--------------+
     |Version| Flags|    Op Code   |         Client-type         |
     +--------------+--------------+--------------+--------------+
     |                       Message Length                      |
     +--------------+--------------+--------------+--------------+

     Global note: //// implies field is reserved, set to 0.

The fields in the header are:

Version: 4 bits
    COPS version number. Current version is 1.

Flags: 4 bits
    Defined flag values (all other flags MUST be set to 0):
    0x1 Solicited Message Flag Bit
        This flag is set when the message is solicited by another
        COPS message. This flag is NOT to be set (value=0) unless
        otherwise specified in section 3.

Op Code: 8 bits
    The COPS operations:
    1 = Request                 (REQ)
    2 = Decision                (DEC)
    3 = Report State            (RPT)
    4 = Delete Request State    (DRQ)
    5 = Synchronize State Req   (SSQ)
    6 = Client-Open             (OPN)
    7 = Client-Accept           (CAT)
    8 = Client-Close            (CC)
    9 = Keep-Alive              (KA)
    10= Synchronize Complete    (SSC)

Client-type: 16 bits
    The Client-type identifies the policy client. Interpretation of
    all encapsulated objects is relative to the client-type.

Message Length: 32 bits
    Size of message in octets, which includes the standard COPS
    header and all encapsulated objects. Messages MUST be aligned on
    4 octet intervals.
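As a sketch of the common header layout described above (the struct and function names are mine, not part of the RFC), the fixed 8-octet header can be decoded from network byte order like this:

```cpp
#include <cstdint>

// Decoded COPS common header fields (struct name is illustrative).
struct CopsHeader {
    uint8_t  version;       // high nibble of octet 0
    uint8_t  flags;         // low nibble of octet 0
    uint8_t  opCode;        // octet 1
    uint16_t clientType;    // octets 2-3, network byte order
    uint32_t messageLength; // octets 4-7, in octets, including this header
};

// Parse the fixed 8-octet COPS common header from a raw buffer.
CopsHeader parseCopsHeader(const uint8_t buf[8])
{
    CopsHeader h;
    h.version    = buf[0] >> 4;
    h.flags      = buf[0] & 0x0F;
    h.opCode     = buf[1];
    h.clientType = static_cast<uint16_t>((uint16_t(buf[2]) << 8) | buf[3]);
    h.messageLength = (uint32_t(buf[4]) << 24) | (uint32_t(buf[5]) << 16)
                    | (uint32_t(buf[6]) << 8)  |  uint32_t(buf[7]);
    return h;
}
```

For example, the octets 0x11 0x01 0x80 0x01 0x00 0x00 0x00 0x10 decode to version 1, the Solicited Message flag set, op code 1 (Request), an enterprise-range client-type of 0x8001, and a 16-octet message.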
2.2 COPS Specific Object Formats

All the objects follow the same object format; each object consists of one or more 32-bit words with a four-octet header, using the following format:

           0             1              2             3
    +-------------+-------------+-------------+-------------+
    |       Length (octets)     |    C-Num    |   C-Type    |
    +-------------+-------------+-------------+-------------+
    |                                                       |
    //                  (Object contents)                  //
    |                                                       |
    +-------------+-------------+-------------+-------------+

The length is a two-octet value that describes the number of octets (including the header) that compose the object. If the length in octets does not fall on a 32-bit word boundary, padding MUST be added to the end of the object so that it is aligned to the next 32-bit boundary before the object can be sent on the wire. On the receiving side, a subsequent object boundary can be found by rounding up the previous stated object length to the next 32-bit boundary.

Typically, C-Num identifies the class of information contained in the object, and the C-Type identifies the subtype or version of the information contained in the object.

C-num: 8 bits
    1  = Handle
    2  = Context
    3  = In Interface
    4  = Out Interface
    5  = Reason code
    6  = Decision
    7  = LPDP Decision
    8  = Error
    9  = Client Specific Info
    10 = Keep-Alive Timer
    11 = PEP Identification
    12 = Report Type
    13 = PDP Redirect Address
    14 = Last PDP Address
    15 = Accounting Timer
    16 = Message Integrity

C-type: 8 bits
    Values defined per C-num.

2.2.1 Handle Object (Handle)

The Handle Object encapsulates a unique value that identifies an installed state. This identification is used by most COPS operations. A state corresponding to a handle MUST be explicitly deleted when it is no longer applicable. See Section 2.4 for details.

C-Num = 1

C-Type = 1, Client Handle.

Variable-length field, no implied format other than it is unique from other client handles from the same PEP (a.k.a. COPS TCP connection) for a particular client-type. It is always initially chosen by the PEP and then deleted by the PEP when no longer applicable. The client handle is used to refer to a request state initiated by a particular PEP and installed at the PDP for a client-type. A PEP will specify a client handle in its Request messages, Report messages and Delete messages sent to the PDP. In all cases, the client handle is used to uniquely identify a particular PEP's request for a client-type.

The client handle value is set by the PEP and is opaque to the PDP.
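Because every COPS object is padded to a 32-bit boundary, walking the objects in a message amounts to rounding each stated object length up to the next multiple of 4. A minimal sketch (the function name is mine):

```cpp
#include <cstdint>

// Round a COPS object's stated length up to the next 32-bit boundary.
// The result is the offset from the start of this object to the next
// object in the message.
uint32_t paddedObjectLength(uint32_t statedLength)
{
    return (statedLength + 3u) & ~3u;
}
```

For instance, an object with a stated length of 9 or 10 octets occupies 12 octets on the wire, while an 8-octet object needs no padding.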
The PDP simply performs a byte-wise comparison on the value in this object with respect to the handle object values of other currently installed requests.

2.2.2 Context Object (Context)

Specifies the type of event(s) that triggered the query. Required for request messages. Admission control, resource allocation, and forwarding requests are all amenable to client-types that outsource their decision making facility to the PDP. For applicable client-types a PEP can also make a request to receive named configuration information from the PDP. This named configuration data may be in a form useful for setting system attributes on a PEP, or it may be in the form of policy rules that are to be directly verified by the PEP.

Multiple flags can be set for the same request. This is only allowed, however, if the set of client specific information in the combined request is identical to the client specific information that would be specified if individual requests were made for each specified flag.

C-num = 2, C-Type = 1

            0              1              2              3
     +--------------+--------------+--------------+--------------+
     |            R-Type           |            M-Type           |
     +--------------+--------------+--------------+--------------+

R-Type (Request Type Flag)
    0x01 = Incoming-Message/Admission Control request
    0x02 = Resource-Allocation request
    0x04 = Outgoing-Message request
    0x08 = Configuration request

M-Type (Message Type)
    Client Specific 16 bit values of protocol message types

2.2.3 In-Interface Object (IN-Int)

The In-Interface Object is used to identify the incoming interface on which a particular request applies and the address where the received message originated. For flows or messages generated from the PEP's local host, the loop back address and ifindex are used.

This Interface object is also used to identify the incoming (receiving) interface via its ifindex. The ifindex specified in the In-Interface is typically relative to the flow of the underlying protocol messages. The ifindex is the interface on which the protocol message was received.
C-Num = 3

C-Type = 1, IPv4 Address + Interface

            0              1              2              3
     +--------------+--------------+--------------+--------------+
     |                    IPv4 Address format                    |
     +--------------+--------------+--------------+--------------+
     |                          ifindex                          |
     +--------------+--------------+--------------+--------------+

For this type of the interface object, the IPv4 address specifies the IP address that the incoming message came from.

C-Type = 2, IPv6 Address + Interface

            0              1              2              3
     +--------------+--------------+--------------+--------------+
     |                                                           |
     +                                                           +
     |                                                           |
     +                    IPv6 Address format                    +
     |                                                           |
     +                                                           +
     |                                                           |
     +--------------+--------------+--------------+--------------+
     |                          ifindex                          |
     +--------------+--------------+--------------+--------------+

For this type of the interface object, the IPv6 address specifies the IP address that the incoming message came from. The ifindex is used to refer to the MIB-II defined local incoming interface on the PEP as described above.

2.2.4 Out-Interface Object (OUT-Int)

The Out-Interface is used to identify the outgoing interface to which a specific request applies and the address for where the forwarded message is to be sent. For flows or messages destined to the PEP's local host, the loop back address and ifindex are used. The Out-Interface has the same formats as the In-Interface Object.

This Interface object is also used to identify the outgoing (forwarding) interface via its ifindex. The ifindex specified in the Out-Interface is typically relative to the flow of the underlying protocol messages. The ifindex is the one on which a protocol message is about to be forwarded.

C-Num = 4

C-Type = 1, IPv4 Address + Interface

Same C-Type format as the In-Interface object. The IPv4 address specifies the IP address to which the outgoing message is going. The ifindex is used to refer to the MIB-II defined local outgoing interface on the PEP.

C-Type = 2, IPv6 Address + Interface

Same C-Type format as the In-Interface object. For this type of the interface object, the IPv6 address specifies the IP address to which the outgoing message is going.
The ifindex is used to refer to the MIB-II defined local outgoing interface on the PEP.

2.2.5 Reason Object (Reason)

This object specifies the reason why the request state was deleted. It appears in the delete request (DRQ) message. The Reason Sub-code field is reserved for more detailed client-specific reason codes defined in the corresponding documents.

C-Num = 5, C-Type = 1

            0              1              2              3
     +--------------+--------------+--------------+--------------+
     |         Reason-Code         |       Reason Sub-code       |
     +--------------+--------------+--------------+--------------+

Reason Code:
    1 = Unspecified
    2 = Management
    3 = Preempted (Another request state takes precedence)
    4 = Tear (Used to communicate a signaled state removal)
    5 = Timeout (Local state has timed-out)
    6 = Route Change (Change invalidates request state)
    7 = Insufficient Resources (No local resource available)
    8 = PDP's Directive (PDP decision caused the delete)
    9 = Unsupported decision (PDP decision not supported)
    10= Synchronize Handle Unknown
    11= Transient Handle (stateless event)
    12= Malformed Decision (could not recover)
    13= Unknown COPS Object from PDP:
        Sub-code (octet 2) contains unknown object's C-Num
        and (octet 3) contains unknown object's C-Type.

2.2.6 Decision Object (Decision)

Decision made by the PDP. Appears in replies. The specific non-mandatory decision objects required in a decision to a particular request depend on the type of client.

C-Num = 6

C-Type = 1, Decision Flags (Mandatory)

            0              1              2              3
     +--------------+--------------+--------------+--------------+
     |         Command-Code        |            Flags            |
     +--------------+--------------+--------------+--------------+

Commands:
    0 = NULL Decision (No configuration data available)
    1 = Install (Admit request/Install configuration)
    2 = Remove (Remove request/Remove configuration)

Flags:
    0x01 = Trigger Error (Trigger error message if set)
    Note: Trigger Error is applicable to client-types that are
    capable of sending error notifications for signaled messages.
Flag values not applicable to a given context's R-Type or client-type MUST be ignored by the PEP. C-Type = 2, Stateless Data This type of decision object carries additional stateless information that can be applied by the PEP locally. It is a variable length object and its internal format SHOULD be specified in the relevant COPS extension document for the given client-type. This object is optional in Decision messages and is interpreted relative to a given context. It is expected that even outsourcing PEPs will be able to make some simple stateless policy decisions locally in their LPDP. As this set is well known and implemented ubiquitously, PDPs are aware of it as well (either universally, through configuration, or using the Client-Open message). The PDP may also include this information in its decision, and the PEP MUST apply it to the resource allocation event that generated the request. C-Type = 3, Replacement Data This type of decision object carries replacement data that is to replace existing data in a signaled message. It is a variable length object and its internal format SHOULD be specified in the relevant COPS extension document for the given client-type. It is optional in Decision messages and is interpreted relative to a given context. C-Type = 4, Client Specific Decision Data Additional decision types can be introduced using the Client Specific Decision Data Object. It is a variable length object and its internal format SHOULD be specified in the relevant COPS extension document for the given client-type. It is optional in Decision messages and is interpreted relative to a given context. C-Type = 5, Named Decision Data Named configuration information is encapsulated in this version of the decision object in response to configuration requests. It is a variable length object and its internal format SHOULD be specified in the relevant COPS extension document for the given client-type. 
It is optional in Decision messages and is interpreted relative to both a given context and decision flags.

2.2.7 LPDP Decision Object (LPDPDecision)

Decision made by the PEP's local policy decision point (LPDP). May appear in requests. These objects correspond to and are formatted the same as the client specific decision objects defined above.

C-Num = 7

C-Type = (same C-Type as for Decision objects)

2.2.8 Error Object (Error)

This object is used to identify a particular COPS protocol error. The error sub-code field contains additional detailed client specific error codes. The appropriate Error Sub-codes for a particular client-type SHOULD be specified in the relevant COPS extensions document.

C-Num = 8, C-Type = 1

            0              1              2              3
     +--------------+--------------+--------------+--------------+
     |          Error-Code         |        Error Sub-code       |
     +--------------+--------------+--------------+--------------+

Error-Code:
    1 = Bad handle
    2 = Invalid handle reference
    3 = Bad message format (Malformed Message)
    4 = Unable to process (server gives up on query)
    5 = Mandatory client-specific info missing
    6 = Unsupported client-type
    7 = Mandatory COPS object missing
    8 = Client Failure
    9 = Communication Failure
    10= Unspecified
    11= Shutting down
    12= Redirect to Preferred Server
    13= Unknown COPS Object:
        Sub-code (octet 2) contains unknown object's C-Num
        and (octet 3) contains unknown object's C-Type.
    14= Authentication Failure
    15= Authentication Required

2.2.9 Client Specific Information Object (ClientSI)

The various types of this object are required for requests, and used in reports and opens when required. It contains client-type specific information.

C-Num = 9, C-Type = 1, Signaled ClientSI.

Variable-length field. All objects/attributes specific to a client's signaling protocol or internal state are encapsulated within one or more signaled Client Specific Information Objects. The format of the data encapsulated in the ClientSI object is determined by the client-type.

C-Type = 2, Named ClientSI.
Variable-length field. Contains named configuration information useful for relaying specific information about the PEP, a request, or configured state to the PDP server.

2.2.10 Keep-Alive Timer Object (KATimer)

Times are encoded as 2 octet integer values and are in units of seconds. The timer value is treated as a delta.

C-Num = 10, C-Type = 1, Keep-alive timer value

Timer object used to specify the maximum time interval over which a COPS message MUST be sent or received. The range of finite timeouts is 1 to 65535 seconds represented as an unsigned two-octet integer. The value of zero implies infinity.

            0              1              2              3
     +--------------+--------------+--------------+--------------+
     |        //////////////      |        KA Timer Value       |
     +--------------+--------------+--------------+--------------+

2.2.11 PEP Identification Object (PEPID)

The PEP Identification Object is used to identify the PEP client to the remote PDP. It is required for Client-Open messages.

C-Num = 11, C-Type = 1

Variable-length field. It is a NULL terminated ASCII string that is also zero padded to a 32-bit word boundary (so the object length is a multiple of 4 octets). The PEPID MUST contain an ASCII string that uniquely identifies the PEP within the policy domain in a manner that is persistent across PEP reboots. For example, it may be the PEP's statically assigned IP address or DNS name. This identifier may safely be used by a PDP as a handle for identifying the PEP in its policy rules.
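Putting the generic object header of section 2.2 together with the Keep-Alive Timer layout of section 2.2.10, a KATimer object is 8 octets: a length of 8, C-Num 10, C-Type 1, two reserved zero octets, and the 2-octet timer value in network byte order. A small encoding sketch (the function name is mine):

```cpp
#include <array>
#include <cstdint>

// Build a Keep-Alive Timer object: 4-octet object header
// (Length=8, C-Num=10, C-Type=1), 2 reserved octets set to 0,
// then the 2-octet timer value in seconds, network byte order.
std::array<uint8_t, 8> buildKATimer(uint16_t seconds)
{
    std::array<uint8_t, 8> obj{};
    obj[0] = 0;  obj[1] = 8;            // Length = 8 octets
    obj[2] = 10;                        // C-Num: Keep-Alive Timer
    obj[3] = 1;                         // C-Type: keep-alive timer value
    obj[4] = 0;  obj[5] = 0;            // reserved, set to 0
    obj[6] = static_cast<uint8_t>(seconds >> 8);
    obj[7] = static_cast<uint8_t>(seconds & 0xFF);
    return obj;
}
```

A zero `seconds` value encodes the "infinity" case described above; the Accounting Timer object of section 2.2.15 has the same shape with C-Num 15.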
2.2.12 Report-Type Object (Report-Type)

The Type of Report on the request state associated with a handle:

C-Num = 12, C-Type = 1

            0              1              2              3
     +--------------+--------------+--------------+--------------+
     |         Report-Type         |        /////////////        |
     +--------------+--------------+--------------+--------------+

Report-Type:
    1 = Success   : Decision was successful at the PEP
    2 = Failure   : Decision could not be completed by PEP
    3 = Accounting: Accounting update for an installed state

2.2.13 PDP Redirect Address (PDPRedirAddr)

A PDP when closing a PEP session for a particular client-type may optionally use this object to redirect the PEP to the specified PDP server address and TCP port number:

C-Num = 13, C-Type = 1, IPv4 Address + TCP Port

            0              1              2              3
     +--------------+--------------+--------------+--------------+
     |                    IPv4 Address format                    |
     +--------------+--------------+--------------+--------------+
     |  /////////////////////////  |       TCP Port Number       |
     +-----------------------------+-----------------------------+

C-Type = 2, IPv6 Address + TCP Port

            0              1              2              3
     +--------------+--------------+--------------+--------------+
     |                                                           |
     +                                                           +
     |                                                           |
     +                    IPv6 Address format                    +
     |                                                           |
     +                                                           +
     |                                                           |
     +--------------+--------------+--------------+--------------+
     |  /////////////////////////  |       TCP Port Number       |
     +-----------------------------+-----------------------------+

2.2.14 Last PDP Address (LastPDPAddr)

When a PEP sends a Client-Open message for a particular client-type the PEP SHOULD specify the last PDP it has successfully opened (meaning it received a Client-Accept) since the PEP last rebooted. If no PDP was used since the last reboot, the PEP will simply not include this object in the Client-Open message.

C-Num = 14, C-Type = 1, IPv4 Address (Same format as PDPRedirAddr)

C-Type = 2, IPv6 Address (Same format as PDPRedirAddr)

2.2.15 Accounting Timer Object (AcctTimer)

Times are encoded as 2 octet integer values and are in units of seconds. The timer value is treated as a delta.
           C-Num = 15,

           C-Type = 1, Accounting timer value

   Optional timer value used to determine the minimum interval between
   periodic accounting type reports. It is used by the PDP to describe
   to the PEP an acceptable interval between unsolicited accounting
   updates via Report messages where applicable. It provides a method
   for the PDP to control the amount of accounting traffic seen by the
   network. The range of finite time values is 1 to 65535 seconds
   represented as an unsigned two-octet integer. A value of zero means
   there SHOULD be no unsolicited accounting updates.

               0             1              2             3
       +--------------+--------------+--------------+--------------+
       |        //////////////       |       ACCT Timer Value      |
       +--------------+--------------+--------------+--------------+

2.2.16 Message Integrity Object (Integrity)

   The integrity object includes a sequence number and a message digest
   useful for authenticating and validating the integrity of a COPS
   message. When used, integrity is provided at the end of a COPS
   message as the last COPS object. The digest is then computed over
   all of a particular COPS message up to but not including the digest
   value itself. The sender of a COPS message will compute and fill in
   the digest portion of the Integrity object. The receiver of a COPS
   message will then compute a digest over the received message and
   verify it matches the digest in the received Integrity object.

           C-Num = 16,

           C-Type = 1, HMAC digest

   The HMAC integrity object employs HMAC (Keyed-Hashing for Message
   Authentication) [HMAC] to calculate the message digest based on a
   key shared between the PEP and its PDP.

   This Integrity object specifies a 32-bit Key ID used to identify a
   specific key shared between a particular PEP and its PDP and the
   cryptographic algorithm to be used. The Key ID allows for multiple
   simultaneous keys to exist on the PEP with corresponding keys on the
   PDP for the given PEPID. The key identified by the Key ID was used
   to compute the message digest in the Integrity object.
   All implementations, at a minimum, MUST support HMAC-MD5-96, which
   is HMAC employing the MD5 Message-Digest Algorithm [MD5] truncated
   to 96-bits to calculate the message digest.

   This object also includes a sequence number that is a 32-bit
   unsigned integer used to avoid replay attacks. The sequence number
   is initiated during an initial Client-Open Client-Accept message
   exchange and is then incremented by one each time a new message is
   sent over the TCP connection in the same direction. If the sequence
   number reaches the value of 0xFFFFFFFF, the next increment will
   simply rollover to a value of zero.

   The variable length digest is calculated over a COPS message
   starting with the COPS Header up to the Integrity Object (which MUST
   be the last object in a COPS message) INCLUDING the Integrity
   object's header, Key ID, and Sequence Number. The Keyed Message
   Digest field is not included as part of the digest calculation. In
   the case of HMAC-MD5-96, HMAC-MD5 will produce a 128-bit digest that
   is then to be truncated to 96-bits before being stored in or
   verified against the Keyed Message Digest field as specified in
   [HMAC]. The Keyed Message Digest MUST be 96-bits when HMAC-MD5-96 is
   used.

               0             1              2             3
       +-------------+-------------+-------------+-------------+
       |                        Key ID                         |
       +-------------+-------------+-------------+-------------+
       |                    Sequence Number                    |
       +-------------+-------------+-------------+-------------+
       |                                                       |
       +                                                       +
       |              ...Keyed Message Digest...               |
       +                                                       +
       |                                                       |
       +-------------+-------------+-------------+-------------+

2.3 Communication

   The COPS protocol uses a single persistent TCP connection between
   the PEP and a remote PDP. One PDP implementation per server MUST
   listen on a well-known TCP port number (COPS=3288 [IANA]). The PEP
   is responsible for initiating the TCP connection to a PDP. The
   location of the remote PDP can either be configured, or obtained via
   a service location mechanism [SRVLOC]. Service discovery is outside
   the scope of this protocol, however.
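The digest-coverage rule in 2.2.16 can be sketched in Python using the standard library's `hmac` module. This is a minimal illustration, assuming the generic 4-octet COPS object header (length, C-Num, C-Type); the function names are mine, not part of the protocol.

```python
import hmac
import hashlib
import struct

def hmac_md5_96(key: bytes, data: bytes) -> bytes:
    """HMAC-MD5 truncated to 96 bits (12 octets), per [HMAC]/[MD5]."""
    return hmac.new(key, data, hashlib.md5).digest()[:12]

def integrity_object(key: bytes, key_id: int, seq: int,
                     msg_so_far: bytes) -> bytes:
    """Build the Integrity object (C-Num=16, C-Type=1) for a message.

    The digest covers the whole COPS message up to and INCLUDING this
    object's header, Key ID, and Sequence Number; the 12-octet Keyed
    Message Digest itself is excluded from the calculation.
    """
    contents_len = 8 + 12  # Key ID + Sequence Number + 96-bit digest
    header = struct.pack("!HBB", 4 + contents_len, 16, 1)
    prefix = header + struct.pack("!II", key_id, seq)
    return prefix + hmac_md5_96(key, msg_so_far + prefix)
```

A receiver verifies by recomputing the digest over the received message minus its last 12 octets and comparing, ideally with `hmac.compare_digest` to avoid timing leaks.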
   If a single PEP can support multiple client-types, it may send
   multiple Client-Open messages, each specifying a particular client-
   type to a PDP over one or more TCP connections. Likewise, a PDP
   residing at a given address and port number may support one or more
   client-types. Given the client-types it supports, a PDP has the
   ability to either accept or reject each client-type independently.
   If a client-type is rejected, the PDP can redirect the PEP to an
   alternative PDP address and TCP port for a given client-type via
   COPS. Different TCP port numbers can be used to redirect the PEP to
   another PDP implementation running on the same server. Additional
   provisions for supporting multiple client-types (perhaps from
   independent PDP vendors) on a single remote PDP server are not
   provided by the COPS protocol, but, rather, are left to the software
   architecture of the given server platform.

   It is possible a single PEP may have open connections to multiple
   PDPs. This is the case when there are physically different PDPs
   supporting different client-types as shown in figure 2.

              +----------------+
              |                |
              |  Network Node  |          Policy Servers
              |                |
              |   +-----+      | COPS Client Type 1    +-----+
              |   |     |<-----|---------------------->| PDP1|
              |   + PEP +      | COPS Client Type 2    +-----+
              |   |     |<-----|---------\             +-----+
              |   +-----+      |          \----------->| PDP2|
              |      ^         |                       +-----+
              |      |         |
              |      \-->+-----+
              |          | LPDP|
              |          +-----+
              |                |
              +----------------+

              Figure 2: Multiple PDPs illustration.

   When a TCP connection is torn down or is lost, the PDP is expected
   to eventually clean up any outstanding request state related to
   request/decision exchanges with the PEP. When the PEP detects a lost
   connection due to a timeout condition it SHOULD explicitly send a
   Client-Close message for each opened client-type containing an
   <Error> object indicating the "Communication Failure" Error-Code.
   Additionally, the PEP SHOULD continuously attempt to contact the
   primary PDP or, if unsuccessful, any known backup PDPs.
   Specifically the PEP SHOULD keep trying all relevant PDPs with which
   it has been configured until it can establish a connection. If a PEP
   is in communication with a backup PDP and the primary PDP becomes
   available, the backup PDP is responsible for redirecting the PEP
   back to the primary PDP (via a <Client-Close> message containing a
   <PDPRedirAddr> object identifying the primary PDP to use for each
   affected client-type). Section 2.5 details synchronization behavior
   between PEPs and PDPs.

2.4 Client Handle Usage

   The client handle is used to identify a unique request state for a
   single PEP per client-type. Client handles are chosen by the PEP and
   are opaque to the PDP. The PDP simply uses the request handle to
   uniquely identify the request state for a particular Client-Type
   over a particular TCP connection and generically tie its decisions
   to a corresponding request. Client handles are initiated in request
   messages and are then used by subsequent request, decision, and
   report messages to reference the same request state. When the PEP is
   ready to remove a local request state, it will issue a delete
   message to the PDP for the corresponding client handle. A handle
   MUST be explicitly deleted by the PEP before it can be used by the
   PEP to identify a new request state. Handles referring to different
   request states MUST be unique within the context of a particular TCP
   connection and client-type.

2.5 Synchronization Behavior

   When disconnected from a PDP, the PEP SHOULD revert to making local
   decisions. Once a connection is reestablished, the PEP is expected
   to notify the PDP of any events that have passed local admission
   control. Additionally, the remote PDP may request that all the PEP's
   internal state be resynchronized (all previously installed requests
   are to be reissued) by sending a Synchronize State message.
   After a failure and before a new connection is fully functional,
   disruption of service can be minimized if the PEP caches previously
   communicated decisions and continues to use them for some
   appropriate length of time. Specific rules for such behavior are to
   be defined in the appropriate COPS client-type extension
   specifications.

   A PEP that caches state from a previous exchange with a disconnected
   PDP MUST communicate this fact to any PDP with which it is able to
   later reconnect. This is accomplished by including the address and
   TCP port of the last PDP for which the PEP is still caching state in
   the Client-Open message. The <LastPDPAddr> object will only be
   included for the last PDP with which the PEP was completely in sync.
   If the service interruption was temporary and the PDP still contains
   the complete state for the PEP, the PDP may choose not to
   synchronize all states. It is still the responsibility of the PEP to
   update the PDP of all state changes that occurred during the
   disruption of service including any states communicated to the
   previous PDP that had been deleted after the connection was lost.
   These MUST be explicitly deleted after a connection is
   reestablished. If the PDP issues a synchronize request the PEP MUST
   pass all current states to the PDP followed by a Synchronize State
   Complete message (thus completing the synchronization process). If
   the PEP crashes and loses all cached state for a client-type, it
   will simply not include a <LastPDPAddr> in its Client-Open message.

3. Message Content

   This section describes the basic messages exchanged between a PEP
   and a remote PDP as well as their contents. As a convention, object
   ordering is expected as shown in the BNF for each COPS message
   unless otherwise noted. The Integrity object, if included, MUST
   always be the last object in a message.
   If security is required and a message was received without a valid
   Integrity object, the receiver MUST send a Client-Close message for
   Client-Type=0 specifying the appropriate error code.

3.1 Request (REQ)  PEP -> PDP

   The PEP establishes a request state client handle for which the
   remote PDP may maintain state. The remote PDP then uses this handle
   to refer to the exchanged information and decisions communicated
   over the TCP connection to a particular PEP for a given client-type.

   Once a stateful handle is established for a new request, any
   subsequent modifications of the request can be made using the REQ
   message specifying the previously installed handle. The PEP is
   responsible for notifying the PDP whenever its local state changes
   so the PDP's state will be able to accurately mirror the PEP's
   state.

   The format of the Request message is as follows:

             <Request Message> ::=  <Common Header>
                                    <Client Handle>
                                    <Context>
                                    [<IN-Int>]
                                    [<OUT-Int>]
                                    [<ClientSI(s)>]
                                    [<LPDPDecision(s)>]
                                    [<Integrity>]

             <ClientSI(s)> ::= <ClientSI> |
                               <ClientSI(s)> <ClientSI>

             <LPDPDecision(s)> ::= <LPDPDecision> |
                                   <LPDPDecision(s)> <LPDPDecision>

             <LPDPDecision> ::= [<Context>]
                                <LPDPDecision: Flags>
                                [<LPDPDecision: Stateless Data>]
                                [<LPDPDecision: Replacement Data>]
                                [<LPDPDecision: ClientSI Data>]
                                [<LPDPDecision: Named Data>]

   The context object is used to determine the context within which all
   the other objects are to be interpreted. It also is used to
   determine the kind of decision to be returned from the policy
   server. This decision might be related to admission control,
   resource allocation, object forwarding and substitution, or
   configuration.

   The interface objects are used to determine the corresponding
   interface on which a signaling protocol message was received or is
   about to be sent. They are typically used if the client is
   participating along the path of a signaling protocol or if the
   client is requesting configuration data for a particular interface.
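The Request BNF above can be assembled mechanically: a common header followed by the mandatory and optional objects in order. The sketch below assumes the common-header layout defined earlier in this specification (4-bit version and 4-bit flags in one octet, a 1-octet op code, a 2-octet client-type, and a 4-octet message length); the helper names and the pre-encoded object arguments are illustrative.

```python
import struct

COPS_VERSION = 1
OP_REQ = 1  # Request op code

def common_header(op_code: int, client_type: int, msg_len: int,
                  solicited: bool = False) -> bytes:
    # One octet of version (high nibble) and flags (low nibble, 0x1 =
    # solicited-message flag), then op code, client-type, and the total
    # message length in octets.
    ver_flags = (COPS_VERSION << 4) | (0x1 if solicited else 0x0)
    return struct.pack("!BBHI", ver_flags, op_code, client_type, msg_len)

def request_message(client_type: int, handle_obj: bytes,
                    context_obj: bytes, *optional_objs: bytes) -> bytes:
    # <Request> ::= <Common Header> <Client Handle> <Context> [...]
    # Callers pass already-encoded COPS objects in BNF order; an
    # Integrity object, if used, must come last.
    body = handle_obj + context_obj + b"".join(optional_objs)
    return common_header(OP_REQ, client_type, 8 + len(body)) + body
```

The message length in the header covers the header itself plus all appended objects, which is why 8 is added to the body length.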
   ClientSI, the client specific information object, holds the client-
   type specific data for which a policy decision needs to be made. In
   the case of configuration, the Named ClientSI may include named
   information about the module, interface, or functionality to be
   configured. The ordering of multiple ClientSIs is not important.
   Finally, the LPDPDecision object holds information regarding the
   local decision made by the LPDP.

   Malformed Request messages MUST result in the PDP specifying a
   Decision message with the appropriate error code.

3.2 Decision (DEC)  PDP -> PEP

   The PDP responds to the REQ with a DEC message that includes the
   associated client handle and one or more decision objects grouped
   relative to a Context object and Decision Flags object type pair. If
   there was a protocol error an error object is returned instead.

   It is required that the first decision message for a new/updated
   request will have the solicited message flag set (value = 1) in the
   COPS header. This avoids the issue of keeping track of which updated
   request (that is, a request reissued for the same handle) a
   particular decision corresponds to. It is important that, for a
   given handle, there be at most one outstanding solicited decision
   per request. This essentially means that the PEP SHOULD NOT issue
   more than one REQ (for a given handle) before it receives a
   corresponding DEC with the solicited message flag set. The PDP MUST
   always issue decisions for requests on a particular handle in the
   order they arrive and all requests MUST have a corresponding
   decision.

   To avoid deadlock, the PEP can always timeout after issuing a
   request that does not receive a decision. It MUST then delete the
   timed-out handle, and may try again using a new handle.
   The format of the Decision message is as follows:

              <Decision Message> ::= <Common Header>
                                     <Client Handle>
                                     <Decision(s)> | <Error>
                                     [<Integrity>]

              <Decision(s)> ::= <Decision> |
                                <Decision(s)> <Decision>

              <Decision> ::= <Context>
                             <Decision: Flags>
                             [<Decision: Stateless Data>]
                             [<Decision: Replacement Data>]
                             [<Decision: ClientSI Data>]
                             [<Decision: Named Data>]

   The Decision message may include either an Error object or one or
   more context plus associated decision objects. COPS protocol
   problems are reported in the Error object (e.g. an error with the
   format of the original request including malformed request messages,
   unknown COPS objects in the Request, etc.). The applicable Decision
   object(s) depend on the context and the type of client. The only
   ordering requirement for decision objects is that the required
   Decision Flags object type MUST precede the other Decision object
   types per context binding.

3.3 Report State (RPT)  PEP -> PDP

   The RPT message is used by the PEP to communicate to the PDP its
   success or failure in carrying out the PDP's decision, or to report
   an accounting related change in state. The Report-Type specifies the
   kind of report and the optional ClientSI can carry additional
   information per Client-Type.

   For every DEC message containing a configuration context that is
   received by a PEP, the PEP MUST generate a corresponding Report
   State message with the Solicited Message flag set describing its
   success or failure in applying the configuration decision. In
   addition, outsourcing decisions from the PDP MAY result in a
   corresponding solicited Report State from the PEP depending on the
   context and the type of client. RPT messages solicited by decisions
   for a given Client Handle MUST set the Solicited Message flag and
   MUST be sent in the same order as their corresponding Decision
   messages were received. There MUST never be more than one Report
   State message generated with the Solicited Message flag set per
   Decision.
   The Report State may also be used to provide periodic updates of
   client specific information for accounting and state monitoring
   purposes depending on the type of the client. In such cases the
   accounting report type should be specified utilizing the appropriate
   client specific information object.

              <Report State> ::= <Common Header>
                                 <Client Handle>
                                 <Report-Type>
                                 [<ClientSI>]
                                 [<Integrity>]

3.4 Delete Request State (DRQ)  PEP -> PDP

   When sent from the PEP this message indicates to the remote PDP that
   the state identified by the client handle is no longer
   available/relevant. This information will then be used by the remote
   PDP to initiate the appropriate housekeeping actions. The reason
   code object is interpreted with respect to the client-type and
   signifies the reason for the removal.

   The format of the Delete Request State message is as follows:

              <Delete Request> ::= <Common Header>
                                   <Client Handle>
                                   <Reason>
                                   [<Integrity>]

   Given the stateful nature of COPS, it is important that when a
   request state is finally removed from the PEP, a DRQ message for
   this request state is sent to the PDP so the corresponding state may
   likewise be removed on the PDP. Request states not explicitly
   deleted by the PEP will be maintained by the PDP until either the
   client session is closed or the connection is terminated.

   Malformed Decision messages MUST trigger a DRQ specifying the
   appropriate erroneous reason code (Bad Message Format) and any
   associated state on the PEP SHOULD either be removed or
   re-requested. If a Decision contained an unknown COPS Decision
   Object, the PEP MUST delete its request specifying the Unknown COPS
   Object reason code because the PEP will be unable to comply with the
   information contained in the unknown object. In any case, after
   issuing a DRQ, the PEP may retry the corresponding Request again.
3.5 Synchronize State Request (SSQ)  PDP -> PEP

   The format of the Synchronize State Query message is as follows:

              <Synchronize State> ::= <Common Header>
                                      [<Client Handle>]
                                      [<Integrity>]

   This message indicates that the remote PDP wishes the client (which
   appears in the common header) to re-send its state. If the optional
   Client Handle is present, only the state associated with this handle
   is synchronized. If the PEP does not recognize the requested handle,
   it MUST immediately send a DRQ message to the PDP for the handle
   that was specified in the SSQ message. If no handle is specified in
   the SSQ message, all the active client state MUST be synchronized
   with the PDP.

   The client performs state synchronization by re-issuing request
   queries of the specified client-type for the existing state in the
   PEP. When synchronization is complete, the PEP MUST issue a
   synchronize state complete message to the PDP.

3.6 Client-Open (OPN)  PEP -> PDP

   The Client-Open message can be used by the PEP to specify to the PDP
   the client-types the PEP can support, the last PDP to which the PEP
   connected for the given client-type, and/or client specific feature
   negotiation. A Client-Open message can be sent to the PDP at any
   time and multiple Client-Open messages for the same client-type are
   allowed (in case of global state changes).

              <Client-Open> ::= <Common Header>
                                <PEPID>
                                [<ClientSI>]
                                [<LastPDPAddr>]
                                [<Integrity>]

   The PEPID is a symbolic, variable length name that uniquely
   identifies the specific client to the PDP (see Section 2.2.11).

   A named ClientSI object can be included for relaying additional
   global information about the PEP to the PDP when required (as
   specified in the appropriate extensions document for the client-
   type).

   The PEP may also provide a Last PDP Address object in its Client-
   Open message specifying the last PDP (for the given client-type) for
   which it is still caching decisions since its last reboot.
   A PDP can use this information to determine the appropriate
   synchronization behavior (see Section 2.5).

   If the PDP receives a malformed Client-Open message it MUST generate
   a Client-Close message specifying the appropriate error code.

3.7 Client-Accept (CAT)  PDP -> PEP

   The Client-Accept message is used to positively respond to the
   Client-Open message. This message will return to the PEP a timer
   object indicating the maximum time interval between keep-alive
   messages. Optionally, a timer specifying the minimum allowed
   interval between accounting report messages may be included when
   applicable.

              <Client-Accept> ::= <Common Header>
                                  <KA Timer>
                                  [<ACCT Timer>]
                                  [<Integrity>]

   If the PDP refuses the client, it will instead issue a Client-Close
   message.

   The KA Timer corresponds to the maximum acceptable intermediate time
   between the generation of messages by the PDP and PEP. The timer
   value is determined by the PDP and is specified in seconds. A timer
   value of 0 implies no secondary connection verification is
   necessary.

   The optional ACCT Timer allows the PDP to indicate to the PEP that
   periodic accounting reports SHOULD NOT exceed the specified timer
   interval per client handle. This allows the PDP to control the rate
   at which accounting reports are sent by the PEP (when applicable).
   In general, accounting type Report messages are sent to the PDP when
   determined appropriate by the PEP. The accounting timer merely is
   used by the PDP to keep the rate of such updates in check (i.e.
   preventing the PEP from blasting the PDP with accounting reports).
   Not including this object implies there are no PDP restrictions on
   the rate at which accounting updates are generated.

   If the PEP receives a malformed Client-Accept message it MUST
   generate a Client-Close message specifying the appropriate error
   code.

3.8 Client-Close (CC)  PEP -> PDP, PDP -> PEP

   The Client-Close message can be issued by either the PDP or PEP to
   notify the other that a particular type of client is no longer being
   supported.
              <Client-Close> ::= <Common Header>
                                 <Error>
                                 [<PDPRedirAddr>]
                                 [<Integrity>]

   The Error object is included to describe the reason for the close
   (e.g. the requested client-type is not supported by the remote PDP
   or client failure).

   A PDP MAY optionally include a PDP Redirect Address object in order
   to inform the PEP of the alternate PDP it SHOULD use for the client-
   type specified in the common header.

3.9 Keep-Alive (KA)  PEP -> PDP, PDP -> PEP

   The keep-alive message MUST be transmitted by the PEP within the
   period defined by the minimum of all KA Timer values specified in
   all received CAT messages for the connection. A KA message MUST be
   generated randomly between 1/4 and 3/4 of this minimum KA timer
   interval. When the PDP receives a keep-alive message from a PEP, it
   MUST echo a keep-alive back to the PEP. This message provides
   validation for each side that the connection is still functioning
   even when there is no other messaging.

   Note: The client-type in the header MUST always be set to 0 as the
   KA is used for connection verification (not per client session
   verification).

              <Keep-Alive> ::= <Common Header>
                               [<Integrity>]

   Both client and server MAY assume the TCP connection is insufficient
   for the client-type with the minimum time value (specified in the
   CAT message) if no communication activity is detected for a period
   exceeding the timer period. For the PEP, such detection implies the
   remote PDP or connection is down and the PEP SHOULD now attempt to
   use an alternative/backup PDP.

3.10 Synchronize State Complete (SSC)  PEP -> PDP

   The Synchronize State Complete is sent by the PEP to the PDP after
   the PDP sends a synchronize state request to the PEP and the PEP has
   finished synchronization. It is useful so that the PDP will know
   when all the old client state has been successfully re-requested
   and, thus, the PEP and PDP are completely synchronized.
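The Keep-Alive timing rule in section 3.9 (a KA generated randomly between 1/4 and 3/4 of the minimum KA Timer) can be sketched as a small scheduling helper; the function name is illustrative.

```python
import random

def next_ka_delay(ka_timers: list) -> float:
    """Delay in seconds before the PEP sends its next Keep-Alive.

    Per section 3.9, the interval is drawn uniformly between 1/4 and
    3/4 of the minimum of all KA Timer values received in CAT messages
    on this connection.
    """
    minimum = min(ka_timers)
    return random.uniform(0.25 * minimum, 0.75 * minimum)
```

A PEP that received KA Timer values of 120 and 60 seconds across its CAT messages would therefore schedule the next KA somewhere between 15 and 45 seconds out.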
   The Client Handle object only needs to be included if the
   corresponding Synchronize State Message originally referenced a
   specific handle.

              <Synchronize State Complete> ::= <Common Header>
                                               [<Client Handle>]
                                               [<Integrity>]

4. Common Operation

   This section describes the typical exchanges between remote PDP
   servers and PEP clients.

4.1 Security and Sequence Number Negotiation

   COPS message security is negotiated once per connection and covers
   all communication over a particular connection. If COPS level
   security is required, it MUST be negotiated during the initial
   Client-Open/Client-Accept message exchange specifying a Client-Type
   of zero (which is reserved for connection level security negotiation
   and connection verification).

   If a PEP is not configured to use COPS security with a PDP it will
   simply send the PDP Client-Open messages for the supported Client-
   Types as specified in section 4.3 and will not include the Integrity
   object in any COPS messages.

   Otherwise, security can be initiated by the PEP if it sends the PDP
   a Client-Open message with Client-Type=0 before opening any other
   Client-Type. If the PDP receives a Client-Open with a Client-Type=0
   after another Client-Type has already been opened successfully it
   MUST return a Client-Close message (for Client-Type=0) to that PEP.
   This first Client-Open message MUST specify a Client-Type of zero
   and MUST provide the PEPID and a COPS Integrity object. This
   Integrity object will contain the initial sequence number the PEP
   requires the PDP to increment during subsequent communication after
   the initial Client-Open/Client-Accept exchange and the Key ID
   identifying the algorithm and key used to compute the digest.

   Similarly, if the PDP accepts the PEP's security key and algorithm
   by validating the message digest using the identified key, the PDP
   MUST send a Client-Accept message with a Client-Type of zero to the
   PEP carrying an Integrity object.
   This Integrity object will contain the initial sequence number the
   PDP requires the PEP to increment during all subsequent
   communication with the PDP and the Key ID identifying the key and
   algorithm used to compute the digest.

   If the PEP, from the perspective of a PDP that requires security,
   fails or never performs the security negotiation by not sending an
   initial Client-Open message with a Client-Type=0 including a valid
   Integrity object, the PDP MUST send to the PEP a Client-Close
   message with a Client-Type=0 specifying the appropriate error code.
   Similarly, if the PDP, from the perspective of a PEP that requires
   security, fails the security negotiation by not sending back a
   Client-Accept message with a Client-Type=0 including a valid
   Integrity object, the PEP MUST send to the PDP a Client-Close
   message with a Client-Type=0 specifying the appropriate error code.
   Such a Client-Close message need not carry an integrity object (as
   the security negotiation did not yet complete).

   The security initialization can fail for one of several reasons:

   1. The side receiving the message requires COPS level security but
      an Integrity object was not provided (Authentication Required
      error code).

   2. A COPS Integrity object was provided, but with an
      unknown/unacceptable C-Type (Unknown COPS Object error code
      specifying the unsupported C-Num and C-Type).

   3. The message digest or Key ID in the provided Integrity object was
      incorrect and therefore the message could not be authenticated
      using the identified key (Authentication Failure error code).

   Once the initial security negotiation is complete, the PEP will know
   what sequence numbers the PDP expects and the PDP will know what
   sequence numbers the PEP expects. ALL COPS messages must then
   include the negotiated Integrity object specifying the correct
   sequence number with the appropriate message digest (including the
   Client-Open/Client-Accept messages for specific Client-Types).
   ALL subsequent messages from the PDP to the PEP MUST result in an
   increment of the sequence number provided by the PEP in the
   Integrity object of the initial Client-Open message. Likewise, ALL
   subsequent messages from the PEP to the PDP MUST result in an
   increment of the sequence number provided by the PDP in the
   Integrity object of the initial Client-Accept message. Sequence
   numbers are incremented by one starting with the corresponding
   initial sequence number. For example, if the sequence number
   specified to the PEP by the PDP in the initial Client-Accept was 10,
   the next message the PEP sends to the PDP will provide an Integrity
   object with a sequence number of 11, the message after that a
   sequence number of 12, and so on.

   If any subsequent received message contains the wrong sequence
   number, an unknown Key ID, an invalid message digest, or is missing
   an Integrity object after integrity was negotiated, then a Client-
   Close message MUST be generated for the Client-Type zero containing
   a valid Integrity object and specifying the appropriate error code.
   The connection should then be dropped.

4.2 Key Maintenance

   Key maintenance is outside the scope of this document, but COPS
   implementations MUST at least provide the ability to manually
   configure keys and their parameters locally. The key used to produce
   the Integrity object's message digest is identified by the Key ID
   field. Thus, a Key ID parameter is used to identify one of
   potentially multiple simultaneous keys shared by the PEP and PDP. A
   Key ID is relative to a particular PEPID on the PDP or to a
   particular PDP on the PEP. Each key must also be configured with
   lifetime parameters for the time period within which it is valid as
   well as an associated cryptographic algorithm parameter specifying
   the algorithm to be used with the key.
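The per-direction sequence rules of section 4.1 (combined with the 0xFFFFFFFF rollover defined in section 2.2.16) amount to a simple counter per direction. The sketch below is illustrative; the class and method names are not part of the protocol.

```python
MAX_SEQ = 0xFFFFFFFF

def next_seq(seq: int) -> int:
    # Increment by one; after 0xFFFFFFFF the counter rolls over to
    # zero (section 2.2.16).
    return 0 if seq == MAX_SEQ else seq + 1

class SeqChecker:
    """Tracks the sequence number one side expects from its peer.

    'initial' is the sequence number carried in the negotiation
    Integrity object; the first subsequent message must carry
    initial + 1. On any mismatch the caller must send a Client-Close
    for Client-Type 0 and drop the connection.
    """
    def __init__(self, initial: int):
        self.expected = next_seq(initial)

    def accept(self, received: int) -> bool:
        if received != self.expected:
            return False
        self.expected = next_seq(self.expected)
        return True
```

With an initial value of 10 negotiated in the Client-Accept, the checker accepts 11, then 12, and rejects anything out of order.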
   At a minimum, all COPS implementations MUST support the HMAC-MD5-96
   [HMAC][MD5] cryptographic algorithm for computing a message digest
   for inclusion in the Keyed Message Digest of the Integrity object
   which is appended to the message.

   It is good practice to regularly change keys. Keys MUST be
   configurable such that their lifetimes overlap allowing smooth
   transitions between keys. At the midpoint of the lifetime overlap
   between two keys, senders should transition from using the current
   key to the next/longer-lived key. Meanwhile, receivers simply accept
   any identified key received within its configured lifetime and
   reject those that are not.

4.3 PEP Initialization

   Sometime after a connection is established between the PEP and a
   remote PDP and after security is negotiated (if required), the PEP
   will send one or more Client-Open messages to the remote PDP, one
   for each client-type supported by the PEP. The Client-Open message
   MUST contain the address of the last PDP with which the PEP is still
   caching a complete set of decisions. If no decisions are being
   cached from the previous PDP the LastPDPAddr object MUST NOT be
   included in the Client-Open message (see Section 2.5). Each Client-
   Open message MUST at least contain the common header noting one
   client-type supported by the PEP. The remote PDP will then respond
   with separate Client-Accept messages for each of the client-types
   requested by the PEP that the PDP can also support.

   If a specific client-type is not supported by the PDP, the PDP will
   instead respond with a Client-Close specifying the client-type is
   not supported and will possibly suggest an alternate PDP address and
   port. Otherwise, the PDP will send a Client-Accept specifying the
   timer interval between keep-alive messages and the PEP may begin
   issuing requests to the PDP.

4.4 Outsourcing Operations

   In the outsourcing scenario, when the PEP receives an event that
   requires a new policy decision it sends a request message to the
   remote PDP.
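The sender-side key-transition rule of section 4.2 (switch to the next key at the midpoint of the lifetime overlap) can be sketched as follows. The key-record fields (`id`, `start`, `end`) are illustrative bookkeeping, not part of the protocol.

```python
def select_send_key(keys, now):
    """Pick the key a sender should use at time 'now'.

    'keys' is a list of dicts {"id", "start", "end"} sorted by start
    time. A sender switches from the current key to the next at the
    midpoint of the lifetime overlap; a receiver would instead accept
    any identified key whose lifetime covers 'now'.
    """
    valid = [k for k in keys if k["start"] <= now <= k["end"]]
    if not valid:
        return None  # no configured key is currently valid
    if len(valid) >= 2:
        cur, nxt = valid[0], valid[1]
        midpoint = (nxt["start"] + cur["end"]) / 2
        return nxt if now >= midpoint else cur
    return valid[0]
```

For two keys valid over [0, 100] and [60, 200], the overlap is [60, 100] with midpoint 80, so a sender keeps the first key until time 80 and uses the second afterward.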
   What specifically qualifies as an event for a particular client-type
   SHOULD be specified in the specific document for that client-type.
   The remote PDP then makes a decision and sends a decision message
   back to the PEP. Since the request is stateful, the request will be
   remembered, or installed, on the remote PDP. The unique handle
   (unique per TCP connection and client-type), specified in both the
   request and its corresponding decision, identifies this request
   state. The PEP is responsible for deleting this request state once
   the request is no longer applicable.

   The PEP can update a previously installed request state by reissuing
   a request for the previously installed handle. The remote PDP is
   then expected to make new decisions and send a decision message back
   to the PEP. Likewise, the server MAY change a previously issued
   decision on any currently installed request state at any time by
   issuing an unsolicited decision message. At all times the PEP module
   is expected to abide by the PDP's decisions and notify the PDP of
   any state changes.

4.5 Configuration Operations

   In the configuration scenario, as in the outsourcing scenario, the
   PEP will make a configuration request to the PDP for a particular
   interface, module, or functionality that may be specified in the
   named client specific information object. The PDP will then send
   potentially several decisions containing named units of
   configuration data to the PEP. The PEP is expected to install and
   use the configuration locally. A particular named configuration can
   be updated by simply sending additional decision messages for the
   same named configuration. When the PDP no longer wishes the PEP to
   use a piece of configuration information, it will send a decision
   message specifying the named configuration and a decision flags
   object with the remove configuration command. The PEP SHOULD then
   proceed to remove the corresponding configuration and send a report
   message to the PDP that specifies it has been deleted.
In all cases, the PEP MAY notify the remote PDP of the local status of an installed state using the report message where appropriate. The report message is to be used to signify when billing can begin, what actions were taken, or to produce periodic updates for monitoring and accounting purposes, depending on the client. This message can carry client specific information when needed.

4.6 Keep-Alive Operations

The Keep-Alive message is used to validate that the connection between the client and server is still functioning even when there is no other messaging from the PEP to PDP. The PEP MUST generate a COPS KA message randomly within one-fourth to three-fourths the minimum KA Timer interval specified by the PDP in the Client-Accept message. On receiving a Keep-Alive message from the PEP, the PDP MUST respond by echoing a Keep-Alive message back to the PEP. If either side does not receive a Keep-Alive or any other COPS message within the minimum KA Timer interval from the other, the connection SHOULD be considered lost.

4.7 PEP/PDP Close

Finally, Client-Close messages are used to negate the effects of the corresponding Client-Open messages, notifying the other side that the specified client-type is no longer supported/active. When the PEP detects a lost connection due to a keep-alive timeout condition, it SHOULD explicitly send a Client-Close message for each opened client-type specifying a communications failure error code. The PEP MAY then proceed to terminate the connection to the PDP and attempt to reconnect or try a backup/alternative PDP. When the PDP is shutting down, it SHOULD also explicitly send a Client-Close to all connected PEPs for each client-type, perhaps specifying an alternative PDP to use instead.

5. Security Considerations

The COPS protocol provides an Integrity object that can achieve authentication, message integrity, and replay prevention.
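Two of the mechanics just described can be sketched in a few lines: the randomized keep-alive timing from Section 4.6, and the HMAC-MD5-96 keyed digest carried by the Integrity object. This is an illustrative sketch, not an implementation of the COPS wire format:

```python
import hmac
import hashlib
import random

def next_ka_delay(min_ka_interval, rng=random.random):
    # The PEP sends its next Keep-Alive uniformly within one-fourth to
    # three-fourths of the minimum KA Timer interval from the Client-Accept.
    return min_ka_interval * (0.25 + 0.5 * rng())

def hmac_md5_96(key, message):
    # HMAC-MD5 truncated to the first 96 bits (12 bytes), as carried in
    # the Keyed Message Digest of the Integrity object.
    return hmac.new(key, message, hashlib.md5).digest()[:12]
```
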
All COPS implementations MUST support the COPS Integrity object and its mechanisms as described in this document. To ensure the client (PEP) is communicating with the correct policy server (PDP) requires authentication of the PEP and PDP using a shared secret, and consistent proof that the connection remains valid. The shared secret minimally requires manual configuration of keys (identified by a Key ID) shared between the PEP and its PDP. The key is used in conjunction with the contents of a COPS message to calculate a message digest that is part of the Integrity object. The Integrity object is then used to validate all COPS messages sent over the TCP connection between a PEP and PDP.

Key maintenance is outside the scope of this document beyond the specific requirements discussed in section 4.2. In general, it is good practice to regularly change keys to maintain security. Furthermore, it is good practice to use localized keys specific to a particular PEP such that a stolen PEP will not compromise the security of an entire administrative domain.

The COPS Integrity object also provides sequence numbers to avoid replay attacks. The PDP chooses the initial sequence number for the PEP and the PEP chooses the initial sequence number for the PDP. These initial numbers are then incremented with each successive message sent over the connection in the corresponding direction. The initial sequence numbers SHOULD be chosen such that they are monotonically increasing and never repeat for a particular key.

Security between the client (PEP) and server (PDP) MAY be provided by IP Security [IPSEC]. In this case, the IPSEC Authentication Header (AH) SHOULD be used for the validation of the connection; additionally, IPSEC Encapsulation Security Payload (ESP) MAY be used to provide both validation and secrecy. Transport Layer Security [TLS] MAY be used for both connection-level validation and privacy.

6.
IANA Considerations

The Client-type identifies the policy client application to which a message refers. Client-type values within the range 0x0001-0x3FFF are reserved with Specification Required status as defined in [IANA-CONSIDERATIONS]. These values MUST be registered with IANA and their behavior and applicability MUST be described in a COPS extension document.

Client-type values in the range 0x4000 - 0x7FFF are reserved for Private Use as defined in [IANA-CONSIDERATIONS]. These Client-types are not tracked by IANA and are not to be used in standards or general-release products, as their uniqueness cannot be assured.

Client-type values in the range 0x8000 - 0xFFFF are First Come First Served as defined in [IANA-CONSIDERATIONS]. These Client-types are tracked by IANA but do not require published documents describing their use. IANA merely assures their uniqueness.

Objects in the COPS Protocol are identified by their C-Num and C-Type values. IETF Consensus as identified in [IANA-CONSIDERATIONS] is required to introduce new values for these numbers and, therefore, new objects into the base COPS protocol. Additional Context Object R-Types, Reason-Codes, Report-Types, Decision Object Command-Codes/Flags, and Error-Codes MAY be defined for use with future Client-types, but such additions require IETF Consensus as defined in [IANA-CONSIDERATIONS]. Context Object M-Types, Reason Sub-Codes, and Error Sub-codes MAY be defined relative to a particular Client-type following the same IANA considerations as their respective Client-type.

7. References

[RSVP] Braden, R., Zhang, L., Berson, S., Herzog, S. and S. Jamin, "Resource ReSerVation Protocol (RSVP) Version 1 - Functional Specification", RFC 2205, September 1997.

[WRK] Yavatkar, R., Pendarakis, D. and R. Guerin, "A Framework for Policy-Based Admission Control", RFC 2753, January 2000.

[SRVLOC] Guttman, E., Perkins, C., Veizades, J. and M. Day, "Service Location Protocol, Version 2", RFC 2608, June 1999.

[INSCH] Shenker, S.
and J. Wroclawski, "General Characterization Parameters for Integrated Service Network Elements", RFC 2215, September 1997.

[IPSEC] Atkinson, R., "Security Architecture for the Internet Protocol", RFC 2401, August 1995.

[HMAC] Krawczyk, H., Bellare, M. and R. Canetti, "HMAC: Keyed-Hashing for Message Authentication", RFC 2104, February 1997.

[MD5] Rivest, R., "The MD5 Message-Digest Algorithm", RFC 1321, April 1992.

[RSVPPR] Braden, R. and L. Zhang, "Resource ReSerVation Protocol (RSVP) - Version 1 Message Processing Rules", RFC 2209, September 1997.

[TLS] Dierks, T. and C. Allen, "The TLS Protocol Version 1.0", RFC 2246, January 1999.

[IANA]- notes/iana/assignments/port-numbers

[IANA-CONSIDERATIONS] Alvestrand, H. and T. Narten, "Guidelines for Writing an IANA Considerations Section in RFCs", BCP 26, RFC 2434, October 1998.

8. Author Information and Acknowledgments

Special thanks to Andrew Smith and Timothy O'Malley, our WG Chairs, Raj Yavatkar, Russell Fenger, Fred Baker, Laura Cunningham, Roch Guerin, Ping Pan, and Dimitrios Pendarakis.

Shannon Laboratory
180 Park Avenue
P.O. Box 971
Florham Park, NJ 07932-0971
EMail: rajan@research.att.com
http://www.faqs.org/rfcs/rfc2748.html
In this example we are going to create a function which will count the number of occurrences of each character and return it as a list of tuples in order of appearance. For example:

ordered_count("abracadabra") == [('a', 5), ('b', 2), ('r', 2), ('c', 1), ('d', 1)]

The above is a 7 kyu question on CodeWars, and it is the only question I could solve today after two failed attempts at others. I am supposed to start the Blender project today, but because I wanted to write a post for you people I have spent nearly an hour and a half working on those three Python questions on CodeWars. I hope you will appreciate the effort and share this post to help this website grow.

def ordered_count(chars):
    already = []
    result = []
    for ch in chars:
        if ch not in already:
            result.append((ch, chars.count(ch)))
            already.append(ch)
    return result

The solution above is short and solid; I hope you like it.
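A shorter variant (my own addition, not from the original post): on Python 3.7+, collections.Counter remembers the order keys were first seen, so first-appearance order falls out for free:

```python
from collections import Counter

def ordered_count(text):
    # Counter preserves first-seen key order on Python 3.7+, so items()
    # already yields characters in order of first appearance.
    return list(Counter(text).items())
```
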
https://www.cebuscripts.com/2019/04/15/codingdirectional-count-the-number-of-occurrences-of-each-character-and-return-it-as-a-list-of-tuples-in-order-of-appearance/
I'm creating a TV schedule for a hotel and need to be able to fill the schedule with particular programs that take up various amounts of time. Each slot is 30 mins, so a film with a 1 hr 30 min run time would take up 3 slots, but I'm not sure how to go about doing this. I've created a boolean array that holds the slots; if a show takes up 3 slots it will set the values of the boolean array to true, and then the user would no longer be able to use those slots. I'm a bit confused as to how to go about this though.

import java.util.*;

public class Schedule {

    private int maxduration = 18 * 2;
    private ArrayList<Programme> schedule;
    private boolean[] times;

    public Schedule() {
        schedule = new ArrayList<Programme>();
        times = new boolean[maxduration];
    }

    public boolean[] getTimes() {
        return times;
    }

    public void setTimes(boolean[] times) {
        this.times = times;
    }

    public void addProgramme(int startTime, Programme p) {
        schedule.add(p);
    }

    public void printSchedule() {
        System.out.println(schedule);
    }

    public void removeProgramme(String title) {
        Programme prog = null;
        for (Programme p : schedule) {
            if (p.getTitle().equals(title)) {
                prog = p;
                break;
            }
        }
        schedule.remove(prog);
    }
}
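One way to approach the question is to convert a runtime into a slot count, then refuse a booking if any required slot is already taken. A language-agnostic sketch (written in Python here rather than Java; the names are mine, not from the thread):

```python
def slots_needed(minutes, slot_len=30):
    # A 90-minute film needs 3 slots; round up for runtimes
    # that are not a multiple of the slot length.
    return -(-minutes // slot_len)

def book(times, start_slot, minutes):
    """Mark consecutive slots as taken; refuse if any slot is booked."""
    n = slots_needed(minutes)
    wanted = range(start_slot, start_slot + n)
    if start_slot < 0 or start_slot + n > len(times):
        return False          # programme would run past the schedule
    if any(times[i] for i in wanted):
        return False          # at least one slot is already taken
    for i in wanted:
        times[i] = True
    return True
```

The same check-then-mark loop translates directly into the `times` boolean array in the Java `Schedule` class above.
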
https://www.daniweb.com/programming/software-development/threads/415078/coursework-help
C Development Under DragonFly BSD - Volume 7: Glossary and Tables for all Volumes

Glossary

Symbol

- #define - A keyword that is used to define a macro, variable, etc. for the C preprocessor.
- #include - A keyword that is used to inform the C preprocessor to open another file and process that.
- #if - A keyword that is used to inform the C preprocessor to only include the encapsulated code in the program if a given criteria is met.
- #else - A keyword that is used in conjunction with #if that informs the C preprocessor to include the encapsulated code in the program if the criteria is not matched by the corresponding #if statement.
- #elif - A keyword that is used in conjunction with #if that informs the C preprocessor to include the encapsulated code in the program if the criteria is not matched by the corresponding #if statement and it matches the supplied logical statement.
- #endif - A keyword that is used to inform the C preprocessor to end the last #if statement

0 - 9

A

B

C

- Comment - C comments are statements not interpreted by the compiler and are used by the programmer to leave helpful notes on what is happening in the code. Comments are contained within the /* and */ symbols.

D

E

F

G

H

I

- int - A C keyword used to express a non-fractional number that is commonly called an integer.

J

K

- Keyword - C keywords are words recognized by the compiler to denote certain operations. A full list of standard keywords includes:

auto break case char const continue default do double else enum extern float for goto if int long register return short signed sizeof static struct switch typedef union unsigned void volatile while

In addition, GCC allows the use of in-line assembler by using the asm keyword.

L

M

- Modifier - Standard C language modifiers include:

auto extern register static typedef volatile

N

O

- Operators - Operators are symbols that when used, perform an operation on one or more operands.
C uses the following operators:

,    Used to separate expressions         foo, bar
=    Assignment                           foo = bar
? :  Conditional                          foo ? bar : baz
||   Logical OR                           foo || bar
&&   Logical AND                          foo && bar
|    Bitwise OR                           foo | bar
^    Bitwise Exclusive-OR (XOR)           foo ^ bar
&    Bitwise AND                          foo & bar
==   Equality                             foo == bar
!=   Inequality                           foo != bar
<=   Less than or Equals                  foo <= bar
>=   Greater than or Equals               foo >= bar
<    Less than                            foo < bar
>    Greater than                         foo > bar
<<   Left shift                           foo << bar
>>   Right shift                          foo >> bar
+    Addition or no-op                    foo + bar or +foo
-    Subtraction or negation              foo - bar or -foo
*    Multiplication or dereference        foo * bar or *foo
/    Division                             foo / bar
%    Modulus                              foo % bar
~    Bitwise complement                   ~foo
!    Logical complement                   !foo
++   Pre- or post-increment               ++foo or foo++
--   Pre- or post-decrement               --foo or foo--
()   Type casting or precedence           (int) foo, (char) foo, etc. or (2 + 3) * 8, etc.
->   Structure dereference (via pointer)  foo->bar
.    Structure member reference           foo.bar
[]   Array reference                      foo[bar]
http://www.dragonflybsd.org/docs/developer/C_Development_Under_DragonFly_BSD-Volume_7_Glossary_and_Tables_for_all_Volumes/
After a brief comment thread on a pretty cool blog I decided to post a little bit more about generators. We were discussing the viability of using a generator for finding the sequence of numbers following the Collatz Conjecture for any number. For those unfamiliar with the Collatz Conjecture, it goes roughly as follows (per Wikipedia): take any positive integer n; if n is even, divide it by 2, and if n is odd, multiply it by 3 and add 1; repeat. The conjecture is that the sequence will eventually end in 1.

Or more aptly put by xkcd.com:

The original code in question was well written, but relied upon recursion to perform its function. I proposed that perhaps a generator would be a better way to test the conjecture (which, by the way, has neither been proved nor disproved). Here is the code I came up with for the generator:

def collatz(start):
    while start != 1:
        yield start
        if start % 2 == 0:
            start = start / 2
        else:
            start = start * 3 + 1
    yield 1

if __name__ == "__main__":
    for i in collatz(10):
        print i

Mike Moran / March 8, 2010

Just tried out your code, and it is pretty slick. I'm pretty sure I'm understanding generators much better now after reading through your post and the "Dive Into Python" pages on it. Definitely a good skill to learn! It would be interesting to see over which starting numbers one or the other of our codes grabs the sequence faster. Do you know of any way we could check that?

brennydoogles / March 8, 2010

That would be cool. Let me poke around and see what I can find. In order to have an easily measurable benchmark, we may have to find the longest known Collatz chain and try to emulate it. I will poke around for some benchmarking tools and a nice number for the test.

brennydoogles / March 10, 2010

Tests are being run now, I hope to post some results later tonight.

Mensanator / March 24, 2010

Longest known sequence? Sequences are unbounded. It's trivial to create a sequence in reverse that will run for an infinite (or more practically, arbitrary) length sequence.
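To answer the question raised in the comments about which version finds a sequence faster, one simple option (my sketch, not from the original thread) is the standard timeit module:

```python
import timeit

def collatz_len(n):
    # Number of terms in the Collatz sequence starting at n, inclusive of 1.
    count = 1
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        count += 1
    return count

# Time many runs for a given starting number, e.g.:
# timeit.timeit(lambda: collatz_len(27), number=10000)
```

Swapping in either the recursive or the generator version under the same lambda gives a fair head-to-head comparison for any starting number.
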
https://brennydoogles.wordpress.com/2010/03/08/collatz-conjecture-in-python/
Usually when working with the Node fs module you need to decide between using the synchronous methods or the asynchronous methods with callbacks.

The synchronous version could look like this:

import fs from 'fs'

const files = fs.readdirSync(path.join(process.cwd(), 'content'))

and the asynchronous version could look like this:

import fs from 'fs'

fs.readdir(path.join(process.cwd(), 'content'), (err, files) => { ... })

I've always seen developers recommend the asynchronous version for performance reasons (it's non-blocking), however many still reach for the synchronous version to avoid callback hell and because it's easier to reason about.

I recently learned that as of Node 12 a Promise version of the functions is available as part of the standard library. You could use this in combination with async/await for a version that is non-blocking and easy to reason about. The best aspects of both:

import { promises as fs } from 'fs'

const files = await fs.readdir(path.join(process.cwd(), 'content'))
https://jeffjewiss.com/bits/node-fs-async-await
Troubleshooting Guide For Some Basic Java Problems

Step One

The first and foremost requirement is the installation of the required (or most current) j2sdk from java.sun.com (the newer versions of the jdk are known as J2SE). If you have done that, then you will have a new folder created in the C:\ drive. Typically it will be something along the following format: C:\j2sdkxxxxx (I think jdk 5 is installed in the C:\Program Files\ folder?), where xxxxx will be the version number for the current j2sdk you installed. At this point on my machine it looks like this: C:\j2sdk1.4.1_05

Step Two

The next step is setting up the path and class paths for your system so that Windows knows what to do with commands like javac or java. Let's see what happens if we want to test our setup as it is right now. On your Windows machine, go to the bottom left corner and click on Start > Run. In the text field type in cmd. If you are one of those who are still on win98/95 or win me, then you will need to type in command instead. This will bring up a DOS window. In the DOS window, at the prompt, type in the following:

javac

At this point we are expecting the following or a message like this:

'javac' is not recognized as an internal or external command, operable program or batch file.

This is because the Windows machine does not yet know what to do with the "javac" command, which essentially is required to compile Java source code. Close the DOS window, because Windows will not apply new variable changes in this session; you will have to open up a new window once you are all set with the variable settings.

Step Three

Let's go and set the environment variable. On machines with win2k or newer systems, go to Start > Settings > Control Panel, then locate and click on the "System" icon. This will bring up the "System Properties" window. By default "General" is selected; locate and click on the "Advanced" tab. Locate and click on the button that has the following caption on it.
Environment Variables

This will bring up a window with 2 sections in it; the top one is for the user as whom you have logged in, and the lower section is for the system. We will work in the user variables section.

Variable name: JAVA_HOME
Variable value: ______________________

In the variable value field, type the folder name + path where on the C drive you have the j2sdk installed. On my machine this value looks like this: C:\j2sdk1.4.1_05

If you look in the window you will see the following record, which you have just added:

Variable        Value
JAVA_HOME       C:\j2sdk1.4.1_05

Of course there may be others too. We are half way through.. :-)

Step Four

Now let's set the path to the required java bin files to run several java commands. Look in the window for the user variables and try to locate a variable named Path. If you see one, then we need to edit it and add new values; if you don't see one, then we need to create one and then add values. First let's create one.

Variable name: PATH
Variable value: ______________________

In the variable value field, type the folder name + path where on the C drive you have the j2sdk installed. On my machine this value looks like this:

%PATH%;%JAVA_HOME%\bin;%JAVA_HOME%\lib;

If you look in the window, you will see the following record, which you have just added. You will see that JAVA_HOME is resolved to its actual path on the disk; again, on my machine it looks like below:

Variable        Value
PATH            C:\j2sdk1.4.1_05\bin;C:\j2sdk1.4.1_05\lib;

Click "OK" at the bottom of the window, and then click "OK" again.

Now let's repeat Step Two here again. This time around (provided everything was done exactly as I mentioned above), when you type in javac at the command prompt you should get the following message:
Compilation and Running of Java Code Open note pad, and copy and paste the following code in it There are couple of little precautions you will need to take.Make sure that file name is HelloWorld.java, class name must alwaysThere are couple of little precautions you will need to take.Make sure that file name is HelloWorld.java, class name must alwaysCode: public class HelloWorld{ public static void main(String[] args){ //declare and define a variable of type String String helloWorld = "Hello Java World! Here I come."; //Now print it out on screen System.out.println(helloWorld); } } match the file name.If everything is fine and dandy, nothing will happen and you will be brought back to the dos prompt. C:\development\java>javac HelloWorld.java C:\development\java> In this case now is the time to run the newly written java application.if you want to make sure,just look in the directory where you have this file,now there is another file with the same name but different extension. HelloWorld.class java src files always have *.java extension and compiled classes have *.class extension. Anyways, let run this app, type in the following at the dos prompt. C:\development\java>java HelloWorld Hello Java World! Here I come. C:\development\java> The java runtime engine will print the value of the variable then come to dos prompt for a new command. At this point typically a beginner may see the following error. C:\development\java>java HellowWorld Exception in thread "main" java.lang.NoClassDefFoundError: HellowWorld as you can see it makes total sense that its not finding the class name, because HelloWorld has a "w" in it,once the class name is corrected it will run. If you have class name that's different then the file name then you will get the following message. 
C:\development\java>javac HelloWorld.java
HelloWorld.java:1: class HellowWorld is public, should be declared in a file named HellowWorld.java
public class HellowWorld{
       ^
1 error

C:\development\java>

Obvious enough... I'll just go take a look at my class name and remove the "w" so that it's the same as the file name, HelloWorld.java...

I hope this helps; I'd probably add more stuff as it comes to mind...

Edit: Sep 06, 2005

MySQL-centric jdk settings

Code:
Following are the settings for MySQL to work on my Windows 2000 workstation. Make sure you have the same settings on your machine for MySQL to work.

JDK installed at the following address:
C:\j2sdk1.4.2

Environment variable for java home set as follows:
JAVA_HOME=C:\j2sdk1.4.2

Paths set to point to java home as follows:
Path=C:\j2sdk1.4.2\bin;C:\j2sdk1.4.2\lib;

The MySQL driver is placed at the following location:
C:\Program Files\Java\j2re1.4.2\lib\mysql-connector-java-3.1.0-alpha-bin.jar

I have attached a zip file that contains a DbTest.java class to test the connection.

Deploying a Servlet on Tomcat

The tutorial above is mostly the works for any servlet on any server; the only difference will be that the root folder names could be different in different server environments. Sometimes people face problems in their JSP pages where their user-created objects/classes throw a "cannot resolve symbol" error. This error is almost always caused by a problem in the setup: the classes are not in the right place for the JSP page to find them. The "Deploying a Servlet on Tomcat" tutorial will actually guide you through the process of setting up a correct context for any new web application, as well as where the required files should be.
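As a quick cross-platform way to confirm the setup worked, a small script can check whether a command is actually reachable through PATH. This is my own sketch, not part of the original guide:

```python
import shutil

def tool_on_path(name):
    # shutil.which performs the same PATH lookup the command prompt does
    # when you type a command like "javac".
    return shutil.which(name) is not None

# e.g. tool_on_path("javac") should be True once Step Four is complete.
```
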
http://www.webdeveloper.com/forum/printthread.php?t=44232&pp=15&page=1
The Java Database Control in BEA Weblogic

This is Chapter 6: The Database Control from the book BEA WebLogic Workshop Kick Start (ISBN: 0-672-32417-2), written by Joseph Weber and Mark Wutka, published by Sams Publishing.

Chapter 6: The Database Control

Creating a Database Control
Defining a Database Connection
Creating an SQL String
Including Variables
Getting a Result Set
Executing Multiple Statements
A Sample Application

Creating a Database Control

To add a database control to your application, select Service, Add Control, and Add Database Control. You will see the Add Database Control dialog box, as shown in Figure 6.1.

Figure 6.1 You can add a database control to handle database access.

As with other controls, you must give the control a variable name and then either specify an existing control, or create a new control. When you create a new control, you must also specify a data source. A data source is a factory for creating JDBC database connections, similar to the JDBC DriverManager class. One of the advantages of a data source is that you can locate it using JNDI, giving you a central place to keep your database URLs. Data sources were introduced as part of Java 2 Enterprise Edition and represent a cleaner way to access database connections.

A data source gets its connections from a JDBC connection pool. A connection pool can keep several database connections open at one time. Without a connection pool, you might encounter delays while the JDBC driver sets up a new connection each time you need one. Because the pool maintains a group of reusable connections, your program is more efficient when allocating a connection.

Defining a Database Connection

To use the database control, you must define a data source. To define a data source in WebLogic, you must first define a connection pool. The easiest way to define a connection pool is through the WebLogic server console. Figure 6.2 shows the page for creating a connection pool.
Notice that you must supply the JDBC connection URL and the JDBC driver class.

Figure 6.2 You can create connection pools using the WebLogic console.

After you create a connection pool, you can create a data source that uses the connection pool. The WebLogic console makes it easy to create data sources, as shown in Figure 6.3. Simply give the data source a name and enter the name of the connection pool that the data source should use to obtain database connections.

Figure 6.3 You can create data sources using the WebLogic console.

The WebLogic samples server includes a sample data source that you can use for test programs. The data source name is cgSampleDataSource. You can use this data source to create new database tables and then execute SQL statements to manipulate these tables.

Creating an SQL String

To execute database statements from a database control, you first create a method that accepts any parameters you want to pass to the database statement. For example, you might want to pass an order number to search for, or a customer's new address. Next, you use the @jws:sql JavaDoc tag to create the database statement. You can find more information on SQL, including a tutorial and links to the SQL specification, on the support page for this book at.

The only attribute in the @jws:sql tag is statement, which contains the database statement. For example:

/**
 * @jws:sql statement="select * from accounts"
 */

If a statement is long, you can split it across multiple lines. Instead of enclosing the statement in quotes, use the statement:: form of the attribute, using :: to end the statement, like this:

/**
 * @jws:sql statement::
 *   update personnel set
 *   title='Manager', department='Toys'
 *   where id='123456'
 * ::
 */

Selecting Values

The general format for the SQL SELECT statement is

SELECT fields FROM table

The fields can either be * to indicate all fields in the table, or a comma-separated list like first_name, last_name, city, state, zip.
One of the most common clauses in a SELECT statement is the WHERE clause, which narrows the selection. The WHERE clause contains a Boolean expression, which can contain comparison operators like =, <, and >. You can also combine multiple expressions with AND, OR, and NOT. To test whether a column has no value, use IS NULL. Here are some sample queries:

SELECT * FROM personnel WHERE department='123456'
SELECT * FROM personnel WHERE spouse IS NULL
SELECT * FROM orders WHERE status='SHIPPED' OR status='BACKORDERED'

You can add an ORDER BY clause to sort the results. The ORDER BY clause takes a list of fields to sort by. For example, to sort first by last name and then by first name, use the following query:

SELECT * FROM personnel ORDER BY last_name, first_name

By default, ORDER BY sorts items in ascending order. You can use the DESC keyword after a field to sort that field in descending order. You can use the ASC keyword to explicitly specify ascending order for a field, which for complex ORDER BY clauses might improve readability. To sort by last name, then first name, then by age in descending order, use the following query:

SELECT * FROM personnel ORDER BY last_name, first_name, age DESC

Updating Values

The UPDATE statement updates values in a table. The general format is

UPDATE table SET column=value, column=value, ... WHERE where-clause

For example:

UPDATE personnel SET department='Marketing', manager_id='987654' WHERE id='123456'

Inserting Values

The INSERT statement inserts values into a table. The general format is

INSERT INTO table (columns) VALUES (column-values)

For example:

INSERT INTO personnel (id, first_name, last_name) VALUES (1, 'Kaitlynn', 'Tippin')

Deleting Values

The DELETE statement deletes values from a table. The general format is

DELETE FROM table WHERE where-clause

For example:

DELETE FROM personnel WHERE id='321654'

Joining Tables

One of the most powerful aspects of SQL is the ability to coordinate data from multiple tables.
This technique is called joining. You join tables by comparing values from two different tables in a WHERE clause. When the tables have duplicate column names, you can prefix the field name with the table name, using the form table.field, for example: personnel.first_name.

Because your SELECT statement might get cluttered with long table names, you can create aliases for tables in the FROM clause. Simply list the alias after the table name. For example, if the FROM clause is FROM personnel p, you can refer to fields in the personnel table with p.field in the WHERE clause. You can also use the table.field form when you list the fields you want to select. The following SELECT statement locates all items ordered by customers in Atlanta by selecting customers in a particular city, orders whose customer ID matches the customer's ID, and the products whose order ID matches the order's ID:

SELECT p.* FROM customer c, products p, orders o
WHERE p.order_id = o.id AND o.customer_id = c.id AND c.city = 'ATLANTA'

Including Variables

The database control uses the same variable substitution mechanism that you use to map incoming XML values to method parameters and vice versa. That is, you can include an incoming method's parameter in a database statement by surrounding it with {}s. You don't need to include the quotes for string fields; WebLogic Workshop handles that for you. For example, suppose you create a method that updates personnel information, like this:

public void updatePersonnel(String id, String firstName, String lastName);

You can substitute the parameters into an update statement like this:

/**
 * @jws:sql statement::
 *   UPDATE personnel SET first_name={firstName}, last_name={lastName}
 *   WHERE id={id}
 * ::
 */
public void updatePersonnel(String id, String firstName, String lastName);

Getting a Result Set

Although the database control has an easy way to map incoming parameters into an SQL statement, mapping SQL return values into method return values is more difficult.
The problem is that a Java method can only return a single value: a primitive type or an object.

Returning a Variable

If a SELECT statement returns a single value, the database control can automatically return the result, like this:

/**
 * @jws:sql statement="SELECT last_name FROM personnel WHERE id={id}"
 */
public String getLastName(String id);

The INSERT, UPDATE, and DELETE statements each return an integer value indicating the number of rows that have been inserted, updated, or deleted. You can return this value from a database control method:

/**
 * @jws:sql statement="DELETE FROM personnel WHERE id={id}"
 */
int delete(String id);

In this case, the return value of the delete method is the number of rows actually deleted.

Returning a Row of Results

If a statement returns multiple values for a single column (as opposed to multiple columns), you can return an array of all the values:

/**
 * @jws:sql statement="SELECT last_name FROM personnel"
 */
String[] getAllLastNames();

You can limit the number of items returned in an array with the array-max-length attribute:

/**
 * @jws:sql statement="SELECT last_name FROM personnel"
 * array-max-length="25"
 */
String[] getFirst25LastNames();

By default, the database control limits the number of rows returned to 1,024. Use "all" with array-max-length to force the control to return all values no matter how many there are.

Returning a Class

Returning single values or an array of values from a single column isn't very useful in a large application. You usually need to retrieve many values. The database control can map column values to fields in a Java class, as long as the field names match the column names.
For example, to retrieve id, first_name, and last_name from the personnel table, you can use the following class:

public class Personnel
{
    public String id;
    public String first_name;
    public String last_name;

    public Personnel() { }
}

Now, to retrieve all the values from the personnel table, use the following declaration:

/**
 * @jws:sql statement="select * from personnel"
 */
public Personnel[] getAllPersonnel();

Returning a HashMap

If you don't want to write new classes for every variation of columns that you might retrieve, you can simply return a HashMap (a Java data structure that associates keys with values) or an array of HashMaps (if you want to return multiple database rows). The database control stores each column value in the HashMap using the column name as the key, for example:

/**
 * @jws:sql statement="select * from personnel"
 */
public HashMap[] getAllPersonnel();

To fetch the value of the first_name column for the first row returned by the getAllPersonnel method, you could use a statement like this:

HashMap[] personnel = getAllPersonnel();
String firstName = (String) personnel[0].get("first_name");

Returning a Multiple Row Result Set in a Container

For managing large data sets, you might want to use a Java iterator instead of returning an array of values. To use an iterator, you must use the iterator-element-type attribute to specify what kind of object the iterator should return. To iterate through the personnel table, use the following declaration:

/**
 * @jws:sql statement="select * from personnel"
 * iterator-element-type="Personnel"
 */
public java.util.Iterator getAllPersonnel();

The database control can also return an iterator that returns HashMaps. Simply specify java.util.HashMap as the iterator element type.
To access each of these iterators, you could do something like this:

Iterator iter = getAllPersonnel();
while (iter.hasNext())
{
    HashMap row = (HashMap) iter.next();
    String lastName = (String) row.get("last_name");
    // do something with lastName
}
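The row-to-object mappings described above are specific to WebLogic Workshop's database control, but the underlying idea is easy to see in any language. Below is an illustrative analog in Python using the standard sqlite3 module; the table and class names mirror the article's examples, and none of this is Workshop API:

```python
import sqlite3


class Personnel:
    """Plain holder whose attribute names match the column names."""

    def __init__(self, id, first_name, last_name):
        self.id = id
        self.first_name = first_name
        self.last_name = last_name


def get_all_personnel(conn):
    """Map rows to typed objects, like the Personnel[] return style."""
    cur = conn.execute("SELECT id, first_name, last_name FROM personnel")
    return [Personnel(*row) for row in cur.fetchall()]


def get_all_personnel_maps(conn):
    """Map rows to dicts keyed by column name, like the HashMap[] style."""
    cur = conn.execute("SELECT * FROM personnel")
    cols = [d[0] for d in cur.description]
    return [dict(zip(cols, row)) for row in cur]


def iter_personnel(conn):
    """Yield rows one at a time, like the java.util.Iterator style."""
    cur = conn.execute("SELECT id, first_name, last_name FROM personnel")
    for row in cur:
        yield Personnel(*row)
```

Each function corresponds to one of the return styles above: typed objects, HashMap-like dicts, and an iterator for large result sets.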
http://www.developer.com/java/data/article.php/1556041/The-Java-Database-Control-in-BEA-Weblogic.htm
import "github.com/anacrolix/torrent" Package torrent implements a torrent client. Goals include: * Configurable data storage, such as file, mmap, and piece-based. * Downloading on demand: torrent.Reader will request only the data required to satisfy Reads, which is ideal for streaming and torrentfs. BitTorrent features implemented include: * Protocol obfuscation * DHT * uTP * PEX * Magnet Code: c, _ := torrent.NewClient(nil) defer c.Close() t, _ := c.AddMagnet("magnet:?xt=urn:btih:ZOCMZQIPFFW7OLLMIC5HUB6BPCSDEOQU") <-t.GotInfo() t.DownloadAll() c.WaitAll() log.Print("ermahgerd, torrent downloaded") Code: var f torrent.File // Accesses the parts of the torrent pertaining to f. Data will be // downloaded as required, per the configuration of the torrent.Reader. r := f.NewReader() defer r.Close() bad_storage.go bep40.go callbacks.go client.go closewrapper.go config.go conn_stats.go deferrwl.go dht.go dialer.go doc.go file.go global.go handshake.go ipport.go listen.go misc.go multiless.go networks.go peer-impl.go peer_info.go peer_infos.go peerconn.go peerid.go pex.go pexconn.go piece.go piecestate.go portfwd.go prioritized_peers.go protocol.go ratelimitreader.go reader.go request_strategy.go request_strategy_defaults.go socket.go spec.go t.go testing.go torrent.go torrent_pending_pieces.go torrent_stats.go tracker_scraper.go utp.go utp_libutp.go webrtc.go webseed-peer.go worst_conns.go wstracker.go const ( PeerSourceTracker = "Tr" PeerSourceIncoming = "I" PeerSourceDhtGetPeers = "Hg" // Peers we found by searching a DHT. PeerSourceDhtAnnouncePeer = "Ha" // Peers that were announced to us by a DHT. PeerSourcePex = "X" ) const ( PiecePriorityNone piecePriority = iota // Not wanted. Must be the zero value. PiecePriorityNormal // Wanted. PiecePriorityHigh // Wanted a lot. PiecePriorityReadahead // May be required soon. // Succeeds a piece where a read occurred. Currently the same as Now, // apparently due to issues with caching. 
PiecePriorityNext PiecePriorityNow // A Reader is reading in this piece. Highest urgency. ) type Callbacks struct { CompletedHandshake func(_ *PeerConn, infoHash InfoHash) ReadMessage func(*PeerConn, *pp.Message) ReadExtendedHandshake func(*PeerConn, *pp.ExtendedHandshakeMessage) } These are called synchronously, and do not pass ownership. The Client and other locks may still be held. nil functions are not called. Clients contain zero or more Torrents. A Client manages a blocklist, the TCP/UDP protocol ports, and DHT as desired. func NewClient(cfg *ClientConfig) (cl *Client, err error) Adds a Dialer for outgoing connections. All Dialers are used when attempting to connect to a given address for any Torrent. Registers a Listener, and starts Accepting on it. You must Close Listeners provided this way yourself. Add or merge a torrent spec. Returns new if the torrent wasn't already in the client. See also Torrent.MergeSpec. Stops the client. All connections to peers are closed and all activity will come to a halt. ListenAddrs addresses currently being listened to. Returns the port number for the first listener that has one. No longer assumes that all port numbers are the same, due to support for custom listeners. Returns zero if no port number is found. Returns a handle to the given torrent, if it's present in the client. Returns handles to all the torrents loaded in the Client. Returns true when all torrents are completely downloaded and false if the client is stopped before that. Writes out a human-readable status of the client, such as for writing to an HTTP status page. // The IP addresses as our peers should see them. May differ from the // local interfaces due to NAT or other network configurations. PublicIp4 net.IP PublicIp6 net.IP DisableAcceptRateLimiting bool // Don't add connections that have the same peer ID as an existing // connection for a given Torrent.
DropDuplicatePeerIds bool ConnTracker *conntrack.Instance // OnQuery hook func DHTOnQuery func(query *krpc.Msg, source net.Addr) (propagate bool) DefaultRequestStrategy RequestStrategyMaker Extensions PeerExtensionBits DisableWebtorrent bool DisableWebseeds bool Callbacks Callbacks } Probably not safe to modify this after it's given to a Client. func NewDefaultClientConfig() *ClientConfig func TestingConfig() *ClientConfig func (cfg *ClientConfig) SetListenAddr(addr string) *ClientConfig type ConnStats struct { // Total bytes on the wire. Includes handshakes and encryption. BytesWritten Count BytesWrittenData Count BytesRead Count BytesReadData Count BytesReadUsefulData DhtServer interface { Stats() interface{} ID() [20]byte Addr() net.Addr AddNode(ni krpc.NodeInfo) error Ping(addr *net.UDPAddr) Announce(hash [20]byte, port int, impliedPort bool) (DhtAnnounce, error) WriteStatus(io.Writer) } type Dialer interface { // The network is implied by the instance. Dial(_ context.Context, addr string) (net.Conn, error) // This is required for registering with the connection tracker (router connection table // emulating rate-limiter) before dialing. TODO: What about connections that wouldn't infringe // on routers, like localhost or unix sockets. LocalAddr() net.Addr } Provides access to regions of torrent data that correspond to its files. Number of bytes of the entire file we have completed. This is the sum of completed pieces, and dirtied chunks of incomplete pieces. Deprecated: Use File.SetPriority. The relative file path for a multi-file torrent, and the torrent name for a single-file torrent. Requests that all pieces containing data in the file be downloaded. The FileInfo from the metainfo.Info to which this file corresponds. The file's length in bytes. Data for this file begins this many bytes into the Torrent. The file's path components joined by '/'. Returns the priority per File.SetPriority. Sets the minimum priority for pieces in the File. 
func (f *File) State() (ret []FilePieceState) Returns the state of pieces in this file. type FilePieceState struct { Bytes int64 // Bytes within the piece that are part of this File. PieceState } The download status of a piece that comprises part of a File. A file-like handle to some torrent data resource. type HeaderObfuscationPolicy struct { RequirePreferred bool // Whether the value of Preferred is a strict requirement. Preferred bool // Whether header obfuscation is preferred. } Maintains the state of a connection with a peer. Returns the pieces the peer has claimed to have. type PeerExtensionBits = pp.PeerExtensionBits Peer client ID. type PeerInfo struct { Id [20]byte Addr net.Addr Source PeerSource // Peer is known to support encryption. SupportsEncryption bool peer_protocol.PexPeerFlags // Whether we can ignore poor or bad behaviour from the peer. Trusted bool } Peer connection info, handed about publicly. func (me *PeerInfo) FromPex(na krpc.NodeAddr, fs peer_protocol.PexPeerFlags) Generate PeerInfo from peer exchange func (p *Piece) State() PieceState Forces the piece data to be rehashed. type PieceState struct { Priority piecePriority storage.Completion // The piece is being hashed, or is queued for hash. Checking bool // Some of the piece has been obtained. Partial bool } The current state of a piece. type PieceStateChange struct { Index int PieceState } type PieceStateRun struct { PieceState Length int // How many consecutive pieces have this state. } Represents a series of consecutive pieces with the same state. func (psr PieceStateRun) String() (ret string) Produces a small string representing a PieceStateRun. 
type PieceStateRuns []PieceStateRun func (me PieceStateRuns) String() string type Reader interface { io.Reader io.Seeker io.Closer missinggo.ReadContexter SetReadahead(int64) SetResponsive() } type RequestStrategyMaker func(callbacks requestStrategyCallbacks, clientLocker sync.Locker) requestStrategy func RequestStrategyDuplicateRequestTimeout(duplicateRequestTimeout time.Duration) RequestStrategyMaker func RequestStrategyFastest() RequestStrategyMaker func RequestStrategyFuzzing() RequestStrategyMaker Maintains state of torrent within a Client. Many methods should not be called before the info is available, see .Info and .GotInfo. Adds a trusted, pending peer for each of the given Client's addresses. Typically used in tests to quickly make one Client visible to the Torrent of another Client. Returns a channel that is closed when the Torrent is closed. Marks the entire torrent for download. Requires the info first, see GotInfo. Sets piece priorities for historical reasons. Raise the priorities of pieces in the range [begin, end) to at least Normal priority. Piece indexes are not the same as bytes. Requires that the info has been obtained, see Torrent.Info and Torrent.GotInfo. Drop the torrent from the client, and close it. It's always safe to do this. No data corruption can, or should occur to either the torrent's data, or connected peers. Returns handles to the files in the torrent. This requires that the Info is available first. Returns a channel that is closed when the info (.Info()) for the torrent has become available. Returns the metainfo info dictionary, or nil if it's not yet available. The Torrent's infohash. This is fixed and cannot change. It uniquely identifies a torrent. KnownSwarm returns the known subset of the peers in the Torrent's swarm, including active, pending, and half-open peers. The completed length of all the torrent data, in all its files. This is derived from the torrent info, when it is available.
Returns a run-time generated metainfo for the torrent that includes the info bytes and announce-list as currently known to the client. The current working name for the torrent. Either the name in the info dict, or a display name given such as by the dn value in a magnet link, or "". Returns a Reader bound to the torrent's data. All read calls block until the data requested is actually available. Note that you probably want to ensure the Torrent Info is available first. The number of pieces in the torrent. This requires that the info has been obtained first. Get missing bytes count for specific piece. func (t *Torrent) PieceState(piece pieceIndex) PieceState func (t *Torrent) PieceStateRuns() PieceStateRuns Returns the state of pieces of the torrent. They are grouped into runs of same state. The sum of the state run-lengths is the number of pieces in the torrent. Returns true if the torrent is currently being seeded. This occurs when the client is willing to upload without wanting anything in return. Clobbers the torrent display name. The display name is used as the torrent name if the metainfo is not available. func (t *Torrent) Stats() TorrentStats The returned TorrentStats may require alignment in memory. See. func (t *Torrent) SubscribePieceStateChanges() *pubsub.Subscription The subscription emits as (int) the index of pieces as their state changes. A state change is when the PieceState for a piece alters in value. Forces all the pieces to be re-hashed. See also Piece.VerifyData. This should not be called before the Info is available. type TorrentSpec struct { // The tiered tracker URIs. Trackers [][]string InfoHash metainfo.Hash InfoBytes []byte // The name to use if the Name field from the Info isn't available. DisplayName string Webseeds []string DhtNodes []string // The combination of the "xs" and "as" fields in magnet links, for now. Sources []string // The chunk size to use for outbound requests. Defaults to 16KiB if not set. 
ChunkSize int Storage storage.ClientImpl // Whether to allow data download or upload DisallowDataUpload bool DisallowDataDownload bool } Specifies a new torrent for adding to a client. There are helpers for magnet URIs and torrent metainfo files. func TorrentSpecFromMagnetURI(uri string) (spec *TorrentSpec, err error) func TorrentSpecFromMetaInfo(mi *metainfo.MetaInfo) *TorrentSpec Due to ConnStats, may require special alignment on some platforms. Package torrent imports 69 packages and is imported by 93 packages. Updated 2020-07-15.
https://godoc.org/github.com/anacrolix/torrent
Posted 17 Dec 2012

Yes, it is doable via this single line of code:

ActiveBrowser.Window.Move(new System.Drawing.Rectangle(0, 0, 800, 600), true);

Posted 18 Dec 2012

The most common reason for these errors is a namespace mismatch. Probably the namespace in GadgetBarOpenClose.tstest.cs and Login.tstest.cs is different from the one in the project settings, and this is causing the problem. Please open these files and rename the namespace so that it matches the project settings (see the attached screenshot). Note that to view the entire class in Test Studio you should convert one of the steps to code and click the "View Class" button. Please see the second attached screenshot. Let me know if this helps. Regards,
http://www.telerik.com/forums/feature-request-or-is-this-doable
Scheduler

Hello Everyone,

I'm new to Vaadin, so it would be nice if you could answer some of my questions. I would like to make a Scheduler that looks like this: you should be able to click on the field where you want to add a new activity. After clicking, a new window opens where you can enter the name of the activity and a description. It should also be connected to an SQL database, where the activities are stored. I would like to know the best way to realize this, or just get a few tips (like whether using a Table would be best). Thanks!

I am not a Vaadin expert but I have been fiddling with it for some time now. I would use a table for the main scheduler. For linking to the DB, you can use an SQLContainer to link the table directly to the DB. I haven't used it myself but it seems easy to set up. Personally I use an ORM framework, so all the DB access ends up in beans which are easy to handle, but the overhead might be too much for a simple scheduler. You will need to disable selection on the table (by default, a user's click selects the whole row) and add a click listener on the table to get the effect you want. Click handler demo (click on 'see the source' and look for "table.addListener(new ItemClickListener()" for some code). Then from the click handler you can open a modal subwindow with your second form. How to open a modal sub window. Don't forget to go through the book for more details on the tables, event handlers, sub windows and containers. And come back to the forum for any question.

Hey guys, I have a few questions regarding the SQLContainer. These are my current classes for the Scheduler.
The main Window:

public class SchedulerApplication extends Application {
    @Override
    public void init() {
        Window main = new Window("The Main Window");
        setMainWindow(main);
        main.addComponent(new SchedulerTable(main));
    }
}

The Scheduler with Listener:

public class SchedulerTable extends Table {
    private Window fenster;
    private GridLayout grid = new GridLayout(2, 3);
    private Window main;

    public SchedulerTable(Window main) {
        this.main = main;
        setSizeFull();
        addTableContent();
        addTableListener();
    }

    public void addTableContent() {
        addContainerProperty("Monday", String.class, null);
        addContainerProperty("Tuesday", String.class, null);
        addContainerProperty("Wednesday", String.class, null);
        addContainerProperty("Thursday", String.class, null);
        addContainerProperty("Friday", String.class, null);
        addContainerProperty("Saturday", String.class, null);
        addContainerProperty("Sunday", String.class, null);
        addItem(new Object[]{"7:00", "sleep", "sleep", "school", "sleep", "jogging", ".."},
                new Integer(1));
        setSelectable(false);
        setImmediate(true);
    }

    public void addTableListener() {
        addListener(new ItemClickListener() {
            public void itemClick(ItemClickEvent event) {
                if (event.getButton() == ItemClickEvent.BUTTON_LEFT) {
                    System.out.println("Hallo");
                    new EntryWindow(main);
                }
            }
        });
    }
}

And the Window, where the user can add a new Activity:

public class EntryWindow extends Window {
    private GridLayout grid = new GridLayout(2, 3);
    private Window main;

    public EntryWindow(Window main) {
        /* Create a new window. */
        this.main = main;
        new Window("Please enter Activity and Description", new GridLayout());
        this.center();
        this.addComponent(grid);
        grid.addComponent(new Label("Activity: "));
        grid.addComponent(new TextField());
        grid.addComponent(new Label("Description: "));
        grid.addComponent(new TextField());
        grid.addComponent(new Button("Ok"));
        grid.addComponent(new Button("Cancel"));
        this.setHeight("200px");
        this.setWidth("300px");
        this.setModal(true);
        /* Add the window inside the main window. */
        main.addWindow(this);
    }
}

And the DatabaseHelper.class from the SQLContainer example. First I would like to know how to create the tables; I don't get how they initialized the tables in the tutorial. I think two tables would be best:

1. date: day, time, activityname
2. activity: activityname, description

The second thing that I'm not sure about is how to get the content of a field displayed in my EntryWindow and then saved (if it has been changed). Is it enough to have hsqldb.jar running as a server? I appreciate any help; maybe someone has a download link to a similar project! Thanks.
https://vaadin.com/forum/thread/1032674/scheduler
A Visit to the RESTful SPA You can’t always get away with a single-page app; sometimes you need server-side rendering. When you need both, here’s one approach. At OddBird, the lion’s share of what we do is make webapps for people. We work a lot with Backbone and Marionette on the frontend, and Django on the backend. It’s your typical RESTful SPA. Even though this area is pretty well-explored, we’ve occasionally run into a couple of interesting challenges. Today, I’ll talk about how we’re approaching a site that needs to both be a dynamic SPA, and have pages that are rendered on the server. For this site, there’s a lot of content that we want to make accessible to search engines’ crawlers, but a lot of interactivity and behavior that we want to handle with our preferred SPA tools. Since we are not yet quite in the wonderful future where every search engine’s crawlers can handle SPAs where the markup is almost entirely generated by JavaScript, we have to do some server-side rendering. Doing this has the added benefit of making sure that the page will work for users who, for whatever reason, don’t have JavaScript enabled. This makes your site more accessible, whether someone’s constrained by ability, computing power, or security concerns. Progressive enhancement is useful for lots of people. There are a few different ways to approach this. One of the first options is to use the <noscript> tag to surround the server-side rendered content, and then let the JavaScript render and manage the page as-normal for users whose browsers support it. However, we’ve opted not to do this. There are two reasons. First, the crawler may actually run some JavaScript, just not well enough to correctly index the page (imagine a race between rendering the JavaScript templates and extracting the text from the page). Second, as long as we’re putting in the work to render content on the server, we may as well use that when a human user visits the page, right? Why waste the effort? 
There are some existing tools that make use of this approach, in fact, such as Ember fastboot. However, many of them presume that you are using JavaScript on the server. Since we aren’t using JavaScript on the server, and thus can’t just directly share logic, we have a bit more work to do. The first step is to use a templating language that we can easily render both with JavaScript and Python. This motivates us to use Backbone/Marionette, rather than something like Ember, Angular, or React on the front, because we can more easily use a templating language of our choice. Django, on the back, is not the most amenable to plugging in your own templating language, but it can be done. And so, where do JavaScript and Python templating languages overlap? Nunjucks/Jinja2. This allows us to have the same templates, shared between the front and back ends. Django can serve the templates as static content, and also render them. But we do have to render them slightly differently: the frontend has a skeleton page, and renders the templates into a particular element in the DOM. The backend needs to render a whole page every time. It turns out that you can put a Jinja2 extends directive in a conditional. So, our layout.njk template begins with a simple incantation: {% if server_rendered %} {% extends 'base.html.j2' %} {% endif %} The variable server_rendered is set to True on the backend, and left undefined (therefore falsy) on the frontend. What other potential duplication can we remove? How about API logic? There are two plausible approaches: - We could make distinct business logic, and then put an API wrapper around that, which can handle content-type negotiation, authentication, whatever else. - We can just fake out calls to the API on the server. The first clearly has the ring of morality to it, but the second is what we’ve opted to do, and I’ll explain why. The first approach turns out to have some problems. 
First, and most practically, we're making our API using Django REST Framework, which provides its own hooks for a lot of the logic that might otherwise live in a dedicated business logic layer. This makes the particular separation of concerns implied in the first approach difficult, though not impossible. Second, the Jinja2/Nunjucks templates will need the exact same JSON structures whether they're being rendered on the front end or on the back end, in order to be consistent. So getting in-memory Python objects doesn't actually give us something we want; we still have to pass them through our API's serializers. So the value of a distinct business logic layer diminishes. Finally, while it likely introduces some runtime inefficiencies, the second approach requires much less code, and provides a very clean and clear interface. If every line of code is a liability, this is a strong argument in favor of the "stupid hack" approach. The shim we use to make "requests" from inside a request is something like this:

from django.core.urlresolvers import resolve
from django.http import Http404


def server_side_api_adapter(path, request):
    """
    Access a particular API path from inside the server, and get the
    resulting data.

    This is not going to be the most runtime-efficient usually, but it
    allows powerful reduction in duplication of effort.

    Arguments
    ---------
    path : str
        The API path to request.
    request : Request
        The request object from the calling view, to give things like
        headers and request.user that the inner API view requires.

    Returns
    -------
    JSON object
        The decoded JSON that would be returned from that API endpoint.
    None
        If the path is not an API path, this returns None.
    """
    try:
        resolved = resolve(path)
        handler = resolved.func
        args = resolved.args
        kwargs = resolved.kwargs
        resp = handler(request, *args, **kwargs)
        if resp.status_code == 404:
            return None
        return getattr(resp, 'data', None)
    except Http404:
        return None

Now our terrible hacks can be yours!
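Stripped of Django specifics, the shim's idea is simply "resolve the path to its handler and call it in-process instead of over HTTP." Here is a minimal, framework-free sketch of that pattern; the route table and handler are hypothetical, not OddBird's code:

```python
# A tiny route table standing in for Django's URL resolver.
ROUTES = {
    "/api/widgets/": lambda request: {"status": 200, "data": [{"id": 1}]},
}


def internal_api(path, request):
    """Dispatch to the handler for `path` in-process; None if unroutable."""
    handler = ROUTES.get(path)
    if handler is None:
        return None  # analogous to catching Http404
    resp = handler(request)
    if resp["status"] == 404:
        return None
    return resp.get("data")
```

The server-side view can then call internal_api and feed the result straight into the shared templates, exactly as the frontend would feed a fetched JSON response.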
When we continue this series in the next month, we’ll talk about how to wire up Backbone and Marionette to take over from the server-rendered page.
https://www.oddbird.net/2016/12/16/server-side-rendering-spa/
pytest-ignore-flaky

ignore failures from flaky tests (pytest plugin)

A "flaky" test is a test that usually passes but sometimes fails. You should always avoid flaky tests, but that is not always possible. This plugin can be used to optionally ignore failures from flaky tests.

First "mark" your tests with the flaky marker:

import random

import pytest

@pytest.mark.flaky
def test_mf():
    assert 0 == random.randint(0, 1)

By default this mark is just ignored, unless the plugin is activated from the command line (or the py.test config file):

py.test --ignore-flaky

If a flaky test passes, it will be reported normally as a test success. If the test fails, instead of being reported as a failure it will be reported as an xfail.

Project Details

- Project code + issue tracker on github
- PyPI
- license: The MIT License, Copyright (c) 2015 Eduardo Naufel Schettino (see LICENSE file)
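Conceptually, the plugin's reporting policy is: a failure in a @pytest.mark.flaky test becomes an xfail when --ignore-flaky is active, and a plain failure otherwise. Stripped of pytest's hook machinery, that policy can be sketched in plain Python; this is an illustration, not the plugin's actual implementation:

```python
def run_with_flaky_policy(test_fn, flaky=False, ignore_flaky=False):
    """Return 'passed', 'failed', or 'xfailed' for one test run.

    flaky:        whether the test carries the @pytest.mark.flaky marker
    ignore_flaky: whether the --ignore-flaky option was given
    """
    try:
        test_fn()
    except AssertionError:
        # A marked test is only downgraded to xfail when the option is on.
        if flaky and ignore_flaky:
            return "xfailed"
        return "failed"
    return "passed"
```

Passing tests are reported normally in every case; only the failure path is affected by the marker and the option, matching the behavior described above.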
https://pypi.org/project/pytest-ignore-flaky/
Zidanca Sprint Report: JS/LESS Integration

The back history

Mockup is a great bunch of widgets/UI elements written in LESS/JS, using the patterns-library idea of triggering and configuring the widgets with data attributes on the HTML elements. These elements were compiled into single files using RequireJS/Grunt/LESS, so you could get a bundle version that can be shipped with Plone. The problem with this solution is that it lives outside Plone: you get the bundle and you can't modify, split, or debug it. There was a solution to bring it closer to Plone, but you needed a complete setup of the mockup repo, and every single JS file that you wanted to integrate with it needed to be stored in the mockup repository. Some months ago I started a project that needs tons of JS/CSS, and we were going to use mockup widgets on it, so I started to feel a lot of pain when I saw that in order to get it working I needed to fork the mockup repo and adapt it to use a complex Grunt configuration, and I ended up having huge problems integrating mockup with other add-ons, JS projects, and so on. So I thought that if mockup is needed for Plone 5, and it needs to be friendly to integrators, we need to find a better solution. On the other hand, we needed to find a way to allow people to extend mockup, create their own patterns in their packages, and finally make it easier for people to write their code for Plone.

PLIP

At that moment I wrote the PLIP with an initial implementation. Its main idea is:

- As we are using RequireJS on mockup for getting the resources, and LESS for CSS
- As we are using LESS on the Barceloneta theme to create a bunch of files that define all the parts of the theme
- As Plone uses plone.app.registry to store configuration
- Create a plone.app.registry interface to store the bundles/resources. Bundles are a group of resources (or a single one) that gets its dependencies. Resources are a group of JS/CSS files that define a resource (e.g. the modal pattern: 1 LESS file, 1 JS file).
All these resources have definitions of their requirements (with RequireJS or LESS imports). Bundles have a condition expression, as we will have one CSS/JS element for each bundle.

- Create config.js and mixins.less browser views that define the names and URLs of all the possible dependencies
- Define the "official" group of external JS/CSS elements that is deployed with Plone, and their versions, by creating a bower.json in the static folder in Plone and adding the bower components folder to CMFPlone. So if you want to know which jQuery version Plone 5 uses, there is a place: bower.json (you can always update it at your own risk by overwriting the jquery resource)
- Define a development and production mode for the Plone frontend. Each bundle will be minified/compiled into a JS/CSS file when you change the status from development to production
- Create a nice UI!! (Nathan rocks so much!) Using the patterns library you can see the bundles and resources, modify them, overwrite them TTW, define LESS variables, compile them...
- Move TinyMCE to an official version without patches that is already translated and works in different languages. One big task is to move the TinyMCE control panel to Plone 5; Rob did a really great job on it!
- Define resource configuration; each resource can have a default configuration in JSON so it can be retrieved
- Create a legacy importer; we already adapted jsregistry.xml and cssregistry.xml to import their elements into the new registry
- Be friendly with legacy code. In order to support legacy JS/CSS code that is not RequireJS-compatible or LESS, there is a plone-legacy bundle that gathers all the legacy resources and minifies/compresses them for production

Actual Status

So we could accomplish all our goals; the implementation is there. It still needs the resource registry tests adapted to the new registry, upgrade steps, documentation, tests on the UI, and some edge use cases to be tuned (when should we compile legacy code: when you install a package? when you modify the registry? ...).

As the Zidanca sprint was also a Plone 5 theme sprint, Albert and Victor were working hard integrating all the GSOC work with Plone 5. We needed the toolbar working, we needed the LESS compilation, and all this code is outside of mockup, so we merged our work; the actual 13787-maintemplate-remove (I know the name is really bad) PLIP configuration is the merge of the latest Plone 5 Barceloneta theme and this JS/LESS integration. There is a JK job for this plip.cfg:

What needs to be done??

- What we have in the global JS namespace in Plone development mode: jQuery and all the legacy code. In development mode RequireJS loads the resources when they are needed, so we will need to change the places where we have inline JS / manual script src elements to use require.
- Feedback, feedback and more feedback. While we were at the Zidanca sprint we were thinking through all the possible use cases (and rewriting the code every time), trying to be as BBB as possible and as friendly as possible to the JS/CSS stack, but I'm sure we may have missed some problems. So feedback is really important.
- Documentation, examples, tests!!!
- Plone 4 compatibility. Asko already said that he is willing to work on that :) so it would be great to have it on Plone 4.

Also check out Plone 5 Resource Registries, by Nathan Van Gheem.
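As a rough illustration of the bundle/resource model the report describes, here is what such a registry might look like as plain data, with dependency-aware file collection. All names and fields here are hypothetical, not the actual plone.app.registry schema:

```python
# Resources: named groups of JS/LESS files with RequireJS-style deps.
RESOURCES = {
    "modal": {"js": ["modal.js"], "css": ["modal.less"], "deps": ["jquery"]},
    "jquery": {"js": ["jquery.js"], "css": [], "deps": []},
}

# Bundles: named groups of resources, each compiled to one JS/CSS pair.
BUNDLES = {"plone": {"resources": ["modal"]}}


def bundle_files(bundle, kind):
    """Collect files of one kind ('js' or 'css') for a bundle, deps first."""
    seen, out = set(), []

    def visit(name):
        if name in seen:
            return
        seen.add(name)
        res = RESOURCES[name]
        for dep in res["deps"]:
            visit(dep)  # dependencies come before the resource itself
        out.extend(res[kind])

    for name in BUNDLES[bundle]["resources"]:
        visit(name)
    return out
```

A compile step in "production" mode would then concatenate and minify each bundle's file list into the single JS/CSS element the report mentions.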
https://plone.org/news/2014/zidanca-sprint-report-js-less-integration
CC-MAIN-2022-40
refinedweb
939
57.61
Agenda
See also: IRC log
Next meeting: 21 Jan. Regrets for 21 Jan: Henry, Mohamed. Regrets for 28 Jan: Voytech
The charter runs out at the end of this month
RESOLUTION: Request a charter extension, recharter later
How long an extension?
RESOLUTION: Request an 8 month extension, to allow for the summer
Consensus by email that the content model of multipart is just body+
And that the prose about headers inside multipart has to go as well
Then we have Voytech's message
VT: The default for content-type on multipart might conflict -- should this be an error, or should we make it required?
AM: Or should we instead specify that the default is to inherit?
VT: content-type is required on c:body
AM: Doing the inheritance will not make it easier to understand ... we should just make it required, and assume that people will mostly not use c:header Content-Type with c:multipart ... So, two alternatives: leave as is but make clear an inconsistency is an error; or make it required
VT: Any problem with making it required? ... If you are building it dynamically, you might have to find it in a header...
AM: Same even w/o multipart ... Given the long list of possible types, it's hard to be sure what the right default is, so making it required avoids having to pick
RESOLUTION: Make the content-type attribute required on c:multipart
TV: What happens if you specify wrapper-prefix or wrapper-namespace but no wrapper, on p:data?
HT: Because the implicit default is c:data, that's an error (the same error as an explicit wrapper with a ":" in it)
RESOLUTION: Make clear that if you specify wrapper-prefix or wrapper-namespace but no wrapper, on p:data, that's an error (the same error as an explicit wrapper with a ":" in it)
TV: 50 new tests, prefix, namespace, http-request
HT: WooHoo -- gold star
PG: What's next on this?
HST: We have comments from the TAG, which I need to summarise and present to this group ... Then we decide what change, if any, to make, and whether to publish a First PWD.
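The multipart resolution above can be illustrated with a sketch of a request whose multipart body carries an explicit content-type. This is illustrative only: the element and attribute names follow the XProc step vocabulary, while the href, boundary, and body contents are made up.

```xml
<!-- A c:request with a multipart body; after this resolution,
     content-type on c:multipart is required rather than defaulted. -->
<c:request xmlns:c="http://www.w3.org/ns/xproc-step"
           method="post" href="http://example.com/upload">
  <c:multipart content-type="multipart/mixed" boundary="frontier">
    <c:body content-type="text/plain">first part</c:body>
    <c:body content-type="text/plain">second part</c:body>
  </c:multipart>
</c:request>
```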
http://www.w3.org/XML/XProc/2010/01/14-minutes.html
CC-MAIN-2016-40
refinedweb
356
53.85
Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Affects Version/s: 1.5.3
- Fix Version/s: 1.5.4, 6.0.0-beta1
- Environment: GlassFish v2.1.1 and GlassFish v.3.1 java version "1.6.0_29" Java(TM) SE Runtime Environment (build 1.6.0_29-b11-402-11M3527) Java HotSpot(TM) 64-Bit Server VM (build 20.4-b02-402, mixed mode) Mac OS X 10.7.2

Description

The UrlResourceStream, used by PackageResources, uses URLConnection#getInputStream() to get file contents. This method is called in UrlResourceStream#getInputStream(), but also when closing the resource in UrlResourceStream#close(). At least on GlassFish v2 and v3, the second call to URLConnection#getInputStream() returns a new stream, so the one created to retrieve the file contents is never closed properly. This results in a warning from the container when the classes are garbage-collected, for example on undeploy. The problem is not triggered in all situations. It can be reproduced by using Wicket in a multi-module project consisting of an EAR, WAR and JAR. The JAR must contain a resource (CSS, image, ...) and a Behavior. Inside the Behavior, a static ResourceReference must be created for the resource file. When using this Behavior from inside the WAR project by loading a page, the resource is loaded properly. On undeploy however, the described problem will show up. The problem does not exist in Wicket 1.4.x, because a reference to the InputStream is stored. A quickstart and patch for 1.5 are available, will try to attach them

Issue Links
- is duplicated by WICKET-3957 PackageResource file locking problem - Resolved

Activity

We have a test for this situation but it doesn't open a new input stream for each call: org.apache.wicket.util.resource.UrlResourceStreamTest.loadJustOnce() I just committed an improvement that should do what you need. Try it and give us feedback. Also try to break/improve the test if you can. Thanks! Thanks for your reply, Martin.
Your commit fixes the problem, but has one disadvantage: the call to Connections.close(data.connection) still opens a new InputStream and immediately closes it. The problem is thereby even broader: every call to Connections.close(URLConnection) has this problem. Please refer to these implementations of URLConnection (in OpenJDK and GlassFish) to see a new stream created for every call: In my opinion all usages of close(URLConnection) with connections other than HttpURLConnection are faulty and should be checked/changed. I'll attach a new patch (wicket-core-4293-2.patch) fixing the problem based on your new commit and extending the test you mentioned. The real problem is Connections#close() that calls getInputStream(). I don't see a reason to do that. Exactly, but the streams that are used must of course be closed afterwards, which is what my patch accomplishes. Your test improvement is not correct. Or at least it doesn't follow the rule that calling getInputStream() should increase the streamCounter. I'll commit the improved version soon. Thanks, I'm curiously waiting for your change. I noted another potential problem in my patch; the last line of internalGetInputStream should read: return (data != null) ? data.inputStream : null; Hi Pepijn, Can you please take a look at r1215075 ? Thanks! Hi Martin, that looks great. I see you have a different opinion on whether getInputStream should return a single stream instance or always a new one. One remaining thing: you're still using Connections.closeQuietly(data.connection), which opens a new stream and immediately closes it. This is unnecessary and I'd suggest to fix that too by removing the call. Thanks! You didn't pay attention then: Connections#close() no longer does this. Thanks for your help! Ah sorry, didn't have the wicket-util project open. Shame on me. It works great now, thank you too. Quickstart to reproduce the bug. Must be run on GlassFish to be able to see the problem.
The patch on Wicket 1.5-SNAPSHOT fixes the problem.
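The safe pattern this issue converges on is: keep a reference to the exact stream you opened and close that one, rather than asking the connection for a stream again just to close it. The sketch below is illustrative, not Wicket's actual code; the class and method names are made up.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;

public class StreamClose {
    // Read a URL's contents, closing the same InputStream instance we opened.
    // Calling conn.getInputStream() a second time just to close the resource
    // may create (and leak) a brand-new stream on some URLConnection
    // implementations, which is exactly the bug described above.
    public static String readAll(URL url) throws IOException {
        URLConnection conn = url.openConnection();
        try (InputStream in = conn.getInputStream()) { // hold the reference
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            return out.toString("UTF-8");
        } // try-with-resources closes exactly the stream we read from
    }
}
```
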
https://issues.apache.org/jira/browse/WICKET-4293?focusedCommentId=13170163&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
CC-MAIN-2014-23
refinedweb
666
60.21
Today was all about learning Django views. Again, this is not a new topic for me, so I already understand that views are simply a means to display the data stored in your models. Example: If your model contains blog posts, then the view is how to display each individual blog post. Here’s the code that was integrated in the class example:

views.py:

#from django.shortcuts import render
from django.http import HttpResponse
from django.template.loader import get_template
from django.template.context import Context

# Create your views here.
def hello(request):
    name = "Jay"
    html = "<html><body>Hi %s. It worked.</body></html>" % name
    return HttpResponse(html)

def hello_template(request):
    name = "Jay"
    t = get_template('hello.html')
    html = t.render(Context({'name': name}))
    return HttpResponse(html)

hello.html:

<html>
<body>
Hi {{ name }}, this TEMPLATE seems to have worked.
</body>
</html>

urls.py:

from django.conf.urls import patterns, include, url
from django.contrib import admin

urlpatterns = patterns('',
    # Examples:
    # url(r'^$', 'django_test.views.home', name='home'),
    # url(r'^blog/', include('blog.urls')),
    url(r'^hello/', 'article.views.hello'),
    url(r'^hello_template/', 'article.views.hello_template'),
    url(r'^admin/', include(admin.site.urls)),
)

The magic is in the urls.py code. It’s the tie that binds the url to the models/view code. Not too exciting- it’s just next in the order of things…
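What get_template(...).render(...) does boils down to substituting context values into markup. Here is a framework-free sketch of the same idea using only Python's stdlib; string.Template is just for illustration and is not Django's template engine.

```python
from string import Template

# Stand-in for a template file like hello.html; $name marks the substitution
# point the same way {{ name }} does in a Django template.
HELLO_TEMPLATE = Template(
    "<html><body>Hi $name, this TEMPLATE seems to have worked.</body></html>"
)

def render_hello(name):
    """Substitute the context value into the template, like t.render(Context(...))."""
    return HELLO_TEMPLATE.substitute(name=name)

print(render_hello("Jay"))
```
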
https://jasondotstar.com/Day-3-Django-Views.html
CC-MAIN-2020-29
refinedweb
221
56.11
<-- Back to Andy Balaam Home

Troncode is a simple game that gives you a chance to play around with writing programs that play games. The game is the classic "light cycle" game inspired by the film Tron. Each player controls an ever-growing line and must trap her opponents, causing them to crash before she does. It looks like this:

The title screen looks like this:

git clone (Browse the code here.)

To run it (pygame is required), cd into the directory where it lives and run:

python troncode.py

Alternatively, on Windows, double-click on troncode.py. In the current version, the default setup is to watch every player fight every other one simultaneously (including a human player you can control with the arrow keys). To see this, just run:

python troncode.py

The other option is to play an automatic tournament where you can't see all the matches happen. Each player plays each other one 100 times, and there are also 100 x (number_of_players - 1) all-together "Melee" battles. The winner is chosen by the number of battles they won. To see this, run:

python troncode.py --tournament

You need to create a new file inside the players directory and name it SomethingPlayer.py (it must end in 'Player.py'). Inside that file you should create a class with exactly the same name as the file (without the .py). The file players/old/TurnRightPlayer.py in the source code is a simple example of what you need to do:

from troncode_values import *

class TurnRightPlayer( object ):
    def __init__( self ):
        pass

    def GetColour( self ):
        return ( 160, 160, 255 )

    def GetName():
        return "Turn Right Player"
    GetName = staticmethod( GetName )

    def GetDir( self, position, direction, gameboard ):
        """Turn right if there is a pixel directly in front."""
        if gameboard.GetRelativePixel( position, direction, 1, 0 ) > 0:
            return gameboard.TurnRight( direction )
        else:
            return direction

The key thing is to implement the GetDir() method, which is called every time step.
The return value of GetDir() must be one of the directions, stating which way you want to go in the next time step (one of DIR_UP, DIR_RIGHT, DIR_DOWN or DIR_LEFT). Your implementation of GetDir() can use the input you have: your position in the arena (which is 200x200 in size), your direction, which is one of DIR_UP, DIR_RIGHT, DIR_DOWN, DIR_LEFT, and the gameboard, which provides 1 method to find out which squares on the board are occupied, GetRelativePixel(), 1 method to find out where the other players are, GetPlayerPositions(), and 2 convenience methods, TurnRight() and TurnLeft(), which handle rotating directions for you. The interface for the gameboard object is defined in the file AbstractGameBoard.py. Apart from that, you can use any Python stuff you need (e.g. import math, random), and the only rules are that you can't modify the gameboard variable (it is passed by reference for efficiency) and you can't use an unreasonable amount of CPU. More rules will be added as people find clever ways of cheating.

If you like to do things test-driven, just create a function in your MyPlayer.py module like this:

def run_tests():
    # Do tests here
    print "All MyPlayer tests passed."

Then you can run them (with the correct imports etc. in place) like this:

python troncode.py --test MyPlayer

If you want to create a fake/mock gameboard, it may well be useful to inherit from BasicGameBoard, and just implement the missing methods to return what you want. An example of this may be found in SelfishGitPlayer. The latest winner is Selfish Git by Andy Balaam.
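Putting the pieces above together, here is a sketch of a slightly smarter player, along with the kind of fake gameboard the test-driven section suggests. The DIR_* values, the FakeBoard class, and the exact GetRelativePixel() offset semantics are assumptions for the sake of a self-contained example; check troncode_values.py and AbstractGameBoard.py for the real ones.

```python
# Stand-ins for the constants normally imported from troncode_values.
DIR_UP, DIR_RIGHT, DIR_DOWN, DIR_LEFT = range(4)

class WallDodgerPlayer(object):
    def GetColour(self):
        return (255, 160, 160)

    @staticmethod
    def GetName():
        return "Wall Dodger Player"

    def GetDir(self, position, direction, gameboard):
        """Go straight while the way ahead is clear; when blocked,
        take a right turn if its first square is free, else turn left."""
        if gameboard.GetRelativePixel(position, direction, 1, 0) == 0:
            return direction
        right = gameboard.TurnRight(direction)
        if gameboard.GetRelativePixel(position, right, 1, 0) == 0:
            return right
        return gameboard.TurnLeft(direction)

class FakeBoard(object):
    """Minimal mock gameboard for testing: 'blocked' is a set of
    directions whose next square is occupied."""
    def __init__(self, blocked):
        self.blocked = blocked

    def GetRelativePixel(self, position, direction, fwd, side):
        return 1 if direction in self.blocked else 0

    def TurnRight(self, d):
        return (d + 1) % 4

    def TurnLeft(self, d):
        return (d - 1) % 4
```
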
http://www.artificialworlds.net/wiki/Games/Troncode
CC-MAIN-2019-04
refinedweb
590
61.97
Type: Posts; User: 2kaud Yes. The processor used is determined by the scheduler when the thread is scheduled to run for its quantum. The issue of when a thread is scheduled and to which processor it is scheduled is quite a... within the confines of any applied affinity mask. See A processor can also be set as the preferred processor for... When posting code, please use code tags as you did in your post #1 so that the posted code is readable. What values can the elements of board have? Then check for these values in the loops and... It looks like you need a loop with a terminating condition of 4 bulls? You could use a do loop for this such as do { cout << "Enter 4 digits followed by a non-digit.\n"; ... Have a look at GetCurrentProcessorNumber(). It provides the processor number for the current thread. In this case because you are passing an invalid socket to send(). Also a better way to test would be SOCKET Socket = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP); if (Socket ==... You are not checking the return value of socket(), gethostbyname() and send() for errors. The return value of ALL functions should always be checked for error. Have you considered reversing the mapping, i.e. map<char, int> test; test['d'] = 100; test['e'] = 101; test['f'] = 102; test['*'] = 32; Have a look at // it = test.find('*'); //if I comment in just this one it won't work // it = test.find('f'); //if I comment in just this one it will work You are using find wrong. See... When you are using a fixed size array, it is good practice to define a const that contains the size of the array and then use this value in loops etc. const int arraysize = 6; ... int... pArr = new Array1D*[_row]; for(int i=0;i<_row;++i) pArr[i] = new Array1D[_col]; You haven't specified the type to use with Array1D, which takes a template argument of the type to use to... I don't know much about regex, but I tried this code #include <iostream> #include <regex> #include <string> ...
Yes - and then some 'bright' programmer comes along and changes the type of container from one that doesn't invalidate used iterators to one that does. Ah!!!!!! In c++11, erase returns an iterator to the element that follows the last element removed. This returned value is either a valid iterator or is .end() if the last element was removed. As erase can... If you are using c++11, I prefer for ( MyMapT::iterator iter=inMap.begin(); iter!=inMap.end(); /* nothing here */ ) { if (some_criteria) iter = inMap.erase(iter); else ... When IT equals names.end() IT is 1 beyond the end of the list and elements beyond the end of list can't be de-referenced. You need to check that IT is not equal to names.end before using IT.... Complex1 operator + (const Complex1 & lhs , const Complex1 & rhs) { Complex1 temp(rhs); return temp += rhs; } rhs + rhs ??? :confused: What functions have you declared/defined for the operator >> ? Note that there is already a STL complex class called complex. See As I mentioned in my post #2, please post your java code to the java forum. Also use code tags when posting code. Go advanced, select the code and click '#'. Yes, as attached. However as a simple programming exercise why not take the attached file in my post #3 and simply convert it? The processing is trivial. 32991 Decimal digits are based around numbers to the base 10. So if we take the number 4567 then we have 7 * 1 + 6 * 10 + 5 * 100 + 4 * 1000 If we now consider the powers of 10 we find 0 1 1 10 2... You need to use two backslashes in a string when one is required. Try file.open("C:\\dir\\to\\file", ios:out); See Well you should have started it earlier! Please see Nobody here is going to the... Different hashtags simultaneously updating the same data? As OReubens said in post #14, Twitter made their design in such a way to hard avoid the multiuser conflict issues entirely. IMO of more...
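The C++11 erase-while-iterating idiom quoted in the thread can be turned into a small self-contained example. The map contents and the "remove negative values" criterion are made up for illustration.

```cpp
#include <map>
#include <string>

// Erase entries matching a criterion while iterating, using the C++11
// guarantee that map::erase returns the iterator following the erased element.
void erase_negatives(std::map<std::string, int>& m) {
    for (auto iter = m.begin(); iter != m.end(); /* nothing here */) {
        if (iter->second < 0)
            iter = m.erase(iter);  // safe: erase hands back the next valid iterator
        else
            ++iter;
    }
}
```

Note that incrementing the iterator after calling erase on it would be undefined behavior; the idiom only works because the return value of erase is used.
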
http://forums.codeguru.com/search.php?s=22d6174b0c6c6e57097759db5261dfb2&searchid=5374825
CC-MAIN-2014-42
refinedweb
692
75.2
package org.jboss.ejb3.test.hbm;

import java.io.Serializable;

/**
 * comment
 *
 * @author <a HREF="mailto:bill@jboss.org">Bill Burke</a>
 */
public class HBM2 implements Serializable
{
   private long id;
   private String name;

   public long getId()
   {
      return id;
   }

   public void setId(long id)
   {
      this.id = id;
   }

   public String getName()
   {
      return name;
   }

   public void setName(String name)
   {
      this.name = name;
   }
}
http://kickjava.com/src/org/jboss/ejb3/test/hbm/HBM2.java.htm
CC-MAIN-2016-44
refinedweb
102
66.74
Tutorial Building Maps in Angular Using Leaflet, Part 1: Generating Maps.

Leaflet is an awesome JavaScript library for creating maps. It comes packed with nice features and is extremely mobile-friendly. Let’s see how we can integrate Leaflet into our Angular app.

Setup

Before we begin, let’s first create a project using Angular schematics:

$ ng new leaflet-example

For this tutorial I’ll be using SCSS as my stylesheet syntax, but you can choose your favorite flavor. Once the CLI has finished generating the project, open up your package.json file, add the following dependency, and run npm install:

"leaflet": "1.5.1"

(At the time of writing this, the latest version of leaflet is 1.5.1)

Let’s add a map component that will serve as our leaflet map container. Navigate to src/app and type:

$ ng generate component map

We’re going to be building a few services as well, so create a folder for that called _services in your app folder. Ignoring the generated files, our directory structure should now have at least:

leaflet-example
|_ node_modules/
|_ package.json
\_ src/
   \_ app/
      |_ app.module.ts
      |_ app.routing.ts
      |_ app.component.ts
      |_ app.component.html
      |_ app.component.scss
      |
      |_ map/
      |  |_ map.component.ts
      |  |_ map.component.html
      |  \_ map.component.scss
      |
      \_ _services/

Open up app.component.html, and replace everything inside it with our new component:

<app-map></app-map>

Generating the Map

Let’s first create a full-size map by constructing a simple skeleton:

<div class="map-container">
  <div class="map-frame">
    <div id="map"></div>
  </div>
</div>

.map-container {
  position: absolute;
  top: 0;
  left: 0;
  right: 0;
  bottom: 0;
  margin: 30px;
}

.map-frame {
  border: 2px solid black;
  height: 100%;
}

#map {
  height: 100%;
}

We first have our outermost div that will position the map in the DOM, and then the innermost div will be the target for Leaflet’s script injection to produce the map. The id that we give it will be passed as an argument when we construct our Leaflet map.
OK, the boring part’s done. Now we can start using Leaflet and construct our map. Open up map.component.ts and import the Leaflet package:

import * as L from 'leaflet';

We’ll also declare a variable for our map object (creatively called map), and assign it as a new Leaflet map object. Note that the map div needs to already exist in the DOM before we can reference it to create our map. So, we put this in the AfterViewInit lifecycle hook. Extend your component to implement AfterViewInit and add the ngAfterViewInit() function to your component. Our component should now look like this:

import { AfterViewInit, Component } from '@angular/core';
import * as L from 'leaflet';

@Component({
  selector: 'app-map',
  templateUrl: './map.component.html',
  styleUrls: ['./map.component.scss']
})
export class MapComponent implements AfterViewInit {
  private map;

  constructor() { }

  ngAfterViewInit(): void {
  }
}

Looking good so far! Let’s create a separate private function called initMap() to isolate all the map initialization. We can then call it from ngAfterViewInit. In this function, we need to create a new Leaflet map object, and the API allows us to define some options in it as well. Let’s start off simple and set the center of the map and starting zoom value. I want to center our map on the continental United States, and according to Wikipedia the center is located at 39.828175°N 98.579500°W. The decimal coordinate system Leaflet uses assumes that anything to the west of the prime meridian will be a negative number, so our actual center coordinates will be [ 39.8282, -98.5795 ]. If we use a default zoom level of 3, then we can create our map as:

private initMap(): void {
  this.map = L.map('map', {
    center: [ 39.8282, -98.5795 ],
    zoom: 3
  });
}

Note the value passed into the map function, 'map', is referring to the id of the div where our map will be injected. Run npm start and navigate to the app in your browser to see your shiny new map!

…Whoops, maybe not. Why?
Well, we created our map object but we didn’t populate it with anything. With Leaflet, we visualize data as Layers. The kind of data you think of when you picture a map is called “tiles”. In brief, we create a new tile layer and add it to our map. We first create a new tile layer, to which we must pass a tile server URL. There are many tile server providers out there, but I personally like using the OpenStreetMap tile server. As with creating the map object, we can pass in a parameters object. Let’s go with setting the max zoom to 18, the min zoom to 3, and the attribution for the tiles. We cap it off by adding the tile layer to the map:

const tiles = L.tileLayer('https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {
  maxZoom: 18,
  minZoom: 3,
  attribution: '© <a href="">OpenStreetMap</a>'
});

tiles.addTo(this.map);

Let’s now take a look at our browser and see how our map is coming along:

Well this is progress, at least. But why are the tiles so garbled? One thing we need to do when we include the Leaflet package in our project is to also include the Leaflet stylesheet in the build. There are several ways we can do this, but my personal favorite (if you’re using SCSS, that is) is to simply import it into your root styles.scss file.

/* You can add global styles to this file, and also import other style files */
@import "~leaflet/dist/leaflet.css";

If you’re currently running npm start you will need to stop the process and restart so it refreshes the base stylesheet. Take a final look at the browser:

Looking good! You now have a map that you can drag around and zoom. In the next tutorial, I will show you how to add data and render it on top of your map. Happy mapping!
https://www.digitalocean.com/community/tutorials/angular-angular-and-leaflet
CC-MAIN-2020-34
refinedweb
1,007
65.42
This patch changes the way we initialize and terminate the plugins in the system initializer. It uses a similar approach as LLVM's TARGETS_TO_BUILD with a def file that enumerates the configured plugins. To make this work I had to rename some of the plugins. I went for the solution with the least churn, but we should probably clean this up in the future as separate patches. Yes, it appears to matter... I'm still seeing test failures which I need to debug. I'm not sure yet if they're because of missing initializers (that don't have a corresponding CMake plugin) or because of the ordering. Overall, I like this, but I have a lot of comments: - I actually very much like the plugin namespace idea -- it makes it harder to use plugin code from non-plugin code, and allows plugins of the same category (e.g. ProcessLinux and ProcessNetBSD) to define similar concepts without fear of name clashes. I do see how that gets in the way of auto-generation though, so putting the initialization endpoints into a more predictable place seems like a good compromise. I wouldn't put the entire main class into the lldb_private namespace though, as that is also something that should not be accessed by generic code. Ideally, I'd just take the Initialize and Terminate (TBH, I don't think we really need the terminate function, but that may be more refactoring than you're ready for right now), and put it into a completely separate, predictably-named file (void lldb_private::InitializeProcessLinux() in Plugins/Process/Linux/Initialization.h ?). That way the "main" file could be free to include anything it wants, without fear of polluting the namespace of anything, and we could get rid of the ScriptInterpreterPythonImpl thingy. - it would be nice to auto-generate the #include directives too. Including everything and then not using it sort of works, but it does not feel right.
It should be fairly easy to generate an additional .def file with just the includes... - The way you disable the ScriptInterpreterPython/Lua plugins is pretty hacky. Maybe the .def file should offer a way to exclude entire plugin classes?

#ifdef LLDB_WANT_SCRIPT_INTERPRETERS
ScriptInterpreterPython::Initialize(); // or whatever
#endif

- some of the things you initialize are definitely not plugins (e.g. Plugins/Process/Utility, Plugins/Language/ClangCommon). I think that things which don't need to register their entry points anywhere should not need to have the Initialize/Terminate goo... Can we avoid that, perhaps by making the "plugin" special at the cmake level? Since this code is used only by other plugins, it makes sense to have it somewhere under Plugins/, but it is not really a plugin, it does not need to register anything, and we do not need to be able to explicitly disable it (as it will get enabled/disabled automatically when used by the other plugins). - Having this as one big patch is fine for now, but once we agree on the general idea, it should be split off into smaller patches, as some of the changes are not completely trivial (like the bunching of the macos platform plugins for instance). Order of plugins within a kind should not matter, but it's possible it does. Some of the entry points may need to be changed so that they are less grabby. I'm not sure if order of plugin kinds matters right now, but I don't think it would be wrong if it did (e.g. have symbol files depend on object files). However, it looks like this approach already supports that... Since the initialization function will have to have predictable names anyway, instead of generating the #include directives, we could just automatically forward declare the needed functions. So, the .def file would expand to something like:

extern lldb_initialize_objectfile_elf();
lldb_initialize_objectfile_elf();
extern lldb_initialize_objectfile_macho();
lldb_initialize_objectfile_macho();
...
That way we would have only one generated file, and we could ditch the Initialization.h file for every plugin (which would contain just the two forward declarations anyway). And this approach would be pretty close to what we'd need to do for dynamically loaded plugins (where instead of an extern declaration we'd have a dlsym lookup). This looks like a bad upload -- based on some earlier version of this patch and not master.
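The generated-forward-declaration idea sketched above is essentially the classic .def-file (X-macro) pattern. Below is a self-contained illustration; the plugin names, function names, and the macro spelling are invented for the example, and where the real patch would generate the list from the configured CMake plugins, it is inlined here as a macro so the sketch is one file.

```cpp
#include <string>
#include <vector>

// Records the order in which the dummy plugins get initialized.
std::vector<std::string> g_initialized;

// Stand-ins for per-plugin entry points that would normally live in
// each plugin's own library.
void InitializeObjectFileELF()   { g_initialized.push_back("ObjectFileELF"); }
void InitializeObjectFileMachO() { g_initialized.push_back("ObjectFileMachO"); }

// In the real patch this list would live in a generated Plugins.def file;
// it is inlined as a macro here so the example is self-contained.
#define LLDB_PLUGINS(X) \
    X(ObjectFileELF)    \
    X(ObjectFileMachO)

void InitializeAllPlugins() {
    // Expand the list once as calls; a second expansion of the same list
    // could emit the extern declarations instead.
#define LLDB_PLUGIN(name) Initialize##name();
    LLDB_PLUGINS(LLDB_PLUGIN)
#undef LLDB_PLUGIN
}
```

Because the plugin list exists in exactly one place, initialization calls, forward declarations, and (for dynamic loading) dlsym lookups can all be generated from it without drifting out of sync.
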
https://reviews.llvm.org/D73067
CC-MAIN-2020-10
refinedweb
722
59.43
- NAME - SYNOPSIS - DESCRIPTION - CONSTRUCTORS - COMMON METHODS - GENERIC METHODS - SERVER METHODS - SCHEME-SPECIFIC SUPPORT - CONFIGURATION VARIABLES - BUGS - PARSING URIs WITH REGEXP - SEE ALSO - AUTHORS / ACKNOWLEDGMENTS

NAME

URI - Uniform Resource Identifiers (absolute and relative)

DESCRIPTION

This module implements the URI class interface. A "URI-reference" is a URI that may have additional information attached in the form of a fragment identifier. An absolute URI reference consists of three parts: a scheme, a scheme-specific part and a fragment identifier. A subset of URI references share a common syntax for hierarchical namespaces. For these, the scheme-specific part is further broken down into authority, path and query. The URI class provides methods to get and set the individual components. The methods available for a specific URI object depend on the scheme.

CONSTRUCTORS

The following methods construct new URI objects:

- $uri = URI->new( $str )
- $uri = URI->new( $str, $scheme )

Constructs a new URI object. If no $scheme is specified for a relative URI $str, then $str is simply treated as a generic URI (no scheme-specific methods available). The set of characters available for building URI references is restricted (see URI::Escape). Characters outside this set are automatically escaped by the URI constructor.

- $uri = URI::file->new( $filename )
- $uri = URI::file->new( $filename, $os )

Constructs a new file URI from a file name. See URI::file.

- $uri = URI::file->new_abs( $filename )
- $uri = URI::file->new_abs( $filename, $os )

Constructs a new absolute file URI from a file name. See URI::file.

- $uri = URI::file->cwd

Returns the current working directory as a file URI. See URI::file.

- $uri->clone

Returns a copy of the $uri.

COMMON METHODS

Component values are passed either escaped (percent-encoded) or as an unescaped string. A component that can be further divided into sub-parts is usually passed escaped, as unescaping might change its semantics.
The common methods available for all URIs are:

- $uri->scheme
- $uri->scheme( $new_scheme )

Sets and returns the scheme part of the $uri. If the $uri is relative, then $uri->scheme returns undef.

- $uri->opaque
- $uri->opaque( $new_opaque )

Sets and returns the scheme-specific part of the $uri (everything between the scheme and the fragment) as an escaped string.

- $uri->path
- $uri->path( $new_path )

- $uri->fragment
- $uri->fragment( $new_frag )

Returns the fragment identifier of a URI reference as an escaped string.

- $uri->as_string

Converts the URI object to a plain ASCII string. URI objects are also converted to plain strings automatically by overloading. This means that $uri objects can be used as plain strings in most Perl constructs.

- $uri->as_iri

Returns a Unicode string representing the URI. Escaped UTF-8 sequences representing non-ASCII characters are turned into their corresponding Unicode code point.

- $uri->eq( $other_uri )
- URI::eq( $first_uri, $other_uri )

Tests whether the two URI references are equal. To test whether two URI object references denote the same object, use the '==' operator.

- $uri->rel( $base_uri )

Returns a relative URI reference if it is possible to make one that denotes the same resource relative to $base_uri. If not, then $uri is simply returned.

- $uri->secure

Returns a TRUE value if the URI is considered to point to a resource on a secure channel, such as an SSL or TLS encrypted one.

GENERIC METHODS

- $uri->path
- $uri->path( $new_path )

Sets and returns the escaped path component of the $uri (the part between the host name and the query or fragment). The path can never be undefined, but it can be the empty string.

- $uri->path_query
- $uri->path_query( $new_path_query )

Sets and returns the escaped path and query components as a single entity. The path and the query are separated by a "?" character, but the query can itself contain "?".

- $uri->path_segments
- $uri->path_segments( $segment, ... )

Note that absolute paths have the empty string as their first path_segment, i.e. the path /foo/bar has 3 path_segments; "", "foo" and "bar".
- $uri->query
- $uri->query( $new_query )

Sets and returns the escaped query component of the $uri.

- $uri->query_form
- $uri->query_form( $key1 => $val1, $key2 => $val2, ... )
- $uri->query_form( $key1 => $val1, $key2 => $val2, ..., $delim )
- $uri->query_form( \@key_value_pairs )
- $uri->query_form( \@key_value_pairs, $delim )
- $uri->query_form( \%hash )
- $uri->query_form( \%hash, $delim )

Sets and returns query components that use the application/x-www-form-urlencoded format.

- $uri->host
- $uri->host( $new_host )

Sets and returns the unescaped hostname. If the $new_host string ends with a colon and a number, then this number also sets the port. For IPv6 addresses the brackets around the raw address are removed in the return value from $uri->host. When setting the host attribute to an IPv6 address you can use a raw address or one enclosed in brackets. The address needs to be enclosed in brackets if you want to pass in a new port value as well.

- $uri->ihost

Returns the host in Unicode form. Any IDNA A-labels are turned into U-labels.

- $uri->port
- $uri->port( $new_port )

Sets and returns the port.

- $uri->host_port
- $uri->host_port( $new_host_port )

Sets and returns the host and port as a single unit. The returned value includes a port, even if it matches the default port. The host part and the port part are separated by a colon: ":". For IPv6 addresses the bracketing is preserved; thus URI->new("http://[::1]/")->host_port returns "[::1]:80". Contrast this with $uri->host which will remove the brackets.

- $uri->default_port

Returns the default port of the URI scheme to which $uri belongs. For http this is the number 80, for ftp this is the number 21, etc. The default port for a scheme can not be changed.

SCHEME-SPECIFIC SUPPORT

Scheme-specific support is provided for the following URI schemes. For URI objects that do not belong to one of these, you can only use the common and generic methods.

- file:

File URIs provide, in addition, two methods for mapping file URIs back to local file names; $uri->file and $uri->dir. See URI::file for details.

- ftp:

An old specification of the ftp URI scheme is found in RFC 1738.
A new RFC 2396 based specification is not available yet, but ftp URI references are in common use. URI objects belonging to the ftp scheme support the common, generic and server methods. In addition, they provide two methods for accessing the userinfo sub-components: $uri->user and $uri->password.

- https:

Its syntax is the same as http, but the default port is different.

- ldap:

The ldap URI scheme is specified in RFC 2255. LDAP is the Lightweight Directory Access Protocol. An ldap URI describes an LDAP search operation to perform to retrieve information from an LDAP directory. URI objects belonging to the ldap scheme support the common, generic and server methods as well as ldap-specific methods: $uri->dn, $uri->attributes, $uri->scope, $uri->filter, $uri->extensions. See URI::ldap for details.

- ldapi:

- mailto:

The mailto URI scheme is specified in RFC 2368. The scheme was originally used to designate the Internet mailing address of an individual or service. It has (in RFC 2368) been extended to allow setting of other mail header fields and the message body. URI objects belonging to the mailto scheme support the common methods and the generic query methods. In addition, they support the following mailto-specific methods: $uri->to, $uri->headers. Note that the "foo@example.com" part of a mailto is not the userinfo and host but instead the path. This allows a mailto URI to contain multiple comma separated email addresses.

- mms:

The mms URL specification can be found at.

- rlogin:

An old specification of the rlogin URI scheme is found in RFC 1738. URI objects belonging to the rlogin scheme support the common, generic and server methods.

- rtsp:

The rtsp URL specification can be found in section 3.2 of RFC 2326.

- rsync:

URI objects belonging to the rsync scheme support the common, generic and server methods. In addition, they provide methods to access the userinfo sub-components: $uri->user and $uri->password.

- sip:

The sip URI specification is described in sections 19.1 and 25 of RFC 3261.
URI objects belonging to the sip scheme support the common, generic, and server methods with the exception of path related sub-components. In addition, they provide two methods to get and set sip parameters: $uri->params_form and $uri->params.

- telnet: An old specification of the telnet URI scheme is found in RFC 1738. URI objects belonging to the telnet scheme support the common, generic and server methods.

- tn3270: These URIs are used like telnet URIs but for connections to IBM mainframes. URI objects belonging to the tn3270 scheme support the common, generic and server methods.

- ssh: Information about ssh is available at. URI objects belonging to the ssh scheme support the common, generic and server methods. In addition, they provide methods to access the userinfo sub-components: $uri->user and $uri->password.

- urn: The syntax of Uniform Resource Names is specified in RFC 2141. URI objects belonging to the urn scheme provide the common methods, and also the methods $uri->nid and $uri->nss, which return the Namespace Identifier and the Namespace-Specific String respectively.

- urn:isbn: The urn:isbn: namespace contains International Standard Book Numbers (ISBNs) and is described in RFC 3187. A URI object belonging to this namespace provides additional methods for accessing the ISBN value.

- urn:oid: The urn:oid: namespace contains Object Identifiers (OIDs) and is described in RFC 3061. An object identifier consists of sequences of digits separated by dots. A URI object belonging to this namespace has an additional method called $uri->oid that can be used to get/set the oid value. In a list context, oid numbers are returned as separate elements.

CONFIGURATION VARIABLES

The following configuration variables influence how the class and its methods behave:

- $URI::DEFAULT_QUERY_FORM_DELIMITER

This value can be set to ";" to have the query form key=value pairs delimited by ";" instead of "&", which is the default.

BUGS

There are some things that are not quite right:

Using regexp variables like $1 directly as arguments to the URI accessor methods does not work too well with current perl implementations. I would argue that this is actually a bug in perl. The workaround is to quote them. Example:

/(...)/ || die;
$u->query("$1");

The escaping (percent encoding) of chars in the 128 ..
255 range passed to the URI constructor or when setting URI parts using the accessor methods depends on the state of the internal UTF8 flag (see utf8::is_utf8) of the string passed. If the UTF8 flag is set, the UTF-8 encoded version of the character is percent encoded. If the UTF8 flag isn't set, the Latin-1 version (byte) of the character is percent encoded. This basically exposes the internal encoding of Perl strings.

PARSING URIs WITH REGEXP

As an alternative to this module, the following (official) regular expression can be used to decode a URI:

my($scheme, $authority, $path, $query, $fragment) =
    $uri =~ m|(?:([^:/?#]+):)?(?://([^/?#]*))?([^?#]*)(?:\?([^#]*))?(?:#(.*))?|;

The URI::Split module provides the function uri_split() as a readable alternative.

SEE ALSO

URI::file, URI::WithBase, URI::QueryParam, URI::Escape, URI::Split, URI::Heuristic

RFC 2396: "Uniform Resource Identifiers (URI): Generic Syntax", Berners-Lee, Fielding, Masinter, August 1998.

This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.

AUTHORS / ACKNOWLEDGMENTS

This module is based on the URI::URL module, which in turn was (distantly) based on the wwwurl.pl code in the libwww-perl for perl4 developed by Roy Fielding, as part of the Arcadia project at the University of California, Irvine, with contributions from Brooks Cutter. URI::URL was developed by Gisle Aas, Tim Bunce, Roy Fielding and Martijn Koster with input from other people on the libwww-perl mailing list. URI and related subclasses were developed by Gisle Aas.
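The same split can be reproduced outside Perl; here is a sketch in Python applying the pattern quoted above (the RFC 3986 Appendix B regular expression) with the standard re module:

```python
import re

# The "official" URI-splitting regular expression quoted above,
# translated to Python's regex syntax unchanged.
URI_RE = re.compile(
    r"^(?:([^:/?#]+):)?(?://([^/?#]*))?([^?#]*)(?:\?([^#]*))?(?:#(.*))?"
)

def split_uri(uri):
    """Return (scheme, authority, path, query, fragment); absent parts are None."""
    return URI_RE.match(uri).groups()
```

Calling `split_uri("http://www.ics.uci.edu/pub/ietf/uri/#Related")` yields `("http", "www.ics.uci.edu", "/pub/ietf/uri/", None, "Related")`, matching the component breakdown the regex is designed to produce.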
https://metacpan.org/pod/release/GAAS/URI-1.60/URI.pm
- Fund Type: Open-End Fund
- Objective: Precious Metal Sector
- Asset Class: Equity
- Geographic Focus: U.S.

DWS Gold & Precious Metals Fund (SCGDX:US): 7.70 USD, up 0.19 (2.41%), as of 19:59:59 ET on 04/15/2014.

Fund Profile & Information for SCGDX

DWS Gold & Precious Metals Fund is an open-end fund incorporated in the USA. The Fund seeks maximum return (principal change and income). The Fund invests at least 80% of net assets in common stocks and other equities of US and foreign companies engaged in activities related to gold, silver, platinum, diamonds or other precious metals and minerals.
http://www.bloomberg.com/quote/SCGDX:US
System Administration Commands - Part 1

wbemlogviewer - start WBEM Log Viewer

/usr/sadm/bin/wbemlogviewer

The wbemlogviewer utility starts the WBEM Log Viewer graphical user interface, which enables administrators to view and maintain log records created by WBEM clients and providers. The WBEM Log Viewer displays a Login dialog box. You must log in as root or a user with write access to the root\cimv2 namespace to view and maintain log files. Namespaces are described in wbemadmin(1M).

Log events can have three severity levels:
- Errors
- Warnings
- Informational

The WBEM log file is created in the /var/sadm/wbem/log directory, with the name wbem_log. The first time the log file is backed up, it is renamed wbem_log.1, and a new wbem_log file is created. Each succeeding time the wbem_log file is backed up, the file extension number of each backup log file is increased by 1, and the oldest backup log file is removed if the limit on the number of log files (specified in the log service settings) is exceeded. Older backup files have higher file extension numbers than more recent backup files.

The log file is renamed with a .1 file extension and saved when one of the following conditions occurs:
- The current file reaches the specified file size limit.
- A WBEM client application uses the clearLog() method in the Solaris_LogService class to clear the current log file.
- A user chooses Action->Back Up Now in the Log Viewer application.

Help is displayed in the left panel of each dialog box. Context help is not displayed in the main Log Viewer window. The WBEM Log Viewer is not a tool for a distributed environment; it is used for local administration.
The WBEM Log Viewer allows you to perform the following tasks:
- Click Action->Log File Settings to specify log file parameters and the log file directory.
- Click Action->Back Up Now to back up and close the current log file and start a new log file.
- Click Action->Open Log File to open a backed-up log file.
- To delete a log file, open the file and then click Action->Delete Log File. You can only delete backed-up log files.
- Double-click a log entry or click View->Log Entry Details to display the details of a log record.
- Click View->Sort By to sort displayed entries. You can also click any column heading to sort the list. By default, the log entries are displayed in reverse chronological order (new logs first).

The wbemlogviewer utility terminates with exit status 0.

FILES
/var/sadm/wbem/log/wbem_log - WBEM log file

See attributes(5) for descriptions of the following attributes:

SEE ALSO
wbemadmin(1M), init.wbem(1M), mofcomp(1M), attributes(5)
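The backup-numbering scheme described above (each rotation bumps every backup's extension number and drops the oldest once the configured limit is exceeded) can be sketched as follows. This is an illustrative helper, not part of Solaris; the function name and return shape are invented for this sketch:

```python
def backup_numbers_after_rotation(n_backups, limit):
    """Return the backup extension numbers after one rotation.

    Mirrors the scheme described above: every existing backup i becomes
    backup i+1, backups whose number would exceed `limit` are removed,
    and the live wbem_log file becomes backup 1.
    """
    shifted = [i + 1 for i in range(1, n_backups + 1)]
    kept = [n for n in shifted if n <= limit]
    return [1] + kept
```

For example, with three existing backups and a limit of three, the oldest (wbem_log.3) is dropped and the result is again backups 1 through 3.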
http://docs.oracle.com/cd/E23823_01/html/816-5166/wbemlogviewer-1m.html
I am new to API versioning, so my questions are:

1) Is this folder structure correct?

/app
/controllers
/Api
/v1
/UserController.php
/v2
/UserController.php

For routes:

Route::group(['prefix' => 'v1'], function () {
    Route::get('user', 'Api\v1\UserController@index');
    Route::get('user/{id}', 'Api\v1\UserController@show');
});

Route::group(['prefix' => 'v2'], function () {
    Route::get('user', 'Api\v2\UserController@index');
    Route::get('user/{id}', 'Api\v2\UserController@show');
});

2) What about the folder structure for models and events — should I make a model for every version?

Your approach is correct for API versioning. To avoid repeating the Api\vN\ prefix before every controller path, you could also do:

Route::group(['prefix' => 'api/v1', 'namespace' => 'Api\v1'], function () {
    Route::get('user', 'UserController@index');
    Route::get('user/{id}', 'UserController@show');
});

Route::group(['prefix' => 'api/v2', 'namespace' => 'Api\v2'], function () {
    Route::get('user', 'UserController@index');
    Route::get('user/{id}', 'UserController@show');
});

If you don't want to manage it by yourself, you could also use an API library that supports versioning. I successfully used Dingo many times, but there are probably more available.

I don't think you should version models. They should represent your current database structure and therefore be unique. If you need to make some changes, try to make them backwards-compatible with the API versions you are still maintaining. Same story for the events, unless they are strongly coupled to your API. In that case, I believe that the best folder structure should be equivalent to the controllers one:

/app
/Events
/Api
/v1
/ApiEvent.php
/v2
/ApiEvent.php
GenericEvent.php
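The core idea of the route groups above — a version prefix selecting a per-version handler set — can be sketched framework-free in Python (illustrative analogue only; the question itself is about Laravel, and names like `handlers` and `dispatch` are invented for this sketch):

```python
# Each API version keeps its own handler set, mirroring the
# per-version controller folders above.
handlers = {
    ("v1", "user"): lambda: "v1 user list",
    ("v2", "user"): lambda: "v2 user list",
}

def dispatch(path):
    """Route '/v1/user' to the v1 handler, '/v2/user' to the v2 handler."""
    version, _, route = path.strip("/").partition("/")
    try:
        return handlers[(version, route)]()
    except KeyError:
        raise LookupError("no handler for " + path)
```

The point is the same as Laravel's `prefix`/`namespace` options: the version lives in the URL, and dispatch resolves it to a version-specific implementation.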
https://exceptionshub.com/php-laravel-api-versioning-folders-structure.html
Well, you get a hopalong fractal. Let's plot this fractal using Pylab. The following function computes n points using the equations above:

from __future__ import division
from numpy import sqrt, power

def hopalong(x0, y0, n, a=-55, b=-1, c=-42):
    def update(x, y):
        x1 = y - x/abs(x)*sqrt(abs(b*x + c))
        y1 = a - x
        return x1, y1
    xx = []
    yy = []
    for _ in xrange(n):
        x0, y0 = update(x0, y0)
        xx.append(x0)
        yy.append(y0)
    return xx, yy

and this snippet computes 40000 points starting from (-1,10):

from pylab import scatter, show, cm, axis
from numpy import array, mean

x = -1
y = 10
n = 40000
xx, yy = hopalong(x, y, n)
cr = sqrt(power(array(xx)-mean(xx), 2) + power(array(yy)-mean(yy), 2))
scatter(xx, yy, marker='.', c=cr/max(cr), edgecolor='w', cmap=cm.Dark2, s=50)
axis('equal')
show()

Here we have one of the possible hopalong fractals. Varying the starting point and the values of a, b and c we have different fractals. Here are some of them:

Hello! Nice code, works fine! However, the first equation typeset is written in the wrong way. That is, it does not match the code, which is right.

hi, thank you! just fixed it.
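For readers on Python 3, here is a sketch of the same update rule with the sign term written out via math.copysign (note that the original x/abs(x) expression raises ZeroDivisionError if x ever lands exactly on 0, which copysign avoids):

```python
from math import copysign, sqrt

def hopalong_step(x, y, a=-55, b=-1, c=-42):
    # x1 = y - sign(x) * sqrt(|b*x + c|);  y1 = a - x
    sign_x = copysign(1.0, x)
    return y - sign_x * sqrt(abs(b * x + c)), a - x
```

Starting from the same point (-1, 10) used above, one step gives x1 = 10 + sqrt(41) and y1 = -54, matching the original update function.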
http://glowingpython.blogspot.it/2013/07/hopalong-fractals.html
Connect to API Gateway with IAM Auth Now that we have our basic create note form working, let’s connect it to our API. We’ll do the upload to S3 a little bit later. Our APIs are secured using AWS IAM and Cognito User Pool is our authentication provider. As we had done while testing our APIs, we need to follow these steps. - Authenticate against our User Pool and acquire a user token. - With the user token get temporary IAM credentials from our Identity Pool. - Use the IAM credentials to sign our API request with Signature Version 4. In our React app we do step 1 by calling the authUser method when the App component loads. So let’s do step 2 and use the userToken to generate temporary IAM credentials. Generate Temporary IAM Credentials Our authenticated users can get a set of temporary IAM credentials to access the AWS resources that we’ve previously specified. We can do this using the AWS JS SDK. Install it by running the following in your project root. $ npm install aws-sdk --save Let’s add a helper function in src/libs/awsLib.js. function getAwsCredentials(userToken) { const authenticator = `cognito-idp.${config.cognito .REGION}.amazonaws.com/${config.cognito.USER_POOL_ID}`; AWS.config.update({ region: config.cognito.REGION }); AWS.config.credentials = new AWS.CognitoIdentityCredentials({ IdentityPoolId: config.cognito.IDENTITY_POOL_ID, Logins: { [authenticator]: userToken } }); return AWS.config.credentials.getPromise(); } This method takes the userToken and uses our Cognito User Pool as the authenticator to request a set of temporary credentials. Also include the AWS SDK in our header. import AWS from "aws-sdk"; To get our AWS credentials we need to add the following to our src/config.js in the cognito block. Make sure to replace YOUR_IDENTITY_POOL_ID with your Identity pool ID from the Create a Cognito identity pool chapter and YOUR_COGNITO_REGION with the region your Cognito User Pool is in. 
REGION: "YOUR_COGNITO_REGION", IDENTITY_POOL_ID: "YOUR_IDENTITY_POOL_ID", Now let’s use the getAwsCredentials helper function. Replace the authUser in src/libs/awsLib.js with the following: export async function authUser() { if ( AWS.config.credentials && Date.now() < AWS.config.credentials.expireTime - 60000 ) { return true; } const currentUser = getCurrentUser(); if (currentUser === null) { return false; } const userToken = await getUserToken(currentUser); await getAwsCredentials(userToken); return true; } We are passing getAwsCredentials the userToken that Cognito gives us to generate the temporary credentials. These credentials are valid till the AWS.config.credentials.expireTime. So we simply check to ensure our credentials are still valid before requesting a new set. This also ensures that we don’t generate the userToken every time the authUser method is called. Next let’s sign our request using Signature Version 4. Sign API Gateway Requests with Signature Version 4 All secure AWS API requests need to be signed using Signature Version 4. We could use API Gateway to generate an SDK and use that to make our requests. But that can be a bit annoying to use during development since we would need to regenerate it every time we made a change to our API. So we re-worked the generated SDK to make a little helper function that can sign the requests for us. To create this signature we are going to need the Crypto NPM package. Install it by running the following in your project root. $ npm install crypto-js --save Copy the following file to src/libs/sigV4Client.js. This file can look a bit intimidating at first but it is just using the temporary credentials and the request parameters to create the necessary signed headers. 
To create a new sigV4Client we need to pass in the following: // Pseudocode sigV4Client.newClient({ // Your AWS temporary access key accessKey, // Your AWS temporary secret key secretKey, // Your AWS temporary session token sessionToken, // API Gateway region region, // API Gateway URL endpoint }); And to sign a request you need to use the signRequest method and pass in: // Pseudocode const signedRequest = client.signRequest({ // The HTTP method method, // The request path path, // The request headers headers, // The request query parameters queryParams, // The request body body }); And signedRequest.headers should give you the signed headers that you need to make the request. Now let’s go ahead and use the sigV4Client and invoke API Gateway. Call API Gateway We are going to call the code from above to make our request. Let’s write a helper function to do that. Add the following to src/libs/awsLib.js. export async function invokeApig({ path, method = "GET", headers = {}, queryParams = {}, body }) { if (!await authUser()) { throw new Error("User is not logged in"); } const signedRequest = sigV4Client .newClient({ accessKey: AWS.config.credentials.accessKeyId, secretKey: AWS.config.credentials.secretAccessKey, sessionToken: AWS.config.credentials.sessionToken, region: config.apiGateway.REGION, endpoint: config.apiGateway.URL }) .signRequest({ method, path, headers, queryParams, body }); body = body ? JSON.stringify(body) : body; headers = signedRequest.headers; const results = await fetch(signedRequest.url, { method, headers, body }); if (results.status !== 200) { throw new Error(await results.text()); } return results.json(); } We are simply following the steps to make a signed request to API Gateway here. We first ensure the user is authenticated and we generate their temporary credentials using authUser. Then using the sigV4Client we sign our request. We then use the signed headers to make a HTTP fetch request. 
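The heart of Signature Version 4 is a chained-HMAC key derivation. Below is a minimal Python sketch of that derivation step, following the algorithm AWS documents (date, region, service, then the literal string "aws4_request"); this is an illustration of the signing math, not the sigV4Client.js code itself:

```python
import hashlib
import hmac

def derive_signing_key(secret_key, date_stamp, region, service):
    """Derive the SigV4 signing key via a chain of HMAC-SHA256 operations."""
    def sign(key, msg):
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

    k_date = sign(("AWS4" + secret_key).encode("utf-8"), date_stamp)
    k_region = sign(k_date, region)
    k_service = sign(k_region, service)
    return sign(k_service, "aws4_request")
```

The request's canonical string-to-sign is then HMAC'd with this key and hex-encoded to produce the signature carried in the Authorization header, which is what the signed headers above contain.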
Include the sigV4Client by adding this to the header of our file.

import sigV4Client from "./sigV4Client";

Also, add the details of our API to src/config.js above the cognito: { line. Remember to replace YOUR_API_GATEWAY_URL and YOUR_API_GATEWAY_REGION with the ones from the Deploy the APIs chapter.

apiGateway: {
  URL: "YOUR_API_GATEWAY_URL",
  REGION: "YOUR_API_GATEWAY_REGION"
},

In our case the URL is and the region is us-east-1. We are now ready to use this to make a request to our create note API.
https://serverless-stack.com/chapters/connect-to-api-gateway-with-iam-auth.html
In this section, we will talk about executing raw Cypher queries to handle complex requirements, and then converting the results into model objects to present to the end user. Cypher queries can be executed in the following two ways:

- the StructuredNode.cypher(query, params) function
- the neomodel.db.cypher_query(query, params) function

In the first option, the query definition needs to be defined within the domain class itself, and it then uses StructuredNode.inflate() to create domain objects from the raw Cypher results. For example, let's assume that we need to retrieve and print all the male friends of a given node. So we will modify our Model.py and add the following method in the Male class:

def getAllMaleFriends(self):
    ...
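Conceptually, "inflating" means pairing each raw result row with the returned column names and building a domain object from that mapping. Here is a framework-free Python sketch of the idea (the Person class and inflate_rows helper are invented for illustration; in neomodel itself, StructuredNode.inflate() plays the constructor's role):

```python
class Person:
    """Toy stand-in for a neomodel StructuredNode subclass."""
    def __init__(self, name, age):
        self.name = name
        self.age = age

def inflate_rows(rows, columns):
    # Pair each row's values with the column names, then build objects.
    return [Person(**dict(zip(columns, row))) for row in rows]
```

A cypher_query-style call typically returns exactly such a (rows, columns) pair, so the mapping step above is all that separates raw results from usable model instances.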
https://www.oreilly.com/library/view/building-web-applications/9781783983988/ch06s04.html
restified 0.0.15

Changelog:
[0.0.1] - 04-10-2019 - Init
[0.0.2] - 04-10-2019 - Don't use same package for both crud and http files
[0.0.3] - 04-10-2019 - Tweaked Update and Insert responses
[0.0.4] - 04-10-2019 - Combined insert and update blocs into manipulation bloc

Use this package as a library

1. Depend on it

Add this to your package's pubspec.yaml file:

dependencies:
  restified: ^0.0.15

2. Install it

You can install packages from the command line:

with Flutter:

$ flutter pub get

Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more.

3. Import it

Now in your Dart code, you can use:

import 'package:restified/crud.dart';
import 'package:restified/http.dart';
https://pub.dev/packages/restified
Updated This method is free, but requires some technical know-how. First, you're going to need to install Processing. Then, follow the given instructions to install the PEmbroider library for Processing. Open up Processing, and copy and paste in this script. I can't write Processing code, but I modified this example.

import processing.embroider.*;
PEmbroiderGraphics E;
PShape mySvgImage;

void setup() {
  size(500, 500);
  noLoop();
  E = new PEmbroiderGraphics(this, width, height);
  String outputFilePath = sketchPath("myfile.pes");
  E.setPath(outputFilePath);
  PShape mySvgImage = loadShape("myfile.svg");
  E.fill(0,0,0);
  E.stroke(0,0,0);
  E.strokeWeight(1);
  E.hatchSpacing(2);
  E.setStitch(5, 15, 0);
  E.hatchMode(E.CROSS);
  E.shape(mySvgImage, 0, 0, 500, 500);
  //E.optimize();
  E.visualize();
  //E.endDraw();
}

void draw() {
  ;
}

In the Processing toolbar, go to Sketch > Show Sketch Folder. In here, create a folder called data/ and then drop your SVG file into this folder. Rename myfile.svg in the script to point to its filename, and change myfile.pes to change what the outputted file will be called. A few things you can tweak about this script:

- hatchSpacing controls how close together the lines are. I've found that 2 is usually pretty good.
- In E.shape, the first two numbers control which coordinate the shape starts and ends at. The next two control the dimensions of the shape: right now it's a 500 x 500 square, but we'll change that in a moment.

Run the script in Processing, and look at the output. It'll be square-ish, and if your original SVG wasn't square, that means it'll be distorted. Change the third and fourth numbers in E.shape to give it the right shape, continuing to re-run the script until it looks right. Once you're happy, uncomment the lines that say E.optimize() and E.endDraw(). optimize optimizes the output, but this can take several minutes. endDraw actually saves your file, which will be outputted to the "sketch folder" from earlier. Run the script again.
When the preview window with the shape pops up, you'll know that the outputted file is in the sketch folder. You can now load that .pes file onto a USB stick and have your machine embroider it.
https://benborgers.com/1847bacf-5ef0-42db-b053-f70248f54362/
Python zip function (with examples)

What is the zip function?

In Python 3, zip is a function that aggregates elements from multiple iterables and returns an iterator. Using the example from the introduction, you'll notice that I converted the zipped object to a list so we could visualize the output. But the zip function actually returns a zip object, which is an iterator. This offers major performance gains if we're wanting to zip extremely large iterables.

first_names = ['George', 'Benjamin', 'Thomas']
last_names = ['Washington', 'Franklin', 'Jefferson']
zipped = zip(first_names, last_names)
next(zipped)
>> ('George', 'Washington')
next(zipped)
>> ('Benjamin', 'Franklin')
next(zipped)
>> ('Thomas', 'Jefferson')

Note: The next function is designed to retrieve the next element from an iterator.

You'll notice that the zip function returns an iterator where each element is a tuple containing the merged elements from each iterable.

A few practical examples

Associating column names with query results

Imagine a database library that executes queries and only returns a list of tuples containing the values, which keeps the footprint small (the bigquery library does something like this). There's a little bit of hand waving here, but stick with me. So we've got a list containing the table schema:

schema = ['id', 'first_name', 'last_name']

And the query results look like this:

query_results = [
    (1, 'Thomas', 'Sowell',),
    (2, 'Murray', 'Rothbard',),
    (3, 'Friedrich', 'Hayek',),
    (4, 'Adam', 'Smith',),
]

Depending on what we want to do with this data, we may want to turn this into a list of dictionaries, where the keys are the column names and the values are the corresponding query results. Zip is our friend.
dict_results = [dict(zip(schema, row)) for row in query_results]
>> [{'id': 1, 'first_name': 'Thomas', 'last_name': 'Sowell'}, {'id': 2, 'first_name': 'Murray', 'last_name': 'Rothbard'}, {'id': 3, 'first_name': 'Friedrich', 'last_name': 'Hayek'}, {'id': 4, 'first_name': 'Adam', 'last_name': 'Smith'}]

Combining query string lists

Imagine we've got a front-end application that makes a GET request and passes a few lists in the query. And in our case, the elements of each list correspond to one another. In our example request, there are two titles and two slugs in the query string. On the backend, we may want to associate them, and we can use zip to do this!

data = list(zip(request.GET.getlist('title'), request.GET.getlist('slug')))
>> [('step one', 'step-one'), ('step two', 'step-two')]

What about unzipping?

Ok, so now we've got a list of tuples and we want to pull elements of corresponding indexes into their own tuples. Imagine we've got a list of tuples that represents query results. The first value is the month, the second is the total revenue:

results = [
    ('January', 35423.85,),
    ('February', 31445.75,),
    ('March', 38525.22,),
]

Suppose we want to get the total revenue for the first quarter. We can unzip these results, then sum the revenue.

months, revenue = zip(*results)
print(revenue)
>> (35423.85, 31445.75, 38525.22)
print(sum(revenue))
>> 105394.82

Beautiful.

What happens if the iterables are not the same length?

If we pass multiple iterables of different lengths, the default behavior is to zip the number of elements in the shortest iterable.

short_list = list(range(5)) # [0, 1, 2, 3, 4]
long_list = list(range(10,20)) # [10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
list(zip(short_list, long_list))
>> [(0, 10), (1, 11), (2, 12), (3, 13), (4, 14)]

Notice that elements 15-19 from long_list were ignored. If you happen to care about the trailing elements, you can use itertools.zip_longest.
import itertools
list(itertools.zip_longest(short_list, long_list))
>> [(0, 10), (1, 11), (2, 12), (3, 13), (4, 14), (None, 15), (None, 16), (None, 17), (None, 18), (None, 19)]

Conclusion

Hopefully by now you understand how to use the zip function in Python. If you've got any questions, let me know in the comments below! Also, feel free to share interesting ways you've used the zip function. I'd be happy to add them to this guide.
https://howchoo.com/g/mgfimmrkogf/python-zip-function-with-examples
Imported Xquery modules will not resolve using classpath
--------------------------------------------------------

Key: CAMEL-4285
URL:
Project: Camel
Issue Type: Bug
Components: camel-saxon
Affects Versions: 2.8.0
Environment: All
Reporter: Jay mann

When using an xquery endpoint that uses an xquery file like this:

<camel:to

If the manual.xq file contains imported modules such as:

import module namespace utils = "myutils" at "classpath:/com/test/utils.xq";

they will not resolve relative to the classpath. This is a big problem when using OSGI or any other situation where your Xquery files are inside your package/jar. I've created a patch so that it will resolve the imports in the same way that the component resolves resources using resolveMandatoryResource. I've tested the patch successfully using "classpath:/", "file://", and "http://" uri formats.
http://mail-archives.apache.org/mod_mbox/camel-dev/201107.mbox/%3C1316371289.20974.1312042209659.JavaMail.tomcat@hel.zones.apache.org%3E
Fiori elements – List Report – Sorting, Grouping and Table Types!

Having seen and understood the basics of List Reports and the main annotations, you are ready to explore other UI annotations. You might want to start with some simple changes – sorting and grouping. Certainly your users can sort and group a list report using the settings icon. However if you are creating a custom report you will probably want to default the sorting and grouping for them. In this blog you will see how to set the default sort order and grouping as a developer. It’s worth remembering that, as well as guiding the design of the List Report through annotations, there are always some List Report behaviours end users can control for themselves. Naturally, if there’s a common preference among many end users you will want to default those behaviours up front using annotations. It’s also necessary to set certain default behaviours – such as sorting and grouping – if you want to add more advanced aggregation features such as totals, subtotals, and other analytics. As a developer you can also adjust the Sort Order and Grouping using the annotation UI.PresentationVariant. Using annotations to default the Sort Order is straightforward. Setting up a default Grouping, however, requires you to have a little more knowledge than just the annotations. You will need to understand the difference between the different table types supported by the Fiori element List Report, and how that impacts grouping and other table behaviours. IMPORTANT: For simplicity in this blog you will see firstly how to default sorting and grouping using local annotations. However remember that if you are creating a specific custom CDS View to provide the data for your List Report you are recommended to build the annotations into the CDS View using a Metadata Extension. You can find out more about Metadata Extensions in the ABAP Programming Model for SAP Fiori.
Below you see a simple Sales Order list report. This is your starting point for the rest of the examples in this blog. TIP: This report is based on an ABAP CDS View exposed as an OData Service, as recommended in the ABAP Programming Model for SAP Fiori. Before you attempt to use the PresentationVariant, you need to understand a little about List Report table types. Because the SAP Cloud Platform Web IDE makes the annotations easy to understand, you will see how to sort and group using the Annotation Modeler first, and then you will see some alternative approaches using XML or a metadata extension of your CDS View. So in this blog you will find:

- A brief introduction to Table Types
- A little revision on annotations basics to get you started
- How to default the Sort Order
- How to default Grouping
- Using PresentationVariant in annotation XML files
- Using PresentationVariant in the metadata extension of a CDS View

The examples in this blog are based on:

- SAPUI5 1.50
- SAP NetWeaver AS ABAP 7.52 (as part of a SAP S/4HANA 1709 solution)
- SAP Cloud Platform Web IDE Full Stack version 180104
- Eclipse Oxygen

A brief introduction to Table Types

There are 3 main table types used:

- Responsive
- Grid
- Tree

The Responsive type is a simple lean table. This table type is recommended for use on all types of devices including smartphones. With a responsive table you can Sort and add a single level of Grouping as a business user, using the Settings icon. However you are not permitted to default this as a developer. This example shows single level grouping on the Company Name. The other table types are known collectively as Analytical table types. They are intended to handle larger amounts of data, and are therefore recommended for larger devices, e.g. desktop and tablet. With analytical tables you can Sort and do multiple levels of Grouping, both as a business user and as a developer.
You’ll also find features you would expect in a desktop or tablet app, such as:

- Reorder columns via drag and drop
- Adjust column width using the cursor on the column border
- Sort, filter, group or freeze columns using the column header
- Automatic totalling and subtotalling for aggregated data

TIP: In the latest versions you even get a Show Details link for a totals dialog summarizing a column of amounts in multiple currencies into a total per currency.

A Grid table is the default table type used when your OData Service includes aggregated data, i.e. attribute sap:semantics is set to “aggregate”. TIP: You can see this setting on the Entity Type tag in the metadata of your OData Service, e.g. <EntityType Name="ZC_SO_SalesOrderItemType" sap:semantics="aggregate">. Aggregated data can be shown with summary information such as totals. If you are following the ABAP Programming Model for SAP Fiori, the easiest way to set sap:semantics to aggregate is to include the following annotation in your CDS View itself against any numeric field, such as amount or quantity: @DefaultAggregation: #SUM TIP: You cannot use DefaultAggregation annotations in metadata extensions. You can find out more about Default Aggregations in the ABAP Programming Model for SAP Fiori. A Tree table is used to show lists with hierarchical tree structures. When you add grouping properties to a Grid table, Fiori elements automatically converts it to a Tree table. This has the added bonus that as you adjust the grouping, totals and subtotals can be shown as well. TIP: If necessary you can override the table type to some extent in the manifest.json by using the gridTable and treeTable settings within sap.ui.generic.app > pages (of the List Report Page) > component > settings.
If you don’t have one already, you can find out how to do that in Fiori Elements – How to Develop a List Report – Basic Approach.

You can use the Web IDE Annotation Modeler to assist you, as explained in Fiori Elements – How to Develop a List Report – using Local Annotations.

TIP: You can also change the annotations.xml file directly.

When you view the annotations file using the Annotation Modeler you can see the annotations being used by the List Report. These annotations were assigned when you created the List Report using the List Report application wizard. In the Annotation Modeler, you can see both the local annotations and the external annotations inherited from the OData Service.

TIP: When following the ABAP Programming Model for SAP Fiori, the annotations are created as a metadata extension to the CDS View, and the CDS View has been exposed as an OData Service.

You start the process by adding to the Local Annotations. The easiest way to do that is by using the Annotation Modeler and the + icon on the local annotations row.

TIP: If you need to change an external annotation, just select the matching redefine icon in the Actions column.

So now you are ready to add your annotations.

How to default the Sort Order

The sort order is defaulted using the UI.PresentationVariant annotation properties:
- sortOrder – this collects the sorting parameters
- sortOrder.by – this nominates the property for sorting
- sortOrder.direction – Ascending or Descending (the default is Ascending)

If you want to sort by more than one field, you simply use the sortOrder property to collect multiple sets of sortOrder.by and sortOrder.direction combinations.

You can add these using Local Annotations. Start by adding the UI annotation PresentationVariant.

TIP: PresentationVariant has a number of subnodes that technically must exist even if they are not used. That’s all the red asterisk indicates in the Annotation Modeler.
Within the subnode SortOrder, for each property you want to include in your sort order, you add a SortOrderType annotation. The sequence in which you have defined the SortOrderType properties determines the sort sequence.

Set the Property of the SortOrderType to the OData entity property you want to sort by. By default the sort is ascending; if you need descending, you simply add the Descending subnode of SortOrderType. This is a Boolean value and all you need to do is set it to true.

In this example you can see the annotations that will sort the Sales Order list firstly by Company Name, and within company by Sales Order ID descending.

Because PresentationVariant is used in many different scenarios – including Overview Page analytic cards – to complete your PresentationVariant annotation you also need to set:
- the Visualization subnode to point to your @UI.LineItem annotation used by your List Report
- the RequestAtLeast subnode to include your sort properties

IMPORTANT: If you forget to do this in local annotations, your app may hang!

In this example you can see the Visualization and RequestAtLeast subnodes of the Sales Order List Report.

Once you have saved the annotations and restarted your app, your Sales Order list looks like this:

You can find more information on defaulting the Sort Order in the SAPUI5 SDK > Developing Apps with Fiori elements > How to use List Report and Object Page Templates > Preparing UI Annotations > Annotations relevant for List Reports and Object Pages > Tables > Default Sort Order.

How to default Grouping

You can default the grouping by setting the GroupBy subnode of the UI.PresentationVariant.

TIP: If you want to group by more than one property, just use the + icon in the Annotation Modeler.

VERY IMPORTANT: Remember, however, that how the grouping is applied depends on the table type. So if you try to apply grouping to a responsive table, the app will ignore your settings!
Provided you make sure you are using an analytical table, your grouping will work, and your grid table will convert to a tree table. So then your Sales Order List Report will look like this.

Notice the differences from a responsive report:
- the sort indicators in the column headers show that you have sorted the report by Company Name ascending and Sales Order ID descending
- instead of just clicking on a row to see the details, there is now a final column with a > icon to reach the related Object Page showing the details of the row

NOTE: You do not need to sort before grouping – grouping will still work. However, you will probably find that most business users expect the data to be sorted within your grouping.

Using PresentationVariant in annotation XML files

If you prefer to code this directly in the annotation XML editor – rather than using the Annotation Modeler – this is what it should look like.

Using PresentationVariant in the metadata extension of a CDS View

Of course, if you have created a CDS View specifically for this list report, you might prefer to follow recommended best practice and include your annotations as part of your CDS View. The correct way to do this is to add your PresentationVariant and other UI annotations in a metadata extension of your CDS View. Metadata extensions enable you to layer annotations on top of existing CDS Views.

For the Sales Order list example, your PresentationVariant is formatted like this:

@UI.presentationVariant: [{
  sortOrder: [
    { by: 'CompanyName' },
    { by: 'SalesOrder', direction: #DESC }
  ],
  groupBy: [ 'CompanyName', 'SalesOrder' ],
  visualizations: [{ type: #AS_LINEITEM }],
  requestAtLeast: [ 'CompanyName', 'SalesOrder' ]
}]

As the PresentationVariant applies to the whole report, you add your annotation above the “annotate view” statement, as in this example.
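The original post showed the surrounding metadata extension only as a screenshot, so here is a rough sketch of how such an extension might be laid out. The extension layer and the view name Z_C_SalesOrder are hypothetical; the point is that the @UI.presentationVariant block, which applies to the whole view, sits above the "annotate view" statement, while field-level annotations go inside it:

```abap
@Metadata.layer: #CUSTOMER

@UI.presentationVariant: [{
  sortOrder: [
    { by: 'CompanyName' },
    { by: 'SalesOrder', direction: #DESC }
  ],
  groupBy: [ 'CompanyName', 'SalesOrder' ],
  visualizations: [{ type: #AS_LINEITEM }],
  requestAtLeast: [ 'CompanyName', 'SalesOrder' ]
}]
annotate view Z_C_SalesOrder with
{
  // field-level UI annotations such as @UI.lineItem go here
}
```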
Find out more about Metadata Extensions and UI Annotations in the ABAP Programming Model for SAP Fiori.

Taking your Fiori elements app to the Next Level

If you are interested in Fiori elements you might also like to look at these videos on Youtube:

And you can find more information on Developing apps with SAP Fiori elements in the SAPUI5 SDK.

Lastly, you will find a collection of blogs, videos and other material in the Fiori elements wiki. And if you are creating custom Fiori apps for SAP S/4HANA, you will find many resources to help you in the Expert Deep Dive section of the SAP Fiori for SAP S/4HANA wiki.

Brought to you by the S/4HANA RIG

Hello Jocelyn, Thanks for this precious information. It would be nice to have the same for the ObjectPage. Does the ObjectPage react the same as the ListReport (in the case where we have sections with tables)? Regards, Joseph

Hi Joseph, Yes, much the same, although if you have more than one @UI.Identification or @UI.LineItem annotation in your report it is always best to use a Qualifier to distinguish between them, so that the template always knows which annotation you mean. Rgds Jocelyn

Thanks Jocelyn. Great article! Are the settings (grouping, sorting, ..) also saved in the (user) Variant?

Yes, that's right. Remember that there may be 2 variants – one for the filter and one for the table – or a single combined variant, depending on how the list report has been written.

Hello Jocelyn, Is it possible to have multi-line text in a column, for the same ID row, in the grid table? Best regards Pedro Ferreira

Hi Pedro, Try annotation UI.multiLineText. Rgds, Jocelyn

Oh, and blog it if you do it – we don't have that one in a blog yet 🙂

Thanks Jocelyn for the wonderful Blog! Can we use @UI.presentationVariant for value helps based on Modeled Views? Thanks, Vik

Hi Vikram, It's not quite as simple as that, as the OData Service needs to be adjusted to provide the value help as well.
You can find more info on Providing Value Helps in the ABAP Programming Model for SAP Fiori. Rgds Jocelyn

Hi Jocelyn, Thanks for the nice blog. We are trying to add a filter on the column header, same as sort and group, but cannot find it in the UI annotations. Can you help here? Thanks, Monica

Hi Monica, I am not quite sure what you mean? Do you mean you want to adjust which columns are displayed in the list? That is controlled by the @UI.LineItem annotation. Rgds, Jocelyn

Hi Jocelyn, I am getting sort and freeze in the column header, but I am not getting filter in the column header of the table. Could you please help? Thanks, Monica

Please ask questions as a separate post in answers.sap.com, not in blog comments, as per the SAP Community Rules of Engagement.

Hi Jocelyn, I'm using a list report smart template application. On scroll, the UI is sending batch requests again and again to the backend, and say if I have 300 records, those 300 records are bound infinite times. This happens on scroll. Can you please help with a solution for this?

Hi Akshaya, please raise questions as separate posts in answers.sap.com, not as blog comments. At a guess it sounds like your OData Service isn't supporting top/skip/filter options correctly. If you want more, please raise a question in answers.sap.com. Rgds Jocelyn

Hi Jocelyn, Does it work with OData V2? Regards, Joseph

Yes, absolutely.

I confirm, it's working 🙂 thanks

Does it work for OData V4 services? It fails to load the metadata file when I try.

Hi Jocelyn, I have set gridTable:true in my manifest for the list report table. But for navigation to the object page, it shows a "Show Details" button instead of a Carousel. Can you please help me get a carousel for the GridTable?

Hi Akshaya, I think perhaps you mean the > icon rather than a Carousel? The Show Details link shows depending on how your actions are defined. If you have more than one inline action you automatically get Show Details instead of the > icon.
Rgds Jocelyn Hi Jocelyn, Can we have filter,group and sort in smart settings dialog using annotations? Hi Akshaya I'm not sure what you mean? The filter, group and sort settings are automatic in all List Reports - just look for the settings icon on the top RH of the Smart Table. Any defaults you set via annotations will show in the settings as well. Rgds, Jocelyn Hi Jocelyn Dart , nice blog, helped me a lot to get a view behind the scenes. is it possible to skip the navigation on the list. So I just need a list to show a specific status and do not need to navigate. Did not find anything at the annotations which sounded like that. Alternative I will create an second odata, which will show the details, but this would be just for workaraound, because noone need it. ~Florian Hi Florian, Yes you can override the default navigation in the manifest.json file. There is help on this in the SAPUI5 SDK re Developing Apps with Fiori elements > Configuring Navigation Rgds Jocelyn Dear Jocelyn Dart , Florian Henninger I am trying to override the default navigation to an object page and implement cross application navigation in a standard Fiori list report. I managed to do this using coding in a controller on a custom list report already. The link you shared,Jocelyn Dart, is a little confusing to me. I would need to navigate to an external application (also registered on the launchpad) with two parameters from the original application. I'm using version 1.3.0 of sap.ui.generic app as the links states "The example above applies to sap.ui.generic.app->_version 1.2.0." Any suggestions on how to achieve this? Kind regards, Maxim Jocelyn, I have a gridTable report, where I really need totals calculated. This report also has an Association to _line data. I have found that if I add @DefaultAggregation: #SUM to the _parent CDS view, it changes the KEY of that view to GENERATED_ID, which breaks the navigation to _line (due to key mismatch). Hi Tim, Interesting... 
Usually we opt for GUIDs as the technical ID and then relate that back to a semantic key. You could perhaps use a table function for that? Or else just adjust the on condition of the $projection relationship between the parent and line views. Rgds Jocelyn

Two problems: So, as it stands, it is impossible to get totals unless you use an analytical view, and you can't navigate with an analytical view… Is there any other way to get calculated totals on Responsive or gridTable? I tried local annotations, and it did NOT work. Thanks, Tim

Hi Tim, Local annotations only adjust the UI – they can't make your OData Service provide new functionality. Since the aggregation itself (i.e. the totalling of values) needs to be part of the OData Service, it needs to be part of the CDS View itself. This is one of the reasons why it helps to use metadata extensions rather than put all of your UI annotations inline in your CDS View. Using metadata extensions helps you clarify, for both your developers and your support people, which annotations are directly influencing the UI, versus which annotations are influencing the capabilities of the OData Service itself. Rgds Jocelyn

And for what it is worth, this did not work either: @AbapCatalog.preserveKey: true, but it did change the SQL view key back to the semantic key. Take a look at any SQL View with “Default Aggregation” (using SE11) – you will see that all fields are key, but the “Generated_ID” will not be listed.

Hi Tim, what I would advise is looking at how this problem is solved in delivered Fiori apps in S/4HANA – if you don't have one, perhaps grab a CAL Fully Activated Appliance to understand what is happening under the covers in a more advanced scenario than can be explained in a blog. My suspicion is that your approach may not be providing what you need because too much is being attempted in the one CDS view. Typically CDS Views are layered into a Virtual Data Model approach.
Using those layers gives you the flexibility to combine transactional and analytical approaches. In any case, at this point your questions are too specific to be handled as blog comments. So I recommend creating a separate post with the details of your issue, including the backend system release and SAPUI5 release, in answers.sap.com. That would also give the opportunity for some of our analytics experts to weigh in, rather than relying on a single blog author. Rgds Jocelyn

I debugged it, and found enough to search OSS. Unfortunately, I am on 7.50 not 7.52, but it looks like this is fixed in 7.52 SP 1. I will request a downport… who knows. Tim

EUREKA (sort of)

After a ton of debugging, 30 seconds at a time, I have figured out that navigation works IF all fields in the ASSOCIATION condition are displayed in the list (not hidden) – in my case two of the fields are mandatory selection fields (so I didn’t display them). I guess it makes sense, as the association fields were passed as “$projection”.

on $projection.gjahr = _line.gjahr
and $projection.monat = _line.monat
and $projection.kostl = _line.kostl
and $projection.hkont = _line.hkont

As the association fields (gjahr and monat) were not “projected” (the hidden key fields), the generated_id did not include those fields, and they were not retrieved by:

if_sadl_gw_dpc_util~get_dpc( )->get_keys_from_analytical_id(
  EXPORTING
    io_tech_request_context = io_tech_request_context
  IMPORTING
    et_keys = DATA(lt_keys) ).

Sorry for adding so many comments to your blog… I need to look at alternative methods of association. -Tim

Last comment. Problem solved. Make sure all fields in the "on condition" ($projection or not) of an ASSOCIATION are specified (using Local Annotations) in: Local Annotations -> UI.PresentationVariant -> RequestAtLeast. Thank you for all of your great posts. -Tim

Glad you sorted it, Tim! And thanks for posting the solution… it's always the non-obvious ones that are the most frustrating. But as you say it does make sense…
with the benefit of hindsight!

Hi Jocelyn Dart, I've added an Entity Set to our OData Service that returns a hierarchy with annotations based on section 2, "Hierarchy using annotations", of the blog post TreeTable OData binding. I want to consume that EntitySet on the ObjectPage of a Fiori Elements List Report application. For that I've followed the example provided in Setting the Table Type. My sap.ui.generic.app section looks like this: The SalesdocumentFlowSet content is displayed, but only in a normal Responsive Table and not in a TreeTable. Do you have any hint for me? Is there a full documentation of sap.ui.generic.app anywhere? Best regards Gregor

Hi Gregor, OK, so your manifest.json looks fine. However, if the Fiori element is not happy with the hierarchy annotations it will default to a responsive table. Similarly, if you render the table on a phone the treeTable setting will be ignored. Those references are rather old… you might want to cross-check against the latest annotations reference for hierarchy annotations. This one on how the OData aggregation is interpreted also explains more about the grouping and filtering behaviour that is related to hierarchy usage. No, we do not provide public documentation for sap.ui.generic.app itself, nor are there any plans to do so. The template itself is updated very frequently. We do provide documentation for using the annotations. I agree there's not much publicly available on the hierarchy annotations themselves. If I find a hint I'll add some more.
Rgds Jocelyn

Hi Gregor, OK, so with the latest versions the manifest.json has changed a little and you can try this approach instead:

"name": "sap.suite.ui.generic.template.ObjectPage",
"settings": {
    "sections": {
        "to_ProductText::com.sap.vocabularies.UI.v1.LineItem": {
            "navigationProperty": "to_ProductText",
            "entitySet": "STTA_C_MP_ProductText",
            "treeTable": true
        }
    }
}

<Property Name="HIERARCHY_NODE" Type="Edm.String" Nullable="false" MaxLength="32" sap: // OData annotation: sap:hierarchy-node-for
<Property Name="LEVEL" Type="Edm.Int32" sap: // OData annotation: sap:hierarchy-level-for
<Property Name="PARENT_NODE" Type="Edm.String" MaxLength="32" sap: // OData annotation: sap:hierarchy-parent-node-for
<Property Name="DRILLDOWN_STATE" Type="Edm.String" MaxLength="16" sap: // OData annotation: sap:hierarchy-drill-state-for

Hi Gregor Wolf and Jocelyn Dart, Any idea on how to add the sap:hierarchy annotations on calculation views, not on CDS views? Any possibility of adding similar parameters on the frontend (V4 annotations in Web IDE) instead of relying on the OData metadata? Best regards, André

Hi Jocelyn, I added a question regarding generating hierarchy annotations in CDS for the SAP ALV tree control. I am having trouble generating the sap:hierarchy properties using annotations. I want to do this without using an MPC class. Can you please take a look at my question? Your help is appreciated. Thanks, Jay

Hi Jocelyn Dart, I have successfully created my Fiori Elements List Tree table embedded together with CDS and also added annotation properties for my OData MPC Define method by following this blog. It is also the same as what you mentioned at the top. My hierarchy/tree report design is Business Partner (parent/root) -> Supplier (child) -> Company Code (child). When I implemented my selection search for Business Partner and Supplier in CDS, I encountered a problem: the search function only works for Business Partner but not Supplier.
After I debugged the OData request call, I found out the filter always starts with ‘LEVEL/hierarchy-level-for = 0 and (Business Partner=4000000000 and Supplier=”)’. Then I found an SAP website that mentioned I need to implement hierarchy-node-descendant-count-for, so that the Fiori elements search will know the hierarchy level is not 0 and it will auto-expand all the nodes. After I implemented this annotation (hierarchy-node-descendant-count-for), it enabled me to search for a supplier; however, it broke my hierarchy/tree structure, meaning my company code data is all missing. I would appreciate it if you could help me or share any solution to fix the child node search issue. A custom UI5 Tree application would be my last choice. Thank you. Regards, Gim

Hi Gim, It is very difficult for authors to respond to specific issues in blog comments. It can take quite some time when you are relying on one very busy person to get to your comment. Please Ask a Question with the topic "Fiori elements" where many others in the SAP Community can help you. Lots of customers, partners, independents, and SAP employees who already use SAP Fiori elements are there to help you!

Hi Jocelyn, I am using a list report with a responsive table. Is there any way to merge duplicates via annotations in CDS? If not, how can I achieve this functionality? Thanks in advance. Regards, Monika

I was able to merge duplicates via the SAPUI5 Visual Editor.

Hello Jocelyn, Can you please provide the tutorials of the CDS view for this project? Thanks in advance. Regards, Kaushik

Hi Kaushik, Sorry, there are no such tutorials. You need to work through the blog. Thanks, Jocelyn

Hello Jocelyn, Thank you for this informative blog post. I have a question: I have made a list inside a Fiori elements object page using local annotations in the Web IDE. (It has to be local because we are merging data from SRM and ECC in the same service.) By default the list has a navigation arrow at the end of each line item. How do I remove this?
Normally in UI5 I can set the list item type to "Inactive" and that will solve it, but how is this done using annotations? Best August

Hi August, The > arrow at the end of each line is the default Expand option. You can remove this by adjusting the configuration to disable the navigation. See the section in the SAPUI5 SDK on Developing Apps using SAP Fiori elements > How to use SAP Fiori elements > Configuring Navigation > Changing Navigation to the Object Page. Rgds Jocelyn

Hi Jocelyn, This is a very informative blog post, thank you. How would I add this to my service's metadata within the CDS View?

Hi Jocelyn, I figured it out (besides, it is stated in the blog post). However, I had the problem that only one sum-up row was shown in the analytical table; this is because the line-item keys have to be selected as well. This can be fixed using the RequestAtLeast annotation. Best regards Jan

Well done in sorting it out!

Hi Jocelyn, By any chance can we set default grouping on 1 field in a responsive table, so that the user opens the app with group-by set? Like you mentioned, this only works for Grid and analytical tables. If I want to customize, any hint on how I can start? Regards, Tejas

Hi Tejas, Unfortunately grouping is not available at all in responsive tables. It is not supported. Thanks, Jocelyn

Hey, we managed to achieve it with a controller extension.

Well, that's a new use for extensions I had not thought of! Well done. Worth blogging!

Hi Tejas, May I know how you achieved this via a controller extension? Thanks in advance. Regards, Abbilash

Thank you Jocelyn for the wonderful blog. Is there any chance to do this grouping on a value help for entity properties? I tried it for one, but it's not reflecting.
I added the value help annotation in MPC_EXT:

cl_fis_shlp_annotation=>create(
  EXPORTING
    io_odata_model         = model
    io_vocan_model         = vocab_anno_model
    iv_namespace           = 'FAC_FINANCIAL_STATEMENT_SRV'
    iv_entitytype          = 'FinancialStatementList'
    iv_property            = 'ProfitCenter'
    iv_search_help         = 'FISSH_PRCTR'
    iv_search_help_field   = 'PRCTR'
    iv_valuelist_entityset = 'PCHierSet'
    iv_valuelist_property  = 'ProfitCenter'
)->add_out_parameter(
    iv_property           = 'ParentNode'
    iv_valuelist_property = 'ParentNode'
)->add_out_parameter(
    iv_property           = 'HierarchyNode'
    iv_valuelist_property = 'HierarchyNode'

And I am able to see values like this. But I want to see the grouped data for the Parent Node value (say). Is that possible via annotations? Many Thanks, Gaurav

Hi Gaur!

Jocelyn Dart, Great blog! I have a similar requirement where I have to group the responsive table on an object page. I see grouping is now available from UI5 version 1.65. However, I tried to extend the table by setting group to 'true' in onbeforetablerebind. It didn't work. Would even the extensions not work? We are on UI5 version 1.60.1. Thanks, Shiny

Hi Shinynickitha!

Great blog! I have a question. My value help is not sortable and I cannot sort it in the Core Data Service. I have a table function that gives me the result in the correct order, but if I consume this table function the result is in the wrong order. The sort doesn't work in the CDS that consumes the table function with the correct order. Could you help me please!

Hi S!

Hi Jocelyn Dart, I have a question regarding making totals or more than one level of grouping by setting sap:semantics to "aggregate". I tried to set it for a numeric field in my application created using the ABAP Programming Model for Fiori and it worked fine: the list has totals and more than one level of grouping. But when I created a similar application using the RESTful ABAP Programming Model and gave aggregation for the field, the list did not show any changes – no totals or multiple grouping option.
What is the difference when using the same in an application created using RAP? Can you please help? Thanks, Amit Gupta

THIS POST IS NOW RETIRED – Please go directly to the SAP Community topic for Fiori elements.

Hi Jocelyn Dart, I have applied the default sort order and group-by in the annotation file of my Fiori Elements app. I have four PresentationVariants for the same EntitySet, with qualifiers. Sorting is getting applied to the analytical table but GroupBy is not getting applied. Our current UI5 version is 1.60.18. When I check it on the latest version this seems to work. Do I need to add any additional attribute to make the grouping work on the lower UI5 versions? Regards, Srinivasan V

THIS POST IS NOW RETIRED – Please go directly to the SAP Community topic for Fiori elements and ask your question there.

Hi Jocelyn, Very nice blog. Can you share some reference CDS and UI code for having a sum in the tree table as per the image shared in this blog? We need this very desperately. Regards Ravi
I. Who the hell is Hello, Blinky?

Hello, Blinky is kind of the Hello, World of the Raspberry Pi and other microboard world. It’s possible to start there with Hello, World too, but why not do something more interesting with connected electronics? This is what those boards are made for. Hello, Blinky is about blinking an LED lamp. It’s one of the simplest things to do on boards like Raspberry Pi. There are many hands-on examples of Hello, Blinky on the web for different languages, tools and utilities. This one here is for Blazor.

Wiring

To use an LED lamp with Raspberry Pi we need to buy one, of course. We also need a breadboard, wires and a resistor. Just visit your local electronics shop and ask them for these components. They know well what to sell you. Wiring is actually easy. Just connect the LED, resistor and wires as shown on the following image.

NB! I’m using Raspberry Pi with Windows 10 IoT Core (there’s a free version available). This code should actually work on Linux as well, but I have not tried it out, and I have no idea if publishing works the same way for Blazor applications we want to host on Linux.

With wiring done, it’s time to start coding.

Blazor solution

I decided to go with a client-side Blazor application that is supported by a server-side web application. With server-side Blazor, UI events are handled on the server. In that case the UI and server communicate using web sockets and SignalR. This is something I want to avoid. Raspberry Pi is a small board without many resources. At least the one I have is not very powerful. I don’t want to put the UI workload there, as the browser has way more resources to use. With this in mind I created the Visual Studio solution shown on the image on the right. BlazorHelloBlinky.Client is the client-side Blazor application and BlazorHelloBlinky.Server is the web application that runs on Raspberry Pi.

Blinking LED from ASP.NET Core controller

Before everything else we need a class to blink the LED lamp. There’s no programming concept for blinking.
No command like make-led-lamp-blink(). We have to write it ourselves. Here is what the blinking cycle means for the computer:
- Send signal to GPIO pin
- Wait for one second
- Cut signal off
- Wait for one second
- Go to step 1

The problem is we cannot blink the LED with just one command. We need something that goes through this cycle until it is stopped. For this I wrote the LedBlinkClient class. It hosts a task with an endless loop, but inside the loop it checks if it’s time to stop. Here is the LedBlinkClient class for the server-side web application (it needs the System.Device.Gpio Nuget package to work).

public class LedBlinkClient : IDisposable
{
    private const int LedPin = 17;
    private const int LightTimeInMilliseconds = 1000;
    private const int DimTimeInMilliseconds = 200;

    private bool disposedValue = false;
    private object _locker = new object();
    private bool _isBlinking = false;
    private Task _blinkTask;
    private CancellationTokenSource _tokenSource;
    private CancellationToken _token;

    public void StartBlinking()
    {
        if (_blinkTask != null)
        {
            return;
        }

        lock (_locker)
        {
            if (_blinkTask != null)
            {
                return;
            }

            _tokenSource = new CancellationTokenSource();
            _token = _tokenSource.Token;

            _blinkTask = new Task(() =>
            {
                using (var controller = new GpioController())
                {
                    controller.OpenPin(LedPin, PinMode.Output);
                    _isBlinking = true;

                    while (true)
                    {
                        if (_token.IsCancellationRequested)
                        {
                            break;
                        }

                        controller.Write(LedPin, PinValue.High);
                        Thread.Sleep(LightTimeInMilliseconds);
                        controller.Write(LedPin, PinValue.Low);
                        Thread.Sleep(DimTimeInMilliseconds);
                    }

                    _isBlinking = false;
                }
            });
            _blinkTask.Start();
        }
    }

    public void StopBlinking()
    {
        if (_blinkTask == null)
        {
            return;
        }

        lock (_locker)
        {
            if (_blinkTask == null)
            {
                return;
            }

            _tokenSource.Cancel();
            _blinkTask.Wait();
            _isBlinking = false;

            _tokenSource.Dispose();
            _blinkTask.Dispose();

            _tokenSource = null;
            _blinkTask = null;
        }
    }

    public bool IsBlinking
    {
        get { return _isBlinking; }
    }

    protected virtual void Dispose(bool disposing)
    {
        if (!disposedValue)
        {
            if (disposing)
            {
                StopBlinking();
            }

            disposedValue = true;
        }
    }

    public void Dispose()
    {
        Dispose(true);
    }
}

The server-side application also needs a controller so Blazor can control blinking. Here is the simple controller I created.

[Route("api/[controller]")]
public class BlinkyController
{
    private readonly LedBlinkClient _blinkClient;

    public BlinkyController(LedBlinkClient blinkClient)
    {
        _blinkClient = blinkClient;
    }

    [HttpGet("[action]")]
    public bool IsBlinking()
    {
        return _blinkClient.IsBlinking;
    }

    [HttpGet("[action]")]
    public void StartBlinking()
    {
        _blinkClient.StartBlinking();
    }

    [HttpGet("[action]")]
    public void StopBlinking()
    {
        _blinkClient.StopBlinking();
    }
}

Controller actions are just public HTTP-based end-points for the LED client class. Of course, LedBlinkClient must be registered in the Startup class, as we want to get it through the constructor of the controller via dependency injection.

services.AddSingleton<LedBlinkClient>();

I registered it as a singleton, as I don’t want multiple instances of the LED client to be created.

Client-side Blazor application

The client-side application in my case contains only the Index page and its code-behind class (Blazor supports code-behind files; name them PageName.razor.cs). Here is the mark-up for the Index page.

@page "/"
@inherits IndexPage

<h1>Hello, blinky!</h1>

<p>Led is @BlinkyStatus</p>

<div>
    <button class="btn btn-success" @onclick="StartBlinking">Start blinking</button> |
    <button class="btn btn-danger" @onclick="StopBlinking">Stop blinking</button>
</div>

Here is the class for the Index page. I keep it as a code-behind file so my Blazor page doesn’t contain any logic.
public class IndexPage : ComponentBase
{
    [Inject]
    public HttpClient HttpClient { get; set; }

    [Inject]
    public IJSRuntime JsRuntime { get; set; }

    public string BlinkyStatus;

    protected override async Task OnInitializedAsync()
    {
        var thisRef = DotNetObjectReference.Create(this);
        await JsRuntime.InvokeVoidAsync("blinkyFunctions.startBlinky", thisRef);
    }

    protected async Task StartBlinking()
    {
        await HttpClient.GetStringAsync("/api/Blinky/StartBlinking");
    }

    protected async Task StopBlinking()
    {
        await HttpClient.GetStringAsync("/api/Blinky/StopBlinking");
    }

    [JSInvokable]
    public async Task UpdateStatus()
    {
        var isBlinkingValue = await HttpClient.GetStringAsync("/api/Blinky/IsBlinking");
        if (string.IsNullOrEmpty(isBlinkingValue))
        {
            BlinkyStatus = "in unknown status";
        }
        else
        {
            bool.TryParse(isBlinkingValue, out var isBlinking);
            BlinkyStatus = isBlinking ? "blinking" : "not blinking";
        }

        StateHasChanged();
    }
}

An important thing to notice is the OnInitializedAsync() method. This method is called when the page is opened. It creates an Index page reference for JavaScript and starts a JavaScript timer to update the blinky status periodically.

Updating blinking status automatically

I wasn’t able to get any C# timer to work in the browser, so I went with JavaScript interop. The good old setInterval() with some Blazor tricks made things work. The trick I did is illustrated on the following image.

When the page is loaded, I use JavaScript interop to send the page reference to JavaScript. The reference is saved there, and after this a timer is created using the setInterval() method. Every five seconds the timer callback is fired and the method to read the LED state is called. Yes, this method is defined on the Blazor form and it is called from JavaScript with no hacks.

Add a JavaScript file to the wwwroot folder of the client-side Blazor application and include it in the index.html file in the same folder. Here’s the content of the JavaScript file.
window.blinkyFunctions = {
    blazorForm: null,

    startBlinky: function (formInstance) {
        window.blinkyFunctions.blazorForm = formInstance;
        setInterval(window.blinkyFunctions.updateStatus, 5000);
    },

    updateStatus: function () {
        if (window.blinkyFunctions.blazorForm == null) {
            return;
        }
        window.blinkyFunctions.blazorForm.invokeMethodAsync('UpdateStatus');
    }
}

startBlinky() is the method called from the Blazor page. The updateStatus() method is called by the timer every five seconds. In the timer callback there is an invokeMethodAsync() call: this is how we can invoke methods of Blazor objects from JavaScript.

Why not update the status in JavaScript? Well, because this post is about Blazor, and I want to build as much as possible using Blazor. This is also a good temporary solution for Blazor applications that need a timer.

Publishing Blazor application to Raspberry Pi

Publishing is currently challenging, as client-side Blazor is still under construction and not all supportive tooling is ready yet. There are a few tricks I had to figure out the hard way, and to save you time I will describe here what I did.

The first thing is about assemblies. If there is a version downgrade, it is handled as an error. For some people it worked to add NuGet packages with the downgraded versions to their solution, but it didn't work out for me. We can ignore warning NU1605 at the project level by modifying the project file of the web application.

<PropertyGroup>
    <TargetFramework>netcoreapp3.0</TargetFramework>
    <LangVersion>7.3</LangVersion>
    <OutputType>Exe</OutputType>
    <NoWarn>$(NoWarn);NU1605</NoWarn>
</PropertyGroup>

The NoWarn tag tells the tools that this warning must not be handled as an error.

On my Raspberry Pi I want to use port 5000 for my blinky application. To make it happen with minimal effort, I added a port binding to the Program.cs file of the web application.
public class Program
{
    public static void Main(string[] args)
    {
        BuildWebHost(args).Run();
    }

    public static IWebHost BuildWebHost(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseConfiguration(new ConfigurationBuilder()
                .AddCommandLine(args)
                .Build())
            .UseUrls("http://*:5000/")
            .UseStartup<Startup>()
            .Build();
}

Now it's time to build the whole solution and then focus on the actual publishing. I know, I know: the last and most annoying steps to do before the LED starts blinking. It's also called the fun of playing with non-stable technology. So, here's the dirtiest part of the publishing process:

- Publish the client-side Blazor application using Visual Studio. Leave all settings as they are, as everything should work out well with the defaults.
- Publish the server-side Blazor application on the command line using the following command: dotnet publish -c Release -r win10-arm /p:PublishSingleFile=true
- Open the C:\ drive of your Raspberry Pi and create a folder named Blinky.
- Copy the files from BlazorHelloBlinky.Server\bin\Release\netcoreapp3.0\win10-arm\ to the Blinky folder on the Raspberry Pi.
- Open the file BlazorHelloBlinky.Client.blazor.config and make it look like here: c:\Blinky\BlazorHelloBlinky.Client.csproj BlazorHelloBlinky.Client.dll
- Copy the wwwroot folder of the client-side Blazor app to the Blinky folder on the Raspberry Pi.
- Copy the dist folder from the folder where the client-side Blazor application was published to the Blinky folder on the Raspberry Pi.
- Connect to your Raspberry Pi using PowerShell.
- Move to the Blinky folder and run BlazorHelloBlinky.Server.exe.
- Open a browser on your coding box and navigate to the application (use the name or IP of your Raspberry Pi).

If there were no problems and all the required files made it to the Raspberry, then Hello, Blinky should be ready for action.

View Comments (1)

Thanks for putting this together! I particularly like the Blazor tie-in. Another Blazor related project I'd like to see is Blazor and Alexa. Just a thought, thanks again.
https://gunnarpeipman.com/blazor-hello-blinky-iot/amp/
Bezier basis classes which maintain knot vectors. More...

#include <GA_BezBasis.h>

Bezier basis classes which maintain knot vectors. The GA_BezBasis class maintains the knot vectors for Bezier splines. The basis consists of:

Definition at line 37 of file GA_BezBasis.h.

The default constructor will choose length/order based on the defaults for the basis type. default: order=4, length=2

Attach another basis to us and grow our basis as a result. The bases must have the same type and order. If "overlap" is true, we overlap the beginning of b with our end.

Spreading makes sense when you can have multiple knots in the basis, and causes identical knots to be spread within range of the neighbouring knots.

The checkValid() methods test to see whether the basis is valid given a curve with

Transitional method while we might need to import data from a GB_Basis. Be careful calling this method as the order must be appropriate for the number of knots. TODO: Remove when no longer necessary.

Definition at line 50 of file GA_BezBasis.h.

The validate() method will force the basis to be valid (if possible). The adapt enum can be used to control the behaviour of this method.
http://www.sidefx.com/docs/hdk/class_g_a___bez_basis.html
Title Description

Given a two-dimensional array in which each row is sorted in increasing order from left to right and each column from top to bottom, and given a number, determine whether the number is in the two-dimensional array.

Consider the following matrix:

[
  [1, 4, 7, 11, 15],
  [2, 5, 8, 12, 19],
  [3, 6, 9, 16, 22],
  [10, 13, 14, 17, 24],
  [18, 21, 23, 26, 30]
]

Given target = 5, return true. Given target = 20, return false.

Solutions to problems

The required time complexity is O(M + N) and the space complexity is O(1), where M is the number of rows and N is the number of columns. The idea is to start from the upper right corner: if the target is greater than the current element, everything remaining in the current row can be ruled out, so move down; if it is smaller, everything below in the current column can be ruled out, so move left. Each comparison with the current element narrows the search range, which is always the block to the lower left of the current position.

Code

Step through it with a breakpoint and the logic is very clear.

package com.janeroad;

/**
 * Created on 2020/4/27.
 *
 * @author LJN
 */
// Given a two-dimensional array in which each row is sorted from left to right
// and each column from top to bottom, determine whether a given number is in the array.
public class test1 {
    public boolean find(int target, int[][] arr) {
        if (arr == null || arr.length == 0 || arr[0].length == 0)
            return false;
        int j = 0, k = arr[0].length - 1;
        while (j < arr.length && k >= 0) {
            if (target == arr[j][k])
                return true;
            else if (target > arr[j][k])
                j++;
            else
                k--;
        }
        return false;
    }

    public static void main(String[] args) {
        int[][] arr = new int[][]{
            {1, 4, 7, 11, 15},
            {2, 5, 8, 12, 19},
            {3, 6, 9, 16, 22},
            {10, 13, 14, 17, 24},
            {18, 21, 23, 26, 30}
        };
        test1 test1 = new test1();
        System.out.println(test1.find(5, arr));
        System.out.println(test1.find(10, arr));
        System.out.println(test1.find(27, arr));
    }
}
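For comparison, the same staircase search can be sketched in Python; the function mirrors the Java version above, and the names are my own:

```python
def find(target, matrix):
    """Search a row- and column-sorted matrix in O(M + N) time.

    Start at the upper right corner: moving down rules out the current
    row, moving left rules out the current column.
    """
    if not matrix or not matrix[0]:
        return False
    row, col = 0, len(matrix[0]) - 1
    while row < len(matrix) and col >= 0:
        value = matrix[row][col]
        if value == target:
            return True
        elif value < target:
            row += 1  # target is bigger than everything left in this row
        else:
            col -= 1  # target is smaller than everything below in this column
    return False


matrix = [
    [1, 4, 7, 11, 15],
    [2, 5, 8, 12, 19],
    [3, 6, 9, 16, 22],
    [10, 13, 14, 17, 24],
    [18, 21, 23, 26, 30],
]
print(find(5, matrix))   # True
print(find(20, matrix))  # False
```

Each comparison discards a whole row or a whole column, so the loop runs at most M + N times.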
https://programmer.ink/think/search-in-two-dimensional-array.html
The author selected the Open Internet/Free Speech Fund to receive a donation as part of the Write for DOnations program.

Introduction

Images make up a large amount of the data that gets generated each day, which makes the ability to process these images important. One method of processing images is via face detection. Face detection is a branch of image processing that uses machine learning to detect faces in images.

A Haar Cascade is an object detection method used to locate an object of interest in images. The algorithm is trained on a large number of positive and negative samples, where positive samples are images that contain the object of interest. Negative samples are images that may contain anything but the desired object. Once trained, the classifier can then locate the object of interest in any new images.

In this tutorial, you will use a pre-trained Haar Cascade model from OpenCV and Python to detect and extract faces from an image. OpenCV is an open-source programming library that is used to process images.

Prerequisites

Step 1 — Configuring the Local Environment

Before you begin writing your code, you will first create a workspace to hold the code and install a few dependencies.

Create a directory for the project with the mkdir command:

Change into the newly created directory:

Next, you will create a virtual environment for this project. Virtual environments isolate different projects so that differing dependencies won't cause any disruptions. Create a virtual environment named face_scrapper to use with this project:

- python3 -m venv face_scrapper

Activate the isolated environment:

- source face_scrapper/bin/activate

You will now see that your prompt is prefixed with the name of your virtual environment:

Now that you've activated your virtual environment, you will use nano or your favorite text editor to create a requirements.txt file.
This file indicates the necessary Python dependencies.

Next, you need to install three dependencies to complete this tutorial:

- numpy: numpy is a Python library that adds support for large, multi-dimensional arrays. It also includes a large collection of mathematical functions to operate on the arrays.
- opencv-utils: This is the extended library for OpenCV that includes helper functions.
- opencv-python: This is the core OpenCV module that Python uses.

Add the following dependencies to the file:

requirements.txt

numpy
opencv-utils
opencv-python

Save and close the file. Install the dependencies by passing the requirements.txt file to the Python package manager, pip. The -r flag specifies the location of the requirements.txt file.

- pip install -r requirements.txt

In this step, you set up a virtual environment for your project and installed the necessary dependencies. You're now ready to start writing the code to detect faces from an input image in the next step.

Step 2 — Writing and Running the Face Detector Script

In this section, you will write code that will take an image as input and return two things:

- The number of faces found in the input image.
- A new image with a rectangular plot around each detected face.

Start by creating a new file to hold your code:

In this new file, start writing your code by first importing the necessary libraries. You will import two modules here: cv2 and sys. The cv2 module imports the OpenCV library into the program, and sys imports common Python functions, such as argv, that your code will use.

app.py

import cv2
import sys

Next, you will specify that the input image will be passed as an argument to the script at runtime. The Pythonic way of reading the first argument is to assign the value returned by sys.argv[1] to a variable:

app.py

...
imagePath = sys.argv[1]

A common practice in image processing is to first convert the input image to gray scale.
This is because detecting luminance, as opposed to color, will generally yield better results in object detection. Add the following code to take an input image as an argument and convert it to grayscale:

app.py

...
image = cv2.imread(imagePath)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

The .imread() function takes the input image, which is passed as an argument to the script, and converts it to an OpenCV object. Next, OpenCV's .cvtColor() function converts the input image object to a grayscale object.

Now that you've added the code to load an image, you will add the code that detects faces in the specified image:

app.py

...
faceCascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = faceCascade.detectMultiScale(
    gray,
    scaleFactor=1.3,
    minNeighbors=3,
    minSize=(30, 30)
)

print("Found {0} Faces!".format(len(faces)))

This code will create a faceCascade object that will load the Haar Cascade file with the cv2.CascadeClassifier method. This allows Python and your code to use the Haar Cascade.

Next, the code applies OpenCV's .detectMultiScale() method on the faceCascade object. This generates a list of rectangles for all of the detected faces in the image. The list of rectangles is a collection of pixel locations from the image, in the form of Rect(x,y,w,h). Here is a summary of the other parameters your code uses:

- gray: This specifies the use of the OpenCV grayscale image object that you loaded earlier.
- scaleFactor: This parameter specifies the rate at which to reduce the image size at each image scale. Your model has a fixed scale during training, so input images can be scaled down for improved detection. This process stops after reaching a threshold limit, defined by maxSize and minSize.
- minNeighbors: This parameter specifies how many neighbors, or detections, each candidate rectangle should have to retain it. A higher value may result in fewer false positives, but a value that is too high can eliminate true positives.
- minSize: This allows you to define the minimum possible object size measured in pixels. Objects smaller than this parameter are ignored.

After generating a list of rectangles, the faces are then counted with the len function. The number of detected faces is then returned as output after running the script.

Next, you will use OpenCV's .rectangle() method to draw a rectangle around the detected faces:

app.py

...
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x+w, y+h), (0, 255, 0), 2)

This code uses a for loop to iterate through the list of pixel locations returned from the faceCascade.detectMultiScale method for each detected object. The rectangle method takes four arguments:

- image tells the code to draw rectangles on the original input image.
- (x, y), (x+w, y+h) are the four pixel locations for the detected object. rectangle will use these to locate and draw rectangles around the detected objects in the input image.
- (0, 255, 0) is the color of the shape. This argument gets passed as a tuple for BGR. For example, you would use (255, 0, 0) for blue. We are using green in this case.
- 2 is the thickness of the line measured in pixels.

Now that you've added the code to draw the rectangles, use OpenCV's .imwrite() method to write the new image to your local filesystem as faces_detected.jpg. This method will return true if the write was successful and false if it wasn't able to write the new image.

app.py

...
status = cv2.imwrite('faces_detected.jpg', image)

Finally, add this code to print the true or false status of the .imwrite() function to the console. This will let you know if the write was successful after running the script.

app.py

...
print("Image faces_detected.jpg written to filesystem: ", status)

The completed file, assembled from the snippets above, will look like this:

app.py

import cv2
import sys

imagePath = sys.argv[1]

image = cv2.imread(imagePath)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

faceCascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = faceCascade.detectMultiScale(
    gray,
    scaleFactor=1.3,
    minNeighbors=3,
    minSize=(30, 30)
)

print("[INFO] Found {0} Faces!".format(len(faces)))

for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x+w, y+h), (0, 255, 0), 2)

status = cv2.imwrite('faces_detected.jpg', image)
print("[INFO] Image faces_detected.jpg written to filesystem: ", status)

Once you've verified that everything is entered correctly, save and close the file.
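As an aside, one way to build intuition for the scaleFactor parameter used above: detectMultiScale rescans the image at a series of progressively smaller scales, each pass shrinking the image by that factor, until the minSize floor is reached. The following stand-alone sketch (plain Python, not OpenCV; the function name is illustrative) lists how many passes a hypothetical 480-pixel image would get:

```python
def detection_scales(image_size, scale_factor=1.3, min_size=30):
    """List the successive image sizes a multi-scale detector would scan.

    Each pass shrinks the image by scale_factor; scanning stops once the
    image would fall below min_size, the smallest detectable object.
    """
    sizes = []
    size = float(image_size)
    while size >= min_size:
        sizes.append(int(size))
        size /= scale_factor
    return sizes

# A smaller scale_factor means more passes: better detection, slower run.
print(len(detection_scales(480, scale_factor=1.05)))
print(len(detection_scales(480, scale_factor=1.3)))
```

This is the usual speed/accuracy trade-off: the 1.05 run makes several times as many passes over the image as the 1.3 run does.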
Note: This code was sourced from the publicly available OpenCV documentation.

Your code is complete and you are ready to run the script.

Step 3 — Running the Script

In this step, you will use an image to test your script. When you find an image you'd like to use to test, save it in the same directory as your app.py script. This tutorial will use the following image:

If you would like to test with the same image, use the following command to download it:

- curl -O

Once you have an image to test the script, run the script and provide the image path as an argument:

- python app.py path/to/input_image

Once the script finishes running, you will receive output like this:

Output
[INFO] Found 4 Faces!
[INFO] Image faces_detected.jpg written to filesystem: True

The true output tells you that the updated image was successfully written to the filesystem. Open the image on your local machine to see the changes on the new file:

You should see that your script detected four faces in the input image and drew rectangles to mark them. In the next step, you will use the pixel locations to extract faces from the image.

Step 4 — Extracting Faces and Saving them Locally (Optional)

In the previous step, you wrote code to use OpenCV and a Haar Cascade to detect and draw rectangles around faces in an image. In this section, you will modify your code to extract the detected faces from the image into their own files.

Start by reopening the app.py file with your text editor:

Next, add the highlighted lines under the cv2.rectangle line:

app.py

...
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x+w, y+h), (0, 255, 0), 2)
    roi_color = image[y:y + h, x:x + w]
    print("[INFO] Object found. Saving locally.")
    cv2.imwrite(str(w) + str(h) + '_faces.jpg', roi_color)
...

The roi_color object plots the pixel locations from the faces list on the original input image. The x, y, h, and w variables are the pixel locations for each of the objects detected by the faceCascade.detectMultiScale method. The code then prints output stating that an object was found and will be saved locally. Once that is done, the code saves the plot as a new image using the cv2.imwrite method.
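The image[y:y + h, x:x + w] expression above is NumPy slicing: rows (the y axis) are indexed first, then columns. A toy version using plain nested lists in place of the image array shows the same arithmetic:

```python
# A toy 6x6 "image" whose pixel values encode their position (value = 10*row + col).
image = [[10 * r + c for c in range(6)] for r in range(6)]

# A detection rectangle in OpenCV's (x, y, w, h) form: (x, y) is the top-left
# corner, with y counting rows downward and x counting columns rightward.
x, y, w, h = 2, 1, 3, 2

# Equivalent of NumPy's image[y:y + h, x:x + w]: slice the rows first,
# then the columns within each selected row.
roi = [row[x:x + w] for row in image[y:y + h]]

print(roi)  # [[12, 13, 14], [22, 23, 24]]
```

Swapping x and y here is a common bug: the crop still succeeds but silently grabs the wrong region, so the row-first convention is worth keeping in mind.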
The cv2.imwrite call appends the width and height of the plot to the name of the image being written. This helps keep the name unique when multiple faces are detected.

The updated app.py script will look like this:

app.py

import cv2
import sys

imagePath = sys.argv[1]

image = cv2.imread(imagePath)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

faceCascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = faceCascade.detectMultiScale(
    gray,
    scaleFactor=1.3,
    minNeighbors=3,
    minSize=(30, 30)
)

print("[INFO] Found {0} Faces!".format(len(faces)))

for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x+w, y+h), (0, 255, 0), 2)
    roi_color = image[y:y + h, x:x + w]
    print("[INFO] Object found. Saving locally.")
    cv2.imwrite(str(w) + str(h) + '_faces.jpg', roi_color)

status = cv2.imwrite('faces_detected.jpg', image)
print("[INFO] Image faces_detected.jpg written to filesystem: ", status)

To summarize, the updated code uses the pixel locations to extract the faces from the image into a new file. Once you have finished updating the code, save and close the file.

Now that you've updated the code, you are ready to run the script once more:

- python app.py path/to/image

You will see similar output once your script is done processing the image:

Output
[INFO] Found 4 Faces!
[INFO] Object found. Saving locally.
[INFO] Object found. Saving locally.
[INFO] Object found. Saving locally.
[INFO] Object found. Saving locally.
[INFO] Image faces_detected.jpg written to filesystem: True

Depending on how many faces are in your sample image, you may see more or less output.

Looking at the contents of the working directory after the execution of the script, you'll see files for the head shots of all faces found in the input image. You will now see head shots extracted from the input image collected in the working directory:

In this step, you modified your script to extract the detected objects from the input image and save them locally.

Conclusion

In this tutorial, you wrote a script that uses OpenCV and Python to detect, count, and extract faces from an input image. You can update this script to detect different objects by using a different pre-trained Haar Cascade from the OpenCV library, or you can learn how to train your own Haar Cascade.
https://www.xpresservers.com/tag/opencv/