Yesterday Brian Hayes wrote a post about the distribution of primes. He showed how you could take the remainder when primes are divided by 7 and produce something that looks like rolls of six-sided dice. Here we apply the chi-square goodness of fit test to show that the rolls are too evenly distributed to mimic randomness. This post does not assume you’ve seen the chi-square test before, so it serves as an introduction to this goodness of fit test.
In Brian Hayes' post, he looks at the remainder when consecutive primes are divided by 7, starting with 11. Why 11? Because it's the smallest prime bigger than 7. Since no prime is divisible by any other prime, all the primes after 7 will have a remainder between 1 and 6 inclusive when divided by 7. So the results are analogous to rolling six-sided dice.
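To make the analogy concrete before running the full experiment, here is a small sketch of my own that prints the first six such "rolls." It uses a simple trial-division prime generator rather than sympy so that it is self-contained.

```python
from itertools import islice

# A simple trial-division prime generator, so this snippet doesn't need sympy
def primes():
    n = 1
    while True:
        n += 1
        if all(n % d for d in range(2, int(n**0.5) + 1)):
            yield n

first_ten = list(islice(primes(), 10))      # 2, 3, 5, 7, 11, 13, 17, 19, 23, 29
rolls = [p % 7 for p in first_ten if p > 7]
print(rolls)                                # [4, 6, 3, 5, 2, 1]
```

Each remainder lands between 1 and 6, just like a die roll.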
The following Python code looks at prime remainders and (pseudo)random rolls of dice and computes the chi-square statistic for both.
First, we import some functions we’ll need.
from sympy import prime
from random import random
from math import ceil
The function prime takes an argument n and returns the nth prime. The function random produces a pseudorandom number between 0 and 1. The ceiling function ceil rounds its argument up to an integer. We'll use it to convert the output of random into dice rolls.
In this example we'll use six-sided dice, but you could change num_sides to simulate other kinds of dice. With six-sided dice, we divide by 7, and we start our primes with the fifth prime, 11.
num_sides = 6
modulus = num_sides + 1

# Find the index of the smallest prime bigger than the modulus
index = 1
while prime(index) <= modulus:
    index += 1
We’re going to take a million samples and count how many times we see 1, 2, …, 6. We’ll keep track of our results in an array of length 7, wasting a little bit of space since the 0th slot will always be 0. (Because the remainder when dividing a prime by a smaller number is always positive.)
# Number of samples
N = 1000000

observed_primes = [0]*modulus
observed_random = [0]*modulus
Next we “roll” our dice two ways, using prime remainders and using a pseudorandom number generator.
for i in range(index, N+index):
    m = prime(i) % modulus
    observed_primes[m] += 1

    m = int(ceil(random()*num_sides))
    observed_random[m] += 1
The chi-square goodness of fit test depends on the observed number of events in each cell and the expected number. We expect 1/6th of the rolls to land in cell 1, 2, …, 6 for both the primes and the random numbers. But in a general application of the chi-square test, you could have a different expected number of observations in each cell.
expected = [N/num_sides for i in range(1, modulus)]
The chi-square test statistic sums (O − E)²/E over all cells, where O stands for "observed" and E stands for "expected."
def chisq_stat(O, E):
    return sum( [(o - e)**2/e for (o, e) in zip(O, E)] )
Finally, we compute the chi-square statistic for both methods.
ch = chisq_stat(observed_primes[1:], expected[1:])
print(ch)

ch = chisq_stat(observed_random[1:], expected[1:])
print(ch)
Note that we chop off the first element of the observed and expected lists to get rid of the 0th element that we didn’t use.
When I ran this I got 0.01865 for the prime method and 5.0243 for the random method. Your results for the prime method should be the same, though you might have a different result for the random method.
Now, how do we interpret these results? Since we have six possible outcomes, our test statistic has a chi-square distribution with five degrees of freedom. It's one less than the number of possibilities because the total counts have to sum to N; if you know how many times 1, 2, 3, 4, and 5 came up, you can calculate how many times 6 came up.
A chi-square distribution with ν degrees of freedom has expected value ν. In our case, we expect a value around 5, and so the chi-square value of 5.0243 is unremarkable. But the value of 0.01865 is remarkably small. A large chi-square statistic would indicate a poor fit, the observed numbers being suspiciously far from their expected values. But a small chi-square value suggests the fit is suspiciously good, closer to the expected values than we'd expect of a random process.
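As a quick sanity check on that expected value, here is a simulation sketch of my own (not from the original post): a chi-square variable with 5 degrees of freedom is the sum of squares of 5 standard normal variables, so the average of many such draws should land near 5. The seed and sample size are arbitrary choices.

```python
import random

random.seed(42)

# A chi-square(5) variable is the sum of squares of 5 standard normals,
# so its long-run average should be close to its degrees of freedom, 5.
n = 100_000
draws = (sum(random.gauss(0, 1)**2 for _ in range(5)) for _ in range(n))
mean = sum(draws) / n
print(round(mean, 2))   # close to 5
```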
We can be precise about how common or unusual a chi-square statistic is by computing the probability that a sample from the chi-square distribution would be larger or smaller. The cdf gives the probability of seeing a value this small or smaller, i.e. a fit this good or better. The sf gives the probability of seeing a value this large or larger, i.e. a fit this bad or worse. (The scipy library uses sf for "survival function," another name for the ccdf, the complementary cumulative distribution function.)
from scipy.stats import chi2
print(chi2.cdf(ch, num_sides-1), chi2.sf(ch, num_sides-1))
This says that for the random rolls, there's about a 59% chance of seeing a better fit and a 41% chance of seeing a worse fit. Unremarkable.
But it says there’s only a 2.5 in a million chance of seeing a better fit than we get with prime numbers. The fit is suspiciously good. In a sense this is not surprising: prime numbers are not random! And yet in another sense it is surprising since there’s a heuristic that says primes act like random numbers unless there’s a good reason why in some context they don’t. This departure from randomness is the subject of research published just this year.
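If you'd like to check the 2.5-in-a-million figure without scipy, here is a self-contained sketch that computes the chi-square CDF from the standard series for the regularized lower incomplete gamma function. The function name and convergence tolerance are my own choices, not anything from the post.

```python
from math import exp, lgamma, log

def chi2_cdf(x, k):
    """P(X <= x) for a chi-square variable with k degrees of freedom,
    via the series for the regularized lower incomplete gamma P(k/2, x/2)."""
    a, t = k / 2, x / 2
    # gamma(a, t) = t^a e^{-t} * sum_{n >= 0} t^n / (a (a+1) ... (a+n))
    term = 1.0 / a
    total = term
    n = 0
    while term > 1e-16 * total:
        n += 1
        term *= t / (a + n)
        total += term
    return exp(a * log(t) - t - lgamma(a)) * total

print(chi2_cdf(0.01865, 5))   # about 2.5e-6 -- the "2.5 in a million" figure
print(chi2_cdf(5.0243, 5))    # about 0.59 for the random rolls
```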
If you look at dice with 4 or 12 sides, you get a suspiciously good fit, but not as suspicious as with 6 sides. But with 8 or 20-sided dice you get a very bad fit, so bad that its probability underflows to 0. This is because the corresponding moduli, 9 and 21, are composite, which means some of the cells in our chi-square test will have no observations. (Suppose m has a proper factor a. Then if a prime p were congruent to a mod m, p would have to be divisible by a.)
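To see those empty cells concretely, here is a small sketch (the trial-division helper is mine, just for illustration): primes bigger than 9 can only land in residue classes mod 9 that are coprime to 9, so cells 3 and 6 never receive an observation.

```python
# Trial-division primality test, just for illustration
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

# Which residues mod 9 actually occur among primes from 11 to 999?
seen = {p % 9 for p in range(11, 1000) if is_prime(p)}
print(sorted(seen))   # [1, 2, 4, 5, 7, 8] -- cells 3 and 6 stay empty
```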
Update: See the next post for a systematic look at different moduli.
You don't have to use "dice" that correspond to regular solids. You could consider 10-sided "dice," for example. For such numbers it may be easier to think of spinners than dice: a spinner with 10 equal arc segments that it could land in.
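A sketch of such a 10-segment spinner, using the same ceiling trick as the dice code above (the seed and sample size are arbitrary choices of mine):

```python
import random
from math import ceil
from collections import Counter

random.seed(1)
num_sides = 10

# Spin the 10-segment spinner many times
spins = [int(ceil(random.random() * num_sides)) for _ in range(100_000)]
counts = Counter(spins)
print(sorted(counts))   # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```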
Related post: Probability that a number is prime
8 thoughts on “Chi-square goodness of fit test example with primes”
Hi John,
For 8- and 20-sided dice, corresponding to the integers mod 9 and 21, just making sure you excluded the composite residues before testing the goodness of the fit?
Also, are you aware of whether this relates to the race between the primes equivalent to 1 and 3 modulo 4? There are a number of famous results taught in a first or second semester of number theory about the difference between the amount of the two types, how far apart they can be from each other, and how often it switches sign. Does this constrain the results to look different from what you would expect of a coin flip?
Best regards,
Dan
Dan,
I did not exclude the composite residues. This is why the chi-square statistics are huge for these dice. I’m about to update the post to point this out.
I think there is a connection to the imbalance of primes congruent to 1 and 3 mod 4. At least that result is mentioned in the introduction of articles about the new results.
Why aren’t you using random.randrange(1, num_sides) for the dice rolls? AFAIU the math.ceil() contraption is not perfectly evenly distributed because of rounding errors.
Marius: There's nothing wrong with using randrange. I'm in the habit of asking for a uniform random value and doing everything else I need from there. This means I have fewer APIs to remember, and often there's not an API for what I want to do.
There’s nothing wrong with using the ceiling function either. It doesn’t add any more imperfection than was there to start with.
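For what it's worth, both approaches produce the same six outcomes. The one pitfall with randrange is that its upper bound is exclusive, so a six-sided die needs randrange(1, 7), not randrange(1, 6). A quick sketch of my own comparing the two:

```python
import random
from math import ceil

random.seed(0)
n = 60_000

# Two ways to roll a six-sided die. Note randrange's upper bound is
# exclusive: randrange(1, 6) would never roll a 6.
rolls_ceil = [int(ceil(random.random() * 6)) for _ in range(n)]
rolls_rand = [random.randrange(1, 7) for _ in range(n)]

print(set(rolls_ceil) == set(rolls_rand) == {1, 2, 3, 4, 5, 6})   # True
```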
Hi John,
Unless I am missing something, expected[1:] is only five elements long, causing zip() to trim the last observed count, and the chi-square to be incorrect.
(R says X-squared = 0.02012, which doesn’t invalidate the conclusions.)
If you want to mimic 8-sided dice, you could try looking at a reduced residue system modulo 15, or 16, or 20, or 24. E.g., modulo 15, all primes (except 3 and 5) have remainders 1, 2, 4, 7, 8, 11, 13, or 14.
Similarly, for 20-sided dice, you can go modulo 25, or 44.
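A quick sketch of Dan's mod-15 suggestion (the primality helper is mine, just for illustration): the eight residues coprime to 15 play the role of the die's faces, and every prime other than 3 and 5 lands on one of them.

```python
from math import gcd

# The residues coprime to 15 form the 8 "faces" of this die
faces = [r for r in range(1, 15) if gcd(r, 15) == 1]
print(faces)   # [1, 2, 4, 7, 8, 11, 13, 14]

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

# Every prime other than 3 and 5 lands on one of those faces
print(all(p % 15 in faces for p in range(7, 2000) if is_prime(p)))   # True
```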
The QFrame class is the base class of widgets that can have a frame. More...
#include <qframe.h>
Inherits QWidget.
Inherited by QGrid, QGroupBox, QHBox, QLCDNumber, QLabel, QMenuBar, QPopupMenu, QProgressBar, QScrollView, QSpinBox, QSplitter, QTableView and QWidgetStack.
Examples: popup/popup.cpp movies/main.cpp scrollview/scrollview.cpp
Plain - the frame and contents appear level with the surroundings
Raised - the frame and contents appear raised
Sunken - the frame and contents appear sunken
Shadow interacts with QFrame::Shape, the lineWidth() and the midLineWidth(). The picture of the frames in the class documentation may illustrate this better than words.
See also QFrame::Shape, lineWidth() and midLineWidth().
NoFrame - QFrame draws nothing
Box - QFrame draws a box around its contents
Panel - QFrame draws a panel such that the contents appear raised or sunken
WinPanel - like Panel, but QFrame draws the 3D effects the way Microsoft Windows 95 (etc.) does
HLine - QFrame draws a horizontal line that frames nothing (useful as separator)
VLine - QFrame draws a vertical line that frames nothing (useful as separator)
StyledPanel - QFrame calls QStyle::drawPanel()
PopupPanel - QFrame calls QStyle::drawPopupPanel()
Constructs a frame widget with frame style NoFrame and a 1-pixel frame width.
The last argument exists for compatibility with Qt 1.x; it no longer has any meaning.
The parent, name and f arguments are passed to the QWidget constructor.
Returns the rectangle inside the frame.
See also frameRect() and drawContents().
[virtual protected] Virtual function that draws the frame contents. Reimplementations should draw only inside contentsRect(). The default function does nothing.
See also contentsRect(), QPainter::setClipRect() and drawContentsMask().
Reimplemented in QLabel, QMenuBar, QLCDNumber, QProgressBar and QPopupMenu.
[virtual protected]
Virtual function that draws the mask of the frame's contents. Reimplemented in QLabel and QProgressBar.
[virtual protected]
Draws the frame.
[virtual protected]
Virtual function that draws the mask of the frame's frame.
If you reimplemented drawFrame(QPainter*) and your widget should support transparency you probably have to re-implement this function as well.
See also drawFrame(), updateMask(), QWidget::setAutoMask() and QPainter::setClipRect().
[virtual protected]
Virtual function that is called when the frame style, line width or mid-line width changes.
This function can be reimplemented by subclasses that need to know when the frame attributes change.
The default implementation calls update().
Reimplemented in QHBox, QGrid, QScrollView and QWidgetStack.
Returns the frame rectangle.
The default frame rectangle is equivalent to the widget rectangle.
See also setFrameRect().
Returns the frame shadow value from the frame style.
See also setFrameShadow(), frameStyle() and frameShape().
Returns the frame shape value from the frame style.
See also setFrameShape(), frameStyle() and frameShadow().
Returns the frame style.
The default value is QFrame::NoFrame.
See also setFrameStyle(), frameShape() and frameShadow().
Examples: scrollview/scrollview.cpp
Returns the width of the frame that is drawn.
Returns the line width. (Note that the total line width for HLine and VLine is given by frameWidth(), not lineWidth().)
The default value is 1.
See also setLineWidth(), midLineWidth() and frameWidth().
Examples: scrollview/scrollview.cpp
Returns the width of the margin. The margin is the distance between the innermost pixel of the frame and the outermost pixel of contentsRect(). It is included in frameWidth().
The margin is filled according to backgroundMode().
The default value is 0.
See also setMargin(), lineWidth() and frameWidth().
Examples: scrollview/scrollview.cpp
Returns the width of the mid-line.
The default value is 0.
See also setMidLineWidth(), lineWidth() and frameWidth().
Examples: scrollview/scrollview.cpp
[virtual protected]
Handles paint events for the frame.
Paints the frame and the contents.
Opens the painter on the frame and calls first drawFrame(), then drawContents().
Reimplemented from QWidget.
[virtual protected]
Handles resize events for the frame.
[virtual]
Sets the frame rectangle to r.
Reimplemented in QWidgetStack.
Sets the frame shadow value of the frame style.
See also frameShadow(), frameStyle() and setFrameShape().
Sets the frame shape value of the frame style.
See also frameShape(), frameStyle() and setFrameShadow().
[virtual]
Sets the frame style to style.
The style is the bitwise OR between a frame shape and a frame shadow style. See the illustration in the class documentation.
The frame shapes are:
NoFrame draws nothing. Naturally, you should not specify a shadow style if you use this.
Box draws a rectangular box. The contents appear to be level with the surrounding screen, but the border itself may be raised or sunken.
Panel draws a rectangular panel that can be raised or sunken.
StyledPanel draws a rectangular panel with a look depending on the current GUI style. It can be raised or sunken.
PopupPanel is used to draw a frame suitable for popup windows. Its look also depends on the current GUI style, usually the same as StyledPanel.
WinPanel draws a rectangular panel that can be raised or sunken, very like those in Windows 95. Specifying this shape sets the line width to 2 pixels. WinPanel is provided for compatibility. For GUI style independence we recommend using StyledPanel instead.
HLine draws a horizontal line (vertically centered).
VLine draws a vertical line (horizontally centered).
The shadow styles are:
Plain draws using the palette foreground color (without any 3D effect).
Raised draws a 3D raised line using the light and dark colors of the current color group.
Sunken draws a 3D sunken line using the light and dark colors of the current color group.
Examples: layout/layout.cpp customlayout/main.cpp popup/popup.cpp xform/xform.cpp cursor/cursor.cpp scrollview/scrollview.cpp
[virtual]
Sets the line width to w.
See also frameWidth(), lineWidth() and setMidLineWidth().
Examples: xform/xform.cpp scrollview/scrollview.cpp
[virtual]
Sets the width of the margin to w.
See also margin() and setLineWidth().
Examples: scrollview/scrollview.cpp
[virtual]
Sets the width of the mid-line to w.
See also midLineWidth() and setLineWidth().
Examples: scrollview/scrollview.cpp
[virtual]
Reimplemented for internal reasons; the API is not affected.
Reimplemented from QWidget.
[virtual]
Reimplemented for internal reasons; the API is not affected.
Reimplemented from QWidget.
[virtual protected].
This file is part of the Qt toolkit, copyright © 1995-2005 Trolltech, all rights reserved. | http://doc.trolltech.com/2.3/qframe.html | crawl-002 | refinedweb | 978 | 61.53 |
Linux Journal interviews Larry Wall 75
jbc writes "Linux Journal's cover story for May was an interview with Larry Wall, which is now online. Some good stuff on the future of Perl, whether or not Open Source is a passing fad, and why Activestate is not necessarily evil. "
Re:PERL vs PHP (Score:1)
It also smells great!
Re:The problem with Perl. (Score:1)
Huh huh, OK Beavis.
#.
But there were and are design principles. I imagine you know this, though.
# Obviously I was stating my opinion, not trying to offer a proof of why
# Perl is bad.
It was not obvious. Usually, in my experience, when intelligent people express their opinions publicly, they try to offer some reason why others should agree. I am not insinuating you are unintelligent, but the opposite: I assumed that because you are intelligent, you were trying to offer some evidence of why Perl is "bad". I assumed incorrectly. I won't do so again.
# The ``X'' that Perl is is ``sucky.'' I hold that truth to be
# self-evident.
Nonsense. That all men are created equal and are endowed by their Creator with certain unalienable rights can be said to be self-evident (and in fact, I believe it was). That Perl is "sucky" is not. Are you kidding here, or are you trying to hurt your argument^Wopinion?
# But don't confuse the act of giving the programmer freedom of
# expression, and the act of giving the programmer a simple, consistent,
# easy-to-undertand tool. You can do both.
I don't find consistency to be that important, or simplicity. Some of our more complex and inconsistent tools are our most powerful. Like Unix itself. Then again, you probably don't like Unix.
Regardless, Perl is pretty simple. It is also very complex. It is what you want. Babies can speak English, and yet it is rich enough for the most complex expression of ideas.
# Just because a tool is a simple
# and easy-to-understand device (like a lever, or rope) doesn't mean it
# restricts the artistic expression of those using it. Help, help, I'm
# being oppressed!
No, but a canvas and paint offers more room for expression than a rope does. That's why the Louvre has ropes protecting the paintings and not vice versa.
#.
If you are talking about the approach of some Perl programmers, that's true. If you are talking about the approach of the developers of perl, that's false.
#.
Perl does not encourage bad programming, it attracts bad programmers (and good ones, too). The language cannot be reasonably faulted for allowing bad programmers to program badly with ease. You are wrong to say that regexes are easier than other methods.
And, sure, bit manipulation in Perl is not the easiest (though if you understand bits, it isn't exactly difficult, either). But substr and rindex and index and split are very simple functions to use. I daresay they are significantly easier to use than regexes. The problem with people using regexes when they shouldn't is not technical or linguistic but social. People are incorrectly taught, one way or another, to use regexes when they don't belong. Again, you're talking about bad programmers. They are bad in any language. I'd rather have bad programmers using Perl regexes inappropriately than using C pointers inappropriately.
#.
You have not demonstrated that TMTOWTDI is the enemy of maintainability. Maybe it is just your opinion, but unless you are going to back it up, don't expect anyone to care what your opinion is.
Postmodern? Come on... (Score:2)
The Perl motto is cute, but it implicitly sets up a strawman, because no language I can think of says "There's Only One Way To Do It." All languages, including Perl, put some restrictions on the programmer. That's pretty much what the word "language" means. Yes, in Python, you have to use whitespace as syntax. Oh, but in Perl you can't use whitespace as syntax. There may be more than one way to do it in Perl, but that ain't one of them. Yes, in Lisp you have to use parentheses. Oh, but in Perl you have to use "$", "@", and "%" to start names. Unpunctuated names aren't an option! There's only one way to do it!
Sorry for venting, but I get a little tired of the posturing in the Perl camp. They try to sell the language by implying that other languages are uncool, which is itself very uncool: "many programmers are still slaves of the cyber police." Give me a break.
Re:PERL vs PHP (Score:1)
--tom

I do write a lot more Perl than I have
to read but I've been told by those
who have to maintain my old code that
they can tell what is going on and modify
it when they need to. It's
the only language that I trip across
old embarrasing code of mine in on the
net that sucks rocks, but it does still
work.
The "write only" aspect of Perl can be seductive
though. At the moment I'm writing piles
of code for the pharmaceutical biz and I
try really hard to write in "baby talk" so that
if I get hit by a meteor any random hacker
with a smattering of perl can pick it up
and run with it, but.. jeez. I could compress
that 200 line program down into something
that would fit in a
grrls would think I was k3wl... but that's the
Dark Side.
garyr
Larry Wall is Cool (Score:3)
We could have Larry, Linus, the Samba Team, etc. A new face every month. And a couple good quotes to go along with them...
In C++, the user is always wrong! (Score:1)
(If you do program in C++, I highly recommend you read Scott Meyers' C++ books. They are indispensable.)
Re:Python and Orwell (Score:1)
I was more surprised to see Wall focus on the rather minor feature of whitespace; syntax is only part of design philosophy, and Python's primary feature isn't its use of whitespace, any more than Algol-68's primary feature was its introduction of the "fi" keyword. Instead, Python's primary idea is really namespaces. Modules are namespaces. Classes are a bunch of namespaces with rules on how to look through them, and instances are namespaces on top of classes. It's noteworthy that Python's object model could be mapped to Perl fairly straightforwardly, influencing Perl 5's OO features.
Re:Python and Orwell (Score:1)
Rather, it's a consequence of the observation that people indent their programs anyway, even though the languages which use various symbols for marking "blocks" (whether {/}, begin/end, if/fi/do/done, ...) don't require it. You could say Python enforces the readability people normally would add to their code.
(The only reasonable argument against it, IMHO, is that it makes looking for "block end" slightly more difficult than to look for '}'.)
Re:PERL vs PHP (Score:1)
Perl Rules. (Score:2)
Once you get over the initial confusion of what all those weird slashes mean, Visual Basic starts to look like a bad joke, at least for CGI programming.
Larry Wall, while I have never met him, and while he may have a nasty habit of making religious references all the time, is, in my opinion, a damn good guy, who has given me the most useful software tool I ever came across.
-Pete
success despite quirks (Score:2)
Perl is a very useful tool, and when it came out it was a big improvement over awk/sed/sh, and it was free to boot. That's why it succeeded. Nowadays, most people simply use it because it's there (among other things, it's the only CGI language my ISP offers).
But far from being an advantage, Perl's haphazard error checking and "postmodern" syntax do cause problems in practice. For example, one of the big government projects Larry mentions lost a lot of data because Perl did not flag bad numerical input as an error but simply uses 0 by default (a design flaw that has since been partially corrected through addition of "-w"). And even the Perl system itself has problems with its syntax--there are many circumstances where Perl compiler or runtime error messages are way off.
The lesson I hope budding language designers will take away from Perl is that providing useful functionality in a timely manner and a free package is more important for success than clean design or robustness. But within the constraints of time to market, one should still strive for clean, robust designs, or one should at least aim to fix things up later. To Perl's credit, many of the initial design problems have been fixed.
And while Larry seems to think languages like Lisp and Python were designed by some CS types for thought control, reality is that, for example, Lisp evolved over nearly 40 years to meet the needs of its user community (and, in a twist of irony, Perl's syntax itself has followed a little bit of the same trajectory already; let's see where it ends up in another 30 years).
To me, Perl succeeded despite its quirks and problems, not because of them.
Regular expressions!!! (Score:1)
Re:Perl for the Palm? (Score:1)
Except that writing code with Graffiti would sort of suck.
Re:PERL vs PHP (Score:1)
you misunderstand Python's indentation (Score:2)
Re:Python and Orwell (Score:2)
Re:PERL vs PHP (Score:1)
Perl - Poor Excuse for a Real Language (Score:2)
But perl makes for buggy code. Languages like Eiffel have endeavoured to make coding more rigourous. And the power, conciseness and structured way of thinking of Lisp makes for elegant and thus maintainable code.
But perl it's too easy to write bad code. In some cases it just plain encourages bad code. I think the main reason perl is popular is just because it talks to everything. And it talks to everything because it's popular. It lulls you into a false sense of security. Doing something small in perl is quick. So you start using it. Soon you're doing something big and serious and still using it. And then you're in a bit of a mess.
If perl is post-modern - then give me modern please. The purity of careful thinking really is better than a hap-hazard, do it in hundreds of different ways thinking.
I try to use Scheme and Guile whenever I can. I think it can do as good or better job for everything (I'm not a post-modern thinker). But to do everything better it needs to have as many modules as Perl. It's got quite a few, but not as many as perl yet. So that's why I use perl.
Re:Fundamentalist Christian (Score:1)
Re:Perl is a sick,twisted,perverse,dominatrix lang (Score:1)
by psycho-looser-poor-ass-excuse-for-a-programmer.
What I was saying is that even if they do use
terrible names the "line noise" of perl at least
gives you a hint. sure, $first_name could be
a reference to an array of first names but at least its a hint.
garyr
too critical (Score:2)
But having said that, I still think among scripting languages, Perl is one of the most useful, and it gets a lot of things right, and I certainly appreciate all the effort that has gone into it.
Larry Wall Interview (Score:1)
I liked the point where he was looking for a name for his language. To actually fumble through the dictionary at every 4 letter word. THAT'S dedication.
"We're not just talking about dinosaurs here, but also snail darters and cheetahs and a bazillion beetles in Brazil---not to mention Visual Basic."
HEH.
-- Give him Head? Be a Beacon?
Interesting.. (Score:2)
Jeff
BTW> Lucky is a monadic parser. Monads are a very powerful way to resolve the contradictions between functional and object oriented programming philosophies. A monadic parser has all the elegance, flexibility, and raw power of a functional parser without the grammar constraints normally imposed on a functional parser.
Re:Why Perl is popular (Score:1)
With one exception, you've just described JavaScript.
The one exception is convenient access to files, and that's the killer that has prevented it from being used as a batch-mode shell scripting language until now.
I really wish someone would fix that, because I would love to be writing scripts in JS instead of Perl. (It wouldn't take much: just a little syntactic sugar to make the act of listing, opening, and closing files be a less verbose process.)
Here's a telling example of the differences between the two languages: both allow you to treat everything as text manipulation (streams of bytes, regexps, etc.) And both allow you to construct objects and assign them behavior and manipulate things at a high level.
But there is a particular focus built into the language. Sure, there's ``more than one way to do it,'' but the nature of Perl, the nature of the language's shortcuts, encourages you think about things as matching patterns in text, instead of as communication between objects. While allowing both, JS focuses on the latter.
``All the world's a stream of bytes'' is the most horrible and damaging part of Unix's legacy. It has stunted a whole generation of programmers.
Or At It. (Score:1)
Re:Postmodern? Come on... (Score:1)
Larry Wall's Talk on Perl as Postmodern Language (Score:2)
Re:Larry the Missionary (Score:1)
They probably aren't in greater use because some people feel that if you mention the bible, or religion, in daily use, you must be a missionary or a bible-thumper. hehe
:-)
Very good article... (Score:4)
For those who have read _The design and evolution of C++_ by Stroustroupp (sp??) it is interesting to note that Perl and C++ share some design philosophies. Like "the user is always right" school of thought. Maybe that's why both of these languages are popular?
Anyway, you gotta admit larry Wall has a pretty big vocabulary:
and etc. etc... I sure would have got higher scores in school had I used words like that!
Re:The problem with Perl. (Score:1)
I'm not saying you need to like Perl or its design. You don't. But I would just hope that if you are going to be making a formal argument, as you appear to be doing, that you support your argument a little bit better than you have. Your definition of what a programming language should be is rejected by Larry Wall and the developers of perl, as well as many of the users of perl. So to use that definition as a basis for discussion isn't interesting.
Of course, Perl technically _is_ a formal mathematical system. It's just a really complex one with lots of quirks, idioms, and apparent inconsistencies. But of course its _spirit_ is nothing resembling a formal mathematical system. This is a good thing, your unaccepted definition of "programming languages" notwithstanding.
You say this fuzziness is inappropriate for a computer language. I disagree. I think it is highly apprpriate. Programming is, to me, art. It is a craft. And Perl lets me be expressive. This is important, and in my opinion, Perl's "fuzziness" is essential to the goal of allowing users to be optimally expressive. And the process of creation is just as important (to me) as the end goal.
And since you don't give any real support for your opinion, mine is just as good as yours, except I like mine better, so I win.
pudge@pobox.com
Re:Interesting.. (Score:1)
--tom
The only time I've ever sworn at perl... (Score:1)
After a few hours of banging my head against the wall, I discover this little gem buried on p. 70 of the Camel Book:
If the PATTERN evaluates to a null string, the last successfully executed regular expression not hidden within an inner block
... is used instead.
D'oh!
--
use English (Score:1)
perldoc English and perldoc perlvar will tell you more.
After all, there is more than one way to do it
:-)
Re:Perl == postmodern if postmodern == archaic (Score:1)
Re:Very good article... (Score:1)
C++ ? User always right? Yeah, right.
Re:Interesting.. (Score:1)
Anywho, the relevance of this to the programmer world is that this stuff moves us closer to a functional language which agrees with at least part of the Perl philosophy and feel. Postmodernism improved by modernist research.. sorta.. of course that's not at all ruled out by postmodernism. If anything Perl's dependence on its imperative style is keeping it from being more postmodern. Er.. Well.. Whatever.. Maybe I'm spouting bullshit here.. I'm never quite sure about these things..
Perl for the Palm? (Score:1)
his Palm Pilot. Does that mean we can hope to see a subset of Perl on the Pilot? God I hope so. That would rule.

I'm going to have to object to your line of reasoning here. Any psycho-looser-poor-ass-excuse-for-a-programmer that names their variables "array" or their arrays "variable" deserves every form of torture their maintainers can dream up.
Maintainers can get downright creative when it comes to thinking up new and interesting ways to inflict pain on the original coder. Be nice to the people who may someday have to read your code. Comments and a clean writing style are all that stand between you and a severe thwacking.
Re:The problem with Perl. (Score:1)
Programming languages let you do all sorts of things that violate "purity". What do you think a type cast is? C will let you shoe horn anything you want as long as it doesn't cause a segfault.
Perl is by no means fuzzy. Every operation is founded on a set of principles that can be deduced by reading the source.
Why Perl is popular (Score:1)
That said, Perl has some nice features that could be part of any language:
There are quite a few things I don't like in Perl, e.g., heavy use of $_, uncommented regular expressions, and omitting optional parentheses can make Perl programs look like so much gibberish. It would have been nice if one goal of Perl was to ensure that any program looks semi-sensible to a C programmer, but I guess you can't have everything you want unless you create your own language.
Re:Perl for the Palm? (Score:1)
Larry is Cool (Score:3)
I don't know how else to say it - his presentations at conferences are so offbeat, his interviews always entertaining.
Of course, perl is offbeat and entertaining. Its bizarre and obscure at times, but anyone who can write it with skill usually swears by it.
Re:Perl is a sick,twisted,perverse,dominatrix lang (Score:1)
I've spent a bit of time beating my head against complex data structures in Perl, and while I'm sure I'll get the hang of these things eventually, I've had to make liberal use of Data::Dumper to prevent the syntax from biting me.
Of course, since Unicode support is coming to Perl, and all of the Zapf Dingbats are in Unicode, we can just extend the syntax to...
Re:PERL vs PHP (Score:1)
PHP only applies to web content.
Perl is popular because of its power, extensibility, flexibility, openness, and because
it's been well marketed (yes, even as it is free).
The NSA does use perl. (Score:3)
and furthermore... (Score:1)
I like Perl a lot. It's easy, fast, and cool. But I too weary at times of the idea that Perl is "better" than other languages because it's so free-form.
The only way post-modernism -- including Perl -- can really work is if there is a "modernist" foundation under it. When everyone literally abandons ethical standards (because "there's more than one way to do it"), we have chaos. So too with Perl: it is dependent upon an orthogonal OS to even run! Or shall we rewrite Linux in Perl?
His principles can only be applied so far. Which means they're not useful for everything. Just like Perl.
Re:PERL vs PHP (Score:1)
$_="It's line noise!!\r"; $|=1;
$e='s/([\x41-\x5a])(\W*)([\x61-\x7a])/\l\1\2\u\
print while select('','','',.1),eval $e || $e=~tr [4567lu] [6745ul];
Re:Perl is a sick,twisted,perverse,dominatrix lang (Score:1)
As a poor commenter myself, I can assure you that anything I write in any language is going to be difficult for someone to understand. This doesn't make my code bad, the application bad, or the language I wrote in bad. It just makes me bad!
;)
Lisp -- you know the acronym for Lisp, right ;) (Score:1)
Doing something small in perl is quick. So you start using it. Soon you're doing something big and serious and still using it. And then you're in a bit of a mess.
I'm not really objecting to your point of view, I've used a lot of languages, and all of them have their strong (well-loved) and weak (hated) points.
I do note though, on the particular point above, that you're not making a straight-up comparison.
I assume that when you do something big/serious in Lisp, you're making use of various constructs in the language such as (depending on dialect) packages, closures (lambda), OO (CLOS/Flavors), etc. You could certainly write a lot of conditional sexp's instead, but you want these features expressly because they improve the quality of the code.
Well, gee, Perl has these constructs too. (You just never see them in one-off, quick and dirty CGI scripts). It simply takes an investment of time before you learn how to use them.
must be new to perl (Score:1)
or oblique biblical references
is like Python without a parrot
skit or ip framing without reference
to carrier pigeons.
Really, I'm the canonical agnostic
Secular Humanist but Larry is the
best advertisement for Christianity
I've ever seen. [no one ever expects
the Spanish Inquisition - stop that
Guido...] His faith is an intregral
part of everything he writes. Maybe
this is what he meant when he said
that Perl has done more for missionaries
than if he had become one. He is a great
book review for his Author.
garyr
The problem with Perl. (Score:2)
Larry has a consistent philospohy behind the design of Perl (or rather, the intentional lack of overall design.) It's an interesting idea, certainly, and one that I think hasn't been consciously applied to a programming language before. However, if Perl is the kind of language that that approach produces, then I think the experiment is a failure.
While Larry is a smart fellow, the problem is that he is also a linguist. And having spent a few years working with linguists (doing a natural-language understanding boondoggle), my experience is that linguists should never ever be allowed near computers.
Computer languages aren't really languages, not in the sense that linguists know languages. Computer languages are formal mathematical systems, which are a totally different beast. Computers are very literal-minded, not fuzzy at all, so one must talk to them precisely. The fuzziness that appears in human languages is inappropriate in a computer language.
The ``language'' of mathematics doesn't have linguistic drift. Where is the ``slang'' in arithmetic? Where are the ``dialects'' of algebra? It doesn't happen, because mathematical systems exist by design, not by evolutionary pressure and random mutation.
Accretion works well for some things, like DNA, forests, and cities. But I for one am glad that my car's engine was designed to be efficient and self-consistent, and I prefer the software I use (including languages) to be the same, rather than a sprawling Winchester Mystery House [winchester...yhouse.com] of a language like Perl.
Of course, I end up using Perl anyway, because often it's the most convenient tool for the job for any number of not-very-good reasons. The way Perl manages to suck so bad and yet still be marginally useful is probably what makes it the perfect complement to Unix itself. Worse is Better [jwz.org], after all.
Actually, now that I think about it, Tcl is even more horrible than Perl, so it's a wonder it hasn't taken over.
Maximal obscurity! Now!
Re:Larry the Missionary (Score:1)
Larry loves to slip in to his speeches and stuff references to what he believes. And whether you like Jesus Christ or not, you have to admit, the man is a man of integrity, and believes what he says he believes. Go big camel!
----- if ($anyone_cares) {print "Just Another Perl Newbie"}
Re:Larry the Missionary (Score:1)
PERL vs PHP (Score:1)
Why PERL/DBI is better that PHP : (Score:1)
IMHO Perl/DBI is better that PHP :
1) DBI Module
With DBI (the DataBase Interface module for Perl)
Your perl source code are independent from the Database. If your database swap from MySQL to Informix (or Oracle) You don't have to change the code. PHP use native call, it's more speeder but not very portable.
2) Reglular Expression
4) Modules (CGI, FTP, MAIL, IRC/API, USENET)
A lot of interesting module are available in perl for easy Gate to do accross a Web-Database
eg :
Web/Database/Usenet
Web/Database/IRC
Web/FTP
Web/SMS
5) No other language to learn
PHP is easy and work fine in Unix and NT and seem to be a easy choice for Newcommer in WebDevelopment World. but it's JUST a WebDB language.
However, Perl/DBI a better choice for long time, and long vison project.
Well ! i home that will help you !
Perl/DBI users looks very rare, i'm feeling lonly on the PHP Hype
Re:The problem with Perl. (Score:1)
No, there's not. As I said, it was an interesting experiment. Nevertheless, Perl blows,. Interesting data point. Let's not try that again. Acknowlege, move on.
And you're arguing syntax. Obviously I was stating my opinion, not trying to offer a proof of why Perl is bad. The ``X'' that Perl is is ``sucky.'' I hold that truth to be self-evident. Beyond that, I suggest that this suckiness is a result of the thing about Perl that is different from prior, less sucky languages: the fact that Perl rejects outdated academic concepts like consistency and simplicity.
And to me. But don't confuse the act of giving the programmer freedom of expression, and the act of giving the programmer a simple, consistent, easy-to-undertand tool. You can do both. Just because a tool is a simple and easy-to-understand device (like a lever, or rope) doesn't mean it restricts the artistic expression of those using it. Help, help, I'm being oppressed!
People can live free with tools that haven't themselves gone hog-wild with baroque gilding..
And while ``there's more than one way to do it [as long as you use line noise punctuation and regexps]'',.
Some people, when confronted with a problem, think "I know, I'll use regular expressions." Now they have two problems..
Every 4 letter word (Score:1)
One might assume he just used
What, outlaw syntax checking? (Score:1)
As it is, my code is getting looser and looser (is that really a word) to the point while explaining bits to my supervisor (a PhD. equipped scientist, not computing) I have to gloss over the bits that look like line noise
"That line that says s/[\_[\. \,\?]//g; does have useful function, it does xxxx"
At least we're writing computer code that looks like computer code, we could all be writing CoBOL, RMS protect us from that....
Re:Larry Wall is Cool (Score:1)
Perl == postmodern if postmodern == archaic (Score:1)
Seriously, coming from C, Perl seems to make for needlessly archaic and unreadable code. Hard as I try, I can't get myself as excited about coding for Perl as half the open source community seems to be.
Re:Perl is a sick,twisted,perverse,dominatrix lang (Score:1) | http://developers.slashdot.org/story/99/06/01/2122209/linux-journal-interviews-larry-wall | CC-MAIN-2015-14 | refinedweb | 4,796 | 73.27 |
Hi Eyji
Re CRX / Jackrabbit questions:
On Tue, Dec 16, 2008 at 2:21 PM, Eyji <eyji@thorarinssons.net> wrote:
> You were right, we hade remnants left of our previously used
> TarPersistanceManager, but we switched over to Derby, and our environment
> now works well.
> To add on to our problem, we are using JackRabbit 1.5 deployed on Tomcat 6
> with CRX 1.4, which could have added some classpath failures? If that is
> so,
> how do we best deploy JR 1.5 with CRX 1.4?
CRX starting with 1.4 is drop-in/source-code compatible with the
corresponding Jacrkabbit release (specifically, Jackrabbit branch). Newer
Jackrabbit releases are not guaranteed to work out-of-the-box, as there
might be incompatible changes in the newer JR code (especially for modules
not maintained in Jackrabbit).
Because of that, I would not generally recommend to deploy CRX 1.4 with some
modules upgraded to JR 1.5.
> Do you know if CRX will soon be
> upgraded according to the new version of JR?
>
We're discussing the Jackrabbit version, on which we're going to base our
next major CRX update. There's quite some interesting JSR-283
forward-looking(*) enhancements going on in JR trunk a.t.m., which we'd like
to embrace in CRX a.s.a.p., and this surely is a certain challenge from the
commercial product, standards-compliance, and engineering perspective :)
Jackrabbit 1.5 already includes some of these of course.
(*) Surely neither CRX nor Jackrabbit claim to be compliant with the
upcomings standard, it's not possible (legal) before it's published.
Cheers
Greg
>
> Jukka Zitting wrote:
> >
> > Hi,
> >
> > On Tue, Dec 16, 2008 at 1:19 PM, Eyji <eyji67@gmail.com> wrote:
> >> We have a project using JackRabbit 1.5, Spring 2.5.6, Spring-modules 0.9
> >> (the
> >> JCR parts), deployed on Tomcat 6. When deploying the project, this
> >> exception
> >> is thrown:
> >>
> >> stack.txt
> >>
> >> It seems that the StringIndex class has been moved in JCR 1.5 and the
> >> Spring
> >> JCR module references it from the old package namespace. Does anyone
> have
> >> an
> >> idea of how to get around this, or is there a way to patch
> >> spring-modules-jcr to fix this?
> >
> > The following part of the stack trace suggest that the problem is the
> > persistence manager class you have configured in the <Versioning>
> > section of your repository.xml file:
> >
> > Caused by: java.lang.NoClassDefFoundError:
> > org/apache/jackrabbit/core/persistence/bundle/util/StringIndex
> > at java.lang.Class.forName0(Native Method)
> > at java.lang.Class.forName(Class.java:242)
> > at
> >
> org.apache.jackrabbit.core.config.BeanConfig.newInstance(BeanConfig.java:104)
> > at
> >
> org.apache.jackrabbit.core.RepositoryImpl.createPersistenceManager(RepositoryImpl.java:1338)
> > at
> >
> org.apache.jackrabbit.core.RepositoryImpl.createVersionManager(RepositoryImpl.java:450)
> >
> > Are you using a custom persistence manager?
> >
> > BR,
> >
> > Jukka Zitting
> >
> >
>
> --
> View this message in context:
>
> Sent from the Jackrabbit - Users mailing list archive at Nabble.com.
>
> | http://mail-archives.apache.org/mod_mbox/jackrabbit-users/200812.mbox/%3Cd8ffccc20812160658l2fad8000xe9771e9b80859d74@mail.gmail.com%3E | CC-MAIN-2015-32 | refinedweb | 484 | 52.15 |
Content-type: text/html
#include <sys/mman.h>
int shm_open (
const char *name,
int oflag,
mode_t mode);
The shm_open function establishes a connection between a shared memory object and a file descriptor. It creates an open file descriptor that refers to the shared memory object and a file descriptor that refers to that open file descriptor. This file descriptor is used by other functions to refer to the shared memory object. The name points to a string naming a shared memory object. The name can be a pathname, in which case other processes referring to the same pathname refer to the same shared memory object.
Once a shared memory object is created, its state and all data associated with it persist until the shared memory is unlinked.
The shm_open function returns a file descriptor that is the lowest numbered file descriptor not currently open for that process. File status flags and access modes are set according to the oflag argument. These flags are defined in the <fcntl.h> header file and can have zero or more of the following values:
The initial contents of the shared memory object are binary zeros.
On a successful call to shm_open a nonnegative integer is returned that represents the lowest numbered unused file descriptor. The file descriptor points to the shared memory object. Otherwise, -1 is returned and errno is set to indicate the error.
The shm_open function fails under the following conditions:
Functions: close(2), dup(2), exec(2), fcntl(2), fstat(2), mmap(2), umask(2), shm_unlink(3) delim off | http://backdrift.org/man/tru64/man3/shm_open.3.html | CC-MAIN-2017-04 | refinedweb | 258 | 54.93 |
WebServices : As the name implies, A WebService is an entity or a piece of code which is hosted on the Web, which serves a specefic purpose. This could be the most simple framed definition.
A more apt version may be: A WebService is a part fo a software application which is hosted on the Web and which does a particular function in order to execute the business logic.
Further, a WebService allows a website to interact with other websites irrespective of the programming language the websites's been written in and also irrespective of the hardware & OS concerns of the machines.
Lets take an example over here: Suppose, you are a developer and you have some 3 different mathematical websites under you, demanded by various clients.A common functionality needs to be implemented in all the 3 websites, lets say displaying a Fibonaaci series in a text area.
The question which arises here is, are you gonna write separate piece of codes to have the same functionality 3 times.Well it would be a redundant extra effort, if you would try to aim at this approach.Here, the need for a WebService comes into picture. Code the functionality of displaying a Fibonaaci series in a WebService just once, and call this webservice in the 3 different websites.There you go, you are done.You need not write the same code thrice or else.
One question that may arise over here is: Why WebServices now?What was the case earlier?Were people able to leverage the functionalities of one application to another application?
The answer is yes, people were able to do so.There were technologies such as COM (Component Object Model), DCOM (Distributed Component Object Model) which were helpful in achieving this leverage, but they had their limitations as well. Infact they had to be explicitly registered on each user's machine, thus giving chances for registery glitches and stuffs.
A WebService overcomes all those limitations.
The aim of this artcle is to make a person understand what's a webservice (which I have tried to do in the Introduction section) and "How to create a WebService and call & use them via different methods?".The later will be discussed in the below coming sections. Stay tuned..
Well I belong to the .NET domain and the code files in this article aim at .NET 3.5, so the only pre-requisite would be the presence of Visual Studio 2008 client.
Lets start with the creation of the WebService.
1) Open Visual Studio 2008 Client.Click on New>WebSite>ASP.NET Web Service.Give the path where you want to place the code. Refer to the below screenshots.
2) Visual Studio will open a default Service.cs file in which you can write your methods.Write the below lines of code.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Services;
int Area(int l, int b)
{
return l * b;
}
[WebMethod]
public int Perimeter(int l, int b)
{
return 2 * (l +b);
}
}
3) Build the solution and view the Service.asmx file in the
browser. It will be something like below:
4) The
Web Methods created will be evident and will be invoked via HTTP POST method if
the links Area and Perimeter are clicked.
So
here it is, a very basic WebService has been coded and created, which will help
us to calculate the area and perimeter of a rectangle.
You might wana copy the Web Service URL because they are used later in the article.
Next, we will see how to call and use the Web Service in
times of need.
Well, there may be 3 methods by which a Web Service is called, depending upon the scenarios.
1) By HTTP POST method.<o:p />
2) By creating a Proxy Class
3) By using XMLHTTP Request object over SOAP
Let's discuss each of them in brief:
HTTP POST
method: This method can be used directly by calling the .asmx file from the client.
The most favorable method to execute an
HTTP POST method is to have the codes inside an HTML file. Let’s get started to
implement it.
Open your notepad and write the below lines
of code in it:
<o:p>
>
Just a friendly reminder, you would wana change the Web Service URL in the above code, by your own URL, which you would have copied sometime earlier..
The bold lines of code exhibit as to
how we can call the methods of the WebService separately via HTTP POST. Upon
saving the above lines of code as .html and executing it will result in a
browser action.
W
When the button “Click for Area” or “Click
for Perimeter” is clicked, we would see that the specific methods of the
WebService are called and the result is in the form of an XML.
One of the limitations of HTTP POST method can be
thought as of: We are unable to build Web Methods which take complex types as
parameter or return values.
2)
Creation
of a Proxy Class: This method is widely used by developers. This method
actually makes use of the full potential of a Web Service. Let’s implement this
and see its wonders.
We need to create one ASP.NET website to
see and implement this method.
Open Visual Studio once again and click on
New>Website>ASP.NET Website. Give the path where you want to place the
code.
This web site creation will open a
Default.aspx page, its aspx.cs page, an App_Code folder and few more stuffs.
Open the Default.aspx page and design it as
per below screenshot:
It’s basically 4 ASP label controls, 4 ASP
text box controls and 1 ASP button bound under 1 div.
Now the most important aspect of this very
method comes into play. Addition of the previously created Web Service.
Right-click the project and click on “Add Web
Reference”. The click will open a dialog box similar to the below screenshot.
Now, we need to catch the Web Service here either by
adding the URL or by browsing through the various options given. Successful
addition of the Web Service will yield “Web services found at this URL”. You
would see the methods that you have actually created. You might want to change
the name of the web reference from “localhost” to something else to avoid confusion
and stuffs. Click on Add Reference and there you go. You have successfully
added a Web Reference to your ASP.NET website.
The solution explorer will be something like below:
Now, once the Web Reference is added, let’s
write some code in Code-Behind to make use of the web service.
Open Default.aspx.cs
file and on Button Click event, add the following code:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
public partialclass_Default : System.Web.UI.Page
{
protected void Page_Load(object sender, EventArgs e)
{
}
protected void btnClick_Click(object sender, EventArgs e)
{
if (!string.IsNullOrEmpty(txtLength.Text) && !string.IsNullOrEmpty(txtBreadth.Text))
{
Maths.Service obj = new Maths.Service();
int l = Convert.ToInt32(txtLength.Text);
int b = Convert.ToInt32(txtBreadth.Text);
txtArea.Text = Convert.ToString(obj.Area(l, b));
txtPerimeter.Text = Convert.ToString(obj.Perimeter(l, b));
}
}
}
Now, let me explain the above code further.
The bold line of code is what this method is all about, Creation of
proxy class.
We are making an object of the Web Service
added and then are using that object to call the individual methods of the
WebService. Rest of the code is self-explanatory, which will calculate the area
and perimeter of the rectangle.
Build the solution and view the Default.aspx
file in the browser.
Enter some values for length and breadth
and click on the button.
You will see the area and perimeter
displayed in the placed textboxes.
Similar screenshots are below.
We see in this method that we can call a Web Service
by just adding a Web Reference and creating a proxy class and without even
coding the required functionality (done by the Web Service) in the current
project.
1) By using XMLHTTP Request object over SOAP:<o:p />
First let’s see what SOAP is.
SOAP (Simple Object Access Protocol) is a
protocol used for messaging and is completely XML based.
SOAP provides a complete set of rules for
messages and also, rules for entities like message encoding, message handling
etc.
Syntax of a SOAP message will be as follows:
<?xml version="1.0" encoding="utf-8"?><o:p />
<soap:Envelope
xmlns:xsi=""
xmlns:xsd=""
xmlns:<o:p />
<soap:Body>
</soap:Body><o:p />
</soap:Envelope>
A SOAP message is called an Envelope, which further
has Header and Body tags.
Now, let’s see what exactly XMLHttpRequest
is.
The XMLHttpRequest specification defines an
API that provides scripted client functionality for transferring data between a
client and a server.
It is used to send HTTP or HTTPS
requests directly to a web server and load the server response data
directly back into the script.
In this implementation we will use an HTML form and a
JavaScript function to demonstrate the full potential of this method.
a) First of all we will create an object of
XMLHttpRequest. Refer the below lines of code.
var sample;
if (window.XMLHttpRequest)
{
sample= new window.XMLHttpRequest;//For browsers other than IE
}
else
{
try
{
sample=new ActiveXObject("MSXML2.XMLHTTP.3.0"); //for IE
}
catch(ex)
{
}
}
b) Now, we will set the Web Service URL using the Open
method of XMLHttpRequest.
sample.open ("POST",”",true);
Basically Open method of XMLHttpRequest has 5 parameters:
Type of HTTPMethod, Url, Boolean value (true most of the times), Username and password (optional).
c) Now, we will create the Request Headers and
SOAP request (Shown in the below code).
d) The demonstration will be basically done
by an HTML code. Hence open your notepad and type in the below code:
!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "">
<html xmlns="">
<head>
<title></title>
<script type="text/javascript" >
var sample;
if (window.XMLHttpRequest)
{
sample = new window.XMLHttpRequest;
}
else
{
try
{
sample = new ActiveXObject("MSXML2.XMLHTTP.3.0");
}
catch (ex)
{
}
}
sample.open("POST", "",true);
sample.setRequestHeader("Content-Type", "text/xml; charset=utf-8");
sample.setRequestHeader("SOAPAction", "");
strRequest = "<?xml version='1.0' encoding='utf-8'?>";
strRequest = strRequest + "<soap:Envelope " +"xmlns:";
strRequest = strRequest + " <soap:Body>";
strRequest = strRequest + "<Area xmlns=''><l>4</l><b>5</b></Area>";
strRequest = strRequest + "</soap:Body>";
strRequest = strRequest + "</soap:Envelope>";
sample.onreadystatechange = function () {
if (sample.readyState == 4 &&xmlHTTP.status == 200)
{
alert(xmlHTTP.responseXML.xml);
}
}
sample.send(strRequest);
</script>
e) Save the notepad as .html extension and execute it to have a browser action.
The browser will give the output as a JavaScript
alert box, similar to below.
SetRequestHeader creates the Header tag of the SOAP message, while OnReadyStateChange property of XMLHTTPRequest object assists us to execute the code after the response from the Web Service is processed.
Initially i used to think web services,wcf's and stuffs are aliens, and had no interests in them at all.
Interests in Web Services grew when I got a .NET project dealing with web services, wcf services and else, had to learn them you know. Now I know that they are damn helpful..
Well, wasn't really successful in finding a great resource for a beginner(including me) under this topic, hence decided to pen down something.
First article at CP... Comments and criticisms are pretty welcome. And please don't mind much on the formatting, had a hart time doing it....:P
28th Feb. | https://www.codeproject.com/articles/524983/web-services-from-a-beginners-perspective | CC-MAIN-2017-09 | refinedweb | 1,937 | 67.35 |
Clojure has a good reputation for concurrency. People write Clojure programs that work on hundreds of threads, all safely reading and writing to the same memory. People know about the immutable data structures and the STM. But there's something going on at a much deeper level that is really hard to get right in Java. It has to do with the optimizations the JIT will run on your code.
To undertand what I'm referring to, let's look at a series of optimizations that the JIT can do.
Here's some Java code with a loop.
public class A { public void loopUntilDone(DoneChecker a) { while(!a.isDone()) { doSomething(); } } }
You pass in an object that will tell you when to stop looping. Let's look at
DoneChecker.
public class Donechecker { private boolean _isDone = false; public boolean isDone() { return _isDone; } public void setDone() { _isDone = true; } }
Very simple and easy. And it works. But then you run this a lot, and what happens? It stops working! Sometimes, you get an infinite loop. When you debug it, it always works. But after running for about 5 minutes, it goes back into the infinite loop.
When something like that happens, it's often the JIT. The JIT will optimize code that is run frequently. The debugger will use the unoptimized bytecode and interpret it. If it doesn't happen during debugging, but does happen after the JIT has had a chance to run, the JIT could be the culprit.
Let's step through the optimizations the JIT is allowed to do.
The first thing is called inlining. We can inline the call to
isDone().
public class A { public void loopUntilDone(DoneChecker a) { while(!a._isDone) { doSomething(); } } }
That's great. It avoids a method call. The next thing it can do is caching. If a value in the heap is accessed more than once, the JIT is allowed to cache that value on the heap using a local variable.
public class A { public void loopUntilDone(DoneChecker a) { boolean _isDone = a._isDone; while(!_isDone) { doSomething(); } } }
That's great! It avoids costly memory fetches. But wait! Something has
changed. Before, we were checking the value of
a's field every time
through the loop. We were expecting another thread to change the value
at some point. But now it's only checking once. So the JIT has turned
this into an infinite loop! This was hard for me to believe at first,
but it's true.
Java defaults to the sequential case. To avoid this problem, you
have to put a "memory barrier" to tell the JIT that it can't inline
this value. In this case, the proper keyword to use is
volatile. Any
time a value will be accessed by multiple threads, you should use
volatile.
Did you know that? I certainly didn't before I did some research. I
wrote Java code for years and I never used volatile. Before you run to
your Java project to make sure you're using
volatile correctly,
crying over years of wasted debugging time looking for those
heisenbugs (like I did), let me finish about Clojure.
Clojure simply makes a different tradeoff: assume everything will be accessed by multiple threads. While most things in Clojure are immutable (and so can be cached), the things that can change (atoms, refs, vars, etc.) are done with the correct memory barriers and locks.
I hear people talking about immutable values and STM. But I don't hear so much about this correct use of memory barriers in the core implementations. But what it means is that Clojure is much safer for threading than Java, without having to think about it. Yet another reason Clojure is a Better Java.
The JVM is complicated, but Clojure makes it easier. There's stil a lot to know, though. That's why I made JVM Fundamentals for Clojure. It's a video course with more than 5 hours of lessons about stuff I use all the time as a professional Clojure programmer.
There's one last thing I'd like to discuss, and those are the dreaded Clojure stacktraces. Next time! | https://purelyfunctional.tv/article/clojures-unsung-heroics-with-concurrency/ | CC-MAIN-2018-43 | refinedweb | 688 | 76.93 |
/* Display generation from window structure and buffer text. */
/* New redisplay written by Gerd Moellmann <gerd@gnu.org>.
Redisplay.
Emacs separates the task of updating the display from code
modifying global state, e.g. buffer text. This way functions
operating on buffers don't also have to be concerned with updating
the display.
Updating the display is triggered by the Lisp interpreter when it
decides it's time to do it. This is done either automatically for
you as part of the interpreter's command loop or as the result of
calling Lisp functions like `sit-for'. The C function `redisplay'
in xdisp.c is the only entry into the inner redisplay code. (Or,
let's say almost---see the description of direct update
operations, below.)
The following diagram shows how redisplay code is invoked. As you
can see, Lisp calls redisplay and vice versa. Under window systems
like X, some portions of the redisplay code are also called
asynchronously during mouse movement or expose events. It is very
important that these code parts do NOT use the C library (malloc,
free) because many C libraries under Unix are not reentrant. They
may also NOT call functions of the Lisp interpreter which could
change the interpreter's state. If you don't follow these rules,
you will encounter bugs which are very hard to explain.
     (Direct functions, see below)
     direct_output_for_insert,
     direct_forward_char (dispnew.c)
   +---------------------------------+
   |                                 |
   |                                 V
   +--------------+   redisplay     +----------------+
   | Lisp machine |---------------->| Redisplay code |<--+
   +--------------+   (xdisp.c)     +----------------+   |
          ^                                 |            |
          +---------------------------------+            |
            Don't use this path when called              |
            asynchronously!                              |
                                                         |
            expose_window (asynchronous)                 |
                                                         |
                           X expose events  -------------+
What does redisplay do? Obviously, it has to figure out somehow what
has been changed since the last time the display has been updated,
and to make these changes visible. Preferably it would do that in
a moderately intelligent way, i.e. fast.
Changes in buffer text can be deduced from window and buffer
structures, and from some global variables like `beg_unchanged' and
`end_unchanged'. The contents of the display are additionally
recorded in a `glyph matrix', a two-dimensional matrix of glyph
structures. Each row in such a matrix corresponds to a line on the
display, and each glyph in a row corresponds to a column displaying
a character, an image, or what else. This matrix is called the
`current glyph matrix' or `current matrix' in redisplay
terminology.
For buffer parts that have been changed since the last update, a
second glyph matrix is constructed, the so called `desired glyph
matrix' or short `desired matrix'. Current and desired matrix are
then compared to find a cheap way to update the display, e.g. by
reusing part of the display by scrolling lines..
Desired matrices.
Desired matrices are always built per Emacs window. The function
`display_line' is the central function to look at if you are
interested. It constructs one row in a desired matrix given an
iterator structure containing both a buffer position and a
description of the environment in which the text is to be
displayed. But this is too early, read on.
Characters and pixmaps displayed for a range of buffer text depend
on various settings of buffers and windows, on overlays and text
properties, on display tables, on selective display. The good news
is that all this hairy stuff is hidden behind a small set of
interface functions taking an iterator structure (struct it)
argument.
Iteration over things to be displayed is then simple. It is
started by initializing an iterator with a call to init_iterator.
Calls to get_next_display_element fill the iterator structure with
relevant information about the next thing to display. Calls to
set_iterator_to_next move the iterator to the next thing.
Besides this, an iterator also contains information about the
display environment in which glyphs for display elements are to be
produced. It has fields for the width and height of the display,
the information whether long lines are truncated or continued, a
current X and Y position, and lots of other stuff you can better
see in dispextern.h.
Glyphs in a desired matrix are normally constructed in a loop
calling get_next_display_element and then produce_glyphs. The call
to produce_glyphs will fill the iterator structure with pixel
information about the element being displayed and at the same time
produce glyphs for it. If the display element fits on the line
being displayed, set_iterator_to_next is called next, otherwise the
glyphs produced are discarded.
Frame matrices.
That just couldn't be all, could it? What about terminal types not
supporting operations on sub-windows of the screen? To update the
display on such a terminal, window-based glyph matrices are not
well suited. To be able to reuse part of the display (scrolling
lines up and down), we must instead have a view of the whole
screen. This is what `frame matrices' are for. They are a trick.
Frames on terminals like above have a glyph pool. Windows on such
a frame sub-allocate their glyph memory from their frame's glyph
pool. The frame itself is given its own glyph matrices. By
coincidence---or maybe something else---rows in window glyph
matrices are slices of corresponding rows in frame matrices. Thus
writing to window matrices implicitly updates a frame matrix which
provides us with the view of the whole screen that we originally
wanted to have without having to move many bytes around. To be
honest, there is a little bit more done, but not much more. If you
plan to extend that code, take a look at dispnew.c. The function
build_frame_matrix is a good starting point. */
#include <config.h>
#include <stdio.h>
#include "lisp.h"
#include "keyboard.h"
#include "frame.h"
#include "window.h"
#include "termchar.h"
#include "dispextern.h"
#include "buffer.h"
#include "charset.h"
#include "indent.h"
#include "commands.h"
#include "macros.h"
#include "disptab.h"
#include "termhooks.h"
#include "intervals.h"
#include "coding.h"
#include "process.h"
#include "region-cache.h"
#include "fontset.h"
#ifdef HAVE_X_WINDOWS
#include "xterm.h"
#endif
#ifdef WINDOWSNT
#include "w32term.h"
#endif
#ifdef MAC_OS
#include "macterm.h"
#endif
#define INFINITY 10000000
#if defined (USE_X_TOOLKIT) || defined (HAVE_NTGUI) || defined (MAC_OS) \
|| defined (USE_GTK)
extern void set_frame_menubar P_ ((struct frame *f, int, int));
extern int pending_menu_activation;
#endif
extern int interrupt_input;
extern int command_loop_level;;
Lisp_Object Qoverriding_local_map, Qoverriding_terminal_local_map;
Lisp_Object Qwindow_scroll_functions, Vwindow_scroll_functions;
Lisp_Object Qredisplay_end_trigger_functions;
Lisp_Object Qinhibit_point_motion_hooks;
Lisp_Object QCeval, QCfile, QCdata, QCpropertize;
Lisp_Object Qfontified;
Lisp_Object Qgrow_only;
Lisp_Object Qinhibit_eval_during_redisplay;
Lisp_Object Qbuffer_position, Qposition, Qobject;
/* Cursor shapes */
Lisp_Object Qbar, Qhbar, Qbox, Qhollow;
Lisp_Object Qrisky_local_variable;
/* Holds the list (error). */
Lisp_Object list_of_error;
/* Functions called to fontify regions of text. */
Lisp_Object Vfontification_functions;
Lisp_Object Qfontification_functions;
/* Non-zero means draw tool bar buttons raised when the mouse moves
over them. */
int auto_raise_tool_bar_buttons_p;
/* Margin around tool bar buttons in pixels. */
Lisp_Object Vtool_bar_button_margin;
/* Thickness of shadow to draw around tool bar buttons. */
EMACS_INT tool_bar_button_relief;
/* Non-zero means automatically resize tool-bars so that all tool-bar
items are visible, and no blank lines remain. */
int auto_resize_tool_bars, Qrelative_width, Qalign_to;
extern Lisp_Object Qface, Qinvisible, Qwidth;
/* Symbols used in text property values. */
Lisp_Object Qspace, QCalign_to, QCrelative_width, QCrelative_height;
Lisp_Object Qleft_margin, Qright_margin, Qspace_width, Qraise;
Lisp_Object Qmargin;
extern Lisp_Object Qheight;
/* Non-nil means highlight trailing whitespace. */
Lisp_Object Vshow_trailing_whitespace;
/* Name of the face used to highlight trailing whitespace. */
Lisp_Object Qtrailing_whitespace;
/* The symbol `image' which is the car of the lists used to represent
images in Lisp. */
Lisp_Object Qimage;
/* Non-zero means print newline to stdout before next mini-buffer
int noninteractive_need_newline;
/* Non-zero means print newline to message log before next message. */
static int message_log_need_newline;
/* Three markers that message_dolog uses.
It could allocate them itself, but that causes trouble
in handling memory-full errors. */
static Lisp_Object message_dolog_marker1;
static Lisp_Object message_dolog_marker2;
static Lisp_Object message_dolog_marker3;
/* The buffer position of the first character appearing entirely or
partially on the line of the selected window which contains the
cursor; <= 0 if not known. Set by set_cursor_from_row, used for
redisplay optimization in redisplay_internal. */
static struct text_pos this_line_start_pos;
/* Number of characters past the end of the line above, including the
terminating newline. */
static struct text_pos this_line_end_pos;
/* The vertical positions and the height of this line. */
static int this_line_vpos;
static int this_line_y;
static int this_line_pixel_height;
/* X position at which this display line starts. Usually zero;
negative if first character is partially visible. */
static int this_line_start_x;
/* Buffer that this_line_.* variables are referring to. */
static struct buffer *this_line_buffer;
/* Nonzero means truncate lines in all windows less wide than the
frame. */
int trunc;
/* Marker for where to display an arrow on top of the buffer text. */
Lisp_Object Voverlay_arrow_position;
/* String to display for the arrow. Only used on terminal frames. */
Lisp_Object Voverlay_arrow_string;
/* Values of those variables at last redisplay. However, if
Voverlay_arrow_position is a marker, last_arrow_position is its
numerical position. */
static Lisp_Object last_arrow_position, last overlay arrow has been displayed once CANON_Y_UNIT,;
/* Prompt to display in front of the mini-buffer contents. */
Lisp_Object minibuf_prompt;
/* Width of current mini-buffer prompt. Only set after display_line
of the line that contains the prompt. */
int minibuf_prompt_width;
/* This is the window where the echo area message was displayed. It
is always a mini-buffer window, but it may not be the same window
currently active as a mini-buffer. */
Lisp_Object echo_area_window;
/* List of pairs (MESSAGE . MULTIBYTE). The function save_message
pushes the current message and the value of
message_enable_multibyte on the stack, the function restore_message
pops the stack and displays MESSAGE again. */
Lisp_Object Vmessage_stack;
/* Nonzero means multibyte characters were enabled when the echo area
message was specified. */
int message_enable_multibyte;
/* Nonzero if we should redraw the mode lines on the next redisplay. */
int update_mode_lines;
/* Nonzero if window sizes or contents have changed since last
redisplay that finished. */
int windows_or_buffers_changed;
/* Nonzero means a frame's cursor type has been changed. */
int cursor_type_changed;
/* Nonzero after display_mode_line if %l was used and it displayed a
line number. */
int line_number_displayed;
/*;
/* The name of the *Messages* buffer, a string. */
static Lisp_Object Vmessages_buffer_name;
/* Current, index 0, and last displayed echo area message. Either
buffers from echo_buffers, or nil to indicate no message. */
Lisp_Object echo_area_buffer[2];
/* The buffers referenced from echo_area_buffer. */
static Lisp_Object echo_buffer[2];
/* A vector saved used in with_area_buffer to reduce consing. */
static Lisp_Object Vwith_echo_area_save_vector;
/* Non-zero means display_echo_area should display the last echo area
message again. Set by redisplay_preserve_echo_area. */
static int display_last_displayed_message_p;
/* Nonzero if echo area is being used by print; zero if being used by
int message_buf_print;
/* The symbol `inhibit-menubar-update' and its DEFVAR_BOOL variable. */
Lisp_Object Qinhibit_menubar_update;
int inhibit_menubar_update;
/*;
/* Non-zero means we want a hollow cursor in windows that are not
selected. Zero means there's no cursor in such windows. */
Lisp_Object Vcursor_in_non_selected_windows;
Lisp_Object Qcursor_in_non_selected_windows;
/* How to blink the default frame cursor off. */
Lisp_Object Vblink_cursor_alist;
/* A scratch glyph row with contents used for generating truncation
glyphs. Also used in direct_output_for_insert. */
#define MAX_SCRATCH_GLYPHS 100
struct glyph_row scratch_glyph_row;
static struct glyph scratch_glyphs[MAX_SCRATCH_GLYPHS];
/* Ascent and height of the last line processed by move_it_to. */
static int last_max_ascent, last_height;
/* Non-zero if there's a help-echo in the echo area. */
int help_echo_showing_p;
/* If >= 0, computed, exact values of mode-line and header-line height
to use in the macros CURRENT_MODE_LINE_HEIGHT and
CURRENT_HEADER_LINE_HEIGHT. */
int current_mode_line_height, current_header_line_height;
/* The maximum distance to look ahead for text properties. Values
that are too small let us call compute_char_face and similar
functions too often which is expensive. Values that are too large
let us call compute_char_face and alike too often because we
might not be interested in text properties that far away. */
#define TEXT_PROP_DISTANCE_LIMIT 100
#if GLYPH_DEBUG
/* Variables to turn off display optimizations from Lisp. */
int inhibit_try_window_id, inhibit_try_window_reusing;
int inhibit_try_cursor_movement;
/* Non-zero means print traces of redisplay if compiled with
GLYPH_DEBUG != 0. */
int trace_redisplay_p;
#endif /* GLYPH_DEBUG */
#ifdef DEBUG_TRACE_MOVE
/* Non-zero means trace with TRACE_MOVE to stderr. */
int trace_move;
#define TRACE_MOVE(x) if (trace_move) fprintf x; else (void) 0
#else
#define TRACE_MOVE(x) (void) 0
#endif
/* Non-zero means automatically scroll windows horizontally to make
point visible. */
int automatic_hscrolling_p;
/* How close to the margin can point get before the window is scrolled
horizontally. */
EMACS_INT hscroll_margin;
/* How much to scroll horizontally when point is inside the above margin. */
Lisp_Object Vhscroll_step;
/* A list of symbols, one for each supported image type. */
Lisp_Object Vimage_types;
/*;
/* Value returned from text property handlers (see below). */
enum prop_handled
{
HANDLED_NORMALLY,
HANDLED_RECOMPUTE_PROPS,
HANDLED_OVERLAY_STRING_CONSUMED,
HANDLED_RETURN
};
/* A description of text properties that redisplay is interested
in. */
struct props
{
/* The name of the property. */
Lisp_Object *name;
/* A unique index for the property. */
enum prop_idx idx;
/* A handler function called to set up iterator IT from the property
at IT's current position. Value is used to steer handle_stop. */
enum prop_handled (*handler)_ ((struct it *));
/* Properties handled by iterators. */
static struct props it_props[] =
{&Qfontified, FONTIFIED_PROP_IDX, handle_fontified_prop},
/* Handle `face' before `display' because some sub-properties of
`display' need to know the face. */
{&Qface, FACE_PROP_IDX, handle_face_prop},
{&Qdisplay, DISPLAY_PROP_IDX, handle_display_prop},
{&Qinvisible, INVISIBLE_PROP_IDX, handle_invisible_prop},
{&Qcomposition, COMPOSITION_PROP_IDX, handle_composition_prop},
{NULL, 0, NULL}
};
/* Value is the position described by X. If X is a marker, value is
the marker_position of X. Otherwise, value is X. */
#define COERCE_MARKER(X) (MARKERP ((X)) ? Fmarker_position (X) : (X))
/* Enumeration returned by some move_it_.* functions internally. */
enum move_it_result
{
/* Not used. Undefined value. */
MOVE_UNDEFINED,
/* Move ended at the requested buffer position or ZV. */
MOVE_POS_MATCH_OR_ZV,
/* Move ended at the requested X pixel position. */
MOVE_X_REACHED,
/* Move within a line ended at the end of a line that must be
continued. */
MOVE_LINE_CONTINUED,
/* Move within a line ended at the end of a line that would
be displayed truncated. */
MOVE_LINE_TRUNCATED,
/* Move within a line ended at a line end. */
MOVE_NEWLINE_OR_CR
};
/* This counter is used to clear the face cache every once in a while
in redisplay_internal. It is incremented for each redisplay.
Every CLEAR_FACE_CACHE_COUNT full redisplays, the face cache is
cleared. */
#define CLEAR_FACE_CACHE_COUNT 500
static int clear_face_cache_count;
/* Record the previous terminal frame we displayed. */
static struct frame *previous_terminal_frame;
/*;
/* Function prototypes. */
static void setup_for_ellipsis P_ ((struct it *));
static void mark_window_display_accurate_1 P_ ((struct window *, int));
static int single_display_prop));
#if 0
static int invisible_text_between_p P_ ((struct it *, int, int));
static int next_element_from_ellipsis P_ ((struct it *));
static void pint2str P_ ((char *, int, int));
static struct text_pos run_window_scroll_functions P_ ((Lisp_Object,
struct text_pos));
static void reconsider_clip_changes P_ ((struct window *, struct buffer *));
static int text_outside_line_unchanged_p P_ ((struct window *, int, int));
static void store_frame_title_char P_ ((char));
static int store_frame_title P_ ((const unsigned char *, int, int));
static void x_consider_frame_title P_ ((Lisp_Object));
static void handle_stop P_ ((struct it *));
static int tool_bar_lines_needed P_ ((struct frame *));
static int single_display_prop *));
static void extend_face_to_end_of_line P_ ((struct it *));
static int append_space P_ ((struct it *, int));
static int make_cursor_line_fully_visible P_ ((struct window *)); void update_menu_bar P_ ((struct frame *,,
int, int, struct it *, int, int, int, int));
static void compute_line_metrics P_ ((struct it *));
static void run_redisplay_end_trigger_hook P_ ((struct it *));
static int get_overlay_strings P_ ((struct it *, int));
static void next_overlay_string P_ ((struct it *));
static void reseat P_ ((struct it *, struct text_pos, int));
static void reseat_1 P_ ((struct it *, struct text_pos, int));
static void back_to_previous_visible_line_start P_ ((struct it *));
static void reseat_at_previous_visible_line_start P_ ((struct it *));
static void reseat_at_next_visible_line_start P_ ((struct it *, P_ ((struct it *,
int, int, int));
void move_it_vertically_backward P_ ((struct it *, int));
static void init_to_row_start P_ ((struct it *, struct window *,
struct glyph_row *));
static int init_to_row_end P_ ((struct it *, struct window *,
struct glyph_row *));
static void back_to_previous_line_start P_ ((struct it *));
static int forward_to_next_line_start P_ ((struct it *, int *));
static struct text_pos string_pos_nchars_ahead P_ ((struct text_pos,
Lisp_Object, int));
static struct text_pos string_pos P_ ((int, Lisp_Object));
static struct text_pos c_string_pos P_ ((int, unsigned char *, int));
static int number_of_chars P_ ((unsigned char *, int));
static void compute_stop_pos P_ ((struct it *));
static void compute_string_pos P_ ((struct text_pos *, struct text_pos,
Lisp_Object));
static int face_before_or_after_it_pos P_ ((struct it *, int));
static int next_overlay_change P_ ((int));
static int handle_single_display_prop P_ ((struct it *, Lisp_Object,
Lisp_Object, struct text_pos *,
int));
static int underlying_face_id P_ ((struct it *));
static int in_ellipses_for_invisible_text_p P_ ((struct display_pos *,
struct window *));
#define face_before_it_pos(IT) face_before_or_after_it_pos ((IT), 1)
#define face_after_it_pos(IT) face_before_or_after_it_pos ((IT), 0)
#ifdef HAVE_WINDOW_SYSTEM
static void update_tool_bar P_ ((struct frame *, int));
static void build_desired_tool_bar_string P_ ((struct frame *f));
static int redisplay_tool_bar P_ ((struct frame *));
static void display_tool_bar_line P_ ((struct it *));
#endif /* HAVE_WINDOW_SYSTEM */
/*********************************************************************** (w)
struct window *w;
{
struct frame *f = XFRAME (w->frame);
int height = XFASTINT (w->height) * CANON_Y_UNIT (w, area)
struct window *w;
int area;
{
struct frame *f = XFRAME (w->frame);
int width = XFASTINT (w->width);
if (!w->pseudo_window_p)
{
width -= FRAME_SCROLL_BAR_WIDTH (f) + FRAME_FRINGE_COLS (f);
if (area == TEXT_AREA)
{
if (INTEGERP (w->left_margin_width))
width -= XFASTINT (w->left_margin_width);
if (INTEGERP (w->right_margin_width))
width -= XFASTINT (w->right_margin_width);
}
else if (area == LEFT_MARGIN_AREA)
width = (INTEGERP (w->left_margin_width)
? XFASTINT (w->left_margin_width) : 0);
else if (area == RIGHT_MARGIN_AREA)
width = (INTEGERP (w->right_margin_width)
? XFASTINT (w->right_margin_width) : 0);
}
return width * CANON_X_UNIT (f);
}
/* Return the pixel height of the display area of window W, not
including mode lines of W, if any. */
INLINE int
window_box_height (w)
struct window *w;
struct frame *f = XFRAME (w->frame);
int height = XFASTINT (w->height) * CANON_Y_UNIT (f); frame-relative coordinate of the left edge of display
area AREA of window W. AREA < 0 means return the left edge of the
whole window, to the right of the left fringe of W. */
INLINE int
window_box_left (w, area)
struct window *w;
int area;
struct frame *f = XFRAME (w->frame);
int x = FRAME_INTERNAL_BORDER_WIDTH_SAFE (f);
x += (WINDOW_LEFT_MARGIN (w) * CANON_X_UNIT (f)
+ FRAME_LEFT_FRINGE_WIDTH (f));
if (area == TEXT_AREA)
x += window_box_width (w, LEFT_MARGIN_AREA);
else if (area == RIGHT_MARGIN_AREA)
x += (window_box_width (w, LEFT_MARGIN_AREA)
+ window_box_width (w, TEXT_AREA));
return x;
/* Return the frame-relative coordinate of the right edge of display
area AREA of window W. AREA < 0 means return the left edge of the
whole window, to the left of the right fringe of W. */
INLINE int
window_box_right (w, area)
struct window *w;
int area;
{
return window_box_left (w, area) + window_box_width (w, area);
/* Get the bounding box of the display area AREA of window W, without
mode lines, in frame-relative coordinates. AREA < 0 means the | https://emba.gnu.org/emacs/emacs/-/blame/d5cc60b8f99957c6569cf7471f1fe3ec1570da85/src/xdisp.c | CC-MAIN-2021-49 | refinedweb | 2,896 | 55.54 |
Download CAD Import .NET demo:
CAD Import .NET site:
mid=124
Recently released by Soft Gold Ltd a CAD Import.NET library brought a wide set of opportunities to AutoCAD users and software developers programming in the .NET environment. This product basically consists of two parts: Viewer and Import libraries.
Screenshot:
Viewer library
-------------
The Viewer library provides functionality for viewing CAD files, Windows metafiles and raster images. The supported formats include DWG, DXF, WFM, EMF, BMP, JPG, TIFF, GIF, and ICO. The program allows converting CAD files to Windows metafiles and raster images. Raster images can be printed, zoomed in, zoomed out and rotated in their planes. For CAD images the program provides a wider scope. They also can be printed, zoomed in and zoomed out.
Rotation for them is possible around each of three space axes. Each layer of the CAD image can be switched off or switched on. Different layouts can be selected in the corresponding combo box. A visibility of all entities contained in the CAD file can be switched off or switched on. The program also allows turning on or turning off SHX fonts.
Other features include:
* Supporting three-dimensional coordinates;
* Supporting nested extrusions;
* Easy and multifarious image scaling and dragging;
* Compatible with AutoDesk DXF Release 10, 12, 13, 14, 2000, 2002, 2004/2005/2006;
* Compatible with AutoDesk DWG Release 9, 10, 12, 13, 14, 2000, 2002, 2004/2005/2006;
* Fully compatible with C#, VB.NET and other programming environments;
* Using Unicode to view Japanese, Chinese, Korean and other hieroglyphs;
* Supporting a DXF AutoCAD Table;
* Supporting DXF and DWG reference images.
Another remarkable feature of the CADImport.NET library is CADImportControl component. This control makes possible inserting the whole CAD viewer to any Windows form.
To use CADImportControl create a new Windows application. Add respective name spaces in the project:
// code example:
using CADImport;
using CADImportControl;
Add three lines of code to the constructor of the main application's form:
CADImportControl.CADImportControl importControl = new CADImportControl.CADImportControl();
importControl.Size = new Size( this .ClientRectangle.Width, this .ClientRectangle.Height); this .Controls.Add(importControl);
Run the application and you will receive a completely functional AutoCAD viewer on your form.
Import library
--------------
The Import library besides the features of the first library also provides a full access to all entities read from a CAD file. This feature allows a programmer to modify the properties of each entity loaded from a CAD file by code. All entities from a CAD file also can be imported to the application and saved in the following text format:
ClassName = CADImport.CADLINE; Entity name = LINE layer = 0 style = psSolid color = ffff00ff Begin point:
X=314,02877557061 Y=260,650719629539 Z=0 End point:
X=347,350258688998 Y=227,50439611107 Z=0 ClassName = CADImport.CADLINE; Entity name = LINE layer = 0 style = psSolid color = ffff00ff Begin point:
X=402,649741311002 Y=172,49560388893 Z=0 End point:
X=412,575289473926 Y=162,622230925982 Z=0 ClassName = CADImport.CADTEXT; Entity name = TEXT layer = 0 style = psSolid color = ffff00ff Start point of
Text:
X=463,963052226108
Y=189,239334106445
Z=0
The text is: 90B0 Angle=090
All basic entity properties are represented here. For the line these are name, layer, style, color, beginning point, end point. For the text entity its contents is also included as a property.
If you would like to receive an email when updates are made to this post, please register here
RSS
can you help me how to import 3d max 9 files to vb.net | http://aspadvice.com/blogs/pressreleases/archive/2006/06/01/18246.aspx | crawl-001 | refinedweb | 582 | 55.95 |
JustLinux Forums
>
Community Help: Check the Help Files, then come here to ask!
>
Programming/Scripts
> basic programming problem
PDA
Click to See Complete Forum and Search -->
:
basic programming problem
Jazzalex
05-26-2004, 12:35 PM
Hi ! I have a very simple code that throws me some linking errors when I try to run it - filename : geruest.cpp :
#include <iostream>
using namespace std;
int main()
{
int c;
while ( (c = cin.get ()) != EOF)
{
cout << (char)c;
}
return 0;
}
When doing a gcc geruest.cpp -o test
I get this
[root@localhost root]# gcc -o geruest.cpp test
gcc: test: No such file or directory
gcc: no input files
[root@localhost root]# gcc geruest.cpp -o test
/tmp/ccH5UqmO.o(.text+0x14): In function `main':
: undefined reference to `std::cin'
/tmp/ccH5UqmO.o(.text+0x19): In function `main':
: undefined reference to `std::basic_istream<char, std::char_traints<char> >::get()'
/tmp/ccH5UqmO.o(.text+0x34): In function `main':
: undefined reference to `std::cout'
/tmp/ccH5UqmO.o(.text+0x39): In function `main':
: undefined reference to `std::basic_ostream<char, std::char_traits<char> >& std::operator<< <std::char_traits<char> >(std::basic_ostream<char, std::char_traits<char> >&, char)'
/tmp/ccH5UqmO.o(.text+0x68): In function `__static_initialization_and_destruction_0(int, int)':
: undefined reference to `std::ios_base::Init::Init[in-charge]()'
/tmp/ccH5UqmO.o(.text+0x97): In function `__tcf_0':
: undefined reference to `std::ios_base::Init::~Init [in-charge]()'
/tmp/ccH5UqmO.o(.eh_frame+0x11): undefined reference to `__gxx_personality_v0'
collect2: ld returned 1 exit status
I guess I have to include the right lib but don't know which and how.
Thanks
-- A l e x
eslackey
05-26-2004, 02:03 PM
I don't know if it makes a difference but you might want to try:
# g++ geruest.cpp -o test
See if that helps.
eyceguy
05-26-2004, 02:16 PM
i believe your loop should be
while(cin.get(c))
it automatically checks for EOF so there shouldnt be a problem other than that
maccorin
05-26-2004, 06:29 PM
eslackey is correct, you need to use g++ for c++, not gcc the link errors are because if you call it gcc it does not link to libstdc++ automatically (g++ does though)
GaryJones32
05-26-2004, 11:47 PM
Originally posted by Jazzalex
When doing a gcc geruest.cpp -o test
I get this
[root@localhost root]# gcc -o geruest.cpp test
gcc: test: No such file or directory
gcc: no input files
while what everyone said about g++ is correct
the command is still in the wrong order
and you hadn't gotten to the linking problem yet
it should be
g++ -o test geruest.cpp
maccorin
05-27-2004, 12:25 AM
Originally posted by GaryJones32
while what everyone said about g++ is correct
the command is still in the wrong order
and you hadn't gotten to the linking problem yet
it should be
g++ -o test geruest.cpp
doh!
can't believe i missed that
Jazzalex
05-27-2004, 06:33 AM
Thanks to all ! That worked. But why doesn't gcc automatically link to libstdc++ ? Of course this :
gcc -o test geruest.cpp -lstdc++ worked as well.
Thanks
-- A l e x
maccorin
05-27-2004, 02:14 PM
don't count on gcc with -lstdc++ always working. If you call it via g++ (which is usually just a symlink to gcc), it will link the the needed libs for c++, it decides what to automatically link to by the value of argv[0] (the name of the program that you call)
bwkaz
05-27-2004, 06:51 PM
Originally posted by Jazzalex
But why doesn't gcc automatically link to libstdc++? Because gcc is a C compiler, not a C++ compiler. libstdc++ is just extra baggage when you're writing a C program...
justlinux.com | http://justlinux.com/forum/archive/index.php/t-128209.html | crawl-003 | refinedweb | 630 | 62.38 |
I'm trying to follow through how the sort process is working though, here's the program :
#include <stdio.h> #define MAX 4 int a[MAX]; int rand_seed=170; /* from K&R - returns random number between 0 and 32767.*/ int rand() { rand_seed = rand_seed * 1103515245 +12345; return (unsigned int)(rand_seed / 65536) % 32768; } int main() { int i,t,x,y; /* fill array */ for (i=0; i < MAX; i++) { a[i]=rand(); printf("%d\n",a[i]); } /* bubble sort the array */ for (x=0; x < MAX-1; x++) { for (y=0; y < MAX-x-1; y++) if (a[y] > a[y+1]) { t=a[y]; a[y]=a[y+1]; a[y+1]=t; } } /* print sorted array */ printf("--------------------\n"); for (i=0; i < MAX; i++) printf("%d\n",a[i]); return 0; }
**Random numbers are :
11697 = a[0]
5078 = a[1]
31364 = a[2]
26294 = a[3]
So the sorting starts at line 29 - (to my understanding) the for loop with x will run 3 times.... x < 3(MAX-1)
0
1
2
then it hit's MAX-1(3) and will stop...
for every iteration of 'loop x' there is a nested loop with the variable y... this loop cuts off when y < MAX-x-1... which would allow it to run
0
1
2
When x is 0... Then when x = 1
0
1
So for every iteration of x loop y loop runs twice : 0, 1
Nested in y loop there is an if statement. This will be ran twice before reset(one for each iteration of y loop) and measures whether
a[0] > a[1] on the first iteration. And a[1] > a[2] on the second.
So if this program was run the first if statement would be working whether
a[y] 11697 > a[y+1] 5078
Which is true so we go into the if statement.
t = a[y]
So t = 11697
a[y] = a [y+1]
So a[0] = a[1] ---- a[0] = 5078
a[y+1] = t
So a[1] = 5078
Our array values have been effectively swapped. The random number array would now read
a[0] = 5078
a[1] = 11697
a[2] = 31364
a[3] = 26294
the next time the y loop runs the same process would be used, but the if statement would be checking if a[1] > a[2]
Which it isn't so the if statement doesn't run....
I'm not quite finishing it off though... I don't get how y ever references a[3] as it... ahh [y+1] stupid me
So this process runs for the second time, doing the same swap process to the array values a[1] and a[2]
a[1] is not greater than a[2], so the if statement doesn't execute...
On the last run of y a[2] is referenced against a[3], which is greater so they swap
a[0] = 5078
a[1] = 11697
a[2] = 26294
a[3] = 31364
X will now run again, but as the array's are in order the loops will exit.
Is this right? I think it works, but hopefully it show's any constructs that I'm not really communicating well, missed or don't seem to be understanding. | http://www.dreamincode.net/forums/topic/339859-bubble-sort-array-beginner/ | CC-MAIN-2016-22 | refinedweb | 537 | 81.67 |
Installing Python Packages from a Jupyter Notebook
Fundamentally, the problem is usually rooted in the fact that the Jupyter kernels are disconnected from Jupyter's shell; in other words, the installer you invoke from a shell command may point to a different Python environment than the one running your notebook kernel.
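A quick way to see this disconnect from inside a notebook is to compare the interpreter running the kernel with the pip executable that a shell command would find. This is a minimal, illustrative check, not something from the original post:

```python
import shutil
import sys

# The interpreter that is executing this kernel:
kernel_python = sys.executable

# The pip that a shell escape like "!pip install ..." would likely invoke:
shell_pip = shutil.which("pip")

print("kernel's Python:", kernel_python)
print("shell's pip:    ", shell_pip)
# If these live in different environments, "!pip install X" can
# install X somewhere the kernel never looks.
```

When the two paths point into different environments, that mismatch is exactly the "leak" described above.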
In the wake of several discussions on this topic with colleagues, some online (exhibit A, exhibit B) and some off, I decided to treat this issue in depth here. This post will address a couple of things:
First, I'll provide a quick, bare-bones answer to the general question: how can I install a Python package so it works with my Jupyter notebook, using pip and/or conda?
Second, I'll dive into some of the background of exactly what the Jupyter notebook abstraction is doing, how it interacts with the complexities of the operating system, and how you can think about where the "leaks" are, and thus better understand what's happening when things stop working.
Third, I'll talk about some ideas the community might consider to help smooth-over these issues, including some changes that the Jupyter, Pip, and Conda developers might consider to ease the cognitive load on users.
This post will focus on two approaches to installing Python packages: pip and conda. Other package managers exist (including platform-specific tools like yum, apt, homebrew, etc., as well as cross-platform tools like enstaller), but I'm less familiar with them and won't be remarking on them further.
pip vs. conda
First, a few words on pip vs. conda.
For many users, the choice between pip and conda can be a confusing one.
I wrote way more than you ever want to know about these in a post last year, but the essential difference between the two is this:
- pip installs python packages in any environment.
- conda installs any package in conda environments.
If you already have a Python installation that you're using, then the choice of which to use is easy:
If you installed Python using Anaconda or Miniconda, then use
condato install Python packages. If conda tells you the package you want doesn't exist, then use pip (or try conda-forge, which has more packages available than the default conda channel).
If you installed Python any other way (from source, using pyenv, virtualenv, etc.), then use
pipto install Python packages
Finally, because it often comes up, I should mention that you should never use
sudo pip install.
NEVER.
It will always lead to problems in the long term, even if it seems to solve them in the short-term.
For example, if
pip install gives you a permission error, it likely means you're trying to install/update packages in a system python, such as
/usr/bin/python. Doing this can have bad consequences, as often the operating system itself depends on particular versions of packages within that Python installation.
For day-to-day Python usage, you should isolate your packages from the system Python, using either virtual environments or Anaconda/Miniconda — I personally prefer conda for this, but I know many colleagues who prefer virtualenv.
# DON'T DO THIS! !conda install --yes numpy
Fetching package metadata ........... Solving package specifications: . # All requested packages already installed. # packages in environment at /Users/jakevdp/anaconda/envs/python3.6: # numpy 1.13.3 py36h2cdce51_0
(Note that we use
--yes to automatically answer
y if and when conda asks for user confirmation)
For various reasons that I'll outline more fully below, this will not generally work if you want to use these installed packages from the current notebook, though it may work in the simplest cases.
Here is a short snippet that should work in general:
# Install a conda package in the current Jupyter kernel import sys !conda install --yes --prefix {sys.prefix} numpy
Fetching package metadata ........... Solving package specifications: . # All requested packages already installed. # packages in environment at /Users/jakevdp/anaconda: # numpy 1.13.3 py36h2cdce51_0
That bit of extra boiler-plate makes certain that conda installs the package in the currently-running Jupyter kernel (thanks to Min Ragan-Kelley for suggesting this approach). I'll discuss why this is needed momentarily.
# DON'T DO THIS !pip install numpy
Requirement already satisfied: numpy in /Users/jakevdp/anaconda/envs/python3.6/lib/python3.6/site-packages
For various reasons that I'll outline more fully below, this will not generally work if you want to use these installed packages from the current notebook, though it may work in the simplest cases.
Here is a short snippet that should generally work:
# Install a pip package in the current Jupyter kernel import sys !{sys.executable} -m pip install numpy
Requirement already satisfied: numpy in /Users/jakevdp/anaconda/lib/python3.6/site-packages
That bit of extra boiler-plate makes certain that you are running the
pip version associated with the current Python kernel, so that the installed packages can be used in the current notebook.
This is related to the fact that, even setting Jupyter notebooks aside, it's better to install packages using
$ python -m pip install <package>
rather than
$ pip install <package>
because the former is more explicit about where the package will be installed (more on this below).
The Details: Why is Installation from Jupyter so Messy?¶
Those above solutions should work in all cases... but why is that additional boilerplate necessary? In short, it's because in Jupyter, the shell environment and the Python executable are disconnected. Understanding why that matters depends on a basic understanding of a few different concepts:
- how your operating system locates executable programs,
- how Python installs and locates packages
- how Jupyter decides which Python executable to use.
For completeness, I'm going to delve briefly into each of these topics (this discussion is partly drawn from This StackOverflow answer that I wrote last year).
Note: the following discussion assumes Linux, Unix, MacOSX and similar operating systems. Windows has a slightly different architecture, and so some details will differ.
How your operating system locates executables¶
When you're using the terminal and type a command like
python,
jupyter,
ipython,
pip,
conda, etc., your operating system contains a well-defined mechanism to find the executable file the name refers to.
On Linux & Mac systems, the system will first check for an alias matching the command; if this fails it references the
$PATH environment variable:
!echo $PATH
/Users/jakevdp/anaconda/envs/python3.6/bin:/Users/jakevdp/anaconda/envs/python3.6/bin:/Users/jakevdp/anaconda/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin
$PATH lists the directories, in order, that will be searched for any executable: for example, if I type
python on my system with the above
$PATH, it will first look for
/Users/jakevdp/anaconda/envs/python3.6/bin/python, and if that doesn't exist it will look for
/Users/jakevdp/anaconda/bin/python, and so on.
(Parenthetical note: why is the first entry of
$PATH repeated twice here? Because every time you launch
jupyter notebook, Jupyter prepends the location of the
jupyter executable to the beginning of the
$PATH. In this case, the location was already at the beginning of the path, and the result is that the entry is duplicated. Duplicate entries add clutter, but cause no harm).
If you want to know what is actually executed when you type
python, you can use the
type shell command:
!type python
python is /Users/jakevdp/anaconda/envs/python3.6/bin/python
Note that this is true of any command you use from the terminal:
!type ls
ls is /bin/ls
Even built-in commands like
type itself:
!type type
type is a shell builtin
You can optionally add the
-a tag to see all available versions of the command in your current shell environment; for example:
!type -a python
python is /Users/jakevdp/anaconda/envs/python3.6/bin/python python is /Users/jakevdp/anaconda/envs/python3.6/bin/python python is /Users/jakevdp/anaconda/bin/python python is /usr/bin/python
!type -a conda
conda is /Users/jakevdp/anaconda/envs/python3.6/bin/conda conda is /Users/jakevdp/anaconda/envs/python3.6/bin/conda conda is /Users/jakevdp/anaconda/bin/conda
!type -a pip
pip is /Users/jakevdp/anaconda/envs/python3.6/bin/pip pip is /Users/jakevdp/anaconda/envs/python3.6/bin/pip pip is /Users/jakevdp/anaconda/bin/pip
When you have multiple available versions of any command, it is important to keep in mind the role of
$PATH in choosing which will be used.
import sys sys.path
['', '', '/Users/jakevdp/anaconda/lib/python3.6/site-packages/IPython/extensions', '/Users/jakevdp/.ipython']
By default, the first place Python looks for a module is an empty path, meaning the current working directory.
If the module is not found there, it goes down the list of locations until the module is found.
You can find out which location has been used using the
__path__ attribute of an imported module:
import numpy numpy.__path__
['/Users/jakevdp/anaconda/lib/python3.6/site-packages/numpy']
In most cases, a Python package you install with
pip or with
conda will be put in a directory called
site-packages.
The important thing to realize is that each Python executable has its own
site-packages: what this means is that when you install a package, it is associated with particular python executable and by default can only be used with that Python installation!
We can see this by printing the
sys.path variables for each of the available
python executables in my path, using Jupyter's delightful ability to mix Python and bash commands in a single code block:
paths = !type -a python for path in set(paths): path = path.split()[-1] print(path) !{path} -c "import sys; print(sys.path)" print()
/Users/jakevdp/anaconda/envs/python3.6/bin/python ['', '/Users/jakevdp/anaconda/envs/python3.6/lib/python36.zip', '/Users/jakevdp/anaconda/envs/python3.6/lib/python3.6', '/Users/jakevdp/anaconda/envs/python3.6/lib/python3.6/lib-dynload', '/Users/jakevdp/anaconda/envs/python3.6/lib/python3.6/site-packages'] /usr/bin/python ['', '/lib/python2.7/lib-tk', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-old', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload', '/Library/Python/2.7/site-packages', '/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python', '/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/PyObjC'] /Users/jakevdp/anaconda/bin/python ['', '']
The full details here are not particularly important, but it is important to emphasize that each Python executable has its own distinct paths, and unless you modify
sys.path (which should only be done with great care) you cannot import packages installed in a different Python environment.
When you run
pip install or
conda install, these commands are associated with a particular Python version:
pipinstalls packages in the Python in its same path
condainstalls packages in the current active conda environment
So, for example we see that
pip install will install to the conda environment named
python3.6:
!type pip
pip is /Users/jakevdp/anaconda/envs/python3.6/bin/pip
And
conda install will do the same, because
python3.6 is the current active environment (notice the
* indicating the active environment):
!conda env list
# conda environments: # python2.7 /Users/jakevdp/anaconda/envs/python2.7 python3.5 /Users/jakevdp/anaconda/envs/python3.5 python3.6 * /Users/jakevdp/anaconda/envs/python3.6 rstats /Users/jakevdp/anaconda/envs/rstats root /Users/jakevdp/anaconda
The reason both
pip and
conda default to the conda
python3.6 environment is that this is the Python environment I used to launch the notebook.
I'll say this again for emphasis: the shell environment in Jupyter notebook matches the Python version used to launch the notebook.
How Jupyter executes code: Jupyter Kernels¶
The next relevant question is how Jupyter chooses to execute Python code, and this brings us to the concept of a Jupyter Kernel.
A Jupyter kernel is a set of files that point Jupyter to some means of executing code within the notebook. For Python kernels, this will point to a particular Python version, but Jupyter is designed to be much more general than this: Jupyter has dozens of available kernels for languages including Python 2, Python 3, Julia, R, Ruby, Haskell, and even C++ and Fortran!
If you're using the Jupyter notebook, you can change your kernel at any time using the Kernel → Choose Kernel menu item.
To see the kernels you have available on your system, you can run the following command in the shell:
!jupyter kernelspec list
Available kernels: python3 /Users/jakevdp/anaconda/envs/python3.6/lib/python3.6/site-packages/ipykernel/resources conda-root /Users/jakevdp/Library/Jupyter/kernels/conda-root python2.7 /Users/jakevdp/Library/Jupyter/kernels/python2.7 python3.5 /Users/jakevdp/Library/Jupyter/kernels/python3.5 python3.6 /Users/jakevdp/Library/Jupyter/kernels/python3.6
Each of these listed kernels is a directory that contains a file called
kernel.json which specifies, among other things, which language and executable the kernel should use.
For example:
!cat /Users/jakevdp/Library/Jupyter/kernels/conda-root/kernel.json
{ "argv": [ "/Users/jakevdp/anaconda/bin/python", "-m", "ipykernel_launcher", "-f", "{connection_file}" ], "display_name": "python (conda-root)", "language": "python" }
If you'd like to create a new kernel, you can do so using the jupyter ipykernel command; for example, I created the above kernels for my primary conda environments using the following as a template:
$ source activate myenv $ python -m ipykernel install --user --name myenv --display-name "Python (myenv)"
Now we have the full background to answer our question: Why don't
!pip install or
!conda install always work from the notebook?
The root of the issue is this: the shell environment is determined when the Jupyter notebook is launched, while the Python executable is determined by the kernel, and the two do not necessarily match.
In other words, there is no guarantee that the
python,
pip, and
conda in your
$PATH will be compatible with the
python executable used by the notebook.
Recall that the
python in your path can be determined using
!type python
python is /Users/jakevdp/anaconda/envs/python3.6/bin/python
The Python executable being used in the notebook can be determined using
sys.executable
'/Users/jakevdp/anaconda/bin/python'.
For conda, you can set the prefix manually in the shell command:
$ conda install --yes --prefix /Users/jakevdp/anaconda numpy
or, to automatically use the correct prefix (using syntax available in the notebook)
!conda install --yes --prefix {sys.prefix} numpy
For pip, you can specify the Python executable explicitly:
$ /Users/jakevdp/anaconda/bin/python -m pip install numpy
or, to automatically use the correct executable (again using notebook shell syntax)
!{sys.executable} -m pip install numpy
Remember: you need your installation command to match the current python kernel if you want installed packages to be available in the notebook..
The exception is the special case where you run
jupyter notebook from the same Python environment to which your kernel points; in that case the simple installation approach should work.
But that leaves us in an undesireable place, as it increases the learning curve for novice users who may want to do something they (rightly) presume should be simple: install a package and then use it. So what can we as a community do to smooth-out this issue?
I have a few ideas, some of which might even be useful:
Potential Changes to Jupyter¶
As I mentioned, the fundamental issue is a mismatch between Jupyter's shell environment and compute kernel. So, could we massage kernel specifications such that they force the two to match?
Perhaps: for example, this github issue shows an approach to modifying shell variables as part of kernel startup.
Basically, in your kernel directory, you can add a script
kernel-startup.sh that looks something like this (and make sure you change the permissions so that it's executable):
#!/usr/bin/env bash # activate anaconda env source activate myenv # this is the critical part, and should be at the end of your script: exec python -m ipykernel $@
Then in your
kernel.json file, modify the
argv field to look like this:
"argv": [ "/path/to/kernel-startup.sh", "-f", "{connection_file}" ]
Once you do this, switching to the
myenv kernel will automatically activate the
myenv conda environment, which changes your
$CONDA_PREFIX,
$PATH and other system variables such that
!conda install XXX and
!pip install XXX will work correctly. A similar approach could work for virtualenvs or other Python environments.
There is one tricky issue here: this approach will fail if your
myenv environment does not have the
ipykernel package installed, and probably also requires it to have a jupyter version compatible with that used to launch the notebook. So it's not a full solution to the problem by any means, but if Python kernels could be designed to do this sort of shell initialization by default, it would be far less confusing to users:
!pip install and
!conda install would simply work.
Potential Changes to pip¶
One source of installation confusion, even outside of Jupyter, is the fact that, depending on the nature of your system's aliases and
$PATH variable,
pip and
python might point to different paths.
In this case
pip install will install packages to a path inaccessible to the
python executable.
For this reason, it is safer to use
python -m pip install, which explicitly specifies the desired Python version (explicit is better than implicit, after all).
This is one reason that
pip install no longer appears in Python's docs, and experienced Python educators like David Beazley never teach bare pip.
CPython developer Nick Coghlan has even indicated that the
pip executable may someday be deprecated in favor of
python -m pip.
Even though it's more verbose, I think forcing users to be explicit would be a useful change, particularly as the use of virtualenvs and conda envs becomes more common.
Explicit invocation¶
For symmetry with
pip, it would be nice if
python -m conda install could be expected to work in the same way the
pip counterpart does.
You can call
conda this way in the root environment, but the conda Python package (as opposed to the conda executable) cannot currently be installed anywhere but the root environment:
(myenv) jakevdp$ conda install conda Fetching package metadata ........... InstallError: Error: 'conda' can only be installed into the root environment
I suspect that allowing
python -m conda install in all conda environments would require a fairly significant redesign of conda's installation model, so it may not be worth the change just for symmetry with
pip's API.
That said, such a symmetry would certainly be a help to users.
A pip channel for conda?¶
Another useful change conda could make would be to add a channel that essentially mirrors the Python Package Index, so that when you do
conda install some-package it will automatically draw from packages available to
pip as well.
I don't have a deep enough knowledge of conda's architecture to know how easy such a feature would be to implement, but I do have loads of experiences helping newcomers to Python and/or conda: I can say with certainty that such a feature would go a long way toward softening their learning curve.
New Jupyter Magic Functions¶
Even if the above changes to the stack are not possible or desirable, we could simplify the user experience somewhat by introducing
%pip and
%conda magic functions within the Jupyter notebook that detect the current kernel and make certain packages are installed in the correct location.
from IPython.core.magic import register_line_magic @register_line_magic def pip(args): """Use pip from the current kernel""" from pip import main main(args.split())
Running it as follows will install packages in the expected location
%pip install numpy
Requirement already satisfied: numpy in /Users/jakevdp/anaconda/lib/python3.6/site-packages
conda magic¶
Similarly, we can define a conda magic that will do the right thing if you type
%conda install XXX.
This is a bit more involved than the
pip magic, because it must first confirm that the environment is conda-compatible, and then (related to the lack of
python -m conda install) must call a subprocess to execute the appropriate shell command:
from IPython.core.magic import register_line_magic import sys import os from subprocess import Popen, PIPE def is_conda_environment(): """Return True if the current Python executable is in a conda env""" # TODO: make this work with Conda.exe in Windows conda_exec = os.path.join(os.path.dirname(sys.executable), 'conda') conda_history = os.path.join(sys.prefix, 'conda-meta', 'history') return os.path.exists(conda_exec) and os.path.exists(conda_history) @register_line_magic def conda(args): """Use conda from the current kernel""" # TODO: make this work with Conda.exe in Windows # TODO: fix string encoding to work with Python 2 if not is_conda_environment(): raise ValueError("The python kernel does not appear to be a conda environment. " "Please use ``%pip install`` instead.") conda_executable = os.path.join(os.path.dirname(sys.executable), 'conda') args = [conda_executable] + args.split() # Add --prefix to point conda installation to the current environment if args[1] in ['install', 'update', 'upgrade', 'remove', 'uninstall', 'list']: if '-p' not in args and '--prefix' not in args: args.insert(2, '--prefix') args.insert(3, sys.prefix) # Because the notebook does not allow us to respond "yes" during the # installation, we need to insert --yes in the argument list for some commands if args[1] in ['install', 'update', 'upgrade', 'remove', 'uninstall', 'create']: if '-y' not in args and '--yes' not in args: args.insert(2, '--yes') # Call conda from command line with subprocess & send results to stdout & stderr with Popen(args, stdout=PIPE, stderr=PIPE) as process: # Read stdout character by character, as it includes real-time progress updates for c in iter(lambda: process.stdout.read(1), b''): sys.stdout.write(c.decode(sys.stdout.encoding)) # Read stderr line by line, because real-time does not matter for line in iter(process.stderr.readline, b''): sys.stderr.write(line.decode(sys.stderr.encoding))
You can now use
%conda install and it will install packages to the correct environment:
%conda install numpy
Fetching package metadata ........... Solving package specifications: . # All requested packages already installed. # packages in environment at /Users/jakevdp/anaconda: # numpy 1.13.3 py36h2cdce51_0
This conda magic still needs some work to be a general solution (cf. the TODO comments in the code), but I think this is a useful start.
If a pip magic and conda magic similar to the above were added to Jupyter's default set of magic commands, I think it could go a long way toward solving the common problems that users have when trying to install Python packages for use with Jupyter notebooks. This approach is not without its own dangers, though: these magics are yet another layer of abstraction that, like all abstractions, will inevitably leak. But if they are implemented carefully, I think it would lead to a much nicer overall user experience.
Summary¶
In this post, I tried to answer once and for all the perennial question, how do I install Python packages in the Jupyter notebook.
After proposing some simple solutions that can be used today, I went into a detailed explanation of why these solutions are necessary: it comes down to the fact that in Jupyter, the kernel is disconnected from the shell. The kernel environment can be changed at runtime, while the shell environment is determined when the notebook is launched. The fact that a full explanation took so many words and touched so many concepts, I think, indicates a real usability issue for the Jupyter ecosystem, and so I proposed a few possible avenues that the community might adopt to try to streamline the experience for users.
One final addendum: I have a huge amount of respect and appreciation for the developers of Jupyter, conda, pip, and related tools that form the foundations of the Python data science ecosystem. I'm fairly certain those developers have already considered these issues and weighed some of these potential fixes – if any of you are reading this, please feel free to comment and set me straight on anything I've overlooked! And, finally, thanks for all that you do for the open source community.
Thanks to Andy Mueller, Craig Citro, and Matthias Bussonnier for helpful comments on an early draft of this post. | http://jakevdp.github.io/blog/2017/12/05/installing-python-packages-from-jupyter/index.html | CC-MAIN-2022-33 | refinedweb | 4,052 | 52.29 |
Created on 2012-04-01 15:28 by tshepang, last changed 2019-08-23 14:45 by scoder. This issue is now closed.
I often miss lxml's "pretty_print=True" functionality. Can you implement something similar?
Would you like to provide a patch?
Tshepang,
Frankly, there are a lot of issues to solve in ElementTree (it hasn't been given love in a long time...) and such features would be low priority, as I'm not getting much help and am swamped already.
As Martin said, patches can go a long way here...
Okay, I will try, even though C scares me.
You may be able to code it entirely in the Python part of the module (adding a new parameter to Element.write and tostring).
A patch exists in the duplicate #17372.
Proposed patch copied over from duplicate issue 17372.
Just to reiterate this point, lxml.etree supports a "pretty_print" flag in its tostring() function and ElementTree.write(). It would thus make sense to support the same thing in ET.
For completeness, the current signature looks like this:
def tostring(element_or_tree, *, encoding=None, method="xml",
             xml_declaration=None, pretty_print=False,
             with_tail=True, standalone=None, doctype=None,
             exclusive=False, with_comments=True,
             inclusive_ns_prefixes=None):
(The last three options are for C14N serialisation.)
For the record: on 2015-04-02, bpo-23847 was marked as a duplicate of this issue.
A few more thoughts for consideration:
* We already have a toprettyxml() tool in the minidom package.
* Since whitespace is significant in XML, prettifying changes the content and meaning, so it doesn't round-trip and should only be used for debugging purposes.
* Usually, I recommend using XML viewers such as the one built into the Chrome browser. That provides indentation without changing meaning. It also lets you run searches and conveniently supports folding and unfolding elements. I would rather someone use a viewer than something like toprettyxml().
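For reference, the existing minidom tool mentioned above can be used like this; note that it inserts whitespace into the serialized output, which is exactly the round-tripping concern raised above:

```python
from xml.dom import minidom

doc = minidom.parseString("<root><child>text</child></root>")
pretty = doc.toprettyxml(indent="  ")
print(pretty)
```

The result carries an XML declaration and newline/indent characters that were not present in the input document.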
I have a use case where the receiving application is expecting the indentation, and I need to run my code in Lambda. So, lxml is out of the question.
FWIW, see the relevant section of the XML specification on white space handling.
OTOH, the Java TransformerFactory does support a property, OutputKeys.INDENT, so there is precedent for this feature request.
Stefan, would you please make a final determination or pronouncement on whether this makes sense for ElementTree or whether it is outside the scope of what the module is trying to accomplish.
The spec section that Raymond quoted makes it clear that pretty printing is not for everyone. But there are many use cases where it is 1) helpful, 2) leads to correct results, and 3) does not grow the file size excessively. Whoever wants to make use of it is probably in such a situation. I think adding some kind of support in the standard library would be nice, but it should not hurt "normal" uses, especially when a lot of data is involved.
I'll send a PR that adds an indent() function to pre-process trees. Comments welcome.
New changeset b5d3ceea48c181b3e2c6c67424317afed606bd39 by Stefan Behnel in branch 'master':
bpo-14465: Add an indent() function to xml.etree.ElementTree to pretty-print XML trees (GH-15200) | https://bugs.python.org/issue14465 | CC-MAIN-2021-43 | refinedweb | 534 | 65.93 |
truncate(), truncate64()
Truncate a file to a specified length
Synopsis:
#include <unistd.h>

int truncate( const char* path, off_t length );
int truncate64( const char* path, off64_t length );

The truncate64() function is a large-file version of truncate().
The effect of truncate() on other types of files is unspecified. If the file previously was larger than length, the extra data is lost. If it was previously shorter than length, bytes between the old and new lengths are read as zeroes. The process must have write permission for the file.
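Both cases described above — shrinking discards the extra data, extending grows the file — can be exercised with a small helper that reports the resulting file size. The scratch-file path and the helper's name are illustrative, not part of the API:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

/* Create a scratch file holding 100 bytes, truncate() it to `len`,
 * and return the size stat() reports afterwards (-1 on any error). */
long demo_truncate(off_t len) {
    char path[] = "/tmp/trunc_demo_XXXXXX";
    int fd = mkstemp(path); /* unique scratch file */
    if (fd < 0) return -1;

    char buf[100];
    memset(buf, 'x', sizeof buf);
    if (write(fd, buf, sizeof buf) != (ssize_t)sizeof buf) {
        close(fd);
        unlink(path);
        return -1;
    }
    close(fd);

    if (truncate(path, len) != 0) {
        unlink(path);
        return -1;
    }

    struct stat st;
    long size = (stat(path, &st) == 0) ? (long)st.st_size : -1;
    unlink(path); /* clean up the scratch file */
    return size;
}
```

Calling the helper with a length smaller than 100 demonstrates the data-loss case; a larger length demonstrates extension.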
If the request would cause the file size to exceed the soft file size limit for the process, the request fails and the implementation generates the SIGXFSZ signal for the process.
Classification: truncate() is POSIX 1003.1 XSI; truncate64() is Large-file support.
Kubernetes Ingress allows you to customize how external entities can interact with your Kubernetes applications via the network. This lab will allow you to exercise your knowledge of Kubernetes Ingress. You will use Ingress to open access from an existing service to an external server.
Learning Objectives
Successfully complete this lab by achieving the following learning objectives:
- Create a Service to Expose the web-auth Deployment
Create a service that will expose the Pods from the web-auth Deployment, located in the default namespace. The Deployment's Pods publish port 80. Expose port 80 on the Service itself as well.
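One manifest that satisfies this objective might look like the following. The Service name and the selector label (app: web-auth) are assumptions — match them to the labels actually used by the web-auth Deployment's Pod template:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-auth-svc       # name is an assumption; pick any valid name
  namespace: default
spec:
  selector:
    app: web-auth          # assumption: must match the Deployment's Pod labels
  ports:
    - port: 80             # port exposed on the Service
      targetPort: 80       # port the Pods publish
```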
- Create an Ingress That Maps to the New Service
Create an Ingress that maps to the new Service. This Ingress should route requests with the path /auth to port 80 on the Service.
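A corresponding Ingress sketch using the networking.k8s.io/v1 schema. The Ingress name is an assumption, and the backend service name must match whatever you named the Service in the previous objective:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-auth-ingress   # name is an assumption
  namespace: default
spec:
  rules:
    - http:
        paths:
          - path: /auth
            pathType: Prefix
            backend:
              service:
                name: web-auth-svc   # assumption: the Service's actual name
                port:
                  number: 80
```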
TOTD #137: Asynchronous EJB, a light-weight JMS solution - Feature-rich Java EE 6
By arungupta on May 18, 2010
One of the new features introduced in Enterprise Java Beans 3.1 (JSR 318) is asynchronous invocation of a business method. This allows the control to return to the client before the container dispatches the instance to a bean. The asynchronous operations can return a "Future<V>" that allow the client to retrieve a result value, check for exceptions, or attempt to cancel any in-progress invocations.
The "@Asynchronous" annotation is used to mark a specific (method-level) or all (class-level) methods of the bean as asynchronous. Here is an example of a stateless session bean that is tagged as asynchronous at the class-level:
@Stateless
@Asynchronous
public class SimpleAsyncEJB {

    public Future<Integer> addNumbers(int n1, int n2) {
        Integer result;
        result = n1 + n2;
        try {
            // simulate JPA queries + reading file system
            Thread.currentThread().sleep(2000);
        } catch (InterruptedException ex) {
            ex.printStackTrace();
        }
        return new AsyncResult(result);
    }
}
The method signature returns "Future<Integer>" and the return type is "AsyncResult(Integer)". The "AsyncResult" is a new class introduced in EJB 3.1 that wraps the result of an asynchronous method as a Future object. Under the covers, the value is retrieved and sent to the client. Adding any new methods to this class will automatically make them asynchronous as well.
The 2 second sleep simulates server side processing of the response which may involve querying the database or reading some information from the filesystem.
This EJB can be injected in a Servlet in the usual way:
@EJB SimpleAsyncEJB ejb;
This business method can be invoked in the "doGet" method of a Servlet as:
PrintWriter out = response.getWriter();
try {
    Future<Integer> future = ejb.addNumbers(10, 20);
    print(out, "Client is working ...");
    Thread.currentThread().sleep(1000);
    if (!future.isDone()) {
        print(out, "Response not ready yet ...");
    }
    print(out, "Client is working again ...");
    Thread.currentThread().sleep(1000);
    if (!future.isDone()) {
        print(out, "Response not ready yet ...");
    }
    print(out, "Client is still working ...");
    Thread.currentThread().sleep(1000);
    if (!future.isDone()) {
        print(out, "Response not ready yet ...");
    } else {
        print(out, "Response is now ready");
    }
    Integer result = future.get();
    print(out, "The result is: " + result);
} catch (InterruptedException ex) {
    ex.printStackTrace();
} catch (ExecutionException ex) {
    ex.printStackTrace();
} finally {
    out.close();
}
Control is returned to the client right after the EJB business method is invoked; it does not wait for the business method's execution to finish. The methods on the "Future" API are used to query whether the result is available. The "Thread.sleep()" for 1 second simulates that the client can continue working and possibly check for results at a regular interval. The "print" is a convenience method that prints the string to "response.getWriter" and flushes the output so that it can be instantly displayed instead of getting buffered. Invoking this "doGet" shows the following output:
1274142978365: Client is working ...
1274142979365: Response not ready yet ...
1274142979365: Client is working again ...
1274142980366: Response not ready yet ...
1274142980366: Client is still working ...
1274142981366: Response is now ready
1274142981366: The result is: 30
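The servlet's polling loop is ordinary java.util.concurrent.Future usage. Outside a container, the same pattern can be sketched with an ExecutorService standing in for the EJB container's dispatcher — the class and method names here are illustrative, not part of the EJB API:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class AsyncSketch {

    // Daemon single-thread pool stands in for the container's async dispatcher.
    static final ExecutorService pool = Executors.newSingleThreadExecutor(r -> {
        Thread t = new Thread(r);
        t.setDaemon(true);
        return t;
    });

    static Future<Integer> addNumbers(int n1, int n2) {
        return pool.submit(() -> {
            Thread.sleep(200); // simulate slow server-side work
            return n1 + n2;
        });
    }

    // Blocking convenience wrapper (illustrative helper, not an EJB API).
    static int addBlocking(int n1, int n2) {
        try {
            return addNumbers(n1, n2).get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) throws Exception {
        Future<Integer> future = addNumbers(10, 20);
        while (!future.isDone()) { // the client keeps working meanwhile
            System.out.println("Response not ready yet ...");
            Thread.sleep(50);
        }
        System.out.println("The result is: " + future.get()); // prints "The result is: 30"
    }
}
```

The container-managed version differs mainly in who owns the thread pool: with @Asynchronous, the EJB container schedules the work and wraps the return value for you via AsyncResult.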
The client transaction context does not propagate from the client to the asynchronous business method, but the security context does.
Up until now, any kind of asynchrony in EJB required using Message Driven Beans, which in turn required some JMS setup. The introduction of this feature allows you to easily incorporate asynchrony into your EJB applications.
Try this and other Java EE 6 features in GlassFish Server Open Source Edition 3 or Oracle GlassFish Server today!
The complete source code used in this blog can be downloaded here.
Technorati: totd javaee ejb asynchronous glassfish v3
Thread.currentThread().sleep(2000);
should be written as
Thread.sleep(2000);
because sleep is a static method
Posted by bob on December 15, 2010 at 06:34 PM PST # | https://blogs.oracle.com/arungupta/entry/totd_137_asynchronous_ejb_a | CC-MAIN-2014-15 | refinedweb | 634 | 57.77 |
Finish the Slouch Alert and Make Noise!
Introduction: Finish the Slouch Alert and Make Noise!
Congratulations! The slouch alert is done! In this lesson, we will continue to test and play with the slouch alert circuit and learn how to trigger sound with it. That's the beauty of working with a microcontroller, you can use the same circuit and program it to have different behaviors.
After the Slouch Alert is tested and working you learn how to make your LilyPad USB into a keyboard. The LilyPad USB can produce keypresses that can trigger sound files when used with the right software. Let's get to it!
Test Slouch Alert Circuit
In the previous lesson, you finished the circuit and tested your connections along the way. It is now time to test the circuit again to make sure it's working before you try it on. The more you test a circuit throughout the creation process, the less amount of steps you need to backtrack once the circuit stops working.
Instead of using the battery let's test it with the USB cord plugged in. This way we can see the sensor's values and know if the sensor is working properly and whether the vibe board is turning on when it is supposed to.
Open Arduino, plug the in the USB cord and turn the board on. Open the slouchAlert sketch. This sketch was already uploaded during the Slouch Alert Circuit lesson, so no need to hit upload again. Remember that the sensor won't be read until the snap switch is closed so when you open the serial monitor there won't be any values printing in the monitor. Go ahead and close the switch and watch the values jump onto the screen. Press or bend the flex sensor to make it hit the threshold of 400. The word "slouch" will print in the monitor and the vibe board will go on.
Wear and Calibrate Slouch Alert
Now you know the circuit is working but you will need to test the sensor while it's on too. Right now the value for the flex sensor to hit that tells you whether you are slouching or not is 400. This is the default and will change based on what you read in the serial monitor once the sensor in on and you are you slouching.
Turn the LilyPad USB off. With the board still plugged into the computer put the t-shirt on. If this proves to be too difficult, unplug the cord from the computer, put the shirt on then replug the USB cord. Turn the LilyPad board back on.
Open the serial monitor while the slouchAlert sketch is open and observe the sensor's values. Slouch and straighten like you did in the Read Body Movement lesson to see if the current threshold works for you. The threshold right now is 400. You can see below where it states that in the code.
As a refresher let's go over where this is found in the sketch again. Below is the part of the code that says - if the sensor value is more than 400 turn the vibe on and print "slouch" in the serial monitor. Otherwise, turn the vibe board off.
If it's already above 400 and detecting a slouch while you are straight you need to raise the threshold. If it's difficult to detect to a slouch lower this number.
if (average > 400) { Serial.println("slouch"); digitalWrite(vibeBoard, HIGH); delay(1); } else { digitalWrite(vibeBoard, LOW); }
Make It Mobile
Now is the time for the project to become mobile! The circuit is done, tested, and the sensor is calibrated. Plug the battery in if it isn't already and unplug the USB cord. The circuit is now untethered and you can walk away from the computer! Test it out and make sure to take a picture to share at the end of this lesson while it's on!
Use LilyPad USB As a Keyboard
In the next few steps, I will go over how to make your LilyPad USB trigger sound files. You can think of this as bonus material... a little going away present to get inspired from for more projects to come. It uses the same circuit you just built for the Slouch Alert project and a simple sketch that I will explain later on. Ready to make some noise? Let's do it!
The thing that sets the LilyPad USB apart from other LilyPad boards is that it can be used as a keyboard or a mouse. This means that when a switch is closed or when a sensor hits a threshold the LilyPad USB can produce a keypress of a character, just like if you were to hit a key on your keyboard.
Before we go on, I must say that there is a warning that comes with turning your LilyPad USB (or any microcontroller) into a keyboard. You need to build an on/off switch into the circuit. This is because once you make it a keyboard it can get stuck printing a character and can take over your computer prohibiting you from uploading a new sketch to your board and many other things. This is partly why the snap switch is part of the circuit, so you can turn on and off the keyboard emulation while still being able to upload a new sketch to your LilyPad USB. This will all make more sense after you have uploaded the sketch and experienced it.
Download the attached musicTee file. In it are three sketches and sound files to get you started.
Upload
To use the LilyPad USB as a keyboard you will need to tether it back to the computer. I know it can be a drag but wireless serial communication will need to wait for another class. :)
Plug the USB cord into the board and open up the musicTee sketch in Arduino. Check the board and port, then upload the sketch. Now open the serial monitor to see your sensor values. You will get the below error but you already selected your port. So, why does this pop-up?
It is because the LilyPad's port has changed to an HID (human interaction device) type port and will need to be selected again under Tools > Port. This means that your board has begun to emulate a keyboard because a keyboard is an HID. You will need to select this port every time you upload a sketch using the keyboard feature.
Now open the serial port and remember to close your snap switch in order to see the sensor values. Press or bend the sensor to see the character "n" print in the text field at the top of the serial monitor and "play" print in the window. Woohoo! You are ready to trigger some sound - well, almost. First, let's quickly go over the new parts of this sketch that makes this all possible. Keep the t-shirt plugged into the computer but turn the LilyPad board off.
The Code
So how does your board become a keyboard? There is some magic going on under the hood in the software. Arduino software can do many things and some of these features are available through the use of libraries. Arduino libraries come bundled with the Arduino software or you can download and install libraries to add more functionality. You import libraries into your sketches so you can use the library's features throughout the sketch. Want to know more? Check out Arduino's website page on libraries here.
Let's take a look at where the library is used in the musicTee sketch.
To import a library into a sketch go to Sketch > Include Library > choose a library to import. It will add a line at the top of your sketch that starts with
#include
The musicTee already has this line at the very top which imports the Keyboard library.
#include "Keyboard.h"
Start emulating a keyboard by calling Keyboard.begin() in setup().
If the sensor value held in the variable "average" goes over 250 print "play" in the serial window and print the character "n" by emulating a keyboard press using Keyboard.press(). Emulate pressing the key for 100 milliseconds using delay() and then release the key with Keyboard.releaseAll().
if (average > 250) { Serial.println("play"); Keyboard.press('n'); delay(100); Keyboard.releaseAll(); digitalWrite(vibeBoard, HIGH); delay(1); }
Download Soundplant and Load Files
To trigger sounds with your LilyPad keyboard you are going to use a free piece of software called Soundplant created by Marcel Blum. Head over to the website to download and install Soundplant based on your operating system (sorry Linux!).
Open up Soundplant from your Applications folder and a virtual keyboard will pop up on your screen. This software is really fun and super easy to use. I love it because you can quickly change sounds whenever you like. Open the folder that contains the sound files you downloaded earlier in this lesson. Choose one and drag and drop it onto the "n" key. Check to see if your LilyPad is still off (it should be). Press the "n" on your computer's keyboard and the sound file will play.
Try dragging and dropping other sound files. Soundplant's website has a great list of where to get more free sounds on the download page.
Trigger Sound With LilyPad USB
At this point, you have gotten "n" to print and you've gotten sound files playing with Soundplant. Now let's put these two things together. Turn on your LilyPad USB and the musicTee sketch that you have already uploaded to the board will start running. While in Soundplant, bend the flex sensor and the sound file on the character "n" will play. You have just created a gestural musical controller!
Hooray!
You now know how to make your own sensor, how to sense a gesture, and how to trigger events such as haptic feedback and sounds. The sound triggering can get really fun when you add more sensors.
:)
Washing Wearable Electronics
..
.
| https://mobile.instructables.com/lesson/Slouching-and-Triggering-Sound/ | CC-MAIN-2019-18 | refinedweb | 1,687 | 81.33 |
Opened 6 years ago
Closed 2 years ago
#17914 closed Bug (wontfix)
Add support for namespaced view references when passing a callable to reverse()
Description
It's not possible to reverse a URL by passing a function reference to reverse()
that's hooked into the URLconf under a namespace.
Consider the trivial Django project (that doesn't work):
# urls.py from django.conf.urls.defaults import patterns, include from django.core.urlresolvers import reverse from django.http import HttpResponse def view(request): # Return the URL to this view return HttpResponse(reverse(view)) level2 = patterns('', (r'^2/', view) ) urlpatterns = patterns('', (r'^1/', include(level2, namespace="foo")) )
Removing
, namespace="foo"
will make it work.
My understanding of why this happens, is that
reverse() traverses
RegexURLResolvers
*only* for strings. Function references are passed directly to the root
RegexURLResolver.reverse
method.:
# from django/core/urlresolvers.py ~ line 432 if not isinstance(viewname, basestring): view = viewname else: # <snip> -- resolver traversal happens here to honor namespaces # `resolver` is reassigned to last RegexURLResolver in namespace chain return iri_to_uri(resolver._reverse_with_prefix(view, prefix, *args, **kwargs))
Obviously traversing namespaces isn't possible for a function reference,
so what's needed is to add *all* descendent view references into each
RegexURLResolver's
reverse_dict dictionary.
Populating
reverse_dict is lazy and happens on first access, the work is
performed in
RegexURLResolver._populate:
# django/core/urlresolvers.py ~ line 237 def _populate(self): # <snip> for pattern in reversed(self.url_patterns): # <snip> if isinstance(pattern, RegexURLResolver): if pattern.namespace: # <snip> -- tldr `self.reverse_dict` isn't populated, `pattern` (the # RegexURLResolver) is instead stored in `self.namespace_dict` else: # <snip> -- tldr `self.reverse_dict` *is* populated with the keys # being the name of each URL (or the function reference) # defined in this level. (i.e. only children, not all descendants) else: # <snip> -- tldr `self.reverse_dict` is again populated
This combination of behaviour by
RegexURLResolver._populate and
reverse()
leads to the problem.
Attachments (3)
Change History (26)
Changed 6 years ago by
Changed 6 years ago by
fix + tests
comment:1 Changed 6 years ago by
comment:2 Changed 6 years ago by
comment:3 Changed 6 years ago by
comment:4 Changed 6 years ago by
It should be noted that this patch is technically backwards incompatible. After applying this patch, it's no longer possible to use recursive URL patterns, e.g. consider a project named 'foo', and it had a
urls.py that contained:
urlpatterns = patterns('', (r'^abc/', include('foo.urls'))
comment:5 Changed 6 years ago by
Accepting.
Like we talked about on IRC, let's just avoid circular imports when doing your traversal rather than dropping that functionality.
comment:6 Changed 5 years ago by
I've updated the patch to maintain support for recursive urlconfs.
comment:7 Changed 5 years ago by
Doing a quick scan of URL reversing during a sprint, the last git code looks reasonable. Not ready to commit yet -- I haven't fully digested it, but my intuition is that it's the right kind of thing.
Except(!) that using function objects in reverse() situations is kind of a "don't do that" kind of situation. We permitted it initially for smooth integration, but it's kind of a lucky happenstance. My point being, that a patch of this size is a reasonable to fix this ticket. If it grows to twice this size, we should maybe just add a "don't do that" note to the documentation -- the extra complexity isn't worth it.
For now, charge ahead!
comment:8 Changed 5 years ago by
comment:9 Changed 5 years ago by
comment:10 Changed 4 years ago by
comment:11 Changed 4 years ago by
Any news here?
Since reverse now accepts app_name -- as well as include -- we could easily find corresponding namespace. E.g.:
from django.conf.urls import * urlpatterns = patterns('', url(r'^payment/', include('payment.urls', namespace='payment', app_name='payment')), ) reverse(view, current_app='payment') # try with current_app given
As alternative solution -- we could require passing namespace when using with function. The we could convert function to its name, add namespace+":" and use existing machinery.
comment:12 Changed 4 years ago by
#21899 was closed a duplicate of this ticket, but I'm not sure they are the same thing. Will fixing this ticket also fix #21899? This ticket seems to be about reversing raw callable view functions, for which we have no way to specify a namespace. The other ticket was about reversing a dotted path reference to a view function, specified as a string with a namespace.
comment:13 follow-up: 15 Changed 3 years ago by
Can someone present a usecase for using function references in reverse? IMO this should just get closed as wontfix and names should be used throughout instead.
comment:14 Changed 3 years ago by
This feels like using args and kwargs on reverses. Using either function reverses or namespaces is fine, just don't mix them.
Maybe we should enforce it a bit more like we do with reversing with args and kwargs?
comment:15 follow-up: 16 Changed 2 years ago by
Can someone present a usecase for using function references in reverse? IMO this should just get closed as wontfix and names should be used throughout instead.
For me the view-name is redundant. We try to avoid it. Reversing by function reference works fine (except this bug). Jumping through the code with an IDE is much more fun if you use function references.
It would be nice if this could be solved. For our team this means: don't use namespaced URLs :-(
comment:16 Changed 2 years ago by
For me the view-name is redundant. We try to avoid it. Reversing by function reference works fine (except this bug). Jumping through the code with an IDE is much more fun if you use function references.
If I remember correctly we've been over the redundancy point on the ML already, you can always just write your own url function which sets the path of the function as name, not really redundant imo. Also reversing by function reference is something which is ugly in templates etc imo… And redundant is relative anyways, "app:view_name" is imo way nicer than "app.views.view_name" -- especially if you have nested imports.
comment:17 Changed 2 years ago by
I'm in favor of a "won't fix" of this issue and promoting reversing by URL name as Florian suggested. We could deprecate passing callables to reverse, but this will probably just needlessly annoy people who are happy with it. Instead, we could just remove it from the docs like we did with the
@permalink decorator.
comment:18 Changed 2 years ago by
Changed 2 years ago by
comment:19 Changed 2 years ago by
Added a documentation patch to discourage reversing by callable view.
comment:20 Changed 2 years ago by
LGTM
test to expose bug | https://code.djangoproject.com/ticket/17914 | CC-MAIN-2017-47 | refinedweb | 1,147 | 55.64 |
So why am I here? According to this wonderful wiki, which I just discovered, 15 years of using Tcl apparently make me a Tcl'er. And after a lot of reading, I feel that I might have something to say and maybe even do. So here I am, in my first ever endeavor of this kind, to see how it goes.
PYK 2016-03-30: Greetings, Igal! How on Earth did you manage to use Tcl for 15 years before you discovered The Wiki?!Thanks Nathan! As for your question, I guess I just was too busy.
This wiki drove me to a lot of additional reading for the last two weeks, mainly on the wiki itself but also in many other places. Some of those were referenced from this wiki while others were through my own searching. The web is after all really is a "web". You start in some place of interest and very quickly find yourself in ten others and then a hundred and pretty quickly have a complete mess in your bookmarks, which is what I have now. Pretty soon you find yourself completely hooked and it starts to seem endless. There is always more, much more. It is a great pass-time, though, at least when you have nothing better to do. Time passes quickly and the brain is not bored, especially if you have a curious mind. There is a lot of interesting stuff, always something new to learn or to refresh or to just think about. Sometimes a lot to think about. There is also a lot of amusing stuff and sometimes even completely hilarious. Sometimes, quite often actually, total "BS" and utter junk which you need to learn to identify and filter out, sometimes while getting angry while other times while laughing. Thus getting another chance to exercise your brain, emotions and emotional control. Sometimes though, there are real gems to be found.I was somewhat surprised though, although I probably shouldn't have been, by the fact that I still care about software. After all, it has been my prime interest for 30 years, as well as my occupation for 25 of those. However, during 5 years since I retired, after a complete burnout, I didn't touch a computer at all, not counting the occasional web surf or email, and avoided my previous area of interest and occupation as if it was a plague. I was completely sure, or maybe I was just trying to convince myself, that I don't care about it anymore and never will again. Apparently, I was wrong. 
All that was needed to prove that was sufficient time for distancing and cooling down, being sufficiently bored, and finding myself with a spare laptop in my hands and nothing else to do.However, I do find myself now in a completely new situation. Before, I was deeply involved, both mentally and emotionally, as well as under pressure of a professional career. Now, on the other hand, since it's just for fun and only as long as it actually is fun, with a potential to grow into a new-old hobby or not, I feel completely relaxed and cool-headed. Also, the years of complete detachment made me very rusty. So in a way, I can probably also consider myself a kind of nub, although maybe a special kind, an old experienced one. A nub in areas which I never knew, partially nub in areas which I knew but forgot, lot's of experience which one can't really forget, and an overall detached cool-headed fresh perspective.So this is really a new thing for me and I hope it will be fun.
PYK 2016-04-09: Reading your response, I have this eerie feeling, as if reading a message from myself a few years in the future -- right down to the burnout and beyond.
More later...
References of interest
-
-
- Scripting: Higher Level Programming for the 21st Century
- ...
- Official Tcl/Tk Documentation
- Official Tcl Commands
- Official Tcl Syntax
- Dodekalogue - The 12 Rules
- ...
- Bash Reference Manual
- ...
- TIP #352: Tcl Style Guide
- TIP #247: Tcl/Tk Engineering Manual
- ...
Created pages
Thoughts
- A Fully Backward Compatible Extension Of Tcl Syntax
- Array as One Of The Most Powerful Features Of Tcl
- First-Class-Variable versus First-Class-Value
- The Curse of dict
- Data versus Arguments
- Reference versus Value
- String versus List
- exec, eval, subst, concat, join, split, etc...
- upvar, uplevel, global, variable, etc...
- expr
- namespace
- namespace ensemble
- Indexing of Strings and Lists
- Data Structures
- array, dict, list, keyedlist, etc...
- Structured Variables versus Structured Values
- Variable Based versus Value Based Access to Data
- Data Access Notation
- Little Languages
- Domain Specific Languages (DSL)
- Over-Flexibility of Syntax as a Limiting Factor
- Programming Paradigms
- Functionality and Behavior versus Notation
- API versus UI/CLI
- Options/Switches versus Arguments
- Scoping/Nesting/Dependency between Commands
- Inward versus Outward Semantics
- Command, Variable, Object, Value, etc...
- Language Design versus Usage
- Architecture and Design
- Levels of Abstraction
- Design of Infrastructure
- Design of Components
- Design of Core Capabilities and Engines
- Design of Command Line Interface (CLI)
- Descriptive and Behavioral versus Imperative Paradigms for UI(GUI and CLI)
- A Built-in Help System
- A Unified Data Model
- A Dynamic Interpreter for a Dynamic Language
- The Curse of The <newline>
- The Curse of The Paired List
- The Curse of "Everything is a ..."
- The Curse of Over Generalization and Over Unification
- The Curse of Over Specified, Redundant, Unnecessary, and Dangerous Features
- The Trade-of between Unification versus Separation of Concerns
- Reinterpretation of Concepts and Principles as The Engine for Evolution
- Quoting
- Coding Style
- The Future of Tcl versus Tcl for The Future
- ... | http://wiki.tcl.tk/42676 | CC-MAIN-2016-44 | refinedweb | 940 | 59.43 |
Offset: 0,3
Number of ways n competitors can rank in a competition, allowing for the possibility of ties.
Also number of asymmetric generalized weak orders on n points.
Also called the ordered Bell numbers.
A weak order is a relation that is transitive and complete.
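This characterization can be checked by brute force for small n (an illustrative sketch, not part of the entry; "complete" is read as: for every pair x, y, x R y or y R x, which forces reflexivity). Counting the transitive, complete relations on an n-element set reproduces 1, 1, 3, 13:

```python
from itertools import product

def count_weak_orders(n):
    """Count binary relations on {0..n-1} that are transitive and complete."""
    pairs = [(i, j) for i in range(n) for j in range(n)]
    count = 0
    for bits in product([False, True], repeat=len(pairs)):
        R = dict(zip(pairs, bits))
        # complete: for every x, y (including x == y), x R y or y R x
        if not all(R[(x, y)] or R[(y, x)]
                   for x in range(n) for y in range(n)):
            continue
        # transitive: x R y and y R z imply x R z
        if all((not (R[(x, y)] and R[(y, z)])) or R[(x, z)]
               for x in range(n) for y in range(n) for z in range(n)):
            count += 1
    return count

print([count_weak_orders(n) for n in range(4)])  # -> [1, 1, 3, 13]
```

The enumeration is exponential in n^2, so it is only feasible for n <= 3 or so, but it confirms the weak-order interpretation directly.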
Called Fubini numbers by Comtet: counts formulas, Feb 10 2003
Also number of labeled (1+2)-free posets. - Detlef Pauly, May 25 2003
Also the number of chains of subsets starting with the empty set and ending with a set of n distinct objects. - Andrew Niedermaier, Feb 20 2004
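The chains-of-subsets interpretation yields the standard recurrence a(n) = Sum_{k=1..n} binomial(n,k)*a(n-k) with a(0) = 1 (condition on the k elements added in the first step of the chain). A minimal sketch:

```python
from math import comb

def fubini(n):
    """Ordered Bell numbers via a(m) = Sum_{k=1..m} C(m,k)*a(m-k), a(0) = 1."""
    a = [1]
    for m in range(1, n + 1):
        a.append(sum(comb(m, k) * a[m - k] for k in range(1, m + 1)))
    return a[n]

print([fubini(n) for n in range(8)])  # -> [1, 1, 3, 13, 75, 541, 4683, 47293]
```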
From Michael Somos, Mar 04 2004: (Start)
Stirling transform of A007680(n) = [3,10,42,216,...] gives [3,13,75,541,...].
Stirling transform of a(n) = [1,3,13,75,...] is A083355(n) = [1,4,23,175,...].
Stirling transform of A000142(n) = [1,2,6,24,120,...] is a(n) = [1,3,13,75,...].
Stirling transform of A005359(n-1) = [1,0,2,0,24,0,...] is a(n-1) = [1,1,3,13,75,...].
Stirling transform of A005212(n-1) = [0,1,0,6,0,120,0,...] is a(n-1) = [0,1,3,13,75,...].
(End)
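The last transform above (a(n) as the Stirling transform of n!) is the familiar formula a(n) = Sum_{k=0..n} Stirling2(n,k)*k!: partition [n] into k blocks, then linearly order the blocks. A sketch:

```python
from math import factorial

def stirling2(n, k):
    """Stirling numbers of the second kind via the basic recurrence."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return stirling2(n - 1, k - 1) + k * stirling2(n - 1, k)

def fubini(n):
    # Stirling transform of n!: sum over k of S(n,k) * k!
    return sum(stirling2(n, k) * factorial(k) for k in range(n + 1))

print([fubini(n) for n in range(6)])  # -> [1, 1, 3, 13, 75, 541]
```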
Unreduced denominators in convergent to log(2) = lim_{n->infinity} n*a(n-1)/a(n).
a(n) is congruent to a(n+(p-1)p^(h-1)) (mod p^h) for n >= h (see Barsky).
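The congruence is easy to test numerically (a sketch; the terms a(n) are computed here via the standard binomial recurrence, which is an assumption of this check rather than part of the comment):

```python
from math import comb

def fubini_list(n_max):
    # a(m) = Sum_{k=1..m} C(m,k)*a(m-k), a(0) = 1
    a = [1]
    for m in range(1, n_max + 1):
        a.append(sum(comb(m, k) * a[m - k] for k in range(1, m + 1)))
    return a

a = fubini_list(30)
for p, h in [(2, 3), (3, 2), (5, 1)]:
    period, mod = (p - 1) * p**(h - 1), p**h
    # check a(n) == a(n + (p-1)*p^(h-1)) (mod p^h) for n >= h
    assert all(a[n] % mod == a[n + period] % mod for n in range(h, 20))
print("a(n) mod p^h is eventually periodic with period (p-1)*p^(h-1)")
```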
Stirling-Bernoulli transform of 1/(1-x^2). - Paul Barry
a(n) = Sum_{i=1..p(n)} (n!/(Product_{j=1..p(i)} p(i,j)!)) * (p(i)!/(Product_{j=1..d(i)} m(i,j)!)), where p(n) is the number of integer partitions of n, p(i) the number of parts of the i-th partition of n, p(i,j) its j-th part, d(i) its number of distinct parts, and m(i,j) the multiplicity of its j-th distinct part. - Thomas Wieder
With A = log(2), D = d/dx and f(x) = x/(exp(x)-1), we have a(n) = (n!/(2*A^(n+1))) * Sum_{k=0... - Oct 24 2007
First column of A154921. - Mats Granvik
Starting (1, 3, 13, 75, ...) = row sums of triangle A163204. - Gary W. Adamson, Jul 23 2009
Equals double inverse binomial transform of A007047: (1, 3, 11, 51, ...). - Gary W. Adamson, Aug 04 2009
If f(x) = Sum_{n>=0} c(n)*x^n converges for every x, then Sum_{n>=0} f(n*x)/2^(n+1) = Sum_{n>=0} c(n)*a(n)*x^n. Example: Sum_{n>=0} exp(n*x)/2^(n+1) = Sum_{n>=0} a(n)*x^n/n! = 1/(2-exp(x)) = E.g.f. - Miklos Kristof, Nov 02 2009
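Taking f(x) = exp(x) here (so c(n) = 1/n!) gives Sum_{k>=0} exp(k*x)/2^(k+1) = 1/(2 - exp(x)), and comparing coefficients of x^n/n! yields the classical series a(n) = Sum_{k>=0} k^n/2^(k+1). A quick numerical check of that consequence (sketch; the truncation point 400 is an arbitrary choice, ample for small n since the tail decays geometrically):

```python
from fractions import Fraction

def fubini_series(n, terms=400):
    # Partial sum of the rapidly converging series Sum_{k>=0} k^n / 2^(k+1),
    # done in exact rational arithmetic, then rounded to the nearest integer.
    s = sum(Fraction(k**n, 2**(k + 1)) for k in range(terms))
    return round(s)

print([fubini_series(n) for n in range(6)])  # -> [1, 1, 3, 13, 75, 541]
```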
Hankel transform is A091804. - Paul Barry, Mar 30 2010
It appears that the prime numbers greater than 3 in this sequence (13, 541, 47293, ...) are of the form 4n+1. - Paul Muljadi, Jan 28 2011
The Fi1 and Fi2 triangle sums of A028246 are given by the terms of this sequence. For the definitions of these triangle sums, see A180662. - Johannes W. Meijer, Apr 20 2011
The modified generating function A(x) = 1/(2-exp(x))-1 = x + 3*x^2/2! + 13*x^3/3! + ... satisfies the autonomous differential equation A' = 1 + 3*A + 2*A^2 with initial condition A(0) = 0. Applying [Bergeron et al., Theorem 1] leads to two combinatorial interpretations for this sequence: (A) a(n) gives the number of plane-increasing 0-1-2 trees on n vertices, where vertices of outdegree 1 come in 3 colors and vertices of outdegree 2 come in 2 colors. (B) a(n) gives the number of non-plane-increasing 0-1-2 trees on n vertices, where vertices of outdegree 1 come in 3 colors and vertices of outdegree 2 come in 4 colors. Examples are given below. - Peter Bala, Aug 31 2011
Starting with offset 1 = the eigensequence of A074909 (the beheaded Pascal's triangle), and row sums of triangle A208744. - Gary W. Adamson, Mar 05 2012
a(n) = number of words of length n on the alphabet of positive integers for which the letters appearing in the word form an initial segment of the positive integers. Example: a(2) = 3 counts 11, 12, 21. The map "record position of block containing i, 1<=i<=n" is a bijection from lists of sets on [n] to these words. (The lists of sets on [2] are 12, 1/2, 2/1.) - David Callan, Jun 24 2013
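This interpretation is easy to confirm by exhaustive enumeration for small n (a sketch; since the letters of such a word form an initial segment of the positive integers and the word has length n, the alphabet {1..n} suffices):

```python
from itertools import product

def count_initial_segment_words(n):
    """Words of length n over {1..n} whose letter set is {1, ..., max letter}."""
    count = 0
    for w in product(range(1, n + 1), repeat=n):
        if set(w) == set(range(1, max(w) + 1)):
            count += 1
    return count

print([count_initial_segment_words(n) for n in range(1, 6)])
# -> [1, 3, 13, 75, 541]
```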
This sequence was the subject of one of the earliest uses of the database. Don Knuth, who had a computer printout of the database prior to the publication of the 1973 Handbook, wrote to N. J. A. Sloane on May 18, 1970, saying: "I have just had my first real 'success' using your index of sequences, finding a sequence treated by Cayley that turns out to be identical to another (a priori quite different) sequence that came up in connection with computer sorting." A000670 is discussed in Exercise 3 of Section 5.3.1 of The Art of Computer Programming, Vol. 3, 1973. - N. J. A. Sloane, Aug 21 2014
Ramanujan gives a method of finding a continued fraction of the solution x of an equation 1 = x + a2*x^2 + ... and uses log(2) as the solution of 1 = x + x^2/2 + x^3/6 + ... as an example giving the sequence of simplified convergents as 0/1, 1/1, 2/3, 9/13, 52/75, 375/541, ... of which the sequence of denominators is this sequence, while A052882 is the numerators. - Michael Somos, Jun 19 2015
For n>=1, a(n) is the number of Dyck paths (A000108) with (i) n+1 peaks (UD's), (ii) no UUDD's, and (iii) at least one valley vertex at every nonnegative height less than the height of the path. For example, a(2)=3 counts UDUDUD (of height 1 with 2 valley vertices at height 0), UDUUDUDD, UUDUDDUD. These paths correspond, under the "glove" or "accordion" bijection, to the ordered trees counted by Cayley in the 1859 reference, after a harmless pruning of the "long branches to a leaf" in Cayley's trees. (Cayley left the reader to infer the trees he was talking about from examples for small n and perhaps from his proof.) - David Callan, Jun 23 2015
From David L. Harden, Apr 09 2017: (Start)
Fix a set X and define two distance functions d,D on X to be metrically equivalent when d(x_1,y_1) <= d(x_2,y_2) iff D(x_1,y_1) <= D(x_2,y_2) for all x_1, y_1, x_2, y_2 in X.
Now suppose that we fix a function f from unordered pairs of distinct elements of X to {1,...,n}. Then choose positive real numbers d_1 <= ... <= d_n such that d(x,y) = d_{f(x,y)}; the set of all possible choices of the d_i's makes this an n-parameter family of distance functions on X. (The simplest example of such a family occurs when n is a triangular number: when that happens, write n = binomial(k,2). Then the set of all distance functions on X, when |X| = k, is such a family.) The number of such distance functions, up to metric equivalence, is a(n).
It is easy to see that an equivalence class of distance functions gives rise to a well-defined weak order on {d_1, ..., d_n}. To see that any weak order is realizable, choose distances from the set of integers {n-1, ..., 2n-2} so that the triangle inequality is automatically satisfied. (End)
a(n) is the number of rooted labeled forests on n nodes that avoid the patterns 213, 312, and 321. - Kassie Archer, Aug 30 2018
From A.H.M. Smeets, Nov 17 2018: (Start)
Also the number of semantically different assignments to n variables (x_1, ..., x_n), including simultaneous assignments. From the example given by Joerg Arndt (Mar 18 2014), this is easily seen by replacing
"{i}" by "x_i := expression_i(x_1, .., x_n)",
"{i, j}" by "x_i, x_j := expression_i(x_1, .., x_n), expression_j(x_1, .., x_n)", i.e., simultaneous assignment to two different variables (i <> j),
similar for simultaneous assignments to more variables, and
"<" by ";", i.e., the sequential constructor. These examples are directly related to "Number of ways n competitors can rank in a competition, allowing for the possibility of ties." in the first comment.
From this also follows the number of different mean definitions obtained by iterating n different mean functions on n initial values. Examples:
the AGM(x1,x2) = AGM(x2,x1) is represented by {arithmetic mean, geometric mean}, i.e., simultaneous assignment in any iteration step;
Archimedes's scheme (for Pi) is represented by {geometric mean} < {harmonic mean}, i.e., sequential assignment in any iteration step;
the geometric mean of two values can also be obtained by {arithmetic mean, harmonic mean};
the AGHM (as defined in A319215) is represented by {arithmetic mean, geometric mean, harmonic mean}, i.e., simultaneous assignment, but there are 12 other semantically different ways to assign the values in an AGHM scheme.
By applying power means (also called Holder means) this can be extended to any value of n. (End)
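In a language with tuple assignment, the simultaneous/sequential distinction described above is direct to express. A sketch (the starting hexagon perimeters and the step order in the Archimedes-style scheme are illustrative assumptions, not taken from the comment):

```python
from math import sqrt, pi

def agm(x, y):
    # {arithmetic mean, geometric mean}: one simultaneous assignment per step;
    # both means on the right-hand side see the old values of a and g.
    a, g = x, y
    while abs(a - g) > 1e-12:
        a, g = (a + g) / 2, sqrt(a * g)
    return a

def archimedes_two_pi(steps=30):
    # Sequential assignment per step: the geometric mean already sees the
    # updated harmonic-mean value. Starting values are the circumscribed and
    # inscribed hexagon perimeters of the unit circle; p converges to 2*pi.
    p, q = 4 * sqrt(3), 6.0
    for _ in range(steps):
        p = 2 * p * q / (p + q)   # harmonic mean, assigned first
        q = sqrt(p * q)           # geometric mean uses the new p
    return p

print(agm(1.0, 2.0))            # arithmetic-geometric mean of 1 and 2
print(archimedes_two_pi() / 2)  # converges to pi
```

Swapping the simultaneous assignment in agm for two sequential statements would change which mean scheme is computed, which is exactly the semantic distinction the comment counts.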
Total number of faces of all dimensions in the permutohedron of order n. For example, the permutohedron of order 3 (a hexagon) has 6 vertices + 6 edges + 1 2-face = 13 faces, and the permutohedron of order 4 (a truncated octahedron) has 24 vertices + 36 edges + 14 2-faces + 1 3-face = 75 faces. A001003 is the analogous sequence for the associahedron. - Noam Zeilberger, Dec 08 2019
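That face count follows because the nonempty faces of the permutohedron of order n correspond to ordered set partitions of {1,...,n}, with k-block partitions giving the faces of dimension n-k. A sketch tallying them (hypothetical helper names):

```python
from math import factorial

def stirling2(n, k):
    """Stirling numbers of the second kind via the basic recurrence."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return stirling2(n - 1, k - 1) + k * stirling2(n - 1, k)

def permutohedron_face_counts(n):
    # Faces of dimension n-k <-> ordered set partitions of [n] with k blocks,
    # of which there are S(n,k) * k!. Listed from vertices (k = n) upward.
    return [stirling2(n, k) * factorial(k) for k in range(n, 0, -1)]

print(permutohedron_face_counts(3))       # -> [6, 6, 1]  (vertices, edges, 2-face)
print(sum(permutohedron_face_counts(4)))  # -> 75
```

The totals 13 and 75 match the hexagon and truncated-octahedron counts given in the comment.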
Mohammad K. Azarian, Geometric Series, Problem 329, Mathematics and Computer Education, Vol. 30, No. 1, Winter 1996, p. 101. Solution published in Vol. 31, No. 2, Spring 1997, pp. 196-197.
N. L. Biggs et al., Graph Theory 1736-1936, Oxford, 1976, p. 44 (P(x)).
Miklos Bona, editor, Handbook of Enumerative Combinatorics, CRC Press, 2015, page 183 (see R_n).
Kenneth S. Brown, Buildings, Springer-Verlag, 1988.
A. Cayley, On the theory of the analytical forms called trees II, Phil. Mag. 18 (1859), 374-378 = Math. Papers Vol. 4, pp. 112-115.
Pietro Codara, Ottavio M. D'Antona and Vincenzo Marra, Best Approximation of Ruspini Partitions in Goedel Logic, in Symbolic and Quantitative Approaches to Reasoning with Uncertainty, Lecture Notes in Computer Science, Volume 4724/2007, Springer-Verlag.
L. Comtet, Advanced Combinatorics, Reidel, 1974, p. 228.
N. G. de Bruijn, Enumerative combinatorial structures concerning structures, Nieuw Archief. voor Wisk., 11 (1963), 142-161; see p. 150.
J.-M. De Koninck, Ces nombres qui nous fascinent, Entry 13, p. 4, Ellipses, Paris 2008.
P. J. Freyd, On the size of Heyting semi-lattices, preprint, 2002.
Silvia Heubach and Toufik Mansour, Combinatorics of Compositions and Words, CRC Press, 2010.
D. E. Knuth, The Art of Computer Programming, Addison-Wesley, Reading, MA, Vol. 3, 1973, Section 5.3.1, Problem 3.
M. Muresan, Generalized Fubini numbers. Stud. Cerc. Mat. 37 (1985), no. 1, pp. 70-76.
Nkonkobe, S., and V. Murali. "A study of a family of generating functions of Nelsen-Schmidt type and some identities on restricted barred preferential arrangements." Discrete Mathematics, Vol. 340 (2017), 1122-1128.
P. Peart, Hankel determinants via Stieltjes matrices. Proceedings of the Thirty-first Southeastern International Conference on Combinatorics, Graph Theory and Computing (Boca Raton, FL, 2000). Congr. Numer. 144 (2000), 153-159.
S. Ramanujan, Notebooks, Tata Institute of Fundamental Research, Bombay 1957 Vol. 1, see page 19., Wadsworth, Vol. 1, 1986; see Example 3.15.10, p. 146.
J. van der Elsen, Black and White Transformations, Shaker Publishing, Maastricht, 2005, p. 18.
C. G. Wagner, Enumeration of generalized weak orders. Arch. Math. (Basel) 39 (1982), no. 2, 147-152.
H. S. Wilf, Generatingfunctionology, Academic Press, NY, 1990, p. 147.
Ai-Min Xu and Zhong-Di Cen, Some identities involving exponential functions and Stirling numbers and applications, J. Comput. Appl. Math. 260 (2014), 201-207.
Alois P. Heinz, Table of n, a(n) for n = 0..424 (first 101 terms from N. J. A. Sloane)
Connor Ahlbach, Jeremy Usatine and Nicholas Pippenger, Barred Preferential Arrangements, Electron. J. Combin., Volume 20, Issue 2 (2013), #P55.
J.-C. Aval, V. Féray, J.-C. Novelli, J.-Y. Thibon, Quasi-symmetric functions as polynomial functions on Young diagrams, arXiv preprint arXiv:1312.2727 [math.CO], 2013.
Jean-Christophe Aval, Adrien Boussicault, and Philippe Nadeau, Tree-like Tableaux, Electronic Journal of Combinatorics, 20(4), 2013, #P34.
Ralph W. Bailey, The number of weak orderings of a finite set, Social Choice and Welfare, Vol. 15 (1998), pp. 559-562.
P. Barry, Exponential Riordan Arrays and Permutation Enumeration, J. Int. Seq. 13 (2010) # 10.9.1, Example 12.
Paul Barry, Eulerian polynomials as moments, via exponential Riordan arrays, arXiv preprint arXiv:1105.3043 [math.CO], 2011, J. Int. Seq. 14 (2011) # 11.9.5.
Paul Barry, On a transformation of Riordan moment sequences, arXiv:1802.03443 [math.CO], 2018.
Paul Barry, Generalized Eulerian Triangles and Some Special Production Matrices, arXiv:1803.10297 [math.CO], 2018.
D. Barsky, Analyse p-adique et suites classiques de nombres, Sem. Loth. Comb. B05b (1981) 1-21.
J. P. Barthelemy, An asymptotic equivalent for the number of total preorders on a finite set, Discrete Mathematics, 29(3):311-313, 1980.
Beáta Bényi, José L. Ramírez, Some Applications of S-restricted Set Partitions, arXiv:1804.03949 [math.CO], 2018.
F. Bergeron, Ph. Flajolet and B. Salvy, Varieties of Increasing Trees, Lecture Notes in Computer Science vol. 581, ed. J.-C. Raoult, Springer 1992, pp. 24-48.
Nantel Bergeron, Laura Colmenarejo, Shu Xiao Li, John Machacek, Robin Sulzgruber, Mike Zabrocki, Adriano Garsia, Marino Romero, Don Qui, Nolan Wallach, Super Harmonics and a representation theoretic model for the Delta conjecture, A summary of the open problem sessions of Jan 24, 2019, Representation Theory Connections to (q,t)-Combinatorics (19w5131), Banff, BC, Canada.
Sara C. Billey, M. Konvalinka, T. K. Petersen, W. Slofstra, B. E. Tenner, Parabolic double cosets in Coxeter groups, Discrete Mathematics and Theoretical Computer Science, Submitted, 2016.
P. Blasiak, K. A. Penson and A. I. Solomon, Dobinski-type relations and the log-normal distribution, arXiv:quant-ph/0303030, 2003.
Olivier Bodini, Antoine Genitrini, Mehdi Naima, Ranked Schröder Trees, arXiv:1808.08376 [cs.DS], 2018.
Olivier Bodini, Antoine Genitrini, Cécile Mailler, Mehdi Naima, Strict monotonic trees arising from evolutionary processes: combinatorial and probabilistic study, hal-02865198 [math.CO] / [math.PR] / [cs.DS] / [cs.DM], 2020.
Florian Bridoux, Caroline Gaze-Maillot, Kévin Perrot, Sylvain Sené, Complexity of limit-cycle problems in Boolean networks, arXiv:2001.07391 [cs.DM], 2020.
P. J. Cameron, Sequences realized by oligomorphic permutation groups, J. Integ. Seqs. Vol. 3 (2000), #00.1.5.
J. L. Chandon, J. LeMaire and J. Pouget, Dénombrement des quasi-ordres sur un ensemble fini, Math. Sci. Humaines, No. 62 (1978), 61-80.
Grégory Chatel, Vincent Pilaud, Viviane Pons, The weak order on integer posets, arXiv:1701.07995 [math.CO], 2017.
Chao-Ping Chen, Sharp inequalities and asymptotic series related to Somos' quadratic recurrence constant, Journal of Number Theory, 2016, Volume 172, March 2017, Pages 145-159.
W. Y. C. Chen, A. Y. L. Dai and R. D. P. Zhou, Ordered Partitions Avoiding a Permutation of Length 3, arXiv preprint arXiv:1304.3187 [math.CO], 2013.
Ali Chouria, Vlad-Florin Drǎgoi, Jean-Gabriel Luque, On recursively defined combinatorial classes and labelled trees, arXiv:2004.04203 [math.CO], 2020.
Mircea I. Cirnu, Determinantal formulas for sum of generalized arithmetic-geometric series, Boletin de la Asociacion Matematica Venezolana, Vol. XVIII, No. 1 (2011), p. 13.
A. Claesson and T. K. Petersen, Conway's napkin problem, Amer. Math. Monthly, 114 (No. 3, 2007), 217-231.
Tyler Clark and Tom Richmond, The Number of Convex Topologies on a Finite Totally Ordered Set, 2013, to appear in Involve;
Pierluigi Contucci, Emanuele Panizzi, Federico Ricci-Tersenghi, and Alina Sîrbu, A new dimension for democracy: egalitarianism in the rank aggregation problem, arXiv:1406.7642 [physics.soc-ph], 2014..
D. Dominici, Nested derivatives: A simple method for computing series expansions of inverse functions. arXiv:math/0501052v2 [math.CA], 2005.
F. Fauvet, L. Foissy, D. Manchon, The Hopf algebra of finite topologies and mould composition, arXiv preprint arXiv:1503.03820, 2015
V. Féray, Cyclic inclusion-exclusion, arXiv preprint arXiv:1410.1772 [math.CO], 2014.
P. Flajolet, S. Gerhold and B. Salvy, On the non-holonomic character of logarithms, powers and the n-th prime function, arXiv:math/0501379 [math.CO], 2005.
P. Flajolet and R. Sedgewick, Analytic Combinatorics, 2009; see page 109.
A. S. Fraenkel and M. Mor, Combinatorial compression and partitioning of large dictionaries, Computer J., 26 (1983), 336-343. See Tables 4 and 5.
Harvey M. Friedman, Concrete Mathematical Incompleteness: Basic Emulation Theory, Hilary Putnam on Logic and Mathematics, Outstanding Contributions to Logic, Vol. 9, Springer, Cham, 179-234.
F. Foucaud, R. Klasing, and P.J. Slater, Centroidal bases in graphs, arXiv preprint arXiv:1406.7490 [math.CO], 2014.
W. Gatterbauer and D. Suciu, Approximate Lifted Inference with Probabilistic Databases, arXiv preprint arXiv:1412.1069 [cs.DB], 2014.
Wolfgang Gatterbauer, Dan Suciu, Dissociation and propagation for approximate lifted inference with standard relational database management systems, The VLDB Journal, February 2017, Volume 26, Issue 1, pp 5-30; DOI 10.1007/s00778-016-0434-5.
Joël Gay, Vincent Pilaud, The weak order on Weyl posets, arXiv:1804.06572 [math.CO], 2018.
C. Geist, U. Endriss, Automated search for impossibility theorems in social choice theory: ranking sets of objects, arXiv:1401.3866 [cs.AI], 2014; J. Artif. Intell. Res. (JAIR) 40 (2011) 143-174.
Olivier Gérard, Re: Horse Race Puzzle.
S. Getu et al., How to guess a generating function, SIAM J. Discrete Math., 5 (1992), 497-499.
Robert Gill, The number of elements in a generalized partition semilattice, Discrete mathematics 186.1-3 (1998): 125-134. See Example 1.
S. Giraudo, Combinatorial operads from monoids, arXiv preprint arXiv:1306.6938 [math.CO], 2013.
M. Goebel, On the number of special permutation-invariant orbits and terms, in Applicable Algebra in Engin., Comm. and Comp. (AAECC 8), Volume 8, Number 6, 1997, pp. 505-509; Lect. Notes Comp. Sci.
W. S. Gray and M. Thitsa, System Interconnections and Combinatorial Integer Sequences, in: System Theory (SSST), 2013 45th Southeastern Symposium on, Date of Conference: 11-11 March 2013.
M. Griffiths, I. Mezo, A generalization of Stirling Numbers of the Second Kind via a special multiset, JIS 13 (2010) #10.2.5.
O. A. Gross, Preferential arrangements, Amer. Math. Monthly, 69 (1962), 4-8.
Gottfried Helms, Discussion of a problem concerning summing of like powers
M. E. Hoffman, Updown categories: Generating functions and universal covers, arXiv preprint arXiv:1207.1705 [math.CO], 2012.
INRIA Algorithms Project, Encyclopedia of Combinatorial Structures 41
Marsden Jacques, Dennis Wong, Greedy Universal Cycle Constructions for Weak Orders, Conference on Algorithms and Discrete Applied Mathematics (CALDAM 2020): Algorithms and Discrete Applied Mathematics, 363-370.
Svante Janson, Euler-Frobenius numbers and rounding, arXiv preprint arXiv:1305.3512 [math.PR], 2013.
M. Jarocinski and B. Mackowiak, Online Appendix to "Granger-Causal-Priority and Choice of Variables in Vector Autoregressions", 2013.
Vít Jelínek, Ida Kantor, Jan Kynčl, Martin Tancer, On the growth of the Möbius function of permutations, arXiv:1809.05774 [math.CO], 2018.
N. Khare, R. Lorentz, and C. Yan, Bivariate Goncarov Polynomials and Integer Sequences, Science China Mathematics, January 2014 Vol. 57 No. 1; doi: 10.1007/s11425-000-0000-0.
Dongseok Kim, Young Soo Kwon, and Jaeun Lee, Enumerations of finite topologies associated with a finite graph, arXiv preprint arXiv:1206.0550 [math.CO], 2012. See Th. 4.3. - From N. J. A. Sloane, Nov 09 2012
D. E. Knuth, J. Riordan, and N. J. A. Sloane, Correspondence, 1970.
M. J. Kochanski, How many orders are there?.
A. S. Koksal, Y. Pu, S. Srivastava, R. Bodik, J. Fisher and N. Piterman, Synthesis of Biological Models from Mutation Experiments, 2012.
Takao Komatsu, José L. Ramírez, Some determinants involving incomplete Fubini numbers, arXiv:1802.06188 [math.NT], 2018.
Germain Kreweras, Une dualité élémentaire souvent utile dans les problèmes combinatoires, Mathématiques et Sciences Humaines 3 (1963): 31-41.
A. Kumjian, D. Pask, A. Sims, and M. F. Whittaker, Topological spaces associated to higher-rank graphs, arXiv preprint arXiv:1310.6100 [math.OA], 2013.
Victor Meally, Comparison of several sequences given in Motzkin's paper "Sorting numbers for cylinders...", letter to N. J. A. Sloane, N. D.
E. Mendelson, Races with Ties, Math. Mag. 55 (1982), 170-175.
I. Mezo, Periodicity of the last digits of some combinatorial sequences, arXiv preprint arXiv:1308.1637 [math.CO], 2013 and J. Int. Seq. 17 (2014) #14.1.1.
I. Mezo and A. Baricz, On the generalization of the Lambert W function with applications in theoretical physics, arXiv preprint arXiv:1408.3999 [math.CA], 2014.
M. Mor and A. S. Fraenkel, Cayley permutations, Discrete Math., 48 (1984), 101-112.
T. S. Motzkin, Sorting numbers for cylinders and other classification numbers, in Combinatorics, Proc. Symp. Pure Math. 19, AMS, 1971, pp. 167-176. [Annotated, scanned copy]
Todd Mullen, On Variants of Diffusion, Dalhousie University (Halifax, NS Canada, 2020).
Norihiro Nakashima, Shuhei Tsujie, Enumeration of Flats of the Extended Catalan and Shi Arrangements with Species, arXiv:1904.09748 [math.CO], 2019.
R. B. Nelsen and H. Schmidt, Jr., Chains in power sets, Math. Mag., 64 (1991), 23-31.
S. Nkonkobe, V. Murali, On Some Identities of Barred Preferential Arrangements, arXiv preprint arXiv:1503.06173 [math.CO], 2015.
Mathilde Noual and Sylvain Sene, Towards a theory of modelling with Boolean automata networks-I. Theorisation and observations, arXiv preprint arXiv:1111.2077 [cs.DM], 2011.
J.-C. Novelli and J.-Y. Thibon, Polynomial realizations of some trialgebras, Proc. Formal Power Series and Algebraic Combinatorics 2006 (San-Diego, 2006); arXiv:math/0605061 [math.CO], 2006.
J.-C. Novelli and J.-Y. Thibon, Hopf Algebras of m-permutations,(m+1)-ary trees, and m-parking functions, arXiv preprint arXiv:1403.5962 [math.CO], 2014.
J.-C. Novelli, J.-Y. Thibon, L. K. Williams, Combinatorial Hopf algebras, noncommutative Hall-Littlewood functions, and permutation tableaux, Adv. Math. 224 (4) (2010) 1311-1348
Arthur Nunge, Eulerian polynomials on segmented permutations, arXiv:1805.01797 [math.CO], 2018.
OEIS Wiki, Sorting numbers
Karolina Okrasa, Paweł Rzążewski, Intersecting edge distinguishing colorings of hypergraphs, arXiv:1804.10470 [cs.DM], 2018.
K. A. Penson, P. Blasiak, G. Duchamp, A. Horzela and A. I. Solomon, Hierarchical Dobinski-type relations via substitution and the moment problem, arXiv:quant-ph/0312202, 2003.
Tilman Piesk, Tree of weak orderings in concertina cube. Illustration of a(3) = 13, used with permission. See also the original of this figure on Wikimedia Commons.
Vincent Pilaud, V. Pons, Permutrees, arXiv preprint arXiv:1606.09643 [math.CO], 2016-2017.
C. J. Pita Ruiz V., Some Number Arrays Related to Pascal and Lucas Triangles, J. Int. Seq. 16 (2013) #13.5.7.
Robert A. Proctor, Let's Expand Rota's Twelvefold Way For Counting Partitions!, arXiv:math/0606404 [math.CO], Jan 05, 2007.
Helmut Prodinger, Ordered Fibonacci partitions, Canad. Math. Bull. 26 (1983), no. 3, 312--316. MR0703402 (84m:05012). [See F_n on page 312.
Y. Puri and T. Ward, Arithmetic and growth of periodic orbits, J. Integer Seqs., Vol. 4 (2001), #01.2.1.
S. Ramanujan, Notebook entry
Joe Sawada, Dennis Wong, An Efficient Universal Cycle Construction for Weak Orders, University of Guelph, School of Computer Science (2019), presented at the 30th Coast Combinatorics Conference at University of Hawaii, Manoa.
N. J. A. Sloane and Thomas Wieder, The Number of Hierarchical Orderings, Order 21 (2004), 83-89.
D. J. Velleman and G. S. Call, Permutations and combination locks, Math. Mag., 68 (1995), 243-253.
C. G. Wagner, Enumeration of generalized weak orders, Preprint, 1980. [Annotated scanned copy]
C. G. Wagner and N. J. A. Sloane, Correspondence, 1980
F. V. Weinstein, Notes on Fibonacci partitions, arXiv:math/0307150 [math.NT], 2003-2015 (see page 9).
Eric Weisstein's World of Mathematics, Combination Lock
Wikipedia, Ordered Bell number
H. S. Wilf, Generatingfunctionology, 2nd edn., Academic Press, NY, 1994, p. 175, Eq. 5.2.6, 5.2.7.
Andrew T. Wilson, Torus link homology and the nabla operator, arXiv preprint arXiv:1606.00764 [cond-mat.str-el], 2016.
Yan X Zhang, Four Variations on Graded Posets, arXiv preprint arXiv:1508.00318 [math.CO], 2015.
Yi Zhu, Evgueni T. Filipov, An efficient numerical approach for simulating contact in origami assemblages, Proc. R. Soc. A (2019) Vol. 475, 20190366.
Index entries for "core" sequences
Index entries for related partition-counting sequences
a(n) = Sum_{k=0..n} k! * StirlingS2(n,k) (whereas the Bell numbers A000110(n) = Sum_{k=0..n} StirlingS2(n,k)).
E.g.f.: 1/(2-exp(x)).
a(n) = Sum_{k=1..n} binomial(n, k)*a(n-k), a(0) = 1.
The e.g.f. y(x) satisfies y' = 2*y^2 - y.
a(n) = A052856(n) - 1, if n>0.
a(n) = A052882(n)/n, if n>0.
a(n) = A076726(n)/2.
a(n) is asymptotic to (1/2)*n!*log_2(e)^(n+1), where log_2(e) = 1.442695... [Barthelemy80, Wilf90].
For n >= 1, a(n) = (n!/2) * Sum_{k=-infinity..infinity} of (log(2) + 2 Pi i k)^(-n-1). - Dean Hickerson
a(n) = ((x*d/dx)^n)(1/(2-x)) evaluated at x=1. - Karol A. Penson, Sep 24 2001
For n>=1, a(n) = Sum_{k>=1} (k-1)^n/2^k = A000629(n)/2. - Benoit Cloitre, Sep 08 2002
Value of the n-th Eulerian polynomial (cf. A008292) at x=2. - Vladeta Jovovic, Sep 26 2003
First Eulerian transform of the powers of 2 [A000079]. See A000142 for definition of FET. - Ross La Haye, Feb 14 2005
a(n) = Sum_{k=0..n} (-1)^k*k!*Stirling2(n+1, k+1)*(1+(-1)^k)/2. - Paul Barry, Apr 20 2005
a(n) + a(n+1) = 2*A005649(n). - Philippe Deléham, May 16 2005 - Thomas Wieder, May 18 2005
Equals inverse binomial transform of A000629. - Gary W. Adamson, May 30 2005
a(n) = Sum_{k=0..n} k!*( Stirling2(n+2, k+2) - Stirling2(n+1, k+2) ). - Micha Hofri (hofri(AT)wpi.edu), Jul 01 2006
Recurrence: 2*a, Oct 04 2007
a(n) = Sum_{k=0..n} A131689(n,k). - Philippe Deléham, Nov 03 2008
From Peter Bala, generalized).
G.f.: 1/(1-x/(1-2*x/(1-2*x/(1-4*x/(1-3*x/(1-6*x/(1-4*x/(1-8*x/(1-5*x/(1-10*x/(1-6*x/(1-... (continued fraction); coefficients of continued fraction are given by floor((n+2)/2)*(3-(-1)^n)/2 (A029578(n+2)). - Paul Barry, Mar 30 2010
G.f.: 1/(1-x-2*x^2/(1-4*x-8*x^2/(1-7*x-18*x^2/(1-10*x-32*x^2/(1../(1-(3*n+1)*x-2*(n+1)^2*x^2/(1-... (continued fraction). - Paul Barry, Jun 17 2010
G.f.: A(x) = Sum_{n>=0} n!*x^n / Product_{k=1..n} (1-k*x). - Paul D. Hanna, Jul 20 2011
a(n) = A074206(q_1*q_2*...*q_n), where {q_i} are distinct primes. - Vladimir Shevelev, Aug 05 2011
The adjusted e.g.f. A(x) := 1/(2-exp(x))-1, has inverse function A(x)^-1 = Integral_{t=0..x} 1/((1+t)*(1+2*t)). Applying [Dominici, Theorem 4.1] to invert the integral yields a formula for a(n): Let f(x) = (1+x)*(1+2*x). Let D be the operator f(x)*d/dx. Then a(n) = D^(n-1)(f(x)) evaluated at x = 0. Compare with A050351. - Peter Bala, Aug 31 2011
G.f.: 1+x/(1-x+2*x*(x-1)/(1+3*x*(2*x-1)/(1+4*x*(3*x-1)/(1+5*x*(4*x-1)/(1+... or 1+x/(U(0)-x), U(k) = 1+(k+2)*(k*x+x-1)/U(k+1); (continued fraction). - Sergei N. Gladkovskii, Oct 30 2011
a(n) = D^n*(1/(1-x)) evaluated at x = 0, where D is the operator (1+x)*d/dx. Cf. A052801. - Peter Bala, Nov 25 2011
E.g.f.: 1 + x/(G(0)-2*x) where G(k) = x + k + 1 - x*(k+1)/G(k+1); (continued fraction, Euler's 1st kind, 1-step). - Sergei N. Gladkovskii, Jul 11 2012
E.g.f. (2 - 2*x)*(1 - 2*x^3/(8*x^2 - 4*x + (x^2 - 4*x + 2)*G(0)))/(x^2 - 4*x + 2) where G(k) = k^2 + k*(x+4) + 2*x + 3 - x*(k+1)*(k+3)^2 /G(k+1) ; (continued fraction, Euler's 1st kind, 1-step). - Sergei N. Gladkovskii, Oct 01 2012
G.f.: 1 + x/G(0) where G(k) = 1 - 3*x*(k+1) - 2*x^2*(k+1)*(k+2)/G(k+1); (continued fraction). - Sergei N. Gladkovskii, Jan 11 2013.
G.f.: 1/G(0) where G(k) = 1 - x*(k+1)/( 1 - 2*x*(k+1)/G(k+1) ); (continued fraction). - Sergei N. Gladkovskii, Mar 23 2013
a(n) is always odd. For odd prime p and n >= 1, a((p-1)*n) = 0 (mod p). - Peter Bala, Sep 18 2013
G.f.: 1 + x/Q(0), where Q(k) = 1 - 3*x*(2*k+1) - 2*x^2*(2*k+1)*(2*k+2)/( 1 - 3*x*(2*k+2) - 2*x^2*(2*k+2)*(2*k+3)/Q(k+1) ); (continued fraction). - Sergei N. Gladkovskii, Sep 23 2013
G.f.: T(0)/(1-x), where T(k) = 1 - 2*x^2*(k+1)^2/( 2*x^2*(k+1)^2 - (1-x-3*x*k)*(1-4*x-3*x*k)/T(k+1) ); (continued fraction). - Sergei N. Gladkovskii, Oct 14 2013
a(n) = log(2)* Integral_{x>=0} floor(x)^n * 2^(-x) dx. - Peter Bala, Feb 06 2015
For n > 0, a(n) = Re(polygamma(n, i*log(2)/(2*Pi))/(2*Pi*i)^(n+1)) - n!/(2*log(2)^(n+1)). - Vladimir Reshetnikov, Oct 15 2015
a(n) = Sum_{k=1..n}(k*b2(k-1)*(k)!*stirling2(n, k)), n>0, a(0)=1, where b2(n) is the n-th Bernoulli number of the second kind. - Vladimir Kruchinin, Nov 21 2016
a(n) = Sum_{k=0..2^(n-1)-1} A284005(k), n>0, a(0)=1. - Mikhail Kurkov, Jul 08 2018
a(n) = A074206(k) for squarefree k with n prime factors. In particular a(n) = A074206(A002110(n)). - Amiram Eldar, May 13 2019
For n >0, a(n) = -(-1)^n / 2 * PHI(2, -n, 0), where PHI(z, s, a) is the Lerch Zeta function. - Federico Provvedi, Sep 05 2020
Let the points be labeled 1,2,3,...
a(2) = 3: 1<2, 2<1, 1=2.
a(3) = 13 from the 13 arrangements
1<2<3,
1<3<2,
2<1<3,
2<3<1,
3<1<2,
3<2<1,(3) = 13. The 13 plane increasing 0-1-2 trees on 3 vertices, where vertices of outdegree 1 come in 3 colors and vertices of outdegree 2 come in 2 colors, are:
........................................................
........1 (x3 colors).....1(x2 colors)....1(x2 colors)..
........|................/.\............./.\............
........2 (x3 colors)...2...3...........3...2...........
........|...............................................
........3...............................................
......====..............====............====............
.Totals 9......+..........2....+..........2....=..13....
a(4) = 75. The 75 non-plane increasing 0-1-2 trees on 4 vertices, where vertices of outdegree 1 come in 3 colors and vertices of outdegree 2 come in 4 colors, are:
...............................................................
.....1 (x3).....1(x4).......1(x4).....1(x4)........1(x3).......
.....|........./.\........./.\......./.\...........|...........
.....2 (x3)...2...3.(x3)..3...2(x3).4...2(x3)......2(x4).......
.....|.............\...........\.........\......../.\..........
.....3.(x3).........4...........4.........3......3...4.........
.....|.........................................................
.....4.........................................................
....====......=====........====......====.........====.........
Tots 27....+....12......+...12....+...12.......+...12...=...75.
From Joerg Arndt, Mar 18 2014: (Start)
The a(3) = 13 strings on the alphabet {1,2,3} containing all letters up to the maximal value appearing and the corresponding ordered set partitions are:
01: [ 1 1 1 ] { 1, 2, 3 }
02: [ 1 1 2 ] { 1, 2 } < { 3 }
03: [ 1 2 1 ] { 1, 3 } < { 2 }
04: [ 2 1 1 ] { 2, 3 } < { 1 }
05: [ 1 2 2 ] { 1 } < { 2, 3 }
06: [ 2 1 2 ] { 2 } < { 1, 3 }
07: [ 2 2 1 ] { 3 } < { 1, 2 }
08: [ 1 2 3 ] { 1 } < { 2 } < { 3 }
09: [ 1 3 2 ] { 1 } < { 3 } < { 2 }
00: [ 2 1 3 ] { 2 } < { 1 } < { 3 }
11: [ 2 3 1 ] { 3 } < { 1 } < { 2 }
12: [ 3 1 2 ] { 2 } < { 3 } < { 1 }
13: [ 3 2 1 ] { 3 } < { 2 } < { 1 }
(End)->add(add((-1)^(k-i)*binomial(k, i)*i^n, i=0..n), k=0..n): seq(a(n), n=0..18); # Zerinvary Lajos, Jun 03 2007
a := n -> add(combinat:-eulerian1(n, k)*2^k, k=0..n): # Peter Luschny, Jan 02 2015
a := n -> (polylog(-n, 1/2)+`if`(n=0, 1, 0))/2: seq(round(evalf(a(n), 32)), n=0..20); # Peter Luschny, Nov 03 2015
Table[(PolyLog[-z, 1/2] + KroneckerDelta[z])/2, {z, 0, 20}] (* Wouter Meeussen *)
a[0] = 1; a[n_]:= a[n]= Sum[Binomial[n, k]*a[n-k], {k, 1, n}]; Table[a[n], {n, 0, 30}] (* Roger L. Bagula and Gary W. Adamson, Sep 13 2008 *)
t = 30; Range[0, t]! CoefficientList[Series[1/(2 - Exp[x]), {x, 0, t}], x] (* Vincenzo Librandi, Mar 16 2014 *)
a[ n_] := If[ n < 0, 0, n! SeriesCoefficient[ 1 / (2 - Exp@x), {x, 0, n}]]; (* Michael Somos, Jun 19 2015 *)
Table[Sum[k^n/2^(k+1), {k, 0, Infinity}], {n, 0, 20}] (* Vaclav Kotesovec, Jun 26 2015 *)
Table[HurwitzLerchPhi[1/2, -n, 0]/2, {n, 0, 20}] (* Jean-François Alcover, Jan 31 2016 *)
Fubini[n_, r_] := Sum[k!*Sum[(-1)^(i+k+r)*((i+r)^(n-r)/(i!*(k-i-r)!)), {i, 0, k-r}], {k, r, n}]; Fubini[0, 1] = 1; Table[Fubini[n, 1], {n, 0, 20}] (* Jean-François Alcover, Mar 31 2016 *)
Eulerian1[0, 0] = 1; Eulerian1[n_, k_] := Sum[(-1)^j (k-j+1)^n Binomial[n+1, j], {j, 0, k+1}]; Table[Sum[Eulerian1[n, k] 2^k, {k, 0, n}], {n, 0, 20}] (* Jean-François Alcover, Jul 13 2019, after Peter Luschny *)
Prepend[Table[-(-1)^k HurwitzLerchPhi[2, -k, 0]/2, {k, 1, 50}], 1] (*Federico Provvedi, Sep 05 2020*)
(PARI) {a(n) = if( n<0, 0, n! * polcoeff( subst( 1 / (1 - y), y, exp(x + x*O(x^n)) - 1), n))}; /* Michael Somos, Mar 04 2004 */
(PARI) Vec(serlaplace(1/(2-exp('x+O('x^66))))) /* Joerg Arndt, Jul 10 2011 */
(PARI) {a(n)=polcoeff(sum(m=0, n, m!*x^m/prod(k=1, m, 1-k*x+x*O(x^n))), n)} /* Paul D. Hanna, Jul 20 2011 */
(PARI) {a(n) = if( n<1, n==0, sum(k=1, n, binomial(n, k) * a(n-k)))}; /* Michael Somos, Jul 16 2017 */
(Maxima) makelist(sum(stirling2(n, k)*k!, k, 0, n), n, 0, 12); /* Emanuele Munarini, Jul 07 2011 */
(Maxima) a[0]:1$ a[n]:=sum(binomial(n, k)*a[n-k], k, 1, n)$ A000670(n):=a[n]$ makelist(A000670(n), n, 0, 30); /* Martin Ettl, Nov 05 2012 */
(Sage)
@CachedFunction
def A000670(n) : return 1 if n == 0 else add(A000670(k)*binomial(n, k) for k in range(n))
[A000670(n) for n in (0..20)] # Peter Luschny, Jul 14 2012
(Haskell)
a000670 n = a000670_list !! n
a000670_list = 1 : f [1] (map tail $ tail a007318_tabl) where
f xs (bs:bss) = y : f (y : xs) bss where y = sum $ zipWith (*) xs bs
-- Reinhard Zumkeller, Jul 26 2014
See A240763 for a list of the actual preferential arrangements themselves.
A000629, this sequence, A002050, A032109, A052856, A076726 are all more-or-less the same sequence. - N. J. A. Sloane, Jul 04 2012
Binomial transform of A052841. Inverse binomial transform of A000629.
Asymptotic to A034172.
Cf. A002144, A002869, A004121, A004122, A007047, A007318, A048144, A053525, A080253, A080254, A011782, A154921, A162312, A163204, A242280, A261959, A290376, A074206.
Row r=1 of A094416. Row 0 of array in A226513. Row n=1 of A262809.
Main diagonal of: A135313, A261781, A276890, A327245, A327583, A327584.
Row sums of triangles A019538, A131689, A208744 and A276891.
A217389 and A239914 give partial sums.
Column k=1 of A326322.
Sequence in context: A276900 A276930 A034172 * A032036 A305535 A300793
Adjacent sequences: A000667 A000668 A000669 * A000671 A000672 A000673
nonn,core,nice,easy,changed
N. J. A. Sloane
approved | http://oeis.org/A000670 | CC-MAIN-2020-45 | refinedweb | 5,977 | 59.4 |
Hello nice people
I am pulling data from REST API in a jupyter notebook in DSS and do a lot of things on the pandas dataframe I am creating
I would like to save the dataframe as a dataset I can later explore whitin the project I am working in.
I am trying something like :
if not results.empty: output_data = dataiku.Dataset(instrument_name+"_" + event_name + "_" + timestr) output_data.write_dataframe(results)
But I am always running through a problem
Unable to fetch schema for PROJ1.participant_screening_20210623-233350: dataset does not exist: PROJ1.participant_screening_20210623-233350
I tried some other alternatives to write the dataframe in a dataset but dss seems to look for a scheman with the project name (PROJ1) ?
Any easy way to get the dataframes into dss datasets ?
PS : this is a test instance, am not using a database for intermediate datasets but writing on disc for testing purposes
Thanks
Rad
Following is an example of how to create a new dataset in Python and then write a dataframe to it.
import dataiku dataset_name = 'TEST' # Get a handle to the current project client = dataiku.api_client() project = client.get_project(dataiku.default_project_key()) # Create a SQL dataset (you can create other types by specifying different parameters for the with_store_into method) # Documentation here: # Note that documentation shows project.new_managed_dataset which is incorrect builder = project.new_managed_dataset_creation_helper(dataset_name) builder.with_store_into("NZ_DSWRK") builder.create() # Write dataframe to dataset dataiku.Dataset(dataset_name).write_with_schema(df)
The dataset will show in the UI as not built. You can right click on the dataset and choose "mark as built" to fix this.
Note that this example uses both the "external" api (dataikuapi) to create the dataset and the internal api (dataiku) to write the dataframe to the dataset. More the differences here.
Hope this helps.
Marlan
You can write to any type of SQL database you have a connection set up for. The example I gave included a connection for a Netezza database. Not sure what you mean about storing intermediate tables. We write both intermediate and final output data to Netezza.
To create a file dataset, use the filesystem folders connection in the "with_store_into" method, e.g., builder.with_store_into('filesystem_folders')
Note also that you can pass "overwrite=True" to the create method.
Marlan
Fantastic,
thank you for clarifying, this helped very much | https://community.dataiku.com/t5/Using-Dataiku-DSS/Creating-dataset-from-Pandas/m-p/17647/highlight/true | CC-MAIN-2021-39 | refinedweb | 382 | 66.13 |
The Builder pattern is an easy way to make your code more readable. The pattern is useful when dealing with a constructor that takes several arguments that aren’t easy to keep straight in your head. It is even more useful if the class has multiple constructors with different sets of arguments, or arguments in a different order.
As an example, an Order class might have the following constructor:
public class Order { public Order(long price, int decimalPlaces, int shares, int side, String symbol) { this.price = price; ... } }
A test for the Order class might look like:
@Test public void testOrdersAreUnique() { Order order1 = new Order(5000, 2, 4000, 2, "MSFT"); Order order2 = new Order(5000, 2, 4000, 2, "MSFT"); assertThat(order1).isNotEqualTo(order2); }
It’s not obvious when reading the test what the different constructor arguments mean. Compare with the following:
@Test public void testOrdersAreUnique() { Order order1 = new OrderBuilder() .withPrice(5000) .withDecimalPlaces(2) .withShares(4000) .withSide(2) .withSymbol("MSFT") .build(); Order order2 = new OrderBuilder() .withPrice(5000) .withDecimalPlaces(2) .withShares(4000) .withSide(2) .withSymbol("MSFT") .build(); assertThat(order1).isNotEqualTo(order2); }
The second version is a big improvement. By reading the code you know exactly what attributes the Orders have without needing to refer to the Order class definition. Another benefit of this pattern is that if a new parameter is added to the Order constructor down the road, that change does not cause the lines of code that use the OrderBuilder to change, the way they typically would if they invoked the constructor directly.
Implementing the Builder pattern is straightforward. OrderBuilder can be implemented as:
public class OrderBuilder { private long price; private int decimalPlaces; private int shares; private int side; private String symbol; public OrderBuilder withPrice(long price) { this.price = price; return this; } public OrderBuilder withDecimalPlaces(int decimalPlaces) { this.decimalPlaces = decimalPlaces; return this; } public OrderBuilder withShares(int shares) { this.shares = shares; return this; } public OrderBuilder withSide(int side) { this.side = side; return this; } public OrderBuilder withSymbol(String symbol) { this.symbol = symbol; return this; } public Order build() { return new Order(price, decimalPlaces, shares, side, symbol); } }
Give it a try and see if it improves the readability of your code.
In my next post I’ll talk about using the Builder pattern to make tests easier to read and write.
Wouldn’t some form of named options hash/object work in this situation too?
JS uses this a lot:
new Order({
price: 5000,
decimals: 2,
shares: 4000,
sides: 2,
symbol: “MSFT”
});
Would work just as well, and I’m sure there’s a way to do that in *most* languages.
December 11, 2013 at 4:55 pm
Alas, Java does not give us nice hash literals or named params, so the Builder pattern is the workaround. Certainly for languages that provide a nicer syntax as in your example, that is the way to go.
December 16, 2013 at 1:42 pm | http://pivotallabs.com/using-the-builder-pattern-to-improve-code-readability/ | CC-MAIN-2014-10 | refinedweb | 477 | 53 |
How thoroughly, whenever you need to.
Why break Python?
If you maintain applications, libraries or other code written in Python, and you ever intend to distribute any of that code to third parties, you need to make some decisions. You need to decide how to license your code, you need to decide how to distribute it (if you’re open source, the Python Package Index is the place to do it), etc.
But you also need to — and this is an important and oft-overlooked step — decide which versions of Python you’ll support. And here I don’t just mean Python 2 vs. Python 3. If you support Python 2, which specific Python 2 releases do you want to support? On Python 3, which specific Python 3 releases?
Supporting lots of versions of Python can significantly increase your workload in writing and maintaining code. Python improves with each release of the language, but those improvements don’t migrate backward into older versions (aside from some 2-to-3 compatibility features present in Python 2.6/2.7, and some security fixes that were backported to 2.7). This means that every version of Python you support adds another set of restrictions on what you can write, and can leave you inventing workarounds for convenient features that unfortunately don’t exist in a version you’re still supporting. That is a recipe for burning out as a maintainer, so you really need to pick a set of Python versions and stick to it, in order to put some limits on how much work you’ll have to do.
Also, once you’ve decided on a set of versions to support, you really want to be sure about it. Many people nowadays are good about having some kind of continuous integration running against a matrix of supported versions, but that only handles half the problem: you know your code works on the versions you support, but are you sure it doesn’t work on unsupported Python versions?
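Many projects express that supported-version matrix with a tool like tox, which runs the test suite once per interpreter. A minimal sketch (the envlist here is purely illustrative, not taken from the post) might look like:

```ini
# tox.ini -- run the test suite under each supported interpreter.
# Each name in envlist maps to a Python version: py27 -> 2.7, etc.
[tox]
envlist = py27, py34, py35

[testenv]
deps = pytest
commands = pytest
```

Running `tox` then fails loudly if the suite breaks on any interpreter in the list, which covers the first half of the problem: proving your code works where you say it does.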
This is more important than it sounds at first: one of the nightmare scenarios as a maintainer is to suddenly have someone who's very angry because they didn't change anything in their setup and it suddenly stopped working. And that's exactly what happens with code that accidentally "works" — or at least mostly works — on an unsupported Python version. Everything is fine until one day the fateful code path gets reached for the first time, or you push a new release that turns your declaration of incompatibility into actual incompatibility.
So if at all possible, you should take steps to ensure that when you declare a set of supported Python versions, you’re actually enforcing them.
Which versions should you support?
This is entirely up to you, and will depend on how much work you feel like doing and how badly you want features that only exist in newer versions of Python. My own personal policy, though, is:
- When I’m writing a generic Python library, I support any version of Python receiving upstream support from the Python core team.
- When I’m writing something for use with Django, I look at the current supported versions of Django, and support any version of Python those will run on, minus any which have reached end-of-life (for example, Django 1.8 — still supported by the Django team — supported Python 3.2 at release, but Python 3.2 is now past end of life, so I don’t support Python 3.2 in my personal Django apps).
There are plenty of other sensible options available. You could support only the latest stable 2.x and latest stable 3.x, or only the latest stable 3.x, or only supported 3.x releases, and so on. The important thing is not the specific set of versions you decide on, but that you decide and take action on that decision.
You should also make sure — for anything which will go on the Python Package Index or a similar service — to use trove classifiers in your setup.py to let people know which versions you're supporting. The full list of trove classifiers is quite large, and lets you also indicate topical categories and things like framework version compatibility, but at the very least you should use the Python-version classifiers so people know what to expect (you can also browse PyPI by classifiers, in order to find, say, game-related code which supports Python 3.5 and is known to work on macOS, or any other arbitrary set of classifiers you care to combine).
How not to break Python
There is a very easy brute-force way to enforce use of specific versions of Python: you can just check the version someone’s running, and crash if it’s not one you want to support. For example, if you wanted to only allow Python 3.3 and later, you could do something like:
import sys

if sys.version_info < (3, 3):
    sys.exit("You must use Python 3.3, or newer.")
This works because sys.version_info is a tuple, and Python supports ordered comparisons on tuples (and on versions of Python which include namedtuple in the collections library, sys.version_info is a namedtuple with useful names on its fields, letting you inspect it semantically instead of having to memorize the indices of the different version components).
But it’s not exactly elegant, and it litters a bunch of these brute-force version checks throughout your code. If you’re OK with that, then it does of course get the job done.
Personally, I prefer an approach which involves simply writing natural Pythonic code to do whatever it is my library or application does, but in a way that also ensures compatibility with only the specific set of Python versions I’ve chosen to support. Which isn’t terribly hard once you know a few things, and those things are what the rest of this article will cover.
This will not, however, just be a list of what was new in each version of Python. Instead, the focus will be on generally-useful things which were added to Python and which achieve one of the following (in decreasing order of how preferable they are to use):
- Causing an import-time SyntaxError. This is the ideal, because it ensures absolutely that your code will never work on an unsupported version of Python, with no chance of accidentally being able to sort-of work for long enough to mislead someone.
- Causing an import-time ImportError, NameError or AttributeError. This is almost as good as a syntax error, but not quite: these exceptions will get the job done, just without clearly communicating that your code is written for a different version of Python.
- Causing an exception fairly quickly and predictably, but not automatically on import. This is the least desirable option, because it runs the risk of someone thinking it’s a bug in your code instead of a deliberate way of breaking compatibility with an unsupported version.
Now let’s get started.
Python 2.6
As I mentioned above, I support any version of Python that still has upstream support, which means that for now I still support Python 2 even if I don’t recommend or use it personally. The only 2.x release still receiving upstream support is Python 2.7, but thanks to long-term third-party support contracts there are people still running 2.6. You really should try to stop supporting or using Python 2.6 as soon as possible, but if you must support it — and I highly recommend you make sure you’re paid well for doing so — here’s how to ensure your code works on 2.6 but not on 2.5:
- Use a with statement, and do not include the future import. Python 2.5 only supported with when preceded by a top-level from __future__ import with_statement, so omitting that will cause an import-time SyntaxError on Python 2.5.
- Use as in an except block to assign the exception to a variable. Python 2.6 was the first version to support the except ExceptionClass as varname syntax (previously you had to write except ExceptionClass, varname, which was problematic if you had a tuple of exceptions to catch), so the as form is an import-time SyntaxError in 2.5.
- Use the b prefix on byte strings. This prefix does not exist prior to 2.6, so again is an import-time SyntaxError in 2.5.
- Perform string formatting with the format() method instead of the % operator. The format() method was new as of 2.6, so will cause an AttributeError the first time string formatting attempts to execute in 2.5.
Note: do not attempt to use the print() distinction to break Python 2.5 support. Python 2.6 introduced the from __future__ import print_function flag for emulating Python 3's behavior, but even without it, a call like print('something') already parses successfully on older versions (as the print statement applied to a parenthesized expression), so it will not reliably break anything.
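As a sketch, a single file can exercise all four of the 2.6-era features above; the file and variable names here are illustrative only:

```python
# On Python 2.5 the "with", "except ... as" and b"" lines below are
# import-time SyntaxErrors; the format() call would be a runtime
# AttributeError. On 2.6+ (and Python 3) everything runs.
import tempfile

with tempfile.TemporaryFile() as f:   # bare "with", no __future__ import
    f.write(b"raw bytes")             # b prefix on a byte string

try:
    int("not a number")
except ValueError as exc:             # "except ... as ..." syntax
    message = "rejected: {0}".format(exc)
```

Any one of these would do the job; using several makes the break harder to miss.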
Python 2.7
This is ideally the minimum Python version anyone should support now, which means you’ll want to break Python 2.6 compatibility. For that:
- You can use a set literal or set comprehension. Those were new in 2.7 and are a SyntaxError in 2.6.
- Similarly, dict comprehensions are new in 2.7 and are a SyntaxError in 2.6.
- Use multiple context managers in the same with statement. This is a SyntaxError in Python 2.6.
- Use the format() method to do string formatting, but omit positional identifiers in the placeholders (for example, do '{}'.format('something') instead of '{0}'.format('something')). Being able to omit those was a new feature in 2.7, and will raise a ValueError on Python 2.6 when the format() call executes.
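Here is a small sketch combining those 2.7-only constructs (names are illustrative); each comment notes how the line fails on 2.6:

```python
import tempfile

squares = {n: n * n for n in range(4)}        # dict comprehension: SyntaxError on 2.6
evens = {n for n in range(10) if n % 2 == 0}  # set comprehension: SyntaxError on 2.6
label = "{} of {}".format(3, 10)              # auto-numbered fields: ValueError on 2.6

# Multiple context managers in a single "with": SyntaxError on 2.6.
with tempfile.TemporaryFile() as a, tempfile.TemporaryFile() as b:
    a.write(b"x")
    b.write(b"y")
```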
Python 3
If you only want to support Python 3, and cut off Python 2 altogether, it's somewhat tricky precisely because the compatibility tools for supporting Python 2 and 3 in a single codebase were so good. This makes it hard to trigger a generic "this doesn't work on Python 2" SyntaxError, since Python 3 held off on adding new syntax for a few releases in order to give people a chance to start their porting and shake out issues in the early 3.x releases. Even worse, several syntactic changes got backported for compatibility into 2.7, taking away the option to use them.

The easiest and most obvious thing you can do, syntactically, is to use function annotations. These were never backported into a 2.x release, and are generally useful on a Python 3 codebase. Unfortunately, they're most useful when combined with the typing type-hint library which was first shipped in Python 3.5, so unless you're 3.5+ you'll need to install the generic backported version for earlier Python versions.
If you can’t or don’t want to use annotations, you can:
- Declare a metaclass using the metaclass= syntax. This is a SyntaxError on Python 2.7 and earlier.
- Use the raise ... from syntax when raising a new exception in an except block. This is a SyntaxError on Python 2.7 and earlier.
- Use a keyword-only argument in a function definition. This is a SyntaxError on Python 2.7 and earlier.
- Use extended iterable unpacking. This is a SyntaxError on Python 2.7 and earlier.
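All four of those Python 3-only constructs fit comfortably in one file; every line marked below is a SyntaxError on any Python 2 (class and function names are illustrative):

```python
class Meta(type):
    pass

class Widget(metaclass=Meta):          # metaclass= keyword syntax
    pass

def clamp(value, *, low=0, high=10):   # keyword-only arguments
    return max(low, min(high, value))

def parse(value):
    try:
        return int(value)
    except ValueError as exc:
        raise RuntimeError("bad input") from exc   # raise ... from

first, *middle, last = [1, 2, 3, 4, 5]             # extended iterable unpacking

# Capture the chained exception so we can look at its __cause__:
caught = None
try:
    parse("oops")
except RuntimeError as err:
    caught = err
```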
Python 3.3
Python 3.3 is the oldest currently-supported Python 3.x release, and so is a good target for a minimum version in Python 3 land. This means you’ll want to break earlier 3.x releases. Two very easy ways to do this are:
- Use the u prefix on strings. This was legal in Python 2 to indicate a Unicode string instead of a byte string, but not in 3.0 through 3.2. It was re-added in 3.3 (where it's a no-op since strings are all Unicode no matter what you do) to ease the porting process, and is extremely handy as a way to write code that works on Python 2.7 and 3.3+, but is a SyntaxError on 3.0, 3.1 and 3.2.
- Use a yield from statement to delegate to a sub-generator. This is a SyntaxError on Python 3.2, and also in Python 2.
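A minimal sketch combining both tricks; this runs on 2.7 and 3.3+ but is a SyntaxError on 3.0 through 3.2 (the generator names are illustrative):

```python
greeting = u"hello"          # the u prefix, re-added in 3.3

def inner():
    yield 1
    yield 2

def outer():
    yield 0
    yield from inner()       # sub-generator delegation, new in 3.3

values = list(outer())
```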
Python 3.4
This one’s tricky, because Python 3 did not add any new syntax to the language. So there’s no way to get the ideal situation of code that works on 3.4 and is a
SyntaxError in 3.3. However, you can still force some errors in 3.4:
- Use the asyncio, enum, or pathlib libraries, all of which are new as of Python 3.4 and so are an automatic ImportError in 3.3 (and in Python 2).
- Use hashlib.pbkdf2_hmac(). This function is new as of 3.4, and is another handy 2/3 compatibility shim: it doesn't exist in 3.0, 3.1 or 3.2, but does exist in later 2.7-series releases.
- Use pickle with protocol version 4, which was new as of Python 3.4.
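A short sketch touching each of those 3.4-era features (the enum and path values are illustrative):

```python
import enum        # ImportError on 3.3 and Python 2
import pathlib     # ImportError on 3.3 and Python 2
import hashlib
import pickle

class Color(enum.Enum):
    RED = 1
    GREEN = 2

name = pathlib.PurePath("docs", "index.html").name

# pbkdf2_hmac: new in 3.4 (also present in late 2.7.x releases).
key = hashlib.pbkdf2_hmac("sha256", b"password", b"salt", 1000)

# Pickle protocol 4 is new in 3.4; older versions raise ValueError here.
blob = pickle.dumps({"a": 1}, protocol=4)
restored = pickle.loads(blob)
```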
Python 3.5
Python 3.5 is the latest current 3.x release, though 3.6 is on its way. It also has new syntax we can take advantage of:
- Use the async or await keywords when writing asynchronous code. These are a SyntaxError in Python 3.4 (and in Python 2).
- Use the % operator for formatting of bytes objects. This is another handy Python 2 compatibility trick: % formatting worked on byte strings in Python 2, was not implemented on bytes in Python 3.0 through 3.4, and was added to bytes in 3.5.
- If you write numeric/scientific code, and have some types which implement it, use the new matrix-multiplication operator: @. This is a SyntaxError on Python 3.4 and Python 2.
- Use the new iterable-unpacking features, which are a SyntaxError in Python 3.4 and Python 2.
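For instance, a sketch exercising the 3.5-only features (except the @ operator, which needs a type implementing it); the coroutine name is illustrative:

```python
import asyncio

async def fetch():          # async/await: SyntaxError on 3.4 and Python 2
    return 42

# Run the coroutine with an explicit event loop, which works on 3.5+
# (asyncio.run() only arrived later, in 3.7):
loop = asyncio.new_event_loop()
result = loop.run_until_complete(fetch())
loop.close()

packet = b"GET %s" % b"/index.html"   # %-formatting on bytes, restored in 3.5

# PEP 448 unpacking generalizations: SyntaxError on 3.4.
merged = [*range(2), *range(2)]
combined = {**{"a": 1}, **{"b": 2}}
```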
Note: Python 3.5 added the typing module to the standard library, but as mentioned above it's available as a separate download from PyPI for earlier Python versions. Which means importing from typing is not guaranteed to break compatibility with older Pythons; someone might have installed it separately, or might go and install it separately to make your code work.
Python 3.6
Python 3.6 isn’t out yet, but it’s in beta and pretty close to its final form. This lets us anticipate some things:
- Use f-prefixed strings for formatting operations. These are new as of 3.6, are a SyntaxError in Python 3.5 and Python 2, and save you some typing when you'd just be dumping local variables into a format() call.
- Use annotations to give type hints to variables as well as to functions. This is a SyntaxError in Python 3.5 and Python 2.
- Use underscores in numeric values. This improves readability (i.e., you can write one million as 1_000_000 instead of 1000000), and is a SyntaxError in Python 3.5 and Python 2.
- Use async and await in comprehensions, or in a generator (previously, yield and await could not both occur in the same function body). Once again these get you a SyntaxError in 3.5 and in Python 2.
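The first three of those are easy to combine in a few lines; each construct below is a SyntaxError on 3.5 and earlier (the values are illustrative):

```python
name = "world"
greeting = f"hello, {name}"      # f-strings

population: int = 1_000_000      # variable annotation plus underscore separators

mask = 0xFF_FF                   # underscores work in any numeric literal
```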
And that’s it… for now
Python 3.7 is lurking in the future, but so far nothing’s locked in for it that could be used to forcibly break compatibility with 3.6 or other earlier versions. So that’s where I’ll leave off.
I’ve already implemented some of the above in a piece of code I’m shipping (webcolors 1.7 uses
u prefixes to break Python 3.0-3.2, and a dict comprehension to break 2.6), and plan to do the same with my other projects over the course of their next release cycles. And hopefully the above list will help someone else manage the same in their projects, and perhaps prevent some maintenance headaches and bad days for developers and users. | https://www.b-list.org/weblog/2016/nov/28/break-python/ | CC-MAIN-2018-09 | refinedweb | 2,644 | 66.64 |
3 Essential Google Colaboratory Tips & Tricks
Google Colaboratory is a promising machine learning research platform. Here are 3 tips to simplify its usage and facilitate using a GPU, installing libraries, and uploading data files.
Like many of you, I have been very excited by Google's Colaboratory project. While it isn't exactly new, its recent public release has generated a lot of renewed interest in the collaborative platform.
For those that don't know, Google Colaboratory is...
[...] a Google research project created to help disseminate machine learning education and research. It's a Jupyter notebook environment that requires no setup to use and runs entirely in the cloud.
Here are a few simple tips for making better use of Colab's capabilities while you play around with it. To be clear, these aren't hidden hacks, but a handy collection of documented (and further clarified) functionality that may be essential.
1. Using a Free GPU Runtime
Select "Runtime," "Change runtime type," and this is the pop-up you see:
Ensure "Hardware accelerator" is set to GPU (the default is CPU). Afterward, ensure that you are connected to the runtime (there is a green check next to "connected" in the menu ribbon).
To check whether you have a visible GPU (i.e. you are currently connected to a GPU instance), run the following excerpt (directly from Google's code samples):
import tensorflow as tf

device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0':
  raise SystemError('GPU device not found')
print('Found GPU at: {}'.format(device_name))
If you are connected, here is the response:
Found GPU at: /device:GPU:0
Alternatively, supply and demand issues may lead to this:
And there you go. This allows you to access a free GPU for up to 12 hours at a time.
2. Installing Libraries
Currently, software installations within Google Colaboratory are not persistent, in that you must reinstall libraries every time you (re-)connect to an instance. Since Colab has numerous useful common libraries installed by default, this is less of an issue than it may seem, and installing those libraries which are not pre-installed are easily added in one of a few different ways.
You will want to be aware, however, that installing any software which needs to be built from source may take longer than is feasible when connecting/reconnecting to your instance.
Colab supports both the pip and apt package managers. Regardless of which you are using, remember to prepend any bash commands with a !.
# Install Keras with pip
!pip install -q keras
import keras
>>> Using TensorFlow backend.

# Install GraphViz with apt
!apt-get install graphviz -y
3. Uploading and Using Data Files
You need data to use in your Colab notebooks, right? You could use something like wget to grab data from the web, but what if you have some local files you want to upload to your Colab environment and use them?
Here's the easiest way to do so, IMO.
In a 3 step process, first invoke a file selector within your notebook with this:
from google.colab import files

uploaded = files.upload()
After your file(s) is/are selected, use the following to iterate the uploaded files in order to find their key names, using:
for fn in uploaded.keys():
  print('User uploaded file "{name}" with length {length} bytes'.format(
      name=fn, length=len(uploaded[fn])))
Example output:
User uploaded file "iris.csv" with length 3716 bytes
Now, load the contents of the file into a Pandas DataFrame using the following:
import pandas as pd
import io

df = pd.read_csv(io.StringIO(uploaded['iris.csv'].decode('utf-8')))
print(df)
There you go. There are other ways out there of getting to the same place uploading and using data files, but I find this one the most straightforward and simple.
Google Colab has me excited to try machine learning in a similar way as using Jupyter notebooks, but with less setup and administration. That's the idea, anyways; we'll see how it plays out.
If you have any helpful Colab tips or tricks, leave them in the comments below.
Related:
- Fast.ai Lesson 1 on Google Colab (Free GPU)
- From Notebooks to JupyterLab – The Evolution of Data Science IDE
- Exploratory Data Analysis in Python
NAME
curl_getenv - return value for environment name
SYNOPSIS
#include <curl/curl.h>
char *curl_getenv(const char *name);
DESCRIPTION
curl_getenv() is a portable wrapper for the getenv() function, meant to emulate its behaviour and provide an identical interface for all operating systems libcurl builds on (including win32).
AVAILABILITY
This function will be removed from the public libcurl API in a near future. It will instead be made "available" by source code access only, and then as curlx_getenv().
RETURN VALUE
If successful, curl_getenv() returns a pointer to the value of the specified environment variable. The memory it points to is heap-allocated, so the application must free() it when the data is no longer needed. If the variable could not be found, NULL is returned.

Under unix operating systems, there isn't any point in returning allocated memory, although other systems won't work properly if this isn't done. The unix implementation thus has to suffer slightly from the drawbacks of other systems.
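EXAMPLE

A rough, stdlib-only sketch of the documented contract — return a heap-allocated copy of the variable's value (or NULL) that the caller must free(). This illustrates the behaviour described above; it is not libcurl's actual implementation, and the function name is made up for the example.

```c
#include <stdlib.h>
#include <string.h>

static char *portable_getenv(const char *name)
{
    const char *value = getenv(name);
    if (value == NULL)
        return NULL;

    /* Copy the value so the caller owns the memory on every platform. */
    char *copy = malloc(strlen(value) + 1);
    if (copy != NULL)
        strcpy(copy, value);
    return copy;
}
```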
Besides being serializable, Marten's only other requirement for a .Net type to be a document is the existence of an identifier field or property that Marten can use as the primary key for the document type. The Id can be either a public field or property, and the name must be either id or Id or ID. As of this time, Marten supports these Id types:
String. It might be valuable to use a natural key as the identifier, especially if it is valuable within the Identity Map feature of Marten Db. In this case, the user will be responsible for supplying the identifier.
Guid. If the id is a Guid, Marten will assign a new value for you when you persist the document for the first time if the id is empty. And for the record, it's pronounced "gwid".
CombGuid is a sequential Guid algorithm. It can improve performance over the default Guid as it reduces fragmentation of the PK index. (More info soon)
Int or Long. As of right now, Marten uses a HiLo generator approach to assigning numeric identifiers by document type. Marten may support Postgresql sequences or star-based algorithms as later alternatives.
- When the ID member of a document is not settable or not public, a NoOpIdGeneration strategy is used. This ensures that Marten does not set the ID itself, so the ID should be generated manually.
- A Custom ID generator strategy is used to implement the ID generation strategy yourself.
When using a `Guid`/`CombGuid`, `Int`, or `Long` identifier, Marten will ensure the identity is set immediately after calling `IDocumentSession.Store` on the entity.
You can see some example id usages below:
public class Division
{
    // String property as Id
    public string Id { get; set; }
}

public class Category
{
    // Guid's work, fields too
    public Guid Id;
}

public class Invoice
{
    // int's and long's can be the Id
    // "id" is accepted
    public int id { get; set; }
}
Overriding the Choice of Id Property/Field
If you really want to — or you're migrating existing document types from another document database — Marten provides the [Identity] attribute to force Marten to use a property or field as the identifier that doesn't match the "id" or "Id" or "ID" convention:
public class NonStandardDoc
{
    [Identity]
    public string Name;
}
#include <fstab.h>
The first field, (fs_spec), describes the special device or remote file system to be mounted. The contents are decoded by the strunvis(3) function. This allows using spaces or tabs in the device name which would be interpreted as field separators otherwise.
The second field, (fs_file), describes the mount point for the file system. For swap partitions, this field should be specified as "none". The contents are decoded by the strunvis(3) function, as above.
The third field, (fs_vfstype), describes the type of the file system. The system can support various file system types. Only the root, /usr, and /tmp file systems need be statically compiled into the kernel; everything else will be automatically loaded at mount time.

The fourth field, (fs_mntops), describes the mount options associated with the file system. It is formatted as a comma-separated list of options, and options that take arguments are written with an equal sign and no spaces. For example, the mount(8) option set

sync -o noatime -m 644 -M 755 -u foo -g bar

should be written as

sync,noatime,-m=644,-M=755,-u=foo,-g=bar

in the option field of fstab.

If the options "userquota" and/or "groupquota" are specified, disk quotas are enabled on the file system. By default the quota files are stored at the root of the file system; this default may be overridden by putting an equal sign and an alternative absolute pathname following the quota option. Thus, if the user quota file for /tmp is stored in /var/quotas/tmp.user, this location can be specified as:
userquota=/var/quotas/tmp.user
If the option "failok" is specified, the system will ignore any error which happens during the mount of that filesystem, which would otherwise cause the system to drop into single user mode. This option is implemented by the mount(8) command and will not be passed to the kernel.
If the option "noauto" is specified, the file system will not be automatically mounted at system boot time.
If the option "late" is specified, the file system will be automatically mounted at a later stage of the boot process. For file systems on geli(8)-encrypted devices, options such as "notrim" and "sectorsize" may be passed to control those geli(8) parameters.

The sixth field, (fs_passno), is used by fsck(8) to determine the order in which file system checks are done at reboot time. Checks may run in parallel to exploit parallelism available in the hardware, but system utilities may determine that the file system resides on a different physical device, when it actually does not, as with a ccd(4) device. All file systems with a lower fs_passno value will be completed before starting on file systems with a higher fs_passno value. E.g. all file systems with a fs_passno of 2 will be completed before any file systems with a fs_passno of 3 or greater are started. Gaps are allowed between the different fs_passno values. E.g. file systems listed in /etc/fstab may have fs_passno values such as 0, 1, 2, 15, 100, 200, 300, and may appear in any order within /etc/fstab.
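Pulling the fields together, a short illustrative /etc/fstab follows; the device names, mount points, and option choices are examples only, not a recommended layout:

```
# Device        Mountpoint    FStype    Options              Dump    Pass#
/dev/ada0p2     /             ufs       rw                   1       1
/dev/ada0p3     none          swap      sw                   0       0
/dev/ada0p4     /usr          ufs       rw,noatime           2       2
/dev/ada0p5     /tmp          ufs       rw,userquota,noauto  0       0
proc            /proc         procfs    rw                   0       0
```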
Type: Posts; User: Leonardo1143
Would I be able to set it to "Desktop/GreatWall.JPG". Would that work if the image is there?
import java.awt.BorderLayout;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.ButtonGroup;
import javax.swing.ImageIcon;
import javax.swing.JFrame;...
How would I do the latter? just put in (double r)?
Here is what my compiler is telling me: GeometryCalculator.java:20: error: constructor Sphere in class Sphere cannot be applied to given types;
Sphere sp = new Sphere(r);
^...
Nvm fixed that one too
--- Update ---
Alright my code compiled without a problem. Thank you guys
--- Update ---
I'm having some trouble. Right now it's just that the compilier is telling me it cannot find the symbol
Grades.java:10: error: cannot find symbol
ArrayList<Integer> Grades = new...
Sorry I was doing this on a website. It messed up when I moved it here. But I fixed the problem.
public boolean sum28(int[] nums) {
int [] sum28 = {10,2,2,2,2,50};
int sum;
for (int i=0; i<sum28.length; i++) //This will cycle through the elements
{
if (sum28[i] == 2) //If...
Oh sorry for being ambiguous in my first post. But what I was wondering is how to loop it, effectively, and what are the coordinates. For example (x,x,x,x). What do each of those x's represent when i want...
import javax.swing.*;
import java.awt.Graphics;
import java.awt.Color;
public class MyDrawing_Start extends JFrame {
public MyDrawing_Start() {
add (new MyPanel());
}
public static...
Okay, now I am a bit confused. So I may just go the longer route and just individually label the parts I need for my algorithm to work (armor, combat level, etc). Thanks for trying though.
Oh okay so using 10 for my example should fix it?
I chose 27 because it was used in the example
That was the whole error message and No I don't know what the radix was I was basing it off the API's example.
I thought the second arg,27, was for radix? The API used that as an example.
"Exception in thread "main" java.lang.Error: Unresolved compilation problem:
at...
Armor = parseInt("Armor", 27)
like that?
So how would I go about assigning the string value of armor to an int? Like I said
parseInt("Kona", 27) returns 411787
returns a value but when I try
parseInt("Armor", 27)
Okay I read it and don't completely understand it. If I have a string value, like armor, Do i put it in like this?
Integer.parseInt(Armor);
...
Where would I find the API document?
Sorry it seems I was supposed to use "Integer.parseInt()". But how would I go about changing something like armor, a string, to 25, an int value, using this?
Exception in thread "main" java.lang.Error: Unresolved compilation problem:
The method parseInt(JTextField) is undefined for the type new ActionListener(){}
at...
import javax.swing.*;
import java.awt.event.*;
import java.util.Random;
public class SimpleWindow extends JFrame {
JLabel...
Thank You, Norm. I got it to work thank you for being so patient with me.
Oh. I think I'm getting it. Remove setText(). from the method and move into the action listener only?
Okay which line of code is calling the setText()? I can't see it? Because I thought I needed to place getText in order for the setText to be called. I'm guessing this is wrong? | http://www.javaprogrammingforums.com/search.php?s=fc37a907315bd29feb8952ce58d69be3&searchid=685649 | CC-MAIN-2013-48 | refinedweb | 596 | 70.19 |
Object Oriented C# for ASP.NET Developers

By Kevin Yank
There was a time when any Web developer with a basic knowledge of JavaScript could pick up the essentials of ASP Web development in a couple of hours. With ASP.NET, Microsoft’s latest platform for Web application development, the bar has been raised. Though tidier and generally more developer-friendly, real-world ASP.NET development requires one important skill that ASP did not: Object Oriented Programming (OOP).
The two most popular languages that were used to write ASP scripts, VBScript and JScript, have been retrofitted with OOP features to become VB.NET and JScript.NET. In addition, Microsoft has introduced an entirely new programming language called C# (C-sharp). Unhindered by clunky syntax inherited from a non-OOP legacy, C# is arguably the cleanest, most efficient language for .NET in popular use today.
In this article, I’ll introduce you to the OOP features of C# as they apply to ASP.NET Web development. By the end, you should have a strong grasp of exactly what OOP is, and why it’s such a powerful and important aspect of ASP.NET. If you’re a seasoned pro when it comes to object oriented programming (for example, if you have some Java experience under your belt), you might like to bypass all the theory and skip straight to the section on Code-Behind Pages.
This article is the third in a series on ASP.NET. If you’re new to ASP.NET Web development and haven’t read my previous articles, check out Getting Started with ASP.NET and ASP.NET Form Processing Basics before proceeding.
Since C# is such a similar language to Java, much of this article is based on my two-part series, Object Oriented Concepts in Java. Please therefore accept my apologies if some of the examples seem eerily familiar to longtime readers.
Essential Jargon
Writing .NET applications (be they Windows desktop applications or ASP.NET Web applications) is all about constructing a web of interrelated software components that work together to get the job done. These components are called objects.
There are many different kinds of objects, and in fact a big part of programming in .NET is creating your own types of objects. To create a new type of object that you can use in your .NET programs, you have to provide a blueprint of sorts that .NET will use to create new objects of this type. This blueprint is called a class.
Let’s look at a conceptual example to help these ideas take hold. Say you worked for the National Forestry Commission, and your Web site needed to keep track of a group of trees in a forest; specifically, say it needed to keep track of the heights of those trees. Fig 1 shows an example of the class and objects that you might create as an ASP.NET programmer working on this site.
On the left we have a class called Tree. This class defines a type of object — a Tree — that will serve as the blueprint from which all Tree objects will be created. The class itself is not a Tree; it is merely a description of what a Tree is, or what all Trees have in common. In this example, our Tree class indicates that all Trees have a property called ‘height’.
On the right, we have two actual Tree objects. These are Trees, and they were created based on the blueprint provided by the Tree class. These objects are said to be instances of the Tree class, and the process of creating them is called instantiation. Thus, we can say that by instantiating the Tree class twice, we have created two instances of the Tree class, two objects based on the Tree class, or just two Trees. Notice that in creating these objects we have assigned a value to their height property. The first Tree is 2 meters high and the second is 5 meters high. Although the values of these properties differ, this does not change the fact that both objects are Trees. They are simply Trees with different heights.
Classes don’t only define properties of objects; they also define operations that may be performed by those objects. Such operations are called methods in object-oriented languages like C#. Continuing with our Tree example, we could define a method called ‘Grow’ in the Tree class. The result of this would be that every Tree object would then be able to perform the Grow operation as defined in the class. For instance, performing the Grow operation on a Tree might increase its height property by one metre.
A C# Tree
For our first foray into object-oriented programming, I propose to implement the Tree class discussed above in C# and then write an ASP.NET page that uses it to instantiate a couple of Trees and make them grow a little.
Open your text editor of choice and create a new text file called Tree.cs. This file will contain the definition of the Tree class. Type the following (the line numbers are provided for your convenience only, and should not be typed as part of the code):
1 /**
2 * Tree.cs
3 * A simple C# class.
4 */
5
6 public class Tree {
Lines 1-4 are just an introductory comment (/* marks the start of a multi-line comment in C#, while */ marks the end), and will be ignored by the compiler. We begin the actual code on line 6 by announcing our intention to create a class called Tree. The word public indicates that our class may be used by any code in our program (or Web site). Note that I am observing the convention of spelling class names with a capital letter.
7 public int height = 0;
8
Aside from the word public at the start of this line, this looks just like a standard variable declaration. As it would seem, we are declaring an integer variable called height and assigning it a value of zero. Again, it is a matter of convention that variable names are not capitalized. Variables declared in this way just inside a class definition become fields for objects of the class. Fields are variables that behave as properties of a class. Thus, this line says that every object of class Tree will have a field (property) called height that will contain an integer value, and that the initial value of the height field will be zero. The word public indicates that any code in your program (or Web site) can view and modify the value in this field. Later in this article, we'll see techniques for protecting data stored in an object's fields, but for now this will suffice.
That’s actually all there is to creating a Tree class that will keep track of its height; however, to make this example at least a little interesting, we’ll also implement the
Grow method that I mentioned in the previous section. It begins with the following:
9 /**
10 * Grows this tree by 1 meter
11 */
12 public void Grow() {
Let me explain this line one word at a time. The word
public once again indicates that the
Grow method (operation) is publicly available, meaning that it may be triggered by code anywhere in the program. Methods may also be
private,
protected,
internal, or
protected internal, and I’ll explain the meaning of each of these options later on. The word
void indicates that this method will not return a value. Later on we’ll see how to create methods that produce some value as an outcome, and for such methods we would replace
void with the type of value to be produced (e.g.
int).
Finally, the word
Grow is the actual name of the method that is to be created. Note that I am observing the convention of spelling method names starting with an uppercase letter (this .NET convention is different from some other languages, such as Java, where methods are normally not capitalized). The empty parentheses following this word indicate that it is a method we are declaring (as opposed to another field, like
height above). Later on we’ll see cases where the parentheses are not empty. The opening brace signifies the start of the block of code that will be executed each time the
Grow method of a Tree object is triggered.
13 height = height + 1;
This operation happens to be a simple one. It takes the value of the
height field and adds one to it, storing the result back into the
height field. Note that we did not need to declare
height as a variable in this method, since it has already been declared as a field of the object on line 7 above. If we did declare
height as a variable in this method, C# would treat it as a separate variable created anew every time the method was run, and our class would no longer function as expected (try it later if you’re curious).
14 }
15 }
The closing brace on line 14 marks the end of the
Grow method, while that on line 15 marks the end of the
Tree class. After typing all this in, save the file. Your next job is to compile it.
If you installed the Microsoft .NET Framework SDK separately (as opposed to getting it with a product like Visual Studio .NET), you should be able to open a Command Line window and run the C# compiler (by typing
csc) from any directory. If you’re using the Framework SDK that comes with Visual Studio .NET, you need to launch the special Visual Studio .NET Command Prompt instead (Start | Programs | Microsoft Visual Studio .NET | Visual Studio .NET Tools | Visual Studio .NET Command Prompt).
If you’ve never used the Command Line before, read my cheat sheet on the subject before proceeding. When you’re ready, navigate to the directory where you created
Tree.cs and type the following to compile the file:
C:\CSTree>csc /target:library Tree.cs
Assuming you typed the code for the
Tree class correctly, a file called
Tree.dll is created in the same directory. This is the compiled definition of the Tree class. Any .NET program (or Web page) that you try to create a Tree in will look for this file to contain the blueprint of the object to be created. In fact, that’s our next step.
Using the Tree Class
Okay, so now you have the blueprint of a tree. Big deal, right? Where things get interesting is when you use that blueprint to create and manipulate Tree objects in a .NET program such as an ASP.NET Web page. Create a new file in your text editor called
PlantTrees.aspx and follow along as I talk you through writing such a page.
First, here’s a look at the full code for the page:
1 <%@ Page Language="C#" %>
2 <html>
3 <head>
4 <title>Planting Trees</title>
5 <script runat="server">
6 protected void Page_Load(Object Source, EventArgs E)
7 {
8 string msg = "Let's plant some trees!<br/>";
9
10 // Create a new Tree
11 Tree tree1 = new Tree();
12
13 msg += "I've created a tree with a height of " +
14 tree1.height + " metre(s).<br/>";
15
16 tree1.Grow();
17
18 msg += "After a bit of growth, it's now up to " +
19 tree1.height + " metre(s) tall.<br/>";
20
21 // Create two more Trees
22 Tree tree2 = new Tree();
23 Tree tree3 = new Tree();
24
25 // Grow them by different amounts
26 tree2.Grow();
27 tree3.Grow();
28 tree3.Grow();
29
30 msg += "I've grown two more trees; they are now " +
31 tree2.height + " and " + tree3.height + " metre(s) tall.<br/>";
32
33 Output.Text = msg;
34 }
35 </script>
36 </head>
37 <body>
38 <p><asp:label id="Output" runat="server" /></p>
39 </body>
40 </html>
If you look at the bottom of the code, you’ll see the HTML section basically just contains a single
<asp:label> tag. Our
Page_Load function is where all the action will happen, and it will use that
<asp:label> to display the results of our messing around.
So let’s focus on the code within
Page_Load:
8 string msg = "Let's plant some trees!<br/>";
Here we’re creating a text string (
string) called
msg. We’ll use it to store the message that we’ll eventually tell the
<asp:label> tag to display. To begin with, it contains a little introductory message, with a
<br/> tag at the end to create a new line on the page.
10 // Create a new Tree
11 Tree tree1 = new Tree();
As the comment on line 10 suggests, line 11 achieves the feat of creating a new Tree out of thin air. This is a really important line; so let me explain it in depth. The line begins by declaring the class (type) of object to be created (in this case,
Tree). We then give a name to our new Tree (in this case,
tree1). This is in fact identical to declaring a new variable by specifying the type of data it will contain followed by the name of the variable (e.g.
string msg).
The rest of the line is where the real magic happens. The word
new is a special C# keyword that triggers the instantiation of a new object. After
new comes the name of the class to be instantiated, followed by a pair of parentheses (again, in more complex cases that we shall see later, these parentheses may not be empty).
In brief, this line says, "create a variable of type
Tree called
tree1, and assign it a value of a
new Tree." So in fact this line isn’t just creating a Tree, it’s also creating a new variable to store it in. Don’t worry if this distinction is a little hazy for you at this point; later examples will serve to clarify these concepts significantly.
Now that we’ve created a tree, let’s do something with it:
13 msg += "I've created a tree with a height of " +
14 tree1.height + " metre(s).<br/>";
This should not be too unfamiliar to you. The
+= near the start of the line tells C# to add the following string to the string already stored in the
msg variable. In other words,
msg += is shorthand for
msg = msg +. So this two-line command is just adding another line of text to the message, except that part of the line of text takes its value from the height of the
tree1 variable (
tree1.height). If you simply typed
height instead of
tree1.height, C# would think you were referring either to a variable called
height declared in this method, or a tag with ID
height (in the
PlantTrees.aspx page itself). Unable to find either of these, your Web server would print out an error message when you tried to view the page. In order to tell C# that you are referring to the
height field of the Tree in
tree1, you need to tack on
tree1 followed by the dot operator (
.).
The dot operator may be thought of sort of like the C# way of saying "belonging to" when you read the expression backwards. Thus,
tree1.height should be read as "
height belonging to
tree1." Since Trees are created with a height of zero, lines 13 and 14 should print out "I’ve created a tree with a height of 0 metre(s)."
Calling (or triggering) methods belonging to an object is accomplished in a similar way:
16 tree1.Grow();
This line calls the
Grow method belonging to the Tree in
tree1, causing it to grow by a metre. Again, the set of parentheses indicate that it is a method we are referring to, not a field. So after this line if we print out the height of
tree1 again…
18 msg += "After a bit of growth, it's now up to " +
19 tree1.height + " metre(s) tall.<br/>";
This line will print out "After a bit of growth, it's now up to 1 metre(s) tall."
To show that each Tree has its own height value that is independent of those of the other Trees, we’ll polish off this example by creating a couple more Trees and having them grow by different amounts:

21 // Create two more Trees
22 Tree tree2 = new Tree();
23 Tree tree3 = new Tree();
24
25 // Grow them by different amounts
26 tree2.Grow();
27 tree3.Grow();
28 tree3.Grow();
29
30 msg += "I've grown two more trees; they are now " +
31 tree2.height + " and " + tree3.height + " metre(s) tall.<br/>";
Finally, we assign our completed
msg variable as the
Text property of the
<asp:label> tag:
33 Output.Text = msg;
34 }
Ok, so now we’ve got a class (
Tree) and an ASP.NET page that uses it. Let’s deploy these on an IIS Web server to try them out!
Create a new directory in your Web root directory (e.g.
c:\inetpub\wwwroot) called Trees (i.e.
c:\inetpub\wwwroot\Trees). This will be the root directory of our little ASP.NET Web application. Copy
PlantTrees.aspx into that directory, then load the page in your browser. Fig. 2 illustrates what you should see.
If you sift through the technical mumbo jumbo, you can see that this error screen is complaining that ASP.NET has no idea what a
Tree is. We need to put our
Tree.dll file (which contains the
Tree class) where ASP.NET can find it.
ASP.NET looks for class files in the
bin directory of the current Web application. So deploying our
Tree class is a two-step process:
- Ensure that IIS is configured so that the
Trees directory is a Web application.
- Place the
Tree.dll file into the
Trees\bin directory.
To make the
Trees directory a Web application, open the Windows Control Panel on your server, open Administrative Tools, then Internet Information Services. Under local computer, Web Sites, Default Web Site, you’ll see the
Trees directory listed. The plain folder icon next to it indicates that it’s just a regular directory, as opposed to a Web application.
Right-click the
Trees directory and choose Properties…. On the Directory page, click the Create button under Application Settings. Click OK to close the Window, and you’ll see that Trees now has a nice blue Web Application icon next to it. It looks like a little box with a Web page in it (see Fig. 3).
Now when IIS loads an ASP.NET page in
Trees or one of its subdirectories, it will look in
Trees\bin for any class files that might be needed. In this case, drop
Tree.dll into the
bin directory (you’ll need to create the
bin directory if you haven’t already).
With your Web Application created and
Tree.dll in the
bin directory, try loading the page again. This time, you should see the expected output, as shown in Fig. 4.
PlantTrees.aspx
Inheritance
One of the strengths of object-oriented programming is inheritance. This feature allows you to create a new class that is based on an old class. Let’s say your program also needed to keep track of coconut trees, so you would need a new class called
CoconutTree that kept track of the number of coconuts in each tree. You could write the
CoconutTree class from scratch, copying all the code from the
Tree class that is responsible for tracking the height of the tree and allowing the tree to grow, but for more complex classes that could involve a lot of duplicated code. What would happen if you later decided that you wanted your Trees to have non-integer heights (like 1.5 metres)? You would have to adjust the code in both classes!
Inheritance allows you to define your
CoconutTree class as a subclass of the
Tree class, such that it inherits the fields and methods of that class in addition to its own. To see what I mean, here’s the code for
CoconutTree:
1 /**
2 * CoconutTree.cs
3 * A more complex kind of tree.
4 */
5
6 public class CoconutTree : Tree {
7 public int numNuts = 0; // Number of coconuts
8
9 public void GrowNut() {
10 numNuts = numNuts + 1;
11 }
12
13 public void PickNut() {
14 numNuts = numNuts - 1;
15 }
16 }
The code
: Tree on line 6 endows the
CoconutTree class with a
height field and a
Grow method in addition to the
numNuts field (yes, this example was fiendishly conceived to produce a childishly amusing variable name — what of it?) and the
GrowNut and
PickNut methods that are declared explicitly for the class. The diagram in Fig. 5 shows the relationship of our two classes.
CoconutTree is a subclass of
Tree
By building up a hierarchical structure of classes with multiple levels of inheritance, you can create powerful models of complex Objects with little or no duplication of code (which makes for less typing and easy maintenance). We’ll see a very important example of the power of inheritance for ASP.NET developers before the end of this article.
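To see the inherited members in action, here is a quick sketch (a hypothetical console program combining trimmed-down versions of the two classes above; the Demo class and its output are not part of the article’s Web example):

```csharp
using System;

public class Tree {
    public int height = 0;
    public void Grow() { height = height + 1; }
}

public class CoconutTree : Tree {
    public int numNuts = 0;
    public void GrowNut() { numNuts = numNuts + 1; }
    public void PickNut() { numNuts = numNuts - 1; }
}

public class Demo {
    public static void Main() {
        CoconutTree ct = new CoconutTree();
        ct.Grow();     // inherited from Tree
        ct.GrowNut();  // defined in CoconutTree
        Console.WriteLine(ct.height);  // prints 1
        Console.WriteLine(ct.numNuts); // prints 1
    }
}
```

Note that the CoconutTree object can call Grow even though CoconutTree never declares it; the method comes along for free with the `: Tree` declaration.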
Compiling Multiple Classes
You should now have two C# source files in your working directory —
Tree.cs and
CoconutTree.cs.
Tree.cs can still be compiled on its own as before, but since
CoconutTree.cs refers to the Tree class, the compiler needs to know where to find that class to successfully compile
CoconutTree.cs.
In most cases, the easiest way to proceed is to compile all your related classes into a single DLL (also known as an assembly in .NET terminology). To compile
Tree.cs and
CoconutTree.cs into a single DLL file called
trees.dll that will contain the definitions of both classes, type the following command in the directory that contains the C# source files:
csc /target:library /out:trees.dll Tree.cs CoconutTree.cs
If the files you want to compile are the only C# source files in the directory, you can use a wildcard as a shortcut:
csc /target:library /out:trees.dll *.cs
You can then just plop the
trees.dll file in the
bin directory of your Web application and it will have access to both classes.
Passing Parameters and Returning Values
Most of the methods we have looked at so far have been of a special type. Here is a declaration of one such method, the
PickNut method for the
CoconutTree class that we developed above:
public void PickNut() {
numNuts = numNuts - 1;
}
What makes this, and the other methods we have looked at so far, special is the fact that it doesn’t require any parameters, nor does it return a value. As you’ll come to discover as we look at more practical examples of C#, most useful methods take parameters, return a value, or both. For instance, we can let the calling code specify how many nuts should be picked by giving PickNut a parameter:

public void PickNut(int numberToPick) {
  numNuts = numNuts - numberToPick;
}

The parentheses now declare a variable called numberToPick, which receives whatever value the calling code supplies. As it stands, though, nothing stops a caller from picking a negative number of nuts, or more nuts than the tree actually has. We can guard against both cases, and report whether the picking actually took place, by giving the method a return value:

public bool PickNut(int numberToPick) {
  if (numberToPick < 0) return false;
  if (numberToPick > numNuts) return false;
  numNuts = numNuts - numberToPick;
  return true;
}

The
return command immediately terminates the method and sends the specified value back to the calling code. Thus, the operation of picking the nuts (subtracting from the
numNuts field) is only performed if neither of the two checks triggers an early return. The word
bool in place of
void indicates that this method returns a Boolean value (true or false). The calling code can then use that value to detect whether the operation succeeded (here the variable myTree, assumed to hold a CoconutTree, is just for illustration):

if (!myTree.PickNut(10)) {
ErrorLabel.Text = "Error: Could not pick 10 nuts!";
return;
}
nutsInHand = nutsInHand + 10;
The condition of the
if statement calls
PickNut with a parameter value of 10, and then checks its return value to see if it’s false (note the
! operator). If it is, an error message is printed out. The
return command then terminates the
Page_Load function.
Access Modifiers
As we have already seen, classes should usually be set
public to allow code elsewhere in your application to make use of them. But there are times when you’ll want to restrict access to a class; for example, you might only want to allow other classes in the same assembly (DLL file) to use it. That’s when you’ll want to specify a different access modifier. Class members, which include fields and methods, can be similarly modified to control access to them.
Here are the access modifiers supported by C#:
- public: any code may access the class or class member.
- private: the class or class member is not accessible outside of the class that contains it (yes, in advanced cases, you can put a class inside another class).
- protected: the class or class member is only accessible by the class that contains it, or any subclass of that class.
- internal: the class or class member is only accessible by code in the same assembly (DLL).
- protected internal: the class or class member is accessible by code in the same assembly (DLL), or by any subclass of the class that contains it.
If you don’t specify any access modifier for a class member, C# will default to
private. (A class that is not nested inside another class defaults to internal.)
Consider the following sample declarations:
int numNuts = 0;
bool PickNut(int numberToPick) {
...
}
Since no access modifiers are specified, both of these members will act as if they were declared
private. Even if the class is declared
public, the value of the above
numNuts field (assuming it is declared as the field of the class) may only be accessed or modified by code inside the same class. Similarly, the
PickNut method shown above may only be invoked by code elsewhere in the same class.
For now, just take note of the choices I make for access modifiers. I’ll explain my reasons in each case, and before long you’ll be able to choose your own!
C# Properties
One problem with public fields is that any code in your application can assign them any value at all; nothing stops a (hypothetical) line such as ct.numNuts = -10; from giving a tree a negative number of coconuts. How do we prevent fields like
numNuts from being assigned values like this that don’t make sense? The solution is to make the fields themselves
private, and permit access to them using a C# Property.
A C# Property is a pair of methods that is used to read and write a property of an object. From here on, C# Properties will be called simply
properties.
Here is an updated version of our
CoconutTree class that makes use of this technique:
1 public class CoconutTree : Tree {
2 private int numNuts = 0;
3
4 public void GrowNut() {
5 numNuts = numNuts + 1;
6 }
7
8 public bool PickNut(int numToPick) {
9 if (numToPick < 0) return false;
10 if (numToPick > numNuts) return false;
11 numNuts = numNuts - numToPick;
12 return true;
13 }
14
15 public int NumNuts {
16 get {
17 return numNuts;
18 }
19 set {
20 if (value < 0) return;
21 numNuts = value;
22 }
23 }
26 }
As you can see on line 2, the
numNuts field is now private, meaning that only code within this class is allowed to access it. The
GrowNut and
PickNut methods remain unchanged; they can continue to update the
numNuts field directly (the constraints in
PickNut ensure that the value of
numNuts remains legal). Since we still want code to be able to determine the number of nuts in a tree, we have added a public
NumNuts property (note the capitalization, which distinguishes
NumNuts the property from
numNuts the field — a pair of
numNuts, so to speak):
15 public int NumNuts {
16 get {
17 return numNuts;
18 }
19 set {
20 if (value < 0) return;
21 numNuts = value;
22 }
23 }
The declaration of a property starts off just like a field, with an access modifier (
public), the type (
int), and the name (
NumNuts), which is capitalized by convention. After those preliminaries, we define the accessors. The get accessor is used to retrieve the value of the property, while the set accessor is used to change it.
Accessors behave just like methods. The get accessor, which must always return a value of the type assigned to the property, in this example simply returns the value of the private
numNuts field. Nothing too fancy here. The set accessor, however, which is always provided with a variable called
value that contains the value that is to be assigned to the property, checks that this variable is greater or equal to zero (since we can’t have a negative number of nuts) before storing it in the
numNuts field.
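To see the property at work, here is a short sketch (a hypothetical console program using a trimmed-down version of the class above; the Demo class is not part of the article’s example):

```csharp
using System;

public class CoconutTree {
    private int numNuts = 0;

    public int NumNuts {
        get { return numNuts; }
        set {
            if (value < 0) return; // reject nonsensical values
            numNuts = value;
        }
    }
}

public class Demo {
    public static void Main() {
        CoconutTree ct = new CoconutTree();
        ct.NumNuts = 5;   // accepted: the set accessor stores 5
        ct.NumNuts = -1;  // rejected: the set accessor returns without storing
        Console.WriteLine(ct.NumNuts); // prints 5
    }
}
```

From the caller’s point of view, NumNuts reads and writes just like a field, but the set accessor quietly enforces the rules we chose.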
Even in cases where any value is acceptable, you should make your objects’ fields
private and provide access to each field with a property. This leaves your class free to change how it stores its data internally without breaking the code that uses it.
Constructors
A constructor is a special type of method that is invoked automatically when an object is created. Constructors allow you to specify starting values for properties, and other such initialization details.
Consider once again our
Tree class; specifically, the declaration of its
height field (which should now be
private and accompanied by a
public property):

private int height = 0;

Rather than fixing the initial height at zero, we can let the code that creates a Tree choose its starting height by giving the class a constructor:

public Tree(int height) {
  if (height < 0) this.height = 0;
  else this.height = height;
}

Constructors are declared much like regular methods, with two important differences:
- Constructors do not specify a return type (no void, int,
bool, etc.) in their declaration.
- Constructors have the same name as the class they are used to initialize. Since we are writing the
Tree class, its constructor must also be named
Tree.
So dissecting this line by line, the first line states that we are declaring a public constructor that takes a single parameter and assigns its value to an integer (
int) variable
height. Note that this is not the object field
height, as we shall see momentarily. The second line checks to see if the
height variable is less than zero. If it is, we set the
height field of the tree to zero (since we don’t want to allow negative tree heights). If not, we assign the value of the parameter to the field.
Notice that since we have a local variable called
height, we must refer to the
height field of the current object as
this.height.
this is a special variable in C# that always refers to the object in which the current code is executing. If this confuses you (no pun intended), you could instead name the constructor’s parameter something like
newHeight. You’d then be able to refer to the object field simply as
height.
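For instance, the constructor could be rewritten as follows (a sketch only; the behaviour is identical to the version that uses this, and the read-only Height property is added here just so the stored value can be checked):

```csharp
public class Tree {
    private int height = 0;

    // Because the parameter name differs from the field name,
    // no 'this.' prefix is needed to refer to the field.
    public Tree(int newHeight) {
        if (newHeight < 0) height = 0;
        else height = newHeight;
    }

    public int Height {
        get { return height; }
    }
}
```

Which style you choose is a matter of taste; the this.height form has the advantage that the parameter name documents exactly which field it initializes.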
Since the
Tree class now has a constructor with a parameter, you must specify a value for that parameter when creating a new
Tree:

Tree tree1 = new Tree(10);

C# also lets you declare several versions of the same method (or constructor) in one class, as long as each version takes a different list of parameters. This is called overloading. For example, we could give CoconutTree a second version of PickNut that takes no parameters and picks a single nut:

public bool PickNut() {
if (numNuts == 0) return false;
numNuts = numNuts - 1;
return true;
}
public bool PickNut(int numToPick) {
if (numToPick < 0) return false;
if (numToPick > numNuts) return false;
numNuts = numNuts - numToPick;
return true;
}
One way to save yourself some typing is to notice that
PickNut() is actually just a special case of
PickNut(int numToPick); that is, calling
PickNut() is the same as calling
PickNut(1), so you can implement PickNut() by simply making the equivalent call:
public bool PickNut() {
return PickNut(1);
}
public bool PickNut(int numToPick) {
if (numToPick < 0) return false;
if (numToPick > numNuts) return false;
numNuts = numNuts - numToPick;
  return true;
}

Overriding Methods
Recall the inheritance relationship we established earlier (i.e.
CoconutTree is a subclass of
Tree). A subclass doesn’t just add new fields and methods; it can also override the methods it inherits by providing its own definitions for them. For example, CoconutTree might redefine Grow so that a growing coconut tree also grows a new nut. But what if you later expanded the Grow method in the Tree class (e.g. to grow leaves)? How could you make sure that this was still inherited by the
CoconutTree class without having to make the change in both places? Like in our discussion of overloaded methods, where we implemented a simple method by calling a special case of the more complicated method, we can implement a new definition for a method in a subclass by referring to its definition in the base class (also called the parent class or superclass):
public void Grow() {
base.Grow();
GrowNut();
}
The
base.Grow() line invokes the version of
Grow defined in the base class (
Tree), thus saving us from having to reinvent the wheel. This is especially handy when you are creating a class that extends a class for which you do not have the source code (e.g. a class provided by another developer, or built into .NET). By simply calling the base class versions of the methods you are overriding, you can ensure that your objects aren’t losing any functionality.
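As a quick check that the replacement method does everything we expect, here is a sketch (a hypothetical console program; note that the article’s listing omits the new modifier, which only causes a compiler warning, while this sketch includes it so the code compiles cleanly):

```csharp
using System;

public class Tree {
    public int height = 0;
    public void Grow() { height = height + 1; }
}

public class CoconutTree : Tree {
    public int numNuts = 0;
    public void GrowNut() { numNuts = numNuts + 1; }

    // 'new' tells the compiler we deliberately replace the inherited Grow
    public new void Grow() {
        base.Grow();  // grow in height, as defined in Tree
        GrowNut();    // and grow a nut as well
    }
}

public class Demo {
    public static void Main() {
        CoconutTree ct = new CoconutTree();
        ct.Grow();
        Console.WriteLine(ct.height);  // prints 1
        Console.WriteLine(ct.numNuts); // prints 1
    }
}
```

One call to Grow now updates both values, without CoconutTree having to duplicate any of the height-tracking logic.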
Constructors may be overridden just like normal methods, but calling the version of the constructor in the base class is slightly different. Here’s a set of constructors for the
CoconutTree class, along with the new declaration of the
numNuts field without an initial value:
private int numNuts;
public CoconutTree() : base() {
numNuts = 0;
}
public CoconutTree(int height) : base(height) {
numNuts = 0;
}
public CoconutTree(int height, int numNuts) : base(height) {
if (numNuts < 0) this.numNuts = 0;
else this.numNuts = numNuts;
}
The first two constructors override their equivalents in the
Tree class, while the third is completely new. Notice that we call the constructor of the base class as
base(), but instead of putting this call inside the constructor body, we add it to the end of the constructor declaration, using the
: operator. All three of our constructors call a constructor in the base class to ensure that we are not losing any functionality.
Namespaces
For much of this article, we’ve designed classes to handle the logic for a Web site. Imagine that one of them was a class you’d written to produce buttons, and that you’d called it Button. The problem is that the class built into the .NET Framework for creating buttons on user interfaces is called (you guessed it)
Button. How can this conflict be resolved without having to go back through your code and change every reference to your
Button class?
Namespaces to the rescue! C# provides namespaces as a way of grouping together classes according to their purpose, the company that wrote them, or whatever other criteria you like. As long as you ensure that your
Button class is not in the same namespace as C#’s built-in
Button class, you can use both classes in your program without any conflicts arising.
By default, classes you create reside in the default namespace, an unnamed namespace where all classes that are not assigned namespaces go. For most of your programs it is safe to leave your classes in the default namespace. All of the .NET Framework’s built-in classes as well as most of the classes you will find available on the Internet and from other software vendors are grouped into namespaces, so you usually don’t have to worry about your classes’ names clashing with those of other classes in the default namespace.
You will want to group your classes into namespaces if you intend to reuse them in future projects (where new class names may clash with those you want to reuse), or if you want to distribute them for use by other developers (where their class names may clash with your own). To place your class in a namespace, you simply have to surround your class declaration with a namespace declaration. For example, classes that we develop at SitePoint.com are grouped in the
Sitepoint namespace by adding the following to our C# files:
namespace SitePoint {
// Class declaration(s) go here
}
If we were to place our
Tree class inside the
SitePoint namespace in this way, the class’ full name would become
SitePoint.Tree. You can also declare namespaces within namespaces to further organize your classes (e.g.
SitePoint.Web.Tree).
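The two forms of namespace nesting can be sketched like this (the class names here are hypothetical):

```csharp
// Dotted form: declares the class SitePoint.Web.Tree
namespace SitePoint.Web {
    public class Tree {
    }
}

// Equivalent nested form: declares the class SitePoint.Web.Palm
namespace SitePoint {
    namespace Web {
        public class Palm {
        }
    }
}
```

Both forms produce classes whose fully qualified names begin with SitePoint.Web; the dotted form is simply less typing.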
As it turns out, the
Button class built into the .NET Framework is actually in the
System.Windows.Forms namespace, which also contains all of the other classes for creating basic graphical user interfaces in .NET. Thus, the fully qualified name of .NET’s
Button class is
System.Windows.Forms.Button. To make use of this class without your program thinking that you’re referring to your own
Button class, you can use this full name instead. For example:
// Create a Windows Button
System.Windows.Forms.Button b = new System.Windows.Forms.Button();
In fact, C# requires that you use the full name of any class that is not in the same namespace as the current class!
But what if your program doesn’t have a
Button class to clash with the one built into .NET? Spelling out the full class name every time means a lot of extra typing. To save yourself this annoyance, you can import the
System.Windows.Forms namespace into the current namespace by putting a
using line at the top of your C# file (just inside the
namespace declaration(s), if any):
namespace SitePoint {
using System.Windows.Forms;
// Class declaration(s) go here
}
Once its namespace is imported, you can use the class by its short name (
Button) as if it were part of the same namespace as your class.
So if you put your
Tree class into a namespace called
SitePoint, any class not also declared to be in the
SitePoint namespace that needed to use it would either have to call it
SitePoint.Tree or import the
SitePoint namespace.
For code in an ASP.NET page (
.aspx file), the following namespaces are automatically imported:
System
System.Collections
System.Collections.Specialized
System.Configuration
System.IO
System.Text
System.Text.RegularExpressions
System.Web
System.Web.Caching
System.Web.Security
System.Web.SessionState
System.Web.UI
System.Web.UI.HtmlControls
System.Web.UI.WebControls
But if you wanted to access the newly-namespaced
SitePoint.Tree class in your
PlantTrees.aspx file, you’d either have to call it by its full name, or use an import directive.
Like a page directive, an import directive is a special tag that goes at the top of your file to provide special information about your ASP.NET page. Here’s what the import directive to use the SitePoint namespace would look like:
<%@ Import namespace="SitePoint" %>
Static Members
All class members (including methods, fields and properties) can be declared
static. Static members belong to the class instead of to objects of that class. Since static members aren’t that common in basic ASP.NET development, I won’t dwell on static members too heavily; but for the sake of completeness, let’s look at a simple example.
It might be useful to know the total number of trees that had been created in our program. To this end, we could create a
static field called
totalTrees in the
Tree class, and modify the constructor to increase its value by one every time a
Tree was created. Then, using a
static property called
TotalTrees, for which we would only define a get accessor (making it a read-only property), we could check the value at any time by checking the value of
Tree.TotalTrees.
Here’s the code for the modified
Tree class:
public class Tree {
private static int totalTrees = 0;
private int height;
public Tree() : this(0) {
}
public Tree(int height) {
if (height < 0) this.height = 0;
else this.height = height;
totalTrees = totalTrees + 1;
}
public static int TotalTrees {
get {
return totalTrees;
}
}
// ... rest of the code here ...
}
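With this modified class in place, the running total could be checked like so (a sketch; the Demo class and its Tree variables are hypothetical):

```csharp
using System;

public class Tree {
    private static int totalTrees = 0; // shared by all Trees
    private int height;                // one per Tree

    public Tree() : this(0) {
    }

    public Tree(int height) {
        if (height < 0) this.height = 0;
        else this.height = height;
        totalTrees = totalTrees + 1; // count every Tree ever created
    }

    public static int TotalTrees {
        get { return totalTrees; }
    }
}

public class Demo {
    public static void Main() {
        Tree a = new Tree();
        Tree b = new Tree(5);
        // The count is read from the class itself, not from an object
        Console.WriteLine(Tree.TotalTrees);
    }
}
```

Notice that TotalTrees is accessed as Tree.TotalTrees, with the class name before the dot rather than an object name.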
Static members are useful in two main situations:
- When you want to keep track of some information shared by all members of a class (as above).
- When it doesn’t make sense to have more than one instance of a property or method.
As I said, static members don’t tend to crop up in basic ASP.NET applications, but it’s useful to know about them so you’re not totally bamboozled if you see someone accessing a method in a class instead of an object!
Code-Behind Pages in ASP.NET
Okay, so I’ve waffled on for what must seem like forever, and all you’ve learned how to do is grow trees (and nuts!). "When’s he going to get to the point?!" you must be asking. Either that, or you already know the basic concepts of OOP and you’ve skipped directly here for the meat.
Well here’s the payoff:
Everything in ASP.NET is an object, including the pages themselves. Every
.aspx page you write is automatically compiled into a class that inherits from the
System.Web.UI.Page class, which is built into the .NET Framework.
Everything that ASP.NET pages do automatically is handled by that class (e.g. calling the
Page_Load function — which is actually a method — before the page is displayed). So creating a site of ASP.NET pages is actually a process of creating subclasses of
System.Web.UI.Page. This is illustrated in Fig. 6.
System.Web.UI.Page by default
Okay, I can see you’re getting impatient again. How about I tell you what this gives you (besides a headache)?
The object-oriented nature of ASP.NET lets you achieve complete separation of design code (HTML) and server-side code. The technique for doing this is called Code-Behind. The idea is that you create a subclass of
System.Web.UI.Page that contains all your server-side code, and then make your
.aspx page (which contains all your design code) inherit from that class instead of from
System.Web.UI.Page (see Fig. 7).
Not convinced? Let’s look at an example.
Here’s the code for the last example we saw in ASP.NET Form Processing Basics:
<%@ Page Language="C#" %>
<html>
<head>
<title>My First ASP.NET Form</title>
<script runat="server">
protected void Page_Load(Object Source, EventArgs E) {
if (IsPostBack) {
NameForm.Visible = false;
NameLabel.Text = NameBox.Text;
NameLabel.Style.Add( "font-weight", "bold" );
}
TimeLabel.Text = DateTime.Now.ToString();
}
</script>
</head>
<body>
<form runat="server" id="NameForm">
<p>Enter your name:
<asp:textbox id="NameBox" runat="server" /></p>
<p><input type="submit" value="Submit" /></p>
</form>
<p><asp:label id="NameLabel" runat="server" /></p>
<p>The time is: <asp:label id="TimeLabel" runat="server" /></p>
</body>
</html>
Now, this is a relatively simple ASP.NET page, and yet the server-side code (the
Page_Load method) takes up about half the page. An HTML-and-JavaScript designer who had to tweak the design of a more typical ASP.NET page would suffer a coronary at the sight of the code in the file (many ASP developers had trouble keeping designers around for this very reason).
In addition, if a Web development team composed of server-side developers and Web designers had to work on a site collaboratively, the tug-of-war involved in letting team members of both types work on a page at the same time would be next to unmanageable!
Let’s create a class called
HelloTherePage that contains all the server-side code:
using System;
using System.Web;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;
public class HelloTherePage : Page {
protected HtmlForm NameForm;
protected Label NameLabel;
protected Label TimeLabel;
protected TextBox NameBox;
protected void Page_Load(Object Source, EventArgs E) {
if (IsPostBack) {
NameForm.Visible = false;
NameLabel.Text = NameBox.Text;
NameLabel.Style.Add( "font-weight", "bold" );
}
TimeLabel.Text = DateTime.Now.ToString();
}
}
First note the block of
using commands at the top of the file. Since we’re not inside a
.aspx file anymore, we don’t have the benefit of all the automatic namespace imports we had before; therefore, we must explicitly import the namespaces we want to use. The five namespaces I’ve used here are the most common for ASP.NET programs.
The
Page_Load method is defined exactly as it was in the
.aspx page above. Note that it is declared
protected, since the only class that needs to call this class is the subclass we’re going to create with our
.aspx file (recall that
protected means a member is accessible only by the class itself or any of its subclasses).
Also declared in this class are four
protected fields:
NameForm,
NameLabel,
TimeLabel, and
NameBox. These are the
ID‘s of the page elements that
Page_Load accesses in the page. In the
.aspx subclass, these fields will be overridden by the actual elements (e.g.
<asp:label overrides
NameLabel), but they must be declared in this class so that C# doesn’t get confused when we refer to them in
Page_Load.
Since
NameLabel and
TimeLabel represent
<asp:label> tags in the
.aspx page, they should be objects of class
System.Web.UI.WebControls.Label (which we can abbreviate as
Label, since our file is
using the
System.Web.UI.WebControls namespace).
NameBox is a
<asp:textbox> and is therefore a
System.Web.UI.WebControls.TextBox.
NameForm is an HTML
<form> tag, which is represented by an object of class
System.Web.UI.HtmlControls.HtmlForm. In general, all ASP.NET tags (
<asp:tagName>) have their corresponding classes in the
System.Web.UI.WebControls namespace, while HTML tag classes are in
System.Web.UI.HtmlControls, and have names of the form
HtmlTagName.
Save this file as
HelloThere.cs in the same directory as the
.aspx page (don’t worry — IIS is smart enough not to allow Web browsers to view C# and other .NET source code files in Web-accessible directories). Now let’s re-write
HelloThere.aspx to use the Code-Behind file we have just written:
<%@ Page Inherits="HelloTherePage" src="HelloThere.cs" %>
<html>
<head>
<title>My First ASP.NET Form</title>
<>
The changes are dramatic, but very simple! I’ve simply removed the
<script> tag that contained the
Page_Load method, which is now defined in
HelloTherePage, the base class of this page defined in the Code-Behind file, and added two new attributes to the
Page directive on the first line. I also removed the
Language attribute from this directive, since there is no longer any server-side code in this file for which a language would need to be specified.
The
Inherits attribute is set to the name of the class from which this page should inherit. By default, this is
System.Web.UI.Page. The
src attribute tells the ASP.NET page compiler where to find the source code for the class specified. ASP.NET uses this attribute to intelligently compile the Code-Behind file for you on the fly and put the resulting
.dll file in the
bin directory of the Web application, so that this page can find it. You could of course do this manually, but it’s much easier to let ASP.NET do the work of recompiling the Code-Behind file whenever you make changes automatically.
With
HelloThere.cs and
HelloThere.aspx in the same directory on your Web server, you should be able to load
HelloThere.aspx as usual and find that it behaves exactly as it did before you split the server-side code into a separate file.
Summary
Well! It’s been a long trip, but in this article I’ve taken you on a tour of all the important object-oriented features of the C# language. What’s more, I’ve shown you how they apply to ASP.NET Web development.
In particular, I’ve demonstrated how to use a class to handle some of the logic in a simple page that tracks the growth of trees. In a practical system, each Tree class might for instance fetch its information from a database of national forest growth. The point is that the ASP.NET page doesn’t need to know how to track the growth of trees, because all that functionality is bottled up in a handy class that we can reuse in other projects if needed.
By far the most exciting application of object oriented programming concepts to everyday ASP.NET development, however, is the fact that everything in ASP.NET is an object. By defining a class in a Code-Behind file for each of our pages, we can totally separate the server-side logic of our pages from their design, thus letting our Web designers sleep at night. In fact, you could even make two or more different .aspx pages that inherit from the same Code-Behind file in order to experiment with different page designs (perhaps a WAP version of your site for mobile devices?) that share the same server-side logic.
In the next article in this series, I’ll introduce you to event-driven programming. It turns out that .NET classes (including most of those that make up ASP.NET) can send out events that your code can listen for and react to. We’ll see how these events are used to achieve some common tasks in ASP.NET development.
See you next time! | https://www.sitepoint.com/c-asp-net-developers/ | CC-MAIN-2016-44 | refinedweb | 7,743 | 63.19 |
1.1 Glossary
This document uses the following terms:
client: A computer on which the remote procedure call (RPC) client is executing.
connection: Firewall rules are specified to apply to connections. Every packet is associated with a connection based on TCP, UDP, or IP endpoint parameters; see [IANAPORT].
connection blocks: A pre-allocated chunk of memory that is used to store a single connection request.
Distributed File System (DFS): A file system that logically groups physical shared folders located on different servers by transparently connecting them to one or more hierarchical namespaces. DFS also provides fault-tolerance and load-sharing capabilities.
Distributed File System (DFS) link: A component in a DFS path that lies below the DFS root and maps to one or more DFS link targets. Also interchangeably used to refer to a DFS path that contains the DFS link..).
Interface Definition Language (IDL): The International Standards Organization (ISO) standard language for specifying the interface for remote procedure calls. For more information, see [C706] section 4.
Internet host name: The name of a host as defined in [RFC1123] section 2.1, with the extensions described in [MS-HNDS].
mailslot: A mechanism for one-way interprocess communications (IPC). For more information, see [MSLOT] and [MS-MAIL].
Microsoft Interface Definition Language (MIDL): The Microsoft implementation and extension of the OSF-DCE Interface Definition Language (IDL). MIDL can also mean the Interface Definition Language (IDL) compiler provided by Microsoft. For more information, see [MS-RPCE].
named pipe: A named, one-way, or duplex pipe for communication between a pipe server and one or more pipe clients.
NetBIOS host name: The NetBIOS name of a host (as described in [RFC1001] section 14 and [RFC1002] section 4), with the extensions described in [MS-NBTE].
Quality of Service (QoS): A set of technologies that do network traffic manipulation, such as packet marking and reshaping.].
scoped share: A share that is only available to a client if accessed through a specific DNS or NetBIOS name. Scoped shares can make a single server appear to be multiple, distinct servers by providing access to a different set of shares based on the name the client uses to access the server.
server: A computer on which the remote procedure call (RPC) server is executing..
site: A group of related webpages that is hosted by a server on the World Wide Web or an intranet. Each website has its own entry points, metadata, administration settings, and workflows. Also referred to as web site.
standalone DFS implementation: A Distributed File System (DFS) namespace whose configuration information is stored locally in the registry of the root server.
sticky share: A share that is available after a machine restarts..
work item: A buffer that receives a user request, which is held by the Server Message Block (SMB) server while it is being processed.
MAY, SHOULD, MUST, SHOULD NOT, MUST NOT: These terms (in all caps) are used as defined in [RFC2119]. All statements of optional behavior use either MAY, SHOULD, or SHOULD NOT. | https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-srvs/1709f6a7-efb8-4ded-b7ae-5cee9ee36320 | CC-MAIN-2022-33 | refinedweb | 500 | 55.54 |
The Samba-Bugzilla – Bug 8463
Buffer-overflow in dirsort plugin when directory contents change at wrong time.
Last modified: 2017-01-03 07:14:25 UTC
Created attachment 6901 [details]
Prevent buffer overflow when directory contents change
The dirsort vfs plugin opens the directory and reads all entries to count
them and figure out how much data to allocate; it then uses rewinddir()
and reads the entries again, this time copying them into the allocated
buffer. The problem is that the second time through you're not guaranteed
to get the same list of entries - if a new file/directory was created in
the mean time then readdir() will return that new entry too and the code
will attempt to write more into the buffer than it allocated space for.
The following little test demonstrates this behaviour:
-------------------------------------------------------------
#include <stdio.h>
#include <dirent.h>
#include <unistd.h>
#include <sys/stat.h>
#define DIR_PATH "/tmp/rewinddir_test"
#define NEW_FILE (DIR_PATH "/foobar")
int main() {
DIR *dir;
int cnt;
/* set up test directory */
mkdir(DIR_PATH, 0755);
dir = opendir(DIR_PATH);
/* first read of directory */
cnt = 0;
while (readdir(dir))
cnt++;
printf("first pass: num-files=%d\n", cnt);
/* create new file and rewind */
fclose(fopen(NEW_FILE, "a"));
rewinddir(dir);
/* second read of directory */
cnt = 0;
while (readdir(dir))
cnt++;
printf("second pass: num-files=%d\n", cnt);
/* clean up */
closedir(dir);
unlink(NEW_FILE);
rmdir(DIR_PATH);
return 0;
}
-------------------------------------------------------------
The attached patch fixes this by breaking out of the loop if we would
write too much into the buffer.
Fixed by commit cdcb6319127883d724508da3f6140a1e2aca75af | https://bugzilla.samba.org/show_bug.cgi?id=8463 | CC-MAIN-2017-04 | refinedweb | 255 | 54.76 |
Back from a nice long weekend, although I spent most of it sick with a cold. I find this increasingly the way with me: I fend off illness for months at a time (probably through stress, truth be told) but then I get a few days off and wham. A shame, as we had a huge dump of snow over the weekend... we get white Christmases here every five years or so, but it's really uncommon to get a white Easter.
I had a very interesting question come in by email from 冷血儿, who wanted to get the technique shown in this post working in his F# application.
Here's the F# code I managed to put together after consulting hubFS, in particular:
#light
namespace MyNamespace
open Autodesk.AutoCAD.Runtime
open Autodesk.AutoCAD.ApplicationServices
type InitTest() =
class
let ed =
Application.DocumentManager.MdiActiveDocument.Editor
interface IExtensionApplication with
member x.Initialize() =
ed.WriteMessage
("\nInitializing - do something useful.")
member x.Terminate() =
printfn "\nCleaning up..."
end
end
module MyApplication =
let ed =
Application.DocumentManager.MdiActiveDocument.Editor
[<CommandMethod("TST")>]
let f () =
ed.WriteMessage("\nThis is the TST command.")
[<assembly: ExtensionApplication(type InitTest)>]
do
ed.WriteMessage("\nModule do")
Here's what happens when we load our module and run the TST command:
Command: NETLOAD
Module do
Initializing - do something useful.
Command: TST
This is the TST command. | http://through-the-interface.typepad.com/through_the_interface/2008/03/initialization.html | CC-MAIN-2015-27 | refinedweb | 221 | 51.14 |
Mirror an Image to an External Registry
Requirements
In order to mirror an image built by CI to Quay, that image must be promoted.
If the image is promoted into a namespace for which no other image mirroring is set up yet, some RBAC needs to be configured:
- Create a folder in clusters/app.ci/registry-access with the name of the namespace, containing the manifests of the namespace and the RBAC regarding to that namespace. Provide an
OWNERSfile to allow your team to make changes to those manifests.
- The admin of the namespace should allow the SA in the mirroring job defined below to access the images with
oc image mirror, like this, which makes the images open to the public:
Mirroring Images
Periodically,
oc image mirror is used to push a configured set of images to Quay repositories. A number of Quay
repositories already have mirroring pipelines configured; each directory
here corresponds to a repository.
These directories contain mapping files that define tags on images in the target repository. New images may be submitted
to mirror to existing repositories, or new ones.
Existing Repositories
Submit a pull request adding the image source and target to the appropriate mirroring file. For instance, adding a new
image tag to the
quay.io/openshift:4.6 image would require a new entry in the
core-services/image-mirroring/openshift/mapping_origin_4_6
file. Adding a new image entirely would require a new
mapping_origin_* file.
WarningImages that are mirrored to Quay for the first time are private by default and need to be made public by an administrator of the Quay organization. For
openshiftorganization, contact Clayton Coleman about making images public.
Configuring Mirroring for New Repository
Submit a PR adding a new subdirectory
here, with at least a single mapping file
and an
OWNERS file (so that you can maintain your mappings). The mapping files
should follow the
mapping_$name$anything naming convention to avoid conflicts
when put into a
ConfigMap.
Additionally, you will need to add a new Periodic job
here. You can use
any of the jobs as sample and simply replace all occurences of the value found in the
ci.openshift.io/area label
(e.g.
knative) with the name of your repository (which should be the same as the name of the directory you created).
In oder to push images to an external repository, credentials are needed. Use
docker or
podman to create a docker config
file as described here
and then use our self-service portal to add it to the clusters,
using the following keys in Vault:
Then, the mirroring jobs can mount the secret as a volume: | https://docs.ci.openshift.org/docs/how-tos/mirroring-to-quay/ | CC-MAIN-2021-43 | refinedweb | 440 | 52.09 |
char charAt (int index) : This method returns the character in the string buffer at the specified index position. The index of the first character is 0, the second character is 1 and that of the last character is one less than the string buffer length. The index argument specified must be greater than or equal to 0 and less than the length of the string buffer. For example, strl.charAt(1) will return character e i.e. character at the index position 1 on the StringBuffer str1.
public class StringBuffercharAt
{
public static void main(String[] args)
{
StringBuffer sl = new StringBuffer("Hello Java");
System.out.println("sl.charAt(1) : " + sl.charAt(1));
}
}
| https://ecomputernotes.com/java/jarray/stringbuffer-charat | CC-MAIN-2022-05 | refinedweb | 111 | 65.62 |
import random print("\nRandom Food Generator") firstName = input("\nwhat is your first name: ") lastName = input("\nwhat is your last name: ") print("\nHello, ", firstName, lastName) hungerStatus = input("are you hungry?: Y/N ") print(hungerStatus) if hungerStatus == "y" or hungerStatus == "Y": print("\n",firstName, lastName,"What would you like to eat? ",) print('\npizza', '\nfruit salad', '\ntuna', '\ncurrant slice') foodOptions = input["pizza", "fruit salad", "tuna", "currant slice"] choiceOfFood = input("") else: print("\nYou are not hungry") if choiceOfFood == foodOptions: print(choiceOfFood)
Have you looked at that line? Is that what you intended to write with input? What is your intention here, might be worth a view of How to ask good questions (and get good answers)
Yes that is what i intended to write with input. It worked for over a month but now it gets a typeError at line 17
I’m not sure there was any version of Python where that would work. The error is there for a reason, you’re doing something that can’t be done, you cannot use
input like that; it’s intended to be used as a callable, e.g.
input() or
input("prompt text").
What is
foodOptions supposed to be here, what is your intention?
foodOptions is a list. Thank you, ive found my error, you were correct. i must of added that input without realising at some point
1 Like | https://discuss.codecademy.com/t/having-a-python-typeerror-issue-with-this-short-script-ive-wrote-on-line-11/606398 | CC-MAIN-2022-33 | refinedweb | 226 | 70.94 |
Question:-
Sam started a new business with an investment of Rs.1, 00,000. During the first year he got a profit of x%, whereas in the second year he lost a certain amount(say Rs. Y). Help him to find out the profit/loss (in terms of initial investment) in percentage at the end of the second year.
Sample input 1:
Enter the profit percentage
20
Enter the amount lost in Rs.
50000
Sample output 1:
After two years he gets a loss of 30%.
Explanation :
investment = 100000
profit = 20% of 100000 = 20000
Loss=50000
If loss > profit, there is a loss
If loss < profit, there is a profit
If loss = profit, no gain no loss
Here loss > profit So loss
To calculate loss% –
loss amount = loss – profit = 30000
Loss % =( loss amount / investment) * 100
That is 30000 * 100 / 100000 = 30%
Sample input 2:
Enter the profit percentage
20
Enter the amount lost in Rs.
20000
Sample output 2:
After two years he gets no loss or no gain.
Code:-
import java.util.*; public class Main { public static void main (String[] args) { int investment=100000, amount=0; Scanner sc =new Scanner(System.in); System.out.println("Enter the profit percentage"); int profit_percentage_1sty=sc.nextInt(); System.out.println("Enter the amount lost in Rs."); int lost_amount_2ndy=sc.nextInt(); int profit_amount_1sty=(profit_percentage_1sty*investment)/100;//BADMAS amount=(investment+profit_amount_1sty-lost_amount_2ndy); if(amount>investment) { int profit=((amount-investment)*100)/investment;//BADMAS System.out.println("After two years he gets a profit of "+profit+"%"); } else if(amount==investment) { System.out.println("After two years he gets no loss or no gain"); } else if(amount<investment) { int loss=((investment-amount)*100)/investment;//remember BADMAS while writing arithmetic expressions System.out.println("After two years he gets a loss of "+loss+"%"); } } } | https://quizforexam.com/java-find-profit-or-loss/ | CC-MAIN-2021-21 | refinedweb | 294 | 55.13 |
--- a +++ b/src/dvdread/nav_read.h @@ -0,0 +1,53 @@ +/* + * Copyright (C) 2000, 2001, 2002 H책kan Hjort <d95hjort@dtek.chalmers.se>. + * + * This file is part of libdvdread. + * + * libdvdread is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * libdvddvdread; if not, write to the Free Software Foundation, Inc., + * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. + */ + +#ifndef LIBDVDREAD_NAV_READ_H +#define LIBDVDREAD_NAV_READ_H + +#include "nav_types.h" + +/** + * Parsing of NAV data, PCI and DSI parts. + */ + +#ifdef __cplusplus +extern "C" { +#endif + +/** + * Reads the PCI packet data pointed to into th pci struct. + * + * @param pci Pointer to the PCI data structure to be filled in. + * @param bufffer Pointer to the buffer of the on disc PCI data. + */ +void navRead_PCI(pci_t *, unsigned char *); + +/** + * Reads the DSI packet data pointed to into dsi struct. + * + * @param dsi Pointer to the DSI data structure to be filled in. + * @param bufffer Pointer to the buffer of the on disc DSI data. + */ +void navRead_DSI(dsi_t *, unsigned char *); + +#ifdef __cplusplus +}; +#endif +#endif /* LIBDVDREAD_NAV_READ_H */ | https://sourceforge.net/p/xbmc/libdvdread/ci/9c138e7c973b61a4ccef94e1576b480679d919fb/tree/src/dvdread/nav_read.h?diff=79a329e75d7733b75c72f9acfb3ced0e287e237a | CC-MAIN-2017-26 | refinedweb | 192 | 58.58 |
facebook ◦ twitter ◦
View blog authority
View blog top tags!
Try the following site.
It helped me a lot. It has a regular expression tester and you can debug your expression till it works just right.
Good luck!
__CookieMonster fails :)
use: name.TrimStart('_');
I don't understand why the removing of periods is different then the other character replacements that are above it.
Surely, a regex to remove those 'unwanted' chars would be far better, even if you can't do the entire thing.
Regex.Replace(name, Regex.Escape(@"#%&*:<>?\/{|}~+-,()."), string.Empty);
With a little more elbow grease, the whole thing apart from the space replacement and the length trimming could be put into a regex.
This should work for you
([$#%&*:<>?\/{}|~+-,().-\])
@Marc: Thanks, I knew there was an easier way.
@hexy: Yes, it could be and your pattern does replace all those calls to .Replace. However if you remove the while(cleanName.Contains(".")) which strips out all periods it fails. There probably needs a "match all '.'" regex thingy here. As for the underline, I see the double underscore fails, but the rule is to only check it at the beginning of the name and not throughout. Again, a regex here could probably work. It's just beyond my meager skills.
I'm also a fan of System.Web.HttpUtility.UrlEncode() and/or HtmlEncode() - use at the end to catch any odd characters you've missed (e.g. ñ)?
Anyway, neat.
There are couple of problems with your code.
1. You should do trim before length check, not after. This way, you can fit more characters. Try following input for your test.
string name = " testInput".PadRight(60, 'C');
string expected = "testInput".PadRight(50, 'C');
Assert.AreEqual(expected, Helper.RemoveInvalidCharacters(name));
By trimming first, you can fit in more non-space characters in your name.
2. The length limit should be enforced at the end. Because you are converting space to %20 after checking for length, your function may return a string with length more than 50.
Try following input for your test.
string invalidName = "Cookie Monster".PadRight(50, 'C');
Assert.AreEqual(50, Helper.RemoveInvalidCharacters(invalidName).Length);
3. Why the special treatment to remove the period (.)?
4. The following code removes a valid character from the index 0.
if(cleanName.Length > 50)
cleanName = cleanName.Substring(1, 50);
You should do
cleanName = cleanName.Substring(0, 50);
I have re-written the code to use Regex and covered the edge cases.
Hope that helps,
Jd
<code>
using System;
using System.Text.RegularExpressions;
namespace Misc
public class Helper
{
public static string RemoveInvalidCharacters(string name)
{
if(name == null)
throw new ArgumentNullException("name");
//We should trim the input before we do length check.
//This way, we could be able to fit in more characters in
//our output if there are spaces in the beginning.
name = name.Trim();
string[] invalidCharacters =
new string[]
{
"#", "%", "&", "*", ":", "<", ">", "?", "\\", "/", "{", "}", "~", "+", "-", ",", "(", ")", "|",
"."
};
Regex cleanUpRegex = GetCharacterRemovalRegex(invalidCharacters);
string cleanName = cleanUpRegex.Replace(name, string.Empty);
cleanName = cleanName.Replace(" ", "%20");
if (cleanName.StartsWith("_"))
cleanName = cleanName.Substring(1);
if (cleanName.Length > 50)
cleanName = cleanName.Substring(0, 50);
return cleanName;
}
private static Regex GetCharacterRemovalRegex(string[] invalidCharacters)
if(invalidCharacters == null)
throw new ArgumentNullException("invalidCharacters");
if(invalidCharacters.Length == 0)
throw new ArgumentException("invalidCharacters can not be empty.", "invalidCharacters");
string[] escapedCharacters = new string[invalidCharacters.Length];
int index = 0;
foreach (string input in invalidCharacters)
{
escapedCharacters[index] = Regex.Escape(input);
index++;
}
return new Regex(string.Join("|", escapedCharacters));
}
</code>
using MbUnit.Framework;
using Misc;
namespace TestMisc
[TestFixture]
public class HelperTest
[Test]
[RowTest]
[Row("_Monster", "Monster")]
[Row("...period...removal", "periodremoval")]
[Row(" test ", "test")]
[Row("Cookie Monster", "Cookie%20Monster")]
[Row("...", "")]
public void CharacterRemovalTest(string name, string expected)
Assert.AreEqual(expected, Helper.RemoveInvalidCharacters(name));
public void SpecialCharacterRemovalTest()
string invalidName = @"#%&*:<>?\/{|}~+-,().";
Assert.AreEqual(string.Empty, Helper.RemoveInvalidCharacters(invalidName));
public void NameLengthTest1()
string invalidName = "CookieMonster".PadRight(51, 'C');
Assert.AreEqual(50, Helper.RemoveInvalidCharacters(invalidName).Length);
public void NameLengthTest2()
string invalidName = "Cookie Monster".PadRight(50, 'C');
public void NameLengthTest3()
string name = " testInput".PadRight(60, 'C');
string expected = "testInput".PadRight(50, 'C');
@Peter: The url encoding isn't the issue. For example create a document library in SharePoint using the API, Web Service, or from the UI called "my-library" and the created library will have a display name of "my-library" but a url addressable name of "mylibrary".
@JD: Thanks for the RegEx and edge cases. The 50 character limit before the "%20" replacement is done because of a SharePoint quirk, but it makes more sense to trim it afterwards in any case.
SPEncode.IsLegalCharInUrl(c)
@Red: Thanks and that could be useful, however it ties me to the SharePoint library and frankly, SPEncode and all of it's static methods are highly untestable IMHO. I'll look into this though..
I was working on a utility that needed to clean file an folder names for SharePoint when I ran across your post. You inspired me to attempt a regular expression solution.
I decided to write a blog post about it since I couldn't find any code examples:
simplyaprogrammer.com/.../importing-files-into-sharepoint.html | http://weblogs.asp.net/bsimser/archive/2008/04/25/cleaning-invalid-characters-from-sharepoint.aspx | crawl-002 | refinedweb | 830 | 52.97 |
xdf_write man page
xdf_write — Write samples to a xDF file
Synopsis
#include <xdfio.h>
ssize_t xdf_write(struct xdf* xdf, size_t ns, ...);
Description
xdf_write() writes ns samples to the xDF file referenced by xdf. This file should have been opened with mode XDF_WRITE and xdf_prepare_arrays(3) should have been successfully called on it. xdf_write() will fail otherwise).
The data to be added should be contained in arrays specified by pointers provided in the variable list of arguments of the function. The function expects the same number of arrays as specified by previous call to xdf_define_arrays(3). The internal organisation of the data in the arrays should have been specified previously with calls to xdf_set_chconf(3).
In addition, it is important to note that none of the arrays should overlap.
Return Value
The function returns the number of the samples successfully added to the xDF file in case of success. Otherwise -1 is returned and errno is set appropriately.
Errors
- EINVAL
xdf is NULL
- EPERM
No successfull call to xdf_prepare_transfer(3) have been done on xdf or it has been opened using the mode XDF_READ.
-IO
A low-level I/O error occurred while modifying the inode.
- ENOSPC
The device containing the xDF file has no room for the data.
- ESTALE
Stale file handle. This error can occur for NFS and for other file systems
Performance Consideration
By design of the library, a call to xdf_write() is "almost" ensured to be executed in a linear time, i.e. given a fixed configuration of an xDF file, for the same number of samples to be passed, a call xdf_write will almost take always the same time to complete. This time increases linearly with the number of samples. This insurance is particularly useful for realtime processing of data, since storing the data will impact the main loop in a predictible way.
This is achieved by double buffering the data for writing. A front and a back buffer are available: the front buffer is filled with the incoming data, and swapped with the back buffer when full. This swap signals a background thread to convert, reorganise, scale and save to the disk the data contained in the full buffer making it afterwards available for the next swap.
This approach ensures a linear calltime of xdf_write() providing that I/O subsystem is not saturated neither all processing units (cores or processors), i.e. the application is neither I/O bound nor CPU bound.
Data Safety
The library makes sure that data written to xDF files are safely stored on stable storage on a regular basis but because of double buffering, there is a risk to loose data in case of problem. However, the design of the xdf_write() ensures that if a problem occurs (no more disk space, power supply cut), at most two records of data plus the size of the chunks of data supplied to the function will be lost.
As an example, assuming you record a xDF file at 256Hz using records of 256 samples and you feed xdf_write() with chunks of 8 samples, you are ensured to receive notification of failure after at most 520 samples corresponding to a lose of at most a little more than 2s of data in case of problems.
Example
/* Assume xdf references a xDF file opened for writing whose channels source their data in 2 arrays of float whose strides are the length of respectively 4 and 6 float values, i.e. 16 and 24 bytes (in most platforms)*/ #define NS 3 float array1[NS][4], array2[NS][6]; unsigned int strides = {4*sizeof(float), 6*sizeof(float)}; unsigned int i; xdf_define_arrays(xdf, 2, strides); if (xdf_prepare_transfer(xdf)) return 1; for (i=0; i<45; i+=NS) { /* Update the values contained in array1 and array2*/ ... /* Write the values to the file */ if (xdf_write(xdf, NS, array1, array2)) return 1; } xdf_close(xdf);
See Also
xdf_set_chconf(3), xdf_define_arrays(3), xdf_prepare_transfer(3)
Referenced By
xdf_define_arrays(3), xdf_prepare_transfer(3). | https://www.mankier.com/3/xdf_write | CC-MAIN-2017-26 | refinedweb | 658 | 58.82 |
Introduction
After we wrote formulas for Excel, the values would be displayed in specified cells. It is possible that the formulas are wrongly used or lost because there are too many formulas. Therefore, sometimes, we need to read the formula for one cell to ensure if the formula is right. Well then, how to read Excel formula?
Read Formula in Microsoft Excel
Through Formulas tab in Excel 2007, we can find one button named Show Formulas. Select the cells which we want to read formulas, and then click the button. After that, we can see the formulas instead of values.
Read Excel Formulas via Spire.XLS
Spire.XLS presents you an easy way to read formula in the worksheet. You can get the formula through the value you provide, while you should specify where the value is. In the demo, we load a workbook from file named "ReadFormulaSample.xls" which has a formula written in the sheet["B5"], we can read the formula through its value in sheet["C5"]. In this example, in order to view, we read the formula to the sheet["D5"].
The following code displays the method to read formulas for cells with C#/VB.NET:
using Spire.Xls; namespace ReadFormula { class Program { static void Main(string[] args) { //Create a new workbook Workbook workbook = new Workbook(); //Load a workbook from file workbook.LoadFromFile("ReadFormulaSample.xls"); //Initialize the worksheet Worksheet sheet = workbook.Worksheets[0]; //Read the formula sheet.Range["D5"].Text = sheet.Range["C5"].Formula; //Save the file workbook.SaveToFile("sample.xls"); //Launch the file System.Diagnostics.Process.Start("Sample.xls"); } }
Imports Spire.Xls

Module Module1
    Sub Main()
        'Create a new workbook
        Dim workbook As New Workbook()
        'Load a workbook from file
        workbook.LoadFromFile("ReadFormulaSample.xls")
        'Initialize the worksheet
        Dim sheet As Worksheet = workbook.Worksheets(0)
        'Read the formula
        sheet.Range("D5").Text = sheet.Range("C5").Formula
        'Save the workbook
        workbook.SaveToFile("Sample.xls")
        'Launch the Excel file
        System.Diagnostics.Process.Start("Sample.xls")
    End Sub
End Module
After running the demo, you will see the formula appear in the cell of the worksheet you specified:
Chapter 11: Creating Dynamic Ribbon Customizations
Previous Part: Chapter 11: Creating Dynamic Ribbon Customizations (Part 1 of 2)
Contents
In addition to standard types of controls, such as buttons, edit boxes, and drop-downs, the Ribbon provides several new types of controls that enable you to create rich experiences for your users.
Dialog Box Launcher
You may have noticed that at the bottom of certain groups, there is a very small button in the corner. This button is contained in a control called a dialog box launcher. This control is typically used to present more options for the group than can be displayed in the Ribbon. For example, let's say that your application provides options to its users. If you have many options, it might not be feasible to put all of them into the Ribbon for fear of cluttering it. Instead, you might include only the most common options in the customization and expose the rest on an Access form. The dialog box launcher lets you create this button at the bottom of a group, as shown in the following XML. (The example shows the entire group for context. Notice that this customization requires the OnOpenForm callback previously defined.)
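A hedged sketch of what that XML might look like; the dialogBoxLauncher node wraps a button whose tag names a hypothetical options form, and the IDs and labels are illustrative:

```xml
<group id="grpOptions" label="Options">
  <button id="btnCommonOption" label="Common Option" size="large"
          imageMso="PropertySheet"/>
  <dialogBoxLauncher>
    <button id="btnMoreOptions" screentip="More Options"
            tag="frmOptions" onAction="OnOpenForm"/>
  </dialogBoxLauncher>
</group>
```

The button inside the dialogBoxLauncher node is not displayed in the group itself; it renders as the small corner button.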
Gallery
Use the gallery node in the XML for the ribbon customization to create a gallery. Galleries are used to display items that can be arranged in a grid-type layout. Items in the gallery can display images, text, or both images and text. Galleries can also include one or more buttons that appear at the bottom of the gallery.
Let's say that you were building a custom filter for a report and would like to include the months of the year to filter by date. To prevent users from scrolling through a list of months or from selecting from a long list, you may split up the months by quarters, as shown in Figure 11-18.
The XML for the gallery, as shown in the following, includes 12 item nodes and a button node that the user can use to change their regional settings. The following layout for the gallery is three columns by four rows.
<gallery id="galMonthsEng" label="Months (English)" columns="3" rows="4">
  <item id="galMonth1" label="January"/>
  <item id="galMonth2" label="February"/>
  <item id="galMonth3" label="March"/>
  <item id="galMonth4" label="April"/>
  <item id="galMonth5" label="May"/>
  <item id="galMonth6" label="June"/>
  <item id="galMonth7" label="July"/>
  <item id="galMonth8" label="August"/>
  <item id="galMonth9" label="September"/>
  <item id="galMonth10" label="October"/>
  <item id="galMonth11" label="November"/>
  <item id="galMonth12" label="December"/>
  <button id="btnGal1" label="Regional Settings" imageMso="ShowTimeZones"
          onAction="ShowRegionalSettings"/>
</gallery>
Split Button
Split buttons are controls that contain a button and a menu and are created using the splitButton node. The button inside the split button is displayed in the Ribbon, and as such, is used to set the label attribute for a split button. The splitButton node itself does not define the label attribute. As with other buttons, the button node inside a split button has an onAction attribute that you can handle to receive an event from the Ribbon. The menu items appear as additional choices beneath the button. We take a closer look at split buttons in the section "Creating a Split Button That Sticks."
Dynamic Menu
As the name suggests, a dynamic menu is a menu that is filled at runtime. Use the dynamicMenu node to create a dynamic menu. To fill the content, you must provide a getContent callback for the dynamic menu. This callback is required. Dynamic menus are useful for scenarios where users can contribute to the content of the application. We look at an example for using the dynamicMenu control in a few moments.
One cool thing about the Ribbon is that it is highly graphical in nature. We think this is cool not just because of the nice graphics, but because the graphics provide some really great opportunities for you and your users. Applications are easier to use because users have graphics as a guide. Naturally, you don't have to use graphics in your applications, and there are likely to be many applications where they are not appropriate. However, the addition of rich graphics enables new scenarios that may not have been possible in the past.
For the remainder of this chapter, we look at specific scenarios for using ribbon customizations in your applications. The examples for the scenarios are available for download and we've included the name of the sample database at the top of the section.
Images Included with Office
As cool as we think graphics are, we have to admit that we're graphically challenged. We're far more comfortable writing code than drawing bitmaps or icons. Luckily for us (and for you), there are many images included with Office that can be used in Access applications. Attributes in customizations that end with Mso are items that are included with Office. Using the imageMso attribute, you can specify the name of an image included with Office. As mentioned earlier in the section "Development Tips," the easiest way to find control or image names is to use the Options dialog box for a given application.
You are not limited to using images from Access. In fact, most of our applications that contain ribbon customizations tend to use images from Word or Outlook! The following XML is an example of using images built into Office. (Figure 11-19 shows the result of this customization.)
<group id="grpImageMso" label="Examples: imageMso attribute">
  <button id="btnWeather" imageMso="PictureBrightnessGallery"
          label="Weather" size="large"/>
  <button id="btnNotes" imageMso="ExchangeFolder"
          label="Notes" size="large"/>
  <button id="btnSearch" imageMso="ZoomPrintPreviewExcel"
          label="Search" size="large"/>
  <button id="btnHelp" imageMso="TentativeAcceptInvitation"
          label="Help" size="large"/>
  <separator id="s1"/>
  <button id="btnUsers" imageMso="DistributionListSelectMembers"
          label="Manage Users..." size="normal"/>
  <button id="btnSysHealth" imageMso="OfficeDiagnostics"
          label="System Health" size="normal"/>
  <button id="btnTechSupport" imageMso="TechnicalSupport"
          label="Technical Support" size="normal"/>
</group>
The imageMso attribute makes it really easy to get started with images in your applications. But, what if you want to use images that you have on the hard drive or want to display images in the Ribbon as a feature of your application? For example, if you have a product catalog that includes images of the product, wouldn't it be cool to put that into the Ribbon for your users as a selection item? Users would immediately be able to make selections based on a visual. This can go a long way toward making applications easy-to-use.
There are basically two ways to load images dynamically. The easiest way is to use the loadImage callback, which is defined in the customUI node of a customization. Alternatively, you can handle the getImage callback for a given control.
Let's take a look at these two methods.
Creating a Global Image Handler
In the root node of a customization, customUI, there is a callback attribute defined called loadImage. This callback is used in conjunction with the image attribute on controls, and is called when the Ribbon asks for an image. Using the loadImage callbacks enables you to define one image handler for the application and specify the name of the image in the image attribute. Let's take a closer look to see how this works.
The gallery control mentioned earlier can also be used to display images. This is very useful for creating features, such as:
Product catalog
Membership photo gallery
Options for a screen layout
Galleries can easily be created using the loadImage callback. Start with a new customization, as follows.
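The elided opening of the customization presumably resembles the following; note the loadImage attribute on the customUI node naming the handler. The group ID and label are illustrative, and the closing tags appear at the end of the gallery XML below:

```xml
<customUI xmlns="http://schemas.microsoft.com/office/2006/01/customui"
          loadImage="OnLoadImage">
  <ribbon startFromScratch="true">
    <tabs>
      <tab id="tab1" label="Images Examples">
        <group id="grpLoadImage" label="Example: loadImage callback">
```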
Next, add the gallery control with images using the image attribute. Notice that we've filled the gallery with sample images from Windows Vista. We've also set the height and width of the items in the gallery using the itemHeight and itemWidth attributes respectively, as follows.
<gallery id="galVista" label="Vista Sample Images" itemHeight="100"
         itemWidth="100" size="large"
         imageMso="PictureEffectsShadowGallery">
  <item id="galImg1" image="Autumn Leaves.jpg"/>
  <item id="galImg2" image="Creek.jpg"/>
  <item id="galImg3" image="Desert Landscape.jpg"/>
  <item id="galImg4" image="Dock.jpg"/>
  <item id="galImg5" image="Forest Flowers.jpg"/>
  <item id="galImg6" image="Forest.jpg"/>
  <item id="galImg7" image="Frangipani Flowers.jpg"/>
  <item id="galImg8" image="Garden.jpg"/>
  <item id="galImg9" image="Green Sea Turtle.jpg"/>
  <item id="galImg10" image="Humpback Whale.jpg"/>
  <item id="galImg11" image="Oryx Antelope.jpg"/>
  <item id="galImg12" image="Toco Toucan.jpg"/>
  <item id="galImg13" image="Tree.jpg"/>
  <item id="galImg14" image="Waterfall.jpg"/>
  <item id="galImg15" image="Winter Leaves.jpg"/>
</gallery>
</group>
</tab>
</tabs>
</ribbon>
</customUI>
Write the OnLoadImage callback as follows. This routine uses the Environ function in VBA to help retrieve the path of the Sample Pictures folder on Vista. You may need to change this path, as follows, if you are not using Vista.
(Visual Basic for Applications)
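A sketch of the handler described above; the folder path is an assumption for Windows Vista, where Environ("PUBLIC") resolves to C:\Users\Public:

```vba
Sub OnLoadImage(ImageName As String, ByRef Image)
    Dim stPath As String
    ' path to the Vista sample pictures; adjust on other Windows versions
    stPath = Environ("PUBLIC") & "\Pictures\Sample Pictures\"
    Set Image = LoadPicture(stPath & ImageName)
End Sub
```

The ImageName argument receives the value of the image attribute for whichever control the Ribbon is rendering.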
When you select the gallery, it should be filled with images, as shown in Figure 11-20.
getImage and getItemImage Callbacks
Instead of writing an image handler in the loadImage callback, many controls also provide a getImage callback that can be used to set the image for the control. This callback is useful when information about the image is stored in a table and you want to include more information about the image besides the file name that is stored in the image attribute.
Because we are working with a gallery control, let's actually set the image for items in the gallery, not for the gallery control itself. To do this, we use the getItemImage callback. This works very similarly to the getImage callback. To use the getItemImage callback, we create a table to store information about images. This table maps the ID of the control in the ribbon customization to the path of the image, along with some additional information.
Create a new table with the following fields. This table stores information about pictures, such as the file name, a friendly name of the image, and camera information that we display in a supertip. Save the table as tblImages.
We use the sample pictures included with Windows Vista for this example and include some arbitrary information for the other fields for testing. Because the data comes from the table, the following customization is pretty easy:
<customUI xmlns="http://schemas.microsoft.com/office/2006/01/customui">
  <ribbon startFromScratch="true">
    <tabs>
      <tab id="tab1" label="Images Examples">
        <group id="grpGetImage" label="Example: getImage callback">
          <gallery id="galGetImage" label="Vista Images (detailed)"
                   size="large" imageMso="Camera"
                   itemHeight="100" itemWidth="100"
                   getItemCount="OnGetItemCount"
                   getItemID="OnGetItemID"
                   getItemImage="OnGetItemImage"
                   getItemScreentip="OnGetItemScreentip"
                   getItemSupertip="OnGetItemSupertip"/>
        </group>
      </tab>
    </tabs>
  </ribbon>
</customUI>
You'll notice that there are a couple of additional callbacks we haven't discussed yet. So, let's discuss those for a moment. Screentips are large tooltips that can be displayed for controls in a customization. The screentip portion appears in bold and is used to provide context to the tip. The supertip portion of a screentip is not in bold and can display extra information. As you've probably guessed, we're going to use the screentip to display the name of the image and the supertip to display the detailed information about the image. By using these callbacks, we can create data-driven tooltips for our applications.
To fill a gallery dynamically such as this, we first need to tell the Ribbon how many items are in the gallery. For this, we implement the OnGetItemCount callback. But, before we can do this, we need a Recordset object because we want to read data from the tblImages table. So, define the following variable at the top of a new module:
(Visual Basic for Applications)
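The elided declaration is presumably the module-level recordset that the later callbacks refer to as m_rs:

```vba
' module-level recordset used to walk the tblImages table
Dim m_rs As DAO.Recordset
```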
Next, add the OnGetItemCount callback. The Ribbon calls this one first, so we open the recordset and return the number of records as the count of items, as follows:
(Visual Basic for Applications)
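A sketch of the elided callback: it opens the recordset over tblImages and reports the record count. The MoveLast/MoveFirst pair forces DAO to populate an accurate RecordCount:

```vba
Sub OnGetItemCount(ctl As IRibbonControl, ByRef Count)
    ' open the image table and tell the gallery how many items it has
    Set m_rs = CurrentDb().OpenRecordset("tblImages")
    If (Not (m_rs.BOF And m_rs.EOF)) Then
        m_rs.MoveLast
        m_rs.MoveFirst
    End If
    Count = m_rs.RecordCount
End Sub
```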
Next, add the OnGetItemScreentip callback to return the file name for the image in bold, as follows.
(Visual Basic for Applications)
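A sketch of the elided callback, assuming the file name lives in a field called FileName (the table schema was not shown). Positioning the recordset by Index keeps it in step with the gallery item being rendered:

```vba
Sub OnGetItemScreentip(ctl As IRibbonControl, Index As Integer, _
                       ByRef Screentip)
    ' position on the record for this gallery item
    m_rs.AbsolutePosition = Index
    Screentip = m_rs("FileName")
End Sub
```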
Now, add the OnGetItemSupertip callback. This callback provides the detailed information to the tooltip.
(Visual Basic for Applications)
Sub OnGetItemSupertip(ctl As IRibbonControl, Index As Integer, _
                      ByRef Supertip)
    Supertip = "Dimensions: " & m_rs("Dimensions") & vbCrLf
    Supertip = Supertip & "Camera maker: " & m_rs("CameraMaker") & vbCrLf
    Supertip = Supertip & "Camera model: " & m_rs("CameraModel") & vbCrLf
    Supertip = Supertip & "EXIF version: " & m_rs("EXIFVersion") & vbCrLf
End Sub
We need to load the actual image. For this, we'll implement the getItemImage callback. In this code, we'll create the full path to the image using the file name in the table and a subdirectory of the folder where the database resides called images. Use the LoadPicture function in VBA to load the image, as follows:
(Visual Basic for Applications)
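A sketch of the elided callback; the images subfolder and the FileName field are assumptions consistent with the description above:

```vba
Sub OnGetItemImage(ctl As IRibbonControl, Index As Integer, ByRef Image)
    Dim stPath As String
    ' position on the record for this gallery item
    m_rs.AbsolutePosition = Index
    ' full path = database folder + images subfolder + file name
    stPath = CurrentProject.Path & "\images\" & m_rs("FileName")
    Set Image = LoadPicture(stPath)
End Sub
```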
Last, we'll set the ID for the item. We aren't doing anything with the ID in this example, but if you were to later handle the onAction callback for items in the gallery you might need the ID. The ID is assigned using the getItemID callback, as follows.
(Visual Basic for Applications)
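A minimal sketch of the elided callback (the ID prefix is illustrative):

```vba
Sub OnGetItemID(ctl As IRibbonControl, Index As Integer, ByRef Id)
    ' give each gallery item a unique, predictable ID
    Id = "galImgDyn" & Index
End Sub
```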
When you select the gallery and hover over an image, you should see the images with a tooltip, as shown in Figure 11-21.
Access 2007 includes a new data type called Attachment that allows you to embed files inside a database. This new data type, which is available only with the new .ACCDB file format, also compresses certain file formats such as bitmaps in a database. Bitmap images inside an attachment field can be used as the source for an image in a ribbon customization in Access. The trick to using these fields is to have a form that is bound to the attachment field in the table. Let's see how this works.
Create the Table
Start by creating a table that contains an attachment field. The table should have the following schema. Save the table as tblAttachments.
After you've created the table, add three bitmap (.bmp) images to the first record of the attachment field. This example handles only cases where the attachments are in the first record. Make a note of the file name for each file.
Create the Form
You need a form to contain the attachments from the table because the images in the customization require the attachment control of a form to render the images. Create a new form using the tblAttachments table and save it as frmAttachments.
Create the Customization
Now, we need to add the customization. Use the following XML to define the customization. Notice that we've used the tag attribute to store the file name of the image as it appeared when you added it to the attachment field:
<group id="grpAttachmentImages" label="Access Attachment Field">
  <button id="btnAttachment1" getImage="OnGetAttachmentImage"
          size="large" label="Attachment1" tag="Image1.bmp"/>
  <button id="btnAttachment2" getImage="OnGetAttachmentImage"
          size="large" label="Attachment2" tag="Image2.bmp"/>
  <button id="btnAttachment3" getImage="OnGetAttachmentImage"
          size="large" label="Attachment3" tag="Image3.bmp"/>
</group>
Add the Callback
Last, add the OnGetAttachmentImage callback for the customization. This callback is used in the getImage callback for the button, as follows.
(Visual Basic for Applications)
Sub OnGetAttachmentImage(ctl As IRibbonControl, ByRef Image)
    ' open the attachment form (hidden)
    If (Not CurrentProject.AllForms("frmAttachments").IsLoaded) Then
        DoCmd.OpenForm "frmAttachments", , , , , acHidden
    End If

    ' bind to the image whose file name is in the tag attribute
    ' (the attachment control is assumed to be named Attachments)
    With Forms("frmAttachments").Controls("Attachments")
        .CurrentAttachment = 0
        Do While (.FileName <> ctl.Tag) And _
                 (.CurrentAttachment < .AttachmentCount - 1)
            .Forward
        Loop
        Set Image = .PictureDisp
    End With

    DoCmd.Close acForm, "frmAttachments"
End Sub
You'll notice that this code opens the frmAttachments form in hidden mode and uses the hidden PictureDisp property of the attachment control to retrieve the image. The name of the image is specified in the Tag property of the IRibbonControl object that you defined earlier. The PictureDisp property returns an IPictureDisp object similarly to the LoadPicture function in VBA.
Now that we've gone through the available controls and programming the Ribbon, it's time to put these pieces to use with some more scenarios.
The NotInList Event — Ribbon Style
Let's start with a pretty common scenario in Access. The NotInList event is used in combo boxes to add values to the underlying row source for the combo box. We can simulate the same effect using a combo box in a ribbon customization with just a few callbacks.
Let's say that we have a list of categories that is used as a lookup field. To manage the lookup field, we add a combo box to the Ribbon. There are two parts to this example. First, we need to be able to dynamically fill the combo box. Second, we need to be able to update it. To do this, we implement the following callbacks:
Fill — getItemCount, getItemLabel
Update — onChange
Add the following XML for the customization to the USysRibbons table:
<customUI xmlns="http://schemas.microsoft.com/office/2006/01/customui">
  <ribbon startFromScratch="true">
    <tabs>
      <tab id="tabNotInList" label="NotInList Example">
        <group id="grpNotInList" label="NotInList Example">
          <comboBox id="cboTest" label="Categories"
                    getItemCount="OnGetItemCount"
                    getItemLabel="OnGetItemLabel"
                    onChange="OnNotInListRibbon"/>
        </group>
      </tab>
    </tabs>
  </ribbon>
</customUI>
We'll need a recordset to fill the combo box. Add the following declaration to the top of a new module:
(Visual Basic for Applications)
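As in the gallery example, the elided declaration is presumably a module-level DAO recordset:

```vba
' module-level recordset used to fill the Categories combo box
Dim m_rs As DAO.Recordset
```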
The first callback that the Ribbon asks for is the getItemCount callback, so let's start there. The getItemCount callback is used to tell the combo box how many items are in the list. To fill the list, we'll use a recordset that points to the Categories table (from previous versions of Northwind). As with some of the other callbacks that we've seen, we return the number of items in an argument that is defined in the callback, as follows:
(Visual Basic for Applications)
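A sketch of the elided callback; the table and field names (tblCategories, CategoryName) match the OnNotInListRibbon code shown later, while the ORDER BY clause is an assumption:

```vba
Public Sub OnGetItemCount(ctl As IRibbonControl, ByRef Count)
    ' open the recordset and report how many categories it holds
    Set m_rs = CurrentDb().OpenRecordset( _
        "SELECT CategoryName FROM tblCategories ORDER BY CategoryName")
    If (Not (m_rs.BOF And m_rs.EOF)) Then
        m_rs.MoveLast
        m_rs.MoveFirst
    End If
    Count = m_rs.RecordCount
End Sub
```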
Next, we need to fill the items in the combo box using the getItemLabel callback. After the item count is set, the getItemLabel callback is called once for each item. As a result, we need to keep track of where we are in the recordset.
(Visual Basic for Applications)
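A sketch of the elided callback, following the description in the next paragraph: return the label, advance the cursor, and clean up at EOF:

```vba
Public Sub OnGetItemLabel(ctl As IRibbonControl, Index As Integer, _
                          ByRef Label)
    ' return the label and advance to the next category
    Label = m_rs("CategoryName")
    m_rs.MoveNext

    ' once past the last record, it's safe to clean up
    If (m_rs.EOF) Then
        m_rs.Close
        Set m_rs = Nothing
    End If
End Sub
```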
The callback starts by returning the label and moving to the next record in the recordset. This advances the cursor in the recordset and provides the next category to the combo box. When we reach end-of-file (EOF) in the recordset, it's safe to do cleanup. That's all that is required to fill a combo box in a ribbon customization using items in a table!
All that's remaining now is the actual NotInList implementation. To do this, we'll handle the onChange callback for the combo box. This callback is fired when the text inside the combo box is changed. The signature for the callback includes an argument called Text that represents the data that was entered in the combo box. This is analogous to the NewData argument to the NotInList event in Access.
Remember that we closed and destroyed the recordset earlier so we need to re-open it. This time, however, we'll open it with a filter for the item that was entered. If there are no matching records, we add it to the recordset. As you might imagine, we then invalidate the combo box to refresh the items in the list that reflect the new record that was added to the Categories table, as follows.
(Visual Basic for Applications)
Public Sub OnNotInListRibbon(ctl As IRibbonControl, Text As String)
    ' open the recordset
    Dim stSQL As String
    stSQL = "SELECT * FROM tblCategories WHERE CategoryName = '" & Text & "'"
    Set m_rs = CurrentDb().OpenRecordset(stSQL)

    If (m_rs.BOF And m_rs.EOF) Then
        ' add the item
        m_rs.AddNew
        m_rs("CategoryName") = Text
        m_rs.Update

        ' invalidate the combo box to refresh
        gobjRibbon.InvalidateControl "cboTest"
    End If

    ' close
    m_rs.Close
    Set m_rs = Nothing
End Sub
Form Navigation
Have you ever written your own navigation form because you wanted a different look than the navigation buttons provided by Access? If so, this next example is for you. Here's how you can move form navigation into the Ribbon.
We've started with a form based on the Customers table from Northwind again, but any form should work. You need to add code to the form for some of the requirements, which are listed here.
Move between first, last, previous, and next records
Populate a label control with the current position in the form
Navigation controls should be kept up to date when you navigate through the form directly without using the controls
Disable the navigation buttons in the ribbon customization depending on where we are in the form
Let's get started with the XML for the customization. Because we need to refresh a label and buttons, we need to invalidate controls. This means we need to handle the onLoad callback, as follows:
Next, set up the ribbon, the tab, and the group, as follows:
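The elided XML for these two steps presumably looks something like the following; the onLoad handler name and the tab and group IDs and labels are illustrative:

```xml
<customUI xmlns="http://schemas.microsoft.com/office/2006/01/customui"
          onLoad="OnRibbonLoad">
  <ribbon startFromScratch="true">
    <tabs>
      <tab id="tabFormNav" label="Form Navigation">
        <group id="grpFormNav" label="Navigation">
```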
Add a button to open the sample form. This requires the OnOpenForm callback, as defined earlier. As you can see, our sample form is called frmCustomers. This is simply a shortcut to help get into the sample.
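A hedged sketch of the elided button; the form name rides in the tag attribute so the shared OnOpenForm callback can open it (the ID, label, and image are illustrative):

```xml
<button id="btnOpenCustomers" label="Open Customers Form" size="large"
        imageMso="FileOpen" tag="frmCustomers" onAction="OnOpenForm"/>
```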
Time to start adding navigation controls. We want a layout that is linear rather than vertical, so we're using a box control to define a horizontal layout. Start with the navigation label inside the box. To set the text for the label, we handle the getLabel callback, as follows.
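The elided fragment presumably opens the horizontal box and declares the label. The lblNav ID and the OnGetLabel handler match the callbacks used later; the box ID is illustrative:

```xml
<box id="bxNav" boxStyle="horizontal">
  <labelControl id="lblNav" getLabel="OnGetLabel"/>
```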
Next, we want to lay out the buttons and edit box in a horizontal layout, so we create another box node. The buttons should appear to be grouped together, so we're using a buttonGroup control to define a particular appearance.
Time to add the individual buttons. Notice that we are using built-in images specified with the imageMso attribute. Each button calls the same callback named OnNavigateRecord. To determine whether a control should be enabled, we handle the getEnabled callback, as follows:
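A sketch of the inner box, the button group, and the first two buttons. The control IDs match the OnGetNavEnabled callback shown later; the imageMso names are assumed to be the built-in mail-merge navigation images (verify the exact names against the Options dialog box):

```xml
<box id="bx2" boxStyle="horizontal">
  <buttonGroup id="bg1">
    <button id="btnNavFirst" imageMso="MailMergeGoToFirstRecord"
            onAction="OnNavigateRecord" getEnabled="OnGetNavEnabled"/>
    <button id="btnNavPrev" imageMso="MailMergeGoToPreviousRecord"
            onAction="OnNavigateRecord" getEnabled="OnGetNavEnabled"/>
  </buttonGroup>
```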
Add the edit box for the navigation. We're handling the onChange callback so that the user can enter a number and jump to a particular record. We're handling the getText callback to put an empty string into the edit box for invalid data. The editBox control also defines an attribute called sizeString that is used to define the width of the edit box. The width of the string in this attribute determines the width of the control.
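The elided edit box presumably resembles this; txtJump, OnChangeRecord, and OnGetText match the callbacks that follow, and the sizeString value is illustrative:

```xml
<editBox id="txtJump" sizeString="00000"
         getText="OnGetText" onChange="OnChangeRecord"/>
```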
Finish the customization by adding two more buttons inside a buttonGroup, and close the nodes for the ribbon, as follows.
<buttonGroup id="bg2">
  <button id="btnNavNext" imageMso="MailMergeGoToNextRecord"
          onAction="OnNavigateRecord" getEnabled="OnGetNavEnabled"/>
  <button id="btnNavLast" imageMso="MailMergeGotToLastRecord"
          onAction="OnNavigateRecord" getEnabled="OnGetNavEnabled"/>
</buttonGroup>
</box>
</box>
</group>
</tab>
</tabs>
</ribbon>
</customUI>
When you put this into a USysRibbons table, the customization should look something like the Ribbon shown in Figure 11-22.
To enable the functionality, we need to start writing the callbacks. Create a new module called basFormNavigationCallbacks for this code. Let's start by writing the OnGetText callback, as follows. This is used to simply write an empty string into the edit box for invalid entry cases.
(Visual Basic for Applications)
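A minimal sketch of the callback described above:

```vba
Public Sub OnGetText(ctl As IRibbonControl, ByRef Text)
    ' write an empty string into the edit box after invalid entry
    Text = ""
End Sub
```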
Next, add the OnGetLabel callback. This routine is called to set the text for the navigation label. The following code sets the text for the label. If there are no forms open, we set the label to "No active form." When there is an open form, we set the label to something such as "Record 1 of 10." If the active form is filtered, we append the string "(Filtered)."
(Visual Basic for Applications)
Public Sub OnGetLabel(ctl As IRibbonControl, ByRef Label)
    Dim f As Form
    If (ctl.Id = "lblNav") Then
        If (Forms.Count = 0) Then
            Label = "No active form"
        Else
            Set f = Screen.ActiveForm
            Label = "Record " & f.CurrentRecord & " of " & _
                    f.RecordsetClone.RecordCount
            If (f.FilterOn) Then
                Label = Label & " (Filtered)"
            End If
        End If
    End If
End Sub
Next, add the OnNavigateRecord callback, as follows. This routine is called by each of the navigation buttons. To do navigation, we're simply calling DoCmd.GotoRecord and passing the appropriate value based on the button that was clicked.
(Visual Basic for Applications)
Public Sub OnNavigateRecord(ctl As IRibbonControl)
    Dim lRecord As AcRecord
    Select Case ctl.Id
        Case "btnNavFirst": lRecord = acFirst
        Case "btnNavPrev": lRecord = acPrevious
        Case "btnNavNext": lRecord = acNext
        Case "btnNavLast": lRecord = acLast
    End Select

    ' do the navigation
    DoCmd.GoToRecord , , lRecord

    If (Not gobjRibbon Is Nothing) Then
        ' invalidate the nav label
        gobjRibbon.InvalidateControl "lblNav"

        ' invalidate the button to enable/disable
        gobjRibbon.InvalidateControl ctl.Id
    End If
End Sub
Because we've moved records, we also need to invalidate the navigation label. We do this by calling InvalidateControl for lblNav.
Next, add the OnChangeRecord callback that is fired when the user enters a value in the edit box. To prevent the user from doing something invalid, there are some additional checks in this code. We first ensure that the user entered a number. If they didn't, we alert the user and invalidate the edit box. This, in turn, calls the OnGetText callback and writes an empty string.
To move the record, we're manipulating the underlying Recordset for the form. Again, to prevent the user from doing something invalid (much as Access itself does), we've added some checks. If you enter a value that is greater than the number of records or less than zero, we alert the user and invalidate the edit box.
(Visual Basic for Applications)
Public Sub OnChangeRecord(ctl As IRibbonControl, Text)
    If (Not IsNumeric(Text)) Then
        MsgBox "Please enter a number", vbExclamation, "Cannot Move Record"
        gobjRibbon.InvalidateControl "txtJump"
    Else
        ' move to the specified record
        With Screen.ActiveForm.Recordset
            .MoveFirst
            If (CLng(Text) > .RecordCount Or CLng(Text) < 1) Then
                MsgBox "Cannot move to specified record", vbInformation
                gobjRibbon.InvalidateControl "txtJump"
            Else
                .Move CLng(Text) - 1
            End If
        End With
    End If
End Sub
Nice job so far. One callback left — OnGetNavEnabled. Remember that we want to disable controls when they are not available. In other words, if you're on the first record, you shouldn't be able to move to the first or previous records. So why not disable the controls? The following code sets the enabled attribute, depending on the selected control and the CurrentRecord property of the active form.
(Visual Basic for Applications)
Public Sub OnGetNavEnabled(ctl As IRibbonControl, ByRef Enabled)
    Dim f As Form
    If (Forms.Count > 0) Then
        Set f = Screen.ActiveForm
        Select Case ctl.Id
            Case "btnNavFirst"
                Enabled = (f.CurrentRecord > 1)
            Case "btnNavPrev"
                Enabled = (f.CurrentRecord > 1)
            Case "btnNavNext"
                Enabled = (f.CurrentRecord < f.Recordset.RecordCount)
            Case "btnNavLast"
                Enabled = (f.CurrentRecord < f.Recordset.RecordCount)
        End Select
    End If
End Sub
Great! Okay, hang on a minute — we're not quite done. We need to add some code to our form to keep the controls in sync if you navigate using the form instead of the controls. Keeping the controls in sync will mimic the behavior that Access provides in the navigation buttons for a form. To do this, add the following helper routine to the code behind the form. This simply invalidates the navigation controls.
(Visual Basic for Applications)
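The elided helper is presumably along these lines; the routine name is an assumption, and gobjRibbon is the module-level IRibbonUI variable used throughout the chapter:

```vba
Private Sub InvalidateNavControls()
    If (Not gobjRibbon Is Nothing) Then
        ' refresh the label, the four buttons, and the edit box
        gobjRibbon.InvalidateControl "lblNav"
        gobjRibbon.InvalidateControl "btnNavFirst"
        gobjRibbon.InvalidateControl "btnNavPrev"
        gobjRibbon.InvalidateControl "btnNavNext"
        gobjRibbon.InvalidateControl "btnNavLast"
        gobjRibbon.InvalidateControl "txtJump"
    End If
End Sub
```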
Last, we need to call this code as you move from record to record, and also when the form is closed. When the form is closed, the label resets to "No active form."
(Visual Basic for Applications)
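A sketch of the form event handlers, assuming the helper routine described above is named InvalidateNavControls:

```vba
Private Sub Form_Current()
    ' keep the ribbon controls in sync as the user moves between records
    InvalidateNavControls
End Sub

Private Sub Form_Close()
    ' with no active form, the label reverts to "No active form"
    InvalidateNavControls
End Sub
```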
To test the navigation, open the sample form and move from record to record. Also, try to use the first and last buttons and jump to a specific record.
Managing Filters Using a Dynamic Menu
Let's say that you enable your users to save filters of a particular form so that they can reuse the filter later. You store information about the saved filters in a table. This information includes a friendly name of the filter and a description. Using a dynamic menu is an interesting way to display this information to the user because it permits changes from the user. To be truly dynamic, the Ribbon defines an attribute on the dynamicMenu node called invalidateContentOnDrop. Set this attribute to true to call the getContent callback every time the user drops down the menu.
To create the filter example, start by creating a table to save filters. The table should have the following fields defined. Save the table as tblSavedFilters when you're done. Create a form based on this table that the user can use to manage filters later on. Name the form frmSavedFilters.
Next, we define the XML for the customization. This sample was created using the Northwind sample database from previous versions of Access but it should work for any form because the filter is being saved as it relates to the form that is currently open.
<customUI xmlns="http://schemas.microsoft.com/office/2006/01/customui">
  <ribbon startFromScratch="true">
    <tabs>
      <tab id="tabSavedFilters" label="Saved Filters">
        <group id="grpFilter" label="Filters">
          <dynamicMenu id="dmnu1" label="Saved Filters"
                       getContent="OnGetContent"
                       invalidateContentOnDrop="true"
                       size="normal" imageMso="Filter"/>
        </group>
      </tab>
    </tabs>
  </ribbon>
</customUI>
Next, add the getContent callback, starting with the declarations.
(Visual Basic for Applications)
Open a recordset against the saved filter table. We're sorting the recordset based on the FilterString field in descending order so that when the menu is created the FilterString field is sorted ascending. As an alternative to opening the recordset sorted in descending order, you could iterate backwards through the recordset.
We need to start building the menu. To do this, we define XML that will be used in the customization for the dynamic menu. Start with the menu node. Notice that we need to include the customUI namespace definition. We've also added a menu separator for aesthetics.
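The elided opening of the getContent callback presumably looks like this; the variable names match the fragments that follow, and the separator ID is illustrative:

```vba
Public Sub OnGetContent(ctl As IRibbonControl, ByRef Content)
    Dim rs As DAO.Recordset
    Dim stMenu As String
    Dim stID As String

    ' sort descending so that the finished menu reads ascending
    Set rs = CurrentDb().OpenRecordset( _
        "SELECT * FROM tblSavedFilters ORDER BY FilterString DESC")

    ' start the menu node; the customUI namespace is required here
    stMenu = "<menu xmlns='http://schemas.microsoft.com/office/2006/01/customui'>"
    stMenu = stMenu & "<menuSeparator id='msMyFilters1' title='Saved Filters'/>"
```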
It's time to fill the menu with the list of saved filters. To do this, we walk through the recordset. We're going to fill the menu with buttons so we've written a helper function called GetButtonXml to generate the XML for a button node given some parameters. This function appears in a moment.
One of the arguments to the GetButtonXml helper function is called strAction. This argument is used as the onAction callback for the button in the menu. We're going to create a callback called DoApplyFilter that applies the selected filter based on the user selection.
(Visual Basic for Applications)
' build the XML for the menu
While (Not rs.EOF)
    ' get the ID for the button by removing any spaces
    stID = Replace(rs("FilterName"), " ", "")

    ' Append the button node
    stMenu = stMenu & GetButtonXml(stID, _
                                   rs("FilterName"), _
                                   "DoApplyFilter", _
                                   rs("FilterDescription"), _
                                   rs("FilterName"))
    rs.MoveNext
Wend
Before we close the menu node, we'd like to add a few static buttons to our menu to manage filters. The first is a Save button that has its own callback, the second clears the current filter, and the third opens the frmSavedFilters form that you created earlier. Add the buttons as follows.
(Visual Basic for Applications)
' add buttons to manage filters
stMenu = stMenu & "<menuSeparator id='msMyFilters2' title='Manage Filters'/>"
stMenu = stMenu & "<button id='btnSaveFilter' label='Save Filter' " & _
                  "onAction='OnSaveFilter' imageMso='FileSave'/>"
stMenu = stMenu & "<button id='btnClearFilter' label='Clear Filter' " & _
                  "onAction='OnClearFilter'/>"
stMenu = stMenu & "<button id='btnFilters' label='Manage...' " & _
                  "tag='frmSavedFilters' " & _
                  "onAction='OnOpenForm'/>"
Okay, now we close the menu node and the recordset.
And last, of course, we return the content to the customization and exit the routine.
So far so good. We have some helper functions and additional callbacks to write, so let's add those starting with GetButtonXml.
(Visual Basic for Applications)
Private Function GetButtonXml(strID As String, _ strLabel As String, _ strAction As String, _ Optional strDescription As" End Function
As you can see, this function accepts arguments for the id, label, onAction, description, and tag attributes of a button node in a customization.
Next, add the OnSaveFilter callback, as follows. This procedure is called when the user clicks the Save Filter button in the dynamic menu.
(Visual Basic for Applications)
Public Sub OnSaveFilter(ctl As IRibbonControl) Dim stFilter As String ' get the filter for the current filter On Error GoTo SaveFilterErrors stFilter = Screen.ActiveForm.Filter ' make sure there is a filter If (Len(stFilter) = 0) Then MsgBox "Filter has not been set for the ActiveForm, cannot save.", _ vbExclamation, "Cannot Save Filter" Exit Sub End If ' open the form to save the filter DoCmd.OpenForm "frmSavedFilters", , , , acFormAdd, , stFilter Exit Sub SaveFilterErrors: If (Err = 2475) Then MsgBox "There is no open form", vbExclamation Exit Sub Else Stop End If End Sub
For this callback, we're making sure that a form is open and if so, getting its filter using the Filter property of the Form object. If the Filter property is not empty, we open the frmSavedFilters form in data entry mode and pass the filter to the form in its OpenArgs.
Next, add the OnClearFilter callback. This procedure is called when the user clicks the Clear Filter button in the dynamic menu.
(Visual Basic for Applications)
Next, we write the OnOpenForm callback.
(Visual Basic for Applications)
And last, the DoApplyFilter callback. When we created the button for the saved filter in the menu, we passed in the name of the filter in the FilterName field to the tag attribute of the button. This is used to query the tblSavedFilters table to ask for the filter string.
(Visual Basic for Applications)
Public Sub DoApplyFilter(ctl As IRibbonControl) Dim stSQL As String Dim rs As DAO.Recordset2 ' build the SQL statement stSQL = "SELECT FilterString FROM tblSavedFilters " stSQL = stSQL & "WHERE FilterName = '" & ctl.Tag & "'" ' open the recordset Set rs = CurrentDb.OpenRecordset(stSQL) ' set the filter Screen.ActiveForm.Filter = rs("FilterString") Screen.ActiveForm.FilterOn = True ' cleanup rs.Close Set rs = Nothing End Sub
We need to add one more piece of code — to pick up the new filter in the frmSavedFilters form. Remember that we pass it a new filter via its OpenArgs property. Add the following code to the Form_Load event of frmSavedFilters.
(Visual Basic for Applications)
To test this example, open a form and apply a filter. Then, click on the Save Filter button in the menu. The frmSavedFiltersform should open where you can assign a filter name and description. Now, when you drop down the menu again, the saved filter should be listed, as shown in Figure 11-23.
Creating a Split Button That Sticks
Suppose you want to create an application launcher for your Access application. The application launcher gives the user a list of applications that they can launch easily from within your application. Here are the requirements:
The launcher includes buttons for Word, Excel, Outlook, Calculator and Notepad.
The launcher should stick — that is, the last application launched should stick in the button in the customization.
To meet this last requirement, we use a split button. The split button control contains a button and a menu of choices. When an application is launched, we update the button to reflect the last application that was launched. Start by setting up the customization, as follows:
Add the split button. The splitButton control contains a button control that is used as the actual button that is clicked in the Ribbon. This button control should reflect the last application that was launched so we handle the getLabel and getImage callbacks to update the label and the image respectively.
Now, let's add the menu and close the customization. This is the list of applications that can be launched from the customization. You'll notice that we are using images included with Office in the imageMso attribute. We're also storing the name of this image in the tag attribute as extra data. More on that in a moment.
<menu id="mnuLauncher" label="Launcher" itemSize="normal"> <menuSeparator id="ms1" title="Office Applications"/> <button id="btnWord" label="Word" imageMso="FileSaveAsWordDotx" onAction="OnLaunchApplication" tag="FileSaveAsWordDotx"/> <button id="btnExcel" label="Excel" imageMso="MicrosoftExcel" onAction="OnLaunchApplication" tag="MicrosoftExcel"/> <button id="btnOutlook" label="Outlook" imageMso="MicrosoftOutlook" onAction="OnLaunchApplication" tag="MicrosoftOutlook"/> <menuSeparator id="ms2" title="Utilities"/> <button id="btnCalc" label="Calculator" imageMso="Calculator" onAction="OnLaunchApplication" tag="Calculator"/> </menu> </splitButton> </group> </tab> </tabs> </ribbon> </customUI>
The customization is complete, so let's add the callbacks. First, the OnGetAppLabel callback. This callback is defined in the launcher button that is updated when a selection is made. Because we're going to update labels and images, we need some private variables in the module to store this information between callbacks. Add the following code to the top of a new module called basSplitButtonCallbacks.
(Visual Basic for Applications)
Add the OnGetAppLabel callback. If the mstrLabel variable has not been set, we default to Calculator. As an alternative, you could also retrieve this from a table as a setting.
(Visual Basic for Applications)
Add the OnGetImage callback. This callback updates the image in the button to reflect the last application launched. When we defined our customization, we used images included with Office in the imageMso attribute. However, you may have noticed that there is a getImage callback, but not a getImageMso callback. So how can we load a built-in image dynamically? Well, it turns out that while there isn't a getImageMso callback, there is a GetImageMso method! And, this method was added on the CommandBars object of all places. This method returns the image for a named image that is included with Office as you would define in the imageMso attribute. So, we can simply use this method to retrieve the image from Office.
(Visual Basic for Applications)
Okay, the last callback we need is OnLaunchApplication, which is called when the user clicks any of the buttons. Remember from our customization that the name of the launcher button is called btnLauncher. The buttons in the menu have names specific to their applications. So, clicking the launcher button in the Ribbon opens the application that was most recently launched. However, you need to set that value in mstrLabel, which is retrieved from the other buttons. For demo purposes, we are simply showing a message box. You could, however, also store the path to an application to launch from a table based on the button that was clicked.
Earlier we also mentioned that we were storing the name of the imageMso attribute as extra data in the tag attribute of the button. We use the tag value now to set the name of the image in mstrImage:
(Visual Basic for Applications)
Sub OnLaunchApplication(ctl As IRibbonControl) ' set the label and the image and invalidate If (ctl.ID <> "btnLauncher") Then mstrLabel = Mid(ctl.ID, 4) mstrImage = ctl.Tag gobjRibbon.InvalidateControl "btnLauncher" End If ' launch the application MsgBox "You are launching " & mstrLabel End Sub
When you click the button, you should get a message that says Calculator is being launched. If you select an item in the menu, say Outlook, you should get a message that says Outlook is being launched and the button should update to the Outlook icon.
So far, we've taken an in depth look at creating ribbon customizations using controls such as gallery and button. However, there are other places in the Ribbon that allow for customization. Let's take a look at two of them — the Office menu and built-in commands.
Modifying the Office Menu
You've probably noticed that when you use the startFromScratch attribute, some of the controls in the Office menu are hidden, but not all of them. Luckily, you can modify the Office menu if you'd like by modifying controls in the officeMenu node of a customization. You can even add your own controls. The following XML defines a customization that adds a button and a menu to the Office menu:
<customUI xmlns= <ribbon startFromScratch="false"> <officeMenu> <!-- About button --> <button id="btnAbout" label="About My Application..." insertBeforeMso="FileNewDatabase" imageMso="Info" onAction="=MsgBox('My Application - copyright 2007')"/> <!-- Menu --> <menu id="mnu1" label="Application Launcher" insertBeforeMso="FileNewDatabase"> <checkBox id="chk1" label="Disable Launcher"/> <menuSeparator id="ms1" title="Office Applications"/> <button id="btnWord" label="Project" imageMso="MicrosoftProject"/> <button id="btnExcel" label="Excel" imageMso="MicrosoftExcel"/> <button id="btnOutlook" label="Outlook" imageMso="MicrosoftOutlook"/> <menuSeparator id="ms2" title="Utilities"/> <button id="btnCalc" label="Calculator" imageMso="Calculator"/> <button id="btnNotepad" label="Notepad" imageMso="ReviewEditComment"/> <button id="btnPhotos" label="My Pictures" imageMso="Camera"/> </menu> </officeMenu> </ribbon> </customUI>
The modified Office menu is shown in Figure 11-24.
Overriding Commands and Repurposing Controls
The Access Ribbon provides a wealth of built-in functionality. For example, there are buttons in the Office menu to create new databases or open an existing database. Other commands are available in the Ribbon itself, such as the new Encrypt with Password button. With all of these commands, there may be times when you want to include behavior or controls that Access provides in your applications. There may be other times when you want to override the default behavior of a control that Access provides. This is called repurposing a control. For example, consider a basic Contacts form in a contact tracking application. If you are deploying this application using the Access runtime you will likely want to create a ribbon customization for the application so users can interact with it. To prevent from implementing your own functionality for filtering or finding records, you can include the groups that are defined by Access, which provides these capabilities.
To include or modify existing controls, you need the idMso attribute. For commands that are not in the Office menu, use the commands node under the customUI root node of the ribbon customization, as shown in the code that follows. For instance, say that we want to disable the New button in the Office menu so that users cannot create new databases when they are using your application. The following customization will do this.
Disabling controls is one way to change functionality, but the Ribbon also enables you to write your own code when a built-in control is clicked. The signature for callback routines changes when you repurpose a built-in control. Let's say that you wanted to enable users to encrypt their databases by adding a database password. Access provides this functionality using the Encrypt with Password button for the ACCDB file format so you don't have to create this yourself. However, Access gives an error when the database is not opened exclusively. Thus, we want to replace the Access error message. So, let's repurpose the control. Again, start with the customization, as follows.
To repurpose an existing control, specify an onAction callback with an idMso control. Now, add the OnSetPassword callback. Notice that the onAction callback includes an extra parameter called CancelDefault.
(Visual Basic for Applications)
Public Sub OnSetPassword(control As IRibbonControl, ByRef CancelDefault) If (CurrentProject.Connection.Properties!Mode = 12) Then CancelDefault = False Else MsgBox "You must open the database exclusively to set the password.", _ vbExclamation, "Cannot Set Password" CancelDefault = True End If End Sub
We are using the Mode property of the ADO Connection object for the database that is currently open. When the value of this property is 12, the database is open exclusively. We set the CancelDefault argument to False to let the built-in command run. When the property is not 12, we display a message and cancel the built-in command by setting the CancelDefault argument to True.
Now, if you click the Encrypt with Password button on the Database Tools tab in the Access Ribbon, you should receive the message box when the database is not opened exclusively.
The Ribbon, introduced in Office 2007, will undoubtedly change the way we look at user interface development in the future. It creates opportunities for us as developers to put the things in front of our users that really matter and help them do their jobs effectively. Sure, the model has changed, but we feel this model creates enough new opportunities for development and once you have learned them you might even be able to add new skill sets under your belt such as XML!
Here are some of the key points in this chapter:
The programming model for the Ribbon has changed. This will likely have an effect on how we perform user interface development.
There are many different controls that can be used to create rich user experiences.
Many resources are available for Ribbon development in Office 2007.
Using callback functions, you can provide the same functionality for several Access events in a ribbon customization.
Programming the Ribbon is different, but fun!
In the next chapter, we discuss configuration and extensibility, and go into the details of creating Access applications that can be configured by users, and even localized!
For more information, see the following resources: | https://msdn.microsoft.com/en-us/library/dd548011(v=office.12) | CC-MAIN-2017-51 | refinedweb | 7,362 | 55.34 |
SAP TechEd 2018 is just around the corner! I’ve been getting everything ready for my session on SAP Cloud Platform, SAP HANA Service and wanted to share a sneak peek of it.
If you are lucky to come to TechEd, I have plenty of instances to share for you to configure and deploy yourself. We will be using the Cloud Application Programming model to build a little something on top of them.
Setting the SAP HANA service up
In the meantime, here’s an overview of how to create some database artifacts, including a Calculation View with K anonymization, in your instance from SAP Web IDE Full Stack.
Philip Mugglestone already published great video tutorials on how to deploy your instance and connect to it. If you can’t see the option to create the service, remember that as of today, this is not available through trial.
This instance of mine has been created with the scriptserver option on:
Getting an MTA up and running
We could use the new CAP model to build an app but the good ol’ MTA wizard also does the trick.
There are only minor differences with the already documented XSA path. After all, XS Advanced is our HANA-friendly, on-premise version of Cloud Foundry.
Most of these differences come from using SAP Web IDE for Full Stack as opposed to SAP Web IDE for HANA. Again, these are minor and this wizard to create an MTA and a DB module altogether is one of them (compared to the currently available one in HANA Express).
Creating a Database Module Automagically
Please, remove that namespace, you don’t need it and only makes names longer:
So I get a pre-created database module. It basically accelerated step 3 in this series of step-by-step tutorials if you are mentally making the comparison.
You need to enable the plugins for Web IDE for the database explorer and other fun stuff if you haven’t already:
I will change things a little bit from those tutorials and create a different data model. My single table (.hdbtable) will be called JOBS.
If you want something similar to follow along, here is another tutorial that will also explain how to create the calculation view.
Fast forwarding, I am also loading data with a CSV file and an hdbtabledata configuration.
If I build the module, I get a table with data in the database explorer:
Errors, anyone?
If you get an error when building the module, make sure the space is set in your cloud Foundry options or the project settings. And of course, having a service of type “hana” in that same space is super important, otherwise, Web IDE will have nothing to bind to.
You could also get an error for having more than one HANA service in the same space (lucky you!). Use parameter “database-id” and the GUID for your database (you can find this at the Database Cockpit):
(What on Earth are) HDI Containers
When you build the module, Web IDE will also create an HDI container for you.
You could manually create the HDI container using the Command Line Interface or with this graphical option here:
But there is no need because Web IDE created it for you. If you click on that tile, you will see the instance of the service:
If you list the services using the CLI, you will also see the HDI container, which also translates to a schema with vitamins. So why do you see it as a service? How is it both a schema AND a service?
Because this service will manage the database artifacts for you. It will have its own technical user that will access the actual physical database (and physical schema, which it also created for you).
In other words, you only have to worry about creating the design-time artifacts, like the hdbtable. The technical user that gets created automatically with your container will use its authorizations and super power to execute the actual “CREATE TABLE” statement in actual physical schema.
I have given a similar explanation before (I even used a picture of a kitty) and so has the official documentation. But some people keep insisting on asking so I keep insisting on explaining.
Let’s try everyone’s favorites: Calculation Views!
Also with a twist, using an intuitive form of anonymization. If you want to create your own modelling and haven’t tried it yet, here are some more step-by-step tutorials.
Those include more fancy tricks like a Star Join and Currency Conversion (something I would have killed for back in the HANA Studio days).
Ta-da! Calculation views and the whole enchilada in your SAP HANA Service!
Easy, eh? So what if you wanted to expose this through OData?
Well, you just do! I might explain that in a next blog post.
In the meantime, if you do not have such an instance but are willing to learn HANA, remember there is a free alternative with SAP HANA, express edition.
All of the tutorials I mentioned here were created for and in HXE.
Looking forward to meeting you in person at SAP TechEd 2018. Remember to add the session on your First Steps with HANA as a Service to your Agenda if you want to play with the service yourself.
Great to hear this. Will it be available at trial platform soon?
You are able to sign up for an SAP Cloud Platform developer edition here:, which includes the WebIDE, and your own personal HDI container that you could use.
Longer term, there are plans to expand the trial model for SAP HANA in the cloud – no dates to communicate at this stage. | https://blogs.sap.com/2018/09/12/sneak-peek-into-hana-as-a-service-database-artifacts-and-calculation-views/ | CC-MAIN-2019-13 | refinedweb | 956 | 69.31 |
From: Marc-Antoine Ruel (maruel_at_[hidden])
Date: 2003-12-17 12:11:35
What do you think of that? I'm not very up to date with design patterns but I found that use quite interessing. It is somewhat the inverse of RIIA (resource-initialization-is-acquisition).
#include <boost/lambda/lambda.hpp>
#include <boost/lambda/construct.hpp>
#include <boost/lambda/bind.hpp>
using namespace boost::lambda;
#include <boost/type_traits.hpp>
#include <boost/function.hpp>
// Add a function call on return of the function
struct OnReturn
{
typedef boost::function< void(void) > FnPtr;
OnReturn(FnPtr A) : m_C(A) { }
~OnReturn() { m_C(); }
FnPtr m_C;
};
and for example:
bool Foo()
{
FILE *stream;
stream = fopen( "data", "r" );
if ( !stream )
return false;
OnReturn Q1( bind(fclose, stream) );
(code that uses stream)
return true;
}
As a side-effect, It adds exception safety as the function is called during stack-unwinding, which is especially cool when using asynchronous exception handling. And the objects are destroyed in the reverse order that they have been created.
I've looked at the code bloat generated in release build and it seems relativelly big. There is some function calls as they are not all inlined (about half of debug build on MSVC7.1) and there is a new/delete added, which is not negligible.
You can add as many Qx object as wanted.
Any comments / improvement or anything already existing ?
Marc-Antoine Ruel, ing. jr.
Cardinal Health Canada
330 St-Vallier Est, Suite 330
Québec (Québec) G1K 9C5
(418) 872-0172 x8931
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2003/12/57805.php | CC-MAIN-2019-43 | refinedweb | 271 | 59.9 |
Search: Search took 0.02 seconds.
- 12 Nov 2012 1:39 AM
- Replies
- 4
- Views
- 990
And the owners of the other components too.. since they are all relativly slow compared to the ones in other framework such as sencha touch or dijit. And with realtively slow I measured 25 times...
- 12 Nov 2012 1:26 AM
The problem is that I see an issue with the other rendering speeds too using e.g. Ext.form.Text.
parent = whateverComponent;
time = Date.now();
for(i = 0; i<200; i++){
b = new...
- 9 Nov 2012 8:23 AM
Well ofc not.
But its much easier to see the problem when its a bit blown up. I mean anything above 0.03s causes choppyness in animations and i would say that above 0.2 it is noticable and 12 is...
- 9 Nov 2012 7:00 AM
Hi,
I did some tests on my project using Sencha Touch, ExtJs and Dijit performing the same task of placing 200 buttons on the page since I had noticed some performance issues:
Sencha Touch...
- 3 Oct 2012 2:03 AM
- Replies
- 27
- Views
- 10,664
WTF!... Still missing images!
Spinners for forms components to name one.
(:|
- 25 Jul 2012 4:53 AM
- Replies
- 2
- Views
- 603
Well resizing the components on resize I have solved like you, by listening for resize events. But the first issue was a bit different, like this:
1. Create a button (mybutton) and render to a dom....
- 17 Jul 2012 2:11 AM
- Replies
- 2
- Views
- 603
Hi,
I have components such as textfields that I for different reasons have to have in a non Ext container such as a normal div. Then I set width and height using setWidth / setHeight, which works...
- 8 Jun 2012 11:56 AM
Ext.onReady works fine but does create some issues with asyncronicity in functions where I dont want it ;). Oh well, thank you for the good answer at least.
But I have to say I think that its...
- 8 Jun 2012 8:14 AM
- Replies
- 1
- Views
- 342
Actually that code works in the console without setTimeout but if you write the same console.log statement again manually (that is a little later) the array is empty.
Something in the lifecycle...
- 8 Jun 2012 8:09 AM
- Replies
- 1
- Views
- 342
Hi,
I am creating a store without data in it and later supply both data and model. But there seems to be a race condition between creating the store and setting the data. If I set the data and...
- 8 Jun 2012 8:00 AM
Please smart people! If I am making no sense or supplying the wrong info, please let me know.
//J
- 8 Jun 2012 12:35 AM
Hi,
I get a wierd error that has something to do with the loader and the pahts.
I have loaded the page and Ext-all. Then I define the component ExtPage, which in turn inherits from a custom...
- 7 Jun 2012 2:57 AM
Solved:
- Just not do the read at end and theres no extra requests.
- Since I set the model on the reader it is inferred on the store causing the implicitModel property to be set which will destroy...
- 7 Jun 2012 1:32 AM
Ok Partly resolved:
var myStore = Ext.create('Ext.data.Store', {
fields: [],
proxy: {
type: 'ajax',
url: 'labb.json',
reader: {
type: 'json',
- 7 Jun 2012 12:48 AM
This still requires 2 queries. I would like/need to infer the model from the data received.
Who is actually making the query and when is it actually populating the datastore?
Would it somehow be...
- 6 Jun 2012 5:20 AM
Thanks sdt6585,
The thing is that I need the store to be a rest loaded store that possibly only loads parts of a much larger data set depending on what is viewed in say a table. Of course I could...
- 6 Jun 2012 4:40 AM
If I dont know the structure of the data that I wish to load into a store I found that I could create the store with an empty field config:
var myStore = Ext.create('Ext.data.Store', {
...
- 14 May 2012 11:52 PM
- Replies
- 125
- Views
- 110,088
+1 for this too
But why on earth cant we set 100% width on a panel ??? To always have to set the pixel width on resize sounds so ... heavy.
//J
- 14 May 2012 6:29 AM
- Replies
- 4
- Views
- 2,218
I ran into the exact same issue. Please by all the patterns we hold dear, dont add global classes or polute the global namespace.
And specially with propeties without a settable default such as...
Results 1 to 19 of 19 | https://www.sencha.com/forum/search.php?s=9a8b5f2ed6cb4da240ff8eb08850431f&searchid=13303860 | CC-MAIN-2015-48 | refinedweb | 786 | 82.65 |
One prob i have is how you make your own header in C?
This is a discussion on C Question within the C Programming forums, part of the General Programming Boards category; One prob i have is how you make your own header in C?...
One prob i have is how you make your own header in C?
Last edited by fastprogrammer; 11-14-2005 at 10:59 AM.
I see if that works.
As an aside, you would #include it like this:
#include "my_header"
You mean:
(C headers end in .h by convention)(C headers end in .h by convention)Code:#include "my_header.h"
Originally Posted by cwrOriginally Posted by cwr
Absolutely that is what I meant. Thanks for the correction.
That make sence now.
You would also want inclusion guards:
This prevents bad things from happening if you include the header twice or in two different files.This prevents bad things from happening if you include the header twice or in two different files.Code:#ifndef HEADER_H #define HEADER_H 1 /* header file code here */ #endif.
dwks,
What is the point in adding advice to a 6 week old thread, especially given that the original poster hasn't posted anything since the last reply to this thread before yours?
See the guidelines (number 5). | http://cboard.cprogramming.com/c-programming/72221-c-question.html | CC-MAIN-2014-10 | refinedweb | 215 | 75.4 |
Manpage of WCSNCMP
WCSNCMPSection: Linux Programmer's Manual (3)
Updated: 2015-08-08
Index
NAMEwcsncmp - compare two fixed-size wide-character strings
SYNOPSIS
#include <wchar.h>int wcsncmp(const wchar_t *s1, const wchar_t *s2, size_t n);
DESCRIPTIONThe wcsncmp() function is the wide-character equivalent of the strncmp(3) function. It compares the wide-character string pointed to by s1and the wide-character string pointed to by s2, but at most nwide characters from each string. In each string, the comparison extends only up to the first occurrence of a null wide character (Laq\0aq), if any.
RETURN VALUEThe wcsncmp() function returns zero if the wide-character strings at s1For an explanation of the terms used in this section, see attributes(7).
CONFORMING TOPOSIX.1-2001, POSIX.1-2008, C99.
SEE ALSOstrncmp(3), wcsncasecmp(3)
Index
This document was created by man2html, using the manual pages.
Time: 16:30:23 GMT, October 09, 2016 Click Here! | https://www.linux.com/manpage/man3/wcsncmp.3.html | CC-MAIN-2016-44 | refinedweb | 155 | 52.29 |
Ever since NodeJS' inception, we've been able to execute JavaScript code outside of the browser. But NodeJS did a lot more than just that: it opened up a way to write server-side code in JavaScript itself, and along with it came the ability to manipulate the host system's file system.
Photo by Maksym Kaharlytskyi on Unsplash
NodeJS introduced the `fs` module, which allows you to do synchronous or asynchronous I/O operations, and it's available out of the box.
Getting Started
Make sure you have Node installed on your system; if not, you can head over to Node's official site and download it from there. Now, with that installed, we're ready to do some file-based operations.
To use `fs`, we can use the code below. If you're using CommonJS, use this line of code.
```js
const fs = require('fs')
```
If you're using ES modules, you can import it like this.
```js
import fs from 'fs'
```
Now, each of the operations we'll be learning has both synchronous and asynchronous methods. All the synchronous methods have `Sync` as a suffix. All the asynchronous methods take a callback as their last argument, which gives us an `error` as the first argument and `data` as the second argument, containing the result that some of the operations return. With that being said and done, let's do some operations.
CRUD operations
Using the `fs` module, we can implement the following operations:
- Create
- Read
- Update
- Rename
- Delete
Create File
In order to create a new file, we can use `fs.writeFile` or `fs.writeFileSync`.
Synchronous Method
This method takes three arguments:
- file - path of the file where it would be stored
- data - content to store inside the file, can be a `string` or a `buffer`
- options - an object containing key-values for configuration, for ex. `encoding`
The return value for this method is `undefined`.
```js
fs.writeFileSync('./example.txt', 'example content')
```
By default, the encoding for data of string type would be `utf8`, and if a different encoding is required, pass it using the third argument, named `options`.
Asynchronous Method
This method takes the same arguments as the synchronous method, except it lets you pass a callback.
```js
fs.writeFile('./example.txt', 'example content', (error) => {
  if (error) return console.log(error)
  console.log('The file has been saved!')
})
```
Read File
Now, if we want to read the content of the `example.txt` file that we just created, we can use either `fs.readFile` or `fs.readFileSync`.
Synchronous Method
This method takes just one argument, i.e. the path of the file where it's stored, and returns the contents stored in that file. The content could be either of type `string` or `buffer`. With the buffer type, simply convert it to a string using the `toString()` method.
```js
const data = fs.readFileSync('./example.txt')
// data - "example content"
```
Asynchronous Method
```js
fs.readFile('./example.txt', (error, data) => {
  if (error) return console.log(error)
  console.log(data)
})
// data - "example content"
```
Update File
Now that we have access to the content of the file, and we want to update it because there's a typo you made (or maybe I did, which is perfectly normal), you can use the methods `fs.writeFile` or `fs.writeFileSync` again to overwrite your data.
Synchronous Method
This method just returns `undefined`, because in case your file doesn't exist, it'll create a new one using the path itself and store the content in that file.
```js
fs.writeFileSync('./example.txt', 'example content')
```
Asynchronous Method
```js
fs.writeFile('./example.txt', 'example content', (error) => {
  if (error) return console.log(error)
  console.log('The file has been updated!')
})
```
Rename File
This method can be used for two purposes, i.e. renaming a file/folder or moving a file/folder from one folder to another. The most likely error it will throw is when the new name provided is an existing folder; in case it's an existing file, it will be overwritten. It will also throw an error if the folder you're moving the file to does not exist.
Synchronous Method
This method just takes two arguments: `oldPath` and `newPath`. Returns `undefined` if the operation was successful. Throws an error if the destination folder in `newPath` doesn't exist or `newPath` is a folder.
fs.renameSync('./example.txt', './example1.txt')
Asynchronous Method
This method has similar signature as the synchronous one with an extra callback, giving us an
error object that can be logged.
fs.rename('./example.txt', './example1.txt', (error) => { if(error) console.log(error); console.log('The file has been renamed!') })
Delete File
The methods we have for deleting a file are
fs.unlink and
fs.unlinkSync. The most likely error it could throw is if the file you're trying to delete doesn't exist.
Synchronous Method
This version just takes a path of type string or buffer or a URL. Returns
undefined if there are no errors.
fs.unlinkSync('./example1.txt')
Asynchronous Method
This version takes a path and callback as arguments. Callback gets just the
error argument that can be used to log the error.
fs.unlink('./example1.txt', (error) => { if(error) console.log(error); console.log('The file has been deleted!') })
Validation
These methods can get the job done but they're not enough because any error thrown in production, if not catched will stop the server. For ex. when you update a file, you would not want to update a wrong file because you passed
tire instead of
tier considering they both exist for some reason. So what do we do, we bring in validation. Simple checks before performing any operations to validate if a file exists or not.
There's a method that
fs module provides for checking if a file/folder exists or not, named
existsSync. The asynchronous method for this has been deprecated.
const fileExists = fs.existsSync('./example1.txt') // fileExists - false
Now we can write our validation for file based operations.
Create File
Let's start by creating a function named
create and we'll pass both the
filePath and
content to it. We'll use
try catch to catch all the errors that could possibly be thrown.
const create = (filePath, content) => { try { const fileExists = fs.existsSync(filePath); if (fileExists) { throw { success: false, message: "The file already exist!" }; } else { fs.writeFileSync(filePath, content); return { success: true, message: "The file has been created!" }; } } catch (error) { return error; } }; create("./example.txt", "Example Content")
Read File
Similarly for reading a file, we can write function called
read and pass our
filePath to it. Before returning the content
const read = filePath => { try { const fileExists = fs.existsSync(filePath); if (fileExists) { const content = fs.readFileSync(filePath, 'utf8'); return { success: true, data: content }; } else { throw { success: false, message: "The file doesn't exist!" }; } } catch (error) { return error; } }; const content = read("./example.txt")
Update File
Before updating a file, we'll check if it exists or not and throw an error if it does not.
const update = (filePath, content) => { try { const fileExists = fs.existsSync(filePath); if (fileExists) { fs.writeFileSync(filePath, content); return { success: true, message: "The file has been updated!" }; } else { throw { success: false, message: "The file doesn't exist!" }; } } catch (error) { return error; } }; update('./example.txt', "New Example Content")
Rename File
With renaming a file, we'll have to make sure that both the path's i.e
oldPath and
newPath exists. In case you're trying to move a file, make sure the folder you're moving the file into also exists.
const rename = (oldPath, newPath) => { try { const oldFileExists = fs.existsSync(oldPath); const newFileExists = fs.existsSync(newPath); if (newFileExists) { throw { success: false, message: "The file you're trying to rename to already exist!" }; } if (oldFileExists) { fs.renameSync(oldPath, newPath); return { success: true, message: "The file has been renamed!" }; } else { throw { success: false, message: "The file you're trying to rename doesn't exist!" }; } } catch (error) { return error; } }; rename("./example.txt", "./example1.txt")
Delete File
Similarly for deleting a file, check if it exists and if it does then delete it or throw an error.
const unlink = filePath => { try { const fileExists = fs.existsSync(filePath); if (fileExists) { fs.unlinkSync(filePath); return { success: true, message: "The file has been deleted!" }; } else { throw { success: false, message: "The file doesn't exist!" }; } } catch (error) { return error; } }; unlink("./example1.txt")
Conclusion
These are basic operations you might need when you want manipulate file system. The
fs module contains a plethora of functions like these that you can make use of.
Here's the link for the documentation for
fs module on NodeJs website for reference.
Need to ask a quick question?
Ask away on my twitter @prvnbist
originally posted on my blog
Discussion (2)
The fs.promises approach makes file operations much more easier as everything is a promise, and promises are cool.
Read file examples (also with promises) => youtu.be/6sNisr1FLRY | https://practicaldev-herokuapp-com.global.ssl.fastly.net/prvnbist/file-based-operations-using-nodejs-48a5 | CC-MAIN-2021-17 | refinedweb | 1,458 | 58.99 |
95170/find-occurrences-word-with-more-than-vowels-file-using-regex
I'm having trouble figuring out how to find all words that have 2 or more vowels in them. So far this is what I have but when i run it, it don't give me any output. I appreciate the help.
import re
def main():
in_f = open("jobs-061505.txt", "r")
read = in_f.read()
in_f.close()
for word in read:
re.findall(r"\b[aAeEiIoOuU]*", read)
in_f = open("twoVoweledWordList.txt", "w")
in_f.write(word)
in_f.close()
print (word)
main()
for word in read: <--- iterating over chars in "read"!
re.findall(r"\b[aAeEiIoOuU]*", read) <-- using read again, discarding result
your iteration and pattern usage do not align. Plus, you don't use the result.
Consider processing the file line by line etc.
twovowels=re.compile(r".*[aeiou].*[aeiou].*", re.I)
nonword=re.compile(r"\W+", re.U)
file = open("filename")
for line in file:
for word in nonword.split(line):
if twovowels.match(word): print word
file.close()
If you want to find the value ...READ MORE
Hello @Khanhh ,
Use panda to find the value of ...READ MORE
following way to find length of string
x ...READ MORE
First, use the dataframe to match the ...READ MORE
You can also use the random library's ...READ MORE
Syntax :
list. count(value)
Code:
colors = ['red', 'green', ...READ MORE
can you give an example using a ...READ MORE
You can simply the built-in function in ...READ MORE
Hey, @S,
you can have a look at ...READ MORE
Yes it is possible. You can refer ...READ MORE
OR
Already have an account? Sign in. | https://www.edureka.co/community/95170/find-occurrences-word-with-more-than-vowels-file-using-regex?show=95171 | CC-MAIN-2021-21 | refinedweb | 274 | 71.82 |
CodePlexProject Hosting for Open Source Software
Listed in the help files, not in the source or the namespace.
Does this exist or is it just a tease?
It was removed in Json.NET 4.0 and the help files on my website haven't been updated. If you want to use it then you could get the source code for an earlier version of Json.NET and copy it into your project.
Thank you.
I had the last release of 3.5 but it wasn't in there either. I'll check further back. Any reason why it got the boot?
A couple of reasons. It was a random converter back from v1 of Json.NET that I just happened to use at the time, there is no reason why people couldn't write it themselves, and it was the only class which used the System.Drawing (?) assembly and I want to
keep the assembly dependencies low.
That makes sense.
Also, thank you very much for this very useful tool. It has saved me considerable time and effort. Much obliged.
Are you sure you want to delete this post? You will not be able to recover it later.
Are you sure you want to delete this thread? You will not be able to recover it later. | http://json.codeplex.com/discussions/241310 | CC-MAIN-2017-39 | refinedweb | 216 | 86.91 |
Functions
When you want to bundle up a reusable chunk of code in Magpie, you'll usually use a method. But sometimes you want a chunk of code that you can pass around like a value. For that, you'll use a function. Functions are first-class objects that encapsulate an executable expression.
Creating Functions
Functions are defined using the
fn keyword followed by the expression that forms the body of the function.
fn print("I'm a fn!")
This creates an anonymous function that prints
"I'm a fn!" when called. The body of a function can be a single expression like above or can be a block.
fn print("First!") print("Second!") end
Parameters
To make a function that takes an argument, put a pattern for it in parentheses after the
fn keyword.
fn(name, age) print("Hi, " + name + ". You are " + age + " years old.")
Like with methods, any kind of pattern can be used here. Go crazy.
Implicit Parameters
When programming in a functional style, you often have lots of little functions that just call a method or do some trivial expression. Here's a line of code to pull the even numbers from a collection:
val evens = [1, 2, 3, 4, 5] where(fn(n) n % 2 == 0)
To make this a little more terse, Magpie supports implicit parameters. The above code can also be written:
val evens = [1, 2, 3, 4, 5] where(fn _ % 2 == 0)
Note that the parameter pattern is gone, and
n in the body has been replaced with
_.
The rule for implicit parameters is pretty simple. If a function has no parameter pattern, then a pattern will be created for it. Every
_ that appears in the body of the function will be replaced with a unique variable for each occurrence. Then a pattern will be created that defines those variables in the order that they appear.
The "unique variable" and "order that they appear" parts are important here, since you can have multiple implicit parameters. When you do, each
_ becomes its own parameter for the function.
fn (_ + _) / _
This creates a function with three separate implicit parameters. It's equivalent to:
fn(a, b, c) (a + b) / c
Implicit parameters can help code be more readable when the function body is small and the parameters are obvious from the surrounding context. But they can also render your code virtually unreadable (like the above example here) otherwise. Like all pointy instruments, wield it with care.
Calling Functions
Once you have a function, you call it by invoking the
call method on it. The left-hand argument is the function, and the right-hand argument is the argument passed to the function.
var greeter = fn(who) print("Hi, " + who) greeter call("Fred") // Hi, Fred
If a function doesn't take an argument, then there won't be a right-hand argument to
call.
var sayHi = fn print("Hi!") sayHi call
Like methods, the argument pattern for a function may include tests. If the argument passed to
call doesn't match the function's pattern, it throws a
NoMethodError.
var expectInt = fn(n is Int) n * 2 expectInt call(123) // OK expectInt call("not int") // Throws NoMethodError.
If you pass too many arguments to a function, the extra ones will be ignored.
var takeOne = fn(n) print(n) takeOne("first", "second") // Prints "first".
However, if you pass too few, it will throw a
NoMethodError.
var takeTwo = fn(a, b) print(a + b) takeOne("first") // Throws NoMethodError.
Returning Values
A function automatically returns the value that its body evaluates to. An explicit
return is not required:
var name = fn "Fred" print(name call) // Fred
If the body is a block, the result is the last expression in the block:
var sayHi = fn print("hi") "result" end sayHi call // Prints "hi" then returns "result".
If you want to return before reaching the end of the function body, you can use an explicit
return expression.
var earlyReturn = fn(arg) if arg == "no!" then return "bailed" print("got here") "ok" end
This will return
"bailed" and print nothing if the argument is
"no!". With any other argument, it will print
"got here" and then return
"ok".
A
return expression with no expression following the keyword (in other words, a
return on its own line) implicitly returns
nothing.
Closures
As you would expect, functions are closures: they can access variables defined outside of their scope. They will hold onto closed-over variables even after leaving the scope where the function is defined:
def makeCounter() var i = 0 fn i = i + 1 end
Here, the
makeCounter method returns the function created on its second line. That function references a variable
i declared outside of the function. Even after the function is returned from
makeCounter, it is still able to access
i.
var counter = makeCounter() print(counter call // Prints "1". print(counter call) // Prints "2". print(counter call) // Prints "3".
Callables
The
call method used to invoke functions is a regular multimethod with a built-in specialization for functions. This means you can define your own "callable" types, and specialize
call to act on those. With that, you can use your own callable type where a function is expected and it will work seamlessly. | http://magpie.stuffwithstuff.com/functions.html | CC-MAIN-2017-30 | refinedweb | 878 | 72.46 |
$26.00
By Jared Diamond $36.00
$18
By.
Advertisement
The shift in results is encouraging, but state politicians are not yet rushing to Kopplin’s cause in sufficient numbers. The religious lobby, which is aligned with such national groups as Focus on the Family, has the power to elect and defeat political candidates. “Republican politicians are under pressure to vote with the religious right,” Kopplin said in a telephone interview with Truthdig on Friday. “If one of these politicians supports legislation that opposes creationism, they’ll be faced with someone who is further to the right than them in the next primary.”
Louisiana lawmakers and other state officials don’t just have the voting public to fear. Gov. Bobby Jindal has developed a reputation for punishing those who oppose him. According to New Orleans-based nonprofit news provider The Lens, this year Jindal removed Martha Manuel, the executive director of the Governor’s Office of Elderly Affairs, less than 24 hours after she publicly questioned one of his decisions. He appears to have pressed the resignation of Cynthia Bridges, a longtime secretary at the Department of Revenue, after she produced a tax report that contradicted his plans. And Jindal engineered the removal of several top officials at Louisiana State University who were “all seen as obstacles to privatizing the university’s hospital system.”
But Louisiana is not without officials willing to endure the pressure of well-funded interest groups and a vindictive governor. On Tuesday, the Orleans Parish School Board voted unanimously to ban the teaching of creationism and intelligent design in its schools, and included a direct rejection of the Texas Board of Education’s 2010 decision to tailor textbooks to conservative tastes. “No history textbook shall be approved which has been adjusted in accordance with the State of Texas revisionist guidelines,” the parish board wrote in its decision, “nor shall any science textbook be approved which presents creationism or intelligent design as science or scientific theories.”
The outcome affects six schools that operate within the parish. Along with the New Orleans City Council’s decision to reject the Louisiana Science Education Act, the city has now fully banned creationism from public classrooms.
Kopplin, now a history student in his second year at Rice University in Houston, has already received two honors for his efforts. He won a National Center for Science Education 2012 Friend of Darwin Award and was granted the 2012 Hugh M. Hefner First Amendment Award in Education. “This is a big victory for reason,” wrote Truthdig Editor-in-Chief Robert Scheer—who was on the jury for the Hefner award—of Tuesday’s result in New Orleans.
When asked why he believed that he, a mere high school student, could bring about such major change at the state and local level, Kopplin said: “I just thought it was right. I didn’t think about the opposition. I’m used to my attempts at things not working. I may lose nine out of 10 times, but eventually we will win this.”
Kopplin has been praised and reviled for his work. Skeptics and scientific groups applaud his endeavors, while some on the other side blame him for Hurricane Katrina. For building such stellar momentum in the direction of reason and progress, and for his commitment to carry on, we honor Zack Kopplin as our Truthdigger of the Week.
Zack Kopplin:
GateKeeper50hotmail:
Get truth delivered to
your inbox every week.
Previous item: What Americans Should Learn From the ‘Republican Apocalypse’
Next item: Where Are We Heading—Bedford Falls or Pottersville?
If you have trouble leaving a comment, review this help page. Still having problems? Let us know. If you find yourself moderated, take a moment to review our comment policy.
Newsletter
Like Us
Get Our Feed
Like Truthdig on Facebook | http://www.truthdig.com/report/item/truthdigger_of_the_week_zack_kopplin_20121222?ln | CC-MAIN-2014-15 | refinedweb | 633 | 53.1 |
How to Display Images on 2.4inch TFT and Make It a Digital Photoframe
Introduction: How to Display Images on 2.4inch TFT and Make It a Digital Photoframe
I really had this desire to build a digital photoframe from last three ,Until This January when i got this tft lcd touch module .I was excited but when i looked on the internet found very few (sorry none) help regarding it only JoaoLopesF instructable was there but the bitmap sketch had not been working so i cracked it as a challenged debugged it and made a working sketch .
Digital photo frame are quite popular now days.
but cost a hell lot of money with this DIY it cost less 25 $ to make one .
Most importantly its easy to add new file and even with a 1 gb card i can store more 5k photos.
Also like my page for support
Step 1: Incredients
I still give the items you shall be needing
- Arduino UNO
- 2.4 inch TFT LCD shield
- micro SD card
- card reader
- DC adapter for power (i am using a custom made lithium ion power source )
Step 2: What Is TFT LCD ?
TFT LCD stands for thin-film-transistor liquid-crystal display.It.
When no voltage is applied, the molecule structures are in their natural state and twisted by 90 degrees. The light emitted by the back light can then pass through the structure.
If a voltage is applied, i.e. an electric field is created, the liquid crystals are twisted so that they are vertically aligned. The polarized light is then absorbed by the second polarizer. Light can therefore not leave the TFT display at this location.
Step 3: Using Th SD Card Shield and Code
Its good to have a SD card shield when you have a lot of file to read or write,
The 2.4 inch LCD touchscreen module has a inbuilt SD card module .
The SD card uses the SPI bus for interfacing with the arduino.
For the working of the SD card you need to call the SD library
Formatting the SD card
Connect the 5V pin to the 5V pin on the Arduino
Connect the GND pin to the GND pin on the Arduino
Connect CLK to pin 13 or 52Connect DO to pin 12 or 50
Connect DI to pin 11 or 51Connect CS to pin 10 or 53
Code
Thanks to adafruit for its library and JoaoLopesF for the modified library
all his sketch work except the bitmap once i corrected it
Step 4: Setting Up the Bitmap Image
Arduino supports bitmap images so i needed to convert my jpeg into bmp files.
You can do these easily with Photoshop any version
- Open Photoshop
- Create a canvas of 240 x 320
- drag the image you want in the Photoshop
- adjust it
- Save it as .bmp file
- the bitrate should be 24
Once saved the file should be moved in the sd card
is there anyway to make it display multiple images?
STM32 based BMP Display on TFT LCD
Please upload the libraries you have used
2.4" is pretty small; any way to scale this up? I happen to have access to several larger displays, 7" and 10.1" that I have procured...any ideas how to adapt the project to these?
they have 7 inch lcd shield work the same ways just need the proper library
cool, thanks! I actually was thinking of making my own shield....and I really have a lot more 10.1 and 10.2" LCDs....looks like I'll have a bit of work to do to figure this out! Great idea though; been wanting to do this for a while (as long as I have had those spare LCDs), but this is the kick in the pants I needed to start.
let me know how it looked :)
Did u use SDcard shield OR just insert the SDcard into SDcard module in TFT LCD? Thx sir.
I use the SD card on the TFT display with a 32 gb card. Next up is to add jpeg support and image scaling..
the shield comes with a pre board sd card reader
Nice work
Thanks bro
Thanks bro
I found the TFT screen and Uno on Banggood.com about a month ago and over the weekend I was messing with the pair and found the tftbmp draw code in the demo.. I extended it with the ability to read any bmp file on the SD card.. so all you do is put your bitmaps on the SD and plug it in.. Having to add/edit/recompile/reload the Uno everytime is BS... Here is my code:
// BMP-loading example specifically for the TFTLCD breakout board.
// If using the Arduino shield, use the tftbmp_shield.pde sketch instead!
// If using an Arduino Mega make sure to use its hardware SPI pins, OR make
// sure the SD library is configured for 'soft' SPI in the file Sd2Card.h.
#include <Adafruit_GFX.h> // Core graphics library
#include <Adafruit_TFTLCD.h> // Hardware-specific library
#include <SD.h>
#include <SPI.h>
int COMM = 19200; // Speed of our comm port - SET THIS to match your setup
int SLEEP = 5000; // Sleep time in milliseconds between pictures - Set this to make you happy..
// For Arduino Uno/Duemilanove, etc
// connect the SD card with DI going to pin 11, DO going to pin 12 and SCK going to pin 13 (standard)
// Then pin 10 goes to CS (or whatever you have set up)
// In the SD card, place 24 bit color BMP files (be sure they are 24-bit!)
// There are examples in the sketch folder
/* * SD card attached to SPI bus as follows: */
#define MOSI 11 // Not used yet in this code
#define MISO 12 // Not used yet in this code
#define SCLK 13 // Not used yet in this code
#define SD_CS 10 // Set the chip select line to whatever you use (10 doesnt conflict with the library)
//
//).
// our TFT wiring
Adafruit_TFTLCD tft(LCD_CS, LCD_CD, LCD_WR, LCD_RD, A4);
File root;
//.
#define BUFFPIXEL 20
void bmpDraw(char *filename, int x, in buffer (R+G+B per pixel)
uint16_t lcdbuffer[BUFFPIXEL]; // pixel out buffer (16-bit per pixel)
uint88_t lcdidx = 0;
boolean first = true;
if((x >= tft.width()) || (y >= tft.height())) return;
int i = 0;
int j = 0;
tft.setRotation(i);
tft.fillScreen(j);
Serial.println();
Serial.print(F("Loading image '"));
Serial.print(filename);
Serial.println('\'');
// Open requested file on SD card
if ((bmpFile = SD.open(filename)) == NULL) {
Serial.println;
}
//);
}
Serial.print(F("Loaded in "));
Serial.print(millis() - startTime);
Serial.println(" ms");
} // end goodBmp
}
}
bmpFile.close();;
}
void setup()
{
// Open serial communications and wait for port to open:
Serial.begin(COMM);
while (!Serial) {
; // wait for serial port to connect. Needed for native USB port only
}
Serial.print("Initializing SD card...");
if (!SD.begin(SD_CS)) {
Serial.println("initialization failed!");
return;
}
Serial.println("initialization done.");
tft.reset();;
}
tft.begin(identifier);
delay(1000);
}
void printDirectory(File dir) {
int k = 0;
dir.rewindDirectory();
while (true) {
File entry = dir.openNextFile();
if (! entry) {
// no more files
Serial.println("Last File Reached");
entry.close();
break;
}
if (entry.isDirectory()) {
Serial.println("/");
}
bmpDraw(entry.name(),k,k);
entry.close();
delay(SLEEP);
}
}
void loop()
{
while (true) {
root = SD.open("/");
printDirectory(root);
// Serial.println("End"); // Loop Debug to console
}
}
Great show us a pic would be awesome. | http://www.instructables.com/id/How-to-Display-Images-on-24inch-TFT-and-Make-It-a-/ | CC-MAIN-2017-39 | refinedweb | 1,218 | 72.76 |
A data type classifies various types of data eg. String, integer, float, boolean, the types of accepted values for that data type, operations that can be performed on the data type, the meaning of the data, and the way that data of that type can be stored.
The table below shows the most commonly used data types used in the Java programming language.
The table below shows some of the other data types used in the Java programming language.
Sample code
The sample Java code below shows how some of the different data types can be stored in variables. Later on, we will look at how to actually work with the values of different data types (eg. math calculations with integers and floats, and decision making with booleans).
The code includes comments explaining each data type.
package myjavaproject; public class DataTypes { public static void main (String[] args){ String message = "Hello"; // variable of String data type char letter = 'a'; // variable of char data type int number = 20; // variabe of int (integer) data type float decimal = 43.65f; // variable of float (floating point) data type boolean result = true; // variable of Boolean data type // now let's output the values of the different variables System.out.println("Message is " + message); System.out.println("Letter is " + letter); System.out.println("Age is " + number); System.out.println("Score is " + decimal); System.out.println("The answer is " + result); } } | https://www.codemahal.com/video/data-types-java/ | CC-MAIN-2020-24 | refinedweb | 232 | 56.96 |
How to declare natural ordering by implementing the generic IComparable interface in C# .NET
April 29, 2015 Leave a comment
Primitive types such as integers can be ordered naturally in some way. Numeric and alphabetical ordering comes in handy with numbers and strings. However, there’s no natural ordering for your own custom objects with a number of properties.
Consider the following Triangle class:
public class Triangle { public double BaseSide { get; set; } public double Height { get; set; } public double Area { get { return (BaseSide * Height) / 2; } } }
Imagine that you want to be able to declare that one triangle is larger or smaller than another one in a natural way. One way to declare natural comparison is by implementing the generic IComparable interface which comes with one method, CompareTo, which returns an integer. The method should return 0 if the compared instances are equal. It should return an integer larger than 0 if “this” current instance is larger/higher/longer etc. than the instance it is compared to. The reverse is true if “this” instance is ranked behind the other instance.
Here comes an example for the Triangle class:
public class Triangle : IComparable<Triangle> { public double BaseSide { get; set; } public double Height { get; set; } public double Area { get { return BaseSide * Height; } } public int CompareTo(Triangle other) { if (other == null) return 1; if (Area > other.Area) return 1; return -1; } }
…and here’s how you can call the CompareTo method:
Triangle tOne = new Triangle() { BaseSide = 4, Height = 2 }; Triangle tTwo = new Triangle() { BaseSide = 5, Height = 3 }; if (tOne.CompareTo(tTwo) > 0) { Console.WriteLine("tOne is larger than tTwo: {0} vs. {1}", tOne.Area, tTwo.Area); } else { Console.WriteLine("tOne is smaller than tTwo: {0} vs. {1}", tOne.Area, tTwo.Area); }
As tOne is smaller than tTwo the control flow will take the “else” path.
As the CompareTo method is by default implemented on numeric types – and strings – we can have a simpler implementation of the EqualTo method:
public int CompareTo(Triangle other) { return Area.CompareTo(other.Area); }
We’ll soon look at a related interface, the generic IComparer of T which provides for more sophisticated ordering logic.
View all various C# language feature related posts here. | https://dotnetcodr.com/2015/04/29/how-to-declare-natural-ordering-by-implementing-the-generic-icomparable-interface-in-c-net/ | CC-MAIN-2021-49 | refinedweb | 363 | 54.12 |
Getting started
In this article you will develop a data-centric web application using some of the latest server-side Java technologies and the Dojo toolkit for creating a rich user interface. These technologies significantly reduce the amount of code you have to write, on both the server and client sides. Familiarity with Java and JavaScript is recommended to get the most out of this article. You will need a Java 1.6 JDK to compile and run the code; JDK 1.6.0_20 was used in this article. You will also need a Java Web container; Apache Tomcat 6.0.14 was used in this article. For data persistence, any database with a JDBC 2.0 compliant driver can be used. To keep things simple, an embedded database, Apache Derby 10.6.1, was used. This article uses the Java API for RESTful Web Services (JAX-RS), with Jersey 1.3 for the JAX-RS implementation. You will also use the Java Persistence API (JPA) with Hibernate 3.5.3 for the implementation. Finally, the Dojo toolkit 1.4 was used as well. See Resources for links to these tools.
Data on the fly with the Java Persistence API
Many web applications are data centric—they present persistent data and allow the user to create or update this data. It sounds simple enough, but even when it comes to something as basic as reading and writing data from a database, things can get ugly. However, the Java Persistence API (JPA) greatly reduces the amount of tedious boilerplate code that you must write. We will take a look at a simple example of using JPA.
In this article you will develop a simple application for managing a youth
soccer league. You will start by developing a simple data model for
keeping track of the teams in your league and the players on those teams.
You will use JPA for all access to this data. You will start with the
first of two data models, a
Team. Listing 1 shows this class.
Listing 1. The
Team data model class
@Entity
public class Team {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private long id;

    private String name;

    @OneToMany
    private Collection<Player> players;

    // getters and setters
}
This is a typical JPA-annotated class. You use the
@Entity annotation to declare that this class
will be mapped to a database table. You could optionally specify the name of the
table for the class, or rely on the convention that the table name matches the
name of the class. Next, you annotate the
id
field of the class. You want this to be the primary key for your table, so
use the
@Id annotation to declare this. The
id
is not important from a business logic perspective; you just need it for
the database. Since you want the database to take care of coming up with
its values, use the
@GeneratedValue
annotation.
In Listing 1, you also declare another field, the
name field. This will be
the name of the team. Notice there are no JPA annotations on this field.
By default this will be mapped to a column of the same name, and that is
good enough for the purposes of this article. Finally, each team will have
multiple players associated with it. You use the
@OneToMany annotation to let the JPA runtime
know that this is a managed relation with one team having many players. In
your Java class, this is just a
java.util.Collection of
Player object. Listing 2
shows the
Player class being referenced.
Listing 2. The
Player data model class
@Entity
public class Player {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private long id;

    private String firstName;

    private String lastName;

    private int age;

    @ManyToOne(cascade = CascadeType.ALL)
    private Team team;

    // getters and setters
}
The
Player class shown in Listing 2 is similar
to the
Team class in Listing 1. It has more
fields, but again in most cases you won't need to worry about annotating
these fields. JPA will do the right thing for you. The one difference
between Listing 1 and Listing 2 is how you specify the
Player class's relationship to the
Team class. In this case you use a
@ManyToOne annotation, as there are many
Players on one
Team.
Notice that you also specified a cascade policy. Take a look at some of
the JPA documentation to pick the right cascade policy for your
application. In this case, with this policy you can create a new
Team and a
Player at
the same time and JPA will save both, which is convenient for your
application.
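To make the cascade behavior concrete, here is a minimal sketch using plain POJOs that mirror the two entities (no JPA runtime or database involved; the class and field names follow the listings, but the helper method is invented for illustration). With cascade=CascadeType.ALL on the relation, persisting the player built below would also persist the brand-new team:

```java
import java.util.ArrayList;
import java.util.Collection;

// Plain stand-ins for the annotated entities; this only shows the object
// graph that a single persist() call would save when the cascade is set.
public class CascadeSketch {

    static class Team {
        String name;
        Collection<Player> players = new ArrayList<>();
    }

    static class Player {
        String firstName;
        Team team;
    }

    // Hypothetical helper: build a new Player on a brand-new Team,
    // wiring both sides of the relation before any save happens.
    static Player newPlayerOnNewTeam(String teamName, String firstName) {
        Team team = new Team();
        team.name = teamName;
        Player player = new Player();
        player.firstName = firstName;
        player.team = team;        // the @ManyToOne side
        team.players.add(player);  // the @OneToMany side
        return player;
    }

    public static void main(String[] args) {
        Player p = newPlayerOnNewTeam("Rovers", "Ada");
        System.out.println(p.team.name + " has " + p.team.players.size() + " player");
    }
}
```

In the real application, a single `mgr.persist(player)` inside a transaction would then insert both rows, thanks to the cascade policy.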
Now that you have declared your two classes, you just need to tell the JPA runtime how to connect to your database. You do this by creating a persistence.xml file. The JPA runtime needs to find this file and use the metadata in it. The easiest way to do this is to put it into a /META-INF directory that is a subdirectory of your source code (it just needs to be in the root of the directory where your compiled classes are output). Listing 3 shows the persistence.xml file.
Listing 3. The persistence.xml for the soccer app
<persistence version="1.0" ...>
    <persistence-unit ...>
        <class>org.developerworks.soccer.model.Team</class>
        <class>org.developerworks.soccer.model.Player</class>
        <properties>
            <property name="hibernate.dialect" value="org.hibernate.dialect.DerbyDialect" />
            <property name="hibernate.connection.driver_class" ... />
            <property name="hibernate.connection.url" value="jdbc:derby:soccerorgdb;create=true" />
            <property name="hibernate.hbm2ddl.auto" value="update" />
            <property name="hibernate.show_sql" value="true" />
            <property name="hibernate.connection.characterEncoding" value="UTF-8" />
            <property name="hibernate.connection.useUnicode" value="true" />
        </properties>
    </persistence-unit>
</persistence>
Looking back at Listings 1 and 2, all of the code is generic JPA code. All you ever use are JPA annotations and some of its constants. There is nothing specific to your database or the JPA implementation you used. As you can see from Listing 3, the persistence.xml file is where those specific things are found. Several excellent JPA implementations are available, including OpenJPA and TopLink (see Resources). Here you use the venerable Hibernate, so you have specified several Hibernate-specific properties. These are mostly straightforward things like the JDBC driver and URL, plus some useful extras like telling Hibernate to log the SQL that it is executing (something you would definitely not want to do in a production situation, but it is great for debugging during development).
You will also notice from Listing 3 that you are using the Apache Derby
database. In fact, you are using an embedded version of the database. So,
you do not have to separately start up your database or worry about
configuring it. Further, you have specified in the connection URL that the
database should be created automatically, and you have told Hibernate to
automatically create the schema (this is the
hibernate.hbm2ddl.auto property). So if you
just run your application, the database and the tables can all be created
for you. This is great for development, but of course you may want
different settings for a production system. Now that you have all of your
data model code created and you have enabled access through JPA, we'll
take a look at exposing this data so that a web application can take
advantage of it.
RESTful access to data with JAX-RS
If you were creating this application five years ago, you would now start
creating some JavaServer Pages (JSPs), JavaServer Faces (JSFs), or some
other similar templating technology. Instead of creating the UI for this
application on the server, you are going to use Dojo to create it on the
client. All you need to do is provide a way for your client-side code to
access this data using Ajax. You can still use a templating solution for
something like this, but it is much simpler to use the Java API for
RESTful Web Services (JAX-RS). Let's start by creating a class for reading
all of the
Teams in the database and for
creating new
Teams. Listing 4
shows such a class.
Listing 4. Data access class for
Teams
@Path("/teams")
public class TeamDao {

    private EntityManager mgr = DaoHelper.getInstance().getEntityManager();

    @GET
    @Produces("application/json")
    public Collection<Team> getAll() {
        TypedQuery<Team> query = mgr.createQuery("SELECT t FROM Team t", Team.class);
        return query.getResultList();
    }

    @POST
    @Consumes("application/x-www-form-urlencoded")
    @Produces("application/json")
    public Team createTeam(@FormParam("teamName") String teamName) {
        Team team = new Team();
        team.setName(teamName);
        EntityTransaction txn = mgr.getTransaction();
        txn.begin();
        mgr.persist(team);
        txn.commit();
        return team;
    }
}
Listing 4 shows a data access object class, hence the name
TeamDao. We will get to the annotations on
this class shortly, but let me first explain the data access. The class has
a reference to the JPA class
EntityManager.
This is a central class in JPA and provides access to the underlying
database. For your first method that retrieves all of the teams in the
league, use the
EntityManager to create a
query. The query uses JPA's query language, which is very similar to SQL.
This query simply gets all of the
Teams. For
the second method, you simply create a new
Team
using the name of the team that is passed in, create a transaction, save
the new team, and commit the transaction using the
EntityManager. All of this code is vanilla JPA
code, as all of these classes and interfaces are part of the base API.
Now that you understand the JPA part of Listing 4, let's talk about the
JAX-RS aspects of it. The first thing you will notice is that you use the
@Path annotation to expose this to HTTP-based
clients. The
/teams string specifies the
relative path to this class. The full URL path is going to be
<host>/SoccerOrg/resources/teams. The
/SoccerOrg will specify the path to your web
application (of course, you can configure this to be something different,
or remove this completely). The
/resources part
will be used to specify a JAX-RS end point. The
/teams corresponds to the
@Path annotation and
specifies which of the JAX-RS classes to use.
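The way the pieces of the URL combine can be sketched as simple string concatenation (this helper is purely illustrative and not part of JAX-RS):

```java
public class EndpointPath {

    // Context path + servlet mapping + @Path value, as described above.
    static String endpoint(String contextPath, String servletMapping, String resourcePath) {
        return contextPath + servletMapping + resourcePath;
    }

    public static void main(String[] args) {
        // prints /SoccerOrg/resources/teams
        System.out.println(endpoint("/SoccerOrg", "/resources", "/teams"));
    }
}
```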
Next, the first method,
getAll, has a
@GET annotation on it. This specifies that this
method should be invoked if an HTTP
GET request
is received. Next, the method has a
@Produces
annotation. This declares the MIME type of the response. In this case, you
want to produce JSON, since that is the easiest thing to use with a
JavaScript-based client.
This is all you have to do to use JAX-RS to expose this class to web
clients. However, you might be asking yourself: If this method returns a
java.util.Collection of
Team objects, how will this be sent to web
clients? The
@Produces annotation
declares that you want it to be sent as JSON, but how will the JAX-RS runtime
serialize this into JSON? It turns out that all you need to enable this is
to add one more annotation to the
Team class as
shown in Listing 5.
Listing 5. Modified
Team class
@XmlRootElement
@Entity
public class Team {
    // unchanged from Listing 1
}
By adding the
@XmlRootElement annotation,
JAX-RS can now turn this class into a JSON object. You might recognize
this annotation. It is not part of JAX-RS; it is instead part of the Java
Architecture for XML Binding (JAXB) API that is part of the core Java 1.6
platform. This annotation would seem to indicate that it is for XML, but it
can in fact be used for various JAXB outputs including JSON. There are
many other JAXB annotations, but this is the only one that you need to use
in this case. It will simply use conventions for serializing all of the
fields of the
Team class to JSON.
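As an illustration, the JSON produced for the team collection might look like the fragment below. The id and name values are made up, and the exact notation depends on how the JAXB/JSON support is configured, but note the single team property wrapping the array — the client-side code later in the article iterates over data.team:

```json
{
  "team": [
    { "id": "1", "name": "Rovers" },
    { "id": "2", "name": "United" }
  ]
}
```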
Now go back to Listing 4 and take a look at the second method of the
class, the
createTeam method. This method uses the
@POST annotation to declare that it should
be invoked when an HTTP
POST request is
received. Next, it uses the
@Consumes
annotation to declare what kind of
POST request
it can consume. The value specified here corresponds to the
content-type header of the HTTP request. In this case it is specified as
x-www-form-urlencoded. This is the type you
will receive when an HTML form is submitted. Thus, this method will be
invoked when an HTML form is submitted with the /SoccerOrg/resources/teams
end point. Finally, notice that the method takes a single input parameter,
a string called
teamName. Notice that this
parameter is decorated with the
@FormParam
annotation. This tells the JAX-RS runtime to look for a form parameter in
the body of the request whose name is
teamName
(the value of the annotation) and bind that to a variable passed into the
invocation of this method. With this you can easily handle a simple form
submission and wire it up to your code. This could get messy if you had a
lot of data being submitted. In such a case, you might want to use a more
structured approach. Listing 6 shows an example for creating a
Player object.
Listing 6. Handling structured
POST data using JAX-RS
@Path("/players")
public class PlayerDao {

    private EntityManager mgr = DaoHelper.getInstance().getEntityManager();

    @POST
    @Consumes("application/json")
    @Produces("application/json")
    public Player addPlayer(JAXBElement<Player> player) {
        Player p = player.getValue();
        EntityTransaction txn = mgr.getTransaction();
        txn.begin();
        Team t = p.getTeam();
        Team mt = mgr.merge(t);
        p.setTeam(mt);
        mgr.persist(p);
        txn.commit();
        return p;
    }

    @GET
    @Produces("application/json")
    public List<Player> getAllPlayers() {
        TypedQuery<Player> query =
            mgr.createQuery("SELECT p FROM Player p", Player.class);
        return query.getResultList();
    }
}
The
PlayerDao class in Listing 6 is very similar
to the
TeamDao class from Listing 4. The main
difference that you want to examine is its
addPlayer method. This handles HTTP
POST requests, similar to the
createTeam method in
TeamDao. However, it consumes
application/json—that is, it is expecting JSON data. This implies two
things. First, the request needs to specify a content-type of
application/json so that this method will be invoked. Next, the body of
the post should be JSON data. Notice that the input parameter of this
method is of type
JAXBElement<Player>,
that is, it is a JAXB wrapper around a
Player
object. That tells JAX-RS to automatically parse the posted data into a
JAXBElement wrapper, so you do not have to
bother writing any parsing code for this. Notice that in the body of the
method, it only takes one line of code to get a full
Player object that can then be used to save the
new
Player to the database using JPA.
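For example, a request body that this method could consume might look like the following (the values are hypothetical; the property names follow the fields of the Player and Team classes, and the team property carries the existing team that the new player joins):

```json
{
  "firstName": "Ada",
  "lastName": "Lovelace",
  "age": "27",
  "team": { "id": "1", "name": "Rovers" }
}
```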
The last thing you need to do to complete the JAX-RS story is show the configuration needed to wire all of it up. For this, you only need to modify the web.xml of your application. Listing 7 shows the application's web.xml.
Listing 7. Application's web.xml
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns:...>
    <display-name>SoccerOrg</display-name>
    <welcome-file-list>
        <welcome-file>index.html</welcome-file>
    </welcome-file-list>
    <servlet>
        <servlet-name>JAXRS-Servlet</servlet-name>
        <servlet-class>com.sun.jersey.spi.container.servlet.ServletContainer</servlet-class>
        <init-param>
            <param-name>com.sun.jersey.config.property.packages</param-name>
            <param-value>org.developerworks.soccer.model;org.developerworks.soccer.web</param-value>
        </init-param>
    </servlet>
    <servlet-mapping>
        <servlet-name>JAXRS-Servlet</servlet-name>
        <url-pattern>/resources/*</url-pattern>
    </servlet-mapping>
</web-app>
As you can see in Listing 7, your application has a single servlet declared. This is a servlet that is provided by Jersey, the JAX-RS implementation that you are using. You pass in a single initialization parameter to the servlet—the packages containing any classes that you want JAX-RS to know about. In this case you have a package where your data models are kept and a package where your data access objects are kept. You need the models to be discovered so that JAX-RS can convert them to JSON. Of course, you need the DAOs to be discovered so that JAX-RS can route requests to them. Finally, notice the servlet-mapping. This is where the /resources part of your URL paths is specified. Now you are ready to use all of this back-end code on the client to create a UI using Dojo.
Leveraging REST on the client with Dojo
The Dojo toolkit provides almost any kind of library or utility you might need for building the client side of your web application. You will see how it can help you when working with Ajax, forms, JSON, and creating UI widgets. (It can do much more than that; this just happens to be all you need for this simple example.) It is such a large system that you may want to download the full toolkit and do a custom build of it to get exactly what you need for your application. For this example application, you will instead use the Google Ajax APIs to access the various parts of the toolkit that you need. This is convenient and has some nice performance advantages, since Google's copies of Dojo are provided through Google's own highly efficient content delivery network (CDN).
Your application is data centric, so you need to start by adding some data
to it. We will use Dojo to create a UI for adding
Teams. Listing 8 shows all of
the code you need for this.
Listing 8. Adding
Teams using Dojo
<html>
<head>
<meta http-... />
<title>Test Harness</title>
<link rel="stylesheet" type="text/css" ... />
<script type="text/javascript" src="" djConfig="parseOnLoad: true"></script>
<script type="text/javascript">
    function init(){
        var btn = dijit.byId("addTeamBtn");
        dojo.connect(btn, "onClick", function(event){
            event.preventDefault();
            event.stopPropagation();
            dojo.xhrPost({
                form : dojo.byId("addTeamForm"),
                handleAs: "json",
                load : function(data){
                    addTeam(data);
                    alert("Team added");
                },
                error : function(error){
                    alert("Error adding team: " + error);
                }
            });
        });
    }
</script>
</head>
<body class="soria">
    Add a Team<br/>
    <form method="POST" action="/SoccerOrg/resources/teams" id="addTeamForm">
        <label for="teamName">Team Name:</label>
        <input name="teamName" type="text" id="teamName" dojoType="dijit.form.TextBox"/>
        <button type="submit" id="addTeamBtn" dojoType="dijit.form.Button">Add Team</button>
    </form>
    <script type="text/javascript">
        dojo.require("dijit.form.Button");
        dojo.require("dijit.form.TextBox");
        dojo.addOnLoad(init);
    </script>
</body>
</html>
Notice in Listing 8 that you reference the base Dojo library from Google's
CDN. Once you have that, you can then request each of the additional parts
of Dojo you want using the
dojo.require
function (see the script block at the bottom of Listing 8). Notice that
you just create a normal HTML form, but you use some extra Dojo-specific attributes. This tells Dojo to add some extra styling to the
visual elements and to add some extra capabilities to the corresponding
DOM elements. You tell Dojo to execute the
init
function once everything else (all of the Dojo components) is loaded. In
that function, you use the
dijit.byId function
to get a handle on the button in the form.
Dijit is Dojo's widget library. You could use
the
dojo.byId function to reference any DOM element
using its ID, but the similar
dijit.byId gives
you a widget with extra capabilities (if the element is marked as a
widget, which is the case for the button in Listing 8).
You then use Dojo to associate an event handler for when the button is
clicked. The handler stops form submission and uses Ajax instead through the
dojo.xhrPost function. This function makes it
easy to
POST HTML forms. It figures out the
Ajax end point by inspecting the HTML form's
action attribute. It also
reads all of the form elements and passes them to the Ajax
POST. When it gets a response back from the
server, it will invoke the
load function that
is passed to
xhrPost. Notice that you declared
that JSON will be returned by the server by setting the
handleAs property passed to the
xhrPost function. You will see the
addTeam function shortly, but you can pass in
the data object directly because Dojo has already safely parsed the JSON
data into a usable JavaScript object. This
addTeam function is used in conjunction with
another form, for adding
Players. Listing 9 shows the HTML for that form.
Listing 9. Add
Player form
Add a Player<br/>
<form id="addPlayerForm" action="/SoccerOrg/resources/players">
    <label for="firstName">First Name:</label>
    <input name="firstName" id="firstName" type="text" dojoType="dijit.form.TextBox"/>
    <label for="lastName">Last Name:</label>
    <input type="text" name="lastName" id="lastName" dojoType="dijit.form.TextBox"/><br/>
    <label for="age">Age:</label>
    <input type="text" name="age" id="age" dojoType="dijit.form.TextBox"/><br/>
    <label for="team">Team:</label>
    <select id="team" name="team" dojoType="dijit.form.Select"></select>
    <button type="submit" id="addPlayerBtn" dojoType="dijit.form.Button">Add Player</button>
</form>
<script type="text/javascript">
    dojo.require("dijit.form.Select");
    dojo.addOnLoad(loadTeams);
</script>
This form, like the one in Listing 8, is a valid HTML
form. However, it also has Dojo-specific attributes added to its elements. Notice that it
has a
SELECT element that will serve as a
drop-down list of the
Teams, so that the user can
pick which
Team to add the new
Player to. This is dynamic data that needs to
be loaded from the server. Notice that you added another function to be
called at startup—the
loadTeams function. This
is what loads the teams from the server. Listing 10
shows this function, as well as the
addTeam
function that you saw referenced in Listing 8.
Listing 10. The
loadTeams and
addTeam functions
var teams = {};

function loadTeams(){
    var select = dijit.byId("team");
    dojo.xhrGet({
        url: "/SoccerOrg/resources/teams",
        handleAs: "json",
        load : function(data){
            var i = 0;
            for (i in data.team){
                addTeam(data.team[i]);
            }
        },
        error : function(error){
            alert("Error loading team data: " + error);
        }
    });
}

function addTeam(team){
    teams[team.id] = team;
    var select = dijit.byId("team");
    var opt = {"label": team.name, "value": team.id};
    select.addOption(opt);
}
Here you once again use Dojo's Ajax utilities to access data provided by
the JAX-RS end point created earlier. This time you use the
dojo.xhrGet, which makes an HTTP
GET request to an Ajax end point. In this case
you need to specify its URL, but otherwise it is very similar to the
xhrPost you saw in Listing 8. Finally, you see
the
addTeam method. This once again uses the
Dojo widget's extra capabilities to easily add new options to the
drop-down list that shows the teams. Now that you have seen how the player form is
created, take a look at the code that handles its submissions (see
Listing 11).
Listing 11. Adding a new
Player
var button = dijit.byId("addPlayerBtn");
dojo.connect(button, "onClick", function(event){
    event.preventDefault();
    event.stopPropagation();
    var data = dojo.formToObject("addPlayerForm");
    var team = teams[data.team];
    data.team = team;
    data = dojo.toJson(data);
    var xhrArgs = {
        postData: data,
        handleAs: "json",
        load: function(data) {
            alert("Player added: " + data);
            dojo.byId("gridContainer").innerHTML = "";
            loadPlayers();
        },
        error: function(error) {
            alert("Error! " + error);
        },
        url: "/SoccerOrg/resources/players",
        headers: { "Content-Type": "application/json" }
    };
    var deferred = dojo.xhrPost(xhrArgs);
});
This code is going to submit data to the
PlayerDao.addPlayer method you saw back in
Listing 6. That method expects the
Player object
to be serialized into a JSON data structure. First, you once again use
Dojo to wire up an event handler to a button click on the form. Next, you
use Dojo's convenience function,
dojo.formToObject, to turn all of the data from
the form into a JavaScript object. You then modify that JavaScript object
slightly to match the structure expected on the server. Then you use
Dojo's
dojo.toJson function to turn this into a
JSON string. Now this gets passed to
dojo.xhrPost, similarly to how the
addTeam form was submitted. Notice that you add
the HTTP header
Content-Type to make sure that
it gets routed to the
PlayerDao.addPlayer
method.
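To illustrate the transformation (with made-up values), dojo.formToObject produces a flat object in which team is just the selected option's value:

```json
{ "firstName": "Ada", "lastName": "Lovelace", "age": "27", "team": "1" }
```

After looking the id up in the teams map and running the result through dojo.toJson, the POST body carries the full team object that the server-side code expects:

```json
{ "firstName": "Ada", "lastName": "Lovelace", "age": "27",
  "team": { "id": "1", "name": "Rovers" } }
```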
The
xhrPost once again has a
load function that will be invoked once the
Ajax request comes back with a successful response from the server. In
this case it is clearing an element on the page called
gridContainer and calling a function called
loadPlayers. This function builds another Dojo widget used
to show all of the players. Listing 12 shows the HTML
and JavaScript used for this.
Listing 12. Player grid HTML and JavaScript
<style type="text/css">
    @import "";
    @import "";
    .dojoxGrid table { margin: 0; }
    html, body { width: 100%; height: 100%; margin: 0; }
</style>
<script type="text/javascript">
    function loadPlayers(){
        var pStore = new dojox.data.JsonRestStore({
            target: "/SoccerOrg/resources/players"
        });
        pStore._processResults = function(data, deferred){
            return {
                totalCount: deferred.fullLength || data.player.length,
                items: data.player
            };
        };
        var pLayout = [
            { field: "firstName", name: "First Name", width: "200px" },
            { field: "lastName", name: "Last Name", width: "200px" },
            { field: "age", name: "Age", width: "100px" },
            { field: "teamName", name: "Team", width: "200px" }
        ];
        var grid = new dojox.grid.DataGrid({
            store: pStore,
            clientSort: true,
            rowSelector: "20px",
            structure: pLayout
        }, document.createElement("div"));
        dojo.byId("gridContainer").appendChild(grid.domNode);
        grid.startup();
    }
</script>
<div id="gridContainer" style="width: 100%; height: 100%;"></div>
<script type="text/javascript">
    dojo.require("dojox.grid.DataGrid");
    dojo.require("dojox.data.JsonRestStore");
    dojo.addOnLoad(loadPlayers);
</script>
Listing 12 shows Dojo's
DataGrid widget. This is
one of the richer widgets in Dojo, and so it requires some extra CSS as
well. To create a grid, you need to do two things. First, you need to
create a data store for it. In this case it will be JSON data coming from
your server, so create a new
JsonRestStore
object and point it to the URL on your server that will produce this data.
Then you override its
_processResults. You only
have to do this because it is expecting a JSON array of data, and your
JAX-RS end point will produce a slightly more complicated object (it will
have a single property called
player whose value will be the JSON array
that the
JsonRestStore expects). The next thing
the grid needs is layout metadata that tells it what columns to show and
what the corresponding property on the JavaScript object will be. Then you
can create the grid and drop it into your DOM tree.
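As a sketch (the values are invented, and the exact property names depend on how the Player entity is serialized), the end point's response is shaped like the fragment below, which is why _processResults returns data.player as the items for the grid:

```json
{
  "player": [
    {
      "id": "1",
      "firstName": "Ada",
      "lastName": "Lovelace",
      "age": "27",
      "team": { "id": "1", "name": "Rovers" }
    }
  ]
}
```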
Now you have completed the sample soccer application, and you have a very rich way to show the soccer players in the league. You could easily expand this simple example from here: add editing of players, enable sorting of the grid, or even add more data such as games and results.
Conclusion
This article has shown you a quick way to create a rich, data-centric web application. You used several key technologies to remove tedious boilerplate code both from the server side and the client side: JPA, JAX-RS, and Dojo. In many cases you made use of default conventions to further reduce the amount of code needed to create your web application. The result is a very modern web application created with minimal code. All of the technologies it uses are extensible and production-quality, so you can confidently expand the sample application (or your own application) for more robust use cases in a straightforward manner. Even better is that there is no lock-in. You used open standards on the server side. You could easily switch out database technologies, for example. You used REST and JSON on the front end, meaning you can use a different UI kit, or you can easily hook up a mobile client.
Download
Resources
Learn
- Introduction to Spring 2 and JPA (Sing Li, developerWorks, August 2006): Learn more about JPA.
- Implementing composite keys with JPA and Hibernate (Stephen Morris, developerWorks, August 2009): Dive deeper into JPA and Hibernate.
- Create RESTful Web services with Java technology (Dustin Amrhein and Nick Gallardo, developerWorks, February 2010): Get a thorough introduction to JAX-RS.
- Create Ajax applications for the mobile web (Michael Galpin, developerWorks, March 2010): See how JAX-RS is great for mobile web applications as well.
- "Comment lines: Using Ant and ShrinkSafe to improve performance of Web applications that use Dojo" (Kevin Haverlock, developerWorks, March 2010): Learn how to use the Ant build utility together with Dojo's ShrinkSafe to automate the profile build, resulting in improved performance of your Dojo-based web pages.
- "Comment lines: Lazily loading your Dojo Dijit tree widget can improve performance" (Scott Johnson, developerWorks, May 2008): See a real-world example of how you can use REST calls to lazily load JSON data for populating a Dojo Dijit tree widget.
- The developerWorks Web development zone specializes in articles covering various web-based solutions.
Get products and technologies
- Download the Dojo toolkit.
- Get the Java SDK. JDK 1.6.0_17 was used in this article.
- Get Apache Tomcat. Apache Tomcat 6.0.14 was used in this article.
- Get Apache Derby 10.6.1.0.
- Jersey is the open source, production-quality, reference implementation of JAX-RS.
- Hibernate is an implementation of the Java Persistence API (JPA). Version 3.5.3 was used in this article.
-
me reason I am reminded of Triumph the Insult Comic Dog.
getPaula() ... FOR ME TO POOP ON!!!
Admin
Awesome... I got the first one in... now I can laugh at the idiots who proclaim proudly, "FIRST!" when they're the 3rd or 4th post. :)
Admin
Gah! Just when I though that "Brillant" (French for "brilliant", not that anyone cares) was finally on the way out.
And I missed first today (and eleventh yesterday), too.
Sincerely,
Gene Wirchenko
Admin
My eyes! The goggles they once again do Nothing! ROFL it's even funnier the second time.
Admin
That's horrible. You should always start class names with an upper case letter.
Admin
yikes... I shudder to think what she billed for those months of doing absolutely nothing
and no constructor... tisk tisk
FIRST!
ah damn.
I actually enjoyed reading the Paula Bean story when I first saw it. Sometimes I wonder how these people get hired.
Well developed BS skills. Everyone technical needs to learn to hone their BS skills, because no matter how awesome (or inferior) your techie skill, most managers will always pick the one that BSes them the farthest.
Her testing code is umm... brilliant!
Yeah, but take a look at how many bugs are in the code. Quality over quantity.
You may want to check the authors on the first and second posts, guy.
He actually did have the first post. It was his second post that pointed out this fact. I almost sent the same reply, but caught myself.
Reminds me of an incident early in my career - a new "experienced" fortran programmer was given the assignment of changing some code to copy one large COMMON block (remember those?) to another. The next day his boss asked if he was done yet -- He wasn't! Two days later, still not done! At the end of the week the boss looked at the new employee's code (still incomplete) and found 5000 lines of:
blockB[1] = blockA[1]
blockB[2] = blockA[2]
blockB[3] = blockA[3]
...
The "experienced" programmer didn't think of using a loop...
A number of years ago a friend told me about her friend who made a good living as a Java programmer in Manhattan without being able to speak a word of English, only Russian.
His business model was to find 5 to 7 people who could successfully get jobs as programmers without being actually able to code. They would email him their assignments and he would email back the code.
Maybe Paula misplaced his email address. :)
What no unit tests? Paula should be fired!!!!!!
This reminds me of a guy at my workplace who spends all day browsing the Internet and he's got a project due by the end of the year and he's not nearly done...doh! Oh shit, I guess I best get back to work.
this was awesome! haha. I love this site. It keeps me sane.
Two things:
1) Is the statement "return paula" the result of the getPaula() function, or is it what the company wish they could have done sooner?
2) Has anybody thought about the fact that Paula could potentially declare a variable String bean?
Admin
Oh, and Happy New Year to all!
[<:o)]
The real WTF for me is how anyone can hire a completely unknown person, and have her work for months without actually checking her work.
Even if I'd never have suspected her of being this bad, I'd still want to ensure that she didn't code like a leprous gazelle.
That was actually version 2.0 of Paula's program. Version 1.0 looked like this:
package test;

public class helloBean {

    private String hello = "Hello World!";

    public String getHello() {
        return hello;
    }
}

It took extensive refactoring for her to update the code, I'm sure.
If you can't make a good living as 5-to-7 Java programmers, you need to move somewhere with a lower cost of living...
Hm, as this is my third post to the site over the last few months, maybe I should finally think about registering...
Anyway, I worked at a company once where I was tasked (along with two others who made up our team) to hire 5 to 10 people very quickly for a Java project. This was during the dot-com days, obviously, where money was no object. My boss actually said to me that we needed the team as quickly as possible, and not to worry too much about the quality... we could always fire some people and hire replacements if needed over the next few months.
Ugh. We didn't like it at the time, but mainly because I thought it was unkind. Today I look back on it and see it with new horror on all sorts of levels.
Anyway, we had our team in about two or three weeks or so. We did actually do interviews, but none of us were too experienced on how you properly do interviews. We looked at resumes, relevant experience, personalities (to see if the team would work together well), etc, but never did any technical questions beyond asking people to describe their prior work. Miraculously, most of the team actually wound up being pretty good. I think we hired around 7 people. Maybe 6. 2 of them were fresh out of college.
Amazingly, only one guy was a total bust. He had absolutely no real world knowledge of programming that I could detect. He was one of our two kids recently out of school, to be sure, but his degree was in Computer Science! I mean, I expected him to at least be able to do simple projects, but the questions he asked of myself and the other two "senior" team members were ludicrous. Questions about simple case statements, the difference between passing by reference and passing by value, and so on, just basic stuff. He was fired a week after he started. But to this day, I wonder about his interview. We asked him about specific Java technologies (not to quiz him, but to just ask if he had any experience with them) and he claimed he had all sorts of hobbyist experience along with his classwork in specifically what we were asking for. What did he think he was going to do on the job? How would he think he'd be able to get away with not really knowing anything?
Seeing a post like this, I now understand. He must have thought we wouldn't review his work or even check up on him, possibly for years! (But then why ask us the questions he did?)
Even more strange, I wonder how he graduated.
Oh, almost forgot. He was surprised at being fired.
A Real Programmer doesn't use loops, loops are for quiche-eaters. A Real Programmer uses GOTO, which is how God meant programs to be written.
Bonus! You can laugh at travisowens also! I certainly did.
I had a similar experience myself with my first-ever hiree. Nice Chinese lad with a good degree in Math/Comp-Sci, claimed to know C++ and a few bits of webby technologies. Bit on the quiet side during the interview but I put that down to nerves. Anyways, put him to work the first week with a relatively simple task (some simple HTML parsing if I recall) and kept checking up on him... asking him if he was OK, if he needed any help or pointers etc. (I even more or less did the code for him at one point on paper). On the 4th day though, with him STILL not having approached me with any work, I thought I'd sit in with him and see how he was doing... in 4 days he'd managed to write 'cout << "<html> some html</html>";'. 1 line in 4 days. Needless to say we let him go on the 5th day.
In my defense for having hired him, we'd had only 3 candidates because the job was paying peanuts, and although I didn't want to hire any of them my PHB had told me "Just f***** hire one of them, how hard can it be?". However, I wish I'd gone for the 50+ year old alcoholic with 5 teeth instead......
This is true. One only has to read The Story of Mel to know.
I vote we save Alex the trouble and call this the post of the year anyway.
But why does this forum software edit in Times New Roman, and display in Arial?
I remember interviewing a bunch of unemployed girls for a "keyboard punching" post. All we needed was some basic computer knowledge and the ability to fill in an Access form. We didn't actually need a girl, but no boys applied for the post.
One of the interviewees had an amazing resume, which claimed that she knew not only some common applications (like, say, MS Office), but there was also "Lotus".
We asked her what that Lotus thing was about (could be Notes, Domino, 1-2-3 or whatever) just out of curiosity. After some vague "you know, it does uh stuff like uh it works", the girl cracked and explained that she was supposed to learn it in class, but she wasn't there that day.
We hired another girl, and that was a disaster too.
Funny! I hope people will not confuse with Paula Dean (), fabulous cook and star of her own Food TV show. Paula Dean would kick PaulaBean's butt.
I seriously think I'm depressed now for having read this. Truly a WTF, and though I've only been here a few months, I also vote this WTF of 05.
This forum software is an endless source of puzzlement. I think that by now its quirkiness can be considered part of the charm of this website :)
Ah...I remember graduating with my CS degree back more than 10 years ago (damn...I'm getting old). Some of my classmates who also graduated with a CS degree I would never hire in a million years. Some of them actually could program, but would get stumped on how to copy a file from one floppy to another. Jeez...you just spent four years pretty much saying "this is what I want my life to be about" (yes...CS is more about algorithms, and computers are just the tools to implement them, but in the real world over here, they were all wanting to get jobs as programmers, not thinkers of Big O).
In our assembly class, I did two versions of my code. One I would turn in, and one I would be nice and share on the mainframe. This way if someone did copy it, I didn't get busted. I usually left the last couple steps as an exercise for the user as well. That's how these people get out there.
Reminds me also of someone we interviewed for a job once - all the questions we asked about DNS, Perl, Unix, etc... (it was a Unix job) were answered with either "I have a friend that does that" or "I have a book on it". I was told it'd be rude to ask if he had the friends number with him :)
< I was told it'd be rude to ask if he had the friends number with him >
I'd rather hire 10 guys that know who to ask for the answer than 1 of the "experts" we've seen here repeatedly.
Thanks.
I don't understand how something like this can happen. How can months go by without anything tangible? How can an employer hire someone who obviously doesn't have a clue? Hiring someone mediocre might happen, but THIS?
Yes, unfortunately I have to agree. I've had some people that later graduated with a Bachelor of Science as partners who didn't know anything. Quickly I found out that it would actually be less work for me if I told them I'd do the assignment myself (they wouldn't have to do anything) than to explain everything I do. Mea culpa. That's how they get through.
I agree.
It took her a couple of months to get that code out?
Damn, as an inexperienced recent c.s. grad it would only take me, like, a week to do that...
And I can't find a job.
rolls eyes
See, I don't use emoticons...
I'm sure she was just following good software engineering practices. She had been gathering and analyzing requirements, assessing risks, doing research on the algorithms involved, doing user interface design and testing with paper prototypes, constructing detailed UML models, estimating the performance of the finished software and making detailed unit, integration, system and acceptance test plans. She was juuust about ready to start the implementation phase.
The real WTF here is why?
What are you talking about? GOTO is only one way to implement a loop!
I have actually written code that looks quite like that:
blockB1 = blockA1
blockB2 = blockA2
etc...
The trick was that those variables were differently aligned bit fields. The compiler does a good job at combining the writes as much as possible. A loop could have been possible but certainly less maintainable and probably at least equally slow.
Has no one realized that Paula spelled brilliant wrong?
Congratulations Lon, you were the FIRST person to notice that. It wasn't one of the main things that made the original post hilarious. It didn't give the original WTF a complete element of irony. AND out of the over 150 comments spread out over more than 3 pages, YOU were the first to catch it.
Wow, It is indeed a good thing that we have Lon Varscsak around here to point these things out to us poor ignorant plebes. He must be brillant!
I did a similar thing in some of my CS classes. I did the assignment and printed a single copy and gave it to a group of struggling people. I would then go and redo the assignment differently for myself.
The reason I did this was that I would be pestered in all hours of the night for help on homework. I got so many questions like, "What is a variable and why should I use one?" "How do I do this assignment?" "How do I make something happen sometimes?" "Where's the command that generates this program?"
It was never ending. I got so fed up with it. The only way I could get sleep was if I just gave them the answer. I figure it's their problem when they don't know how to do anything on the test. It's the professor's fault they passed. If you fail all the tests and ace all the homeworks, obviously something is wrong. Also, the professors seemed to have a problem giving anything worse than a B if you showed up.
So you mean it's your fault. Bad. (I've done this too, so I suppose I can't complain too much...)
And yes, it's amazing how far people can get in quite decent CS degrees without actually being any good.
This is one of the common misconceptions of interviewing - the aim is not to find out what the person knows but how well a person can find out information and use it.
For example, I was once asked about a switch for the 'ls' command in an interview. My answer was "Who cares? I can do a 'man ls' and find out" - that's the distinction - not that I know every tiny bit of easily found information about something, that I know where to look and how to use those resources...
I'm always called an evil interviewer as I follow the above principle ;-) | http://thedailywtf.com/articles/comments/The_Brillant_Paula_Bean%2C_J2ME_Edition | CC-MAIN-2017-26 | refinedweb | 2,735 | 73.68 |
QuickTime for Java: A Developer's Notebook/Working with QuickDraw
And now, on to the oldest, cruftiest, yet can't-live-without-it-iest part of QTJ: QuickDraw. QuickDraw is a graphics API that can be traced all the way back to that first Mac Steve Jobs pulled out of a bag and showed the press more than 20 years ago. You know—back when Mac supported all of two colors: black and white.
Don't worry; it's gotten a lot better since then.
To be fair, a native Mac OS X application being written today from scratch probably would use the shiny new "Quartz 2D" API. And as a Java developer, the included Java 2D API is at least as capable as QuickDraw, with extension packages like Java Advanced Imaging (JAI) only making things better.
The real advantage to understanding QuickDraw is that it's what's used to work with captured images (see Chapter 6) and individual video samples (see Chapter 8). It is also a reasonably capable graphics API in its own right, supporting import from and export to many formats (most of which J2SE lacked until 1.4), affine transformations, compositing, and more.
Getting and Saving Picts
If you had a Mac before Mac OS X, you probably are very familiar with picts , because they were the native graphics file format on the old Mac OS. Taking screenshots would create pict files, as would saving your work in graphics applications. Developers used pict resources in their applications to provide graphics, splash screens, etc.
Actually, a number of tightly coupled concepts relate to picts. The native structure for working with a series of drawing commands is called a Picture. This struct, along with the functions that use it, is wrapped by the QTJ class quicktime.qd.Pict. There's also a file format for storing picts, which can contain either drawing commands or bit-mapped images—files in this format usually have a .pct or .pict extension. QTJ's Pict class has methods to read and write these files, and because it's easy to create Picts from Movies, Tracks, GraphicsImporters, SequenceGrabbers (capture devices), etc., it's a very useful class.
How do I do that?
The PictTour.java application, shown in Example 5-1, exercises the basics of getting, saving, and loading Picts.
Note
Compile and run this example with ant run-ch05-picttour from the downloadable book code.
Example 5-1. Opening and saving Picts
package com.oreilly.qtjnotebook.ch05;

import quicktime.*;
import quicktime.app.view.*;
import quicktime.std.*;
import quicktime.std.image.*;
import quicktime.io.*;
import quicktime.qd.*;

import java.awt.*;
import java.io.*;

import com.oreilly.qtjnotebook.ch01.QTSessionCheck;

public class PictTour extends Object {

    static final int[] imagetypes =
        {StdQTConstants.kQTFileTypeQuickTimeImage};
    static int frameX = -1;
    static int frameY = -1;

    public static void main (String[] args) {
        try {
            QTSessionCheck.check();
            // import a graphic
            QTFile inFile = QTFile.standardGetFilePreview (imagetypes);
            GraphicsImporter importer = new GraphicsImporter (inFile);
            showFrameForImporter (importer, "Original Import");
            // get a pict object and then save it
            // then load again and show
            Pict pict = importer.getAsPicture();
            String absPictPath = (new File ("pict.pict")).getAbsolutePath();
            File pictFile = new File (absPictPath);
            if (pictFile.exists())
                pictFile.delete();
            try { Thread.sleep (1000); }
            catch (InterruptedException ie) { }
            pict.writeToFile (pictFile);
            QTFile pictQTFile = new QTFile (pictFile);
            GraphicsImporter pictImporter = new GraphicsImporter (pictQTFile);
            showFrameForImporter (pictImporter, "pict.pict");
            // write to a pict file from importer
            // then load and show it
            String absGIPictPath = (new File ("gipict.pict")).getAbsolutePath();
            QTFile giPictQTFile = new QTFile (absGIPictPath);
            if (giPictQTFile.exists())
                giPictQTFile.delete();
            try { Thread.sleep (1000); }
            catch (InterruptedException ie) { }
            importer.saveAsPicture (giPictQTFile, IOConstants.smSystemScript);
            GraphicsImporter giPictImporter = new GraphicsImporter (giPictQTFile);
            showFrameForImporter (giPictImporter, "gipict.pict");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public static void showFrameForImporter (GraphicsImporter gi,
                                             String frameTitle)
        throws QTException {
        QTComponent qtc = QTFactory.makeQTComponent (gi);
        Component c = qtc.asComponent();
        Frame f = new Frame (frameTitle);
        f.add (c);
        f.pack();
        if (frameX == -1) {
            frameX = f.getLocation().x;
            frameY = f.getLocation().y;
        } else {
            Point location = new Point (frameX += 20, frameY += 20);
            f.setLocation (location);
        }
        f.setVisible (true);
    }
}
Warning
The two Thread.sleep( ) calls are here only as a workaround to a problem I saw while developing this example—reading a file I'd just written proved crashy (maybe the file wasn't fully closed?). Because it's unlikely you'll write a file and immediately reread it, this isn't something you'll want or need to do in your code.
When run, this example prompts the user for a graphics file, which then is displayed in three windows, as shown in Figure 5-1. These represent three different means of loading the pict.
Figure 5-1. Writing and reading PICT files
Image:QuickTime for Java: A Developer's Notebook I 5 tt67.png
What just happened?
You can get picts in a number of ways in QTJ. The first example here is to use a GraphicsImporter to load an image file in some arbitrary format, and then call getAsPicture( ) to get a Pict object. This is the easiest way to get a Pict from an arbitrary file—if you knew for sure that a given file was in the pict file format, you could use Pict.fromFile( ) instead, but that does not check to ensure the file really is a pict. So, the safe thing to do is to use a GraphicsImporter, let it figure out the format of the source file, and then convert to pict if necessary with getAsPicture( ).
Writing a pict file to disk is easy: just call writeToFile() .
Tip
Curiously, this takes a java.io.File, not a QTFile, like so many other I/O routines in QTJ.
You also can write a Pict to disk by using the GraphicsImporter's saveAsPicture( ) method.
Note
Yes, it is kind of weird to use the "importer" for what is effectively an "export."
The example uses both of these methods to write pict files to disk—Pict.writeToFile( ) creates pict.pict and GraphicsImporter.saveAsPicture( ) creates gipict.pict. Each file is then reloaded with GraphicsImporters. Conveniently, a GraphicsImporter can be used with a QTFactory to create a QTComponent (see Section 4.4 in Chapter 4), which is how the imported picts are shown on-screen.
What about . . .
. . . other ways to get pictures? Look at the Pict class and you'll see several static fromXXX( ) methods that provide Picts from GraphicsImporters, GraphicsExporters, Movies, Tracks, and other QTJ classes.
Also, why does this example go through the hassle of creating absolute path strings and passing those to the QTFile constructor? It's a workaround to an apparent bug in QTJ for Windows: when you use a relative path (like Pict.writeToFile (new File("MyPict.pict"))), QTJ sometimes writes the file not to the current directory, but rather to the last directory it accessed. In this case, that means the directory it read the source image from. Specifying absolute paths works around this problem.
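The workaround amounts to resolving the path yourself before QuickTime ever sees it. Here is a minimal, QuickTime-free sketch of the idea (the filename is just an example):

```java
import java.io.File;

public class AbsolutePathDemo {
    public static void main (String[] args) {
        // a relative path is interpreted against the current working
        // directory, which QTJ for Windows may not honor consistently
        File relative = new File ("pict.pict");
        // resolving it up front removes the ambiguity
        File absolute = new File (relative.getAbsolutePath());
        System.out.println (absolute.isAbsolute());   // true
    }
}
```

The resulting absolute path string is what you would then hand to the QTFile constructor or to Pict.writeToFile().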
Getting a Pict from a Movie
If you're working with movies, you'll probably want to be able to get a pict from some arbitrary time in the movie. You could use this for identifying movies via thumbnail icons, identifying segments on a timeline GUI, etc. Fortunately, this action is so common that it's really easy.
How do I do that?
To grab a movie at a certain time, you just need a one-line call to Movie.getPict(), as exercised by the dumpToPict() method shown here:
Note
Notice I don't say "grab the current movie frame" because the movie could have other on-screen elements like text, sprites, other movies, etc., not just one frame of one video track.
public void dumpToPict () {
    try {
        float oldRate = movie.getRate();
        movie.stop();
        Pict pict = movie.getPict (movie.getTime());
        String absPictPath = (new File ("movie.pict")).getAbsolutePath();
        pict.writeToFile (new File (absPictPath));
        movie.setRate (oldRate);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
This method stops the movie if it's playing and stores the previous play rate. Then it creates a Pict on the movie's current time and saves it to a file called movie.pict. Then it restarts the movie.
Note
The downloadable book code exercises this in a demo called PictFromMovie. Run it with ant run-ch05-pictfrommovie.
What about . . .
. . . not stopping the movie? I haven't had good results with this call unless the movie is stopped. At best, it makes the playback choppy for a few seconds; at worst, it crashes.
Converting a Movie Image to a Java Image
It's possible you'll want to grab the current display of the movie and get it into a java.awt.Image. A convenient method call has been provided for just this task; unfortunately, it doesn't work very well, so a Pict-based workaround is needed.
How do I do that?
QTJ provides QTImageProducer , an implementation of the AWT ImageProducer interface. ImageProducer dates back to Java 1.0, and was designed to handle latency and unreliability when loading images over the network—issues that are irrelevant in typical desktop cases.
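The producer/consumer flow that QTImageProducer participates in can be illustrated without QuickTime at all, using AWT's own MemoryImageSource as the producer. This is only a sketch of the pattern; the 4x4 green image is a stand-in for the pixels a QTImageProducer would supply:

```java
import java.awt.Image;
import java.awt.Toolkit;
import java.awt.image.ImageProducer;
import java.awt.image.MemoryImageSource;

public class ProducerDemo {
    public static void main (String[] args) {
        int w = 4, h = 4;
        int[] pixels = new int[w * h];
        for (int i = 0; i < pixels.length; i++)
            pixels[i] = 0xFF00FF00;   // opaque green, ARGB packing
        // any ImageProducer (MemoryImageSource here, QTImageProducer in QTJ)
        // can be handed to the Toolkit to mint a java.awt.Image
        ImageProducer producer = new MemoryImageSource (w, h, pixels, 0, w);
        Image image = Toolkit.getDefaultToolkit().createImage (producer);
        System.out.println (image != null);   // true
    }
}
```

The consumer side (the Image) pulls pixels from whatever producer it was built on, which is why QTJ can slot its own producer into ordinary AWT code.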
The most straightforward way to get an image from a movie is to get a QTImageProducer from a MoviePlayer, the object typically used to create a lightweight, Swing-ready QTJComponent. The ConvertToJavaImageBad application in Example 5-2 demonstrates this approach.
Note
Makes sense, doesn't it? The MoviePlayer needs to generate AWT images for the lightweight QTJComponent, so that's what you get an ImageProducer from.
Example 5-2. Using MoviePlayer's QTImageProducer
package com.oreilly.qtjnotebook.ch05;

import com.oreilly.qtjnotebook.ch01.QTSessionCheck;

import java.awt.*;
import java.awt.event.*;
import javax.swing.*;

import quicktime.*;
import quicktime.app.view.*;
import quicktime.io.*;
import quicktime.qd.*;
import quicktime.std.*;
import quicktime.std.clocks.*;
import quicktime.std.movies.*;

public class ConvertToJavaImageBad extends Frame
    implements ActionListener {

    Movie movie;
    MoviePlayer player;
    MovieController controller;
    QTComponent qtc;
    static int nextFrameX, nextFrameY;
    QTImageProducer ip;

    public static void main (String[] args) {
        ConvertToJavaImageBad ctji = new ConvertToJavaImageBad();
        ctji.pack();
        ctji.setVisible(true);
        Rectangle ctjiBounds = ctji.getBounds();
        nextFrameX = ctjiBounds.x + ctjiBounds.width;
        nextFrameY = ctjiBounds.y + ctjiBounds.height;
    }

    public ConvertToJavaImageBad() {
        super ("QuickTime Movie");
        try {
            // get movie
            QTSessionCheck.check();
            QTFile file =
                QTFile.standardGetFilePreview (QTFile.kStandardQTFileTypes);
            OpenMovieFile omFile = OpenMovieFile.asRead(file);
            movie = Movie.fromFile(omFile);
            player = new MoviePlayer (movie);
            controller = new MovieController (movie);
            // build gui
            qtc = QTFactory.makeQTComponent (controller);
            Component c = qtc.asComponent();
            setLayout (new BorderLayout());
            add (c, BorderLayout.CENTER);
            Button imageButton = new Button ("Make Java Image");
            add (imageButton, BorderLayout.SOUTH);
            imageButton.addActionListener (this);
            movie.start();
            // set up close-to-quit
            addWindowListener (new WindowAdapter() {
                    public void windowClosing (WindowEvent we) {
                        System.exit(0);
                    }
                });
        } catch (QTException qte) {
            qte.printStackTrace();
        }
    }

    public void actionPerformed (ActionEvent e) {
        grabMovieImage();
    }

    public void grabMovieImage() {
        try {
            // lazy instantiation of ImageProducer
            if (ip == null) {
                QDRect bounds = movie.getBounds();
                Dimension dimBounds = new Dimension (bounds.getWidth(),
                                                     bounds.getHeight());
                ip = new QTImageProducer (player, dimBounds);
            }
            // stop movie to take picture
            boolean wasPlaying = false;
            if (movie.getRate() > 0) {
                movie.stop();
                wasPlaying = true;
            }
            // make an AWT Image from the producer and show it
            // in a Swing window
            Image image = Toolkit.getDefaultToolkit().createImage (ip);
            JFrame imageFrame = new JFrame ("Java Image");
            imageFrame.getContentPane().add (new JLabel (new ImageIcon (image)));
            imageFrame.pack();
            imageFrame.setLocation (nextFrameX, nextFrameY);
            imageFrame.setVisible (true);
            // restart the movie if it had been playing
            if (wasPlaying)
                movie.start();
        } catch (QTException qte) {
            qte.printStackTrace();
        }
    }
}
This application is shown in Figure 5-2. When you click the Make Java Image button, the movie is stopped, an AWT Image of the current display is made, and that Image is opened in another window.
Figure 5-2. Converting movie to Java AWT Image
Image:QuickTime for Java: A Developer's Notebook I 5 tt69.png
Warning
This is a negative example. Keep reading for why you don't want to use this code, and for a superior alternative.
What just happened?
The grabMovieImage() method creates a QTImageProducer from the MoviePlayer and hands it to the AWT Toolkit method createImage(). This call returns an AWT Image that (because it's a nice, clean, one-line call) is stuffed into a Swing ImageIcon and put on-screen.
This is more of a "what the heck" than a "what just happened." If your results are anything like mine, you're probably wondering why the movie stopped the first time you snapped a picture, even though the sound continued. Or why, for that matter, subsequent pictures seem to be later in the movie, meaning the decompression and decoding of the video is still working, but that it's just not getting to the screen.
Tip
Or not—maybe they'll have fixed it by the time you read this. At any rate, as of this writing, the QTImageProducer provided by a MoviePlayer is not to be trusted.
A Better Movie-to-Java Image Converter
The code shown in Section 5.3 is error-prone and nasty. On the other hand, a QTImageProducer is available from the GraphicsImporterDrawer. It does not have to work with a moving target like the MoviePlayer does. If only you could use that one instead . . . .
How do I do that?
The example program ConvertToJavaImageBetter has a different implementation of the grabMovieImage( ) method, as shown in Example 5-3.
Note
Run this example with ant run-ch05-convert-tojava-imagebetter.
Example 5-3. In-memory pict import to use GraphicsImporterDrawer's QTImageProducer
public void grabMovieImage() {
    try {
        // stop movie to take picture
        boolean wasPlaying = false;
        if (movie.getRate() > 0) {
            movie.stop();
            wasPlaying = true;
        }
        // take a pict
        Pict pict = movie.getPict (movie.getTime());
        // add 512-byte header that pict would have as file
        byte[] newPictBytes = new byte [pict.getSize() + 512];
        pict.copyToArray (0, newPictBytes, 512,
                          newPictBytes.length - 512);
        pict = new Pict (newPictBytes);
        // export it
        DataRef ref = new DataRef (pict,
                                   StdQTConstants.kDataRefQTFileTypeTag,
                                   "PICT");
        gi.setDataReference (ref);
        QDRect rect = gi.getSourceRect ();
        Dimension dim = new Dimension (rect.getWidth(),
                                       rect.getHeight());
        QTImageProducer ip = new QTImageProducer (gid, dim);
        // make an AWT Image from the producer and show it
        // in a Swing window
        Image image = Toolkit.getDefaultToolkit().createImage (ip);
        JFrame imageFrame = new JFrame ("Java Image");
        imageFrame.getContentPane().add (new JLabel (new ImageIcon (image)));
        imageFrame.pack();
        imageFrame.setVisible (true);
        // restart the movie if it had been playing
        if (wasPlaying)
            movie.start();
    } catch (QTException qte) {
        qte.printStackTrace();
    }
}
Try out this example and you should be able to create multiple AWT Images without harming playback of the movie, as exhibited in Figure 5-3.
Figure 5-3. Converting movie to Java AWT image (a better way)
Image:QuickTime for Java: A Developer's Notebook I 5 tt70.png
What just happened?
This isn't a hack. It's close, though.
Once the movie is paused, the key is to get the movie's display into a GraphicsImporter. Once that's done, it's easy to get a QTImageProducer from a GraphicsImporterDrawer and an image from the AWT Toolkit.
Note
Note to self: pitch QuickTime for Java Hacks to O'Reilly!
The problem is getting the image into a GraphicsImporter. If you look at the Javadoc, you might see one way to connect the dots: get a Pict from the Movie, save that to disk, then turn around and import. It would look something like this:
Pict pict = movie.getPict (movie.getTime());
QTFile tempFile = new QTFile (new java.io.File ("temppict.pict"));
pict.writeToFile (tempFile);
GraphicsImporter importer = new GraphicsImporter (tempFile);
With the pict imported into a GraphicsImporter, you would get a QTImageProducer from the GraphicsImporterDrawer and generate AWT Images from the image producer, without messing up the movie playback.
The drawback of this approach is that you must read and write data to the hard drive, which is obviously much slower than an operation that takes place purely in memory.
In fact, an in-memory equivalent is possible. Look back at the GraphicsImporter Javadoc. Several setData( ) methods allow you to use sources other than just flat files for input to a GraphicsImporter. Two of them allow you to pass in more or less opaque pointers: setDataReference() and setDataHandle(). With these calls, the importer will read from memory the same way it would read from disk.
Note
And they say Java doesn't have pointers!
The trick in this case is to make the GraphicsImporter think it's reading a .pict file from disk, but actually it's reading from memory. One gotcha in this case is that pict files have a 512-byte header before their data—the header doesn't have to contain anything meaningful, it just has to be present. So, allocate a byte array 512 bytes longer than the size of the Pict data (getSize() and getBytes( ), inherited from QTHandleRef, respectively, return the size and contents of the native structure pointed to by the Pict object, not the Java object itself), and copy those bytes over with an offset of 512.
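The header-padding step itself is plain Java. Here is a minimal sketch, with a four-byte dummy array standing in for the real Pict data:

```java
public class PictHeaderDemo {
    public static void main (String[] args) {
        // stand-in for the bytes of a real Pict structure
        byte[] pictBytes = {1, 2, 3, 4};
        // a .pict file begins with a 512-byte header whose contents are
        // ignored, so an all-zero header is fine
        byte[] withHeader = new byte [pictBytes.length + 512];
        System.arraycopy (pictBytes, 0, withHeader, 512, pictBytes.length);
        System.out.println (withHeader.length);   // 516
        System.out.println (withHeader[512]);     // 1
    }
}
```

The Pict data lands at offset 512, exactly where a GraphicsImporter expects it when it treats the buffer as a pict file.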
Next, you need a GraphicsImporter for the Pict format, and a GraphicsImporterDrawer to provide the QTImageProducer. The example code creates these in its constructor:
// set up graphicsimporter
gi = new GraphicsImporter (StdQTConstants.kQTFileTypePicture);
gid = new GraphicsImporterDrawer (gi);
Build a DataRef to point to the byte array and pass it to the GraphicsImporter with setDataReference( ). You've now replaced the file write and file read with equivalent in-memory operations. Now it's a simple matter of getting a GraphicsImporterDrawer and, from that, a QTImageProducer to create Java images.
Tip
This technique is adapted from "Technical Q&A QTMTB56: Importing Image Data from Memory," at. Check it out for a comparison of QTJ versus straight-C QuickTime coding styles.
Drawing with Graphics Primitives
In AWT, a Graphics object represents a drawing surface—either on-screen or off-screen—and supplies various methods for drawing on it. QuickTime has a GWorld object that's so similar, the QT developers renamed it QDGraphics just to make Java developers feel at home. As with the AWT class, painting is driven by a callback mentality.
How do I do that?
Example 5-4 shows the GWorldToPict example, which creates a QDGraphics object and performs some simple drawing operations.
Example 5-4. Drawing on a QDGraphics object
package com.oreilly.qtjnotebook.ch05; import quicktime.*; import quicktime.std.*; import quicktime.std.image.*; import quicktime.qd.*; import com.oreilly.qtjnotebook.ch01.QTSessionCheck; public class GWorldToPict extends Object implements QDDrawer { public static void main (String[ ] args) { new GWorldToPict( ); } public GWorldToPict( ) { try { QTSessionCheck.check( ); QDRect bounds = new QDRect (0, 0, 200, 250); ImageDescription imgDesc = new ImageDescription(QDConstants.k32RGBAPixelFormat); imgDesc.setHeight (bounds.getHeight( )); imgDesc.setWidth (bounds.getWidth( )); QDGraphics gw = new QDGraphics (imgDesc, 0); System.out.println ("GWorld created: " + gw); OpenCPicParams params = new OpenCPicParams(bounds); Pict pict = Pict.open (gw, params); gw.beginDraw (this); pict.close( ); try { pict.writeToFile (new java.io.File ("gworld.pict")); } catch (java.io.IOException ioe) { ioe.printStackTrace( ); } } catch (QTException qte) { qte.printStackTrace( ); } System.exit(0); } public void draw (QDGraphics gw) throws QTException { System.out.println ("draw( ) called with GWorld " + gw); QDRect bounds = gw.getBounds( ); System.out.println ("bounds: " + bounds); // clear drawing surface, set up colors gw.setBackColor (QDColor.lightGray); gw.eraseRect (bounds); // draw some shapes gw.penSize (2, 2); gw.moveTo (20,20); gw.setForeColor (QDColor.green); gw.line (30, 100); gw.moveTo (20,20); gw.setForeColor (QDColor.blue); gw.lineTo (30, 100); // draw some text gw.setForeColor (QDColor.red); gw.textSize (24); gw.moveTo (10, 150); gw.drawText ("QDGraphics", 0, 10); // draw some shapes gw.setForeColor (QDColor.magenta); QDRect rect = new QDRect (0, 170, 40, 30); gw.paintRoundRect (rect, 0, 0); QDRect roundRect = new QDRect (50, 170, 40, 30); gw.paintRoundRect (roundRect, 10, 10); QDRect ovalRect = new QDRect (100, 170, 40, 30); gw.paintOval (ovalRect); QDRect arcRect = new QDRect (150, 170, 40, 30); gw.paintArc (arcRect, 15, 215); } }
This is a headless application. When run, it does its imaging off-screen and writes the file to gworld.pict. Open this file in a pict-aware editor or viewer to see the output, as shown in Figure 5-4.
Figure 5-4. Graphics primitives drawn with QDGraphics
Image:QuickTime for Java: A Developer's Notebook I 5 tt73.png
What just happened?
The program sets up an ImageDescription, specifying a color model and size information, and creates a QDGraphics drawing surface according to its specs. Next, a new Pict is created from the QDGraphics and an object called OpenCPicParams, which provides size and resolution information. For on-screen work, the default 72dpi is fine.
Next, it issues a Pict.beginDraw() command, passing in a QDDrawer object. QDDrawer is an interface for setting up callbacks to a draw() method that specifies the QDGraphics to be drawn on. This redraw-oriented API is kind of overkill for this headless, off-screen example, but it does get the job done. The Pict records the drawing commands made in the draw( ) call and saves the result to disk as gworld.pict.
So, what can you do with QDGraphics primitives? Some basics of geometry are shown in this example. QDGraphics work with a system of foreground and background colors, a pen of some number of horizontal and vertical pixels, and a concept of a current position. This example begins with two variants of line drawing: the first drawing a line specified by an offset in horizontal and vertical pixels, and the second drawing a line to a specific point. Next, it draws some text in the default font—note that as with AWT, the text will go above the current point. Finally, the example iterates through some of the simpler shapes available as graphics primitives: ovals, optionally rounded rectangles, and arcs.
What about . . .
. . . drawing an image into the QDGraphics, like with AWT's Graphics.drawImage( ) ? Ah, you're getting ahead of me. That will be covered later in the chapter.
Also, why are all the variables and comments here GWorld and gw instead of QDGraphics and qdg? Like I said at the start of this lab, QDGraphics is something of an analogy to an AWT Graphics. Unfortunately, it's a flawed analogy. It wraps a native drawing surface called a GWorld , and all the calls throughout QTJ that take or return it use the "GWorld" verbiage, such as the setGWorld( ) and getGWorld( ) calls that you'll see throughout the Javadoc. Once you start getting into QTJ, the desire to understand it from QuickTime's point of view, as a GWorld, outweighs the benefits of making an appeal to the AWT Graphics analogy. So, to me, it's a GWorld.
Getting a Screen Capture
One frequently useful source of image data is, unsurprisingly, the screen—or screens, if you're so fortunate. Each screen is represented by an object that can give you its current contents, though it takes a little work to do anything with it.
How do I do that?
ScreenToPNG, shown in Example 5-5, is a headless application that starts up, grabs the screen, and writes out the image to a PNG file called screen.png.
Note
I use PNG for screenshots because it's lossless, widely supported, compressed, and patent-unencumbered.
Example 5-5. Grabbing screen pixels
package com.oreilly.qtjnotebook.ch05; import quicktime.*; import quicktime.std.*; import quicktime.std.image.*; import quicktime.qd.*; import quicktime.io.*; import quicktime.util.*; import com.oreilly.qtjnotebook.ch01.QTSessionCheck; public class ScreenToPNG extends Object { public static void main (String[ ] args) { new ScreenToPNG( ); } public ScreenToPNG( ) { try { QTSessionCheck.check( ); GDevice gd = GDevice.getMain( ); System.out.println ("Got GDevice: " + gd); PixMap pm = gd.getPixMap( ); System.out.println ("Got PixMap: " + pm); ImageDescription id = new ImageDescription (pm); System.out.println ("Got ImageDescription: " + id); QDRect bounds = pm.getBounds( ); RawEncodedImage rei = pm.getPixelData( ); QDGraphics decompGW = new QDGraphics (id, 0); QTImage.decompress (rei, id, decompGW, bounds, 0); GraphicsExporter exporter = new GraphicsExporter (StdQTConstants4.kQTFileTypePNG); exporter.setInputPixmap (decompGW); QTFile outFile = new QTFile (new java.io.File ("screen.png")); exporter.setOutputFile (outFile); System.out.println ("Exported " + exporter.doExport( ) + " bytes"); } catch (QTException qte) { qte.printStackTrace( ); } System.exit(0); } }
When finished, open the screen.png file with your favorite image editor or browser. A shot of my iBook's screen while writing the demo is shown in Figure 5-5.
Figure 5-5. Screen capture
Image:QuickTime for Java: A Developer's Notebook I 5 tt74.png
Notice at the bottom left that I have the DVD Player application running. Apple's tools for doing screen grabs—the Grab application and the Cmd-Shift-3 and Cmd-Shift-4 key combinations—won't work if you have the DVD Player running. However, this proves that those pixels are available to QuickDraw. That said, if you grab the screen while a DVD is playing, you might get tearing (if the capture grabs between frames) or even a blank panel (if the capture catches the repaint at a bad time). If you're going to use this to grab images from DVDs, hit Pause first.
Note
Also, don't do anything with a DVD that will get you or me sued.
What just happened?
The program asks for the main screen by means of the static GDevice.getMain( ) method. From this, you can get a PixMap , which is an object that represents metadata about a stored image, such as its color table, pixel format, packing scheme, etc. This metadata also can be stored as an ImageDescription, which is a structure that many graphics methods take as a parameter. The PixMap also has a pointer to the byte array that holds the image data, which you can retrieve as the wrapper object RawEncodedImage .
Note
Java 2D analogy: a PixMap is like a Raster, an ImageDescription is like a Sample-Model, and an EncodedImage is like a DataBuffer. Not exactly the same, but the same ideas throughout.
So now you have an image of what's on the screen—what can you do with it? The goal is to get that image into a format suitable for a GraphicsExporter. One means of doing this is to render into a QDGraphics and send that to the exporter. To do this, look to the QTImage class, which has methods to compress (from a QDGraphics drawing surface to an EncodedImage) and decompress (from a possibly compressed EncodedImage to a QDGraphics). In this case, use decompress( ) to make a QDGraphics, then pass that to the exporter's setInputPixMap( ) method (yes, despite the name, it takes a QDGraphics, not a PixMap) and do the export.
Tip
It's odd that EncodedImage is an interface, yet its relevant methods, like decompress( ), are static in QTImage (which is in another package!). Maybe EncodedImage should have been an abstract class?
What about . . .
. . . getting other screens? If you do have multiple monitors, GDevice has a scheme for iterating through the screens. Call the static GDevice.getList( ) to get—wait for it—not a list of GDevices, but just the first one. You then call its instance method getNext( ) to return another GDevice, and so on, until getNext() returns null.
And why is the PNG file-type constant defined in StdQTConstants4 ? PNG came late to the QuickTime party and wasn't supported until QuickTime 4. The later constants classes (StdQTContants4, StdQTContants5, and StdQTContants6) define constants that were added in later versions of QuickTime. kQTFileTypeTIFF is also in StdQTConstants4, but most other values you'd want to use are in the original StdQTConstants.
Also, it's getting difficult to remember the various means of converting between EncodedImages, Picts, QDGraphics, etc. To keep track of all this for myself, I created the diagram in Figure 5-6 while writing this chapter and have found myself consulting it frequently since then.
Figure 5-6. Converting between QuickDraw objects
Image:QuickTime for Java: A Developer's Notebook I 5 tt75.png
Note
Why, oh why, are these methods named like this?
Matrix-Based Drawing
Primitives and copying blocks of pixels are nice, but they're kind of limiting. Oftentimes, you must take pixels and scale them, rotate them, and move them around. Of course, if you've worked with Java 2D, you know this as the concept of affine transformations , which maps one set of pixels to another set of pixels, keeping straight lines straight and parallel lines parallel.
If you've really worked with Java 2D's affine transformations, you probably know that they're represented as a linear algebra matrix, with coordinates mapped from source to destination by multiplying and/or adding pixel values against coefficients of the matrix. By changing the coefficients in the matrix to interesting values (or trigonometric functions), you can define different kinds of transformations.
QuickTime does exactly the same thing, with the minor exception that rather than hiding the matrix in a wrapper (like J2D's AffineTransformation class), it puts the matrix front-and-center throughout the API. One reason for this is that it's also a major part of the file format—tracks in a movie all have a matrix in their metadata to determine how they're rendered at runtime.
QuickTime matrix manipulation can basically do three things for you:
- Translation
- Move a block of pixels from one location to another
- Rotation
- Rotate pixels around a given point
- Scaling
- Make block bigger or smaller, or change its shape
Tip
This is a lab, not a lecture, so you don't get the all-singing, all-dancing, all-algebra introduction to matrix theory here. If you must have this, Apple provides a pretty straightforward intro in "The Transformation Matrix," part of the "Introductions to QuickTime" documentation anthology on its web site.
How do I do that?
The example GraphicImportMatrix shows the effect of setting up a Matrix and then using it for drawing operations. A full listing is in Example 5-6.
Note
Run this example with ant run-ch05-graphic-importmatrix.
Example 5-6. Drawing with matrix-based transformations
package com.oreilly.qtjnotebook.ch05; import quicktime.*; import quicktime.std.*; import quicktime.std.image.*; import quicktime.qd.*; import quicktime.io.*; import quicktime.util.*; import quicktime.app.view.*; import java.io.*; import java.awt.*; import com.oreilly.qtjnotebook.ch01.QTSessionCheck; public class GraphicImportMatrix extends Object { public static void main (String[ ] args) { try { QTSessionCheck.check( ); File graphicsDir = new File ("graphics"); QTFile pngFile1 = new QTFile (new File (graphicsDir, "1.png")); QTFile pngFile2 = new QTFile (new File (graphicsDir, "2.png")); GraphicsImporter gi1 = new GraphicsImporter (pngFile1); GraphicsImporter gi2 = new GraphicsImporter (pngFile2); // define some matrix transforms on importer 1 QDRect bounds = gi1.getBoundsRect( ); // combine translation (movement) and scaling into // one call to rect QDRect newBounds = new QDRect (bounds.getWidth( )/4, bounds.getHeight( )/4, bounds.getWidth( )/2, bounds.getHeight( )/2); Matrix matrix = new Matrix( ); matrix.rect(bounds, newBounds); // rotate about its center matrix.rotate (30, (bounds.getWidth( ) - bounds.getX( ))/2, (bounds.getHeight( ) - bounds.getY( ))/2); gi1.setMatrix (matrix); // draw somewhere QDGraphics scratchWorld = new QDGraphics (gi2.getBoundsRect( )); System.out.println ("Scratch world: " + scratchWorld); // draw background gi2.setGWorld (scratchWorld, null); gi2.draw( ); // draw foreground gi1.setGWorld (scratchWorld, null); gi1.draw( ); int bufSize = QTImage.getMaxCompressionSize (scratchWorld, scratchWorld.getBounds( ), 0, StdQTConstants.codecNormalQuality, StdQTConstants4.kPNGCodecType, CodecComponent.anyCodec); byte[ ] compBytes = new byte[bufSize]; RawEncodedImage compImg = new RawEncodedImage (compBytes); ImageDescription id = QTImage.compress(scratchWorld, scratchWorld.getBounds( ), StdQTConstants.codecNormalQuality, StdQTConstants4.kPNGCodecType, compImg); System.out.println ("rei compressed from gw is " + compImg.getSize( )); 
System.out.println ("exporting"); GraphicsExporter exporter = new GraphicsExporter (StdQTConstants4.kQTFileTypePNG); exporter.setInputPtr (compImg, id); QTFile outFile = new QTFile (new File ("matrix.png")); exporter.setOutputFile (outFile); exporter.doExport( ); System.out.println ("did export"); } catch (QTException qte) { qte.printStackTrace( ); } System.exit(0); } }
Note
Run this example with ant run-ch05-screentopng.
This headless app begins by importing two PNG files, the number 1 on a green background and the number 2 on cyan. Then it creates a GWorld (oops, I mean a QDGraphics—sorry!) big enough to hold the 2 image, which will serve as the background. Both GraphicsImporters call setGWorld() with the scratchWorld, which allows them to draw( ) into it. A Matrix defines a scale, translate, and rotate transformation for the 1, which is drawn atop the 2. The result is compressed as a PNG and saved as matrix.png, which is shown in Figure 5-7.
Figure 5-7. Drawing with a Matrix
Image:QuickTime for Java: A Developer's Notebook I 5 tt76.png
What just happened?
Using setMatrix( ) with a GraphicsImporter allows you to tell the importer to use the transformation specified by the Matrix when you call the importer's draw( ) method. Of the three typical transformations, two can be combined into one call—scaling and translating can be expressed with a single call, Matrix.rect() , which defines a mapping from one source rectangle to a target rectangle. In the example, rect( ) maps from the full size of the image to a quarter-size image, centered horizontally and vertically.
Tip
The same thing can be done with separate calls to Matrix.translate( ) and Matrix.scale(), if you prefer.
The example also calls Matrix.rotate() to rotate the scaled and moved box by 30 degrees clockwise.
Tip
You also can define matrix transformations by calling the various setXXX( ) methods that set individual coordinates in the Matrix, if you've read Apple's Matrix docs and understand each coefficient. But why bother when you've got the convenience calls?
Having set this Matrix on 1's GraphicsImporter, the example draws 2 into scratchWorld as a background, and then draws 1 on top of it, scaled, translated, and rotated.
But what to do with the pixels that have been drawn into the QDGraphics? It's not like the Section 5.5 lab, in which a QDGraphics was wrapped by a Pict that could be saved off to disk. Instead, use QTImage to create an EncodedImage from the drawing surface. In the Section 5.6 lab, QTImage.decompress( ) converted an image to a QDGraphics. In this case, QTImage.compress( ) can return the favor by compressing the possibly huge pixel map into a compressed format.
Compressing is harder than decompressing. You need to know up front how big of a byte array will be needed to hold the compressed bytes, so first you call getMaxCompressionSize() . This takes six parameters:
- A QDGraphics to compress from.
- A QDRect defining the region to be compressed.
- Color depth, as an int. Set this to 0 to let QuickTime decide.
- Codec quality. These are in StdQTConstants . From the worst to best, they are: codecMinQuality, codecLowQuality, codecNormalQuality, codecHighQuality, codecMaxQuality, codecLosslessQuality. Note that not all codecs support all these values.
- Codec type. These constants are identified as XXXCodecType constants in the StdQTConstants classes.
- Codec identifier. If you have a CodecComponent object you want to use for the compression, pass it here. Typically, you pass null to let QuickTime decide.
Most of these parameters are used in the subsequent compress( ) call. It goes without saying that you need to use the same values for each call, or else getMaxCompressionSize( ) will lead you to create a byte array that is the wrong size.
Along with many of the preceding parameters, the compress() call takes a RawEncodedImage created from a suitably large byte array. compress( ) puts the compressed and encoded image data into the RawEncodedImage and returns an ImageDescription. Taken together, these are enough to provide an input to a GraphicsExporter, in the form of a call to setInputPtr() .
Note
Passing pointers again! This is one of those cases where QTJ is very un-Java-like.
Compositing Graphics
Matrix transformations are nice, but you can do more with image drawing. QuickDraw supports a number of graphics modes so that instead of just copying pixels from a source to a destination, you can combine them to create interesting visual effects. The graphics mode defines the combination: blending, translucency, etc.
How do I do that?
Specifying a graphics mode for drawing is trivial. Create a GraphicsMode object and call setGraphicsMode( ) on the GraphicsImporter. In the included example, GraphicImportCompositing.java, the mode is set with the following code:
// draw foreground GraphicsMode alphaMode = new GraphicsMode (QDConstants.blend, QDColor.green); gi1.setGraphicsMode (alphaMode);
Note
Run this with ant-ch05-graphic-importcompositing.
This is another headless app, producing the composite.png file as shown in Figure 5-8. Notice that where the images overlap, the 2 can now show through the 1.
Figure 5-8. Drawing with blend graphics mode
Image:QuickTime for Java: A Developer's Notebook I 5 tt78.png
What just happened?
The "blend" GraphicsMode instructs QuickDraw to average out colors where they overlap. In this case, 1's black pixels are lightened up by averaging when averaged with cyan, and the green is slightly tinted where it overlaps with cyan or black.
The QDColor.green is irrelevant in this case, but change the first argument to QDConstants.transparent and suddenly the result is very different, as shown in Figure 5-9.
Figure 5-9. Drawing with transparent graphics mode
Image:QuickTime for Java: A Developer's Notebook I 5 tt79.png
A GraphicsMode takes a constant to specify behavior, and a color that is used by some of the available modes. In the case of transparent, any pixels of the specified color (green in this case) become invisible, allowing the background picture to show through.
Warning
Don't jump to the conclusion that this is similar to transparency in a GIF or a PNG. Those are indexed color formats, where one of the index values can be made transparent. But in such a format, you could have 254 index values that all represented the same shade of green, and a 255th that becomes invisible. In this QuickDraw example, all green pixels are transparent. If you've worked with television equipment, this should be familiar as the chroma key concept frequently used in news and weather, where someone will stand in front of a green wall, and an effects box will replace all green pixels with video from another source.
There are too many supported graphics mode values to list here, but some of the most useful are as follows:
- srcCopy
- Copies source to destination. This is the normal behavior.
- transparent
- Punches out specified color and lets background show through.
- blend
- Mixes foreground and background colors.
- addPin
- Adds foreground and background colors, up to a maximum value.
- subPin
- Calculates the difference between sum and destination colors, to a minimum value.
- ditherCopy
- Replaces destination with a dither mix of source and destination.
A complete list of values is provided in "Graphic Transfer Modes" on Apple's developer web site at. | http://commons.oreilly.com/wiki/index.php?title=QuickTime_for_Java:_A_Developer's_Notebook/Working_with_QuickDraw&oldid=4159 | CC-MAIN-2016-36 | refinedweb | 6,200 | 58.48 |
AIR REPAIR by Anita Sands Hernandez
In an increasingly polluted world, our homes are our first and last line of defense. The city beats against the windows but it can't come in. The home is fortress combined with temple. Home is where the heart is.
HOME is where our children take their first breath, and a trillion more that come after. Home sweet home is the ultimate citadel. Or is it? The sad reality is that no matter how we scour the place, the average house crawls with filth. The carpet where our infant gurgles is teeming with dust mites, bacterial life and toxic poisons from chemicals in that rug which his little damp hands carry to his mouth and eye.
The sheets on baby's bed are buttered with spores and mold from that wet spot behind the tub, that
leaks into the bedroom air. The sheets are painted with detergents which penetrate his skin. Toxic gases waft in from the city streets and are inhaled into lungs to line the surfaces with toxins that block air. They even go into his bloodstream to end up residing permanently in his brain where they go about their nasty work of
stealing oxygen molecules.
While Binky's parents proudly proclaim that their urban palace is spanky clean just because Binky's
toys are put away at night, they are in denial about the absolute filth of their city's air, the grime on
every inch of every surface in their home, and the carcinogens in every cell of little Binky's body.
Our homes and bodies are toxic dump sites so filthy they make Love Canal look clean. Our plush, $100 a
yard carpets are allergin factories regurgitating dust and chemicals. The gas blowing in from city streets is
thick with atoms of corrosive acids with names like nitrous oxide, sulphur dioxide, carbon monoxide,
heavy metals, and plain old soot. Factory waste fills every lungful that the laughing baby lying on the
clean, white rug inhales ---along with propyl alcohol from his soap and body lotion, carcinogenic talc from
his diapers and common dust, molds and pollen---all of it insulting the immune system, stealing the
oxygen from his blood, aggravating allergies and opening the way for catastrophic disease itself.
California's EPA admits that big city air pollution causes us to die prematurely of heart and lung
ailments, reduces life expectancy 1-3 years and increases risk of early death by 26%. I, a non-smoker
Los Angeleno certainly feel it. Every time I breathe in, my entire chest cavity itches and I am forced to
cough.
What can conscious people do? The answer is clear.
As modern man girds his loins against the
metropolis, prime among his tools is the
electrostatic air purifier, the cheapest of all, or the
HEPA (for High Efficiency Particulate Air) carbon
filter air conditioner. One day, in house
"ElectroStatic" purifiers or in-duct, built in HEPA air
cleaners and filters will be the rule. Today they are
the exception.
The city dweller's lethargy about cleaning the air in
his home and office is predicated on denial which can
easily be confronted. Once he learns how his allergies
to the air compromise his entire immune system and
that of his children, his ignorance of the harmfulness
of the poisoned air he breathes will soon be a thing of
the past.
True, the average man has just mastered the
eggbeater and electric shaver, and with great
difficulty, a few evolved specimens of us have
dominated the VCR and computer but evolve we
must and by the turn of the century we must get the
hang of the central Air cleaner or be conquered by
the modern age and perish.
Price tags on air tech are extraordinarily user-
friendly. 30% efficiency (either electrostatic or pleated
filtering media but not HEPA) costs a modest 150$.
65% efficiency costs a pleasant 350$ while 99.9
efficiency is only found in the HEPA or high
efficiency particulate air machine. A sample machine,
for instance the one made by PURE-AIR SYSTEMS
costs a tolerable $995. An electrostatic air purifier is
half that amt. They have a grid which gets so loaded
that you must wash off the soot daily! These
machines could easily be manufactured by factories in
third world countries. Some bright exporter might
take one of our best, 400$ machines, a simple
electrified grid, tear it apart, analyze how the low
current attracts the gas molecules by static, then
retro-engineer one, and then create a line that they
wholesale a more affordable model back to the
northern hemisphere's big cities, creating franchises
in every city like McDonalds. Whoever does this
with franchises in every city called AIR REPAIR will
be the next billionaire to follow Bill Gates. I mean
what good is it to have WINDOWS without air
coming in thru the actual window, air to breathe!?
Bite the bullet. To purify your home, to get rid of
that cough factor, for 400$ to 1000$ dollars, ( chump
change really) to be able to breathe get your first
machine now! Then, investigate being the HENRY
FORD of the E.S.P! You could find a third world
factory that could manufacture your line and create
the electrostatic machine much more cheaply than the
ones Sharper Image has right now.
If we could bring in really good air purification
machinery at lower rates than ORECK or Sharper
Image, we wouldn't have to skimp. To heck with one
machine per house, SPLURGE. You can dot porticos,
floors, wings or windows with a bevy of them.
Start with the HEPA machines, which we have
already and which Air Conditioning shops install. A
little true blue ALL-AMERICAN filter, dust
arrestance 88%, efficiency 20% costs $75. Why not
put one on each of the kid's rooms. A DUST FREE
has 93% dust arrestance and a price tag of $88. We'll
put it in the kitchen. The big PURE AIR from
Plainfield Indiana is the Grandaddy we'll put in the
basement and connect to an air-duct system. For
$995 it'll give the house a pair of carbon-filtered
lungs. In the West wing where Mom sometimes does
her sewing, a very trim Hi-Tech for 88$ or a
Newtron from Cincinnati, dust arrestance 86%
costing $400 will do fine.
In the spring, when pollen floats everywhere, we'll
run the little guys on windy mornings while the
PURE AIR pulls in rich, oxygen-laden garden air at
night. In summer, when the chemicals in smog photo-
react to sunlight to become harmful ozone, we'll run
the big guy full time, keep all the windows shut, fill
the house with wide-leafed, oxygen producing house
plants and remember to shut doors as fast as we open
them. We won't let any gasses in. We'll make the
entry hall into an hermetically sealed airlock. Beam
me in, Scottie, but puhleeeze, keep the burnt
petroleum-particulates out.
As sensitivities mutate into compulsions, and
compulsions into refinements, the strategies multiply.
Air Repair will become an art form, an arena of
experimentation. After dinner conversation will pivot
on filter tech and the wisdom of pre-filters, non-
corrosive finishes, or which machines hum, which are
silent, which have blowers, which have lifetime
warranties, vs. warranties for five or ten years.
There are satisfying complexities, choices to be made.
Some require a $150 filter put on every three years,
less demanding machines require prefilters that cost
$20 for a box of six but you have to change them
every 3 months. Another has a 35$ filter.
Imagine a game this complex, satisfying and
inexpensive that also saves lives. And the money Air
Repair costs is chicken feed for what you're getting: a
healthy family, a new hobby, purity and peace of mind
all wrapped up in one pastime. For some of you, it
would be a business, "AIR REPAIR" we take the pain
out of breathing. Estimates free" I can see the
business card now. It would be a major hit in any big
city even at 5 to l0 thousand a pop.
Air Repair is an armed response to a brave new
world ---but a damn filthy world---one that is
beating on our windows and doors 24 hours a day,
targeting our family's lungs.
Air Repair is a responsible tackling of the pollution
problem that is wreaking havoc on modern man's
body, spirit and mind. And on our children's too.
Filthy air is shortening our lives and making the
quality of life we 'enjoy' very limited, indeed. This
filth and toxidity is only going to get worse and
worse as China and the third world gets the
automobile. A layer of soot and petroleum particles
will block out the sun! Worse, it will block out every
breath you try to inhale!
If you have a nephew or son who doesn't know what
to do with his life, send him to trade school for air
conditioning. If you know someone in the third world
who wants to live there, have him set up a small
factory to make the electrostatic unit and you start
the franchises in the big filthy cities! You will not
only save lives but you will be wealthy beyond
measure.
A simple google search that I did located a French
company that makes a handsome E.S. unit. You could
import that machine and make a living with a
pyramid selling club. The time is ripe!
Do a google search on the words electrostatic air
purifier, putting it "in quotes."
I found dozens of manufacturers but these looked
good:
Or-
5-0323.html which has the best prices of any online
store. With 50$ for a room size air purifier, HEPA
and OZONE, you can really do one in each room
right now, today. Its catalogue has lowest prices for
across the board cameras, appliances, etc, they beat
everybody price wise. Or Visit:
Just imagine starting an import business that
warehouses the machines by the thousand and
wholesales them to your internet friends WHO DO
THE AIR REPAIR FRANCHISE in every big city.
Imagine all of you learning a simple installation riff.
Imagine selling them the way they sold vacuum
cleaners, door to door. Get hold of the Consumer
Reports article on the electrostatic purifier. Your
librarian can find it for you.
Think of your nephew who's handy. He and your son
could do this as a business. You could type up a
prospectus (the secret of doing this is at my main
website, PROSPECTUS WRITING and deal memo
form is there, and out of every 3 you give it to, one
will invest. You an easily raise the money. Talk it
over with young men in your family. Air Repair is
the wave of the future!
<---BACK TO THE WEBSITE MAIN PAGE | http://home.earthlink.net/~astrology/air_repair.htm | crawl-002 | refinedweb | 1,827 | 70.63 |
I started using Sublime Text 2 very recently, awesome feedback so far. I have a project created in ST2 with a make file, which I'm able to build using a CMD+B combination. The build is blazingly fast, so I'm trying to figure ways to avoid that extra CMD+B combination and tie it to a save operation. Is there a way to trigger a build on each save operation in a project?
Thanks for the support.
Is the 'Save All on Build' option an option for you?
Thanks for replying. Save on build wouldn't work for my requirement. What I'm looking for is exactly the opposite, a 'Build on Save' option.
Actually, if you have no other requirement, I don't see why replacing the 'Save' step with a 'Build' step in your workflow wouldn't resolve your issue. Anyway, you could probably write a plugin that triggers a 'build' command on the 'on_post_save' event, something like this (not tested):
import sublime_plugin

class AutoBuildOnSave(sublime_plugin.EventListener):
    def on_post_save(self, view):
        # 'build' is a window command, so run it on the view's window.
        view.window().run_command('build')
Thanks bizoo. I actually ended up writing the same yesterday. I would've saved a lot of trouble if I had seen your recent post before I sat down to learn the syntax of Sublime plugins and a bit of Python.
I've hosted it on my GitHub in case anyone is looking for the same functionality: github.com/alexnj/SublimeOnSaveBuild
I would like to add a menu item with a checkbox so that the plugin can be easily enabled or disabled per project, instead of removing it from the file system. I couldn't figure out from the API docs how to add a checkbox to a menu item, though; pretty much everything else is ready.
Thanks for the support, once again.
Hi, I extended your plugin with a "saveOnBuild" setting, so you can disable it by default and activate it only in specific projects via the project settings. Works very well for me: github.com/lunow/SublimeOnSaveBuild
sources / dnspython / 1.15.0-1%2Bdeb9u1 /
2016-09-29 Bob Halley <halley@dnspython.org>
* IDNA 2008 support is now available if the "idna" module has been
installed and IDNA 2008 is requested. The default IDNA behavior
is still IDNA 2003. The new IDNA codec mechanism is currently
only useful for direct calls to dns.name.from_text() or
dns.name.from_unicode(), but in future releases it will be
deployed throughout dnspython, e.g. so that you can read a
masterfile with an IDNA 2008 codec in force.
* By default, dns.name.to_unicode() is not strict about which
version of IDNA the input complies with. Strictness can be
requested by using one of the strict IDNA codecs.
* Add AVC RR support.
* Some problems with newlines in various output modes have been
addressed.
* dns.name.to_text() now returns text and not bytes on Python 3.x
* More miscellaneous fixes for the Python 2/3 codeline merge.
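The IDNA item above can be illustrated with the standard library alone: Python's built-in "idna" codec implements IDNA 2003, whose nameprep step folds the German sharp s to "ss", while IDNA 2008 keeps it as a distinct label. A minimal sketch (stdlib only, not dnspython code):

```python
# Python's built-in "idna" codec implements IDNA 2003.  Its nameprep
# step maps the German sharp s to "ss", so the distinction is lost;
# under IDNA 2008 the label would instead become a distinct A-label,
# which is why dnspython lets you choose the codec.
label = "straße"

idna2003 = label.encode("idna")   # stdlib codec, IDNA 2003 behavior
print(idna2003)                   # b'strasse'
```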
2016-05-27 Bob Halley <halley@dnspython.org>
* (Version 1.14.0 released)
* Add CSYNC RR support
* Fix bug in LOC which destroyed N/S and E/W distinctions within
a degree of the equator or prime meridian respectively.
* Misc. fixes to deal with fallout from the Python 2 & 3 merge.
[issue #156], [issue #157], [issue #158], [issue #159],
[issue #160].
* Running with python optimization on caused issues when
stripped docstrings were referenced. [issue #154]
* dns.zone.from_text() erroneously required the zone to be provided.
[issue #153]
2016-05-13 Bob Halley <halley@dnspython.org>
* dns/message.py (make_query): Setting any value which implies
EDNS will turn on EDNS if 'use_edns' has not been specified.
2016-05-12 Bob Halley <halley@dnspython.org>
* TSIG signature algorithm setting was broken by the Python 2
and Python 3 code line merge. Fixed.
2016-05-10 Bob Halley <halley@dnspython.org>
* (Version 1.13.0 released)
2016-05-10 Bob Halley <halley@dnspython.org>
* Dropped support for Python 2.4 and 2.5.
* Zone origin can be specified as a string.
* Support string representation for all DNSExceptions.
* Use setuptools not distutils
* A number of Unicode name bug fixes.
* Added support for CAA, CDS, CDNSKEY, EUI48, EUI64, and URI RR
types.
* Names now support the pickle protocol.
* NameDicts now keep the max-depth value correct, and update
properly.
* resolv.conf processing rejects lines with too few tokens.
* Ports can be specified per-nameserver in the stub resolver.
2016-05-03 Arthur Gautier
* Single source support for python 2.6+ and 3.3+
2014-09-04 Bob Halley <halley@dnspython.org>
* Comparing two rdata is now always done by comparing the binary
data of the DNSSEC digestable forms. This corrects a number of
errors where dnspython's rdata comparison order was not the
DNSSEC order.
* Add CAA implementation. Thanks to Brian Wellington for the
patch.
2014-09-01 Bob Halley <halley@dnspython.org>
* (Version 1.12.0 released)
2014-08-31 Bob Halley <halley@dnspython.org>
* The test system can now run the tests without requiring dnspython
to be installed.
2014-07-24 Bob Halley <halley@dnspython.org>
* The 64-bit version of Python on Windows has sys.maxint set to
2^31-1, yet passes 2^63-1 as the "unspecified bound" value in
slices. This is a bug in Python as the documentation says the
unspecified bound value should be sys.maxint. We now cope with
this. Thanks to Matthäus Wander for reporting the problem.
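The slice-bound quirk is easy to see from plain Python: slice bounds are clamped to the sequence length, so either "unspecified bound" value selects the whole tail. A small illustration (not dnspython code):

```python
# Python clamps slice bounds to the sequence length, so an
# "unspecified bound" of either 2**31 - 1 or 2**63 - 1 selects the
# whole tail of the data -- which is why dnspython can simply accept
# whichever huge value the platform passes in.
wire = b"\x07example\x03com\x00"

assert wire[2:2**31 - 1] == wire[2:]
assert wire[2:2**63 - 1] == wire[2:]
print("slice bounds clamp as expected")
```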
2014-06-21 Bob Halley <halley@dnspython.org>
* When reading from a masterfile, if the first content line
started with leading whitespace, we raised an ugly exception
instead of doing the right thing, namely using the zone origin as
the name. [#73] Thanks to Tassatux for reporting the issue.
* Added dns.zone.to_text() convenience method. Thanks to Brandon
Whaley <redkrieg@gmail.com> for the patch.
* The /etc/resolv.conf setting "options rotate" is now understood
by the resolver. If present, the resolver will shuffle the
nameserver list each time dns.resolver.query() is called. Thanks
to underrun for the patch. Note that you don't want to add
"options rotate" to your /etc/resolv.conf if your system's
resolver library does not understand it. In this case, just set
resolver.rotate = True by hand.
2014-06-19 Bob Halley <halley@dnspython.org>
* Escaping of Unicode has been corrected. Previously we escaped
and then converted to Unicode, but the right thing to do is
convert to Unicode, then escape. Also, characters > 0x7f should
NOT be escaped in Unicode mode. Thanks to Martin Basti for the
patch.
* dns.rdtypes.ANY.DNSKEY now has helpers functions to convert
between the numeric form of the flags and a set of human-friendly
strings. Thanks to Petr Spacek for the patch.
* RRSIGs did not respect relativization settings in to_text().
Thanks to Brian Smith for reporting the bug and submitting a
(slightly different) patch.
2014-06-18 Bob Halley <halley@dnspython.org>
* dns/rdtypes/IN/APL.py: The APL from_wire() method did not accept an
rdata length of 0 as valid. Thanks to salzmdan for reporting the
problem.
2014-05-31 Bob Halley <halley@dnspython.org>
* dns/ipv6.py: Add is_mapped()
* dns/reversename.py: Lookup IPv6 mapped IPv4 addresses in the v4
reverse namespace. Thanks to Devin Bayer. Yes, I finally fixed
this one :)
2014-04-11 Bob Halley <halley@dnspython.org>
* dns/zone.py: Do not put back an unescaped token. This was
causing escape processing for domain names to break. Thanks to
connormclaud for reporting the problem.
2014-04-04 Bob Halley <halley@dnspython.org>
* dns/message.py: Making a response didn't work correctly if the
query was signed with TSIG and we knew the key. Thanks to Jeffrey
Stiles for reporting the problem.
2013-12-11 Bob Halley <halley@dnspython.org>
* dns/query.py: Fix problems with the IXFR state machine which caused
long diffs to fail. Thanks to James Raftery for the fix and the
repeated prodding to get it applied :)
2013-09-02 Bob Halley <halley@dnspython.org>
* (Version 1.11.1 released)
2013-09-01 Bob Halley <halley@dnspython.org>
* dns/tsigkeyring.py (to_text): we want keyname.to_text(), not
dns.name.to_text(keyname). Thanks to wangwang for the fix.
2013-08-26 Bob Halley <halley@dnspython.org>
* dns/tsig.py (sign): multi-message TSIGs were broken for
algorithms other than HMAC-MD5 because we weren't passing the
right digest module to the HMAC code. Thanks to salzmdan for
reporting the bug.
2013-08-09 Bob Halley <halley@dnspython.org>
* dns/dnssec.py (_find_candidate_keys): we tried to extract the
key from the wrong variable name. Thanks to Andrei Fokau for the
fix.
2013-07-08 Bob Halley <halley@dnspython.org>
* dns/resolver.py: we want 'self.retry_servfail' not just
retry_servfail. Reported by many, thanks! Thanks to
Jeffrey C. Ollie for the fix.
2013-07-08 Bob Halley <halley@dnspython.org>
* tests/grange.py: fix tests to use older-style print formatting
for backwards compatibility with python 2.4. Thanks to
Jeffrey C. Ollie for the fix.
2013-07-01 Bob Halley <halley@dnspython.org>
* (Version 1.11.0 released)
2013-04-28 Bob Halley <halley@dnspython.org>
* dns/name.py (Name.to_wire): Do not add items with offsets >= 2^14
to the compression table. Thanks to Casey Deccio for discovering
this bug.
2013-04-26 Bob Halley <halley@dnspython.org>
* dns/ipv6.py (inet_ntoa): We now comply with RFC 5952 section
5.2.2, by *not* using the :: syntax to shorten just one 16-bit
field. Thanks to David Waitzman for reporting the bug and
suggesting the fix.
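For illustration, the stdlib ipaddress module applies the same rule, so it can stand in for the fixed dns.ipv6.inet_ntoa() in a quick check (a sketch, not dnspython code):

```python
import ipaddress

# Like the fixed dns.ipv6.inet_ntoa(), the stdlib ipaddress module
# follows the RFC 5952 rule: "::" may only replace a run of two or
# more zero groups, never a single one.
single = ipaddress.IPv6Address("1:2:3:4:5:6:0:8")
multi = ipaddress.IPv6Address("1:0:0:0:5:6:7:8")

print(single)   # 1:2:3:4:5:6:0:8  (single zero group kept)
print(multi)    # 1::5:6:7:8       (run of zeros compressed)
```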
2013-03-31 Bob Halley <halley@dnspython.org>
* lock caches in case they are shared
* raise YXDOMAIN if we see one
* do not print empty rdatasets
* Add contributed $GENERATE support (thanks uberj)
* Remove DNSKEY keytag uniqueness assumption (RFC 4034, section 8)
(thanks James Dempsey)
2012-09-25 Sean Leach
* added set_flags() method to dns.resolver.Resolver
2012-09-25 Pieter Lexis
* added support for TLSA RR
2012-08-28 Bob Halley <halley@dnspython.org>
* dns/rdtypes/ANY/NSEC3.py (NSEC3.from_text): The NSEC3 from_text()
method could erroneously emit empty bitmap windows (i.e. windows
with a count of 0 bytes); such bitmaps are illegal.
2012-04-08 Bob Halley <halley@dnspython.org>
* (Version 1.10.0 released)
2012-04-08 Bob Halley <halley@dnspython.org>
* dns/message.py (make_query): All EDNS values may now be
specified when calling make_query()
* dns/query.py: Specifying source_port had no effect if source was
not specified. We now use the appropriate wildcard source in
that case.
* dns/resolver.py (Resolver.query): source_port may now be
specified.
* dns/resolver.py (Resolver.query): Switch to TCP when a UDP
response is truncated. Handle nameservers that serve on UDP
but not TCP.
2012-04-07 Bob Halley <halley@dnspython.org>
* dns/zone.py (from_xfr): dns.zone.from_xfr() now takes a
'check_origin' parameter which defaults to True. If set to
False, then dnspython will not make origin checks on the zone.
Thanks to Carlos Perez for the report.
* dns/rdtypes/ANY/SSHFP.py (SSHFP.from_text): Allow whitespace in
the text string. Thanks to Jan Andres for the report and the
patch.
* dns/message.py (from_wire): dns.message.from_wire() now takes
an 'ignore_trailing' parameter which defaults to False. If set
to True, then trailing junk will be ignored instead of causing
TrailingJunk to be raised. Thanks to Shane Huntley for
contributing the patch.
2011-08-22 Bob Halley <halley@dnspython.org>
* dns/resolver.py: Added LRUCache. In this cache implementation,
the cache size is limited to a user-specified number of nodes, and
when adding a new node to a full cache the least-recently used
node is removed.
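A minimal sketch of the idea behind LRUCache, using collections.OrderedDict; the names and sizes here are illustrative, not dnspython's implementation:

```python
from collections import OrderedDict

# Bound the node count; when adding to a full cache, evict the
# least-recently used entry -- the behavior the LRUCache entry
# describes.  Illustrative sketch only.
class TinyLRU:
    def __init__(self, max_size=3):
        self.max_size = max_size
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)         # mark as recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        while len(self.data) > self.max_size:
            self.data.popitem(last=False)  # evict least-recently used

cache = TinyLRU(max_size=2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")         # touch "a" so "b" becomes the eviction candidate
cache.put("c", 3)      # evicts "b"
print(cache.get("b"))  # None
```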
2011-07-13 Bob Halley <halley@dnspython.org>
* dns/resolver.py: dns.resolver.override_system_resolver()
overrides the socket module's versions of getaddrinfo(),
getnameinfo(), getfqdn(), gethostbyname(), gethostbyname_ex() and
gethostbyaddr() with an implementation which uses a dnspython stub
resolver instead of the system's stub resolver. This can be
useful in testing situations where you want to control the
resolution behavior of python code without having to change the
system's resolver settings (e.g. /etc/resolv.conf).
dns.resolver.restore_system_resolver() undoes the change.
2011-07-08 Bob Halley <halley@dnspython.org>
* dns/ipv4.py: dnspython now provides its own, stricter, versions
of IPv4 inet_ntoa() and inet_aton() instead of using the OS's
versions.
* dns/ipv6.py: inet_aton() now bounds checks embedded IPv4 addresses
more strictly. Also, now only dns.exception.SyntaxError can be
raised on bad input.
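To see why stricter parsing matters, compare the OS inet_aton(), which accepts historical short forms like "127.1", with a strict parser; here the stdlib ipaddress module stands in for dnspython's own (a sketch, not dnspython code):

```python
import socket
import ipaddress

# The platform inet_aton() silently expands historical short forms,
# while a strict parser rejects anything but a dotted quad.
print(socket.inet_aton("127.1"))   # b'\x7f\x00\x00\x01' -- accepted!

try:
    ipaddress.IPv4Address("127.1")
except ipaddress.AddressValueError as e:
    print("strict parser rejected it:", e)
```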
2011-04-05 Bob Halley <halley@dnspython.org>
* Old DNSSEC types (KEY, NXT, and SIG) have been removed.
* Bounds checking of slices in rdata wire processing is now more
strict, and bounds errors (e.g. we got less data than was
expected) now raise dns.exception.FormError rather than
IndexError.
2011-03-28 Bob Halley <halley@dnspython.org>
* (Version 1.9.4 released)
2011-03-24 Bob Halley <halley@dnspython.org>
* dns/rdata.py (Rdata._wire_cmp): We need to specify no
compression and an origin to _wire_cmp() in case names in the
rdata are relative names.
* dns/rdtypes/ANY/SIG.py (SIG._cmp): Add missing 'import struct'.
Thanks to Arfrever Frehtes Taifersar Arahesis for reporting the
problem.
2011-03-24 Bob Halley <halley@dnspython.org>
* (Version 1.9.3 released)
2011-03-22 Bob Halley <halley@dnspython.org>
* dns/resolver.py: a boolean parameter, 'raise_on_no_answer', has
been added to the query() methods. In no-error, no-data
situations, this parameter determines whether NoAnswer should be
raised or not. If True, NoAnswer is raised. If False, then an
Answer() object with a None rrset will be returned.
* dns/resolver.py: Answer() objects now have a canonical_name field.
2011-01-11 Bob Halley <halley@dnspython.org>
* Dnspython was erroneously doing case-insensitive comparisons
of the names in NSEC and RRSIG RRs. Thanks to Casey Deccio for
reporting this bug.
2010-12-17 Bob Halley <halley@dnspython.org>
* dns/message.py (_WireReader._get_section): use "is" and not "=="
when testing what section an RR is in. Thanks to James Raftery
for reporting this bug.
2010-12-10 Bob Halley <halley@dnspython.org>
* dns/resolver.py (Resolver.query): disallow metaqueries.
* dns/rdata.py (Rdata.__hash__): Added a __hash__ method for rdata.
2010-11-23 Bob Halley <halley@dnspython.org>
* (Version 1.9.2 released)
2010-11-23 Bob Halley <halley@dnspython.org>
* dns/dnssec.py (_need_pycrypto): DSA and RSA are modules, not
functions, and I didn't notice because the test suite masked
the bug! *sigh*
2010-11-22 Bob Halley <halley@dnspython.org>
* (Version 1.9.1 released)
2010-11-22 Bob Halley <halley@dnspython.org>
* dns/dnssec.py: the "from" style import used to get DSA from
PyCrypto trashed a DSA constant. Now a normal import is used
to avoid namespace contamination.
2010-11-20 Bob Halley <halley@dnspython.org>
* (Version 1.9.0 released)
2010-11-07 Bob Halley <halley@dnspython.org>
* dns/dnssec.py: Added validate() to do basic DNSSEC validation
(requires PyCrypto). Thanks to Brian Wellington for the patch.
* dns/hash.py: Hash compatibility handling is now its own module.
2010-10-31 Bob Halley <halley@dnspython.org>
* dns/resolver.py (zone_for_name): A query name resulting in a
CNAME or DNAME response to a node which had an SOA was incorrectly
treated as a zone origin. In these cases, we should just look
higher. Thanks to Gert Berger for reporting this problem.
* Added zonediff.py to examples. This program compares two zones
and shows the differences either in diff-like plain text, or
HTML. Thanks to Dennis Kaarsemaker for contributing this
useful program.
2010-10-27 Bob Halley <halley@dnspython.org>
* Incorporate a patch to use poll() instead of select() by
default on platforms which support it. Thanks to
Peter Schüller and Spotify for the contribution.
2010-10-17 Bob Halley <halley@dnspython.org>
* Python prior to 2.5.2 doesn't compute the correct values for
HMAC-SHA384 and HMAC-SHA512. We now detect attempts to use
them and raise NotImplemented if the Python version is too old.
Thanks to Kevin Chen for reporting the problem.
* Various routines that took the string forms of rdata types and
classes did not permit the strings to be Unicode strings.
Thanks to Ryan Workman for reporting the issue.
* dns/tsig.py: Added symbolic constants for the algorithm strings.
E.g. you can now say dns.tsig.HMAC_MD5 instead of
"HMAC-MD5.SIG-ALG.REG.INT". Thanks to Cillian Sharkey for
suggesting this improvement.
* dns/tsig.py (get_algorithm): fix hashlib compatibility; thanks to
Kevin Chen for the patch.
* dns/dnssec.py: Added key_id() and make_ds().
* dns/message.py: message.py needs to import dns.edns since it uses
it.
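The HMAC items above come down to passing the right digest constructor; on a modern Python the algorithms mentioned can be exercised directly with the standard library (illustration only, not dnspython code):

```python
import hmac
import hashlib

# Each HMAC flavor is just hmac.new() with a different hashlib
# constructor backing it.
key = b"tsig-secret"
msg = b"example message"

for digest in (hashlib.sha1, hashlib.sha256, hashlib.sha384, hashlib.sha512):
    mac = hmac.new(key, msg, digest)
    print(mac.name, mac.digest_size)
```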
2010-05-04 Bob Halley <halley@dnspython.org>
* dns/rrset.py (RRset.__init__): "covers" was not passed to the
superclass __init__(). Thanks to Shanmuga Rajan for reporting
the problem.
2010-03-10 Bob Halley <halley@dnspython.org>
* The TSIG algorithm value was passed to use_tsig() incorrectly
in some cases. Thanks to 'ducciovigolo' for reporting the problem.
2010-01-26 Bob Halley <halley@dnspython.org>
* (Version 1.8.0 released)
* The get() method now returns Token objects, not (type, text)
tuples.
2009-11-13 Bob Halley <halley@dnspython.org>
* Support has been added for hmac-sha1, hmac-sha224, hmac-sha256,
hmac-sha384 and hmac-sha512. Thanks to Kevin Chen for a
thoughtful, high quality patch.
* dns/update.py (Update::present): A zero TTL was not added if
present() was called with a single rdata, causing _add() to be
unhappy. Thanks to Eugene Kim for reporting the problem and
submitting a patch.
* dns/entropy.py: Use os.urandom() if present. Don't seed until
someone wants randomness.
2009-09-16 Bob Halley <halley@dnspython.org>
* dns/entropy.py: The entropy module needs locking in order to be
used safely in a multithreaded environment. Thanks to Beda Kosata
for reporting the problem.
2009-07-27 Bob Halley <halley@dnspython.org>
* dns/query.py (xfr): The socket was not set to nonblocking mode.
Thanks to Erik Romijn for reporting this problem.
2009-07-23 Bob Halley <halley@dnspython.org>
* dns/rdtypes/IN/SRV.py (SRV._cmp): SRV records were compared
incorrectly due to a cut-and-paste error. Thanks to Tommie
Gannert for reporting this bug.
* dns/e164.py (query): The resolver parameter was not used.
Thanks to Matías Bellone for reporting this bug.
2009-06-23 Bob Halley <halley@dnspython.org>
* dns/entropy.py (EntropyPool.__init__): open /dev/random unbuffered;
there's no need to consume more randomness than we need. Thanks
to Brian Wellington for the patch.
2009-06-19 Bob Halley <halley@dnspython.org>
* (Version 1.7.1 released)
2009-06-19 Bob Halley <halley@dnspython.org>
* DLV.py was omitted from the kit
* Negative prerequisites were not handled correctly in _get_section().
2009-06-19 Bob Halley <halley@dnspython.org>
* (Version 1.7.0 released)
2009-06-19 Bob Halley <halley@dnspython.org>
* On Windows, the resolver set the domain incorrectly. Thanks
to Brandon Carpenter for reporting this bug.
* Added a to_digestable() method to rdata classes; it returns the
digestable form (i.e. DNSSEC canonical form) of the rdata. For
most rdata types this is the same uncompressed wire form. For
certain older DNS RR types, however, domain names in the rdata
are downcased.
* Added support for the HIP RR type.
2009-06-18 Bob Halley <halley@dnspython.org>
* Added support for the DLV RR type.
* Added various DNSSEC related constants (e.g. algorithm identifiers,
flag values).
* dns/tsig.py: Added support for BADTRUNC result code.
* dns/query.py (udp): When checking that addresses are the same,
use the binary form of the address in the comparison. This
ensures that we don't treat addresses as different if they have
equivalent but differing textual representations. E.g. "1:00::1"
and "1::1" represent the same address but are not textually equal.
Thanks to Kim Davies for reporting this bug.
* The resolver's query() method now has an optional 'source' parameter,
allowing the source IP address to be specified. Thanks to
Alexander Lind for suggesting the change and sending a patch.
* Added NSEC3 and NSEC3PARAM support.
2009-06-17 Bob Halley <halley@dnspython.org>
* Fixed NSEC.to_text(), which was only printing the last window.
Thanks to Brian Wellington for finding the problem and fixing it.
2009-03-30 Bob Halley <halley@dnspython.org>
* dns/query.py (xfr): Allow UDP IXFRs. Use "one_rr_per_rrset" mode when
doing IXFR.
2009-03-30 Bob Halley <halley@dnspython.org>
* Add "one_rr_per_rrset" mode switch to methods which parse
messages from wire format (e.g. dns.message.from_wire(),
dns.query.udp(), dns.query.tcp()). If set, each RR read is
placed in its own RRset (instead of being coalesced).
2009-03-30 Bob Halley <halley@dnspython.org>
* Added EDNS option support.
2008-10-16 Bob Halley <halley@dnspython.org>
* dns/rdtypes/ANY/DS.py: The from_text() parser for DS RRs did not
allow multiple Base64 chunks. Thanks to Rakesh Banka for
finding this bug and submitting a patch.
2008-10-08 Bob Halley <halley@dnspython.org>
* Add entropy module.
* When validating TSIGs, we need to use the absolute name.
2008-06-03 Bob Halley <halley@dnspython.org>
* dns/message.py (Message.set_rcode): The mask used preserved the
extended rcode, instead of everything else in ednsflags.
* dns/message.py (Message.use_edns): ednsflags was not kept
coherent with the specified edns version.
2008-02-06 Bob Halley <halley@dnspython.org>
* dns/ipv6.py (inet_aton): We could raise an exception other than
dns.exception.SyntaxError in some cases.
* dns/tsig.py: Raise an exception when the peer has set a non-zero
TSIG error.
2007-11-25 Bob Halley <halley@dnspython.org>
* (Version 1.6.0 released)
2007-11-25 Bob Halley <halley@dnspython.org>
* dns/query.py (_wait_for): if select() raises an exception due to
EINTR, we should just select() again.
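The EINTR fix is the classic retry idiom shown below (a sketch, not dnspython's code; note that Python 3.5+ retries interrupted system calls automatically per PEP 475):

```python
import errno
import select

def wait_for_readable(socks, timeout):
    # Retry select() if it is interrupted by a signal (EINTR),
    # as the 2007-11-25 entry describes.  Pre-3.5 idiom.
    while True:
        try:
            return select.select(socks, [], [], timeout)
        except OSError as e:
            if e.errno != errno.EINTR:
                raise

print(wait_for_readable([], 0))
```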
2007-06-13 Bob Halley <halley@dnspython.org>
* dns/inet.py: Added is_multicast().
* dns/query.py (udp): If the queried address is a multicast address, then
don't check that the address of the response is the same as the address
queried.
2007-05-24 Bob Halley <halley@dnspython.org>
* dns/rdtypes/IN/NAPTR.py: NAPTR comparisons didn't compare the
preference field due to a typo.
2007-02-07 Bob Halley <halley@dnspython.org>
* dns/resolver.py: Integrate code submitted by Paul Marks to
determine whether a Windows NIC is enabled. The way dnspython
used to do this does not work on Windows Vista.
2006-12-10 Bob Halley <halley@dnspython.org>
* (Version 1.5.0 released)
2006-11-03 Bob Halley <halley@dnspython.org>
* dns/rdtypes/IN/DHCID.py: Added support for the DHCID RR type.
2006-11-02 Bob Halley <halley@dnspython.org>
* dns/query.py (udp): Messages from unexpected sources can now be
ignored by setting ignore_unexpected to True.
2006-10-31 Bob Halley <halley@dnspython.org>
* dns/query.py (udp): When raising UnexpectedSource, add more
detail about what went wrong to the exception.
2006-09-22 Bob Halley <halley@dnspython.org>
* dns/message.py (Message.use_edns): add reasonable defaults for
the ednsflags, payload, and request_payload parameters.
* dns/message.py (Message.want_dnssec): add a convenience method for
enabling/disabling the "DNSSEC desired" flag in requests.
* dns/message.py (make_query): add "use_edns" and "want_dnssec"
parameters.
2006-08-17 Bob Halley <halley@dnspython.org>
* dns/resolver.py (Resolver.read_resolv_conf): If /etc/resolv.conf
doesn't exist, just use the default resolver configuration (i.e.
the same thing we would have used if resolv.conf had existed and
been empty).
2006-07-26 Bob Halley <halley@dnspython.org>
* dns/resolver.py (Resolver._config_win32_fromkey): fix
cut-and-paste error where we passed the wrong variable to
self._config_win32_search(). Thanks to David Arnold for finding
the bug and submitting a patch.
2006-07-20 Bob Halley <halley@dnspython.org>
* dns/resolver.py (Answer): Add more support for the sequence
protocol, forwarding requests to the answer object's rrset.
E.g. "for a in answer" is equivalent to "for a in answer.rrset",
"answer[i]" is equivalent to "answer.rrset[i]", and
"answer[i:j]" is equivalent to "answer.rrset[i:j]".
2006-07-19 Bob Halley <halley@dnspython.org>
* dns/query.py (xfr): Add IXFR support.
2006-06-22 Bob Halley <halley@dnspython.org>
* dns/rdtypes/IN/IPSECKEY.py: Added support for the IPSECKEY RR type.
2006-06-21 Bob Halley <halley@dnspython.org>
* dns/rdtypes/ANY/SPF.py: Added support for the SPF RR type.
2006-06-02 Bob Halley <halley@dnspython.org>
* (Version 1.4.0 released)
2006-04-25 Bob Halley <halley@dnspython.org>
* dns/rrset.py (RRset.to_rdataset): Added a convenience method
to convert an rrset into an rdataset.
2006-03-27 Bob Halley <halley@dnspython.org>
* Added dns.e164.query(). This function can be used to look for
NAPTR RRs for a specified number in several domains, e.g.:
dns.e164.query('16505551212',
['e164.dnspython.org.', 'e164.arpa.'])
2006-03-26 Bob Halley <halley@dnspython.org>
* dns/resolver.py (Resolver.query): The resolver deleted from
a list while iterating it, which makes the iterator unhappy.
2006-03-17 Bob Halley <halley@dnspython.org>
* dns/resolver.py (Resolver.query): The resolver needlessly
delayed responses for successful queries.
2006-01-18 Bob Halley <halley@dnspython.org>
* dns/rdata.py: added a validate() method to the rdata class. If
you change an rdata by assigning to its fields, it is a good
idea to call validate() when you are done making changes.
For example, if 'r' is an MX record and then you execute:
r.preference = 100000 # invalid, because > 65535
r.validate()
The validation will fail and an exception will be raised.
2006-01-11 Bob Halley <halley@dnspython.org>
* dns/ttl.py: TTLs are now bounds checked to be within the closed
interval [0, 2^31 - 1].
* The BIND 8 TTL syntax is now accepted in the SOA refresh, retry,
expire, and minimum fields, and in the original_ttl field of
SIG and RRSIG records.
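A rough stdlib-only sketch of the BIND 8 TTL syntax and the bounds check described above; dnspython's real parser lives in dns/ttl.py, and the helper below is only illustrative:

```python
import re

# BIND 8 TTL syntax: concatenated <number><unit> groups (w/d/h/m/s),
# e.g. "1d2h" = 93600 seconds.  Includes the [0, 2**31 - 1] check.
_UNITS = {"w": 604800, "d": 86400, "h": 3600, "m": 60, "s": 1}

def parse_ttl(text):
    if text.isdigit():
        total = int(text)
    else:
        parts = re.findall(r"(\d+)([wdhms])", text.lower())
        if "".join(n + u for n, u in parts) != text.lower():
            raise ValueError("bad TTL syntax: %r" % text)
        total = sum(int(n) * _UNITS[u] for n, u in parts)
    if not 0 <= total <= 2**31 - 1:
        raise ValueError("TTL out of bounds: %d" % total)
    return total

print(parse_ttl("1d2h"))   # 93600
print(parse_ttl("3600"))   # 3600
```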
2006-01-04 Bob Halley <halley@dnspython.org>
* dns/resolver.py: The Windows registry irritatingly changes the
list element delimiter between ' ' and ',' (and vice-versa)
in various versions of Windows. We now cope by always looking
for either one (' ' first).
2005-12-27 Bob Halley <halley@dnspython.org>
* dns/e164.py: Added routines to convert between E.164 numbers and
their ENUM domain name equivalents.
* dns/reversename.py: Added routines to convert between IPv4 and
IPv6 addresses and their DNS reverse-map equivalents.
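The stdlib can illustrate the reverse-map conversion that dns.reversename automates: address objects expose a reverse_pointer attribute (a quick illustration, not dnspython code):

```python
import ipaddress

# Every ipaddress object knows its in-addr.arpa / ip6.arpa name,
# the same mapping dns.reversename performs.
v4 = ipaddress.ip_address("127.0.0.1")
v6 = ipaddress.ip_address("2001:db8::1")

print(v4.reverse_pointer)  # 1.0.0.127.in-addr.arpa
print(v6.reverse_pointer)  # nibble-reversed name under ip6.arpa
```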
2005-12-18 Bob Halley <halley@dnspython.org>
* dns/rdtypes/ANY/LOC.py (_tuple_to_float): The sign was lost when
converting a tuple into a float, which broke conversions of
south latitudes and west longitudes.
2005-11-17 Bob Halley <halley@dnspython.org>
* dns/zone.py: The 'origin' parameter to from_text() and from_file()
is now optional. If not specified, dnspython will use the
first $ORIGIN in the text as the zone's origin.
* dns/zone.py: Sanity checks of the zone's origin node can now
be disabled.
2005-11-12 Bob Halley <halley@dnspython.org>
* dns/name.py: Preliminary Unicode support has been added for
domain names. Running dns.name.from_text() on a Unicode string
will now encode each label using the IDN ACE encoding. The
to_unicode() method may be used to convert a dns.name.Name with
IDN ACE labels back into a Unicode string. This functionality
requires Python 2.3 or greater.
* dns/name.py: Added a parent() method, which returns the
parent of a name.
2005-10-01 Bob Halley <halley@dnspython.org>
* dns/resolver.py: Added zone_for_name() helper, which returns
the name of the zone which contains the specified name.
* dns/resolver.py: Added get_default_resolver(), which returns
the default resolver, initializing it if necessary.
2005-09-29 Bob Halley <halley@dnspython.org>
* dns/resolver.py (Resolver._compute_timeout): If time goes
backwards a little bit, ignore it.
2005-07-31 Bob Halley <halley@dnspython.org>
* (Version 1.3.4 released)
2005-07-31 Bob Halley <halley@dnspython.org>
* dns/message.py (make_response): Trying to respond to a response
threw a NameError while trying to throw a FormErr since it used
the wrong name for the FormErr exception.
* dns/query.py (_connect): We needed to ignore EALREADY too.
* dns/query.py: Optional "source" and "source_port" parameters
have been added to udp(), tcp(), and xfr(). Thanks to Ralf
Weber for suggesting the change and providing a patch.
2005-06-05 Bob Halley <halley@dnspython.org>
* dns/query.py: The requirement that the "where" parameter be
an IPv4 or IPv6 address is now documented.
2005-06-04 Bob Halley <halley@dnspython.org>
* dns/resolver.py: The resolver now does exponential backoff
each time it runs through all of the nameservers.
* dns/resolver.py: rcodes which indicate a nameserver is likely
to be a "permanent failure" for a query cause the nameserver
to be removed from the mix for that query.
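A sketch of the two resolver behaviors described above; the names and values are illustrative, not dnspython's:

```python
# Double the per-pass timeout after each full run through the
# nameserver list, and drop a server from the mix once it returns a
# "permanent failure" rcode for the query.
def backoff_schedule(initial, passes):
    delay = initial
    out = []
    for _ in range(passes):
        out.append(delay)
        delay *= 2            # exponential backoff between passes
    return out

servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
permanent_failures = {"10.0.0.2"}  # e.g. answered REFUSED
usable = [s for s in servers if s not in permanent_failures]

print(backoff_schedule(1.0, 4))  # [1.0, 2.0, 4.0, 8.0]
print(usable)                    # ['10.0.0.1', '10.0.0.3']
```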
2005-01-30 Bob Halley <halley@dnspython.org>
* (Version 1.3.3 released)
2004-10-25 Bob Halley <halley@dnspython.org>
* dns/rdtypes/ANY/TXT.py (TXT.from_text): The masterfile parser
incorrectly rejected TXT records where a value was not quoted.
2004-10-11 Bob Halley <halley@dnspython.org>
* dns/message.py: Added make_response(), which creates a skeletal
response for the specified query. Added opcode() and set_opcode()
convenience methods to the Message class. Added the request_payload
attribute to the Message class.
2004-10-10 Bob Halley <halley@dnspython.org>
* dns/zone.py (from_xfr): dns.zone.from_xfr() in relativization
mode incorrectly set zone.origin to the empty name.
2004-09-02 Bob Halley <halley@dnspython.org>
* dns/name.py (Name.to_wire): The 'file' parameter to
Name.to_wire() is now optional; if omitted, the wire form will
be returned as the value of the function.
2004-08-14 Bob Halley <halley@dnspython.org>
* dns/message.py (Message.find_rrset): find_rrset() now uses an
index, vastly improving the from_wire() performance of large
messages such as zone transfers.
2004-08-07 Bob Halley <halley@dnspython.org>
* (Version 1.3.2 released)
2004-08-04 Bob Halley <halley@dnspython.org>
* dns/query.py: sending queries to a nameserver via IPv6 now
works.
* dns/inet.py (af_for_address): Add af_for_address(), which looks
at a textual-form address and attempts to determine which address
family it is.
* dns/query.py: the default for the 'af' parameter of the udp(),
tcp(), and xfr() functions has been changed from AF_INET to None,
which causes dns.inet.af_for_address() to be used to determine the
address family. If dns.inet.af_for_address() can't figure it out,
we fall back to AF_INET and hope for the best.
2004-07-31 Bob Halley <halley@dnspython.org>
* dns/rdtypes/ANY/NSEC.py (NSEC.from_text): The NSEC text format
does not allow specifying types by number, so we shouldn't either.
* dns/renderer.py: the renderer module didn't import random,
causing an exception to be raised if a query id wasn't provided
when a Renderer was created.
* dns/resolver.py (Resolver.query): the resolver wasn't catching
dns.exception.Timeout, so a timeout erroneously caused the whole
resolution to fail instead of just going on to the next server.
2004-06-16 Bob Halley <halley@dnspython.org>
* dns/rdtypes/ANY/LOC.py (LOC.from_text): LOC milliseconds values
were converted incorrectly if the length of the milliseconds
string was less than 3.
2004-06-06 Bob Halley <halley@dnspython.org>
* (Version 1.3.1 released)
2004-05-22 Bob Halley <halley@dnspython.org>
* dns/update.py (Update.delete): We erroneously specified a
"deleting" value of dns.rdatatype.NONE instead of
dns.rdataclass.NONE when the thing being deleted was either an
Rdataset instance or an Rdata instance.
* dns/rdtypes/ANY/SSHFP.py: Added support for the proposed SSHFP
RR type.
2004-05-14 Bob Halley <halley@dnspython.org>
* dns/rdata.py (from_text): The masterfile reader did not
accept the unknown RR syntax when used with a known RR type.
2004-05-08 Bob Halley <halley@dnspython.org>
* dns/name.py (from_text): dns.name.from_text() did not raise
an exception if a backslash escape ended prematurely.
2004-04-09 Bob Halley <halley@dnspython.org>
* dns/zone.py (_MasterReader._rr_line): The masterfile reader
erroneously treated lines starting with leading whitespace but
not having any RR definition as an error. It now treats
them like a blank line (which is not an error).
2004-04-01 Bob Halley <halley@dnspython.org>
* (Version 1.3.0 released)
2004-03-19 Bob Halley <halley@dnspython.org>
* Added support for new DNSSEC types RRSIG, NSEC, and DNSKEY.
2004-01-16 Bob Halley <halley@dnspython.org>
* dns/query.py (_connect): Windows returns EWOULDBLOCK instead
of EINPROGRESS when trying to connect a nonblocking socket.
2003-11-13 Bob Halley <halley@dnspython.org>
* dns/rdtypes/ANY/LOC.py (LOC.to_wire): We encoded and decoded LOC
incorrectly, since we were interpreting the values of altitude,
size, hprec, and vprec in meters instead of centimeters.
* dns/rdtypes/IN/WKS.py (WKS.from_wire): The WKS protocol value is
encoded with just one octet, not two!
2003-11-09 Bob Halley <halley@dnspython.org>
* dns/resolver.py (Cache.maybe_clean): The cleaner deleted items
from the dictionary while iterating it, causing a RuntimeError
to be raised. Thanks to Mark R. Levinson for the bug report,
regression test, and fix.
2003-11-07 Bob Halley <halley@dnspython.org>
* (Version 1.2.0 released)
2003-11-03 Bob Halley <halley@dnspython.org>
* dns/zone.py (_MasterReader.read): The saved_state now includes
the default TTL.
2003-11-01 Bob Halley <halley@dnspython.org>
* dns/tokenizer.py (Tokenizer.get): The tokenizer didn't
handle escaped delimiters.
2003-10-27 Bob Halley <halley@dnspython.org>
* dns/resolver.py (Resolver.read_resolv_conf): If no nameservers
are configured in /etc/resolv.conf, the default nameserver
list should be ['127.0.0.1'].
2003-09-08 Bob Halley <halley@dnspython.org>
* dns/resolver.py (Resolver._config_win32_fromkey): We didn't
catch WindowsError, which can happen if a key is not defined
in the registry.
2003-09-06 Bob Halley <halley@dnspython.org>
* (Version 1.2.0b1 released)
2003-09-05 Bob Halley <halley@dnspython.org>
* dns/query.py: Timeout support has been overhauled to provide
timeouts under Python 2.2 as well as 2.3, and to provide more
accurate expiration.
2003-08-30 Bob Halley <halley@dnspython.org>
* dns/zone.py: dns.exception.SyntaxError is raised for unknown
master file directives.
2003-08-28 Bob Halley <halley@dnspython.org>
* dns/zone.py: $INCLUDE processing is now enabled/disabled using
the allow_include parameter. The default is to process $INCLUDE
for from_file(), and to disallow $INCLUDE for from_text(). The
master reader now calls zone.check_origin_node() by default after
the zone has been read. find_rdataset() called get_node() instead
of find_node(), which result in an incorrect exception. The
relativization state of a zone is now remembered and applied
consistently when looking up names. from_xfr() now supports
relativization like the _MasterReader.
2003-08-22 Bob Halley <halley@dnspython.org>
* dns/zone.py: The _MasterReader now understands $INCLUDE.
2003-08-12 Bob Halley <halley@dnspython.org>
* dns/zone.py: The _MasterReader now specifies the file and line
number when a syntax error occurs. The BIND 8 TTL format is now
understood when loading a zone, though it will never be emitted.
The from_file() function didn't pass the zone_factory parameter
to from_text().
2003-08-10 Bob Halley <halley@dnspython.org>
* (Version 1.1.0 released)
2003-08-07 Bob Halley <halley@dnspython.org>
* dns/update.py (Update._add): A typo meant that _add would
fail if the thing being added was an Rdata object (as
opposed to an Rdataset or the textual form of an Rdata).
2003-08-05 Bob Halley <halley@dnspython.org>
* dns/set.py: the simple Set class has been moved to its
own module, and augmented to support more set operations.
2003-08-04 Bob Halley <halley@dnspython.org>
* Node and all rdata types have been "slotted". This speeds
things up a little and reduces memory usage noticeably.
2003-08-02 Bob Halley <halley@dnspython.org>
* (Version 1.1.0c1 released)
2003-08-02 Bob Halley <halley@dnspython.org>
* dns/rdataset.py: SimpleSets now support more set options.
* dns/message.py: Added the get_rrset() method. from_file() now
allows Unicode filenames and turns on universal newline support if
it opens the file itself.
* dns/node.py: Added the delete_rdataset() and replace_rdataset()
methods.
* dns/zone.py: Added the delete_node(), delete_rdataset(), and
replace_rdataset() methods. from_file() now allows Unicode
filenames and turns on universal newline support if it opens the
file itself. Added a to_file() method.
2003-08-01 Bob Halley <halley@dnspython.org>
* dns/opcode.py: Opcode from/to text converters now understand
numeric opcodes. The to_text() method will return a numeric opcode
string if it doesn't know a text name for the opcode.
* dns/message.py: Added set_rcode(). Fixed code where ednsflags
wasn't treated as a long.
* dns/rcode.py: ednsflags wasn't treated as a long. Rcode from/to
text converters now understand numeric rcodes. The to_text()
method will return a numeric rcode string if it doesn't know
a text name for the rcode.
* examples/reverse.py: Added a new example program that builds a
reverse (address-to-name) mapping table from the name-to-address
mapping specified by A RRs in zone files.
* dns/node.py: Added get_rdataset() method.
* dns/zone.py: Added get_rdataset() and get_rrset() methods. Added
iterate_rdatas().
2003-07-31 Bob Halley <halley@dnspython.org>
* dns/zone.py: Added the iterate_rdatasets() method which returns
a generator which yields (name, rdataset) tuples for all the
rdatasets in the zone matching the specified rdatatype.
2003-07-30 Bob Halley <halley@dnspython.org>
* (Version 1.1.0b2 released)
2003-07-30 Bob Halley <halley@dnspython.org>
* dns/zone.py: Added find_rrset() and find_rdataset() convenience
methods. They let you retrieve rdata with the specified name
and type in one call.
* dns/node.py: Nodes no longer have names; owner names are
associated with nodes in the Zone object's nodes dictionary.
* dns/zone.py: Zone objects now implement more of the standard
mapping interface. __iter__ has been changed to iterate the keys
rather than values to match the standard mapping interface's
behavior.
2003-07-20 Bob Halley <halley@dnspython.org>
* dns/ipv6.py (inet_ntoa): Handle embedded IPv4 addresses.
2003-07-19 Bob Halley <halley@dnspython.org>
* (Version 1.1.0b1 released)
2003-07-18 Bob Halley <halley@dnspython.org>
* dns/tsig.py: The TSIG validation of TCP streams where not
every message is signed now works correctly.
* dns/zone.py: Zones can now be compared for equality and
inequality. If the other object in the comparison is also
a zone, then "the right thing" happens; i.e. the zones are
equal iff.: they have the same rdclass, origin, and nodes.
2003-07-17 Bob Halley <halley@dnspython.org>
* dns/message.py (Message.use_tsig): The method now allows for
greater control over the various fields in the generated signature
(e.g. fudge).
(_WireReader._get_section): UnknownTSIGKey is now raised if an
unknown key is encountered, or if a signed message has no keyring.
2003-07-16 Bob Halley <halley@dnspython.org>
* dns/tokenizer.py (Tokenizer._get_char): get_char and unget_char
have been renamed to _get_char and _unget_char since they are not
useful to clients of the tokenizer.
2003-07-15 Bob Halley <halley@dnspython.org>
* dns/zone.py (_MasterReader._rr_line): owner names were being
unconditionally relativized; it makes much more sense for them
to be relativized according to the relativization setting of
the reader.
2003-07-12 Bob Halley <halley@dnspython.org>
* dns/resolver.py (Resolver.read_resolv_conf): The resolv.conf
parser did not allow blank / whitespace-only lines, nor did it
allow comments. Both are now supported.
2003-07-11 Bob Halley <halley@dnspython.org>
* dns/name.py (Name.to_digestable): to_digestable() now
requires an origin to be specified if the name is relative.
It will raise NeedAbsoluteNameOrOrigin if the name is
relative and there is either no origin or the origin is
itself relative.
(Name.split): returned the wrong answer if depth was 0 or depth
was the length of the name. split() now does bounds checking
on depth, and raises ValueError if depth < 0 or depth > the length
of the name.
2003-07-10 Bob Halley <halley@dnspython.org>
* dns/ipv6.py (inet_ntoa): The routine now minimizes its output
strings. E.g. the IPv6 address
"0000:0000:0000:0000:0000:0000:0000:0001" is minimized to "::1".
We do not, however, make any effort to display embedded IPv4
addresses in the dot-quad notation.
2003-07-09 Bob Halley <halley@dnspython.org>
* dns/inet.py: We now supply our own AF_INET and AF_INET6
constants since AF_INET6 may not always be available. If the
socket module has AF_INET6, we will use it. If not, we will
use our own value for the constant.
* dns/query.py: the functions now take an optional af argument
specifying the address family to use when creating the socket.
* dns/rdatatype.py (is_metatype): a typo caused the function
return true only for type OPT.
* dns/message.py: message section list elements are now RRsets
instead of Nodes. This API change makes processing messages
easier for many applications.
2003-07-07 Bob Halley <halley@dnspython.org>
* dns/rrset.py: added. An RRset is a named rdataset.
* dns/rdataset.py (Rdataset.__eq__): rdatasets may now be compared
for equality and inequality with other objects. Rdataset instance
variables are now slotted.
* dns/message.py: The wire format and text format readers are now
classes. Variables related to reader state have been moved out
of the message class.
2003-07-06 Bob Halley <halley@dnspython.org>
* dns/name.py (from_text): '@' was not interpreted as the empty
name.
* dns/zone.py: the master file reader derelativized names in rdata
relative to the zone's origin, not relative to the current origin.
The reader now deals with relativization in two steps. The rdata
is read and derelativized using the current origin. The rdata's
relativity is then chosen using the zone origin and the relativize
boolean. Here's an example.
$ORIGIN foo.example.
$TTL 300
bar MX 0 blaz
If the zone origin is example., and relativization is on, then
This fragment will become:
bar.foo.example. 300 IN MX 0 blaz.foo.example.
after the first step (derelativization to current origin), and
bar.foo 300 IN MX 0 blaz.foo
after the second step (relativization to zone origin).
* dns/namedict.py: added.
* dns/zone.py: The master file reader has been made into its
own class. Reader-related instance variables have been moved
form the zone class into the reader class.
* dns/zone.py: Add node_factory class attribute. An application
can now subclass Zone and Node and have a zone whose nodes are of
the subclassed Node type. The from_text(), from_file(), and
from_xfr() algorithms now take an optional zone_factory argument.
This allows the algorithms to be used to create zones whose class
is a subclass of Zone.
2003-07-04 Bob Halley <halley@dnspython.org>
* dns/renderer.py: added new wire format rendering module and
converted message.py to use it. Applications which want
fine-grained control over the conversion to wire format may call
the renderer directly, instead of having it called on their behalf
by the message code.
2003-07-02 Bob Halley <halley@dnspython.org>
* dns/name.py (_validate_labels): The NameTooLong test was
incorrect.
* dns/message.py (Message.to_wire): dns.exception.TooBig is
now raised if the wire encoding exceeds the specified
maximum size.
2003-07-01 Bob Halley <halley@dnspython.org>
* dns/message.py: EDNS encoding was broken. from_text()
didn't parse rcodes, flags, or eflags correctly. Comparing
messages with other types of objects didn't work.
2003-06-30 Bob Halley <halley@dnspython.org>
* (Version 1.0.0 released)
2003-06-30 Bob Halley <halley@dnspython.org>
* dns/rdata.py: Rdatas now implement rich comparisons instead of
__cmp__.
* dns/name.py: Names now implement rich comparisons instead of
__cmp__.
* dns/inet.py (inet_ntop): Always use our code, since the code
in the socket module doesn't support AF_INET6 conversions if
IPv6 sockets are not available on the system.
* dns/resolver.py (Answer.__init__): A dangling CNAME chain was
not raising NoAnswer.
* Added a simple resolver Cache class.
* Added an expiration attribute to answer instances.
2003-06-24 Bob Halley <halley@dnspython.org>
* (Version 1.0.0b3 released)
2003-06-24 Bob Halley <halley@dnspython.org>
* Renamed module "DNS" to "dns" to avoid conflicting with
PyDNS.
2003-06-23 Bob Halley <halley@dnspython.org>
* The from_text() relativization controls now work the same way as
the to_text() controls.
* DNS/rdata.py: The parsing of generic rdata was broken.
2003-06-21 Bob Halley <halley@dnspython.org>
* (Version 1.0.0b2 released)
2003-06-21 Bob Halley <halley@dnspython.org>
* The Python 2.2 socket.inet_aton() doesn't seem to like
'255.255.255.255'. We work around this.
* Fixed bugs in rdata to_wire() and from_wire() routines of a few
types. These bugs were discovered by running the tests/zone.py
Torture1 test.
* Added implementation of type APL.
2003-06-20 Bob Halley <halley@dnspython.org>
* DNS/rdtypes/IN/AAAA.py: Use our own versions of inet_ntop and
inet_pton if the socket module doesn't provide them for us.
* The resolver now does a better job handling exceptions. In
particular, it no longer eats all exceptions; rather it handles
those exceptions it understands, and leaves the rest uncaught.
* Exceptions have been pulled into their own module. Almost all
exceptions raised by the code are now subclasses of
DNS.exception.DNSException. All form errors are subclasses of
DNS.exception.FormError (which is itself a subclass of
DNS.exception.DNSException).
2003-06-19 Bob Halley <halley@dnspython.org>
* Added implementations of types DS, NXT, SIG, and WKS.
* __cmp__ for type A and AAAA could produce incorrect results.
2003-06-18 Bob Halley <halley@dnspython.org>
* Started test suites for zone.py and tokenizer.py.
* Added implementation of type KEY.
* DNS/rdata.py(_base64ify): \n could be emitted erroneously.
* DNS/rdtypes/ANY/SOA.py (SOA.from_text): The SOA RNAME field could
be set to the value of MNAME in common cases.
* DNS/rdtypes/ANY/X25.py: __init__ was broken.
* DNS/zone.py (from_text): $TTL handling erroneously caused the
next line to be eaten.
* DNS/tokenizer.py (Tokenizer.get): parsing was broken for empty
quoted strings. Quoted strings didn't handle \ddd escapes. Such
escapes are appear not to comply with RFC 1035, but BIND allows
them and they seem useful, so we allow them too.
* DNS/rdtypes/ANY/ISDN.py (ISDN.from_text): parsing was
broken for ISDN RRs without subaddresses.
* DNS/zone.py (from_file): from_file() didn't work because
some required parameters were not passed to from_text().
2003-06-17 Bob Halley <halley@dnspython.org>
* (Version 1.0.0b1 released)
2003-06-17 Bob Halley <halley@dnspython.org>
* Added implementation of type PX.
2003-06-16 Bob Halley <halley@dnspython.org>
* Added implementation of types CERT, GPOS, LOC, NSAP, NSAP-PTR.
* DNS/rdatatype.py (_by_value): A cut-and-paste error had broken
NSAP and NSAP-PTR.
2003-06-12 Bob Halley <halley@dnspython.org>
* Created a tests directory and started adding tests.
* Added "and its documentation" to the permission grant in the
license.
2003-06-12 Bob Halley <halley@dnspython.org>
* DNS/name.py (Name.is_wild): is_wild() erroneously raised IndexError
if the name was empty.
2003-06-10 Bob Halley <halley@dnspython.org>
* Added implementations of types AFSDB, X25, and ISDN.
* The documentation associated with the various rdata types has been
improved. In particular, instance variables are now described.
2003-06-09 Bob Halley <halley@dnspython.org>
* Added implementations of types HINFO, RP, and RT.
* DNS/message.py (make_query): Document that make_query() sets
flags to DNS.flags.RD, and chooses a random query id.
2003-06-05 Bob Halley <halley@dnspython.org>
* (Version 1.0.0a2 released)
2003-06-05 Bob Halley <halley@dnspython.org>
* DNS/node.py: removed __getitem__ and __setitem__, since
they are not used by the codebase and were not useful in
general either.
* DNS/message.py (from_file): from_file() now allows a
filename to be specified instead of a file object.
* DNS/rdataset.py: The is_compatible() method of the
DNS.rdataset.Rdataset class was deleted.
2003-06-04 Bob Halley <halley@dnspython.org>
* DNS/name.py (class Name): Names are now immutable.
* DNS/name.py: the is_comparable() method has been removed, since
names are always comparable.
* DNS/resolver.py (Resolver.query): A query could run for up
to the lifetime + the timeout. This has been corrected and the
query will now only run up to the lifetime.
2003-06-03 Bob Halley <halley@dnspython.org>
* DNS/resolver.py: removed the 'new' function since it is not the
style of the library to have such a function. Call
DNS.resolver.Resolver() to make a new resolver.
2003-06-03 Bob Halley <halley@dnspython.org>
* DNS/resolver.py (Resolver._config_win32_fromkey): The DhcpServer
list is space separated, not comma separated.
2003-06-03 Bob Halley <halley@dnspython.org>
* DNS/update.py: Added an update module to make generating updates
easier.
2003-06-03 Bob Halley <halley@dnspython.org>
* Commas were missing in some of the __all__ entries in various
__init__.py files.
2003-05-30 Bob Halley <halley@dnspython.org>
* (Version 1.0.0a1 released) | https://sources.debian.org/src/dnspython/1.15.0-1+deb9u1/ChangeLog/ | CC-MAIN-2022-05 | refinedweb | 7,855 | 62.64 |
18 December 2008 23:40 [Source: ICIS news]
HOUSTON (ICIS news)--Most US ethylene contracts for December were settled at a 9.5-cent/lb ($209/tonne, €146/tonne) reduction from November, market sources said on Thursday.
The drop puts ethylene in December at 28.50 cents/lb, according to global chemical market intelligence service ICIS pricing.
Market participants said a full-market settlement at that figure could be reached by the end of the week.
The decrease is the fifth consecutive monthly drop for ethylene since prices peaked at 74.50 cents/lb in July.
The contract fell by a combined 36.50 cents/lb in August-November due to weak demand and lower feedstocks costs.
?xml:namespace>
Chevron Phillips, Equistar, ExxonMobil, INEOS and Shell Chemicals are among the major producers of ethylene in the
Dow Chemical,
( | http://www.icis.com/Articles/2008/12/18/9180587/most-us-dec-ethylene-contracts-settle-down-9.5-centslb.html | CC-MAIN-2015-06 | refinedweb | 138 | 59.09 |
Back to index
This interface is a unicode encoder for use by scripts. More...
import "nsIScriptableUConv.idl";
This interface is a unicode encoder for use by scripts.
8/Jun/2000
Definition at line 55 of file nsIScriptableUConv.idl.
Converts an array of bytes to a unicode string.
Converts the data from Unicode to one Charset.
Returns the converted string. After converting, Finish should be called and its return value appended to this return value.
Convert a unicode string to an array of bytes.
Finish does not need to be called.
Converts a unicode string to an input stream.
The bytes in the stream are encoded according to the charset attribute. The returned stream will be nonblocking.
Converts the data from one Charset to Unicode.
Returns the terminator string.
Should be called after ConvertFromUnicode() and appended to that function's return value.
Current character set.
Definition at line 103 of file nsIScriptableUConv.idl. | https://sourcecodebrowser.com/lightning-sunbird/0.9plus-pnobinonly/interfacens_i_scriptable_unicode_converter.html | CC-MAIN-2017-51 | refinedweb | 152 | 62.75 |
Sub-project management via sbt
As your project grows, the need of splitting it into multiple ones might rise. There are couple good examples around how to do multi-project management by sbt, like this one.
The use case this post cover is little different.
We would like to extrac some logic from our project into a child one (a jar), later, other projects could import it.
So you see, this is more like a parent-child relationship:
- A parent project.
- Extrac some logic from parent project into a child project.
- Publish a jar to artifactory from child project.
- Create jar of parent project, with everything child project has. Later we still want to deploy our service with this jar.
Github example of what this post cover.
In the github example above, we simply the usecase into following:
- Parent project with main entry
Hello, and one test class
HelloSpec.
- Child project with one base trait
HelloFromChild, and one base test trait
HelloFromChildTest.
- Class in parent project depends on trait inside child project.
- Test class in parent depends on test trait inside child project.
Create Child Project
First order of business, create child project in build.sbt. Given the parent project has setting like following:
lazy val root = (project in file("."))
Let’s create child:
lazy val child = Project("child", file("child"))
file(“child”) ask sbt to project with code under /child.
With IDE (IntelliJ), let’s create a folder tree next to /src : /child → /src →/main and /test. Under /main and /test, create /scala , then we can create package under it.
Classpath
Create a trait inside childExample package:
//HelloFromChild.scala in child
package childExample
trait HelloFromChild {
lazy val greeting: String = "HelloFromChild"
}
To use child’s trait inside parent, we need to tell sbt that parent depends on child:
//build.sbt
lazy val root = (project in file("."))
.settings(
...
)
.dependsOn(child)
Then use child trait inside parent is a straight forward import:
package example
import childExample.HelloFromChild
object Hello extends HelloFromChild with App {
println(greeting)
}
How about test ? Let’s say we have a base test trait in child like this:
package childExample
import org.scalatest._
trait HelloFromChildTest extends FlatSpec with Matchers {
}
Let’s use it in parent’s test ?
package example
import childExample.HelloFromChildTest
class HelloSpec extends HelloFromChildTest {
"The Hello object" should "say hello" in {
Hello.greeting shouldEqual "HelloFromChild"
}
}
However above won’t just work.
sbt test complains about
HelloFromChildTest:
Why ?
So by default, sbt won’t include child’s test class into parent’s classpath, we need to specify it by
test->test:
//build.sbt
lazy val root = (project in file("."))
.settings(
...
)
.dependsOn(child % "compile->compile;test->test")
Now
sbt test works:
Create & deploy a single jar from parent
We still would like to create a jar with everything we have in this project (parent + child). With assembly plugin, specify settings in parent project:
//build.sbt
lazy val assemblySettings = Seq(
assemblyJarName in assembly := name.value + "-assembly.jar",
)
lazy val root = (project in file("."))
.settings(
...
assemblySettings,
name := "Project-with-example",
...
)
.aggregate(child)
.dependsOn(child % "compile->compile;test->test")
Aggregate
You might notice
sbt assembly create a parent jar, and a child jar.
Why ?
notice the
.aggregate(child) in build.sbt above ? It’s telling sbt that every task we run on parent, we also want it on child. But do we need it ?
It depends.
In our use case, say during
sbt test, if we would like to run all tests in parent and in child, aggregate make sense.
If you are not sure what a task under a project would do, ask sbt:
Clean task of root(parent) will trigger clean task of child.
What if I don’t want certain task (say reStart) to be aggregated ? Just tell sbt in the setting:
#build.sbt
lazy val root = project.in(file("."))
.settings(
...
aggregate in reStart := false,
Publish to artifactory from child
If we would like to publish only from child project to some artifactory, specify the setting only under child would do:
//build.sbt
lazy val child = Project("child", file("child"))
.settings(
...
publishTo := Some("Artifactory Realm" at ";build.timestamp=" + new java.util.Date().getTime),
credentials += Credentials("Artifactory Realm", "where-your-artifactory-is", ${USER}, ${PWD}),
)
Then run publish task for child:
sbt child/publish
I find it a good pratice to narrow down settings only to the project need.
Like the
publishTo and
credentials above, they only tie to child, even if you do
sbt publish, it won’t publish parent — parent has no idea where to publish at all 😂 | https://medium.com/@linda0511ny/sub-project-management-via-sbt-26e9f7bccbad | CC-MAIN-2019-13 | refinedweb | 750 | 66.54 |
Want to hear the Mix 09 presentations and see the new Star Trek film at one event? Know a student who wants to apply to Google' Summer of Code? Get details about these stories, as well as information about Java, PHP, Silverlight 3, and more.
Apparently, one of the standard ASP.NET WebControls (asp:menu) wasn't standards compliant and would either render wrong in IE 8, or force developers to put the page in compatibility mode. Microsoft has just released a patch that fixes this issue. I was surprised that only one control had this problem. Read more details on the ASP.NET team's blog.
Google's Summer of Code is accepting applications
Funny enough, I knew Google's Summer of Code was coming up, but I still let it slip under my radar. Fortunately, ZDNet blogger Christopher Dawson's post about it reminded me.
The Summer of Code is an event where students can be selected to receive a modest stipend of $4,500 to write open source code. Google is now accepting applications for the event. It's been happening since 2005, and a lot of good things have come out of it. If you are a student with an itch to write open source code instead of flipping burgers this summer, put in your application!
PHP 5.3 RC1 is out
The first release candidate of PHP 5.3 is now available. The PHP development team knows it has some bugs, but it is feature complete. As a result, you can start checking your applications in it and targeting new work to it and know that it won't change from underneath your feet. The big items in the feature list are lambda functions/closures, namespaces, and late static bindings. You should be aware that there are also a few breaks with backwards compatibility.
Is Java becoming a legacy language?
Multiple sources pointed me to Bruce Eckel's interesting post regarding C++ and Java. What I find most interesting is his assertion that the Java language is on its way to legacy status, while other languages on the JVM will thrive. I'm not a Java guy and I don't pretend to be, but languages like Groovy and Scala have been generating enough heat for this non-Java guy to hear about it. From my brief flirtation with Java ages ago, I would not be surprised if in the next few years, the folks who are wedded to the Java platform but dislike the Java language could very well out-.NET the .NET world. Remember, .NET's big promise was "the right language for the job," but all we've really gotten so far is VB.NET and C#, which are two sides to the same coin.
The Economist predicts the demise of revenue-poor Web 2.0
File this one under "duh." The Economist warns that Web 2.0 companies such as Twitter and Facebook are in grave danger because it's unproven that these companies can make money and that ad revenues rarely carry a service. I'll take it one step further: Not only do most of the Web 2.0 companies have no good way to monetize their users, but their users will never pay for their services. The worst part is that one of the cornerstones of Web 2.0 (in my mind) are APIs, which make it even more difficult to make money, since APIs let other people wrap your code and content in their moneymaking skin.
Missed Mix 09? Then go to Stir Trek
Microsoft is sending a lot of its folks who were at Mix 09 to Columbus, OH on May 8th to give presentations on the same topics. As a bonus, at 3:00 PM, they will show the new Star Trek film! Jeff Blankenburg has more details, or you can go to the Stir Trek site (get it?).
(Check out these Mix 09 presentation images, in which Microsoft designer Stephan Hoefnagels traces the evolution of the Windows 7 OS.)
IronPython 2.6 Alpha 1 is available
For those who like to live on the wild side (that is neither a Mötley Crüe nor a Lou Reed reference), IronPython 2.6 Alpha 1 has been released. It aims to bring IronPython inline with Python 2.6 in terms of feature set and functionality. There is a very good chance that I will be learning to work with IronPython once I've gotten through the Ruby book I've been meandering through, so I would appreciate any feedback you may have about it.
Microsoft expands DreamSpark to high school students
For some time now, Microsoft has used the DreamSpark program as a chance to get college kids using Microsoft products to learn about computing, particularly programming. Microsoft is expanding the program to include high school students.
Public high schools aren't known for having big budgets for things like teaching programming. Of course, there are plenty of open source alternatives out there, and Microsoft has offered various Express versions of Visual Studio for some time now. All the same, it is great to see Microsoft making this generous offer, even if it is shrewdly designed to wed future developers to the Microsoft platform early.
Silverlight 3 works better with SEO
Historically, one of the problems with RIAs is that they are not search engine friendly. Lately, search engines have been working to index text within RIAs. All the same, it would be nice to see the RIAs helping that out. The folks over at Microsoft are working hard to make Silverlight 3 SEO friendly. This could be another killer feature in Silverlight's assault on Flash's castle walls.
MacHeist provides an insanely great deal
MacHeist (caution: does not seem to work in Internet Explorer) is a really neat idea: They offer a bundle of Macintosh applications for a very low price ($39) and then donate much of the money to charity. The deal is only offered for two weeks, and as more people purchase the deal, more items get added to the bundle. This year's MacHeist 3 Bundle includes a few photo editors and an HTML editor, which may be useful to Web developers. Also included is World of Goo, which is a great game. The offer is only available for about another week, so check it out! | https://www.techrepublic.com/blog/software-engineer/programming-news-stir-trek-aspmenu-patch-php-53-rc1-ironpython-26-alpha-1/ | CC-MAIN-2019-43 | refinedweb | 1,064 | 71.34 |
From: Matias Capeletto (matias.capeletto_at_[hidden])
Date: 2007-05-23 04:20:21
Hello boost!
I have upload a new version of the Boost.Bimap library to the vault:
Documentation can be viewed online here:
Or can be downloaded from:
I have take into consideration most of the advices from the formal review.
In the next following days I will be posting the more important
changes so we can discuss them.
For now, the most important changes are:
* Change replace and modify functions
* Change operator[] (add standard at(key) function )
* Change the way we use tags, from free functions to member functions
* Functions now takes templated CompatibleKey
* Added projection of iterators
* namespace boost::bimap -> namespace boost::bimaps
* Improved performance
* Lots of small fixes
* Improved docs, new diagrams, more examples, new sections
I will need CVS write access to start testing the lib. What are the
steps to follow?
Thanks to all
Best regards
Matias
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2007/05/122126.php | CC-MAIN-2020-10 | refinedweb | 176 | 65.22 |
Hi,
I have one Data Contract class like this.
[DataContract()]
public class Contract
{
.......
.........
Some Properties with DataMembers Attribute.
............
..............
[DataMember(Name = "Messages")]
public string Messages
{
get;
set;
}
}
In the above class, i need change name of the Property Messages based on the requirements. Means some times i have to return the same property with name of "ValidMessages" and in some times i have to return the same property with name of "InValidMessages".
Like wise i have to change my property name based on our business requirements. I don't want to create new properties for ValidMessages and InValidMessages.
With out creating new properties in the class how to solve this type problems?
Any help greatly appreciated.?
How
I would like to require a change to one field if a secondary dropdown field is changed at all. At Minimum I would like to change the formatting or color of the field in question.
Hi, all.
I created some users with wrong e-mail addresses, and some without any e-mail. When I need to give them the correct email address, I run into a problem: I can't do it.
I tried changing the email in the SQL databases aspnetdb and wss_content_webapp, then restarting SQL, restarting IIS, restarting the server - nothing helped.
I tried going to user - my settings - change, and didn't see an e-mail field to change.
I don't need a MySite collection.
The question is: how can I change all of a user's information when the user already exists? When I create a user with the same information written in, all the information is picked up.
Hi, I have a table with lots of heartbeats. I need to know when a heartbeat started and when it stopped, not all the heartbeats in between. There can be multiple heartbeat stops. Thanks
set nocount on
create table #temp (id int, alive bit, checktime datetime)
go
insert #temp values (1,1,dateadd(minute,-7,GETDATE()))
insert #temp values (1,1,dateadd(minute,-6,GETDATE()))
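This "first and last heartbeat of each run" problem is the classic gaps-and-islands pattern; in T-SQL it is usually solved with ROW_NUMBER() differences or LAG(). The underlying logic can be sketched outside SQL too - here is a minimal Python illustration with made-up sample rows mirroring the #temp table above:

```python
from datetime import datetime, timedelta

# (id, alive, checktime) rows, mirroring the #temp table above (sample data)
now = datetime(2020, 1, 1, 12, 0)
rows = [(1, 1, now + timedelta(minutes=m)) for m in range(5)]        # alive run
rows += [(1, 0, now + timedelta(minutes=5 + m)) for m in range(3)]   # stopped run
rows += [(1, 1, now + timedelta(minutes=8 + m)) for m in range(2)]   # alive again

def islands(rows):
    """Collapse consecutive rows with the same `alive` flag into
    (alive, start_time, end_time) islands."""
    result = []
    for rid, alive, t in rows:
        if result and result[-1][0] == alive:
            result[-1] = (alive, result[-1][1], t)   # extend the current island
        else:
            result.append((alive, t, t))             # start a new island
    return result

for alive, start, end in islands(rows):
    print(("alive" if alive else "stopped"), start, "->", end)
```

Each island's start and end rows are exactly the "started"/"stopped" boundaries the question asks for.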
Option handling lies at the heart of Trade Federation's modular approach. In particular, options
are the mechanism by which the Developer, Integrator, and Test Runner can work together without
having to duplicate each other's work. Put simply, our implementation of option handling allows the
Developer to mark a Java class member as being configurable, at which point the value of that member
may be augmented or overridden by the Integrator, and may be subsequently augmented or overridden by
the Test Runner. This mechanism works for all Java intrinsic types, as well as for any
Maps or
Collections of intrinsic types.
Note: the option-handling mechanism only works for classes implementing one of the interfaces included in the Test Lifecycle, and only when that class is instantiated by the lifecycle machinery.
Developer
To start off, the developer marks a member with the
@Option annotation.
They specify (at a minimum) the
name and
description values, which specify the argument name associated with that Option,
and the description that will be displayed on the TF console when the command is run with
--help or
--help-all.
As an example, let's say we want to build a functional phone test which will dial a variety of phone numbers, and will expect to receive a sequence of DTMF tones from each number after it connects.
public class PhoneCallFuncTest implements IRemoteTest {
    @Option(name = "timeout", description = "How long to wait for connection, in millis")
    private long mWaitTime = 30 * 1000;  // 30 seconds

    @Option(name = "call", description = "Key: Phone number to attempt. " +
            "Value: DTMF to expect. May be repeated.")
    private Map<String, String> mCalls = new HashMap<String, String>();

    public PhoneCallFuncTest() {
        mCalls.put("123-456-7890", "01134");  // default
    }
That's all that's required for the Developer to set up two points of configuration for that
test. They could then go off and use
mWaitTime and
mCalls as normal,
without paying much attention to the fact that they're configurable. Because the
@Option fields are set after the class is instantiated, but before the
run method is called, that provides an easy way for implementors to set up defaults for
or perform some kind of filtering on
Map and
Collection fields, which are
otherwise append-only.
Integrator
The Integrator works in the world of Configurations, which are written in XML. The config format
allows the Integrator to set (or append) a value for any
@Option field. For instance,
suppose the Integrator wanted to define a lower-latency test that calls the default number, as well
as a long-running test that calls a variety of numbers. They could create a pair of configurations
that might look like the following:
<?xml version="1.0" encoding="utf-8"?>
<configuration description="low-latency default test; low-latency.xml">
    <test class="com.example.PhoneCallFuncTest">
        <option name="timeout" value="5000" />
    </test>
</configuration>
<?xml version="1.0" encoding="utf-8"?>
<configuration description="call a bunch of numbers; many-numbers.xml">
    <test class="com.example.PhoneCallFuncTest">
        <option name="call" key="111-111-1111" value="#*#*TEST1*#*#" />
        <option name="call" key="222-222-2222" value="#*#*TEST2*#*#" />
        <!-- ... -->
    </test>
</configuration>
Test Runner
The Test Runner also has access to these configuration points via the Trade Federation console.
First and foremost, they will run a Command (that is, a config and all of its arguments) with the
run command <name> instruction (or
run <name> for short).
Beyond that, they can specify any list of arguments as part of the command, which may replace or
append to fields specified by Lifecycle Objects within each config.
To run the low-latency test with the
many-numbers phone numbers, the Test Runner could execute:
tf >run low-latency.xml --call 111-111-1111 #*#*TEST1*#*# --call 222-222-2222 #*#*TEST2*#*#
Or, to get a similar effect from the opposite direction, the Test Runner could reduce the wait time
for the
many-numbers test:
tf >run many-numbers.xml --timeout 5000 | https://source.android.com/devices/tech/test_infra/tradefed/fundamentals/options.html | CC-MAIN-2014-15 | refinedweb | 647 | 52.39 |
16 April 2012 04:21 [Source: ICIS news]
SINGAPORE (ICIS)--
The company’s 27,000 tonne/year low density polyethylene (LDPE) plant, its 29,000 tonne/year linear low density polyethylene (LLDPE) unit, its 43,000 tonne/year LDPE/EVA swing plant and its 120,000 tonne/year high density polyethylene (HDPE) unit at the site were taken off line on 12 March, the source said.
“We will restart the EVA/LDPE swing plant on 20 April to produce LDPE, and thereafter, we will produce EVA in early May,” the source said.
Tosoh was producing EVA prior to the March maintenance shutdown, he and DuPont-Mitsui Polychemicals; China’s BASF-YPC, Beijing Organic, DuPont Packaging & Industrial Polymers and The Polyolefin Co Singapore | http://www.icis.com/Articles/2012/04/16/9550487/japans-tosoh-to-restart-yokkaichi-pe-eva-units-on-20-april.html | CC-MAIN-2014-41 | refinedweb | 124 | 52.73 |
VSCode-ObjectScript release 0.7.11
Hi all, I released the latest version of the VSCode extension for ObjectScript a month ago, and it's finally time for some info about this new release.
So, what's new in the release:
What's new in this version
- added export setting "objectscript.export.addCategory"; if enabled, it uses the previous behaviour and adds a category folder to the export folder; disabled by default
- added a Server actions menu, opened by clicking the server info in the status bar: open the Management Portal, open the Class Reference, and toggle the connection
- Class Suggestion in ##class, Extends, As, CompileAfter, DependsOn, PropertyClass
- $SYSTEM suggestion by Classes from %SYSTEM
- Import and compile folder or file by context menu in File Explorer
- Server Explorer, now possible to open any other namespace
- Macros suggestion
For details on how it works now, read on.
Intellisense
- Class name suggestion after Extends keyword
- Class name suggestion after keyword As. Should work in many cases such as with
- Property
- Parameter
- Method definition
- and so on.
- Macro usage suggestions after typing a triple dollar sign ($$$)
- Special variable $SYSTEM suggestion.
Other changes
- Import and Compile all files from the File Explorer
- Server Explorer, view another namespace.
- Go to System Management Portal
- And to Class Reference.
As always, feel free to file any issues you face when using this extension, or request any new features you would like to see.
If you like my work, you can donate to me in any preferable way.
Hi,

I would like to write a function untilM, which would be to until as mapM is
to map. An example use would be if I had a function

    dice :: State Int Int

which returns random dice throws within a state monad. Then I'd like to be
able to do something like

    untilM (\(s,p) -> s >= 100) f (0,1)
      where f (s,p) = do d <- dice
                         return (s+d, p*d)

This should throw dice until the sum exceeds 100, keeping track of the
product as we go along. The problem is that I get stuck trying to manage the
interaction of the conditional and the recursion in untilM. Let's start with
until:

    until p f x = if p x then x else until p f (f x)

So I figure untilM should look something like:

    untilM :: Monad m => (a -> Bool) -> (a -> m a) -> a -> m a
    untilM p f x = return (if p x then x else untilM p f (f x))

The problem is that the two branches of the conditional have different
types. If I try to remedy that by changing "then x" to "then return x", it
still isn't happy. (My real question is, how do I do conditional tail
recursion in a monad?) Thanks in advance.
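[Editor's note - not part of the original message, just a sketch of the standard answer: wrap only the terminating value in return, and sequence the monadic step into the recursive call with >>=:]

    untilM :: Monad m => (a -> Bool) -> (a -> m a) -> a -> m a
    untilM p f x = if p x then return x else f x >>= untilM p f

Here both branches have type m a, and the recursion is driven through the bind rather than inside return.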
Hello! I am making a game for the A Game by it's Cover contest, which requires you to make a game based off of a fake cover. I chose this one. I need to reproduce the old school wireframe arcade vector look on the box, but I'm not sure how to achieve this. I tried applying the Barycentric wireframe shader, but I wasn't sure what it meant by mesh.uv1, and I'm not sure if I have to use one of the scripts or both. I've also seen this question on here, which asks a similar question, but there wasn't a clear solution other than the aforementioned shader. So, do any of you know how to achieve this look in Unity? Here's the image of the look I have to recreate, if you didn't feel like clicking the link:
Answer by DaveA · Jun 27, 2012 at 10:35 PM
mesh.uv1 looks to me like a typo. I would think he means mesh.uv The C# script has a routine to make a simple set of uv's, looks like. You'd probably want to copy/paste that function into your script (convert to javascript if needed), then call like
myMesh.uv = GetBarycentricFixed();
The other script is a shader, so create new shader (like you would a script) and paste that in. And thanks Unity for changing the shader language around. So you probably need to fix the syntax on that shader. Then use that shader on your material for whatever objects you want rendered with it.
Better examples and a screenshot would have been nice.
Thanks! I did this, and it worked, but the edges were tris, not quads (image here:). Is this because of the way Unity creates meshes? I know in Blender it creates them with quad faces, which is what I'm looking for. I'm about to create a quick sphere in Blender and import it and see what happens, thank you!
I don't know enough about shaders to comment on that. But warning: this will probably just draw triangles, not quads or polys, which you may want for the look of what you're doing. Vectrosity draws quads.
It's not possible for meshes to use anything other than triangles, that's how the hardware works. The quads in Blender are a convenience made with software, but are still actually internally converted to triangles for display (even in Blender).
(However, you can use the LineMaker utility in Vectrosity to design vector shapes with quads, or in fact any arbitrary number of sides, *cough*. ;) )
this baricentric shader is actually a bit strange, it can only draw lines between
[0,0] and [0,1],
[0,0] and [1,0],
[0,0] and [1,1]
The first "if" in the fragment shader will handle the first two cases. The second "if" will draw the third case. However with those coordinates you can't form a triangle or any other "shape". Every line starts (or ends) at 0,0. I would say the shader is wrong ;) The actual condition should be that at least one of the two components (x or y) need to be either in range [0.0 to linewidth] or [1.0-linewidth to 1.0].
So the fail condition would be
if ((x > lineWidth && x < 1.0-lineWidth) && (y > lineWidth && y < 1.0-lineWidth))
return _GridColor;
else
return _LineColor;
if i'm not mistaken ;)
Answer by Mortoc · Jun 27, 2012 at 06:59 PM
I've been using Vectrosity from the asset store, it's really well done. If you're willing to spend $30, it can save you a lot of time.
$25 on my site. ;)
thanks, but I don't really have any money to spend on this.
First and best $25 I spent on a Unity add-on. Well worth it. Comes with vector BattleZone tank game.
Answer by Bunny83 · Jun 28, 2012 at 01:47 AM
Ok, I've taken a look at the shader and changed it so it actually works ;) The original shader used the second texture channel (TEXCOORD1); I changed it to the first/main texture channel, TEXCOORD0.
I've created a package with some shader variations (transparent, "quad-mode", transparent with backface culling). Here's a test webplayer.
This is the transparent shader without backface culling. Keep in mind that without zwrite and backfaces, it only works as some kind of cutout shader and only with solid colors, otherwise you get weird results since the z-order is random.
As you can see the cylinder looks strange, that's because it is unwrapped to one texture. this shader needs every triangle to be mapped to the "whole texture". So only coordinates of 0 or 1. The big problem are shared vertices. Depending on the topology the coordinates can't be shared in all cases. If you have another premade model, the easiest way to use this shader is to remove all shared vertices and create single triangles. If it's a quite big mesh this can of course exceed the vertex limit, so keep that in mind.
Shader "WireFrameTransparent"
{
Properties
{
_LineColor ("Line Color", Color) = (1,1,1,1)
_GridColor ("Grid Color", Color) = (0,0,0,0)
_LineWidth ("Line Width", float) = 0.05
}
SubShader
{
Pass
{
Blend SrcAlpha OneMinusSrcAlpha
Cull Off
ZWrite Off
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
uniform float4 _LineColor;
uniform float4 _GridColor;
uniform float _LineWidth;
// vertex input: position, uv1, uv2
struct appdata {
float4 vertex : POSITION;
float4 texcoord1 : TEXCOORD0;
float4 color : COLOR;
};
struct v2f {
float4 pos : POSITION;
float4 texcoord1 : TEXCOORD0;
float4 color : COLOR;
};
v2f vert (appdata v) {
v2f o;
o.pos = mul( UNITY_MATRIX_MVP, v.vertex);
o.texcoord1 = v.texcoord1;
o.color = v.color;
return o;
}
float4 frag(v2f i ) : COLOR
{
float2 uv = i.texcoord1;
float d = uv.x - uv.y;
if (uv.x < _LineWidth) // 0,0 to 1,0
return _LineColor;
else if(uv.x > 1 - _LineWidth) // 1,0 to 1,1
return _LineColor;
else if(uv.y < _LineWidth) // 0,0 to 0,1
return _LineColor;
else if(uv.y > 1 - _LineWidth) // 0,1 to 1,1
return _LineColor;
else if(d < _LineWidth && d > -_LineWidth) // 0,0 to 1,1
return _LineColor;
else
return _GridColor;
}
ENDCG
}
}
Fallback "Vertex Colored"
}
Qt has grown into a quite mature framework over the past few years, and I dare say that it can probably keep up with "mighty" frameworks like .NET or Java from a functional point of view. Of course, there is stuff like the Entity Framework for .NET, but with Qt 5.2, a developer gets a solid framework base to build great applications on. However, there is still one thing missing from Qt 5.2: you can have a QMutex, you have semaphores, but you don't have a global named mutex.
Now, why would you need a global Mutex anyways? And why would you want to give it a name?
I came across the situation that I needed a global named Mutex just recently, and because it makes a great example of a Use Case where one needs a global named Mutex, I decided to show it off:
The first approach would've worked fine with the usual QMutex only - A Process with a Reader and a Writer inside (important: Reader and Writer live in different threads), both of them accessing a common data storage (A Directory, to be exact, but it could also be a shared Memory, a single file - You get my point).
But what happens if we extend this scenario? Let's say there is still just one data storage, but 4 threads accessing it from two different processes:
alt="Image 2" data-src="/KB/cross-platform/750545/image2.png" class="lazyload" data-sizes="auto" data->
This leads us to the problem that QMutex is only accessible within a single process and not globally to every process on the computer. That is where the global Mutex is needed: to synchronize access to a common resource over multiple threads and processes.
I did my bit of Googlin' but haven't found a named Mutex class - just plenty of suggestions on what one can do to implement their own global named QMutex. What I found to be the easiest and most durable solution is to encapsulate the QSharedMemory class in a custom wrapper.
QSharedMemory is a class of Qt which allows multiple processes to access the same bit of memory, and it has also its own Mutex to lock access to the shared memory before reading from or writing to it.
The class I came up with is pretty small and simple. All it does is encapsulate an instance of QSharedMemory and only provide access to the Lock() and Unlock() methods:
/// Description
/// =========================================
/// QtGlobalMutex provides the functionality of a global named Mutex
/// which is missing in the current version of Qt (Qt 5.2)
///
/// ========================================
/// Available under the terms of the CodeProject Open License (CPOL)
#ifndef QTGLOBALMUTEX_H
#define QTGLOBALMUTEX_H
#include <QObject>
#include <QString>
#include <QSharedMemory>
class QtGlobalMutex: public QObject{
Q_OBJECT
private:
QSharedMemory* sharedMemory; /*!< Used to provide the locking and unlocking
implementation*/
public:
QtGlobalMutex(QString name);
~QtGlobalMutex();
void Lock();
void Unlock();
};
#endif //QTGLOBALMUTEX_H
The implementation of the code is equally simple, the constructor just initializes a heap pointer to a QSharedMemory object, passing on the Mutex's name as argument to the QSharedMemory:
//+------------------------------------------------------------------------------------
//! Initializes a new instance of a QtGlobalMutex.
//! The name of the Mutex is expected as parameter.
//+------------------------------------------------------------------------------------
//| Arguments:
//! name QString [in] Name of the Mutex, used to identify it globally
//+------------------------------------------------------------------------------------
QtGlobalMutex::QtGlobalMutex(QString name){
sharedMemory = new QSharedMemory(name);
}
Did I just initialize a heap pointer? Yes. What do we do with heap pointers? Right! We make sure that they are deleted correctly:
//+------------------------------------------------------------------------------------
//! Uninitializes an instance of a QtGlobalMutex.
//+------------------------------------------------------------------------------------
//| Arguments:
//! - N/A -
//+------------------------------------------------------------------------------------
QtGlobalMutex::~QtGlobalMutex(){
delete sharedMemory;
}
The Lock() and Unlock() methods aren't really worth talking about since all that they do is call the corresponding method on the QSharedMemory.
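The article skips those bodies, so here is a sketch of what they presumably look like - simply forwarding to QSharedMemory's own lock() and unlock() methods, in the same comment style as the listings above:

//+------------------------------------------------------------------------------------
//! Locks / unlocks the global mutex by delegating to the shared memory segment.
//+------------------------------------------------------------------------------------
void QtGlobalMutex::Lock(){
    sharedMemory->lock();
}

void QtGlobalMutex::Unlock(){
    sharedMemory->unlock();
}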
I'm still kind of puzzled why Qt doesn't offer a global named Mutex, especially since making one on your own is a 15-minute task. I'll post an update if I ever find out; so far I'm doing fine with my homemade one.
SO WHAT IS IT ? 🙄 🙄 🙄
Emotion is a library made for writing CSS styles with JavaScript. It allows you to style apps quickly with string or object styles. The most amazing thing about this library - other than its size and performance - is the flexibility of use. It supports strings, objects and functions and makes all the composable goodness possible.
The way this library intertwines itself with your React components is really amazing and it’s a joy to work with.
Add css prop magic to your styles
Emotion gives us CSS prop support. The
CSS prop is similar to the
style prop available in other libraries but also adds support for nested selectors, media queries, and auto-prefixing.
This provides a concise and flexible API to style my React components. The CSS prop also accepts a function that is called with your theme as an argument allowing developers easy access to common and customizable values.
How do I get started? 🤔 🤔 🤔
First step is to add it via npm or yarn:
yarn add @emotion/core
Now you can just import it like so:
import { css } from '@emotion/core';
or install the
@emotion/styled package and import the styled prop like so:
import styled from '@emotion/styled'
Using it with React components
Any component or element that accepts a
className prop can also use the css prop. The styles supplied to the css prop are evaluated and the computed class name is applied to the prop.
1.Object Styles
The css prop accepts object styles directly and does not require an additional import. Note that object styles use camelCase instead of kebab-case - for example, background-color becomes backgroundColor.
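That kebab-case to camelCase mapping is purely mechanical; a throwaway helper (illustrative only, not part of Emotion's API) shows the rule:

```javascript
// Convert a kebab-case CSS property name to the camelCase form
// that object styles expect (illustrative helper, not Emotion API).
function toCamelCase(prop) {
  return prop.replace(/-([a-z])/g, (_, ch) => ch.toUpperCase());
}

console.log(toCamelCase('background-color')); // backgroundColor
console.log(toCamelCase('font-size'));        // fontSize
```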
With the css prop
import { css } from '@emotion/core';

render(
  <div
    css={{
      color: 'red',
      backgroundColor: 'gray',
    }}
  >
    This is red.
  </div>,
);
With styled
import styled from '@emotion/styled';

const Button = styled.button(
  {
    color: 'red',
  },
  props => ({
    fontSize: props.fontSize,
  }),
);

render(<Button fontSize={16}>This is a red button.</Button>);
Child Selectors
/** @jsx jsx */
import { jsx } from '@emotion/core';

render(
  <div
    css={{
      color: 'blue',
      '& .name': {
        color: 'orange',
      },
    }}
  >
    This is blue.
    <div className="name">This is orange</div>
  </div>,
);
Media Queries

import { css } from '@emotion/core';

render(
  <div
    css={{
      color: 'green',
      '@media(min-width: 420px)': {
        color: 'orange',
      },
    }}
  >
    This is orange on a big screen and green on a small screen.
  </div>,
);
For Numbers, Arrays, Fallbacks, and Composition, refer to the Emotion docs. 😇 😇 😇
- String Styles
To pass string styles, you must use the css function, which is exported by @emotion/core:
/** @jsx jsx */
import { css, jsx } from '@emotion/core';

const color = 'green';

render(
  <div
    css={css`
      background-color: pink;
      &:hover {
        color: ${color};
      }
    `}
  >
    This has a pink background.
  </div>,
);
- Style Precedence
Two things to keep in mind:
- Class names containing emotion styles from the className prop override css prop styles.
- Class names from sources other than emotion are ignored and appended to the computed emotion class name.
The precedence order may seem counter-intuitive, but it allows components with styles defined on the css prop to be customized via the className prop passed from the parent.
The
P component in this example has its default styles overridden in the headerComponent.
/** @jsx jsx */
import { jsx } from '@emotion/core';

const P = props => (
  <p
    css={{
      fontSize: 12,
      fontFamily: 'Sans-Serif',
      color: 'black',
      margin: 0,
    }}
    {...props}
  />
);

const headerComponent = props => (
  <P
    css={{
      fontSize: 14,
      color: 'red',
    }}
    {...props} // <- props contains the `className` prop
  />
);
Here is what happens to the original
P component:
.css-1 {
  font-size: 12px;
  font-family: sans-serif;
  color: black;
  margin: 0;
}
and these are the styles applied when the headerComponent component is rendered:
.css-2 {
  font-size: 14px;
  color: red;
}
Result
.css-result {
- font-size: 12px;
+ font-family: 'sans-serif';
- color: black;
+ font-size: 14px;
+ color: red;
+ margin: 0;
}
But how different is this from styled-components? 😈 😈 😈
Emotion basically has all the features of styled-components.
The difference is bundle size - compare the two on bundlephobia and Emotion comes out much smaller. 😢
Now isn’t that amazing! 😎 | https://tech.shaadi.com/2019/09/02/getting-emotional-with-react/ | CC-MAIN-2022-05 | refinedweb | 638 | 62.27 |
BBC micro:bit
The Speech Module
Introduction
In 1982, a program called SAM (Software Automated Mouth) was released for the Commodore 64 personal computer. This was a pretty early version of a 'text to speech' program, and the quality is not amazing. It is, as the MicroPython team say, 'good enough', and a nice feature to have on a microcontroller. Text-to-speech in a microcontroller project usually means buying a £20 integrated circuit, and the quality is no better and the programming no easier.
In order to use the speech module, you will need to connect a speaker. A piezo buzzer isn't going to work for this. You can use the headphones hack for this though. Connect your speaker to pin 0, the default audio pin.
Programming
There are lots of ways to configure the nature of the speaking voice, all wonderfully documented in the MicroPython Documentation. The easiest way to kick off is with the speech.say() statement.
from microbit import * import speech speech.say("microbit speaking to you")
Notice that you need to import a separate module to use the speech module. You can use punctuation with your text and appropriate pauses will be used in the speech.
Check out the documentation to read about more of the statements in this module. You can control the style of the speaking voice as well as the speed of delivery. You can also get the micro:bit to pronounce 'phonemes' (the units of sound used to express ourselves in our language). This can give a much more accurate representation than the software's best guess. One final possibility is to make the micro:bit voice sing - a matter of time before someone uses this for a geeky Rick Roll.
The Quality
The quality of the speech is not, as previously stated, amazing. It is quite usable though. Speech synthesis that is as rough and ready as this is quite interesting in itself though. Let's say that you try this out with the example program above. You will not have any problem recognising that the phrase you wrote was said out loud by the micro:bit. So, all excited, you write some hilariously funny jokes and play it to somebody else. Surprisingly, they don't seem to be able to work out what is being said.
Your first reaction might be to listen yourself. You'll say it sounds fine, the words are crystal clear. They still say they can't quite capture every word. Maybe you start to think that there is something not quite right about them or their hearing. You will be wrong.
This shows us something about the way that our brains process speech. One of the early attempts at speech synthesis was called Sine Wave Speech. Search the WWW and you can find examples. If you play one of these files to someone saying only 'What Is This?', they will probably tell you it is some whistling. If they are old enough, they will tell you it's a clanger. Tell them it is meant to be speech and ask them to listen again. This time they might make out some words or parts of words. Tell them what is being said and they will hear it clearly. Processing of speech and language is a complex thing. Knowing that we are listening for words seems to change the way we process the sensory input in our brains. Knowing what those words are is an even greater change for us.
The SAM program on the micro:bit is very similar. It's easier to tell that it's speech you are listening to but still tough to make out the words sometimes. If you write the program, you already know what is going to be said and you will always hear it more clearly than someone who doesn't know the words. This is obviously something to bear in mind when developing your projects.
Challenges
- First, get over the fun of making it say rude words, insults and so on. Speech could be used in lots of different ways in your exisiting projects, pretty much any time you want to output information.
- You can write a genuine Magic 8 Ball program which speaks out loud its answer to your questions. The program should wait until the micro:bit is shaken. Then it should output one of the answers that you can find if you look for this information online. The idea is that the user asks a questions and shakes the micro:bit to hear an answer. If the shaking is messing up your circuit, go for a button press instead.
- In the MicroPython Documentation, there is an example of a limerick generator. You could adapt this version for a different set of rhyming words (keep it clean though). You could go a little further and try some different formats. There are several ways to go about planning something like that. My preferred method is to start with an example of a complete verse. This could be something you write or something you use as a model. Then you need to think about the things you want to vary and the things you keep the same. If you are using rhyme (which makes more impressive results), you can use the same sounds in all your variations or write a more complex program that can select words to rhyme with previously chosen ones.
- If the poetry isn't you, think of something else that has a formula to it. Post-match interviews with sportspeople tend to be full of clichés. | http://multiwingspan.co.uk/micro.php?page=speech | CC-MAIN-2017-22 | refinedweb | 935 | 73.68 |
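For the Magic 8 Ball challenge, the answer-picking logic is plain Python and can be tried anywhere; the device-specific parts (shake detection and speech) only run on a micro:bit, so they are shown as comments below. The answer list here is a made-up sample - look up the traditional twenty if you want the real set:

```python
import random

# A few sample answers - the real Magic 8 Ball has twenty.
ANSWERS = [
    "It is certain",
    "Ask again later",
    "Outlook good",
    "My reply is no",
    "Very doubtful",
]

def pick_answer():
    """Return one randomly chosen answer."""
    return random.choice(ANSWERS)

# On the micro:bit you would wrap this in the event loop, e.g.:
#
#   from microbit import *
#   import speech
#
#   while True:
#       if accelerometer.was_gesture("shake"):
#           speech.say(pick_answer())
#       sleep(100)
```

If shaking messes up your circuit, swap the gesture check for `button_a.was_pressed()` as the challenge suggests.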
Seems like a good approach. Could that be made more flexible? Architectures today expose cache miss numbers, which through a simple Markovian chain approximation allow a much better estimation of the cache footprint than inferring it from time. Any chance to incorporate something like this into your cost function flexibly, or is this just too <way out there>?

-- Hubertus

* Davide Libenzi <davidel@xmailserver.org> [20011018 20:56]:
> This patch tries to achieve a better goodness() calculation through the
> introduction of the concept of 'cpu time spent onto a processor', aka
> probable cache footprint of the task.
> Right now if we've a couple of cpu bound tasks with the classical editor
> keyboard ticking, we can see the two cpu bound tasks swapped when the
> editor task kicks in.
> This is wrong coz the cpu bound task has probably run for a bunch of ms
> and hence has a quite big memory footprint on the cpu.
> While the editor task very likely run for a very short time by keeping the
> footprint of the cpu bound tasks almost intact.
> This patch addresses in a better way even the proc change penalty problem
> that, right now, is constant.
> It's obvious that having to choose between the cpu bound task and the
> editor task, the better candidate should be the one that has less memory
> footprint aka the editor task.
> So we can change the constant penalty into a constant plus an estimate of
> the memory footprint.
> The patch tracks the number of jiffies that a task ran as:
>
>   JR = Jin - Jout
>
> where Jin is the jiffies value when the task is kicked in and Jout is the
> time when it's kicked out.
> At a given time the value of the footprint is:
>
>   W = JR > (J - Jout) ? JR - (J - Jout) : 0
>
> where J is the current time in jiffies.
> This means that if the task ran for 10 ms, 10 ms ago, its current
> weight will be zero.
> This is quite clear coz the more a process does not run the more footprint
> it will lose.
> I'm currently testing the patch by analyzing process migration and
> latencies with the latsched patch:
>
> Comments?
>
> - Davide
>
> diff -Nru linux-2.4.12.vanilla/include/linux/sched.h linux-2.4.12.scx/include/linux/sched.h
> --- linux-2.4.12.vanilla/include/linux/sched.h  Thu Oct 11 10:40:49 2001
> +++ linux-2.4.12.scx/include/linux/sched.h  Thu Oct 18 17:11:35 2001
> @@ -393,12 +393,14 @@
>  	int (*notifier)(void *priv);
>  	void *notifier_data;
>  	sigset_t *notifier_mask;
> -
> +
>  /* Thread group tracking */
>  	u32 parent_exec_id;
>  	u32 self_exec_id;
>  /* Protection of (de-)allocation: mm, files, fs, tty */
>  	spinlock_t alloc_lock;
> +/* a better place for these brothers must be found */
> +	unsigned long cpu_jtime, sched_jtime;
>  };
>
>  /*
> diff -Nru linux-2.4.12.vanilla/kernel/fork.c linux-2.4.12.scx/kernel/fork.c
> --- linux-2.4.12.vanilla/kernel/fork.c  Mon Oct  1 12:56:42 2001
> +++ linux-2.4.12.scx/kernel/fork.c  Thu Oct 18 17:12:49 2001
> @@ -687,6 +687,9 @@
>  	if (!current->counter)
>  		current->need_resched = 1;
>
> +	p->cpu_jtime = 0;
> +	p->sched_jtime = jiffies;
> +
>  	/*
>  	 * Ok, add it to the run-queues and make it
>  	 * visible to the rest of the system.
> diff -Nru linux-2.4.12.vanilla/kernel/sched.c linux-2.4.12.scx/kernel/sched.c
> --- linux-2.4.12.vanilla/kernel/sched.c  Tue Oct  9 19:00:29 2001
> +++ linux-2.4.12.scx/kernel/sched.c  Thu Oct 18 17:15:48 2001
> @@ -171,9 +171,15 @@
>  #ifdef CONFIG_SMP
>  	/* Give a largish advantage to the same processor... */
>  	/* (this is equivalent to penalizing other processors) */
> -	if (p->processor == this_cpu)
> +	if (p->processor == this_cpu) {
>  		weight += PROC_CHANGE_PENALTY;
> -#endif
> +		if (p->cpu_jtime > jiffies)
> +			weight += p->cpu_jtime - jiffies;
> +	}
> +#else /* #ifdef CONFIG_SMP */
> +	if (p->cpu_jtime > jiffies)
> +		weight += p->cpu_jtime - jiffies;
> +#endif /* #ifdef CONFIG_SMP */
>
>  	/* .. and a slight advantage to the current MM */
>  	if (p->mm == this_mm || !p->mm)
> @@ -382,7 +388,7 @@
>  	 * delivered to the current task. In this case the remaining time
>  	 * in jiffies will be returned, or 0 if the timer expired in time
>  	 *
> -	 * The current task state is guaranteed to be TASK_RUNNING when this
> +	 * The current task state is guaranteed to be TASK_RUNNING when this
>  	 * routine returns.
>  	 *
>  	 * Specifying a @timeout value of %MAX_SCHEDULE_TIMEOUT will schedule
> @@ -574,7 +580,10 @@
>  		case TASK_RUNNING:;
>  	}
>  	prev->need_resched = 0;
> -
> +	if (prev != idle_task(this_cpu)) {
> +		prev->cpu_jtime -= prev->cpu_jtime > prev->sched_jtime ? prev->sched_jtime : 0;
> +		prev->cpu_jtime += (jiffies - prev->sched_jtime) + jiffies;
> +	}
>  	/*
>  	 * this is the scheduler proper:
>  	 */
> @@ -611,6 +620,7 @@
>  		next->has_cpu = 1;
>  		next->processor = this_cpu;
>  #endif
> +	next->sched_jtime = jiffies;
>  	spin_unlock_irq(&runqueue_lock);
>
>  	if (prev ==
Agenda
See also: IRC log
Wilhelm introduced himself
and explained his work at Opera
He does have some experience in mobile device testing and hopes to contribute to the group
Wilhelm will be working as a replacement for Till.
Till will be leaving the Group.
Any questions from Wilhelm for the Group?
None at this time. He has been reading the mail logs
<dom> ACTION: Dom to look at implementing an id for test session [recorded in] [PENDING]
<dom> ACTION: Dom to look at implementing Till's suggestions for the DOM Test Suites [recorded in] [DONE]
<dom> Automation of DOM Test suite
Carmelo asked when the results are recorded when running the suite
Results are recorded as the tests are run.
Dom: Automation works on Firefox but not on Opera just yet.
Till will have a look into that.
<scribe> ACTION: Till to do extra research on automatic execution using Opera [recorded in]
ok waiting on DOM
<dom> ACTION: Dom to look at integrating the WICD test suite into the harness [recorded in] [PENDING]
<dom> ACTION: Till to try the SVG Tiny test suite in the harness on a mobile device [recorded in] [DONE]
Everything seems to have run fine using the Mobile device
Till: made a few comments regarding the test suite itself.
<scribe> ACTION: Till to draft a first message regarding the SVG test suite [recorded in]
<scribe> ACTION: carmelo: To contact SVG Working group with Till's comments. [recorded in]
<dom> ACTION: Carmelo to try the SVG Tiny test suite in the harness on a mobile device [recorded in] [PENDING]
Dom: Is Opera using the SVG suite?
Till: Yes very much so
<dom> ACTION: Carmelo to contact the SVG Working Group to let them know about the test harness including SVG Tiny [recorded in] [DONE]
Till: To post a message regarding Opera's usage of SVG Tiny suite.
<dom> ACTION: Dom to check whether we're allowed to put the WCSS Test suite in the mobile test harness [recorded in] [PENDING]
<dom> ACTION: Dom to explore using XHTML MP as part of the harness [recorded in] [PENDING]
Do we need to do anything specific regarding testfest (an event from OMA)?
Allen: Asked about moving harness to a more prominent place in W3C Site
<dom> ACTION: Allen to see from where the harness should be linked [recorded in]
What about using the main page to advertise the harness?
Dom: This is usually done for public documents.
Maybe we are not ready for that kind of attention just yet
Allen: Not a strong opinion either way just yet.
<dom> ACTION: Dom to check with SVG and CSS WG whether they want more visibility of their test suites through the harness [recorded in]
<dom> ACTION: Dom to send responses to respondents of survey [recorded in] [DONE]
<dom> ACTION: Carmelo to update the test submissions document based on Dom's comments and put on the W3C site [recorded in] [DONE]
<dom> [DRAFT] Guidelines for the Mobile Web Test Suites Group (MWTS) Tests Submissions
Carmelo: based on a document for XML Query
... [summarizes content of the document]
... positive test: test with well-formed correct content
... negative test: test with content that should trigger an error of some sort
... behavioral test: test that highlights a specific behavior of a browser
Dom: the point on not using company names on so on should probably apply to all the content, not only function names
<dom> MWBP Tests cases on encoding
<dom> more test cases
dom: I don't think we need the sections on collation; the point on URIs should be to use relative URIs rather than absolute URIs
... can remove also the section on XML and namespaces, "Results and Serialization", as well as numerical boundaries
Dmitri: will send comments by email on positive vs negative tests; would be good to have examples
Carmelo: will publish an updated version of the document later today
dom: I'll be looking into creating the web interface for submitting testcases | http://www.w3.org/2007/04/24-mwts-minutes.html | CC-MAIN-2016-40 | refinedweb | 660 | 51.65 |
import "github.com/ponzu-cms/ponzu/system/search"
Package search is a wrapper around the blevesearch/bleve search indexing and query package, and provides interfaces to extend Ponzu items with rich, full-text search capability.
var (
	// Search tracks all search indices to use throughout system
	Search map[string]bleve.Index

	// ErrNoIndex is for failed checks for an index in Search map
	ErrNoIndex = errors.New("No search index found for type provided")
)
Backup creates an archive of a project's search index and writes it to the response as a download
DeleteIndex removes data from a content type's search index at the given identifier
MapIndex creates the mapping for a type and tracks the index to be used within the system for adding/deleting/checking data
TypeQuery conducts a search and returns a set of Ponzu "targets", Type:ID pairs, and an error. If there is no search index for the typeName (Type) provided, db.ErrNoIndex will be returned as the error
UpdateIndex sets data into a content type's search index at the given identifier
type Searchable interface {
	SearchMapping() (*mapping.IndexMappingImpl, error)
	IndexContent() bool
}
Searchable ...
Package search imports 14 packages and is imported by 8 packages. Updated 2019-01-12.
Hello, I am a beginner and I am so stumped with these two problems. The first one is, how do I get the output to look like this?
Enter the number of pods followed by
the number of peas in a pod:
22 10
22 pods and 10 peas per pod.
The total number of peas = 220
I have attempted for hours and so far I could only get the output to display this.
Enter the number of pods followed by
the number of peas in a pod:
This is what I wrote in my code.
import java.util.Scanner;
public class JavaApplication6 {
public static void main(String[] args) {
Scanner keyboard = new Scanner(System.in);
System.out.println("Enter the number of pods followed by");
System.out.println("the number of peas in a pod:");
int numberOfPods = keyboard.nextInt();
int peasPerPod = keyboard.nextInt();
int totalNumberOfPeas = numberOfPods*peasPerPod;
System.out.print(22 + " pods and ");
System.out.println(peasPerPod + " peas per pod.");
System.out.println("The total number of peas = "
+ totalNumberOfPeas);
}
}
I'm supposed to use a scanner for this, but the thing that gets me the most is how am I supposed to write the code that would display the numbers 22 and 10? I am having so many problems trying to get those two numbers to display.
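For reference, here is a hedged sketch of one way the program could be fixed — the key change is printing the variables numberOfPods and peasPerPod rather than a hard-coded literal like 22. The class and method names below are illustrative, not from the original post, and the Scanner here reads from a fixed string so the sketch runs without typing input (in the real program you would pass System.in instead):

```java
import java.util.Scanner;

public class PeaCounter {

    // Pure helper so the arithmetic is easy to check in isolation.
    static int totalPeas(int pods, int peasPerPod) {
        return pods * peasPerPod;
    }

    public static void main(String[] args) {
        // In the real program: new Scanner(System.in)
        Scanner keyboard = new Scanner("22 10");

        System.out.println("Enter the number of pods followed by");
        System.out.println("the number of peas in a pod:");
        int numberOfPods = keyboard.nextInt();
        int peasPerPod = keyboard.nextInt();

        // Print the values that were read, not hard-coded literals.
        System.out.println(numberOfPods + " pods and " + peasPerPod + " peas per pod.");
        System.out.println("The total number of peas = " + totalPeas(numberOfPods, peasPerPod));
    }
}
```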
Also, how do you output all the letters in a string to uppercase?
Like I'm trying to get a sentence to output in all uppercase letters but to no avail.
I used this code.
System.out.println("I like soda.");
String text = console.nextLine();
text.toUpperCase();
I used the above code but all I get is "I like soda." without that sentence returning in all uppercase letters.
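The missing piece is that Java Strings are immutable: toUpperCase() does not change the string it is called on, it returns a new string, and the snippet above discards that return value. A small sketch of the fix (variable names are illustrative):

```java
public class UpperCaseDemo {
    public static void main(String[] args) {
        String text = "I like soda.";

        // toUpperCase() returns a NEW string; the original is unchanged.
        String shouting = text.toUpperCase();

        System.out.println(shouting); // I LIKE SODA.

        // Or overwrite the variable in place:
        text = text.toUpperCase();
        System.out.println(text); // I LIKE SODA.
    }
}
```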
If anyone can help me with these two problems it would be greatly appreciated. I'm sorry if I'm asking too much, I'm a beginner and I am so lost with this. Thank you. | http://www.javaprogrammingforums.com/%20whats-wrong-my-code/13835-can-someone-help-me-please-printingthethread.html | CC-MAIN-2014-15 | refinedweb | 321 | 67.25 |
Java Persistence/Basic Attributes
Contents
- 1 Basics
- 1.1 Example of basic mapping annotations
- 1.2 Example of basic mapping XML
- 1.3 Common Problems
- 2 Advanced
- 2.1 Temporal, Dates, Times, Timestamps and Calendars
- 2.2 Enums
- 2.3 LOBs, BLOBs, CLOBs and Serialization
- 2.4 Lazy Fetching
- 2.5 Optional
- 2.6 Column Definition and Schema Generation
- 2.7 Insertable, Updatable / Read Only Fields / Returning
- 2.8 Converters (JPA 2.1)
- 2.9 Custom Types
Basics

A basic attribute is one where the attribute class is a simple type such as String, Number, Date or a primitive. A basic attribute's value can map directly to the column value in the database. The following table summarizes the basic types and the database types they map to.

In JPA a basic attribute is mapped through the @Basic annotation or the <basic> element. The types and conversions supported depend on the JPA implementation and database platform. Some JPA implementations may support conversion between many different data-types or additional types, or have extended type conversion support; see the advanced section for more details. Any basic attribute using a type that does not map directly to a database type can be serialized to a binary database type.

The easiest way to map a basic attribute in JPA is to do nothing. Any attributes that have no other annotations and do not reference other entities will be automatically mapped as basic, and even serialized if not a basic type. The column name for the attribute will be defaulted, named the same as the attribute name, as uppercase. Sometimes auto-mapping can be unexpected if you have an attribute in your class that you did not intend to have persisted. You must mark any such non-persistent fields using the @Transient annotation or <transient> element.

Although auto-mapping makes rapid prototyping easy, you typically reach a point where you want control over your database schema. To specify the column name for a basic attribute the @Column annotation or <column> element is used. The column annotation also allows for other information to be specified such as the database type, size, and some constraints.
Example of basic mapping annotations

@Entity
public class Employee {
    // Id mappings are also basic mappings.
    @Id
    @Column(name="ID")
    private long id;

    @Basic
    @Column(name="F_NAME")
    private String firstName;

    // The @Basic is not required in general because it is the default.
    @Column(name="L_NAME")
    private String lastName;

    // Any un-mapped field will be automatically mapped as basic and column name defaulted.
    private BigDecimal salary;

    // Non-persistent fields must be marked as transient.
    @Transient
    private EmployeeService service;
    ...
}
Example of basic mapping XML

<entity name="Employee" class="org.acme.Employee" access="FIELD">
    <attributes>
        <id name="id">
            <column name="ID"/>
        </id>
        <basic name="firstName">
            <column name="F_NAME"/>
        </basic>
        <basic name="lastName">
            <column name="L_NAME"/>
        </basic>
        <transient name="service"/>
    </attributes>
</entity>
Common Problems
Translating Values

- See Conversion
Truncated Data

- A common issue is that data, such as Strings, written from the object is truncated when read back from the database. This is normally caused by the column length not being large enough to handle the object's data. In Java there is no maximum size for a String, but in a database VARCHAR field there is a maximum size. You must ensure that the length you set in your column when you create the table is large enough to handle any object value. For very large Strings CLOBs can be used, but in general CLOBs should not be overused, as they are less efficient than a VARCHAR.
- If you use JPA to generate your database schema, you can set the column length through the Column annotation or element; see Column Definition and Schema Generation.
How to map timestamps with timezones?

How to map XML data-types?
- See Custom Types

How to map Struct and Array types?
- See Custom Types

How to map custom database types?
- See Custom Types

How to exclude fields from INSERT or UPDATE statements, or default values in triggers?

Advanced
Temporal, Dates, Times, Timestamps and Calendars

Dates, times, and timestamps are common types both in the database and in Java, so in theory mapping these types should be simple, right? Well, sometimes this is the case and just a normal Basic mapping can be used; however, sometimes it becomes more complex.

Some databases do not have DATE and TIME types, only TIMESTAMP fields; however, some do have separate types, and some just have DATE and TIMESTAMP. Originally, in Java 1.0, Java only had a java.util.Date type, which was a date, a time and milliseconds. In Java 1.1 this was expanded to support the common database types with java.sql.Date, java.sql.Time, and java.sql.Timestamp; then, to support internationalization, Java created the java.util.Calendar type and virtually deprecated (almost all of the methods of) the old date types (which JDBC still uses).

If you map a Java java.sql.Date type to a database DATE, this is just a basic mapping and you should not have any issues (ignore Oracle's DATE type, which is/was a timestamp, for now). You can also map java.sql.Time to TIME, and java.sql.Timestamp to TIMESTAMP. However, if you have a java.util.Date or java.util.Calendar in Java and wish to map it to a DATE or TIME, you may need to indicate that the JPA provider perform some sort of conversion. In JPA the @Temporal annotation or <temporal> element is used to map this. You can indicate that just the DATE or TIME portion of the date/time value be stored to the database. You could also use Temporal to map a java.sql.Date to a TIMESTAMP field, or any other such conversion.
Example of temporal annotation

@Entity
public class Employee {
    ...
    @Basic
    @Temporal(DATE)
    private Calendar startDate;
    ...
}
Example of temporal XML

<entity name="Employee" class="org.acme.Employee" access="FIELD">
    <attributes>
        ...
        <basic name="startDate">
            <temporal>DATE</temporal>
        </basic>
    </attributes>
</entity>
Milliseconds

The precision of milliseconds is different for different temporal classes and database types, and on different databases. The java.util.Date and Calendar classes support milliseconds. The java.sql.Date and java.sql.Time classes do not support milliseconds. The java.sql.Timestamp class supports nanoseconds.
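The differing precisions can be checked directly against the JDK classes. A small sketch (the values are deterministic):

```java
import java.sql.Timestamp;
import java.util.Date;

public class PrecisionDemo {
    public static void main(String[] args) {
        // java.util.Date keeps milliseconds in its long value.
        Date d = new Date(1234L);
        System.out.println(d.getTime()); // 1234

        // java.sql.Timestamp carries nanoseconds; constructing it from a
        // millisecond value moves the sub-second part into the nanos field.
        Timestamp ts = new Timestamp(1234L);
        System.out.println(ts.getNanos()); // 234000000
    }
}
```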
On many databases the TIMESTAMP type supports milliseconds. On Oracle prior to Oracle 9, there was only a DATE type, which was a date and a time, but had no milliseconds. Oracle 9 added a TIMESTAMP type that has milliseconds (and nanoseconds), and now treats the old DATE type as only a date, so be careful using it as a timestamp. MySQL has DATE, TIME and DATETIME types. DB2 has DATE, TIME and TIMESTAMP types; the TIMESTAMP supports microseconds. Sybase and SQL Server just have a DATETIME type which has milliseconds, but at least on some versions has precision issues: it seems to store an estimate of the milliseconds, not the exact value.
If you use timestamp version locking you need to be very careful of your milliseconds precision. Ensure your database supports milliseconds precisely, otherwise you may have issues, especially if the value assigned in Java then differs from what gets stored on the database, which will cause the next update to fail for the same object.
In general I would not recommend using a timestamp as a primary key or for version locking. There are too many database compatibility issues, as well as the obvious issue of not supporting two operations in the same millisecond.
Timezones

Temporals become a lot more complex when you start to consider time zones, internationalization, eras, locales, daylight-saving time, etc. In Java only Calendar supports time zones. Normally a Calendar is assumed to be in the local time zone, and is stored and retrieved from the database with that assumption. If you then read that same Calendar on another computer in another time zone, the question is whether you will have the same Calendar, or the Calendar of what the original time would have been in the new time zone. It depends on whether the Calendar is stored as the GMT time or the local time, and whether the time zone was stored in the database.

Some databases support time zones, but most database types do not store the time zone. Oracle has two special types for timestamps with time zones, TIMESTAMPTZ (time zone is stored) and TIMESTAMPLTZ (local time zone is used). Some JPA providers may have extended support for storing Calendar objects and time zones.
- TopLink, EclipseLink : Support the Oracle TIMESTAMPTZ and TIMESTAMPLTZ types using the @TypeConverter annotation and XML.
Joda-Time

Joda-Time is a commonly used framework for date/time usage in Java. It replaces Java Calendars, which many people find difficult to use and which have poor performance.

There is no standard Joda-Time support in JPA, but a Converter can be used to convert between Joda-Time classes and database types.

- TopLink, EclipseLink : The base product offers no specific Joda-Time support, but there is a custom converter provided by a third party library, joda-time-eclipselink-integration.
Enums

Java Enums are typically used as constants in an object model. For example an Employee may have a gender of enum type Gender (MALE, FEMALE).

By default in JPA an attribute of type Enum will be stored as a Basic to the database, using the integer Enum values as codes (i.e. 0, 1). JPA also defines an @Enumerated annotation and <enumerated> element (on a <basic>) to define an Enum attribute. This can be used to store the Enum as the STRING value of its name (i.e. "MALE", "FEMALE").

For translating Enum types to values other than the integer or String name, such as character constants, see Translating Values.
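The two storage modes correspond directly to an Enum's ordinal() and name() in plain Java. A quick sketch of what ORDINAL vs STRING persist, and how the value is mapped back when reading:

```java
public class EnumCodes {

    public enum Gender { MALE, FEMALE }

    public static void main(String[] args) {
        // EnumType.ORDINAL stores the declaration position: 0, 1, ...
        System.out.println(Gender.FEMALE.ordinal()); // 1

        // EnumType.STRING stores the constant's name: "MALE", "FEMALE"
        System.out.println(Gender.FEMALE.name()); // FEMALE

        // Reading back: the column value is mapped to the constant.
        System.out.println(Gender.valueOf("MALE")); // MALE
    }
}
```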
Example of enumerated annotation

public enum Gender {
    MALE,
    FEMALE
}

@Entity
public class Employee {
    ...
    @Basic
    @Enumerated(EnumType.STRING)
    private Gender gender;
    ...
}
Example of enumerated XML

<entity name="Employee" class="org.acme.Employee" access="FIELD">
    <attributes>
        ...
        <basic name="gender">
            <enumerated>STRING</enumerated>
        </basic>
    </attributes>
</entity>
LOBs, BLOBs, CLOBs and Serialization

A LOB is a Large OBject, such as a BLOB (Binary LOB) or a CLOB (Character LOB). It is a database type that can store a large binary or string value, as the normal VARCHAR or VARBINARY types typically have size limitations. A LOB is often stored as a locator in the database table, with the actual data stored outside of the table. In Java a CLOB will normally map to a String, and a BLOB will normally map to a byte[], although a BLOB may also represent some serialized object.

By default in JPA any Serializable attribute that is not a relationship or a basic type (String, Number, temporal, primitive) will be serialized to a BLOB field.
JPA defines the @Lob annotation and <lob> element (on a <basic>) to define that an attribute maps to a LOB type in the database. The annotation is just a hint to the JPA implementation that this attribute will be stored in a LOB, as LOBs may need to be persisted specially. Sometimes just mapping the LOB as a normal Basic will work fine as well.
Various databases and JDBC drivers have various limits for LOB sizes. Some JDBC drivers have issues beyond 4k, 32k or 1meg. The Oracle thin JDBC drivers had a 4k limitation in some versions for binding LOB data. Oracle provided a workaround for this limitation, which some JPA providers support. For reading LOBs, some JDBC drivers prefer using streams, some JPA providers also support this option.
Typically the entire LOB will be read and written for the attribute. For very large LOBs, always reading the value, or reading the entire value, may not be desired. The fetch type of the Basic could be set to LAZY to avoid reading a LOB unless accessed. Support for LAZY fetching on Basic is optional in JPA, so some JPA providers may not support it. A workaround, which is often a good idea in general given the large performance cost of LOBs, is to store the LOB in a separate table and class and define a OneToOne to the LOB object instead of a Basic. If the entire LOB is never desired to be read, then it should not be mapped; it is best to use direct JDBC to access and stream the LOB in this case. It may be possible to map the LOB to a java.sql.Blob/java.sql.Clob in your object to avoid reading the entire LOB, but these require a live connection, so may have issues with detached objects.
Example of lob annotation

@Entity
public class Employee {
    ...
    @Basic(fetch=FetchType.LAZY)
    @Lob
    private byte[] picture;
    ...
}
Example of lob XML

<entity name="Employee" class="org.acme.Employee" access="FIELD">
    <attributes>
        ...
        <basic name="picture" fetch="LAZY">
            <lob/>
        </basic>
    </attributes>
</entity>
Lazy Fetching

The fetch attribute can be set on a Basic mapping to use LAZY fetching. By default all Basic mappings are EAGER, which means the column is selected whenever the object is selected. By setting the fetch to LAZY, the column will not be selected with the object. If the attribute is accessed, then the attribute value will be selected in a separate database select. Support for LAZY is an optional feature of JPA, so some JPA providers may not support it. Typically support for lazy on basics will require some form of byte code weaving or dynamic byte code generation, which may have issues in certain environments or JVMs, or may require preprocessing your application's persistence unit jar.

Only attributes that are rarely accessed should be marked lazy, as accessing the attribute causes a separate database select, which can hurt performance. This is especially true if a large number of objects is queried. The original query will require one database select, but if each object's lazy attribute is accessed, this will require n database selects, which can be a major performance issue.

Using lazy fetching on basics is similar to the concept of fetch groups. Lazy basics are basically support for a single default fetch group. Some JPA providers support fetch groups in general, which allow more sophisticated control over what attributes are fetched per query.

- TopLink, EclipseLink : Support lazy basics and fetch groups. Fetch groups can be configured through the EclipseLink API using the FetchGroup class.
Optional

A Basic attribute can be optional if its value is allowed to be null. By default everything is assumed to be optional, except for an Id, which can not be optional. Optional is basically only a hint that applies to database schema generation, if the persistence provider is configured to generate the schema: it adds a NOT NULL constraint to the column if false. Some JPA providers also perform validation of the object for optional attributes, and will throw a validation error before writing to the database, but this is not required by the JPA specification. Optional is defined through the optional attribute of the Basic annotation or element.
Column Definition and Schema Generation

There are various attributes on the Column annotation and element for database schema generation. If you do not use JPA to generate your schema you can ignore these. Many JPA providers do provide the feature of auto generation of the database schema. By default the Java types of the object's attributes are mapped to their corresponding database type for the database platform you are using. You may require configuring your database platform with your provider (such as a persistence.xml property) to allow schema generation for your database, as many databases use different type names.

The columnDefinition attribute of Column can be used to override the default database type used, or enhance the type definition with constraints or other such DDL. The length, scale and precision can also be set to override defaults. Since the defaults for the length are just defaults, it is normally a good idea to set these to be correct for your data model's expected data, to avoid data truncation. The unique attribute can be used to define a unique constraint on the column; most JPA providers will automatically define primary key and foreign key constraints based on the Id and relationship mappings.

JPA does not define any options to define an index. Some JPA providers may provide extensions for this. You can also create your own indexes through native queries.
Example of column annotations

@Entity
public class Employee {
    @Id
    @Column(name="ID")
    private long id;

    @Column(name="SSN", unique=true, nullable=false)
    private long ssn;

    @Column(name="F_NAME", length=100)
    private String firstName;

    @Column(name="L_NAME", length=200)
    private String lastName;

    @Column(name="SALARY", precision=10, scale=2)
    private BigDecimal salary;

    @Column(name="S_TIME", columnDefinition="TIMESTAMPTZ")
    private Calendar startTime;

    @Column(name="E_TIME", columnDefinition="TIMESTAMPTZ")
    private Calendar endTime;
    ...
}
Example of column XML

<entity name="Employee" class="org.acme.Employee" access="FIELD">
    <attributes>
        <id name="id">
            <column name="ID"/>
        </id>
        <basic name="ssn">
            <column name="SSN" unique="true" nullable="false"/>
        </basic>
        <basic name="firstName">
            <column name="F_NAME" length="100"/>
        </basic>
        <basic name="lastName">
            <column name="L_NAME" length="200"/>
        </basic>
        <basic name="startTime">
            <column name="S_TIME" columnDefinition="TIMESTAMPTZ"/>
        </basic>
        <basic name="endTime">
            <column name="E_TIME" columnDefinition="TIMESTAMPTZ"/>
        </basic>
    </attributes>
</entity>
If using BigDecimal with Postgresql, JPA maps salary to a table column of type NUMERIC(38,0). You can adjust scale and precision for BigDecimal within the @Column annotation.
@Column(precision=8, scale=2)
private BigDecimal salary;
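precision and scale mirror BigDecimal's own terminology — precision is the total number of digits and scale is the number of digits after the decimal point — which you can verify directly against the class:

```java
import java.math.BigDecimal;

public class PrecisionScale {
    public static void main(String[] args) {
        BigDecimal salary = new BigDecimal("123456.78");

        // 8 total digits -> fits @Column(precision=8, ...)
        System.out.println(salary.precision()); // 8

        // 2 digits after the decimal point -> fits @Column(..., scale=2)
        System.out.println(salary.scale()); // 2
    }
}
```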
Insertable, Updatable / Read Only Fields / Returning

The Column annotation and XML element define insertable and updatable options. These allow for the column, or foreign key field, to be omitted from the SQL INSERT or UPDATE statement. These can be used if constraints on the table prevent insert or update operations. They can also be used if multiple attributes map to the same database column, such as with a foreign key field through a ManyToOne and Id or Basic mapping. Setting both insertable and updatable to false effectively marks the attribute as read-only.

insertable and updatable can also be used if the database table defaults, or auto assigns, values to the column on insert or update. Be careful in doing this though, as this means that the object's values will be out of synch with the database, unless it is refreshed. For IDENTITY or auto assigned id columns a GeneratedValue should normally be used, instead of setting insertable to false. Some JPA providers also support returning auto assigned field values from the database after insert or update operations. The cost of refreshing or returning fields back into the object can affect performance, so it is normally better to initialize field values in the object model, not in the database.

- TopLink, EclipseLink : Support returning insert and update values back into the object using the ReturnInsert and ReturnUpdate annotations and XML elements.
Converters (JPA 2.1)

A common problem in storing values to the database is that the value desired in Java differs from the value used in the database. Common examples include using a boolean in Java and a 0, 1 or a 'T', 'F' in the database. Other examples are using a String in Java and a DATE in the database, or mapping custom Java types such as Joda-Time types, or a Money type.

JPA 2.1 defines the @Converter, @Convert annotations and <converter>, <convert> XML elements. A Converter is a user defined class that provides custom conversion routines in Java code. It must implement the AttributeConverter interface and be annotated with the @Converter annotation (or specified in XML). A Converter can be used in one of two ways. Normally it is specified on a mapping using the @Convert annotation or <convert> XML element. Another option, if converting a custom type, is to have the Converter applied to any mapped attribute that has that type. To define such a global converter the autoApply flag is added to the @Converter annotation. The @Convert disableConversion flag can be used to disable a global converter from being applied. The @Convert attributeName option can be used to override inherited or embeddable conversions.
Example Converter

@Entity
public class Employee {
    ...
    @Convert(converter=BooleanTFConverter.class)
    private Boolean isActive;
    ...
}

@Converter
public class BooleanTFConverter implements AttributeConverter<Boolean, String> {

    public String convertToDatabaseColumn(Boolean value) {
        if (Boolean.TRUE.equals(value)) {
            return "T";
        } else {
            return "F";
        }
    }

    public Boolean convertToEntityAttribute(String value) {
        return "T".equals(value);
    }
}
Example global Converter

@Entity
public class Employee {
    ...
    private Boolean isActive;
    ...
}

@Converter(autoApply=true)
public class BooleanTFConverter implements AttributeConverter<Boolean, String> {

    public String convertToDatabaseColumn(Boolean value) {
        if (Boolean.TRUE.equals(value)) {
            return "T";
        } else {
            return "F";
        }
    }

    public Boolean convertToEntityAttribute(String value) {
        return "T".equals(value);
    }
}
Conversion

Prior to JPA 2.1 there was no standard way to convert between a data-type and an object-type. One way to accomplish this was to translate the data through property get/set methods.
@Entity
public class Employee {
    ...
    private boolean isActive;
    ...

    @Transient
    public boolean getIsActive() {
        return isActive;
    }

    public void setIsActive(boolean isActive) {
        this.isActive = isActive;
    }

    @Basic
    private String getIsActiveValue() {
        if (isActive) {
            return "T";
        } else {
            return "F";
        }
    }

    private void setIsActiveValue(String isActive) {
        this.isActive = "T".equals(isActive);
    }
}
Also, for translating date/times, see Temporals.
As well, some JPA providers have special conversion support.

- TopLink, EclipseLink : Support translation using the @Convert, @Converter, @ObjectTypeConverter and @TypeConverter annotations and XML.
Custom Types

JPA defines support for most common database types; however, some databases and JDBC drivers have additional types that may require additional support.
Some custom database types include:
- TIMESTAMPTZ, TIMESTAMPLTZ (Oracle)
- TIMESTAMP WITH TIMEZONE (Postgres)
- XMLTYPE (Oracle)
- XML (DB2)
- NCHAR, NVARCHAR, NCLOB (Oracle)
- Struct (STRUCT Oracle)
- Array (VARRAY Oracle)
- BINARY_INTEGER, DEC, INT, NATURAL, NATURALN, BOOLEAN (Oracle)
- POSITIVE, POSITIVEN, SIGNTYPE, PLS_INTEGER (Oracle)
- RECORD, TABLE (Oracle)
- SDO_GEOMETRY (Oracle)
- LOBs (Oracle thin driver)
To handle persistence to custom database types you may be able to use a Converter or a special feature of your JPA provider. Otherwise you may need to mix raw JDBC code with your JPA objects. Some JPA providers provide custom support for many custom database types; some also provide custom hooks for adding your own JDBC code to support a custom database type.
- TopLink, EclipseLink : Support several custom database types including TIMESTAMPTZ, TIMESTAMPLTZ, XMLTYPE, NCHAR, NVARCHAR, NCLOB, object-relational Struct and Array types, PLSQL types, SDO_GEOMETRY and LOBs.
Introduction:
Before implementing this, first design a table UserInfo in your database as shown below

Once the table design is complete, enter some dummy data for our example as shown below
UserInfo
Now we are going to use our Gmail account credentials to send mail. For that, you first need to enable the POP option in your Gmail account: open your Gmail account and go to Settings --> Forwarding and POP/IMAP
After that, design your aspx page like this:
After that, add the following namespaces in your code-behind:
C# Code
After that, write the following code in the button click event:
VB.NET Code
Demo
Download Sample code Attached
25 comments :
hi Suresh ji! good morning ! You r doing great job.I daily go through with your articles.
Actually sir,right now i m looking for a job change and everybody is asking WCF and MVC in interview.
Please teach us MVC and WCF and make us master in WCF and MVC as in your other articles like gridview,javascript etc..
highly thankful to you.
Vipin Kumar Pandey
your all articles r awesome!!
Hello suresh sir,
Please add one type of article that How to change Language dynamically in a website.
nice article suresh ji......
thank for provide nice article its very helpful for interview dapfor. com
Sir, My final B.Tech Final Year Project is "Mailing Website like Gmail" obiviously it will going to be a milestone in myproject but sir please guide me on how we send and receive mails if we have our own domain name like "abhinavsingh993@richmail.com" where rich mail is my domain name....Please....Please guide me on that ........now as usual you are simply great and hats off to your dedication and curiosity to explain ASP.NET at the end I am eagerly waiting for your next post....thank you sir
@Abhinavsingh993, simply replace google domain name with yours and put your own smtp server (ask your admin if you don't know it) instead of googles.
Nothing else is google specific, it's standard mailing code in .NET
thanks bhijan i love u teacher
Dear Sir,
Please Give Some easy Examples For Learning JavaScripts..
Thank You
while implementing this application i am getting error like you need 5.5.1 authentication required.
help me to avoid this error
i got an idea from this, thank you!
Nice One..
thank you so much
This code cannot be used in real life due to security (Sql Injection) and maintenance (Values that could be changed like connectionstring, smtp etc). For the people that could copy this code should bear this in mind.
Thnx sir for the code..
This code help me to code in my project...
please tech us MVC
Hi.
using (SqlConnection con = new SqlConnection("Data Source=SureshDasari;Integrated Security=true;Initial Catalog=MySampleDB"))
can i replace with
string connectionString = ConfigurationManager.ConnectionStrings["ConnectionString"].ConnectionString;
SqlConnection con = new SqlConnection(connectionString);
because I dont know how to declare the data source, integrated security and initial catalog.
Thank you
it seems that this article is written by a beginner, this should not be implement in real project. i just post some ideas.
1. store a unique key (salted or hash or encrypted) in database table while creating a user.
2. send an email link to the user login Email while submitting forgot password request. for example
3. confirm the link from user by clicking the link
4. match the link key with the stored key in database user table
5. show to reset password page.
This is the basic idea that should be implement in the real project.
Thank you.
Regards
zeex.programmer@yahoo.com
The SMTP server requires a secure connection or the client was not authenticated. The server response was: 5.5.1 Authentication Required.
hello sir i am little bit confused about these 2 lines.how could be txtEmail.Text is same for both.please explain.thank you
Msg.From = New MailAddress(txtEmail.Text)
Msg.[To].Add(txtEmail.Text)
it was very helpful! thanks
sir , if Same email id associate with multiple email then what happen ???? As this code allow same email id to the multiple user ????
thanku so much it is very useful in my project
Awesome......
really helpful peace of code....
my professor give me 10 out of 10 for this code...
thank you sooooo much
braavoooooooooo
100% working
can you please make some this kind of code that send someone FB password to our mail please | http://www.aspdotnet-suresh.com/2012/11/code-for-forgot-password-in-aspnet.html?showComment=1351946827207 | CC-MAIN-2015-18 | refinedweb | 747 | 66.74 |
Build a Peer-to-Peer File Sharing Component in React & PeerJS
This article was peer reviewed by Dan Prince and Bruno Mota. Thanks to all of SitePoint’s peer reviewers for making SitePoint content the best it can be!
In this tutorial we’re going to build a file sharing app with PeerJS and React. I’ll be assuming that you’re a complete beginner when it comes to React so I’ll be providing as much detail as possible.
For you to have an idea of what we’re going to build, here are a couple of screenshots of what the app will look like. First, when the component is ready for use:
And here’s what it looks like when the current user is already connected to a peer and the peer has shared some files with the user:
The source code for this tutorial is available on GitHub.
The Tech Stack
As mentioned earlier, the file sharing app is going to use PeerJS and React. The PeerJS library allows us to connect two or more devices via WebRTC, providing a developer-friendly API. If you don’t know what WebRTC is, it is basically a protocol that allows real-time communications on the web. On the other hand, React is a component-based view library. If you’re familiar with Web Components, it’s similar in the way that it gives you the ability to create custom standalone UI elements. If you want to dive deeper into this, I recommend reading ReactJS For Stupid People.
Installing the Dependencies
Before we start building the app, we first need to install the following dependencies using npm:
npm install --save react react-dom browserify babelify babel-preset-react babel-preset-es2015 randomstring peerjs
Here’s a brief description of what each one does:
- react – the React library.
- react-dom – this allows us to render React components into the DOM. React doesn’t directly interact with the DOM, but instead uses a virtual DOM. ReactDOM is responsible for rendering the component tree into the browser. If you want to dive in more into this, I recommend reading ReactJS|Learning Virtual DOM and React Diff Algorithm.
- browserify – allows us to use require statements in our code to require dependencies. This is responsible for bringing all the files together (bundling) so they can be used in the browser.
- babelify – the Babel transformer for Browserify. This is responsible for compiling the bundled es6 code to es5.
- babel-preset-react – the Babel preset for all react plugins. It’s used for transforming JSX into JavaScript code.
- babel-preset-es2015 – the Babel preset that translates ES6 code to ES5.
- randomstring – generates random string. We’ll use this for generating the keys needed for the file list.
- peerjs – the PeerJS library. Responsible for making connections and sharing files between peers.
Building the App
Now we’re ready to build the app. First let’s take a look at the directory structure:
- js
- node_modules
- src
  - main.js
  - components
    - filesharer.jsx
- index.html
- js – where the bundled JavaScript file produced by Browserify is stored.
- src – where the React components are stored. Inside, we have the main.js file, in which we import React and the components used by the app. In this case we only have filesharer.jsx, which contains the main meat of the app.
- index.html – the main file of the app.
Index Page
Let's start with the index.html file. This contains the default structure of the app. Inside the <head> we have the link to the main stylesheet and the PeerJS library. Inside the <body> we have the title bar of the app and the main <div> where we'll append the React component that we create. Just before the closing <body> tag is the main JavaScript file of the app.
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8">
    <title>React File Sharer</title>
    <link href="" rel="stylesheet" type="text/css" />
  </head>
  <body>
    <div class="mui-appbar mui--appbar-line-height">
      <div class="mui-container">
        <span class="mui--text-headline">
          React FileSharer
        </span>
      </div>
    </div>
    <br />
    <div class="mui-container">
      <div id="main" class="mui-panel"></div>
    </div>
    <script src="js/main.js"></script>
  </body>
</html>
Main JavaScript File
The src/main.js file is where we render the main component into the DOM.

First, we require the React framework, ReactDOM, and the Filesharer component.
var React = require('react');
var ReactDOM = require('react-dom');
var Filesharer = require('./components/filesharer.jsx');
Then we declare an options object. This is used to specify options for the Filesharer component. In this case we're passing in the peerjs_key. This is the API key that you get from the PeerJS website so that you can use their Peer Cloud Service to set up peer-to-peer connections. In the case of our app, it serves as the middleman between the two peers (devices) that are sharing files.
var options = {
  peerjs_key: 'your peerjs key'
}
Next we define the main component. We do that by calling the createClass method of the React object. This accepts an object as its argument. By default, React expects a render function to be defined inside the object. What this function does is return the UI of the component. In this case we're simply returning the Filesharer component which we imported earlier. We're also passing in the options object as a value for the opts attribute. In React these attributes are called props, and they become available for use inside the component, kind of like passing in arguments to a function. Later on, inside the Filesharer component, you can access the options by saying this.props.opts followed by any property you wish to access.
var Main = React.createClass({
  render: function () {
    return <Filesharer opts={options} />;
  }
});
We get a reference to the main div from the DOM and then render the main component using ReactDOM's render method. If you're familiar with jQuery, this is basically similar to the append method. So what we're doing is appending the main component into the main div.
var main = document.getElementById('main');
ReactDOM.render(<Main/>, main);
Filesharer Component
The Filesharer component (src/components/filesharer.jsx), as I mentioned earlier, contains the main meat of the app. The main purpose of components is to have standalone code that can be used anywhere. Other developers can just import it (like we did inside the main component), pass in some options, render it and then add some CSS.
Breaking it down, we first import the React framework, randomstring library, and the PeerJS client.
var React = require('react');
var randomstring = require('randomstring');
var Peer = require('peerjs');
We expose the component to the outside world:
module.exports = React.createClass({
  ...
});
Earlier in our main JavaScript file we passed in an optional prop to customize the labels that will be shown in the file sharer component. To ensure that the correct property name (opts) and data type (React.PropTypes.object) are passed to the component, we use propTypes to specify what we're expecting.
propTypes: {
  opts: React.PropTypes.object
},
Inside the object passed to the createClass method, we have the getInitialState method, which is what React uses for returning the default state of the component. Here we return an object containing the following:

- peer – the PeerJS object which is used to connect to the server. This allows us to obtain a unique ID that can be used by others to connect to us.
- my_id – the unique ID assigned by the server to the device.
- peer_id – the ID of the peer you're connecting to.
- initialized – a boolean value that's used for determining whether we have already connected to the server or not.
- files – an array for storing the files that have been shared to us.
getInitialState: function(){
  return {
    peer: new Peer({key: this.props.opts.peerjs_key}),
    my_id: '',
    peer_id: '',
    initialized: false,
    files: []
  }
}
Note that the PeerJS initialization code that we've used above is only for testing purposes, which means that it will only work when you're sharing files between two browsers open on your computer, or when you're sharing files on the same network. If you actually want to build a production app later on, you'd have to use the PeerServer instead of the Peer Cloud Service. This is because the Peer Cloud Service has limits on how many concurrent connections your app can have. You also have to specify a config property in which you add the ICE server configuration. Basically, what this does is allow your app to cope with NATs and firewalls or other devices which exist between the peers. If you want to learn more you can read this article on WebRTC on HTML5Rocks. I've already added some ICE server config below. But in case it doesn't work, you can either pick from here or create your own.
peer = new Peer({
  host: 'yourwebsite.com',
  port: 3000,
  path: '/peerjs',
  debug: 3,
  config: {'iceServers': [
    { url: 'stun:stun1.l.google.com:19302' },
    { url: 'turn:numb.viagenie.ca', credential: 'muazkh', username: 'webrtc@live.com' }
  ]}
})
Getting back on track, next we have the
componentWillMount method, which is executed right before the component is mounted into the DOM. So this is the perfect place for executing code that we want to run right before anything else.
componentWillMount: function() {
  ...
},
In this case we use it for listening for the open event triggered by the peer object. When this event is triggered, it means that we are already connected to the peer server. The unique ID assigned by the peer server is passed along as an argument, so we use it to update the state. Once we have the ID we also have to update initialized to true. This reveals the element in the component which shows the text field for connecting to a peer. In React, the state is used for storing data that is available throughout the whole component. Calling the setState method updates the property that you specified, if it already exists; otherwise it simply adds a new one. Also note that updating the state causes the whole component to re-render.
this.state.peer.on('open', (id) => {
  console.log('My peer ID is: ' + id);
  this.setState({
    my_id: id,
    initialized: true
  });
});
Next we listen for the connection event. This is triggered whenever another person tries to connect to us. In this app, that only happens when they click on the connect button. When this event is triggered, we update the state to set the current connection. This represents the connection between the current user and the user on the other end. We use it to listen for the open event and the data event. Note that here we've passed in a callback function as the second argument of the setState method. This is because we're using the conn object in the state to listen for the open and data events, so we want it to be already available once we do. The setState method is asynchronous, so if we listen for the events right after we've called it, the conn object might still not be available in the state, which is why we need the callback function.
this.state.peer.on('connection', (connection) => {
  console.log('someone connected');
  console.log(connection);
  this.setState({
    conn: connection
  }, () => {
    this.state.conn.on('open', () => {
      this.setState({
        connected: true
      });
    });
    this.state.conn.on('data', this.onReceiveData);
  });
});
The open event is triggered when the connection to the peer is successfully established by the peer server. When this happens, we set connected in the state to true. This will show the file input to the user.
The data event is triggered whenever the user on the other side (which I will call the "peer" from now on) sends a file to the current user. When this happens we call the onReceiveData method, which we'll define later. For now, know that this function is responsible for processing the files that we received from a peer.
You also need to add componentWillUnmount(), which is executed right before the component is unmounted from the DOM. This is where we clean up any event listeners that were added when the component was mounted. For this component, we can do that by calling the destroy method on the peer object. This closes the connection to the server and terminates all existing connections. This way we won't have any other event listeners getting fired if this component is used somewhere else in the current page.
componentWillUnmount: function(){
  this.state.peer.destroy();
},
The connect method is executed when the current user tries to connect to a peer. We connect to a peer by calling the connect method on the peer object and passing it the peer_id, which we also get from the state. Later on you'll see how we assign a value to the peer_id. For now, know that the peer_id is the value input by the user in the text field for entering the peer ID. The value returned by the connect function is then stored in the state. Then we do the same thing that we did earlier: listen for the open and data events on the current connection. Note that this time, this is for the user who is trying to connect to a peer. The other one earlier was for the user who is being connected to. We need to cover both cases so the file sharing will be two-way.
connect: function(){
  var peer_id = this.state.peer_id;
  var connection = this.state.peer.connect(peer_id);
  this.setState({
    conn: connection
  }, () => {
    this.state.conn.on('open', () => {
      this.setState({
        connected: true
      });
    });
    this.state.conn.on('data', this.onReceiveData);
  });
},
The sendFile method is executed whenever a file is selected using the file input. But, instead of using this.files to get the file data, we use event.target.files. By default, this in React refers to the component itself, so we can't use that. Next we extract the first file from the array, and create a blob by passing the files and an object containing the type of the file as arguments to the Blob object. Finally we send it to our peer along with the file name and type by calling the send method on the current peer connection.
sendFile: function(event){
  console.log(event.target.files);
  var file = event.target.files[0];
  var blob = new Blob(event.target.files, {type: file.type});
  this.state.conn.send({
    file: blob,
    filename: file.name,
    filetype: file.type
  });
},
The onReceiveData method is responsible for processing the data received by PeerJS. This is what catches whatever is sent by the sendFile method. So the data argument that's passed to it is basically the object that we passed to the conn.send method earlier.
onReceiveData: function(data){
  ...
},
Inside the function we create a blob from the data that we received… Wait, what? But we already converted the file into a blob and sent it using PeerJS, so why the need to create a blob again? I hear you. The answer is that when we send the blob it doesn't actually stay as a blob. If you're familiar with the JSON.stringify method for converting objects into strings, it basically works the same way. So the blob that we passed to the send method gets converted into a format that can be easily sent through the network. When we receive it, it's no longer the same blob which we sent. That's why we need to create a new blob again from it. But this time we have to place it inside an array, since that's what the Blob object expects. Once we have the blob, we then use the URL.createObjectURL function to convert it into an object URL. Then we call the addFile function to add the file into the list of files received.
console.log('Received', data);
var blob = new Blob([data.file], {type: data.filetype});
var url = URL.createObjectURL(blob);
this.addFile({
  'name': data.filename,
  'url': url
});
Here's the addFile function. All it does is get whatever files are currently in the state, add the new file to them and update the state. The file_id is used as the value for the key attribute required by React when you're making lists.
addFile: function (file) {
  var file_name = file.name;
  var file_url = file.url;
  var files = this.state.files;
  var file_id = randomstring.generate(5);
  files.push({
    id: file_id,
    url: file_url,
    name: file_name
  });
  this.setState({
    files: files
  });
},
The handleTextChange method updates the state whenever the value of the text field for entering the peer ID changes. This is how the state is kept up to date with the current value of the peer ID text field.
handleTextChange: function(event){
  this.setState({
    peer_id: event.target.value
  });
},
The render method renders the UI of the component. By default, it renders a loading text because the component first needs to acquire a unique peer ID. Once it has a peer ID, the state is updated, which then triggers the component to re-render, but this time with the result inside the this.state.initialized condition. Inside that we have another condition which checks if the current user is already connected to a peer (this.state.connected). If they are, then we call the renderConnected method; if not, then renderNotConnected().
render: function() {
  var result;
  if(this.state.initialized){
    result = (
      <div>
        <div>
          <span>{this.props.opts.my_id_label || 'Your PeerJS ID:'} </span>
          <strong className="mui--divider-left">{this.state.my_id}</strong>
        </div>
        {this.state.connected ? this.renderConnected() : this.renderNotConnected()}
      </div>
    );
  } else {
    result = <div>Loading...</div>;
  }
  return result;
},
Also note that above we're using props to customize the labels. So if my_id_label is added as a property in the options object earlier, it would use the value assigned to that instead of the value at the right side of the double pipe (||) symbol.
Here's the renderNotConnected method. All it does is show the peer ID of the current user, a text field for entering the ID of another user, and a button for connecting to another user. When the value of the text field changes, the onChange function is triggered. This calls the handleTextChange method which we defined earlier. This updates the text that's currently in the text field, as well as the value of the peer_id in the state. The button executes the connect function when clicked, which initiates the connection between the peers.
renderNotConnected: function () {
  return (
    <div>
      <hr />
      <div className="mui-textfield">
        <input type="text" className="mui-textfield" onChange={this.handleTextChange} />
        <label>{this.props.opts.peer_id_label || 'Peer ID'}</label>
      </div>
      <button className="mui-btn mui-btn--accent" onClick={this.connect}>
        {this.props.opts.connect_label || 'connect'}
      </button>
    </div>
  );
},
On the other hand, the renderConnected function shows the file input and the list of files that were shared to the current user. Whenever the user clicks on the file input, it opens the file selection box. Once the user has selected a file, it fires off the onChange event listener, which in turn calls the sendFile method, which sends the file to the peer. Below it, we call either the renderListFiles method or renderNoFiles, depending on whether there are files currently in the state.
renderConnected: function () {
  return (
    <div>
      <hr />
      <div>
        <input type="file" name="file" id="file" className="mui--hide" onChange={this.sendFile} />
        <label htmlFor="file" className="mui-btn mui-btn--small mui-btn--primary mui-btn--fab">+</label>
      </div>
      <div>
        <hr />
        {this.state.files.length ? this.renderListFiles() : this.renderNoFiles()}
      </div>
    </div>
  );
},
The renderListFiles method, as the name suggests, is responsible for listing out all the files that are currently in the state. This loops through all the files using the map function. For each iteration, we call the renderFile function, which returns the link for each file.
renderListFiles: function(){
  return (
    <div id="file_list">
      <table className="mui-table mui-table--bordered">
        <thead>
          <tr>
            <th>{this.props.opts.file_list_label || 'Files shared to you: '}</th>
          </tr>
        </thead>
        <tbody>
          {this.state.files.map(this.renderFile, this)}
        </tbody>
      </table>
    </div>
  );
},
Here's the renderFile function, which returns a table row containing the link to a file.
renderFile: function (file) {
  return (
    <tr key={file.id}>
      <td>
        <a href={file.url} download={file.name}>{file.name}</a>
      </td>
    </tr>
  );
}
Finally, we have the function that’s responsible for rendering the UI when there are no files yet.
renderNoFiles: function () {
  return (
    <span id="no_files_message">
      {this.props.opts.no_files_label || 'No files shared to you yet'}
    </span>
  );
},
Bringing Everything Together
We use the browserify command to bundle the code inside the src directory. Here's the full command that you have to execute while inside the root directory of the project:
browserify -t [ babelify --presets [ es2015 react ] ] src/main.js -o js/main.js
Breaking it down, first we specify the -t option. This allows us to use a transform module. Here we're using Babelify, which uses the react preset and the es2015 preset. So what happens is that first Browserify looks at the file that we specified (src/main.js), parses it and calls on Babelify to do its work. Babelify uses the es2015 preset to translate all the ES6 code to ES5 code, while the React preset transforms all the JSX code to plain JavaScript. Once Browserify has gone through all the files, it brings them together so they can run in the browser.
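Retyping that long command gets tedious during development. One common convenience, not shown in the original article, is to save it as an npm script in package.json (the script name "build" is an arbitrary choice):

```json
{
  "scripts": {
    "build": "browserify -t [ babelify --presets [ es2015 react ] ] src/main.js -o js/main.js"
  }
}
```

With this entry in place, running `npm run build` produces the same js/main.js bundle.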
Points for Consideration
If you're planning to use what you've learned in this tutorial in your own projects, be sure to consider the following:

- Break down the Filesharer component into smaller ones. You might have noticed that there's a bunch of code inside the Filesharer component. Usually this isn't the way you go about things in React. What you'd want to do is break the project down into components as small as possible and then import those smaller components. Using the Filesharer component as an example, we might have a TextInput component for entering the peer's ID, a List component for listing the files that we've received and a FileInput component for uploading files. The idea is to have each component fulfill only a single role.
- Check if WebRTC and File API are available in the browser.
- Handle errors.
- Use Gulp for bundling the code when you make changes to the files and live reload to automatically reload the browser once it’s done.
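The second point above can be handled with a small guard that runs before the component is mounted. The helper below is a sketch under assumptions: the function name and the exact set of checks are my own, not part of the original tutorial, and real apps may need more thorough detection:

```javascript
// Hypothetical pre-mount check for the browser features this component uses:
// WebRTC (PeerJS data connections need RTCPeerConnection, sometimes vendor-
// prefixed) and the File API (Blob plus URL.createObjectURL for download links).
// Taking the global object as a parameter keeps the function easy to test.
function supportsFileSharing(env) {
  var hasWebRTC = !!(env.RTCPeerConnection ||
                     env.webkitRTCPeerConnection ||
                     env.mozRTCPeerConnection);
  var hasFileAPI = !!(env.File && env.Blob && env.URL &&
                      typeof env.URL.createObjectURL === 'function');
  return hasWebRTC && hasFileAPI;
}

// In the browser you would call it with window before rendering, e.g.:
// if (supportsFileSharing(window)) { ReactDOM.render(<Main/>, main); }
```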
Conclusion
That’s it! In this tutorial you’ve learned how to work with PeerJS and React in order to create a file sharing app. You’ve also learned how to use Browserify, Babelify and the Babel-React-preset to transform JSX code into JavaScript code that can run in browsers. | https://www.sitepoint.com/file-sharing-component-react/ | CC-MAIN-2018-09 | refinedweb | 3,775 | 66.23 |
The QChildEvent class contains event parameters for child object events. More...
#include <QChildEvent>
Inherits QEvent.
The QChildEvent class contains event parameters for child object events.
Child events are sent immediately to objects when children are added or removed.
In both cases you can only rely on the child being a QObject (or, if QObject::isWidgetType() returns true, a QWidget). This is because in the QEvent::ChildAdded case the child is not yet fully constructed; in the QEvent::ChildRemoved case it might have already been destructed.
The handler for these events is QObject::childEvent().
Constructs a child event object of a particular type for the given child.
The book is written for readers who have no previous experience with .NET, and in some cases, it will even be suitable for readers with not extensive programming knowledge. It starts off explaining what .NET is, the history of Mono and how it fits in all this. It also explains how to install Mono from scratch, but this guide is limited only to RPM-based distros and Windows.
The book goes on to introduce the reader to MonoDevelop, although this guide is already a bit dated now, as the application has changed significantly since the book was written (in the Mono 1.1.4 days). The chapter also explains how to use the C# compiler, although it has almost zero depth regarding debugging (half a page about it).
The book continues with the introduction to C# language starting from the most basic information (e.g. variables) continuing all the way to events, delegates and namespaces. Throughout this part of the book there is quite some source code listed along with some adequate explanation next to it.
In chapter 5, things get really serious with the introduction to the JIT, the GAC, assemblies and modules. An introduction to Windows.Forms, GTK# and Glade follows. An RSS application is used as the practical example of how to create a graphical C# application using both toolkits. Unfortunately, these two huge subjects are not discussed in enough depth. After reading the book, the reader won't be able to create real-world graphical applications without reading additional books or MonoDoc, or delving directly into other people's source code for ideas and information. The topic of creating graphical applications should have received more attention from the author.
The author quickly moves to other territories like ADO.NET and MySQL, XML and ASP.NET, which I personally don't have enough interest in, but these chapters were well documented and presented. Back to the real deal, the author continues his journey into more advanced topics such as Remoting, the Mono Profiler, performance tips, networking, Reflection, multi-threaded programming, and a quick look at .NET 2.0.
As for the writing, the book is easy to follow, although a bit too monotonic at times. It reads more like a school book than a book that you will also have some fun with while learning. Overall, this is a good book for first-timers on .NET, but probably not as good for brand new programmers, who would find themselves a bit lost after trying to program a real-world application on their own. Most of the information mentioned in the book can easily be found in MonoDoc or elsewhere, but it's nice to have a hard-cover book with all the knowledge put together in perfect order.
Rating: 8/10 | http://www.osnews.com/story/13627/Book_Review_Practical_Mono | CC-MAIN-2015-32 | refinedweb | 460 | 62.68 |
Overview of Total Work Done
1. Brushing of C++
2. Compiled the MapServer code on Linux and Windows from source.
3. Explored the MapServer functionality and codebase through the documentation.
4. Read up about SVG syntax, parsing and rendering.
5. Researched various non-intrusive open source graphics libraries that support SVG (libsvg-cairo, Cairo, librsvg and AGG (Anti-Grain Geometry)). The research involved finding the dependencies, functionality, integratability and ease of use of each library. AGG was chosen in the end, as it had already been integrated into the MapServer codebase in the past. Libsvg-cairo, Cairo and librsvg were pushed aside due to their large number of dependencies and the added task of having to integrate them into the MapServer codebase, unlike AGG.
6. Understood, modified and added the SVG parser example from the AGG codebase into the MapServer codebase. The modifications included changing the namespaces from agg:: to mapserver::. The research included finding the subset of the SVG specification that is supported by the AGG SVG parser example. It also included understanding the various functionalities provided by the parser and renderer. The modified code is stored in a project-specific sandbox that can be viewed here
7. Wrote an RFC on the entire process of adding SVG symbol support to MapServer. The RFC details the various design decisions that were taken. The RFC can be accessed here.
8. Started working on getting SVG rendering support into MapServer. This was done through the following steps:
- Modified the MapServer lexical analyzer (in maplexer.l) and added the required new keywords ('SVG' for the new type and 'SVGPATH' for the path to the SVG image).
- Modified the MapServer mapfile parser (in mapsymbol.c and mapsymbol.h) and added parsing support for identifying and getting information about the SVG type.
- Created new bridge files (mapsymbolsvg.cpp and mapsymbolsvg.h) to integrate the SVG parser and the MapServer C code. Began implementing the createBrushFromSVG function in the new file. This function returns a gdImagePtr (for now, we concentrate on rendering with the GD renderer) from the specified SVG file by first rendering it to an AGG pixmap and then converting it to a GD pixmap.
- Modified the various rendering functions in mapgd.c to use the new createBrushFromSVG function when an SVG symbol is found in the symbolset. Currently, createBrushFromSVG is still not fully implemented or tested.
Work to Do Next Week
Although the official pencils down date is 17th, August, I will continue to do the following in the next week to complete the project successfully.
1. I will be working on completely implementing and testing the createBrushFromSVG function and any remaining glue code.
2. Currently all the work involves getting SVG symbols to render with the GD renderer. Once that is completed, I will move onto getting it to render with the AGG renderer.
3. Any currently unforseen changes to the RFC will be made and some of the remaining documentation will be prepared (eg. the subset of the SVG specification supported by MapServer).
The Experience
Working with MapServer over the summer has been an amazing experience for me. I got to know some wonderful people in the process and, being my first foray into the world of open source development, taught me a lot about how things go on under the hood! I also learned a lot from my mentor (Daniel Morissette), who has been extremely supportive and encouraging right from the beginning. I am also extremely thankful to the other developers on the mailing list and IRC channel who helped me out in many corners constantly. I would love to continue to work along with this community in the near future and take up further MapServer projects.
Links
- Code Sandbox :
- Project Wiki :
- RFC Document : | http://trac.osgeo.org/mapserver/wiki/GSoC_SVG_Symbols_report_20090816 | crawl-003 | refinedweb | 630 | 56.86 |
Removing the History?
Discussion in 'Javascript' started by George Hester, Oct 17,,943
- Leon Mayne [MVP]
- Oct 26, 2005
removing a namespace prefix and removing all attributes not in that same prefixChris Chiasson, Nov 12, 2006, in forum: XML
- Replies:
- 6
- Views:
- 741
- Richard Tobin
- Nov 14, 2006
[ANN] irb-history 1.0.0: Persistent, shared Readline history for IRBSam Stephenson, Jun 18, 2005, in forum: Ruby
- Replies:
- 1
- Views:
- 360
- Andrew Walrond
- Jun 18, 2005
Values entered in the Form disappears when using the history.go(-1) or history.back(), Oct 16, 2006, in forum: Javascript
- Replies:
- 2
- Views:
- 557
- nutso fasst
- Oct 17, 2006
Using history.go() to get to the *last* item in the historyNiall, Dec 4, 2006, in forum: Javascript
- Replies:
- 3
- Views:
- 272
- Niall
- Dec 6, 2006 | http://www.thecodingforums.com/threads/removing-the-history.880131/ | CC-MAIN-2015-40 | refinedweb | 132 | 58.92 |
What is Vendure?
Vendure is a headless ecommerce framework built with TypeScript and Node.js.
Vendure features
- Products and variants
- Stock management
- Payment provider integrations
- Shipping provider integrations
What does “headless” mean?
You might have heard the term “API first” or “headless”. To sum it up, it provides your content (blog posts, products, etc.) as data over an API. This provides a lot of flexibility because it allows you to use the technology stack of your choice. Compared to a traditional CMS (content management system) which would require you to install a software and make you use a certain subset of technologies.
A little bit about our tech stack of choice
In this tutorial, I’m going to be using Next.js, GraphQL, and Apollo Client. Here’s a little bit about each:
- Next.js — a React framework that offers zero config, static generation, server-side rendering, and file-system routing, to name a few
- GraphQL — a query language for an API. In the query, you describe the type of data you want and it returns just that. Nothing more. GraphQL offers a variety of features including being able to get multiple resources in a single request, a type system, and much more!
- Apollo Client — a state management library that enables you to manage remote and local data with GraphQL. It allows us to fetch, cache, and modify our application while automatically updating our UI. Some of the features include declarative data fetching, being able to use modern React features (like Hooks), and it’s universally compatible
Getting started with Vendure
Requirements:
- Node.js v10.13.0 or above
- NPM v5.2 or above or Yarn v0.25 or above
- If on Windows, make sure you have the window build tools installed like this:
npm install --global --production windows-build-tools
# NPM npx @vendure/create name-of-your-app # Yarn yarn create @vendure my-app
This will run Vendure Create and it will ask you a couple of questions regarding your project. For this, I have chosen not to use TypeScript and I chose SQLite for the database. Vendure recommends that you use SQLite if you just want to test out Vendure because it doesn’t require any external requirements.
Running it locally
After the installation,
cd (change directory) into your project and run the following command:
npm run start
After running that command, let’s head on over to to log in as an admin and add/view our products.
Login with the admin username and password that you created during installation. If you forgot what those were, open the application in a text editor and head into the
src > vendure-config.js file and under
authOptions, you should see your username and password.
Once logged in, you should see the dashboard. Now let’s add some products.
Creating our store
In the dashboard, under catalog, click on
Products:
If you chose the option to populate with data then it should look something like this:
If not, then go ahead and click the button that says
New Product and fill in the information.
Integrating Next.js & GraphQL with Vendure
First, let’s take a look at the data. To do that, head over to.
This will bring up the GraphQL playground which is a place to view all the available queries and mutations.
Setting up Next.js
npx create-next-app # or yarn create next-app
After installation, when you try to run the command
npm run dev, you will get an error in the terminal saying that “Port 3000 is already in use.”
Open your project in a text editor and head into the
package.json file and under
scripts > dev, add the following:
next -p 8080
Now back in the terminal, run the following:
npm run dev
Go to localhost:8080 and you should see this:
Setting up Apollo Client
Let’s install everything we need for GraphQL and Apollo:
npm i apollo-boost graphql react-apollo @apollo/react-hooks -S
We’re going to be using
next-with-apollo which is a high order component for Next.js:
npm install next-with-apollo # or with yarn: yarn add next-with-apollo
In the root of your project, create a folder called
lib and a file called
apollo.js and add the following:
lib > apollo.js import withApollo from 'next-with-apollo'; import ApolloClient, { InMemoryCache } from 'apollo-boost'; import { ApolloProvider } from '@apollo/react-hooks'; export default withApollo( ({ initialState }) => { return new ApolloClient({ uri: '<>', cache: new InMemoryCache().restore(initialState || {}) }); }, { render: ({ Page, props }) => { return ( <ApolloProvider client={props.apollo}> <Page {...props} /> </ApolloProvider> ); } } );
Now, we need to wrap the whole app in Apollo’s high order component. In
pages, head into
_app.js and add the following:
import '../styles/globals.css'; import withApollo from '../lib/apollo'; function MyApp({ Component, pageProps }) { return <Component {...pageProps} />; } export default withApollo(MyApp);
With this, we can now query for all of our products. In the
pages >
index.js file, delete the contents, and add the following:
pages > index.js import Head from 'next/head'; import styles from '../styles/Home.module.css'; import gql from 'graphql-tag'; import { useQuery } from '@apollo/react-hooks'; const QUERY = gql` { products { items { slug description assets { source } } } } `; function Home() { const { loading, data } = useQuery(QUERY); console.log(data); return <div className={styles.container}>home</div>; } export default Home;
In your browser’s console, you should see something that looks like this:
Adding CSS & building the store homepage
In this tutorial, I will be using Material-UI. Let’s go ahead and install that:
# NPM npm install @material-ui/core # Yarn yarn add @material-ui/core
At the root of the project, create a folder called
components and add a file called
ProductCard.js.
This will allow us to use Material UI components and our own stylesheet to center everything.
Add the following:
import styles from '../styles/ProductCard.module.css'; import { makeStyles } from '@material-ui/core/styles'; import Card from '@material-ui/core/Card'; import CardActionArea from '@material-ui/core/CardActionArea'; import CardActions from '@material-ui/core/CardActions'; import CardContent from '@material-ui/core/CardContent'; import CardMedia from '@material-ui/core/CardMedia'; import Button from '@material-ui/core/Button'; import Typography from '@material-ui/core/Typography';
This will control the CSS for the Material UI components. Here we are defining the maximum width and the height for the images in the card component:
const useStyles = makeStyles({ root: { maxWidth: 345, }, media: { height: 140, }, });
Here we‘re destructuring the prop called
data that we passed in through the
Home component in
pages > index.js:
function ProductCard({ data }) { const classes = useStyles(); const { items } = data.products; . . . }
Here we are looping through our items array and creating our product card in which we are passing the image URL, product name, etc., to our Material UI components:
return ( <section className={styles.container}> {items.map((item) => { const imgUrl = item.assets[0].source; return ( <div key={item.slug}> <Card className={classes.root}> <CardActionArea> <CardMedia className={classes.media} image={imgUrl} title={item.slug} /> <CardContent> <Typography gutterBottom {item.name} </Typography> <Typography variant='body2' color='textSecondary' component='p' > {item.description} </Typography> </CardContent> </CardActionArea> <CardActions> <Button variant='outlined' color='primary'> Add To Cart </Button> <Button variant='outlined' color='secondary'> Details </Button> </CardActions> </Card> </div> ); })} </section> ); export default ProductCard;
Now, let’s center our product card and make it responsive. In the
styles folder, create a file called
ProductCard.module.css and add the following:
styles > ProductCard.module.css .container { display: grid; grid-template-columns: repeat(auto-fit, minmax(300px, 1fr)); grid-gap: 1rem; }
If you save it, it should now look like this:
Building our shopping cart/checkout process
A cool thing about GraphQL is that it allows us to not only query our data but also update our data. We can do this with something called mutations. In Apollo Client, we’re going to use the
useMutation hook to update our data and get the loading state, error, and success state.
To use it, we provide the
useMutation hook a GraphQL query and in turn, it returns two of the following:
- A mutate function that you can call to execute the mutation
- An object that reflects the current status of the mutation execution
If we head back to and click on
DOCS >
Mutations, you can see all the available mutations you can use:
To start, I’m going to take a look at the
addItemToOrder mutation to start building out our checkout process.
In the GraphQL playground, open a new tab and paste in the following mutation:
mutation { addItemToOrder(productVariantId: 3 quantity: 1) { lines { productVariant { id name } unitPrice quantity totalPrice } } }
This will add a product to being an active order. If we query for
activeOrder, we should see the recently added product:
{ activeOrder { // Details you want to see } }
Now, let’s take a look at shipping and how we can use the
setOrderShippingAddress mutation:
fragment ActiveShippingOrder on Order { id createdAt updatedAt code state active } mutation AddAddressToOrder($input: CreateAddressInput!) { setOrderShippingAddress(input: $input) { ...ActiveShippingOrder } }
A fragment is like a function, it’s reusable logic. In
ActiveShippingOrder, we’re defining what data we want back when we invoke the
setOrderShippingAddress function. In the
AddAddressToOrder mutation, we’re creating a query variable called
input and passing that to our
setOrderShippingAddress function. Then, in the query variables tab, I’m only passing in an object with the two required parameters. The input parameter only requires a street address and a country code. To check out the full list of parameters you view them at
DOCS > setOrderShippingAddress > input.
To see the results, you need to add a product to the active order state. To do this, you can open up a new tab in your GraphQL playground and write the
addItemToOrder mutation then head back to your
setOrderShippingAddress mutation and run it and you should see the following:
If you didn’t add a product to the active order state, then when you try to run your
setOrderShippingAddress mutation, it will return null.
Lastly, we’ll take a look at payment. For this, we will use the
addPaymentToOrder mutation:
mutation { addPaymentToOrder(method: "", metadata: {}) }
Note:
method:— this field should correspond to the
codeproperty of a
PaymentMethodHandler
metadata:— this field should contain arbitrary data passed to the specified
PaymentMethodHandler‘s
createPayment()method as the “metadata” argument. For example, it could contain an ID for the payment and other data generated by the payment provider
In Next.js, we can use the
useMutation hook to apply our mutations. For example, let’s try adding a product. First, we define the mutation
gql:
const ORDER_FRAGMENT = gql` fragment ActiveOrder on Order { id code state total currencyCode lines { id productVariant { id name currencyCode } unitPriceWithTax quantity totalPrice featuredAsset { id preview } } } `; const ADD_TO_CART = gql` mutation AddItemToOrder($productVariantId: ID!, $quantity: Int!) { addItemToOrder(productVariantId: $productVariantId, quantity: $quantity) { ...ActiveOrder } } ${ORDER_FRAGMENT} `;
Similar to the GraphQL playground, we’re using query variables to pass that into our
addItemToOrder mutation and using the
ActiveOrder fragment to define what data we want back.
In addition, we also need a query to get all the active orders so let’s define that:
const GET_ACTIVE_ORDER = gql` { activeOrder { ...ActiveOrder } } ${ORDER_FRAGMENT} `;
Now, we pass both
gql to our
useMutation hook:
const [addItemToOrder] = useMutation(ADD_TO_CART, { update: (cache, mutationResult) => { const { activeOrder } = cache.readQuery({ query: GET_ACTIVE_ORDER, }); cache.writeQuery({ query: GET_ACTIVE_ORDER, data: { activeOrder: mutationResult.data.addItemToOrder, }, }); }, });
We want to be able to update our query so when we try to query the
GET_ACTIVE_ORDER, it won’t return null. To do this, you can provide a second argument to the
useMutation hook. The
update property accepts a method with two parameters to get the cache and the results. We’re going to use a method on the cache called
writeQuery that will add our results from
mutationResults to our
GET_ACTIVE_ORDER query.
Now, we can call the
addItemToOrder function on a
onClick handler. This could be used when a user clicks on our “add to cart” button. It will move our selected product to the active state and then we can query for all active orders.
To add a shipping address, it’s similar to the
addItemToOrder process in which you pass the
gql to the
useMutation hook and then to the
onSubmit handler. Then, you can pass the data from your form to your mutation function.
To add payment, similar to
addItemToOrder process, you would use the
addPaymentToOrder mutation and depending on how you want to handle payments, i.e if you want to use Stripe or another service, then you would configure that in the method parameter.
How to deploy
Essentially, a Vendure application is simply just a Node.js application and it can be deployed anywhere Node.js is supported.
The docs recommend this article if you want to run it on the server and use nginx as a reverse proxy to direct requests to your Vendure application.
Conclusion
In conclusion, Vendure offers another perspective on being the next modern headless GraphQL-based ecommerce framework. It’s decoupled and flexible which means it chooses to focus on developer productivity and ease of customization. Although it’s still in beta, Vendure offers a wide range of features out of the box such as guest checkout, built-in admin interface, integrations with payment and shipping providers, and much more. A perfect choice for any small business that needs to get an ecommerce site up and.
2 Replies to “Getting started with Vendure”
Hi Natalie, thanks for this details blog, i’m trying to get this working on my local dev-env, and it works well until the step where you describe the ProductCard.js, i must be doing wrong because I never see the product on screen, else than in the console, and since i get no error in the terminal, i tried some “things” but with no result, can you please give me the exact contents of the ProductCard.js file, so I get it work and can go over the rest of your post.
Thanks in advance
Greeting
Ruud
Hi Ruud,
For the ProductCard.js, you can find it here on the GitHub repo:
For the whole project you can also find it here:
Hope that helps
Best,
Natalie Smith | https://blog.logrocket.com/getting-started-with-vendure/ | CC-MAIN-2020-40 | refinedweb | 2,346 | 53.31 |
BugTraq
Back to list
|
Post reply
[SE-2012-01] Issue 69 details and IBM Java vulnerabilities
Oct 16 2013 12:11PM
Security Explorations (contact security-explorations com)
Hello All,
The CPU released yesterday (Oct 15, 2013) by Oracle included information
about a fix for Java SE 7 vulnerability (Issue 69) that was reported to
the company in July.
Issue 69 allows to conduct a very classic attack against Java VM - the so
called class spoofing attack. To quote the paper from 2002 [1] (5.2 Class
Loader attack / class spoofing paragraph):
"Protection of Class Loader objects is one of the key aspects of the Java
Virtual Machine security. This is due to the role Class Loaders play in
the process of class loading and dynamic linking. Class Loaders are
primarily
responsible for providing JVM with classes? definitions. When doing this,
Class Loaders always make sure that a given class file is loaded into Java
Runtime only once by a given Class Loader instance. Additionally, they make
sure that there exists only one and unique class file for a given class
name.
These two requirements are maintained in order to provide proper separation
of namespaces belonging to different Class Loader objects. [...] for each
instance of Class Loader object, separate namespace is maintained. Each such
namespace contains a unique set of classes that were loaded by a given Class
Loader instance. Because of the possibility that two different Class Loader
objects can exist in one JVM, proper maintenance of their namespaces is
critical to the overall JVM security. This is primarily due to the fact that
any overlapping of two different namespaces can easily lead to class
spoofing
and as a result, to type confusion attack."
Issue 69 allows to violate the security constraints imposed on Class Loaders
that guard their namespaces. This is due to new Reflection API and the
way it
was implemented at the core VM level.
With new Reflection API, Method Handles got introduced to Java as a form of
arbitrary code execution transfer.
Additional quote from same abovementioned paper states the following:
"There exist at least two other theoretical variants, which could be used to
conduct class spoofing attack without implicit use (and overriding) of
the Class
Loader?s loadClass method. Both of these attacks are based upon the idea of
spoofing class definitions at the point in a Java program when code
execution
is transferred from one namespace to the other. In Java, such execution
transfer
can be done with the use of exceptions and virtual methods. In the first
case,
an attack variant known as Princeton Class Loader attack was identified
in the
past. This attack was based upon the fact that exceptions could be thrown in
one namespace and caught in the other. As a result, a definition of a
subclass
of java.lang.Throwable class could be spoofed and confused along different
namespaces. In the second variant of the class spoofing attack, an arbitrary
hierarchy of classes is created. This hierarchy contains the classes
that come
from different namespaces and that define the same virtual method. Upon the
invocation of the virtual method done from one namespace, a call to its
overridden
instance in the class defined in the other namespace could be
theoretically done.
Consequently, some arbitrary types of the method?s arguments could be
confused
as they could be defined differently in different namespaces."
Our class spoofing attack relies on the possibility to transfer code
execution
from one Class Loader namespace to the other one by the means of Method
Handles.
The transfer is done across a method signature that has a different
definition
for a given named type in both Class Loader namespaces. Thus, class
spoofing.
In normal circumstances, presence of conflicting class names (spoofed
classes)
in a method signature should be caught by Java VM. This was not the case
for new
Reflection API and Method Handle based calls done across Class Loader
namespaces.
Actual details and a Proof of Concept code illustrating the described
vulnerability
and class spoofing attack are available at the following address:
Due to the fact that in Sep 2013 Oracle backported (from JDK 8)
implementation of
the affected component to JDK 7 Update 40, the POC code will only work
on Java SE
7 Update 25 and below.
---
As for other things, we would also like to report that a new
vulnerability notice
was sent to IBM today. It included information and Proof of Concept
codes for two
new complete Java sandbox escape vulnerabilities affecting IBM SDK, Java
Technology
Edition, Version 7.0 SR5 (Linux 32-bit x86 build pxi3270sr5-20130619_01
SR5 tested).
Apart from that we also pointed out to IBM that one of the issues
originally reported
to the company in Sep 2012 has not been fixed properly. The patch for it
(the second
attempt to address it) can be still successfully bypassed. As a result,
complete Java
security sandbox escape can be gained in the environment of vulnerable
IBM Java SDK.
Thank you.
Best Regards
Adam Gowdiak
---------------------------------------------
Security Explorations
"We bring security research to the new level"
---------------------------------------------
References:
[1] Java and Java VM security vulnerabilities and their exploitation
techniques,
Last Stage of Delirium Research Group,
[ reply ]
Privacy Statement | http://www.securityfocus.com/archive/1/529245/30/360/threaded | CC-MAIN-2014-10 | refinedweb | 867 | 50.77 |
Happy.
As Team Inafune, we would like to proudly present Soul Sacrifice for all of the true gamers in the world to enjoy.
1
+ Silent_Gig on December 21st, 2012 at 7:04 am said:
Thank you Soul Sacrifice team, i tried the Japanese Demo this week and it was brilliant. Cant wait for the game to come to the west!
2
+ Xer0Signal on December 21st, 2012 at 7:10 am said:
This game looks so good. Can’t wait!
3
+ wolverine81 on December 21st, 2012 at 7:10 am said:
I am eagerly anticipating Soul Sacrifice! Happy Holidays!
4
+ su43berkut17 on December 21st, 2012 at 7:12 am said:
I so waiting for this game, when will a demo be available in the north american psn? ^.^/
5
+ subadictos on December 21st, 2012 at 7:13 am said:
Greetings, mr. Inafune and all your team!!!
Why don’t you put the great demo of your game available for us too as a christmas gift!!!??? XD
I’m reallywaiting for you awesome game… day one buy for me. :)
6
+ Nates4Christ on December 21st, 2012 at 7:13 am said:
Yeah definitely getting a happy holidays and not a Merry Christmas from a team making a game called soul sacrifice.
7
+ madmaxx350 on December 21st, 2012 at 7:25 am said:
Do we get an english demo too, I’ve been playing the japanese demo and it’s soooo good. Worldwide release please!!!!
8
+ Dustinwp on December 21st, 2012 at 7:27 am said:
Hey, Inafune-san think you could send me a copy of Soul Sacrifice early for Christmas? I’m excited for the release of the game! Happy Holidays!
9
+ RenderMonk on December 21st, 2012 at 7:34 am said:
.
10
+ PsychoticUsagi on December 21st, 2012 at 7:36 am said:.~
11
+ Elvick_ on December 21st, 2012 at 7:44 am said:
@6: Christmas is a Holiday. Therefore included. People these days.
12
+ Elvick_ on December 21st, 2012 at 7:46 am said:
Anyhoo, the demo looks awesome, watching videos on it. Looks hefty as well.
Really looking forward to the game. Happy Holidays to everyone involved with making the game.
13
+ RdgMenezes on December 21st, 2012 at 7:47 am said:
Thank you Soul Sacrifice team, definitely a day one purchase for me next year too. Cant wait for this game.. :)
14
+ TXCScorpion on December 21st, 2012 at 7:54 am said:
Can’t wait to play, another day one purchase here. Thanks for bringing it over!
15
+ cool_trainer on December 21st, 2012 at 8:01 am said:
Can’t wait for this one, Happy Holidays!
16
+ MakoSOLIDER on December 21st, 2012 at 8:06 am said:
Can’t wait! Whats the framerate btw?
17
+ IllustratedDEO on December 21st, 2012 at 8:07 am said:.
18
+ gus_xl on December 21st, 2012 at 8:08 am said:
This game is my worst nightmare come to life. It takes me 20 minutes just to pick a NAME for a character…. I’ll never make it though all of the decisions..
19
+ Tomoyo-MKII on December 21st, 2012 at 8:12 am said:
Any Plans for U.S Demo?
20
+ DZORMAGEN on December 21st, 2012 at 8:18 am said:
LBP Cross Controller STILL NOT WORKING!!! I WANT MY MONEY BACK SONY!!!! Keep fooling your customers!! THIS IS THE LAST TIME I BUY ANYTHING FROM YOU GUYS!!!
21
+ AkelisRain on December 21st, 2012 at 8:28 am said:
QQ more DZORMAGEN!
22
+ TheGrimHeaper on December 21st, 2012 at 8:36 am said:.
23
+ Ryumoau on December 21st, 2012 at 9:00 am said:
i hope this game has a demo. Don’t want to be disappointed like i was with Ragnorak.
24
+ ItsIntegrity on December 21st, 2012 at 9:16 am said:.
25
+ JdyVexes on December 21st, 2012 at 9:30 am said:
Why can’t March come faster >.<
26
+ Christian399 on December 21st, 2012 at 9:37 am said:
Can’t wait! Now how about a REAL XMAS present and the same demo that our lucky Japanese gamer cousins in the East got? Please?
Do we have a release date on this yet btw?
27
+ evil_raffaello on December 21st, 2012 at 9:44 am said:
Demo for us westerners Master Inafune, please!!
28
+ Chidori_93 on December 21st, 2012 at 10:01 am said:
Please, Please ,Please
Bring the Limited edition Vita Bundle to US
that Red/Black Vita Looks Gorgeous.
import is too expensive
i don’t mind pay 350$ to 400$
But Please give it a chance in the US.
29
+ IzoGray on December 21st, 2012 at 10:13 am said:
Happy Holidays!
My most anticipated Vita game- next to Gravity Rush 2!
Will be a day 1 purchase.
30
+ zzamaro on December 21st, 2012 at 10:13 am said:
Happy holidays Mr. Inafune and everyone.
I might try it, but I have to check it out first ;)
31
+ thunderbear on December 21st, 2012 at 10:48 am said:
I had to sell my Vita but when this comes out I’ll definitely buy one again!
32
+ andremal on December 21st, 2012 at 10:52 am said:
Thank you for the Holiday wishes. I’m really looking forward to trying out your Demo when it hits. Everything I’ve seen so far is phenomenal.
33
+ jimmyfoxhound on December 21st, 2012 at 10:52 am said:
Happy Holidays Mr. Inafune!! I’ll def be picking up a copy of this game when it comes out! Thank you for brining it to the States!
34
+ EternityInGaming on December 21st, 2012 at 11:04 am said:
Thanks happy holidays too:) Most anticipated Vita game for sure! can’t wait to play online all my friends want it too lol
35
+ ragincajun7712 on December 21st, 2012 at 11:17 am said:
Thanks Team Inafune, and Merry Christmas to you all as well. I have to say that I was hoping for a surprise demo for North America as a Christmas gift, but i guess good things come to those who wait.
36
+ reson8er on December 21st, 2012 at 11:47 am said:
Can someone point me to a guide to d/l the demo on my Vita from the Japanese Store?
Thanks!
37
+ Riptide8 on December 21st, 2012 at 11:48 am said:
Show me the goods, give me a demo, Ill thank the pictures and kudos after the fact.Without the gods its hype hype hype.
Merry Christmas anyways… In Japan?
38
+ reson8er on December 21st, 2012 at 11:49 am said:
Forgot to mention,
To Mr. Inafune and the whole Comcept team, Merry Christmas!!
39
+ BlueBl1zzard on December 21st, 2012 at 11:59 am said:
Happy Holidays Comcept! Can’t wait for this game, definitely one of my Most anticipated games on any console in 2013.
40
+ Bruno_Helghast on December 21st, 2012 at 12:07 pm said:
I bought a PS Vita ONLY because of this game! This is gonna be epic!
41
+ xClayMeow on December 21st, 2012 at 12:08 pm said:
This is the game I’ve wanted most since even before I bought my Vita.
Please let us know the U.S. release date and when we can expect the demo!
42
+ Rainwater on December 21st, 2012 at 12:26 pm said:
Really looking forward to this one. Will there be a demo for the west?
43
+ polo155 on December 21st, 2012 at 1:09 pm said:
Dzormagen is an idiot; speak for yourself dude, IM LOVING LBP2 CrossController! learn how to set it up.
HAPPY HOLIDAY KEIJI AND TSST
44
+ MmaFanQc on December 21st, 2012 at 1:19 pm said:
im getting the game in a heartbeat.
45
+ Pendharker on December 21st, 2012 at 1:27 pm said:
I so very much can’t wait for this game!! It’s like DMC + Monster Hunter! You may have already answered this, but will there be a demo? If so, when? I’m SUPER stoked to try this game out.
46
+ pOcHo_OX on December 21st, 2012 at 2:03 pm said:
Happy Holidays Mr Inafune and Team Inafune, I’m really lloking for Soul sacrifice.
47
+ Aeryn_James on December 21st, 2012 at 2:33 pm said:
Looking forward to this!
48
+ Aeryn_James on December 21st, 2012 at 2:34 pm said:
Hurry :p
49
+ epanjoj416 on December 21st, 2012 at 3:49 pm said:
i have a very good amount of faith, this game will be spectacular. The more love a developer gives to a project, the more it shows
50
+ JeanLucAwesome on December 21st, 2012 at 4:07 pm said:
Happy Holidays to you guys too! I’m megally stoked to play Soul Sacrifice. | http://blog.us.playstation.com/2012/12/21/happy-holidays-from-keiji-inafune-and-the-soul-sacrifice-team/ | CC-MAIN-2015-06 | refinedweb | 1,444 | 73.68 |
Or how we can use ASP.NET in a real AJAX flavor and live happily.
The UpdatePanel represents Microsoft's main interpretation of AJAX techniques. The UpdatePanel's indubitable merit is that it brings the magic of AJAX into the everyday programming work of any ASP.NET programmer. But now, three years after it was born, the UpdatePanel shows all its limits, even to me, a trusty and loyal Microsoft programmer.
Let's see why the UpdatePanel is starting to stink.
The UpdatePanel is an ASP.NET container control.
When certain events occur, it communicates with the server without a full page postback. The communication consists of sending the event name, control name, arguments, page control values, cookies, and the entire viewstate to the server.
The server replies with a full set of markup that has to be injected into the client web page.
The disadvantages of this approach are:
- Every partial postback still sends all the page control values and the entire viewstate to the server, so requests stay heavy.
- The server runs the page life cycle again and replies with a full block of markup, even when only a small part of the page has changed.
- Bandwidth and server processing are wasted compared to exchanging only the data that actually changed.
It's a matter of bravery, fantasy, and no fear of the unknown.
AJAX has been available from JavaScript for several years. JavaScript is able to modify the page DOM, manage CSS dynamically, and use the XMLHttpRequest object. The problem has always been that JavaScript is hard to write, because different browser engines interpret it differently.
Furthermore, JavaScript is hard to debug, and since it is not a statically typed language, it requires a high level of programming discipline and control over the source code.
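The cross-browser problem is easy to see with the XMLHttpRequest object itself. Before frameworks took over, simply obtaining a request object meant probing several implementations (a minimal sketch; older versions of Internet Explorer exposed XHR only through ActiveX):

```javascript
// Probe the host environment for a usable XHR implementation.
// The host object is passed in so the sketch can be exercised
// outside a real browser as well.
function createXhr(global) {
  if (global.XMLHttpRequest) {
    // Modern browsers (and IE7+) expose the native object.
    return new global.XMLHttpRequest();
  }
  if (global.ActiveXObject) {
    // Older IE only offered XHR through ActiveX.
    return new global.ActiveXObject("Microsoft.XMLHTTP");
  }
  throw new Error("AJAX is not supported by this browser");
}
```

jQuery's AJAX functions wrap exactly this kind of probing (and much more), which is why the rest of the article can ignore browser differences.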
Some of these limits today are only bad memories, thanks to jQuery.
jQuery is a great JavaScript framework that guarantees compatibility across a large range of modern browsers. And, jQuery is also our reply to the question how it is possible to work with AJAX in ASP.NET.
Subsequently, I describe my personal interpretation of the problem.
To reach a complete and organic method for organizing web projects and abandoning Microsoft AJAX, but continuing to work with ASP.NET, I was inspired by suggestions and tricks found on the web and personal work experiences.
This article is a full report of what I understand as modern web development.
You can download a simple sample project to better understand the following points.
Let's sharpen our weapons.
In the demo project, you can see that every ASPX page contains the HTML markup, some Web Service methods, and the onLoad server method to initialize controls. In no other case can an ASPX page contain other methods than these.
onLoad
Every page has some JavaScript files to accomplish the required tasks: from server communication to page rendering.
The main page of the demo project is StuffSelection.aspx.
StuffSelection.aspx needs two JavaScript pages: to manage events, to expose functions, and to apply dynamic CSS styles.
I decided to put a js file depending on the ASPX page in the same folder as the page. To preserve the order inside the project, any js file must have the same name as the related ASPX page:
In StuffSelection.js, we firstly need to link to the onchange event on the category selector (a DropDownList control), because we have to register the event on the client side to avoid the server events approach.
onchange
DropDownList
Since ddlCategory is a server ID, first of all, we have to locate the object on the client side.
ddlCategory
We can't use <%= ddlCategory.ClientID %> to resolve the client ID because our JavaScript code doesn't reside on the ASPX page, but we can wrap the object with a client tag and use an appropriate jQuery selector to reach it (in our code, it is a simple span).
<%= ddlCategory.ClientID %>
span
In this way, we can load the handler to the change event in the StuffSelection document.ready function.
StuffSelection document.ready
//File: StuffSelection.aspx
<span id="spanDdlCategory">
Select Stuff category: <asp:DropDownList
</span>
The JavaScript code:
//File: StuffSelection.js
$(document).ready(function () {
var ddlCategory = $('#spanDdlCategory select');
ddlCategory.change(function () {
var categoryValue = $(this).val();
GetStuffList(categoryValue);
});
});
function GetStuffList(categoryValue) {
if (categoryValue == 0) {
$('#divStuffList').html('');
}
else {
// TODO: Get Stuff from server
}
}
The lightness of smart communication
The onChange client event of our DropDownList calls the GetStuffList function. GetStuffList has the task of calling the server to obtain the list of items for that category and - if possible - without making a complete postback. I usually create another JavaScript file to call the server.
onChange
GetStuffList
It's a good thing to physically separate the different functionality areas of the JavaScript logic. Don't hesitate to do that if you use clear naming rules. Within some chapters, we will see how to optimize JavaScript files proliferation.
Our new JavaScript file will be StuffSelection_Proxy.js.
For this kind of helper files, I usually mimic a static class with static methods. It's very easy to obtain this effect with JavaScript.
static
//File: StuffSelection_Proxy.js
function StuffSelection_Proxy() { }
StuffSelection_Proxy.GetStuffListHttpGet =
function (category, successCallback, failureCallback) {
$.ajax({
type: "GET",
contentType: "application/json; charset=utf-8",
url: "StuffSelection.aspx/GetStuffListServiceHttpGet?category=" + category,
success: function (data) { successCallback(data); },
error: function (data) { failureCallback(data); }
});
}
Consequentially, our calling function in the StuffSelection.js file will change in this way:
//File: StuffSelection.js
function GetStuffList(categoryValue) {
if (categoryValue == 0) {
$('#divStuffList').html('');
}
else {
StuffSelection_Proxy.GetStuffListHttpGet(categoryValue, null, null);
}
}
In this way, we will start to request something from a web method called GetStuffListService.
GetStuffListService
Later, we will see how to receive data from the web method, but now let's describe how GetStuffList works.
This 'static' method receives as a parameter the value of the category DropDownList and two callback functions. For now, only the first parameter is useful to communicate to our server method. To do that, we use a kind of data representation which is called JSON.
static
DropDownList
JSON represents data as a value-key pair collection. Since our web method expects a single parameter called "category", we build our JSON parameter from the pair "category" as key and a category value as value.
category
The result will be something like {"category":"1"}, depending on the category selection.
1
We use the JSON.stringify method that guarantees a syntactically correct result from the various grouped elements.
JSON.stringify
JSON is a static library available in almost all browsers, except for older IE (Internet Explorer natively implements JSON since version 8). To work around this problem, I recommend the json2 library that checks the browser for native JSON ability; otherwise, it supplies the application with JSON method support ().
$.ajax is a jQuery method that allows to post a request to the server. By default, the communication is posted in asynchronous mode so that users don't experience page freezing and can post multiple requests to the server without having to wait for any response. $.ajax can notify of a successful response, and a failure one in case of server errors.
$.ajax
The Url option identifies the endpoint for the client request. An endpoint could be a method of a Web Service (.asmx), a web handler (.ashx), or a web method inside a page (.aspx).
Url
$.ajax supports the GET and POST methods, and in the demo project, I included both to show the two different syntaxes. I'll show only the GET call here that is more correct in this case.
//File: StuffSelection.aspx
[WebMethod]
[ScriptMethod(UseHttpGet = true,
ResponseFormat = ResponseFormat.Json, XmlSerializeString = false)]
public static IList<Stuff> GetStuffListServiceHttpGet(int category)
{
return StuffHelper.GetStuffList(category);
}
Let's analyze the result of this communication (a simple post to our web method could be analyzed through Firebug; see chapter "Debug with Firebug").
The complete post weighs less than 1 KBytes because it contains only our simple request with the category parameter, a JSON string as response, and eventually, cookies and the message header. In no way will we receive more data than that, no more markup, no more viewstate. Hooray!
Not so much, don't worry.
Well... we are losing control state.
It seems terrible at first look... how can we work without stateful controls?
In ASP.NET, the control state resides in the same controls and in the viewstate (for non-visible controls). When an UpdatePanel does a partial postback, we get access on the server side to every page control with its consistent state.
This is the reason why we can access the content of textboxes, the selected item of a combobox, and so on even during a partial postback.
But ASP.NET is not a smart guy, because it can't predict what control states we need on the server side, so it posts everything: the visible control values and the entire viewstate that contains a lot of useless information even for a partial postback. Too heavy for us!
Trust me and don't worry. The stateful controls approach was a great comfort in the old style web form architecture, but it is not necessary... no more.
Now we have to collect the server reply.
To do that, we need to implement a success callback function to pass as parameter to our proxy method. Let's see how our js file changes again.
//File: StuffSelection.js
function GetStuffList(categoryValue) {
if (categoryValue == 0) {
$('#divStuffList').html('');
}
else {
StuffSelection_Proxy.GetStuffListHttpGet(categoryValue,
successCallback, failureCallback);
}
}
var successCallback = function (data) {
var response = eval(data.d);
$('#divStuffList').html(response[0].Name + ' ' +
response[0].Description + ' (€ ' + response[0].Price + ')');
}
var failureCallback = function (data) {
alert('Request failure');
}
Firebug helps us again. Put a breakpoint on the first line of the successCallback function to see how the server replies to the request.
successCallback
The "data" parameter contains a string in JSON format that can be evaluated and converted into a JavaScript object.
data
I evaluate data.d because "d" is a container object that boxes any JSON serialization done by ASP.NET. In that way, I have a variable response that contains an object returned by the web method, i.e., List<Stuff> of the requested category.
data.d
d
List<Stuff>
Naturally, a List<T> has to be serialized into objects that JavaScript can understand. ASP.NET serializer converts it into an array of T, and this is enough for our goal.
List<T>
T
...Or how I can completely (and finally) control my grids.
Now we have an array of objects and we need to print it on our web page.
The former method:
$('#divStuffList').html(response[0].Name + ' ' + response[0].Description ...
is naturally a fast way to show the result, but it is not the best way to render objects.
We better use templating.
JavaScript templating is available through independent frameworks or through jQuery plug-ins.
jQuery plug-ins are the hidden treasury of this fantastic framework, because they supply the core framework with tons of graphical effects, services, components, and so on. Think about anything you need in your web page and surely a jQuery plug-in exists to accomplish this task.
Lately, I have been using a jQuery template plug-in called jBind. It's a very good plug-in, but it seems that it hasn't been supported for a long time. So I won't describe this plug-in in detail but the concepts behind templating in JavaScript (waiting for the Microsoft client templating plug-in).
Templating on the client side is exactly the same as templating on the server side. Instead of having Repeaters, GridViews, or other super complex ASP.NET controls, you only have a piece of HTML with some placeholders inside. Usually, template plug-ins link data by binding key names to labels with the same name.
Repeater
GridView
If our data source is an object array, template plug-ins repeat a group of HTML tags enough times to render all objects.
We can decide to put our template inside the same page in which it'll be rendered (making that piece of code invisible), or create tags dynamically from JavaScript.
I prefer the first approach because I can easily view my template for design purposes by unhiding it.
For the same reason, I can draw my template with an editor, using styles and classes, and immediately viewing the resulting graphical rendering.
This is a simple example of templating:
//File: StuffSelection.aspx
<div style="display: none;">
<%-- Templates --%>
<%-- Template for stuff list --%>
<div id="divStuffListTemplate">
<div>
<table class="tableList">
<tr>
<td>Name</td>
<td>Code</td>
<td>Description</td>
<td>Price</td>
<td>Is available</td>
</tr>
<!--data-->
<tr>
<td>{Name}</td>
<td>{Code}</td>
<td>{Description}</td>
<td>€{Price}</td>
<td>{IsAvailable}</td>
</tr>
<!--data-->
</table>
</div>
</div>
</div>
The external div is useful to hide/unhide the entire block. The most interesting lines are the ones that contain placeholders that have to be linked to the array object.
div
From a JavaScript perspective, we only have to activate our plug-in according to its syntax. In my case, the code is this:
//File: StuffSelection.js
var successCallback = function(data){
var response = eval(data.d);
//$('#divStuffList').html(response[0].Name + ' ' +
// response[0].Description + ' (€ ' + response[0].Price + ')');
$('#divStuffList').html('');
var template = $('#divStuffListTemplate').html();
$(template).bindTo(response, { fill: true, appendTo: '#divStuffList' });
}
Since client templating could be directly applied to the HTML code. It is super easy to add classes, decorations, styles, numbering, alternating background colors, and so on. And without doubt, theming HTML is easier and more flexible than skinning a GridView or whatever server control.
Well... it's difficult in ASP.NET too.
Indeed it's not difficult with a bit of JavaScript, and it makes possible to happily forget that our server approach is stateless (as promised before).
We have a page to register a new user (SignupUser.aspx), and we need to pass a lot of data to our web method: user name, first name, surname, password, and whatever else.
It is possible to implement a web method with several arguments, but our intention is to build a clear and maintainable structure. In other words, we want to pass a complex object to the server.
With the ASP.NET serialization process (and deserialization too), this is done automatically.
In our JavaScript, we define a simple pseudo class named UserJs.
UserJs
//File: SignupUser_User.js
function UserJs() {
this.FirstName = '';
this.Surname = '';
this.Username = '';
this.Password = '';
}
We can do the same on the server:
//File Ecommerce.Business.UserJs
public class UserJs
{
public string FirstName { get; set; }
public string Surname { get; set; }
public string Username { get; set; }
public string Password { get; set; }
}
This is all you need to exchange data between the client and server.
If our web method RegisterUserService in the SignupUser.aspx page expects an Ecommerce.Business.UserJs parameter, we have to pass a serialized object that has the same name, same fields (same field names), and same type (if automatic conversion is not possible) from JavaScript. That's all.
RegisterUserService
Ecommerce.Business.UserJs
//File: SignupUser.js
var user = new UserJs();
user.FirstName = $('#firstName').val();
user.Surname = $('#surname').val();
user.Username = $('#userName').val();
user.Password = $('#password').val();
In the sample project, let's try to register a user with Visual Studio in Debug mode to check the deserialized UserJs instance in the web method.
Same question but different direction
We can't tie ourselves to call specific web methods during page load to supply our client controls with their initial values. Imagine that you have several controls to feed and you don't want to massively use ASP.NET server controls.
There is a simple but very smart workaround to supply the client with values available on the server side.
Don't think of injecting JavScript variables from the server with the RegisterClientScriptBlock or RegisterStartupScript functions. We are trying to be clean and elegant in our new programming way.
RegisterClientScriptBlock
RegisterStartupScript
Instead, have a look at this web page:.
This Hawaiian guy has written a beautiful class that allows to load a collection of key - value objects on the server and make it available on the client side in a sort of hashtable.
Let's look at GetListFromServerVars.aspx in our demo project.
This page loads the entire Category collection (category name - category value) through the AddClientVariables method supplied by BasePage.
AddClientVariables
BasePage
Then, GetListFromServerVars.js reads all the values and prints them on the page.
//File: GetListFromServerVars.aspx
foreach(Category category in categoryList)
{
base.AddClientVariable(category.Key, category.Value);
}
//File: GetListFromServerVars.js
var html = 'Computer: ' + serverVars["Computer"];
html += '<br/>Monitor: ' + serverVars["Monitor"];
html += '<br/>Keyboard: ' + serverVars["Keyboard"];
The zen garden of optimization
Until now, we haven't taken care of the proliferation of JavaScript files. I appreciate a development method that splits the source code in several logic units and therefore in several files. It's a matter of organization, order, and maintainability.
But there is a disadvantage in that. If our web server has to distribute a lot of small files instead of a bigger one, we are losing bandwidth and performance.
But there is a simple workaround: the minimization and merge process.
Our project has a lot of JavaScript files per each ASPX page. In the JavaScript folder, we have some jQuery plug-in.
In the sample project, there is a folder "lib" that contains a program called jsmin.exe (see). Jsmin minimizes and merges JavaScript files.
Now go to the EcommerceWebSite project and ask for its properties. In the Build Events tab, I inserted a post-build event command.
EcommerceWebSite
type "$(ProjectDir)js\jquery.*.js" | "$(ProjectDir)..\..\lib\jsmin" >
"$(ProjectDir)js\min\Plugins.min.js" %copyright%
ECHO Minify and merge aspx js
type "$(ProjectDir)SignupUser*.js" | "$(ProjectDir)..\..\lib\jsmin" >
"$(ProjectDir)js\min\SignupUser.min.js" %copyright%
type "$(ProjectDir)StuffSelection*.js" | "$(ProjectDir)..\..\lib\jsmin" >
"$(ProjectDir)js\min\StuffSelection.min.js" %copyright%
type "$(ProjectDir)GetListFromServerVars*.js" | "$(ProjectDir)..\..\lib\jsmin" >
"$(ProjectDir)js\min\GetListFromServerVars.min.js"
The post-build command takes care of supplying all jQuery plug-ins found in the JavaScript folder to jsmin.exe. Jsmin creates a single JavaScript file that is copied into the folder js/min.
In the same way, post-build commands must be configured for any ASPX page that needs JavaScript files.
Every ASPX page will produce a minimized JavaScript file. For example, for the StuffSelection.aspx page, a single JavaScript file called StuffSelection.min.js will be produced.
At this point, a big problem is born: how can we use the minimized JavaScript files preserving the ability to debug client script? Minimized and merged code is not understandable, even by a good programmer. So we need to maintain the ability to have both: minimized and not minimized files.
The solution is a simple and brilliant idea found on various web sites. JavaScript files linked from ASPX depend on a parameter in our configuration.
When in the web.config file, we set Debug mode (<compilation debug="true"/>), we link to the original JavaScript files. And when we set Release mode (<compilation debug="false"/>), we link to the minimized files.
<compilation debug="true"/>
<compilation debug="false"/>
Consequentially, our ASPX files change in this way:
//File: StuffSelection.aspx
<% if (HttpContext.Current.IsDebuggingEnabled) { %>
<script src="StuffSelection.js" type="text/javascript"></script>
<script src="StuffSelection_Proxy.js" type="text/javascript"></script>
<% } else { %>
<script src="js/min/StuffSelection.min.js" type="text/javascript"></script>
<% } %>
In that way, we can have source or minimized files by changing a simple property in the web.config.
Naturally, if we want to have the ability to debug client scripts even in production environment, we have to publish both the source JavaScript and the minimized JavaScript.
When a good feature could cause trouble
The approach to web programming that I'm presenting here has various consequences. One of them is that the entire UI logic and some pieces of business logic are moved from the server side to the client side. In other words, it means that you have to write tons of JavaScript code.
I consider this fact as an excellent pro with some little cons. One disadvantage is that we are forced to deeply know the various browsers that become important tools of our developing techniques.
jQuery helps a lot in hiding most peculiarities of different browser engines, but the game is not over.
Let's see a simple example.
Imagine that during a quiet and boring Friday afternoon, your boss comes into your office with the usual request: "Can you change the message that appears when I push the button in that web application...". Bosses are special for making this kind of requests on Friday afternoons.
The message is naturally cabled into a JavaScript file.
"Ok" - you think - "the task is very easy this time and I shouldn't stay here until late".
You change your local JavaScript file, test it, commit it (you have a source control system haven't you?), and copy it to the test and production environments. Everything in five minutes and without recompiling the entire project.
Now you can proudly go to your boss telling him: "It's ready".
Instead of being amazed about your super-speed execution, your beloved boss goes to the web site, pushes that button, and... discovers that the message has not changed.
"Damn cache" you think.
Anyway the solution is very easy: press CTRL + F5 and the browser refreshes its cache, the JavaScript file is correctly reloaded, and everything goes well.... but we can't force our users to press CTRL + F5 to be sure to have the new file. And your boss is not so happy about that...
The problem is that the browsers are very aggressive in managing their caches.
If you watch what happens during a page request, you could be surprised. For monitoring requests to a server, you can use, as usual, Firebug, or another beautiful free tool called Fiddler.
Both tools could monitor every page request and split it into its elementary parts.
You'll see that for every requested resource, the browser can:
I don't know what the rules are that every browser follows to choose the right option, but I surely know that, in some cases, some browsers dramatically fail this task.
The possible solution could be playing with headers to instruct the browsers to cache or not some resources, but this method is quite unpredictable too.
So we have two easier and safer methods to solve this situation:
This is the method used by jQuery developers (jQuery 1.4.1 is a different resource than jQuery 1.4.2, and your browser is forced to download it). But this method is difficult to implement because when you change a resource name, you'll have to change any reference to that resource too. It's a manual task, prone to errors and lapses, and for these reasons, we don't like it very much. It's suitable for a JavaScript component or framework files, but not for application files.
code.js?version=1 is different from code.js?version=2, and your browser surely asks for it even if it has cached the file code.js.
In this way, you can centralize the management of the "version" variable, for example, by putting it into the application web.config, and change that value after every modification of every JavaScript file.
In the ASPX file you can write something like:
<script src="code.js?version=<%= Config.Version %>" type="text/Javascript">
</script>
to update any reference in all your project in one step.
I liked this second method very much when I found it on Internet, but it was not perfect. When I change a simple message in one single JavaScript file, I force the browsers to download all JavaScript files in all the application web pages because the version value changes for every file.
To avoid this waste, I simply write down a method that I called ResolveAndVersionUrl that resolves the URL and adds to the resolved resource name a value that represents the file hash.
ResolveAndVersionUrl
In this way, only the resources effectively changed are the ones that change their querystring.
Well... I'm not crazy! I don't make my application calculate a single hash of any JavaScript resource at every single page request. For this task, the ASP.NET cache becomes very useful. You can see a simplified example of my algorithm in Controls/BasePage.cs, and a typical use in the CacheJs.aspx page.
In this example, the cache survives for one day, and it depends on a file on the file system (when I change a JavaScript file, I have to remember to delete the depending file to make the application recalculate all the file hashes).
Since any JavaScript resource is requested with GET, the browser is forced to ask for it to the server, and only if the resource has not changed, take it from its cache.
A request that receives a status 304 weights a bit, but since we have merged all page JavaScript resources in one single file, the total traffic weight is acceptable.
...our best friend.
Firebug is a fantastic Firefox plug in.
One of its best functions is debugging a client script through several tools:
Since version 8, Internet Explorer has its developer console which is powerful too. But old habits die hard, so my best friend remains Firebug.
Firebug allows to monitor every post to a server, even the ones that originate from the client script.
In this way, you can monitor the parameters passed to web methods and the data obtained as a reply.
I recommend installing Firecookie together with Firebug to monitor, manage, and edit cookies.
With this instrumentation, your tool belt is quite complete for any task regarding client debugging.
The icing on the cake
Yes, it is possible to organize your JavaScript code with regions, now that it is beginning to grow more and more after this article.
You simply have to include the following macro code in Visual Studio:
Option Explicit On
Option Strict On
Imports System
Imports EnvDTE
Imports EnvDTE80
Imports EnvDTE90
Imports System.Diagnostics
Imports System.Collections.Generic
Imports System.Text.RegularExpressions
Public Module JsMacros
Sub OutlineRegions()
Dim selection As EnvDTE.TextSelection = _
CType(DTE.ActiveDocument.Selection, EnvDTE.TextSelection)
Const REGION_START As String = "//\s*?\#region"
Const REGION_END As String = "//\s*?\#endregion"
selection.SelectAll()
Dim text As String = selection.Text
selection.StartOfDocument(True)
Dim startIndex As Integer
Dim endIndex As Integer
Dim lastIndex As Integer = 0
Dim startRegions As New Stack(Of Integer)
Dim rStart As New Regex(REGION_START, RegexOptions.Compiled)
Dim rEnd As New Regex(REGION_END, RegexOptions.Compiled)
Do
Dim matchStart As Match = rStart.Match(text, lastIndex)
Dim matchEnd As Match = rEnd.Match(text, lastIndex)
startIndex = matchStart.Index
endIndex = matchEnd.Index
If startIndex + endIndex = 0 Then
Return
End If
If matchStart.Success AndAlso startIndex < endIndex Then
startRegions.Push(startIndex)
lastIndex = startIndex + 1
Else
' Outline region ...
Dim tempStartIndex As Integer = CInt(startRegions.Pop())
selection.MoveToLineAndOffset(CalcLineNumber(text, _
tempStartIndex), CalcLineOffset(text, tempStartIndex))
selection.MoveToLineAndOffset_
(CalcLineNumber(text, endIndex) + 1, 1, True)
selection.OutlineSection()
lastIndex = endIndex + 1
End If
Loop
selection.StartOfDocument()
End Sub
Private Function CalcLineNumber(ByVal text As String, _
ByVal index As Integer) As Integer
Dim lineNumber As Integer = 1
Dim i As Integer = 0
While i < index
If text.Chars(i) = vbLf Then
lineNumber += 1
i += 1
End If
If text.Chars(i) = vbCr Then
lineNumber += 1
i += 1
If text.Chars(i) = vbLf Then
i += 1 'Swallow the next vbLf
End If
End If
i += 1
End While
Return lineNumber
End Function
Private Function CalcLineOffset(ByVal text As String, ByVal index As Integer) _
As Integer
Dim offset As Integer = 1
Dim i As Integer = index - 1
'Count backwards from //#region to the previous line counting the white spaces
Dim whiteSpaces = 1
While i >= 0
Dim chr As Char = text.Chars(i)
If chr = vbCr Or chr = vbLf Then
whiteSpaces = offset
Exit While
End If
i -= 1
offset += 1
End While
'Count forwards from //#region to the end of the region line
i = index
offset = 0
Do
Dim chr As Char = text.Chars(i)
If chr = vbCr Or chr = vbLf Then
Return whiteSpaces + offset
End If
offset += 1
i += 1
Loop
Return whiteSpaces
End Function
End Module
Now link your macro code with a custom shortcut, and here we are: regions are activated every time your shortcut is pressed.
You can find the second part of this. | http://www.codeproject.com/Articles/95525/ASP-NET-and-jQuery-to-the-Max?fid=1580126&df=90&mpp=10&sort=Position&spc=None&tid=4107598 | CC-MAIN-2017-17 | refinedweb | 4,664 | 57.06 |
An array is a series of elements of the same type placed in contiguous memory locations that can be individually referenced by adding an index to a unique identifier. To use an array in C++, you'll need to declare it first, for example,
int arr[10];
This declares an array of type int of size 10. This can store 10 integers in contiguous memory. To Refer to any of its element, you need to use the array access operator and provide it the index of the element you want to access. The indexing in C++ array start from 0. So in the array arr, we have 10 elements with indices 0, 1, 2, ... 9. To access the third element, ie, the element at index 2, you'd write: arr[2].
You can access all the elements in a loop like −
#include<iostream> using namespace std; int main() { int arr[10]; // Create a loop that starts from 0 and goes to 9 for(int i = 0; i < 10; i++) { cin >> arr[i]; // Input the ith element } // Print the elements you took as input for(int i = 0; i < 10; i++) { cout << arr[i] << endl; // Input the ith element } }
This will give the output −
1 5 -6 45 12 9 -45 12 3 115 | https://www.tutorialspoint.com/How-do-I-use-arrays-in-Cplusplus | CC-MAIN-2021-10 | refinedweb | 213 | 62.61 |
06 July 2012 11:10 [Source: ICIS news]
LONDON (ICIS)--?xml:namespace>
A decision from the ministry on whether to sell its 50.67% ZAP stake to the Polish synthetic rubber producer would be issued prior to the 20 July expiry date of the first phase of Synthos' zlotych (Zl) 1.96bn ($576.4m, €464.5m) bid.
In the first phase, running from 9–20 July, Synthos is offering Zl102.5 per share, while in the second phase, which expires on 7 August, Zl98.85 per share is on offer (this figure was increased from Zl98.77 per share on 3 July).
In its official response to the offer, ZAP's management said the takeover could seriously hinder the company's growth strategy.
The offer price of Zl102.5 per share does not reflect the positive impact of the completed and planned investments at the nitrogen fertilizer, caprolactam (capro) and melamine producer, it added.
“Any acquisition of control of ZAP by the bidder may jeopardise the current development strategy and adversely affect the value of the group," the management board stated in the response, advising shareholders not to accept the bid.
Synthos has said it believes its offer certainly does meet the fair value criterion and that it would not be tempted into buying ZAP “at any price”.
The strategy of Synthos would be to build up a strong domestic player in the chemical sector with bolstered profitability and an expanded product portfolio, including polyamide 6 (nylon 6), according to Synthos CEO Tomasz Kalwat.
“All in all, we believe that Synthos' acquisition synergies will require some time to be unlocked and, as its CEO has stated, will be evolutionary and not revolutionary,” investment bank Wood & Company said in its latest assessment of the bid.
($1 = Zl3.40, €1 = Zl | http://www.icis.com/Articles/2012/07/06/9575884/zap-management-advises-polish-treasury-not-to-accept-synthos.html | CC-MAIN-2015-18 | refinedweb | 299 | 60.75 |
Is it possible to have a markov chain with an infinite number of transient states and an infinite number of positive recurrent states?
Printable View
Is it possible to have a markov chain with an infinite number of transient states and an infinite number of positive recurrent states?
Hello,
Let
(integers) be the space of sets.(integers) be the space of sets.
Perhaps you can define the probability transitions :
andand
??
Thanks for the help.
I can see where the infinite transient states come from but how do you know it gives an infiinte number of positive recurrent states? I don't really understand the definition I have of positive recurrent.....
a recurrent state i is said to be positive recurrent if the expected time of 1st return (starting from i) is finite
(1) i is transient if there is some state j with
P(i-->j)>0 (it's possible to get to j)
P(j-->i)=0 (it's impossible to get back)
(2) i is positive recurrent if for all j
P(i-->j)>0 (it's possible to get to j)
P(j-->i)>0 (it's ALWAYS possible to get back)
(3) If it's always possible to get back, then E(time of first return to i) < oo.
Don't get caught up with expectation, just think of it in terms of (2).
Suppose for instance P(T = n) ~ c/n^2, where c is some constant; then when you take the expectation you get E(T) = sum over n of n*(c/n^2) = c * sum over n of 1/n,
which is not finite. However here you will get back in finite time a.s.
The intuition is that you will get there eventually, just in an arbitrarily large time.
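One way to see the difference between (2) and (3) concretely is to simulate return times. The sketch below uses my own toy chains, not the ones proposed above: a walk on {0, 1, 2, ...} biased either toward or away from 0. It estimates the probability of ever returning to the start state and the mean return time over the walks that do return:

```javascript
// Estimate P(return to `start`) and the mean return time by simulation.
// `cap` truncates walks that wander off (a stand-in for "never returns").
function returnStats(step, start, trials = 2000, cap = 10000) {
    let returned = 0;
    let totalTime = 0;
    for (let k = 0; k < trials; k++) {
        let state = step(start);
        let t = 1;
        while (state !== start && t < cap) {
            state = step(state);
            t += 1;
        }
        if (state === start) {
            returned += 1;
            totalTime += t;
        }
    }
    return {
        returnFraction: returned / trials,
        meanReturnTime: returned > 0 ? totalTime / returned : Infinity,
    };
}

// Positive recurrent at 0: steps down with probability 0.75, so the
// walk keeps coming back quickly (E[first return] is finite, about 3).
const towardZero = i => (i === 0 ? 1 : (Math.random() < 0.25 ? i + 1 : i - 1));

// Transient at 0: reversing the bias lets the walk escape to infinity
// with positive probability (it returns with probability only about 1/3).
const awayFromZero = i => (i === 0 ? 1 : (Math.random() < 0.75 ? i + 1 : i - 1));

console.log(returnStats(towardZero, 0));
console.log(returnStats(awayFromZero, 0));
```

With the downward bias the walk returns almost surely and the mean return time is about 3, so state 0 is positive recurrent; with the upward bias the return probability is only about 1/3, so 0 is transient.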
I'm having a similar sort of problem in understanding null-recurrence. What I want is a finite, irreducible Markov chain whereby ALL states are null-recurrent.
But isn't this just a simple (symmetric) random walk with reflecting barriers?
If anyone can help I would be really grateful!
thanks nyc,
makes sense! | http://mathhelpforum.com/advanced-statistics/137724-markov-chain-print.html | CC-MAIN-2016-50 | refinedweb | 320 | 63.09 |
In this article, I build on Part 1 of this series and introduce you to using modules and services to organize your application components. After viewing this article, you will have concise and important knowledge for starting your first Angular application using these advanced concepts.
Most applications have separate areas of concern for dividing up workflows. In some languages, these are called components, which are a strict set of business rules that handle a particular set of tasks. In other languages, these workflow rules are referred to as modules.
Scripting languages often refer to components as modules because a module is really just a lightweight component. The primary usage for modules is code reusability. With that said, let's learn about what modules mean to Angular.
Modules
If you look on the web for the definition of module in Angular, you will likely get mixed answers. I have seen the definition referenced to mean that a module is an application. Others say a module is a collection of dependencies. I think the best way to describe the Angular module concept is to think of them like components in other languages.
Components are simply a collection of classes and interfaces that implement a unit of functionality. For example, an engine has components, and this is also true in the software world.
One component handles a set of operations, and the sum of the components make up the engine as a whole. So if a module were like a component, it would make sense that a module could be reused across different applications. You could also assume that modules, like components, use a namespace or package name to identify them. With this in mind, let's take a look at an example in Angular that uses a module.
Let's say your application has a page for users to check some email. The email page, call it MyMailer.htm, can have its own controller and services that serve this view (MyMailer). It is quite common to have a controller, service, and model that serve a single view.
If we create a module that handles MyMailer, all of the MyMailer dependencies are wrapped together in this one module, which can be used across the rest of the application or other applications. To define a module, use the following syntax:
var myMailer = angular.module('MyMailer', []);
So what about the controller(s) and service(s) to be used by this module? Where do they come from? Now that we have a module, both controllers and services can be attached to the module.
For example, to declare a service for the module, use this syntax:
myMailer.service('messageService', function() {
    this.messages = [
        {sender: 'foo@yahoo.com', subject: 'Hello', message: 'Hello from Foo.'},
        {sender: 'bar@yahoo.com', subject: 'Hello', message: 'Hello from Bar.'}
    ];
});
It is important to note that this module declaration code would be made available to a controller that serves the view (in this case, the view called MyMailer.htm).
To display these messages in a list, we could have a List page whose controller obtains the messages from messageService and places them on the scope, thus binding them to our List view.
To demonstrate, see the following sample list.htm code:
<body ng-app="MyMailer">
    <h1>Email Messages</h1>
    <table ng-controller="ListController">
        <tr ng-repeat="message in messages">
            <td>{{message.sender}}</td>
            <td>{{message.subject}}</td>
            <td>{{message.message}}</td>
        </tr>
    </table>
</body>
Now, check out what is in the controller (controllers.js) to make this work:
function ListController($scope, messageService) { $scope.messages = messageService.messages; }
As you can see, the controller is the binding between the view and the controller. For larger applications, you would have a model that sits between the controller and the view. The model would contain the implementation details in a service.
The ng-app directive refers to the module and can appear on only one element per page, usually the root of the view template. The module holds the routing configuration, the rules for reaching pages, which is discussed in the next section.
Routing
To make this simple example more complete, let's add a routing configuration to the MyMailer module. To add a routing configuration, place the following code into the application's controllers.js file:
function emailRouteConfig($routeProvider) {
    $routeProvider.
        when('/', {
            controller: ListController,
            templateUrl: 'list.htm'
        });
}
myMailer.config(emailRouteConfig);
In the code, there is more dependency injection, using Angular's built-in routing provider ($routeProvider), which uses a method called 'when' to define URL routing paths that map each path to a controller.
For example, when we go to the app and hit the MyMailer.htm page, the routing configuration forwards the request to the list.htm page that is using the ListController.
To make the routing available, we have to pass it to the module's config function, as shown above. More routes can be easily added by just repeating the 'when' code block.
The last part of this example is to add the MyMailer.htm page shown below that serves as the entry point to the application:
<html ng-app="MyMailer">
<head>
    <title>My Mailer</title>
    <script type="text/javascript" src=""></script>
    <script src="src/controls.js"></script>
</head>
<body>
    <div ng-view></div>
</body>
</html>
Notice that you need very little for this page because it already knows what to do by including the controls.js file, which handles the routing, service, module, and controller declarations for each view.
Conclusion
In this article you learned more-advanced concepts of the AngularJS framework, which is essential for exploring and fast-track learning of Angular.
Try building a simple web application that expands the example application in this article. For example, add a details.htm page to display email message details.
The next step is to learn how Angular works with web forms and databases, which, as you may have guessed, is mostly through Ajax-like functionality. | http://www.informit.com/articles/article.aspx?p=2245730 | CC-MAIN-2017-39 | refinedweb | 975 | 56.45 |
A class for indicating the status of slow-moving loops.
#include <l_heartbeat.h>
A class for indicating the status of slow-moving loops.
This class is really only appropriate when you know in advance how many iterations will occur for your loop. Assuming you know that number, you would use this class something like as shown below.
Make a "heart" which will beat while your slow algorithm runs.
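A self-contained mock-up of the intended usage pattern is shown below, written in JavaScript rather than against the real C++ class, so every name in it is an assumption and not the kjb API. It imitates the behavior described here: beat() emits timing output on one in every 1024 calls, '\r' recycles the line, and a final cleanup call reports total time:

```javascript
// Mock-up of the Heartbeat idea (NOT the kjb API; names are invented).
class Heartbeat {
    constructor(label, total) {
        this.label = label;   // short description of the loop
        this.total = total;   // expected number of iterations
        this.count = 0;
        this.start = Date.now();
    }
    beat() {
        this.count += 1;
        // Report on one out of every 1024 calls, as the description says.
        if ((this.count & 1023) === 0) {
            // '\r' re-uses the line, as on a TTY; a file would get '\n'.
            process.stderr.write(`\r${this.label}: ${this.count} / ${this.total}`);
        }
    }
    stop() {
        // Optional cleanup: show total elapsed time, once, at the end.
        const secs = (Date.now() - this.start) / 1000;
        process.stderr.write(`\n${this.label}: finished in ${secs.toFixed(2)} s\n`);
        return secs;
    }
}

const n = 100000;
const heart = new Heartbeat("demo loop", n);
for (let i = 0; i < n; i++) {
    // ... one small step of the slow algorithm would go here ...
    heart.beat();
}
heart.stop();
```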
The idea is that you call the beat() method for this object inside the outermost loop of your code, and a fraction, such as one in every 1024 calls to beat(), causes timing output to be emitted on standard output.
This ctor calls isatty(3) to determine if it can recycle the line it is printing to; if standard input is a TTY (not a file) then this uses the carriage return character to re-use the line. But if it is a file, then this emits a newline character and each output string thus shows up on a separate line. A kluge.
Indicate one tiny step of your algorithm's execution.
This indicates one small step of your algorithm or loop is occurring, which usually passes silently, but on a deliberately small fraction of calls to this method, some timing output will be generated and sent to console standard error so that the user of the program does not despair.
However, that's only if standard input is attached to the console, i.e., you are running the program "interactively." If standard input is not coming from a TTY, then this object assumes the program is running in batch mode, in which case this method emits nothing to standard output.
Also the same string is returned, in case you want it. (For maximum speed, just ignore the string that is returned.)
Call this when your algorithm is finished, to see total time.
This is an optional cleanup method; if you call it (ONCE) at the end of your algorithm or loop, then you can show the user the total amount of time expended during the loop.
This method sends its output to standard output regardless of whether it thinks the program is interactive. | http://kobus.ca/research/resources/doc/doxygen/classkjb_1_1Heartbeat.html | CC-MAIN-2022-21 | refinedweb | 365 | 67.08 |
Subject: Re: [boost] [build] Tests automatically create header links, but library builds do not
From: Peter Dimov (lists_at_[hidden])
Date: 2015-01-05 12:05:20
Andrey Semashev wrote:
> That doesn't match my definition of "right". :) Because it yields
> incorrect result with a valid C++ code. If it can't make a valid result,
> better not parse C++ at all and declare a new protocol for dependency
> definition.
The protocol for dependency definition is already defined. It consists of
lines of the form
#include <header>
or
#include "header"
inline in the C++ source files.
You just refuse to follow it.
Any other protocol will be equivalent to this one, in that it necessarily
would need to contain the same information.
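A minimal scanner for that protocol is easy to sketch. The one below is an illustration, not what b2's actual scanner does, and it deliberately ignores preprocessor conditionals, which is precisely the limitation Andrey is objecting to:

```javascript
// Toy header-dependency scanner: matches the two include forms listed
// above. It does NOT evaluate #if/#ifdef, so a conditionally excluded
// include is still reported as a dependency.
function scanIncludes(source) {
    const re = /^\s*#\s*include\s*(?:<([^>]+)>|"([^"]+)")/gm;
    const deps = [];
    let m;
    while ((m = re.exec(source)) !== null) {
        deps.push(m[1] || m[2]);
    }
    return deps;
}

const sample = [
    '#include <boost/config.hpp>',
    '#if defined(SOME_MACRO)',
    '#include "detail/impl.hpp"',
    '#endif',
].join('\n');

console.log(scanIncludes(sample)); // ['boost/config.hpp', 'detail/impl.hpp']
```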
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2015/01/218769.php | CC-MAIN-2020-45 | refinedweb | 143 | 59.5 |
Agenda
See also: IRC log
<scribe> Scribe: hugo
Sept 23 minutes approved
Jonathan asks editors to review their list of action items at and let him know when one has been done
Chair goes throught his list (see agenda)
Hugo: regarding the format required for the publication of the Z notation document, the normative version must be the one with unicode chars
... we can provide an alternate one with graphics instead of them
... tests on lists.w3.org with Paul have shown that it works for lots of people
No questions
Jonathan: still working on this; I just need to put Kevin's in
Arthur has questions about changes that we've made that aren't a result of a LC issue
Jonathan: so far, we only did that as a result of Bijan's F2F presentation
Anish: all the F2F changes are done
<Marsh>
Anish: the only issue left is the proposed text about expanding the introduction text
DBooth: does that affect the media type registration?
Anish: no
Jonathan: any comments?
... we have consensus to accept Anish's proposal
RESOLUTION: accepted as new intro text for Media Type Description Note
Jonathan: we have several options for going forward
... maybe it would be best to hear from XMLP before taking our vote to go to LC
<scribe> ACTION: Jonathan to contact XMLP to review the Media Type Description document in order for us to go to LC with it
Anish: I would like people to make sure I did the right thing with my edits
Jonathan: we could decide to go to LC next week if everything goes right
<Marsh> ACTION: Working Group to review Media Type note in preparation for LC vote next week.
<dbooth>;%20charset=utf-8#ietf-draft,text
DBooth: I have a related item about the media type registration
... there is an editors note at the top of appendix A saying that the WG does not agree on the media type
Jonathan: in the XML Core WG, somebody thought that we did not need to register a new media type
... however, I heard that there was a W3C policy pushing us to do so
<pauld> observes that application/soap+xml media type was published only this week:
David: I think that it's an informal policy
... moreover, it seems to me that it's something we would want to do
Tom: why is it application/ instead of text/?
Jonathan: an RFC (not sure which number) prefers application/* to text/* for XML vocabularies, saying that XML's not for human consumption
<pauld> points tom at this article:
Jonathan: originally, you could serve XML as application/xml or text/xml
... now, they are pushing people to use application/xml
Tom: I am a little worried about interoperability
... it sounds that my browser wouldn't know how to deal with a WSDL document with such a media type
Arthur: I believe that it's an IETF RFC which says that we should register a media type, and use application/*
... and we could specify what to do in cases where the document is read by a Web browser
<dbooth> RFC 2048 discusses media type registration
Jonathan: there might be an issue here about registering a new media type, interacting with existing tools, etc.
<dbooth> RFC 3023 discusses registration specifically of XML media types
[ people do some tests with browsers ]
Arthur: the purpose of the +xml convention is that you can deal with such cases gracefully
<dbooth>
DBooth: the Web Arch says: In general, a representation provider SHOULD NOT assign Internet media types beginning with "text/" to XML representations. --
<dbooth> [[
<dbooth> In general, a representation provider SHOULD NOT assign Internet media types beginning with "text/" to XML representations.
<dbooth> ]]
Tom: I think that a new media type is going to be trouble
... Axis doesn't pay attention to the media type of WSDL documents
<sanjiva> If we don't introduce a mime type don't we lose the fragment stuff?
Jonathan: do we want to track this as an issue?
<sanjiva> If that was discussed I'll shut up
<dorchard> We have to do media type registration for frag-id.
Arthur: we can do some tests
Jonathan: why don't David or Hugo set up a test resource?
<dorchard> what is the issue?
<Marsh> Whether to recommend people serve up WSDL as application/wsdl+xml
<Marsh> Or whether application/xml is a better choice.
<scribe> ACTION: Hugo to set up an application/wsdl+xml resource on the W3C site for tests
<dorchard> it has to be wsdl+xml
Hugo: but WSDL documents are targetted to WSDL processors, not people through Web browsers
<Zakim> dbooth, you wanted to ask what impact the outcome would have
Tom: I often look at some
DBooth: what would the outcome of the test be?
... it will not be a WSDL-specific problem
<pauld> i'd create a perl CGI first line print "Content-type: application/wsdl+xml\\n"
Jonathan: agreed, it would be a problem for the TAG
DBooth: will that affect our registration?
... I discovered that we were supposed to do this a while ago
Jonathan: we should just go ahead for now
DBooth: with application/wsdl+xml?
Jonathan: yes
DBooth: the deadline for us was 2 weeks before LC
... our LC ends on Monday
... why don't we extend our LC period for them?
Jonathan: what about the 18th?
... schema and maybe XForms asked for extensions too
<asir> displays fine for me
Decision to use "WSDL" for the Macintosh File Type Code
Jonathan: let's separate the content and display discussions
<Arthur>
Arthur: the advantages is that it's precise and can be validated
... see screenshots at
Jonathan: we always expressed interest; does that meet our expectations?
DBooth,Sanjiva: it exceeds my expectation
Amy: the symbols do not talk to me [ paraphrased by scribe ]
Jonathan: we talked about including a short intro to describe common Z notation symbols
Amy: it's better indeed, even though there still is a risk of making this dense for people not used to it
Roberto: it looks great; there is duplication between English and Z, which is troubling me as an editor
... it's twice as much work
... and we don't know what the reaction of our readers will be; we wouldn't want to have people *not* read the WSDL 2.0 because of this
Kevin: I'm concerned that we're adding another layer to the spec; people have complained that there are too many layers with the abstract components, the mapping to XML, ...
Arthur: we previously discussed having a reader's digest version
<dbooth> Kevin, I think "normal readers" can primarily read the Primer. The spec is more intended for people who need the precision.
Arthur: there could be another version, lighter one, with Z
... however, we wouldn't want another document for that; we could have English statements very close in writing as well in placement to the Z statements
<dbooth> +1 to Arthur's comment. MUCH easier to maintain correspondence if the Z is integrated, rather than in a separate document.
Dave: this seems weird to me to add the Z notation in light of the decision we made about spec simplification
DBooth: I view the Z notation to help me to clarify, whereas I don't see the boilerplate stuff as necessary
Jonathan: it seems that we need to publish a draft with the notation in to see how people are going to react to it
Roberto: that means reformating the text, and potentially having to take it out afterwards; that's a lot of work!
... it would be better not to remove any text for now, just to add Z so that it can be removed easily
... we need feedback to make a decision
Jonathan: we could get feedback from our companies
<Arthur> I can keep the Z separate from the existing text
Jonathan: we could publicize Arthur's example too
Arthur: we can do the Z as a net addition, in a different namespace, so that it will be easy to deal with later on
Kevin: is Z WSDL-specific?
Arthur: I did it for the Infoset too
... as a test case
Kevin: maybe we could recommend this to other WGs
Jonathan: I hear that we do like this but we're concerned about our users' feedback
... I suggest that we should just include it and we'll see what feedback we get
DBooth: it seems to me that gathering more feedback is the slower route; we've always endorsed this, so we should just go ahead with this
Jonathan: the proposal is to go ahead and include the Z notation in the drafts
Kevin: I would like to have another version, more readable, without the Z notation
<sanjiva> hmm not passing thru
<sanjiva> hang on
Consensus to accept the amended proposal
<Marsh> RESOLVED: Appoint Arthur as Part 1 editor
Arthur is appointed as Part 1 editor to add Z notation
<sanjiva> ok returns application/wsdl+xml and returns text/xml
Hugo: we also need Z in Part 3
Arthur: Agreed, we need Z everywhere we make formal statements
... but let's just start with Part 1 for now
... I would like a consistency check be part of the build process
Hugo: Arthur and I need to talk about it
RESOLUTION: Integrate Z notation into drafts
<scribe> ACTION: Arthur to add Z notation to Part 1
Jonathan describes
Jonathan: XML Core suggested a naming prefix (not a namespace prefix) to hint that our fragment ids are WSDL-related
Arthur: sounds reasonable
<sanjiva> Jonathan: Can we decide the application/xml thing? we have the evidence we need.
Arthur: we could use "wsdl-"
Jonathan: any objection to "wsdl-"?
Arthur: actually, I like "wsdl." better
Jonathan: any objection to "wsdl."?
RESOLUTION: we have no issue about our scheme name
... we will preface our names with "wsdl."
... LC6d closed
<scribe> ACTION: Editors to add "wsdl." to XPointer syntax
Sanjiva: we have evidence that application/wsdl+xml does not display well in lots of popular browsers
People report that application/wsdl+xml renders under IE
<dmoberg> both 9999 and 9998 in my version of IE is displayed
<sanjiva> +1 for Hugo's proposal
<Arthur> I opened with IE and it first opened in Wordpad because I had made that file association.
<Arthur> I changed the .wsdl files association to IE and opened as an XML document.
Hugo: I think that we need to register application/wsdl+xml; the rendering problem is orthogonal to me, and Tom may want to ask the TAG what they think about browsers not rendering ...+xml documents well
DaveO: we define frag ids, we need to indeed
<Arthur> This works because of the .wsdl extension.
DaveO: we could report our experience about the deployment of our application/...+xml media type
<Arthur> IE downloaded it and then matched the .wsdl extension, not the Mime Content Type
Jonathan: maybe we should stay silent about what the right thing to do is
Tom: maybe we should say nothing and let the problem work itself out
Arthur: actually, I originally proposed text/xml but got shot down
<pauld> wants to register application/wsdl+xml and let the world sort this out ..
Jonathan: proposes to do nothing about it, and maybe do something about the rendering issue later
Arthur: we can also post a bug report for Firefox
<dbooth>
[ Discussions around the +xml syntax ]
<dbooth> [[
<dbooth> This document standardizes five new media types -- text/xml,
<dbooth> application/xml, text/xml-external-parsed-entity, application/xml-
<dbooth> external-parsed-entity, and application/xml-dtd -- for use in
<dbooth> exchanging network entities that are related to the Extensible Markup
<dbooth> Language (XML). This document also standardizes a convention (using
<dbooth> the suffix '+xml') for naming media types outside of these five types
<dbooth> when those media types represent XML MIME (Multipurpose Internet Mail
<dbooth> Extensions) entities.
<dbooth> ]]
<dbooth> -- from
RESOLUTION: no change with regards to our media type | http://www.w3.org/2004/09/30-ws-desc-minutes.html | CC-MAIN-2016-30 | refinedweb | 1,997 | 59.98 |
New GOP Domain Name Violates RFC 2146
Macki writes "Citing the poor quality of republican websites, Republican Conference Chairman J.C. Watts has started a project called 'GOP.gov' to help improve their websites. This is all well and good, except GOP.gov isn't just their name, it's also their domain. This is a pretty clear violation of RFC 2146." (Please click below for more.)
The domain is registered to 'US House of Representatives Republican Conference' and should rightfully be GOP.HOUSE.GOV.
Excerpt from RFC 2146:
Comment from Roblimo: Well, that's Mackie's opinion. I disagree, at least in part. I believe a political organization - and that's what a political party is; it's certainly not a government agency - should be an ".org", not a ".gov". BTW, I don't see this as a Republican vs. Democrat thing, either, but as evidence of general Congressional cluelessness. Anyone else care to weigh in on this?
Re:Should be .org. (Score:2)
I'm gonna go off topic here but, that doesn't make it right, ya know? With only two parties to choose from, it's a safe bet that most candidates follow the party line nearly to the letter. If your vote goes to a Rep., you get less taxes (maybe), but you'll probably get shafted with ridiculous censorship attempts in the name of the children. If you vote Dem., sure, you'll sleep better knowing that you're helping the less fortunate, but more than likely you'll have to face the fact that if you make more than 50g's a year, you're going to hell. Personally I hate the government and don't trust them with anything. That's why I vote Libertarian [lp.org] [note the .org :)]. And don't even tell me that I'm throwing away my vote. It's because of that philosophy that we have congressmen/senators with 40 year incumbancies(sp?) making policy about things that they have no knowledge of... E-mail tax, gimmie a f***ing break. Screw the Post Office....whoops, did I say that out loud?... sorry. Anyway...
The lack of diversity of parties, on the other hand, is in my opinion a good thing, since it keeps flakey parties from getting elected with a plurality (instead of a majority) -sometimes of only 20 or 30 percent. For example, Hitler came to power with only a third of the vote - but there were too many uncooperating parties spltting the non-moron vote. Ergo the ass won.
Maybe you should change that to "The lack of a Hitler is a good thing." Cause that's what you meant. That was an entirely different system. Congress does NOT elect our chief executive. Even if the ENTIRE congress supported David Duke for Pres, the people are not that stupid (hopefully). The only thing that a two party system accomplishes is stiffling out change in the interest of campaign supporters. Period.
--MessiahXI
messiah11@mindless.com
Re:What in the world does "GOP" stand for? (Score:1)
Read the RFC (Score:3)
This clearly makes a policy for exceptions. The FNC Executive Committee is allowed to make exceptions to the policy at their discretion.
Is GOP.GOV a reasonable use of the
Re:Should be .org. (Score:1)
Re:.org != non-profit (Score:1)
But Slashdot clearly does fit somewhere else:
[/devils-advocate]
(Totally off-topic: Has anyone figured out how to get literal angle brackets in a
Re:What about USPS? (Score:1)
Re:Should be .org. (Score:1)
And who is informing them? The media? Do you think that's a "good thing"? If you merely consume mainstream news, then you are clueless about many issues. For example, how many wars, i mean "police actions" or whatever, has the US been engaged in the past 10 years? OK, now how many of those can truly be called a success? ZERO. Phillipenes(sp?), Somalia, Iraq, Kosovo, to name a few. We got involved in these regions in the name of human rights and what did we accomplish really? The Phillipenes is as corrupt as ever. Chaos still reigns in Somalia. Hussein and Milosovich(sp?) are still in power; how long till we hear from them again? And what happens after we pull out? So does the press. If the atrocities in Somalia and Kosovo are so bad, if it was so bad that we had to get involved, then why aren't we still hearing about it? Did we fix it? No, we didn't. So tell me how informed the majority of the public is.
Most people are satisfied with their representation, however, and there is little upheaval.
I would say that is because they aren't fully aware of their rights and that the government walks over our rights on a daily basis. It's become the status-quo. And it is sad.
The real question is (Score:1)
My bet is this is a result of some sort of power struggle within the republican party. I'm sure that the house republican caucus would love to get a jump on the national committee: it would give them much more control over the agenda.
[Of course, there's a certain amount of unfairness here: I'm sure that if Bernie Sanders, the one Congressman outside the two party system, tried to create, he wouldn't be allowed to.]
ObSideNote: exists. It's been poached by.
Re:gop.org (Score:1)
Re:.org != non-profit (Score:1)
You can use the standard &lt; and &gt;.
Re:.org != non-profit (Score:1)
Ug.
Anyway, use ampersand-lt and ampersand-gt.
There.
Re:usps.com (Score:1)
This all kind of sucks for a lot of small businesses, who end up spending more than they should on postage. Anybody who sends out adverts will tell you they cost more to mail than to print.
The reason they aren't a .GOV domain is that they are a semi-autonomous corporation owned wholly by the US gov. - so they could probably use a .com or .gov and still be honest both ways.
The USA is a one-party state. (Score:1)
America is a one-party state. But with typical American extravagance, they have two of them.
Re:No, this isn't a mistake, perhaps (Score:1)
I disagree. I believe a political orgaization - and that's what political party is; it's certainly not a government agency - should be an ".org", not a ".gov"
Do you see? But perhaps you are right about gop.house.gov over gop.gov
Re:usps.com (Score:1)
Re:usps.com (Score:1)
Can we really say this is unfair, though? Considering that the mail gets through (most of the time
I guess, maybe a better question would be, how much do other coutries charge for stamps/shipping and how good is the quality?
--------------------------
Re:Low quality of advice (Score:1)
I think this was advice of the highest quality.
gop.gov has some measure of branding kudos, and thus it's valuable. It's the GOP's job to try and acquire such things, and the job of objective election watchers and federal civil servants to exercise whatever control they can to stop them.
Should it be there ? No. hrc.house.gov should be permitted instead and scrutinised very carefully to ensure it doesn't exceed whatever Whitehouse rule there is about limiting Federal funding of party campaigning.
Is it understandable ? Absolutely. It's just politicians taking anything that wasn't nailed down, and we clearly didn't nail this one down firmly enough beforehand. Don't blame crocodiles for biting your leg off, it's just what they do best.
Personally I'm dubious on any .gov domain that isn't honestly a UN-based New World Order. Having .com imply the US is reasonable enough, but the Whitehouse should stick firmly to .gov.us and stop trying to rule the whole world.
Re:Cited RFC is not a standard (Score:1)
In this case, the RFC would be a wonderful thing to have everyone follow, but...well that will not happen.
We all have to realize that as this thing of ours (no pun intended to all of you italians/mobsters) gets out to more and more people, who adopt it with absolutely no respect for its traditions (or for the traditions of those who created/run it), the RFC process which served us so well in the past will have to be replaced by a more corrupt, easily manipulated system which could mean lots of cash for one group, or even one senator.
The current process allows argument, best of breed adaptation, and some degree of removal of the standardization process from both government and industry.
So saying that it is "just for comment" is fully correct, but you are wrong to say that this is unimportant.
Re:Should be .org. (Score:1)
Yeah, you did say that out loud, and you probably shouldn't have, since the whole thing was just a stupid chain-letter hoax. But, of course, just because the government didn't actually do something is no reason to hate them for doing it, right?
Re:And, they hijack you to boot! (Score:1)
What you are experiencing is a javascript which many organizations use to prevent the fracturing of their precious framed sites. Tell them to GET OFF FRAMES, and this can be avoided.
Re:gop.org (Score:1)
gop.org (Score:3)
UK perspective (Score:1)
This isn't petty. It's important. Anything with a .gov domain element should clearly be in the control of the government. Party groups are .orgs, and that's that.
Well (Score:1)
If you think you know what the hell is really going on you're probably full of shit.
Re:The real question is (Score:1)
But to your original point, the reason gop.gov is being set up is because the RNC and other Republican sites aren't being updated, and so are fairly useless. That's all.
Political parties and government (Score:2)
In a two-party system like ours, a political party's role depends on whether it is in power or out of power. In power it is the government, out of power it is an organization wanting power. But it should not be too hard to check on whether a party is acting as a government entity (i.e., when it is looking out for the public interest) and when it is acting as a private organization (e.g., when it is looking out for its own interests).
In theory that is; in practice it is not often easy to tell which is which...
--
Re:gop.org (Score:1)
Re:You whining liberals just don't get it. (Score:1)
I THOUGHT THIS WAS NOT A POLITICAL ARGUMENT!!!
ACHA!! Im A REPUBLICAN... YES I LIKE TO MAKE MONEY, AND KEEP ALL OF THE PROCEEDS OF MY WORK....
BUT THIS WAS NOT SUPPOSED TO GET POLITICAL, SO FOR ALL OF THE PEOPLE OUT THERE WHO ARE DEMOCRATS, THIS GUY JUST BIT IT:
I AS A REPUBLICAN HAVE A SENSE OF SHAME TODAY, FOR ONE REASON. THE LEADERS OF MY PARTY GO AROUND cALLING DEMOCRATS TREASONOUS BASTA_DS, BAD MOUTH THEM, ETC. ETC. ETC.
BUT THEY MISS THE POINT. THIS IS THE WAY OUR COUNTRY HAS TO BE, WITHOUT DEMOCRATS, WE REPUBLICANS BECOME TOO OBVIOUS. WE WANT POWER, AND WE DO NOT CARE ABOUT ANY ONE ELSE BUT OURSELVES.
I AM SAYING THIS BECAUSE TODAY I AM ASHAMED. I AM ASHAMED THAT OUR PARTY, THE GRAND OLD PARTY, FAMOUS FOR STEALING FROM THE POOR, AND KEEPING THE PEOPLE IN THE DARK ABOUT OUR POLICIES, KILLED THE NON-PROLIFERATION TREATY.
WITH ONE FELL SWOOP, IN THE NBAME OF MAKING A POLITICAL STATEMENT, WE HAVE MEDE THE ENTIRE COUNTRY LOOIK LIKE A BUNCH OF IDIOTS, AND LOST ANY CLAIM TO INTERNATIONAL LEADERSHIP WHICH WE MIGHT HAVE HAD IN THE PAST.
SO GO BLOW IT OUT YOUR EAR.
Re:usps.com (Score:1)
As far as universal coverage goes, can you name a place in the US that isn't served by UPS of FedEx? (Or RPS, or Airborne Express...)
Low quality of advice (Score:2)
Re:.org != non-profit (Score:1)
Should be .org. (Score:2)
Re:Typical GOP stupidity (Score:1)
Re:.org != non-profit (Score:2)
--
Uh... (Score:2)
Macki says GOP.gov is a violation of the RFC and shouldn't be allowed.
Then Roblimo says (roughly) "I don't agree with Macki's opinion, it should be GOP.org and not GOP.gov".
Don't you two agree then ?
Confused,
--Jonathan
Re:No, this isn't a mistake, perhaps NOPE (Score:2)
Really! What the hell is that about? It's part of their "Join the risk-free Revolution" logo? WTF kind of company slogan/logo is that supposed to be? Someone please explain because I _really_ would like to know.
Misunderstanding (Score:2)
(By the way, it's Congressman J.C. Watts, Not J.C. Watt)
Australian Chaos (Score:2)
We dont have a
The website i am involved in kicks most of our Gov'ts websites -
The whole reason this isnt is because one lame duck on a desk 'doesnt like it'.
sorta gives you an idea of what goes in in government, doesnt it?
[sorry about Anon Coward - email me on scorpion@australia.airforce.net - btw thats an american company, not our own domain!]
Teo.
well, the rfc sez... (Score:2)
I suppose it doesn't really matter. Seems pretty inconsequential to me, since naming standards are pretty routinely violated.
-lx
Re:Should be .org. (Score:1)
I admire your optimism, but tell it to the Germans in 1932. Yikes!
Re:No, this isn't a mistake, perhaps NOPE (Score:2)
This relationship would be better described as RobLimo suggested; GOP.HOUSE.GOV. This is what is intended in RFC 2146.
An interesting experiment would be, as you suggested, to have the Democrats register DEMOCRATS.GOV, or better yet, INDEPENDENTS.GOV, and see what kind of stink that would raise.
Now what I find even more interesting is that CAIS.COM [cais.com], the nameservice provider for GOP.GOV, has a banner [cais.com] image of a major city skyline being destroyed in massive, flaming explosion. Coming on the heels of the Senate voting down participation in the Nuclear Proliferation Treaty, my paranoid conspiracy theory engine purrs...
Cited RFC is not a standard (Score:2)
RFC's are just requests for comments. They are not necessarily standards. Some of them wind up getting approved through the standardization process, but apparently this is not one of them.
It's interesting that they have the gop.gov domain, but it's not interesting that they violated a non-binding, non-standard RFC.
And Malda's $3.5 million in stock options? (Score:1)
"Malda will receive an additional $3.5 million plus stock over the next two years should he remain with Andover.net. "
Note - I don't think there's anything whatsoever wrong with this - any tech site with such a big crowd is worth a lot. and it is good that linux/open source work is making money. After all, the code is GPL'd
However, it did interest me in terms of the
Can you sell a non-profit
Re:What about Slashdot? (Score:1)
Re:Should be .org. (Score:1)
Actually, Congress does have the power to elect the President if no candidate gets at least 270 votes from the electoral college( more than 50% of the 538 votes ). Each state in the House gets one vote to elect someone from the top 3 candidates. If none of them win by majority, the top two then goes to the Senate for their decision. See the FEC rules [tqn.com] for more info. The party in majority generally should have the advantage in this situation.
Re:Should be .org. (Score:1)
Totally agree w/ you, Mess. Thanks for saying it better than I wanted to...
Re:usps.com (Score:1)
I would make two assertions. First, a private corporation could deliver the mail at a lower cost and with greater reliability. The US gov itself uses Fedex for it's overnight shipping - NOT the USPS! It's just the sort of service that works much better when it's subject to competition. Second, it shouldn't even matter how efficient the USPS is. In a free country, I should be able to start a business delivering mail in my hometown. Right now, if I did, I'd be shut down and imprisoned. No kidding. It happened in Baltimore in the 80's. (not to me of course!)
Re:The USA is a one-party state. (Score:1)
better things to do than keep up with rules (Score:1)
Only geeks think hierarchically. The average non-geek has a single ACCumulator and just a JMP instruction, no JSUB. Sure, they could theoretically implement recursion, but they won't and never will because they don't have the stackspace anyway. They do stuff like watch "Voyager" on Wednesay without ever thinking, "'Voyager.UPN.net' and 'Voyager.WSBK.Ch56.tv'... must be multiple inheritance."
Actually, they don't even watch Voyager at all, but that was the only program I knew the call letters for.
TLDs should be wiped out completely. For all of our geek glory, exactly how many sites do we actually use TLDs for either? "Hmmm... let me think, do I want the profit-making foo, or the non-profit foo?" It's so stupid! We think just like everyone else does: "I want Altavista, I want Slashdot, I want Yahoo, I want InterNIC" and then we have to remember which TLD is appropriate.
"Hierarchy through obscurity" is what it is, and it's stupid.
Re:Be disgusted because it makes things harder (Score:1)
Why only two parties? (Score:1)
When you elect a single candidate from a territory (a President from the nation as a whole, a Senator from a state, etc.), this creates a very powerful incentive for all people in the system to reduce the relevent choices down to two. Suppose you have three parties contesting elections of this sort. As a quick and dirty principle, we will also suppose that everyone can rank all three parties in terms of preference, and that those rankings are reasonably consistent - supporters of party A will tend to rank all three parties in the same order (which, since we are being generic here, we can describe as A,B,C - meaning A is their first choice, and C is their last choice) and supporters of the furthest opponent (party C) will tend to rank the parties in the opposite way. If all three parties keep contesting elections, there will almost certainly be many cases where two of the parties realize that if they worked together, they could defeat the remaining party which is currently winning the election - and they would be happier with the result. For example, party A gets 40% of the vote, party B gets 35%, and party C gets 25% - so parties B and C realize that if they united behind the candidate from party B, they would win the election, and they would prefer that result. The same logic affects voters (who don't want to "waste" their vote), party organizers (who want to win elections for their party), issue activists (who want to get their preferred policies adopted), etc. Of course, once the vast majority of elections are being contested between two parties, it is quite likely that rules in the Congress will reflect that partisan "fact of life."
This contrasts sharply with the experience of many European countries, which use a proportional representation system of elections. They do not divide territory up into single-member districts, but instead cast votes nationwide, and allocate seats within the legislature in proportion to the votes received (ignoring a broad range of variation in mechanics - there are a whole bunch of different proportional representation schemes). In these countries, we usually see a noticeably larger array of different parties regularly participating in elections, and we see entirely different forms of legislative organization as well.
Of course, this explanation still has a few holes. The continued survival of the British Liberal party, and the rise of the Social Democrats in Britain, and the New Democrats in Canada are still seen as anomolies (both Britain and Canada use single-member district elections). There are probably additional "problem" cases elsewhere in the world (my comparative politics classes were taken several years ago.) However, I think it is fair to say that for many political scientists, the interesting question is not why there are only two parties in the US, but why there are more than two in Canada and Britain.
Re:All these "new" domains are not new (Score:2)
Tonga, or rather the hey.to domain, has been bastardized fully [hey.to] to my own purposes.
Wait 'till November (Score:1)
If they have their way it will be gop.gov
Re:Should be .org. (Score:1)
So who gets to register .gov domains now? (Score:4)
Can the Reform, Green, Libertarian, and Communist parties get
.govs? Or hey, how about an anarchist "no.gov"? Or a Lenny Bruce "fuckthe.gov"? ("If you can't say `Fuck,' you can't say, `Fuck the government.")
Most importantly, can I register EmperorNorton.gov to commemorate the first and only Emperor of the United States [sfmuseum.org]?
The Republican and Democratic parties are private entities with no more special legal standing than other parties, or the Church of SubGenius for that matter. If a group of them in the House want a domain, the house.gov admin can give them gop.house.gov. If the party can get a
.gov, anyone should be able to.
Re:Should be .org. (Score:1)
That's true, but there are many other examples to the contrary, like Spain, and even France. Maybe Swedes are more homogenous in their political beliefs, in spite of the multiplicity of parties.
Yeah. Stability. Great. Just imagine how terribly unstable the whole system would be if there was a chance for *poor* people to get into politics - or a non-WASP majority. Gee, then the people of America may become politically aware, and a reasonable amount of people might vote, and then where would we be? Sheesh
You really see no value in benign stability? Most Americans want the government to be neither seen nor heard. It's hard to see where your sarcasm is taking you. Are you a US citizen? If you were, you'd know that poor people can get into politics, though I wouldn't start with the US Senate. I certainly wouldn't vote for somebody who couldn't get a job, though. What idiot would? And there are many non-WASPs involved in politics. And the people of America are fairly aware, especially when a poor job is being done. Most people are satisfied with their representation, however, and there is little upheaval.
Re:Should be .org. (Score:1)
How about freekevin.mil to stir things up a bit (Score:1)
TLD's for Dummies (Score:1)
Re:Political parties and government (Score:1)
--
It probably is (Score:5)
Their use of GOP instead of HRC makes me particularly suspicious that the intent of the site is for party business, not HRC business. They are using the HRC's government status to get access to an address they would otherwise not have access to. A political party should never masquerade as a government entity, we are not the Soviet Union (nor is Russia anymore).
In fact, I question the need to give the HRC (and whatever the Democrats' counterpart is, the HDC?) official house committee standing. The fact that members of Congress share a party should not be something to form a committee over, it should be an unofficial caucus at best.
----
Re:Political parties and government (Score:1)
Second, it is usually easy to tell the difference. Where does the money come from? Private donations or tax dollars? Parties, although they probably do receive some government funds, receive most of their money from private donations, thus that ARE NOT part of government.
Re:No, this isn't a mistake, perhaps NOPE (Score:1)
Maybe you're right, but gop.gov is much simpler, and that Conference Committee is probably going to be here longer than the IRS.
An interesting experiment would be, as you suggested, to have the Democrats register DEMOCRATS.GOV, or better yet, INDEPENDENTS.GOV, and see what kind of stink that would raise
Now, I'm sure the Dems could do this - they have a Conference Committee, too. But there is no 'Independent' conference in the House. There is only one Independent member, and he organizes with the Dems.
Now what I find even more interesting is that CAIS.COM, the nameservice provider for GOP.GOV, has a banner image of a major city skyline being destroyed in massive, flaming explosion. Coming on the heels of the Senate voting down participation in the Nuclear Proliferation Treaty, my paranoid conspiracy theory engine purrs...
hehe- that's funny. But what dumbass would sign an unverifiable treaty? The image of a burning city would be more likable if the skyline was DC instead of NYC, though.
;)
All these "new" domains are not new (Score:2)
Both of these domains quite verifiably belong to those countries--and if you notice, the global TLDs section ONLY has
.com, .edu, .gov, .int, .mil, .net, and .org. Everything else is a country.
/me expresses his disgust at the abuse of the domain name system.
And, they hijack you to boot! (Score:1)
...but characteristic of GOP Internet understandin (Score:1)
In fact, although some would say this is a violation of an obscure, subtle, lesser known rule of the net... I would say that this is an example of how the GOP don't know or care to really know the culture and rules and how things work on the Net.
Yet, they want to legislate it.
The time to complain about domain names and TLD (Score:3)
Years ago, non network entites registered
But as soon as you had to pay for the domain, as long as you had the money, you were able to register what you want.
The time to complain was back when the first non
My personal favorite is wildwildwest.net - a domain to promote a movie has exactly WHAT to do with offering network services? Warner Brothers didn't answer my e-mail asking that question, and the InterNIC's e-mail was like "So what".
If they allow GOP.gov, then
What it comes down to. (Score:2)
The evolution of the American political process has led people to equate the parties with government, but the fact remains that the Republican and Democratic parties are not part of the government.
To me, the GOP's efforts to secure a
Re:It probably is (Score:1)
Hate to break it to you, but the political parties pretty much act like they are the elected government...and for good reason. Hopefully, this reason will go away within our lifetimes - but until then, the Democrats and Republicans are, for the most part, the de facto American government.
Re:usps.com (Score:1)
Re:It's use, not abuse (Score:1)
I suspect however, that at some point, Niue will either get a new namespace if the current system gets scrapped, or that Niue will be assigned a new country code. Same with Tonga.
Re:What about USPS? (Score:1)
Re:All these "new" domains are not new (Score:1)
While I think that the "domain name crunch" is all hype (kxq7m_zy.com is as good a domain name as any), and that these businessmen went about this in the wrong way, it was with the permission, even blessings, of those countries. They get their infrastructure upgraded, the businessmen make their money, and greedy, rapacious Americans (but I repeat myself) get their shiny, new, 1-word domain status symbols to park in their driveways next to their shiny new BMWs.
dot com (Score:3)
Hmmm. eSenator.com -> buy yourself an ear in American politics. Methinks I have a new startup idea.
Re:All these "new" domains are not new (Score:1)
What about it?
.arpa is not a TLD--not according to RFC 1591. And if it is, show me a site that uses it--doing a quick search in google, I came up with no .arpa domains.
Re:Should be .org. (Score:1)
>You're throwing away your vote. The only libertarian who are ever going to vote in Congress are going to be elected as Republicans, like Ron Paul (R-TX), who was previously the Libertarian candidate for President. If enough libertarians had his sense, they could actually accomplish something within the GOP.
Except that being in the GOP is antithetical to the idea of Libertarianism. In fact, at its deepest heart, being in the LP is antithetical to being a libertarian, but as you say, we have to work within reality.
When I vote LP, I am NOT throwing away my vote. If I simply failed to vote, THEN I would be throwing it away. But, by voting LP, I am voting None Of The Above, which is an entirely different matter. It's an active statement, rather than passive disinterest.
By the way, in many parts of the country, you cannot write in a ballot. If you were to attempt it, you would be required to get a form from the people checking registration, and then, with a strong possibility that you were the only one asking for such, your write-in is no longer secret.
In fact, I feel any kind of organized political parties are a serious distortion of the ideas of the founding fathers, and it was only after they were actually infected with running the government that they succumbed to the idea of parties.
Personally, I think anyone who feels like they need to run for political office, especially anyone who does so because they think others need their help in running their lives, is certifiably insane, and should be declared incompetent and confined for treatment. I'm too busy running my own life to try to run someone elses.
What do we do instead? Try the suggestion in James P. Hogan's Voyage to Yesteryear.
Which is worse: the TLD abuse or FrontPage? (Score:1)
What really pisses me off is that GOP.gov was written in: [drum roll] FrontPage.
Yes, the all-powerful Grand Old Party, powerful enough to merit its own
And, so, I hereby offer my service as web developer for GOP.gov for the low-low price of one (1) cogent.gov, free of charge, forever. Who should I talk to in GOP.gov? (FrontPage put the META NAME="generator" tag in, but the developer didn't put the META NAME="author" tag in.)
s/TDL/TLD/g (Score:1)
Re:general cluelessness happens (Score:1)
Umm, I think that was the point. But thanks for playing.
Re:...but characteristic of GOP Internet understan (Score:1)
But actually this is true of most issues, not just technical ones. Congressmen must vote every day on things they cannot possible have had time to study and understand. It's just a fact of life.
The Evil Empire sets the rules (Score:3)
Re:It probably is (Score:2)
Then you truly, truly, don't know how Congress works on a day-to-day basis. The member runs around all day meeting constituents, attending to lots of committee meetings, occasionally making speeches, going to hearings, and voting in the full House. They have only a small amount of time for learning about legislation, or party work. The Conference Committees keep members from duplicating tons and tons of effort. Most of the bills Congress votes on are incredibly complicated and non-controversial. The Conference Committees are essential to sorting all of this mess out. They are a vital organizing element of Congress.
Re:...but characteristic of GOP Internet understan (Score:1)
What with Janet "No Strong Crypto" Reno, and Louis "Wiretappin' Must Be" Freeh, I think there are plenty of people on both sides of the aisle to complain about.
Isn't .gov itself a violation of good sense (Score:1)
On a side line, I wonder why the US have a
Is that the new world order?
---
Re:.org != non-profit (Score:2)
And, actually, it doesn't really fit within the original plan for
It'd be better if we had a more complicated hierarchical structure (a la usenet) from the beginning -- slashdot.tech.news, or something. But it's probably too late for that.
--
Re:Isn't .gov itself a violation of good sense (Score:2)
No one made the rest of the world adopt the protocols and RFCs of ARPAnet/Internet; you could have all standardized on JANET's wierd reverse protocol, or stuck with UUCP, or made up something new. But Internet was the biggest and best, and so with that came the inheritance of
I'll note that no one is stopping other countries from setting up their own nameservers that don't pay attention to
Laura
Re:What about Slashdot? (Score:2)
Here's the snippet from RFC 1591 [isi.edu], written by Jon Postel himself:
"ORG - This domain is intended as the miscellaneous TLD for organizations that didn't fit anywhere else. Some non-government organizations may fit here."
.org != non-profit (Score:5)
"ORG" was NEVER meant to be restricted to non-commercial entities, despite the widesspread misconception. Check out RFC 1591 [faqs.org]:
--
No, this isn't a mistake, perhaps (Score:5)
I think the source of the misunderstanding here is that you guys think this is something coming out of the RNC (Republican Nat'l Committee) Headquarters-this is totally different. The Party Conference is an official standing sommittee of the US House, and exists as long as there are Republicans on the Hill. The Democrats could do the same thing. Any party could.
Re:What about USPS? (Score:2)
Re:gop.org (Score:2)
There should be a line drawn between special interests bodies and official government services.
What about Slashdot? (Score:4)
Re:It probably is (Score:2)
I agree that it's questionable whether or not it should be an official part of the government; but the fact is that the HRC (like its Democratic counterpart) is provided for in the House rules (established in accordance with Article I Section 5 of the Constitution).
Re:usps.com (Score:2)
I need online tracking like I need a hole in my head - I don't care what route my package takes, I just want it delivered quickly and undamaged. And if they can't do that (and they can't), I want a quick resolution, not to have to make eight phone calls to finally have someone come out to look at a computer with its case bashed in, shrug, and say, "Sorry 'bout your luck."
Sorry if I'm ranting, but UPS has managed to make my Permanent Shit List.
Re:It probably is (Score:2)
Secondly, if the purpose of these groups is to handle all the complicated technical details, preventing duplication of effort on non-controversial issues, then why are they separated by party? Every technical committee is bipartisan, these are not technical committees. My understanding is that their purpose is to make it easier for those congressmen who want to be loyal party members to see which bills they are supposed to be voting for or against.
Sure enough, a quick look at these websites show them both to be sharply partisan. The HRC opens with bright red text saying "STOP THE RAID", referring to the GOP's stand on recent social security debates. The HDC hides it a little better, but the partisan sentiment is at least as strong on their site.
Again, these are clearly extensions of the political parties, why do they have official government standing? Secondly, why does the HRC, who already have a website, need another one; particularly one whose name carries the implication that the Republican Party is a core government agency?!?
----
Re:Political parties and government (Score:2)
I hate to tell you this but this tendency towards a two-party system predates TV. You are partially right when you say that there have always been multiple parties, but the system is set up to encourage two main parties.
Other parties are formed when the two "official" parties lose touch due to the rise of a new political reality. They are usually small, single-issue parties and very often disappear. Sometimes they grow and become one of the two dominant political parties (like the GOP did circa the Civil War when the other parties became paralyzed by the foremost issue of the day: slavery).
If you would like to learn more about the political party system in the U.S., be sure to read Dynamics of the Party System : Alignment and Realignment of Political Parties in the United States by James L. Sundquist. It illustrates how political parties must remain relevant or they will either die out or be absorbed by another party.
And yes, I studied Political Science and History in college...
--
Re:usps.com (Score:2)
Is postal mail important enough to warrant government intervention to ensure universal one-price service? I dunno. Most of mine goes right to the recycling bin. (And I mean "right to" - I have a small trash can right underneath my mailbox, don't even bother bringing the junk into the house anymore.)
I tell you this, though: after my experience with UPS's abysmal customer service when they damaged two packages of mine, USPS looks a whole lot better than it used to! Their "Priority Mail" service is a pretty good deal.
EXACTLY (Score:2)
The price is almost irrelevant. The real issue is that if it was a private corporation, they would instantly disenfranchise anybody living in that shack at the top of a mountain- thus getting the lower price. Well, other countries may feel differently (socialist ones may actually understand this even better than we do!) but the USA was founded on the concept that _everyone_ counts, and that the government looks after everybody's interests, as best it can- very likely unimpressively, but you have to give it points just for being willing to try. The mail system is a perfect example- it is in fact pretty competitive on price with the private corporations (though you can pay extra to a private corporation whose representative then refuses to bother to knock on your door and squanders the time savings you thought you were buying), but the real issue is what the private corporations will refuse to do because it's a money sink- who they'll put the screws to in order to make better offers to the majority.
Damn right it's socialist thinking. This nation was founded on little carefully chosen bits of socialist thinking. It's a problem when that is lightly brushed aside. Why yes, let's disband the post office! Hell, let's disband the judicial system, and law enforcement, and people can take their gripes and concerns, for instance about fraud practiced by big corporations, or negligence resulting in loss of life, to efficient for-pay courts paid for by the mysteriously immune-from-guilt defendants! Then they can be informed of the loss of their suit through a for-pay mail system that refuses to deliver to an address that won't co-pay (or something- now wouldn't that be profitable: pay to get your mail!). Most efficiently of all, we could have the for-pay law enforcement take notice of these miserable plebian worthless drags on the country, and go out and shoot them in the head, whereby the whole nation can be made to run more efficiently and profitably!
If anybody thinks that isn't sarcasm, go see a doctor...
Re:What about USPS? (Score:2)
For that matter, what the Post Office? Sure, we've got usps.gov, but its also usps.com. Surely the USPS part of the government and not allowed to be a
IRS.Gov? MyName.Gov? (Score:2)
As I've been politically active, I asked NIC.Gov if I can register my name under
.Gov also. I wonder how large a staff they'll need to monitor when political clubs drop below the registration requirements and get de-registered.
But I am surprised that AlGore didn't already get GOP.Gov registered to the GOvernment Printing office... | https://slashdot.org/story/99/10/17/0032222/new-gop-domain-name-violates-rfc-2146 | CC-MAIN-2017-39 | refinedweb | 6,838 | 65.01 |
A New, Simpler Way to Do Dependency Injection in Go
Dependency injection (DI) is a great thing. Even if you haven’t heard of the term, it’s likely that you have already used it.
This article assumes zero existing knowledge of DI; however, it does assume a basic understanding of Go. I will work from fundamentals through challenges and solutions, eventually leading to how to build a complete service container.
Anatomy of This Article
If you are already familiar with DI, you can skip ahead to Introducing: 🐺 dingo, where I talk about a new project for generating a service container from YAML.
- Overview and Terminology: Some basic language and concepts that will be referred to.
- Simple Example: Diving right in with a simple example that leads to a common problem. We will refactor it to use dependency injection so that unit tests can be created.
- With Great Complexity Comes Great Responsibility: Explains the cost and problems associated with DI as the codebase grows.
- Building the Services With Functions: The simplest way of dealing with the problems from the previous section.
- Singletons: An important optimization and DI concept. Explains how it affects your code.
- Introducing: 🐺 dingo: Putting it all together with dingo, a YAML-configurable DI framework.
Overview and Terminology
DI literally means to inject your dependencies. A dependency can be anything that affects the behavior or outcome of your logic. Some common examples are:
- Other services. This makes your code more modular, more testable, and reduces duplication.
- Configuration. Such as database passwords, API URL endpoints, etc.
- System or environment state. Such as the clock or file system. This becomes extremely important when writing tests that depend on time or random data.
- Stubs of external APIs. So that API requests can be mocked within the system during tests to keep things stable and quick.
Some terminology:
- A service is an instance of a class. It's called a service because it's often referred to by name rather than type. For example, Emailer is the name of a service, but it is an instance of a SendEmail. We can change the underlying implementation of a service; as long as it has the same interface, we need not rename the service.
- A container is a collection of services. Services are lazy-loaded and only initialized when they are requested from the container.
- A singleton is an instance that is initialised once, but can be reused many times.
Simple Example
Let’s consider a very simple example. We have a service that sends an email:
type SendEmail struct {
	From string
}

func (sender *SendEmail) Send(to, subject, body string) error {
	// It sends an email here, and perhaps returns an error.
	return nil
}
We also have a service that welcomes new customers:
type CustomerWelcome struct{}

func (welcomer *CustomerWelcome) Welcome(name, email string) error {
	body := fmt.Sprintf("Hi, %s!", name)
	subject := "Welcome"

	emailer := &SendEmail{
		From: "hi@welcome.com",
	}

	return emailer.Send(email, subject, body)
}
We could use it like:
welcomer := &CustomerWelcome{}

err := welcomer.Welcome("Bob", "bob@smith.com")
// check error...
It looks good. However, already we have run into one major problem. This is difficult to unit test. We don’t want to actually send out emails, but we still want to verify that the correct customer receives the correctly formatted email message.
This is where DI comes in. If we provide (inject) the dependency (the SendEmail, in this case) we can provide a fake one during tests. Let's refactor our CustomerWelcome:
// EmailSender provides an interface so we can swap out the
// implementation of SendEmail under tests.
type EmailSender interface {
	Send(to, subject, body string) error
}

type CustomerWelcome struct {
	Emailer EmailSender
}

func (welcomer *CustomerWelcome) Welcome(name, email string) error {
	body := fmt.Sprintf("Hi, %s!", name)
	subject := "Welcome"

	return welcomer.Emailer.Send(email, subject, body)
}
The usage becomes more complicated because we are now injecting the dependency ourselves:

emailer := &SendEmail{
	From: "hi@welcome.com",
}

welcomer := &CustomerWelcome{
	Emailer: emailer,
}

err := welcomer.Welcome("Bob", "bob@smith.com")
// check error...
However, now this is unit testable:
import (
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/mock"
)

type FakeEmailSender struct {
	mock.Mock
}

func (mock *FakeEmailSender) Send(to, subject, body string) error {
	args := mock.Called(to, subject, body)
	return args.Error(0)
}

func TestCustomerWelcome_Welcome(t *testing.T) {
	emailer := &FakeEmailSender{}
	emailer.On("Send",
		"bob@smith.com", "Welcome", "Hi, Bob!").Return(nil)

	welcomer := &CustomerWelcome{
		Emailer: emailer,
	}

	err := welcomer.Welcome("Bob", "bob@smith.com")
	assert.NoError(t, err)
	emailer.AssertExpectations(t)
}
With Great Complexity Comes Great Responsibly
Fundamentally, this principle of extracting dependencies works great. However, everything in technology is a trade-off. If we continue down this path we will see:
- Duplicate code. Imagine needing to use
CustomerWelcome in more than one place (several, or even dozens). Now we have duplicate code that initialises the
SendEmail, especially the repeated
From. Argh!
- Complexity and misunderstanding. To use a service we now have to know how to set up all of its dependencies, and each of those dependencies may, in turn, have its own. Also, there may be several ways to satisfy a dependency that compile correctly but provide the wrong runtime logic. For example, if we had more than one implementation of
EmailSender, we might provide the wrong one, and a customer could receive a message that should have been sent to a supplier. Oops!
- Maintainability rapidly decreases. If a service’s initialisation needs to change, or it gains new dependencies, you have to refactor every place it is used. Or worse, you miss one, such as changing the
From address in the
SendEmail. And now some emails are being sent with the wrong reply address. Oh no!
Don’t worry, there are solutions for these as well. Read on.
Building the Services With Functions
One fairly obvious solution is to use “create” functions that build the services for us. That is, one function is responsible for building one service:
func CreateSendEmail() *SendEmail {
return &SendEmail{
From: "hi@welcome.com",
}
}
func CreateCustomerWelcome() *CustomerWelcome {
	return &CustomerWelcome{
		Emailer: CreateSendEmail(),
	}
}
Now we can use it more simply and safely:
welcomer := CreateCustomerWelcome()
err := welcomer.Welcome("Bob", "bob@smith.com")
// check error...
The unit tests can also be updated:
func TestCustomerWelcome_Welcome(t *testing.T) {
	emailer := &FakeEmailSender{}
	emailer.On("Send",
		"bob@smith.com", "Welcome", "Hi, Bob!").Return(nil)
	welcomer := CreateCustomerWelcome()
	welcomer.Emailer = emailer
	err := welcomer.Welcome("Bob", "bob@smith.com")
	assert.NoError(t, err)
}
A few things to note:
- I’ve used a
Create prefix rather than
New, as
New is commonly associated with constructors in Go.
- The unit test does not need to use
CreateCustomerWelcome(); in fact, you can leave it as it was. One advantage of using the create function is that if the definition of the service changes, your unit tests will be less brittle. However, this is also a disadvantage, because you might miss some key refactoring that failing tests would otherwise point you to. Again, trade-offs.
Singletons
A singleton is an instance that is initialised once, but can be reused many times.
Up until now we have been creating a new instance every time we call a service. In larger, more complex codebases especially, this is quite wasteful: not so much in terms of memory usage and garbage collection, but in the cost of services that are expensive to load.
For example, suppose we had a
CustomerManager that loaded all customers into memory from a file. If we knew the data didn’t change, we would surely only want to do this once, rather than every time we wanted to look up a customer.
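A hypothetical CustomerManager along those lines could guard the expensive load with sync.Once so it runs at most once, no matter how many lookups happen. The type and its data below are illustrative, not from the article:

```go
package main

import (
	"fmt"
	"sync"
)

// CustomerManager is a hypothetical service whose data is
// expensive to load, so we load it at most once.
type CustomerManager struct {
	once      sync.Once
	customers map[string]string // id -> name; a real version would read a file
}

func (m *CustomerManager) load() {
	// Stand-in for reading all customers from a file.
	m.customers = map[string]string{"1": "Bob", "2": "Alice"}
}

func (m *CustomerManager) Lookup(id string) (string, bool) {
	m.once.Do(m.load) // runs load at most once, even across goroutines
	name, ok := m.customers[id]
	return name, ok
}

func main() {
	manager := &CustomerManager{}
	name, ok := manager.Lookup("1")
	fmt.Println(name, ok) // Bob true
}
```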
Getting back to the original example,
CustomerWelcome is stateless, which means we can safely reuse it without needing to create a new one each time.
SendEmail does actually have state, the
From field. However, we can also consider it stateless, because it’s a value that does not change throughout a single run of the application.
We can refactor our functions into a container to make our services singletons:
type Container struct {
	CustomerWelcome *CustomerWelcome
	SendEmail       EmailSender
}
func (container *Container) GetSendEmail() EmailSender {
if container.SendEmail == nil {
container.SendEmail = &SendEmail{
From: "hi@welcome.com",
}
}
return container.SendEmail
}
func (container *Container) GetCustomerWelcome() *CustomerWelcome {
	if container.CustomerWelcome == nil {
		container.CustomerWelcome = &CustomerWelcome{
			Emailer: container.GetSendEmail(),
		}
	}
	return container.CustomerWelcome
}
The functions are now attached to a struct called
Container. This is because if we allowed the services to remain global, it would affect the unit tests: a change made to a service would persist across tests and lead to some strange and hard-to-debug issues.
Each unit test should create a new
Container:
func TestCustomerWelcome_Welcome(t *testing.T) {
	emailer := &FakeEmailSender{}
	emailer.On("Send",
		"bob@smith.com", "Welcome", "Hi, Bob!").Return(nil)
	container := &Container{}
	container.SendEmail = emailer
	welcomer := container.GetCustomerWelcome()
	err := welcomer.Welcome("Bob", "bob@smith.com")
	assert.NoError(t, err)
}
Also, I have renamed the functions with a
Get prefix because they may return a new service, or the already initialized service (singleton).
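To make the singleton behaviour concrete, here is a trimmed-down, runnable version of the container (Send is stubbed out) demonstrating that repeated Get calls hand back the same instance:

```go
package main

import "fmt"

type EmailSender interface {
	Send(to, subject, body string) error
}

// SendEmail is stubbed out; only identity matters for this demo.
type SendEmail struct{ From string }

func (s *SendEmail) Send(to, subject, body string) error { return nil }

type Container struct {
	SendEmail EmailSender
}

// GetSendEmail lazily builds the service on first use, then
// returns the cached instance on every later call.
func (c *Container) GetSendEmail() EmailSender {
	if c.SendEmail == nil {
		c.SendEmail = &SendEmail{From: "hi@welcome.com"}
	}
	return c.SendEmail
}

func main() {
	c := &Container{}
	first := c.GetSendEmail()
	second := c.GetSendEmail()
	fmt.Println(first == second) // true: same singleton instance
}
```

Note that this lazy check is not goroutine-safe; if a container were shared across goroutines, it would need a guard such as sync.Once.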
Introducing: 🐺 dingo
We’ve covered most of the basics of DI so far. However, one issue remains: there is still a lot of code we need to write to configure the services. Since most services are configured the same way, with slight variations, we can use an intermediate language, YAML, to describe what the services are rather than how they should be initialised.
Introducing
dingo, the first type-safe DI framework for Go. It reads YAML and generates the necessary Go code to build your container. Using YAML is easier and safer than trying to do it by hand.
Let’s consider the original example. What would the configuration look like? Well, we create a file in the root of the project called
dingo.yml:
services:
  SendEmail:
    type: '*SendEmail'
    interface: EmailSender
    properties:
      From: '"hi@welcome.com"'

  CustomerWelcome:
    type: '*CustomerWelcome'
    properties:
      Emailer: '@SendEmail'
We only need to run the command:
dingo
This will generate a new file called
dingo.go with the
DefaultContainer ready to use:
welcomer := DefaultContainer.GetCustomerWelcome()
err := welcomer.Welcome("Bob", "bob@smith.com")
// check error...
dingo has many more advanced features that I hope to explain in more detail in future articles. Until then, I hope it saves you time, and I would love to hear your feedback.