text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91 values | source stringclasses 1 value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
#include <wx/valnum.h>
Validator for text entries used for integer entry.
This validator can be used with wxTextCtrl or wxComboBox (and potentially any other class implementing wxTextEntry interface) to check that only valid integer values can be entered into them.
This is a template class which can be instantiated for all the integer types (i.e.
short,
int,
long and
long long if available) as well as their unsigned versions.
By default this validator accepts any integer values in the range appropriate for its type, e.g.
INT_MIN..INT_MAX for
int or
0..USHRT_MAX for
unsigned short. This range can be restricted further by calling SetMin() and SetMax() or SetRange() methods inherited from the base class.
When the validator displays integers with thousands separators, the character used for the separators (usually "." or ",") depends on the locale set with wxLocale (note that you shouldn't change locale with setlocale() as this can result in a mismatch between the thousands separator used by wxLocale and the one used by the run-time library).
A simple example of using this class:
For more information, please see wxValidator Overview.
Type of the values this validator is used with.
Validator constructor. | https://docs.wxwidgets.org/trunk/classwx_integer_validator.html | CC-MAIN-2019-47 | refinedweb | 197 | 55.44 |
Understanding Compounding Interest on Your Investments
Over time, a modest-but-steady rate of compound interest can build into a sizable nest egg. The most powerful investments have stable, compounded returns. Regardless of what’s happening in the economy or stock market, you can always count on the magic of compounding.
Simple interest is a return that your financial institution pays you based on a certain percentage of every dollar you put aside in your savings account. For example, if you have $1,000 in your account (called principal), and the bank pays 2.5 percent annual interest, then you receive 2.5 cents for every dollar that was in your savings account for the entire year. After 12 months, you accrue an additional $25 in your savings account.
Savings accounts are the most familiar type of fixed-income investment, and they provide
Substantial safety for the principal balance
Low probability of failure to receive earned interest
High liquidity
One drawback to savings accounts is that returns (the amount of money you earn for giving up the immediate use of your money) often are the lowest available. A savings account is a classic example of a low-risk and low-return investment. Want to check out how much and how fast your savings can grow? Try the calculators at these Web sites:
FinanCenter features an online calculator that determines how much your money can earn. You can use this calculator to figure out how much money you’ll have at some future date or how long it will take to reach a predetermined savings goal. On this Web site’s home page, click on Our Products, then Calculators, then (under Calculator Categories) on Savings, and then on What will it take to become a millionaire?
CNNMoney provides a savings calculator to determine how fast your savings can grow depending on an interest rate, initial deposit, and additional payments. Go to CNN Money home page and click on Calculators in the left margin. Select the Savings Calculator.
Okay, so now you’re wondering just how long it takes to double your money. The Rule of 72 is a quick and dirty way to calculate your rate of return without using an online savings calculator. Simply divide the rate of return (interest rate) on your savings into 72. That gives you an estimate of how long it takes for your investment to double in value. An investment earning 6 percent annually, for example, doubles in 12 years (72 ÷ 6). | http://www.dummies.com/how-to/content/understanding-compounding-interest-on-your-investm.navId-323660.html | CC-MAIN-2016-07 | refinedweb | 417 | 52.8 |
Details
Description
add anchor /cmssite/cms/APACHE_OFBIZ_HTML#CASLDAP so can use it permanent link.
Activity
- All
- Work Log
- History
- Activity
- Transitions
BJ,
Finally, at r1069565. I have commited your patch modified with the only anchor in chapter. I close this issue and we should open (a) new one(s) for other anchors if neede
Thanks
OK, thanks BJ!
yeah sorry the docbook.xsd will give an error but will render correctly.
anchors go before what you want to display.
the conversion to books I estimate is about 24 manhours including testing, so will not be done for a while.
need to create a Set at toplevel which means updateding the content resources.
then redefine all the chapters in documentation folders to books and put them in the set.
as far as anchors
use
<chapter xml:id="ProductCatalogComponent" xmlns:xsi=""......
this will show the chapter header with table contents for chapter
<section xml:
this also puts them in the table contents as well as an anchor.
I will submit a patch for all you listed by tomorrow.
BJ,
I think we don't need as much answers at the same place. I think one is sufficient, and in chapter has you suggested seems the better way.
Could you please do the same (without the anchors in title, I mean) for the other anchors? I see it's not possible to put in section, then should be in titles (HELP_PRODUCT_Features.xml) because else it's not rendered (error as shown above)
I will wait your complete patch to commit them all. If you think/find it's not needed, I will simply commit the current patch, without the anchors in title
Thanks
PS: to have an idea of the changes I needed to do, have a look at
updated patch
yes I did a quick and dirty.
anchors can be multiple so I use the Html standard, since putting them in title makes it messy, IHmO
in reality anything below the anchor will be displayed.
so I did an ID in the section for CASLDAP which will be an anchor for the chapter.
seperate anchors for CAS and LDAP for the title.
I have a hard time with the new layout of the Jira the grey is hard to see for my old eyes. so could not make out you comment with code.
Forgot to say that your patch has been applied to trunk at r1069229
I updated the trunk at r1069372 and the demo (for the LDAP anchor only)
Ha yes, you can put it in the anchor, like
<title><anchor xml:OFBiz Single Sign On using CAS and LDAP</title>
Could you please handle that and submit a patch?
TIA
Mmm, Maybe there is a better place for anchors (than just below title), because it's a bit lower than the dynamic one, not a big deal, just less clear
BTW,I think you can now resonnably use this help feature in trunk. I believe, we have enough resources for that
=====TYPO=====
Actually there were some errors in
- anchors
- section with id
- note misplaced
- > and < chars
I fixed those that were blocking me, there are still some (minor?) to fix
before
[java] 2011-02-10 13:15:54,000 (http-0.0.0.0-8080-1) [ CmsEvents.java:145:INFO ] Path INFO for Alias: APACHE_OFBIZ_HTML [java]; Line #59; Column #45; Error attempting to parse XML file (href='ApacheOfbizUser.xml').
after
2011-02-10 13:38:11,843 (http-0.0.0.0-8080-2) [ ControlServlet.java:141:INFO ] [[[APACHE_OFBIZ_HTML] Request Begun, encoding=[UTF-8]- total:0.0,since last(Begin):0.0]] 2011-02-10 13:38:11,843 (http-0.0.0.0-8080-2) [ CmsEvents.java:145:INFO ] Path INFO for Alias: APACHE_OFBIZ_HTML; Line #189; Column #16; Note: namesp. cut : stripped namespace before processing Apache OFBiz official documentation.; Line #189; Column #16; Note: namesp. cut : processing stripped document Apache OFBiz official documentation.; Line #89; Column #16; Element br in namespace '' encountered in para, but no template matches. 2011-02-10 13:38:31,359 (http-0.0.0.0-8080-2) [ RequestHandler.java:638:INFO ] Ran Event [java:org.ofbiz.content.cms.CmsEvents#cms] from [request], result is [success] 2011-02-10 13:38:31,375 (http-0.0.0.0-8080-2) [ ConfigXMLReader.java:120:INFO ] controller loaded: 0.0s, 1 requests, 0 views in jndi:/0.0.0.0/cmssite/WEB-INF/controller.xml 2011-02-10 13:38:31,375 (http-0.0.0.0-8080-2) [ ConfigXMLReader.java:120:INFO ] controller loaded: 0.0s, 31 requests, 17 views in file:/D:/WorkspaceNew/sparta/framework/common/webcommon/WEB-INF/common-controller.xml 2011-02-10 13:38:31,375 (http-0.0.0.0-8080-2) [ ControlServlet.java:324:INFO ] [[[APACHE_OFBIZ_HTML] Request Done- total:19.532,since last([APACHE_OFBIZ_HTM...):19.532]]
It's committed at r1069368 in trunk. The trunk demo and the FAQ are updated
Thanks for your help!
updated
possibly you did not refresh(reload) the page after adding the patch.
Great, thanks BJ!
I see it works well on your site at (not yet CASLDAP BTW)
But, after applying patch, I can't access locally to, rather I get a blank page.
Same for BTW
Same also on trunk demo for
What I'm missing?
Thanks
Feel free to create an issue immediatly if you fear to lose the context | https://issues.apache.org/jira/browse/OFBIZ-4174?focusedCommentId=12993031&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel | CC-MAIN-2015-32 | refinedweb | 898 | 66.13 |
Fetch images with a React hook
Why fetching images with a React hook can be beneficial
29th JUNE 2020
3 min read
We all know the default way to fetch and display images:
<img src="/image.jpg" />
It is used across frameworks and libraries throughout the industry. But what if you want to improve your UX and display loading indicators at the position of the image while the image is still loading? Or what if you want to output a message if there was an error while loading the image? Unfortunately this is not that easy with the default way of loading images.
But no worries, because there is a solution. You can just load the image in parallel with a React hook, which will also tell you when the image has finished loading and if there has been an error.
First of all I would like to introduce the hook:
import { useState, useEffect } from "react"; export const useImage = (src: string) => { const [hasLoaded, setHasLoaded] = useState(false); const [hasError, setHasError] = useState(false); const [hasStartedInitialFetch, setHasStartedInitialFetch] = useState(false); useEffect(() => { setHasStartedInitialFetch(true); setHasLoaded(false); setHasError(false); // Here's where the magic happens. const image = new Image(); image.src = src; const handleError = () => { setHasError(true); }; const handleLoad = () => { setHasLoaded(true); setHasError(false); }; image.onerror = handleError; image.onload = handleLoad; return () => { image.removeEventListener("error", handleError); image.removeEventListener("load", handleLoad); }; }, [src]); return { hasLoaded, hasError, hasStartedInitialFetch }; };
What does this hook do? Well, it basically loads the image into the cache or retrieves it from the cache if it has already been loaded.
At the same time it provides useful information, such as whether the image has been loaded, whether an error occurred during loading or whether the initial fetch has started. You could then use this information to place a loading indicator in the place of the image as long as the image has not yet loaded. An example integration could look like the following:
const Demo = () => { const imageUrl = "/image.png"; const { hasLoaded, hasError } = useImage(imageUrl); if (hasError) { return null; } return ( <> ... {!hasLoaded && <LoadingIndicator />} {hasLoaded && <img src={imageUrl} />} </> ); };
No. One might expect this, since the source was entered in both the hook and the image element. However, only one request is sent, no matter how often a source is referenced. Fortunately, this also leads to the result that the hasLoaded is always turning true, if somewhere (no matter where exactly) on the page this exact image was loaded.
If you want to load images in an easy way and at the same time know the exact loading state, I can highly recommend this hook. In fact, all article thumbnails on typesafe.blog are loaded with this hook. In case you have a slow internet connection, you will see a loading indicator instead of the image until the image has loaded.
typesafe.blog - Made by Eyk RehbeinPrivacy Policy | https://typesafe.blog/article/fetch-images-with-a-react-hook | CC-MAIN-2020-40 | refinedweb | 466 | 55.44 |
Hello everyone!
I’m having a strange issue with the NodeMCU v2. Basically, I want to turn on an RGB led using the PWM pins on the board and setting the intensity of the red, green and blue pins with the pmw.write() function in order to pick a specific color. This is the simple code I used to test the led:
import pwm pinMode(D2, OUTPUT) # red pinMode(D5, OUTPUT) # green pinMode(D6, OUTPUT) # blue duty=0 while True: for i in range(-100, 100, 1): duty = 100-abs(i) pwm.write(D2.PWM, 100, duty, MICROS) pwm.write(D5.PWM, 100, duty, MICROS) pwm.write(D6.PWM, 100, duty, MICROS) sleep(10)
This should fade in and fade out the led (white color), but the problem is that it doesn’t seem to work well! The led starts flashing in a totally crazy way.
So, I decided to try the same code with another board, the ST Nucleo F401RE, and it runs flawlessly. Then again, I tried to use a very similar code in another IDE with the NodeMCU v2 to see if it was a problem with the hardware but, surprisingly, it worked well.
So, it seems like the problem is with Zerynth + NodeMCU v2 + PWM.
What do you suggest me to do?
Thanks!
| https://community.zerynth.com/t/issues-with-nodemcu-v2-and-pwm/1177 | CC-MAIN-2019-09 | refinedweb | 217 | 71.75 |
Chapter 4. Subroutines
You’ve already seen and used some of the built-in system functions, such as
chomp,
reverse,
&) in front. There’s a rule about when you
can omit the ampersand and when you cannot; you’ll see that rule by the
end of the chapter. For now, we’ll just use it every time that it’s not
forbidden, which is always a safe rule. We’ll tell you every place where
it’s forbidden, of course.
The subroutine name comes from a separate namespace, so Perl won’t be confused if you have a
subroutine called
&fred and a
scalar called
$fred in the same
program—although there’s no reason to do that under normal
circumstances.
Defining a Subroutine
To define your own subroutine, use the keyword
sub, the name
of the subroutine (without the ampersand), then the indented block of
code (in curly braces),[12] which makes up the body of the subroutine, something
like this:
sub marine { $n += 1; # Global variable $n print "Hello, sailor number $n!\n"; }
Subroutine definitions can be anywhere in your program text, but programmers who come from a background of languages like C or Pascal like to put them at the start of the file. Others may prefer to put them at the end of the file so that the main part of the program appears at the beginning. It’s up to you. In any case, you don’t normally need any kind of forward declaration.[*] Subroutine definitions are global; without some powerful trickiness, there are no private subroutines.[†] If you have two subroutine definitions with the same name, the later one overwrites the earlier one.[‡] That’s generally considered bad form, or the sign of a confused maintenance programmer.
As you may have noticed in the previous example, you may use any global variables within the subroutine body. In fact, all of the variables you’ve seen so far are globals; that is, they are accessible from every part of your program. This horrifies linguistic purists, but the Perl development team formed an angry mob with torches and ran them out of town years ago. You’ll see how to make private variables in Private Variables in Subroutines,” later in this chapter.
Invoking a Subroutine
Invoke a subroutine from within any expression by using the subroutine name (with the ampersand):[‖]
&marine; # says Hello, sailor number 1! &marine; # says Hello, sailor number 2! &marine; # says Hello, sailor number 3! &marine; # says Hello, sailor number 4!
Most often, we refer to the invocation as simply calling the subroutine.
Return Values
The subroutine is always invoked as part of an expression,
even if the result of the expression isn’t being used. When we invoked
&marine earlier, we were
calculating the value of the expression containing the invocation, but
then throwing away the result.
Many times, you’ll call a subroutine and actually do something with the result. This means that you’ll be paying attention to the return value of the subroutine. All Perl subroutines have a return value—there’s no distinction between those that return values and those that don’t. Not all Perl subroutines have a useful return value, however.
Since all Perl subroutines can be called in a way that needs a return value, it’d be a bit wasteful to have to declare special syntax to “return” a particular value for the majority of the cases. So Larry made it simple. As Perl is chugging along in a subroutine, it is calculating values as part of its series of actions. Whatever calculation is last performed in a subroutine is automatically also the return value.
For example, let’s define this subroutine:
sub sum_of_fred_and_barney { print "Hey, you called the sum_of_fred_and_barney subroutine!\n"; $fred + $barney; # That's the return value }
The last expression evaluated in the body of this subroutine is
the sum of
$fred and
$barney, so the sum of
$fred and
$barney will be the return value. Here’s that
in action:
$fred = 3; $barney = 4; $wilma = &sum_of_fred_and_barney; # $wilma gets 7 print "\$wilma is $wilma.\n"; $betty = 3 * &sum_of_fred_and_barney; # $betty gets 21 print "\$betty is $betty.\n";
That code will produce this output:
Hey, you called the sum_of_fred_and_barney subroutine! $wilma is 7. Hey, you called the sum_of_fred_and_barney subroutine! $betty is 21.
That
sub sum_of_fred_and_barney { print "Hey, you called the sum_of_fred_and_barney subroutine!\n"; $fred + $barney; # That's not really the return value! print "Hey, I'm returning a value now!\n"; # Oops! }
In this example, the last expression evaluated is not the
addition; it’s the
1,
meaning “printing was successful,”[*] but that’s not the return value you actually wanted. So be
careful when adding additional code to a subroutine, since the last
expression evaluated will be the return
value.
So, what happened to the sum of
$fred and
$barney in that second (faulty) subroutine? We
didn’t put it anywhere, so Perl discarded it. If you had requested
warnings, Perl (noticing that there’s nothing useful about adding two
variables and discarding the result) would likely warn you about
something like “a useless use of addition in a void context.” The term
void context is just a fancy way of saying that
the answer isn’t being stored in a variable or used in any other
way.
“The last expression evaluated” really means the last expression
evaluated, rather than the last line of text. For
example, this subroutine returns the larger value of
$fred or
$barney:
sub larger_of_fred_or_barney { if ($fred > $barney) { $fred; } else { $barney; } }
The last expression evaluated is either
$fred or
$barney, so the value of one of those
variables becomes the return value. You won’t know whether the return
value will be
$fred or
$barney until you see what those variables
hold at runtime.
These are all rather trivial examples. It gets better when you can pass values that are different for each invocation into a subroutine instead of relying on global variables. In fact, that’s coming right up.
Arguments
That subroutine called
larger_of_fred_or_barney would be much more
useful if it didn’t force you to use the global variables
$fred and
$barney. If you wanted to get the larger value
from
$wilma and
$betty, you currently have to copy those into
$fred and
$barney before you can use
larger_of_fred_or_barney. And if you had
something useful in those variables, you’d have to first copy those to
other variables, say
$save_fred and
$save_barney. And then, when you’re
done with the subroutine, you’d have to copy those back to
$fred and
$barney again.
Luckily, Perl has subroutine arguments. To pass an argument list to the subroutine, simply place the list expression, in parentheses, after the subroutine invocation, like this:
$n = &max(10, 15); # This sub call has two parameters
The list is passed to the subroutine; that
is, it’s made available for the subroutine to use however it needs to.
Of course, you have to store this list somewhere, so Perl automatically
stores the parameter list (another name for the argument list) in the
special array variable named
@_ for the duration of
the subroutine. The subroutine can access this variable to determine
both the number of arguments and the value of those arguments.
This means that the first subroutine parameter is stored in
$_[0], the second one is stored in
$_[1], and so on. But—and here’s an
important note—these variables have nothing whatsoever to do with the
$_ variable, any more than
$dino[3] (an element of the
@dino array) has to do with
$dino (a completely distinct scalar variable).
It’s just that the parameter list must be stored into some array
variable for the subroutine to use it, and Perl uses the array
@_ for this purpose.
Now, you could write the subroutine
&max to look a little like the subroutine
&larger_of_fred_or_barney, but instead of
using
$fred you
could use the first subroutine parameter (
$_[0]), and instead of using
$barney, you could use
the second subroutine parameter
(
$_[1]). And so you
could end up with code something like this:
sub max { # Compare this to &larger_of_fred_or_barney if ($_[0] > $_[1]) { $_[0]; } else { $_[1]; } }
Well, as we said, you could do that. But it’s pretty ugly with all of those subscripts, and hard to read, write, check, and debug, too. You’ll see a better way in a moment.
There’s another problem with this subroutine. The name
&max is nice and short, but it doesn’t
remind us that this subroutine works properly only if called with
exactly two parameters:
$n = &max(10, 15, 27); # Oops!
Excess parameters are ignored—since the subroutine never looks at
$_[2], Perl doesn’t care whether
there’s something in there or not. And insufficient parameters are also
ignored—you simply get
undef if you
look beyond the end of the
@_ array,
as with any other array. You’ll see how to make a better
&max, which works with any number of
parameters, later in this chapter.
The
@_ variable is private to
the subroutine;[*] if there’s a global value in
@_, it is saved away before the subroutine is
invoked and restored to its previous value upon return from the
subroutine.[†] This also means that a subroutine can pass arguments to
another subroutine without fear of losing its own
@_ variable—the nested subroutine invocation
gets its own
@_ in the same way. Even
if the subroutine calls itself recursively, each invocation gets a new
@_, so
@_ is always the parameter list for the
current subroutine invocation.
Private Variables in Subroutines
But if Perl can give us a new
@_ for every invocation, can’t it give us
variables for our own use as well? Of course it can.
By default, all variables in Perl are global variables; that is, they are
accessible from every part of the program. But you can create private
variables called lexical variables at any
time with the
my operator:
sub max { my($m, $n); # new, private variables for this block ($m, $n) = @_; # give names to the parameters if ($m > $n) { $m } else { $n } }
These variables are private (or scoped) to
the enclosing block; any other
$m or
$n is totally unaffected by these
two. And that goes the other way, too—no other code can access or modify
these private variables, by accident or design.[*] So, you could drop this subroutine into any Perl program
in the world and know that you wouldn’t mess up that program’s
$m and
$n
(if any).[†] It’s also worth pointing out that, inside the
if’s blocks, there’s no semicolon needed after the return value expression.
Although Perl allows you to omit the last semicolon in a block, in
practice you omit it only when the code is so simple that you can write
the block in a single line.
The subroutine in the previous example could be made even simpler.
Did you notice that the list
($m, $n)
was written twice? That
my operator
can also be applied to a list of variables enclosed in parentheses, so
it’s customary to combine those first two statements in the
subroutine:
my($m, $n) = @_; # Name the subroutine parameters
That one statement creates the private variables and sets their
values, so the first parameter now
has the easier-to-use name
$m and the
second has
$n. Nearly every
subroutine will start with a line much like that one, naming its
parameters. When you see that line, you’ll know that the subroutine
expects two scalar parameters, which you’ll call
$m and
$n
inside the subroutine.
Variable-Length Parameter Lists
In real-world Perl code, subroutines are often given
parameter lists of arbitrary length. That’s because of Perl’s “no
unnecessary limits” philosophy that you’ve already seen. Of course, this
is unlike many traditional programming languages, which require every
subroutine to be strictly typed (that is, to permit only a certain,
predefined number of parameters of predefined types). It’s nice that
Perl is so flexible, but (as you saw with the
&max routine earlier) that may cause
problems when a subroutine is called with a different number of
arguments than the author expected.
Of course, the subroutine can easily check that it has the right
number of arguments by examining the
@_ array. For example,
we could have written
&max to
check its argument list like this:[*]
sub max { if (@_ != 2) { print "WARNING! &max should get exactly two arguments!\n"; } # continue as before... . . . }
That
if test uses the “name” of
the array in a scalar context to find out the number of array elements,
as you saw in Chapter 3.
But in real-world Perl programming, this sort of check is hardly ever used; it’s better to make the subroutine adapt to the parameters.
A Better &max Routine
So let’s rewrite
&max to
allow for any number of arguments:
$maximum = &max(3, 5, 10, 4, 6); sub max { my($max_so_far) = shift @_; # the first one is the largest yet seen foreach (@_) { # look at the remaining arguments if ($_ > $max_so_far) { # could this one be bigger yet? $max_so_far = $_; } } $max_so_far; }
This code uses what has often been called the “high-water mark” algorithm; after a flood, when the
waters have surged and receded for the last time, the high-water mark
shows where the highest water was seen. In this routine,
$max_so_far keeps track of our high-water
mark, the largest number yet seen.
The first line sets
$max_so_far to
3 (the first parameter in the example code)
by shifting that parameter from the parameter array,
@_. So
@_
now holds
(5, 10, 4, 6), since the
3 has been shifted off. And the
largest number yet seen is the only one yet seen:
3, the first parameter.
Now, the
foreach loop will
step through the remaining values in the parameter list, from
@_. The control variable of the loop is, by
default,
$_. (But, remember,
there’s no automatic connection between
@_ and
$_; it’s just a coincidence that they have
such similar names.) The first time through the loop,
$_ is
5.
The
if test sees that it is larger
than
$max_so_far, so
$max_so_far is set to
5—the new high-water mark.
The next time through the loop,
$_ is
10.
That’s a new record high, so it’s stored in
$max_so_far as well.
The next time,
$_ is
4. The
if
test fails, since that’s no larger than
$max_so_far, which is
10, so the body of the
if is skipped.
The next time,
$_ is
6, and the body of the
if is skipped again. And that was the last
time through the loop, so the loop is done.
Now,
$max_so_far becomes the
return value. It’s the largest number we’ve seen, and we’ve seen them
all, so it must be the largest from the list:
10.
Empty Parameter Lists
That improved
&max
algorithm works fine now, even if there are more than two parameters. But what happens if there is
none?
At first, it may seem too esoteric to worry about. After all,
why would someone call
&max
without giving it any parameters? But maybe someone wrote a line like
this one:
$maximum = &max(@numbers);
And the array
@numbers might
sometimes be an empty list; perhaps it was read in from a file that
turned out to be empty, for example. So you need to know: what does
&max do in that case?
The first line of the subroutine sets
$max_so_far by using
shift on
@_, the (now empty) parameter array. That’s
harmless; the array is left empty, and
shift returns
undef to
$max_so_far.
Now the
foreach loop wants to
iterate over
@_, but since that’s
empty, the loop body is executed zero times.
So in short order, Perl returns the value of
$max_so_far—
undef—as the return value of the subroutine.
In some sense, that’s the right answer because there is no largest
value in an empty list.
Of course, whoever is calling this subroutine should be aware
that the return value may be
undef—or they could simply ensure that the
parameter list is never empty.
Notes on Lexical (my) Variables
Those lexical variables can actually be used in any block, not
merely in a subroutine’s block. For example, they can be used in the
block of an
if,
while, or
foreach:
foreach (1..10) { my($square) = $_ * $_; # private variable in this loop print "$_ squared is $square.\n"; }
The variable
$square is private
to the enclosing block; in this case, that’s the block of the
foreach loop. If there’s no enclosing block,
the variable is private to the entire source file. For now, your
programs aren’t going to use more than one source file, so this isn’t an
issue. But the important concept is that the
scope of a lexical variable’s name is limited to
the smallest enclosing block or file. The only code
that can say
$square and mean that
variable is the code inside that textual scope. This is a big win for
maintainability—if the wrong value is found in
$square, the culprit will be found within a
limited amount of source code. As experienced programmers have learned
(often the hard way), limiting the scope of a variable to a page of
code, or even to a few lines of code, really speeds along the
development and testing cycle.
Note also that the
my operator
doesn’t change the context of an assignment:
my($num) = @_; # list context, same as ($num) = @_; my $num = @_; # scalar context, same as $num = @_;
In the first one,
$num gets the
first parameter, as a list-context assignment; in the second, it gets
the number of parameters, in a scalar context. Either line of code
could be what the programmer wanted; you can’t tell
from that one line alone, and so Perl can’t warn you if you use the
wrong one. (Of course, you wouldn’t have both of
those lines in the same subroutine, since you can’t have two lexical
variables with the same name declared in the same scope; this is just an
example.) So, when reading code like this, you can always tell the
context of the assignment by seeing what the context would be without
the word
my.
So long as we’re discussing using
my() with parentheses, it’s worth remembering
that without the parentheses,
my only
declares a single lexical variable:[*]
my $fred, $barney; # WRONG! Fails to declare $barney my($fred, $barney); # declares both
Of course, you can use
my to
create new, private arrays as well:[†]
my @phone_number;
Any new variable will start out empty—
undef for scalars, or the empty list for
arrays.
The use strict Pragma
Perl tends to be a pretty permissive language.[*] But maybe you want Perl to impose a little discipline;
that can be arranged with the
use
strict pragma.
A pragma is a hint to a compiler, telling
it something about the code. In this case, the
use strict pragma tells Perl’s internal
compiler that it should enforce some good programming rules for the rest
of this block or source file.
Why would this be important? Well, imagine that you’re composing your program, and you type a line like this one:
$bamm_bamm = 3; # Perl creates that variable automatically
Now, you keep typing for a while. After that line has scrolled off the top of the screen, you type this line to increment the variable:
$bammbamm += 1; # Oops!
Since Perl sees a new variable name (the underscore is significant in a variable name), it creates a new variable and increments that one. If you’re lucky and smart, you’ve turned on warnings, and Perl can tell you that you used one or both of those global variable names only a single time in your program. But if you’re merely smart, you used each name more than once, and Perl won’t be able to warn you.
To tell Perl that you’re ready to be more restrictive, put the
use strict pragma at the top of your
program (or in any block or file where you want to enforce these
rules):
use strict; # Enforce some good programming rules
Now, among other restrictions,[†] Perl will insist that you declare every new variable,
usually done with
my:[‡]
my $bamm_bamm = 3; # New lexical variable
Now if you try to spell it the other way, Perl can complain that
you haven’t declared any variable called
$bammbamm, so your mistake is automatically
caught at compile time.
$bammbamm += 1; # No such variable: Compile time fatal error
Of course, this applies only to new variables; you don’t need to
declare Perl’s built-in variables, such as
$_ and
@_.[‖] If you add
use strict
to a program that is already written, you’ll generally get a flood of
warning messages, so it’s better to use it from the start, when it’s
needed.
Most people recommend that programs that are longer than a
screenful of text generally need
use
strict. And we agree.
From here on, most (but not all) of our examples will be written
as if
use strict is in effect, even
where we don’t show it. That is, we’ll generally declare variables with
my where it’s appropriate. But, even
though we don’t always do so here, we encourage you to include
use strict in your programs as often as
possible.
The return Operator
The
return operator
immediately returns a value from a subroutine:
my @names = qw/ fred barney betty dino wilma pebbles bamm-bamm /; my $result = &which_element_is("dino", @names); sub which_element_is { my($what, @array) = @_; foreach (0..$#array) { # indices of @array's elements if ($what eq $array[$_]) { return $_; # return early once found } } −1; # element not found (return is optional here) }
This subroutine is being used to find the index of
"
dino" in
the array
@names. First, the
my declaration names the parameters: there’s
$what, which is what we’re searching
for, and
@array, an array of values
to search within. That’s a copy of the array
@names, in this case. The
foreach loop steps through the indices of
@array (the first index is
0, and the last one is
$#array, as you saw in Chapter 3).
Each time through the
foreach
loop, we check to see whether the string in
$what is equal[*] to the element from
@array at the current index. If it’s equal, we
return that index at once. This is the most common use of the keyword
return in Perl—to return a value
immediately, without executing the rest of the subroutine.
But what if we never found that element? In that case, the author
of this subroutine has chosen to return
−1 as a “value not found” code. It would be
more Perlish, perhaps, to return
undef in that case, but this programmer used
−1. Saying
return −1 on that last line would be correct,
but the word
return isn’t really
needed.
Some programmers like to use
return every time there’s a return value, as a
means of documenting that it is a return value. For
example, you might use
return when
the return value is not the last line of the subroutine, such as in the
subroutine
&larger_of_fred_or_barney, earlier in this
chapter. It’s not really needed, but it doesn’t hurt anything. However,
many Perl programmers believe it’s just an extra seven characters of
typing.
Omitting the Ampersand
As promised, now we’ll tell you the rule for when a subroutine call can omit the ampersand. If the compiler sees the subroutine definition before invocation, or if Perl can tell from the syntax that it’s a subroutine call, the subroutine can be called without an ampersand, just like a built-in function. (But there’s a catch hidden in that rule, as you’ll see in a moment.)
This means that if Perl can see that it’s a subroutine call without the ampersand, from the syntax alone, that’s generally fine. That is, if you’ve got the parameter list in parentheses, it’s got to be a function[*] call:
my @cards = shuffle(@deck_of_cards); # No & necessary on &shuffle
Or if Perl’s internal compiler has already seen the subroutine definition, that’s generally okay, too; in that case, you can even omit the parentheses around the argument list:
sub division { $_[0] / $_[1]; # Divide first param by second } my $quotient = division 355, 113; # Uses &division
This works because of the rule that parentheses may always be omitted, except when doing so would change the meaning of the code.
But don’t put that subroutine declaration
after the invocation, or the compiler won’t know
what the attempted invocation of
division is all about. The compiler has to
see the definition before the invocation in order to use the
subroutine call as if it were a built-in.
That’s not the catch, though. The catch is this: if the subroutine has the same name as a Perl built-in, you must use the ampersand to call it. With an ampersand, you’re sure to call the subroutine; without it, you can get the subroutine only if there’s no built-in with the same name:
sub chomp { print "Munch, munch!\n"; } &chomp; # That ampersand is not optional!
Without the ampersand, we’d be calling the built-in
chomp, even though we’ve defined the
subroutine
&chomp. So, the real
rule to use is this one: until you know the names of all of Perl’s
built-in functions, always use the ampersand on
function calls. That means that you will use it for your first hundred
programs or so. But when you see someone else has omitted the
ampersand in his own code, it’s not necessarily a mistake; perhaps he
simply knows that Perl has no built-in with that name.[*] When programmers plan to call their subroutines as if
they were calling Perl’s built-ins, usually when writing modules, they often
use prototypes to tell Perl about
the parameters to expect. Making modules is an advanced topic, though;
when you’re ready for that, see Perl’s documentation (in particular,
the perlmod and perlsub
documents) for more information about subroutine prototypes and making
modules.
Nonscalar Return Values
A scalar isn’t the only kind of return value a subroutine may have. If you call your subroutine in a list context,[†] it can return a list of values.
Suppose you wanted to get a range of numbers (as from the range
operator,
..), except that you want
to be able to count down as well as up. The range operator only counts upward, but that’s easily
fixed:
sub list_from_fred_to_barney { if ($fred < $barney) { # Count upwards from $fred to $barney $fred..$barney; } else { # Count downwards from $fred to $barney reverse $barney..$fred; } } $fred = 11; $barney = 6; @c = &list_from_fred_to_barney; # @c gets (11, 10, 9, 8, 7, 6)
In this case, the range operator gives us the list from
6 to
11,
then
reverse reverses the list so
that it goes from
$fred (
11) to
$barney (
6), just as we wanted.
The least you can return is nothing at all. A
return with no arguments will return
undef in a scalar context or an empty list in
a list context. This can be useful for an error return from a
subroutine, signaling to the caller that a more meaningful return value
is unavailable.
Persistent, Private Variables
With
my we were able to
make variables private to a subroutine, although each time we called the
subroutine we had to define them again. With
state, we can still have private variables
scoped to the subroutine but Perl will keep their values between
calls.
Going back to our first example in this chapter, we had a
subroutine named
marine that
incremented a variable:
sub marine { $n += 1; # Global variable $n print "Hello, sailor number $n!\n"; }
Now that we know about
strict,
we add that to our program and realize that our use of the global
variable
$n isn’t allowed anymore. We
can’t make
$n a lexical variable with
my because it wouldn’t retain its
value.
Declaring our variable with
state tells Perl to
retain the variable’s value between calls to the subroutine and to make
the variable private to the subroutine:
use 5.010; sub marine { state $n = 0; # private, persistent variable $n $n += 1; print "Hello, sailor number $n!\n"; }
Now we can get the same output while being
strict-clean and
not using a global variable. The first time we call the subroutine, Perl
declares and initializes
$n. Perl
ignores the statement on all subsequent calls. Between calls, Perl
retains the value of
$n for the next
call to the subroutine.
We can make any variable type a
state variable; it’s not just for scalars.
Here’s a subroutine that remembers its arguments and provides a running
sum by using a
state array:
use 5.010; running_sum( 5, 6 ); running_sum( 1..3 ); running_sum( 4 ); sub running_sum { state $sum = 0; state @numbers; foreach my $number ( @_ ) { push @numbers, $number; $sum += $number; } say "The sum of (@numbers) is $sum"; }
This outputs a new sum each time we call it, adding the new arguments to all of the previous ones:
The sum of (5 6) is 11 The sum of (5 6 1 2 3) is 17 The sum of (5 6 1 2 3 4) is 21
There’s a slight restriction on arrays and hashes as
state variables, though. We can’t initialize
them in list context as of Perl 5.10:
state @array = qw(a b c); # Error!
This gives us an error that hints that we might be able to do it in a future version of Perl:
Initialization of state variables in list context currently forbidden ...
Exercises
See Appendix A for answers to the following exercises:
[12] Write a subroutine, named
total, that returns the total of a list of numbers. (Hint: the subroutine should not perform any I/O; it should simply process its parameters and return a value to its caller.) Try it out in this sample program, which merely exercises the subroutine to see that it works. The first group of numbers should add up to 25.
my @fred = qw{ 1 3 5 7 9 }; my $fred_total = total(@fred); print "The total of \@fred is $fred_total.\n"; print "Enter some numbers on separate lines: "; my $user_total = total(<STDIN>); print "The total of those numbers is $user_total.\n";
[5] Using the subroutine from the previous problem, make a program to calculate the sum of the numbers from 1 to 1000.
[18] Extra credit exercise: write a subroutine, called
&above_average, that takes a list of numbers and returns the ones that are above the average (mean). (Hint: make another subroutine that calculates the average by dividing the total by the number of items.) Try your subroutine in this test program.
my @fred = above_average(1..10); print "\@fred is @fred\n"; print "(Should be 6 7 8 9 10)\n"; my @barney = above_average(100, 1..10); print "\@barney is @barney\n"; print "(Should be just 100)\n";
[10] Write a subroutine, named
greet, that welcomes the person you name by telling them the name of the last person it greeted:
greet( "Fred" ); greet( "Barney" );
This sequence of statements should print:
Hi Fred! You are the first one here! Hi Barney! Fred is also here!
[10] Modify the previous program to tell each new person the names of all of the people it has previously greeted:
greet( "Fred" ); greet( "Barney" ); greet( "Wilma" ); greet( "Betty" );
This sequence of statements should print:
Hi Fred! You are the first one here! Hi Barney! I've seen: Fred Hi Wilma! I've seen: Fred Barney Hi Betty! I've seen: Fred Barney Wilma
[10]-in functions. That’s why this chapter is titled Subroutines, because it’s about the ones you can define, not the built-ins. Mostly.
[11] The code examples used in this book are recycled from at least 40% post-consumer programming and are at least 75% recyclable into your programs when properly decomposed.
[12] Okay, purists, we admit it: the curly braces are part of the block, properly speaking. And Perl doesn’t require the indentation of the block—but your maintenance programmer will. So please be stylish.
[*] Unless your subroutine is being particularly tricky and declares a “prototype,” which dictates how a compiler will parse and interpret its invocation arguments. This is rare—see the perlsub manpage for more information.
[†] If you wish to be powerfully tricky, read the Perl documentation about coderefs stored in private (lexical) variables.
[‖] And frequently a pair of parentheses, even if empty. As
written, the subroutine inherits the caller’s
@_ value, which we’ll be discussing
shortly. So don’t stop reading here, or you’ll be writing code with
unintended effects!
[*] The return value of
[*] Unless there’s an ampersand in front of the name for the
invocation, and no parentheses (or arguments) afterward, in which
case the
@_ array is inherited
from the caller’s context. That’s generally a bad idea, but is
occasionally useful.
[†] You might recognize that this is the same mechanism as used
with the control variable of the
foreach loop, as seen in the previous
chapter. In either case, the variable’s value is saved and
automatically restored by Perl.
[*] Advanced programmers will realize that a lexical variable may be accessible by reference from outside its scope, but never by name.
[*] As soon as you learn about
warn in the next chapter, you’ll see that
you can use it to turn improper usage like this into a proper
warning. Or perhaps you’ll decide that this case is severe enough to
warrant using
die, described in
the same chapter.
[*] As usual, turning on warnings will generally report this abuse
of
my, or you can call
1-800-LEXICAL-ABUSE and report it yourself. Using the
strict pragma, which we’ll see in a
moment, should forbid it outright.
[†] To learn about the other restrictions, see the documentation
for
strict. The documentation for
any pragma is filed under that pragma’s name, so the command
perldoc strict (or your system’s native
documentation method) should find it for you. In brief, the other
restrictions require that strings be quoted in most cases, and that
references be true (hard) references. Neither of these restrictions
should affect beginners in Perl.
[‖] And, at least in some circumstances, you don’t want to declare
$a and
$b because they’re used internally by
sort. So, if you’re testing this
feature, use other variable names than those two. The fact that
use strict doesn’t forbid these
two is one of the most frequently reported nonbugs in Perl.
[*] You noticed that we used the string equality test,
eq, instead of the numeric equality test,
==, didn’t you?
[*] In this case, the function is the subroutine
&shuffle. But it may be a built-in
function, as you’ll see in a moment.
[*] Then again, maybe it is a mistake; you can search the perlfunc and perlop manpages for that name, though, to see whether it’s the same as a built-in. And Perl will usually be able to warn you about this when you have warnings turned on.
Get Learning Perl, 5th Edition now with O’Reilly online learning.
O’Reilly members experience live online training, plus books, videos, and digital content from 200+ publishers. | https://www.oreilly.com/library/view/learning-perl-5th/9780596520106/ch04.html | CC-MAIN-2021-43 | refinedweb | 5,972 | 67.89 |
Agenda
See also: IRC log
<Ashok> Scribenick: Ashok
<scribe> Scribe: Ashok Malhotra
Noah: Goals are to decide what we can do to move ahead or decide not to move ahead
... Tim the TPAC committee is waiting for you to tell them if you want to talk about it
Tim: Can we agree to define the charter of the group here
... we have started a task force ... Raman asked for it
... asked Norm to join ... said yes
<noah> I think what Larry suggests is reasonable IF we can convince ourselves that this could be a realistic effort, that we have a sense of what timeframe for impact, etc. Then we can think about soliciting participation.
Raman: We should not call it a TAG task force
<noah> Also, I think this can't be a volunteer effort -- I think the point is that Tim, with our help, will pick a small, balanced set of experts to see if some new avenue can be discovered.
Raman: the charter will determine who participates and who participates will determine how well the task force does
Tim: Agree charter is most important ... will decide who joins and how successful we are
<jar_> coordination?
Tim: Divergence of XML and HTML
LM: My understanding of this isuue ahs been hampered beacuse there are no participants from XML community
... we need XML usecases etc.
Raman: We need people who are most affected
... Tim, you had a list on Tuesday
<noah> Noodling on charter: Goal is to develop specifically technical proposals and/or best practices that would increase the ability to 1) use XML tools to create and manipulate HTML and 2) to include XML fragments in HTML and 3) to include HTML fragments in XML.
Tim: --subset of HTML5 needed for simpler devices
- Impossible derive a RESTful API comapred to XForms ... cannot tell waht data it transmits
<noah> Noodling on charter: Goal is to develop specific technical proposals and/or best practices that would increase the ability to 1) use XML tools to create and manipulate HTML and 2) to include XML fragments in HTML and 3) to include HTML fragments in XML.
<noah> Noodling on charter: Goal is to develop specific technical proposals and/or best practices that would increase the ability to 1) use XML tools to create, manipulate and "round trip" HTML and 2) to include XML fragments in HTML and 3) to include HTML fragments in XML.
Tim: Also use XML tools to create HTML and load XML
<noah> Addition to the charter: The requirement is to do this in a way that will be widely deployed.
Noah: Whether to push well-formed should wait until task force is formed
Tim: What is important for XML folks is infoset and what is important for HTML folks is DOM
Raman: Make sure you deliver good DOMs via good markup
Noah: The charter should start with broad success goals
<Zakim> noah, you wanted to discuss charter proposal
Noah: start by saying the taskforce has to find something that widely accepted
LM: It is not a good idea to talk about XML community and HTML community
... if we are successful there will be a single community
<noah> What I said more specifically was: the charter should require a solution that will be widely deployed, for the stated purposes. Achieving that will require the task force to confront the very difficult technical challenges, e.g. that well-formedness is seen as conflicting with existing practice, is cumbersome for some, and so can't in general be required.
LM: some use cases such as EXI and DSig work for XML but not HTML
... perhaps increase scope of polyglot documents
Tim: Raman spoke about organizing the ultimate converence not a quick fix
<noah> To my 3 charter points, I might add: attempt to increase the set of documents that behave as polyglot/chameleon, I.e. that can be served with reasonably compatible semantics, DOM, etc. as either text/html or application/xhtml+xml
Raman: HTML5 spec is starting to add rules
... such as Table/Tbody
... Webkit will fix bad Table syntax
... and serialize correctly
... I think these kinds of rules will become common
... problems are with things like xml:id and id, xml:lang and lang
... leads to 2 worlds
Noah: XML community may need to move
Raman: Larry talks about tools ... HTML tools was IE
... XML tools are there and are used ... legacy
... You said creating HTML from XML is easy ... I don't think so
Noah: The XMLstuff is of value to some people ... they have deployed code ... that is why we had trouble with stuff like XML 1.1
... Many of the changes would change the value proposition for XML
... they would resist change
<Zakim> ht, you wanted to disagree with TV's analysis: there are two XML communities
<johnk> masinter: isn't the standard HTML <-> XML transformation the thing (formerly?) known as XHTML5?
HT: There are 2 XML communities ... people who use XML as XML for interprocess communication, not interested in human readers
... there is another community who care about human readers. The 2 communities use some of the same tooling but have different goals
<Zakim> masinter`, you wanted to talk about transformation gateway, ask if their's a value to standardizing that
LM: XML/HML transformation gateways ... differences in the DOMs
<jar_> wannabe-commutative diagrams (html / xml / source / dom)
.
LM: How to apply XML DSig to Webpage?
<johnk>
Tim: Including scripts?
LM: That's a kind of workflow where we do not know that the standards work together
<Zakim> noah, you wanted to ask how this is positioned relative to HTML5 specification development process
LM: use this kind of value-based reasoning to drive the taskforce so it is nor percieved as an academic exercise
Tim: The Polyglot investigation did address that
<jar_> lm: Let's identify the real value of using XML, e.g. signed content, and suggest that it would be awfully nice to get the same value for HTML
Tim: there are some constraints ... I'm happy to live with these constraints
... happy not to write scripts that use some DOM properties
... polyglot defines some rules and if you follow them everything just works.
JK: Sceptical of that claim
<Zakim> noah, you wanted to ask how this is positioned relative to HTML5 specification development process
Noah: The chairs will ask how this affects them logistically
... questions about process to complete specs, etc.
Tim: The task force will take the polyglot idea and take it further ... polyglot idea did not come from HTML5
... Do we feel about changing XML so the XML parser would use the default namespace from the MIMe type
HT: I talked to Liam ... this may be acceptable as a change to XML
Yves: Include XML fragment in HTML ... this is the only one that will have impact on HTML
Noah: Task force will decide what to recommend
Yves: Should the task force work on pussing XML fragments in HTML
<noah> NM: I think the task force should try to figure out which of these specific changes, in XML or HTML specifications, would likely be deployable and a win for the communities as a whole
HT: You edit the doc ... edit the SVG still works because using a HTML5 parser ... then you cut it out and it fails
Yves: You have a serialized DOM ... issue on the tools not a language problem
LM: View Source may have some options
<noah> 4 mins to go on this subject
Noah: Tim, do you have what you need re. talk at TPAC and deciding whether and how to charter the task force?
Tim: GOAL of task force: Enable and enhance mixed XML/HTML workflows
... We have a good story and we can go ahead
... need 6 more folks
... Steps moving forward: Draft charter, circulate to TAG, Circulate to candidate TF, Circulate to AB,
LM: We may also need an IG
HT: You may want a period of negotiation with the candidate TF
Break for 10 minutes
<DKA>
<DKA> HTML5 is the future.
<DKA> ( apparently )
<noah> close ACTION-459
<trackbot> ACTION-459 Schedule F2F discussion of IRIbis status and Larry's proposal closed
Noah: HT wanted a f2f update from Larry
<noah> ACTION-410?
<trackbot> ACTION-410 -- Larry Masinter to let the TAG know whether and when the IRIEverywhere plan in HTML WG went as planned -- due 2010-11-01 -- OPEN
<trackbot>
HT: The TAG should hear about the way in which you rearchitected the way in which the specs will work
... what happened? Did the idea succeed.
LM: There is a spec ... above
... there is a WG chartered to finish the spec ... we are moving forward
... There were 4 different things
<noah> Private conversation: DanA kindly agrees to integrate minutes for Tuesday
IRI's could occur in:
XML
HTML
OTHERIRI
FORMALIRI -3987
scribe: LEIRI had its own IRI and had an algorithm for transforming to FORMALIRI
... URI is a FORMAL URI but also lossy
HTML URI was not well defined ...
<DKA> [ Apropos to the privacy discusion: Google Engineer Builds Facebook Disconnect (TechCrunch): ]
Added "PRESENTATIONS" below HTML
scribe: what you write on the side of a bus is not a sequence of codepoints
... impt to separate out the presentations from the sequence of unicode characters
LM: Deprecate the idea that there is a set of chars that are not allowed ... there is a syntax that determines what is allowed
IRI -> sequence of unicode parts
IRI components -> processing
<DKA> Image of whiteboard from XHTML-HTML taskforce discussion:
JR: There is a presentation, there are octets and then parts of the IRI
IRI is a sequence of codepoints
scribe: someone has to pick a character encoding
<ht> iso-8859-1, etc.
LM: The parsing happens on the IRI and from that you get a set of components and this is what you process
test
LM: What HTML was doing was not a syntactic item ... it was a processing step ... step can know about document encoding
... We now have a document with correct structure but lot of the details are still wrong
HT: The splitting into components is character independent
LM: There is a change proposal for HTML spec that points at this
Noah: How is HTML WG looking at this?
LM: Adam authored an alternate change proposal
... folks want this problem to go away
... 2 different versions of IDNA? how do we handle bi-di?
... for Bi-Di we need directional markers
... this is still up in the air
... trying to get critical mass
... There is a document ... still has bugs ... needs people to take a look
HT: This is good stuff. If the politics get sticky let us know.
LM: Politics is sticky
... Need to change registry to register IRI schemes
Tim: How do I need to change RDF code ... now it normalizes the URI
LM: We did not change URIs
Tim: If it is not URI it is hex encoded
LM: If you had a Chinese assertions the hex encoded URI will not work
Tim: This breaks 10 yrs worth of code
HT: The hex encoded URI will fail unless client libraries update
LM: It was a hard choice ... none of the choices were great ... I think this was the best choice
<noah> ACTION-410?
<trackbot> ACTION-410 -- Larry Masinter to let the TAG know whether and when the IRIEverywhere plan in HTML WG went as planned -- due 2010-11-01 -- OPEN
<trackbot>
<noah> close ACTION-410
<trackbot> ACTION-410 Let the TAG know whether and when the IRIEverywhere plan in HTML WG went as planned closed
<jar_> ACTION jar to assess potential impact of IRI draft on RDF/XML, OWL, and Turtle
<trackbot> Created ACTION-487 - Assess potential impact of IRI draft on RDF/XML, OWL, and Turtle [on Jonathan Rees - due 2010-10-28].
<jar_> action-487 due 2011-12-01
<trackbot> ACTION-487 Assess potential impact of IRI draft on RDF/XML, OWL, and Turtle due date now 2011-12-01
<jar_>
Noah: We need to discuss about how to proceed on Metadata
jar: Just typed this is in last night ... goal is to prevent bad things from hapenning
... lots of metadata efforts
... is my list about right?
... Trying to put things into a framework
<DKA> Another metadata effort I have been involved with in the past: PRISM -
Noah: Goal is to guide the community ...
jar: Options on how to add metadata ... how to choose
<DKA> Also - POWDER :
jar: If I get guidance i can produce draft by early spring
Tim: These are things we can do and how to do it ... where are the orange blobs [reference to diagram on whiteboard, blobs are research areas]?
... crises or opportunities?
... I need a heat map
<Zakim> masinter`, you wanted to talk about: bad things that can happen, other issues
<timbl> Threat level opportunity level
LM: Big space, stuff is moving
... preventing bad things has more urgency
... people choosing metadata vocabularies that are different to map between ... same data. different languages
<timbl> Threat: Lack of harmonization of metadata ontology
<timbl> --Larry
LM: Vocabulary divergence
... the other is when you have multiple sources of metadata and how to pick what you want
<timbl> Metadata services / indexes
<timbl> . . . SPARQL
LM: possible conflict between metadata sources ... metadata in lots of different places
... we are at risk of W3C exacerbating the situation
... The outline is fine
Noah: Probes on what LM wants to proceed
Tim: Orange blog on create/curate formats?
<Zakim> ht, you wanted to ask about metadata defined how?
ht: metadata for what?
... what's the domain?
jar: Metadata is data about digital objects
<Zakim> masinter`, you wanted to point out media annotation working group work
LM: There is a media annotations WG ... they have a document about metadata and metadata resolution etc.
... perhaps ask if their idea of metadata and metadata resolution matches ours
<ht> hst: "data about digital objects" is fine with me, against a background of focussing/foregrounding the Information Science community's view of what metadata/digital objects is/are
jar: Looks like a format ... for delivering they reference powder
... Easy answer is Semantic Web gives you an answer [JAR looking at these minutes has a feeling he said "RDF" not "Semantic Web"]
<Zakim> timbl, you wanted to say that at the moment of course the digital objects out there of great interest are [government] linked open datasets. It is very timely to encourage work on
<timbl>
Tim: UK Govt uses CKAN
Jar: You register things in it so that they can be found
Tim: Has an API ... not particularly RDF oriented ... cannot do Math
... Has a keywork query info
... Connect up people who are working on repositories with SPARQL code
<DKA> Just looking at (API for media annotation API) and thinking about in the context of Web Application APIs and also in the context of the metadata discussion...
Ashok: Should the TAG recommend "use RDF (and OWL) for metadata and SPARQL?
Tim explains 5 star rating for data
<noah> JAR: That's new in the outline: discussion of APIs to access metadata
<timbl> + to talk about Jeni Tennison's work
<johnk> is Jeni Tennison's rdfquery work
Tim: Emphasize, that RDF & SPARQL for data and for metadata
... Jeni's work ... RDF repository with SPARQL access
<ht>
<DKA> Interesting blog post from Russell Beattie on microformats in HTML5:
Tim: typical user does not understand/use SPARQL
<jar_> this is off topic
Tim: there is an API that generates SPARQL
<jar_> very interesting but it has nothing to do specifically with metadata
Noah: Pl. clarify what happens at client and what at the server
Tim: We need to say how to use the SPARQL
<Zakim> noah, you wanted to ask about going too far
Jar: How best to use the RDF stack is a goal and work in progress
... Here are the benefits you get if you use RDF and SPARQL
Noah: We could write good practice notes
jar: If someone is immersed in another metadata format we have to be more nuanced
... make properly qualified statements
<Zakim> masinter`, you wanted to argue that "metadata" is a different topic from "data"
<jar_> We lose credibility if our statement about the goodness of RDF is too broad.
LM: If everyone uses RDF but different vocabularies that would be bad. We want to say RDF is processable ... and use same vocabularies
<jar_> TBL said matching vocabularies is easy compared to getting the information formatted in the same way. JAR said that in his experience, format conversion is easy, relating semantics is very hard.
LM: If data formats are different then it may not be a big problem
... Some more immediate things we could do ...
Tim: What will the finding be ... Larry is pointing out a hotspot
LM: Two areas hapenning in W3C where we can help - microformats vs. RDFa and Media Annotations
Tim: Need some leadership in the area of metadata fo Linked Data
<noah> ACTION-282 Due 2011-04-01
<trackbot> ACTION-282 Draft a finding on metadata architecture. due date now 2011-04-01
LM: Personally, I would put this as higher priority than Persistence
jar: [persistence is higher priority for JAR]
<noah> John to integrate Wed. minutes
<noah> Jonathan to integrate Thurs. minutes
<jar_> TBL: Star rating for linked data.
<jar_> 1. on web
<jar_> 2. machine readable
<jar_> 3. non-proprietary
<jar_> 4. in rdf
<jar_> 5. is linked
<scribe> Scribe: Jonathan Rees
<scribe> Scribenick: jar_
<noah> How are we doing with scribing this afternoon?
Dan A introducing web app API topic
DKA: maybe aim for something could be useful to future people working in the space
<noah> Where's the threat, and where is there confusion?
DKA: what architectural principles might apply to API design?
... What other technologies are core building blocks, things that one is advised to build on?
... It would be beneficial to do some work in this area
masinter: Re APIs, there's a perspective that the web doesn't need versions.
... With content, you have ignore-unknown and so on
... But how do you do API evolution without versions?
... How big is the API, how do you decompose it. What happens when the language itself evolves?
... Are we talking only javascript, or also Java and so on?
... These are the kinds of questions I have in mind when we talk about architecture.
<Zakim> noah, you wanted to talk about composability?
noah: TAG should emphasize things that help the whole to come out right. Fit well compose with other things
... There is some of this in the Javascript world
... We could emphasize the things that have to do with the Web
<Zakim> masinter`, you wanted to ask some questions about versioning of APIs, granularity of APIs, data structures, evolution of JavaScript
<Zakim> timbl, you wanted to say Multi-lamguage APIs which the DOM aimed for are sub-optimal for tight binding of language t the data. 2) versioning? Programmers cope with bits not being
noah: What happens when there are multiple events/handlers any of which can fire (e.g.)
timbl: Back when, you were supposed to make an IDL, then bindings. What programmers look for now is a much closer binding of API and language (e.g. iterators). New pattern of higher-order functions.
... IDL may be too limited... short names are good, you can assume names are scoped [unlike IDL style]...
... instead of calling a method on something, just iterate over it... seems the way to go
<noah> I think IDL is motivated primarily when you want to support multiple languages, such as VBScript. I agree with Tim, getting JavaScript right, and benefiting from idiomatic constructs is the right way to go.
ashok: Touches on: How do you save state? How do you communicate with other apps? Storage? Authorization? We ought to coordinate this work
<noah> By the way, Tim pointing out the JQuery/Dojo idiom of chaining functions is a great example of using an overarching architectural approach to get composability.
dka: Re inter-webapp communication, look at Open Ajax Hub. Creates a secure environment for this.
... Between different frames
ashok: Example?
dka: 2 apps in different tabs, one wants to communicate with the other
noah: Restaurant app + map app mashup
dka: Ashok, you were talking about a local app in browser wanting to talk on an ODBC connection to a local database not in the browser? Nobody working on that
noah: What if we *don't* work on this?
dka: Privacy is bigger and more urgent compared to APIs
<noah> NM: I tried to say, what do we lose or what do we slow down if we work on this?
ht: EXT
jar: What about toolkits, if we were to work on this we might see what these efforts are and if they're happy
noah: Put off til spring, do privacy in meantime? Or what?
<DKA> action-461?
<trackbot> ACTION-461 -- Daniel Appelquist to draft "finding" on Web Apps API design -- due 2010-10-31 -- OPEN
<trackbot>
dka: I'd like to talk to people at TPAC doing API design
<noah> . ACTION: Appelquist to solicit at TPAC perspectives on what TAG could/should do on APIs
dka: to get better understanding of how TAG work would be received, and get input
<noah> . ACTION: Appelquist to solicit at TPAC perspectives on what TAG could/should do on APIs Due: 2010-11-09
<noah> ACTION-461 Due 2010-12-31
<trackbot> ACTION-461 Draft "finding" on Web Apps API design due date now 2010-12-31
<noah> ACTION: Appelquist to solicit at TPAC perspectives on what TAG could/should do on APIs Due: 2010-11-09 [recorded in]
<trackbot> Created ACTION-488 - Solicit at TPAC perspectives on what TAG could/should do on APIs Due: 2010-11-09 [on Daniel Appelquist - due 2010-10-28].
ashok: We ought to produce something on evercookie thing quickly
dka: draw on recent posts to www-tag
... "privacy" is a big thing, maybe shift subject to user intent
<noah> NM: I like thinking about this first in terms of: there is a risk that software will be able correlate events that the user does not wish to have correlated. E.g. the same user who bought this house at a real estate browsed for bars at some search site.
masinter: [see above] Really clearing private information is really hard, you have to cleanse every server
... Someone might be confused about this
<noah> NM: We can explain that, among the reasons this causes concern, is that such correlation can sometimes allow discovery of information that the user might wish to keep private (I don't want the people who are selling me the house to know that I drink a lot.)
dka: The scope in an ASAP statement should be limited to evercookie, how standards community might respond
ashok: We could say: browsers ought to let you clear private information. That's tricky to say
<Yves> think of spam, link to remove from "spam lists" that are there to check the validity of emails... and the never-ending spam/anti-spam race. same issue with tracking and cookies
<Zakim> ht, you wanted to ask about the connection with JAR's perspective
ashok: ... what else could we soapbox about?
ht: Butting heads: You must be able to clear browser state / You can't clear browser state
... That you're trying to protect your own private
<Zakim> noah, you wanted to discussing referring expression
ht: The way to achieve desired effect is to institute accountability
... [...] is not a referring expression
noah: Given that we can't plug all the holes, should we try to plug any?
... We can say that there are other ways to look at the problem
... Can we settle that question?
... The steganography example (URI history list) is what pushed me over
raman: As we go mobile, the trend is to be able to easily share context... URIs communicated from browser to phone
<noah> NM: The key thing I said is that I think the TAG should start by settling the question of whether Evercookie just shows that the list of things to worry about is a bit longer than you thought, or whether it indicates that we will not typically succeed in providing effective protection at this time.
raman: Out of band info being put into local storage, will soon be put into cloud storage. I predict all local storage will move to cloud
... Users have conflicting requirements
... Bad guys will use same hooks to do bad things
<noah> NM: I tend to believe the latter. That being the case, we need to then decide whether it's worth asking people to plug at least some of the holes. My leaning is "yes, sometimes", Henry seems to say "probably not", but I think that's the next discussion we should have.
raman: This is a wakeup call. No easy answer
dka: Looks like a false dichotomy. Either you admit web privacy is dead, or you plug a million holes
... There could be tactical approaches. Plug some holes, help a bit, help users to make informed decisions
... also longer term work to be done on privacy, much bigger than TAG, but TAG can play a role, workshops/intitiatives
<noah> . ACTION: Appelquist to prepare early draft of TAG thoughts on implications of Evercookie. Due: 2010-12-08
dka: We can show leadership by saying something [appropriate / reasoned / limited] about evercookie
<noah> ACTION: Appelquist to prepare early draft of TAG thoughts on implications of Evercookie. Due: 2010-12-08 [recorded in]
<trackbot> Created ACTION-489 - Prepare early draft of TAG thoughts on implications of Evercookie. Due: 2010-12-08 [on Daniel Appelquist - due 2010-10-28].
yves: It's more like spam. Never ending battle. But we have to keep fighting.
<noah> If my privacy is protected as well as I am insulated from spam, then my life is one big open book.
yves: Explaining issue is important
... it's a tax
masinter: 'Privacy' is not the quantity we're trying to optimize
... Goal is to match what's happening with what's expected
... I send email to a list that I think is limited readership, is it really?
... Put evercookies in that context and I'll be happy
noah: (administrative interrupt as DKA departs) Considering skipping call next week (TPAC is the week after that)
ashok: Encrypt everything at a low level?
jar: Hal [Abelson] has been looking at tagging-based hardware to support accountability
noah: Unknown people can't be held accountable
... they're piecing together information about me
masinter: We haven't done the threat analysis. Don't know what problem to solve
noah: ...
masinter: They have my IP address and HTTP headers, what's new?
ht: The evercookies point was that the vulnerabilities were already there
yves: Companies are buying one another, and otherwise sharing information with one another
... so you get correlations
masinter: Clearing your cookies not only doesn't get you what you need, it gets in the way of getting what you want
<Zakim> ht, you wanted to make the strategy point
(people use 'clear cookies' to logout / clear credentials, and it often works)
ht: As far as new activity in this area might go, it shouldn't confuse privacy with protecting private data
... If there are thought leaders giving a message that's the right one, then we (W3C) could get behind it
noah: We have enough actions, yes?
<noah> ACTION: Noah and others(?) going to privacy workshop to report back to the TAG? [recorded in]
<trackbot> Created ACTION-490 - And others(?) going to privacy workshop to report back to the TAG? [on Noah Mendelsohn - due 2010-10-28].
jar: Given that we're not on top of thinking on this, how about if someone gets educated by reading something (e.g. about Hal's ideas)
<noah> ACTION-490 Due 2010-12-21
<trackbot> ACTION-490 And others(?) going to privacy workshop to report back to the TAG? due date now 2010-12-21
A#B where A -> C#D
yves: Taxonomy of fragids
... FIrst case: book chapter. Also SVG file with multiple icons, point to a particular icon
... I tested the SVG fragment case and it worked (and it's in the spec)
timbl: In the python API, you don't have enough information
... x = urlib.urlopen('foo')
... s = x.read()
... x.headers
jar: I think Julian was saying the whole discussion is moot, since the decision was made 10 years ago and published in the errata.
... So the time to review it would have been then. Cat's out of the bag.
<Yves>
jar: (To Yves) Then this SVG case is the example that I had asked for?
(everyone trying to track down ietf commitment to this erratum)
(looking at archive's version of skrb.org)
<Yves>
ht: Possible courses of ACTION: (1) Revert this change to Location:. Does harm, no evidence of use. (2) We don't know if anyone's using it, we don't have fragment combination rules, leave it in. (3) Here are frag combination rules that cover particular cases, don't do it otherwise (i.e. negative consequences will follow).
masinter: (4) You can have one fragment id, but not two.
ht: The people who set up a redirect are not the same as the people who capture the fragid URI
... Or, people publish docs with names in them. I select something that seems to have a name, click view fragment source, put together a URI, and send it on to someone.
... It's not necessarily the case that there is coordination
masinter: You can do this, but something might break.
masinter (reworded by jar): If you deploy a 30x Location: C#D, then be aware that anyone who creates a URI A#B, might be inconvenienced (since there are no fragment combination rules).
scribe: (that is, A 30x redirects to C#D)
<timbl>
masinter: There's a tendency to want to control both sides of the conversation - MUSTs that apply to parties not constrained by the spec in question
<timbl> $ curl -I
<timbl> HTTP/1.1 302 Moved Temporarily
<timbl> Location:
yves: Case 2. Absolute fragment - exactly one place in the document.
... Compare xpointer, from one point in a doc, select something just following, not same as selecting from whole document.
<timbl> >>> import urllib
<timbl> >>> x = urllib.urlopen("")
<timbl> >>> s = x.read()
<timbl> >>> s
<timbl> '<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"> )
timbl: Agree only on condition... that there is a sufficient health warning
yves: Notes that Larry proposed such a health warning
ht: Let's table this
<timbl> ____ [begin demo] ____
<timbl> $ curl -L
<ht> --2010-10-21 15:17:26--
<ht> Resolving purl.org (purl.org)... 132.174.1.35
<ht> Connecting to purl.org (purl.org)|132.174.1.35|:80... connected.
<ht> HTTP request sent, awaiting response... 302 Moved Temporarily
<ht> Location: [following]
<ht> --2010-10-21 15:17:26--
>
Take action-473 to email
action-454?
<trackbot> ACTION-454 -- Daniel Appelquist to take lead in organizing outside contacts for TAG F2F -- due 2010-10-05 -- OPEN
<trackbot>
<noah> close ACTION-454
<trackbot> ACTION-454 Take lead in organizing outside contacts for TAG F2F closed
<noah> ACTION-116 Due 2011-02-11
<trackbot> ACTION-116 Align the tabulator internal vocabulary with the vocabulary in the rules, getting changes to either as needed. due date now 2011-02-11
action-280?
<trackbot> ACTION-280 -- John Kemp to (with John K) to enumerate some CSRF scenarios discussed in Jun in Cambridge -- due 2010-10-11 -- OPEN
<trackbot>>
<johnk> -- due 2010-11-01 -- OPEN
<trackbot>
<noah> ACTION-381 Due 2010-12-01
<trackbot> ACTION-381 Spend 2 hours helping Ian with due date now 2010-12-01
<noah> ACTION-487?
<trackbot> ACTION-487 -- Jonathan Rees to assess potential impact of IRI draft on RDF/XML, OWL, and Turtle -- due 2011-12-01 -- OPEN
<trackbot>
<noah> ACTION-478?
<trackbot> ACTION-478 -- Jonathan Rees to prepare a first draft of a finding on persistence of references, to be based on decision tree from Oct. F2F Due: 2010-01-31 -- due 2010-10-27 -- OPEN
<trackbot>
<noah> ACTION-476?
<trackbot> ACTION-476 -- Jonathan Rees to draft a short note to 3023bis editors reflecting the discussion / consensus... -- due 2010-10-26 -- OPEN
<trackbot>
<noah> ACTION-23?
<trackbot> ACTION-23 -- Henry S. Thompson to track progress of #int bug 1974 in the XML Schema namespace document in the XML Schema WG -- due 2010-11-30 -- OPEN
<trackbot>
<timbl> I wonder whether has been overtaken by events - no one is doing XML 1.1, only XML 1.0 erratum
ADJOURNED | http://www.w3.org/2001/tag/2010/10/21-minutes | CC-MAIN-2018-26 | refinedweb | 5,424 | 70.53 |
Find Questions & Answers
Can't find what you're looking for? Visit the Questions & Answers page!
Hi All,
I am implementing a HTTP to IDOC scenario which will generate the multiple IDOC in target side.So We have changed the occurrence of IDOC 0 to unbounded.While doing test in ID tab I am getting the receiver determination error.
Note : I have changed the IDOC occurrence o to unbounded.
Even we have used IDOC namespace in the target side for operation mapping.
Can anyone help us why our interface is not working and what will be the root cause of this.
Below is the error screenshot:
Hi Aakanksha,
1)Please try to test end to end so that you will come to know the exact error,
2)Please share the screen shot if possible.
Thanks,
Guru | https://answers.sap.com/questions/251639/http-sender-to-multiple-idoc-receiver.html | CC-MAIN-2018-30 | refinedweb | 136 | 73.78 |
Components and supplies
Necessary tools and machines
About this project
Last year I made a DIY electric skateboard using a Raspberry Pi Zero and Wiimote. My homemade electric skateboard can reach speeds of up to 30km/h, accelerates in true Tesla-fashion and travels over 13km on a single charge… But… As with everything in life, it could be made better with LEDs! Lots of them!
Recently I used two 26-long NeoPixel strips from Adafruit to jazz up the underside of my board. NeoPixels are fully programmable RGB LEDs that are incredibly bright – perfect for lighting up the pavement as you carve it up on a skateboard. Take a look at the video below:
This guide is not just limited to electric skateboards however, you can customise any skateboard you want! Electric ones, especially of the DIY variety, lend themselves to this the most as there is already a battery to draw power from.
Parts List
· NeoPixel strip/s – Adafruit stock whole reels of NeoPixels that can easily be cut down to the necessary size. The number of NeoPixels on your board is totally up to you – more will lead to a brighter, more vibrant effect, however that comes with the trade-off of using more power. I used two 26-long strips, making 52 NeoPixels in total.
· Battery – a battery of some kind will be needed to power your NeoPixels. Bear in mind that the more NeoPixels you are using, the more current your battery will need to be able to provide at any instant. LiPo batteries are usually quite lightweight and energy dense, however they do require a specialist charger to recharge them. Also note that your batteries will most likely be mounted onto the bottom of your skateboard. Unsurprisingly this area is prone to knocks and incidents like rocks being kicked upwards at it. Batteries, especially Lithium-based ones, do not like this. If a rock punctures a LiPo battery then it can rapidly set fire and/or explode. Whatever battery option you go for ensure that you have adequate protection. For my project, I am taking power off of my electric skateboard’s main battery. This is a ~22V 8AH LiPo – it is encased in foam and a plastic box too.
· Voltage Regulator Module– your battery will probably not provide the exact 5V of goodness that your NeoPixels need to function. Consequently, you will need a voltage regulator. I would recommend using a step-down buck converter. Remember that skateboards are prone to a lot of vibrations so bear that in mind. My original step-down buck converters (LM13 modules) were shaken to pieces! I quite like these converters as they are encased in metal:
· Microcontroller Board – my electric skateboard is entirely controlled by a Raspberry Pi Zero… But when it came to adding these lights I didn’t want to bother the on-board Pi with extra work. Because of this I added a small Arduino Nano (a clone board can be picked up for less than £5) and I am using this as the NeoPixel driver board. I would recommend an Arduino or similar microcontroller.
Assembly
1. Firstly, ensure that your battery is fully charged. Then mount it inside your casing – don’t forget to keep the charging cables accessible! After this take your buck converter/voltage regulator module and wire the output of your battery to the input of your converter. Be cautious and check to make sure that you have connected the positive lead to the positive input and the same for the negatives. I would recommend soldering these wires together and then covering them in heat shrink for protection.
2. Next grab a multimeter and connect it up to the output of your buck converter. Adjust the screw on the top of the converter until your voltage reads 5V. After this, disconnect your multimeter.
3. Now you need to prepare your NeoPixels. If you have purchased yours in a reel then cut the specific number you would like. You can just use scissors to cut along the gold connection pads between the actual LEDs. I have found that the rubber casing that they come in makes a good semi-waterproof/semi-dirtproof casing, though it is up to you whether you want to use it as such.
4. Notice that NeoPixels strips have three inputs on them: +5V, ground and data. You can now use a soldering iron to solder the output power lines of your buck converter to the gold +5V and ground connection pads on your NeoPixels. Again, ensure that you are connecting the right ones! Soldering to the gold pads can be a tricky task. I usually find it is best to deposit some solder onto them first, then use your iron to reheat it and place a wire.
5. With the power to your NeoPixels successfully connected, all that is left to do is to connect the single data line and a ground wire from your Arduino/microcontroller. I would tackle ground first: use your soldering iron to solder a wire to a ground pin on your Arduino and then connect that to the ground of your NeoPixels. This is the common ground.
6. Finally wire a PWM-enabled pin from your microcontroller to the data line of your NeoPixels. For me I have used pin 6 of my Arduino Nano.
In my project, I use two 26 NeoPixel strips, however my Arduino just treats them as a single strip, with all patterns being mirrored. To do this I just connected another NeoPixel strip to the exact same connections and Arduino pin that I connected the first strip too!
When you have finished wiring up your LEDs, stick them to your skateboard! Getting them to stick can be challenging… I resorted to super glue. You may also want to use some plastic protective casing for the exposed wires in this project.
Programming
Adafruit has a comprehensive programming and information over at their website so you can find out how to create your own patterns of lights etc. I am actually using a collection of the Adafruit demos. My code is below and on GitHub here:
// Skateboard Neopixel Program // Controls 2*26 sets of neopixels on the bottom of my DIY electric skateboard // Used an Arduino Nano to control the lights to save logic level conversion on the Pi Zero and worrying about task management // Wiimote 'A' button triggers Pi that then triggers Arduino inputs that then turn on the lights #include <Adafruit_NeoPixel.h> #ifdef __AVR__ #include <avr/power.h> #endif #define PIN 6 #define BUTTON 7 #define INTERRUPT_PIN 2 Adafruit_NeoPixel strip = Adafruit_NeoPixel(26, PIN, NEO_GRB + NEO_KHZ800); void setup() { strip.begin(); strip.show(); pinMode(BUTTON, INPUT); attachInterrupt(digitalPinToInterrupt(INTERRUPT_PIN), ISRturn_off, RISING); } void loop() { int value = 0; int but_val = 0; blank(); while (true){ but_val = digitalRead(BUTTON); if (but_val == 1){ sequence(); } if (but_val == 0){ blank(); } } } void ISRturn_off(){ blank(); } void colorWipe(uint32_t c, uint8_t wait) { for(int i = (strip.numPixels()-1); i >= 0; i = i - 1) { strip.setPixelColor(i, c); strip.show(); delay(wait); } } void sequence(){ colorWipe(strip.Color(255, 0, 0), 25); colorWipe(strip.Color(0, 255, 0), 25); colorWipe(strip.Color(0, 0, 255), 25); for(int i = 0; i <=4; i++){ theaterChase(strip.Color(127, 127, 127), 25); // White theaterChase(strip.Color(127, 0, 0), 25); // Red theaterChase(strip.Color(0, 0, 127), 25); // Blue } rainbow(20); rainbowCycle(20); theaterChaseRainbow(25); } void blank(){ colorWipe(strip.Color(0, 0, 0), 25); } /////// ADAFRUIT PRESET PATTERNS BELOW////////// (uint16_t i=0; i < strip.numPixels(); i=i+3) { strip.setPixelColor(i+q, c); //turn every third pixel on } strip.show(); delay(wait); for (uint16); }
Note that this is for my specific project: when I press the A button of the Wiimote controlling my electric skateboard, a Bluetooth signal is sent to the Pi Zero underneath. The Pi Zero then sets one of its GPIO pins to high. This GPIO pin is connected to an interrupt pin on my Arduino. When this detects a change in state, my NeoPixel LED code is triggered.
If you want to keep things simple and would like your lights to come on just when you turn on the power, then use the strand test Adafruit demo. Don’t forget to change the number of NeoPixels you are using and the pin it is connected to!
Conclusion
In conclusion, this has been a super fun project. NeoPixels are great. The ability to program your own LED patterns is something that few skaters would even dream of being able to do!
Code
GitHub
Author
Matt Timmons-Brown
- 1 project
- 7 followers
Published onDecember 28, 2017
Members who respect this project
you might like | https://create.arduino.cc/projecthub/theraspberrypiguy/custom-led-lights-on-diy-electric-skateboard-raspberry-pi-8ca243 | CC-MAIN-2019-43 | refinedweb | 1,444 | 63.19 |
28 October 2010 10:56 [Source: ICIS news]
(Releads and updates throughout)
LONDON (ICIS)--Shell’s chemicals CCS (current cost of supplies) earnings soared 86% year on year during the third quarter to $315m (€230m) on the back of a rise in sales volumes and margins, the oil major said on Thursday.
“Chemicals CCS earnings compared to the third quarter 2009 reflected improved realised chemicals margins, higher chemicals sales volumes and lower operating costs,” the company said.
Sales volumes at its chemicals segment grew 13% year on year to 5.33m tonnes, helped by the start-up of its petrochemicals complex in ?xml:namespace>
Shell said that during the three months to September, its chemicals manufacturing plant availability increased to 96% from 95% in the same period last year.
The company’s overall downstream operations recorded a 75% year-on-year decline in CCS earnings to $325m from $1.29bn during the same quarter last year.
Shell said downstream earnings included charges of $1.13bn, reflecting asset impairments of $873m, related to the estimated fair value accounting of commodity derivatives and provisions. Any increase in value of the inventory over cost for commodities traded through the forward markets is not recognised in income until the sale of the commodity occurs in subsequent periods.
Earnings for the third quarter 2009 included a net gain of $536m.
After cost adjustments, earnings stood at $264m, down 83% from the same period last year, Shell said.
For the nine-month period to September, downstream CCS earnings were up 26% to $2.54bn.
Shell reported a net profit of $3.46bn in the third quarter, up 7% from the same period last year, with the nine-month figure jumping 26% to $13.3bn.
As part of the company’s strategy of driving down costs and improving capital efficiency, Shell would continue with rationalising some operations.
“We expect some $7bn-8bn of asset sales in the 2010-2011 timeframe, including exits from non-core refining and marketing positions in Europe and Africa, and rationalisation of our tight gas portfolio in North America, following recent acquisitions there,” said Shell president Peter Voser.($1 = €0.73) | http://www.icis.com/Articles/2010/10/28/9405234/shells-q3-chems-earnings-soar-86-on-sales-margins.html | CC-MAIN-2014-10 | refinedweb | 358 | 51.38 |
What is Universal Windows Platform (UWP)?
What is Universal Windows Platform (UWP)?
The author describes what UWP is, what it means for developers, and why it matters.
Join the DZone community and get the full member experience.Join For Free
Universal Windows Platform is the almost new kid on the street and it’s time for developers to find out what it is, how it works, and why to bother. In this post, I make a short introduction to UWP for developers. I also share some of my thoughts about the platform and point out some useful resources that help you to get started.
What Is Universal Windows Platform?
The idea of Universal Windows Platform (UWP) is to target different devices and hardware platforms that Windows supports with the same code base.
Instead of having one version of source code for phone apps and another for desktop apps, we have one code base and perhaps one set of views. We can build our application for target processor architecture that Windows supports with no modifications to source code, and we can run our applications on different devices.
This picture is from the MSDN article Guide to Universal Windows Platform (UWP) apps.
This is something that competitors have tried for years but we can’t find any big success stories. Microsoft seems to have finished their universal dream this year (it’s my speculation) and developers can already jump in today.
To get a better idea of what I mean, just think about an app that you want to be available on different devices and then take a look at the following picture that shows where you can easily go with universal apps.
Picture from Windows Experience blog post Welcoming Developers to Windows 10
What It Means for Developers?
Before Windows 10, universal apps were targeting operating systems. In Visual Studio, we had two projects — one for Windows 8 and the other for Windows Phone 8. With Windows 10 we have one project. We can build it for different target architectures as shown in the following screenshot:
If we select ARM then we can deploy and run our application on Windows Phone and Windows 10 IoT Core. x86 and x64 allow us to run our application on desktop Windows and Windows Phone emulators.
The main benefits for developers are:
- Less code to write.
- Same views work on screens with different sizes.
- Sell universal apps through Windows Store.
If it makes UWP seem like a silver bullet then here are some obstacles you may face:
- Not all universal apps can share the same views — just think about rich desktop clients and the thinner version for mobile.
- You still have to write some platform dependent code and it’s not always easy to choose between similar APIs in System namespace and universal apps.
- Although views can be used with different screens with no modifications you must still add little improvements to views based on screen sizes to provide the finest experience to users.
I am currently learning to build UWP apps and it’s not something hard. Okay, maybe it actually is but Microsoft has made it really easy for us.
Is There Any Future for UWP Apps?
This is a tough question to answer as we don’t have much information about Microsoft's plans with the platform, and especially with phones. Based on what I have read from the public space it seems like Microsoft is preparing for a new coming. I hope this time techies have a stronger position over marketing to say how things must be done. Rumors are that new Windows Phone devices by Microsoft are engineered by the team behind Surface tablets and Surface Book. Also, I see more and more UWP apps announced. Tempo is slow but, hopefully, we will see a rise by the fall of this year.
Although phones will be major players on UWP it’s still possible that the third coming fails. If this is the case then the only way for Microsoft to get better presence in the mobile space is to make CoreCLR for UWP to run on Android and iOS. I am not sure if they would make such a wild bet but let’s see. Microsoft has made some of their popular apps available on Android and iOS, and I take it as a sign of the importance of the mobile space for Microsoft.
Getting Started
To get started, you can use the following resources:
- Guide to Universal Windows Platform (UWP) apps (MSDN)
- Windows 10: Getting Started with UWP (MVA)
- UWP sessions from different events (Channel 9)
- Presentation: Brewing Eisbock with Raspberry PI and Windows 10 IoT (my presentation and reference materials you can use to get started with code)
Published at DZone with permission of Gunnar Peipman . See the original article here.
Opinions expressed by DZone contributors are their own.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/what-is-universal-windows-platform-uwp | CC-MAIN-2020-05 | refinedweb | 824 | 60.85 |
223 Best Practices URI Construction
Contents
- 1 Best Practices: URI Construction
- 1.1 Status
- 1.2 See Also
- 1.3 Design principles
- 1.4 Best Practices Checklist
- 1.5 URI Persistence
- 1.6 Internationalized Resource Identifiers: Using non-ASCII characters in URIs
- 1.7 Working Notes
Best Practices: URI Construction
Back to Best Practices Wiki page
Purpose of this wiki: This page is provide a collaboration page for creating URIs for use in government linked data.
Status
- 21st Feb 2012 - Rewritten principles and IRIs note (Dani)
- Feb 2012 - Preparation for inclusion in Editors Draft Best Practices FPWD
- Dec 2011 - Initial revisions by Ghislain, Boris, Dani, JohnE
See Also
- Cool URIs for the Semantic Web The canonical reference, but not the most usable
- Designing URI Sets for the UK Public Sector (PDF)
- Creating URIs (data.gov.uk) Concise but incomplete version of "Designing URI Sets
- URI Design Principles: Creating Unique URIs for Government Linked Data (TWC RPI) Focuses on "ID" URIs
- URL Design (Kyle Neath, Director of Design at Github) One of the best discussions of URI design we've seen
Guidance will be produced not only for minting URIs for governmental entities, such as schools or agencies, but also for vocabularies, concepts, and datasets.
Design principles
The Web makes use of the URI (Uniform Resource Identifiers) as a single global identification system. The global scope of URIs promotes large-scale "network effects", in order to benefit from the value of Linked Data, government and governmental agencies need to identify their resources using URIs. This section provides a set of general principles aimed at helping government stakeholders to define and manage URIs for their resources.
- Use HTTP URIs
- What it means: [LDPrinciples], HTTP URIs enable people to "look-up" or "dereference" a URI in order to access a representation of the resource identified by that URI.
- Provide at least one machine-readable representation of the resource identified by the URI
- What it means: In order to enable HTTP URIs to be "dereferenced", data publishers have to set up the neccesary infrastructure elements (e.g. TCP-based HTTP servers) to serve representations of the resources they want to make available (e.g. a human-readable HTML representation or a machine-readable RDF/XML representation). A publisher may supply zero or more representations of the resource identified by that URI. However, there is a clear benefit to data users in providing at least one machine-readable representation. More information about serving different representations of a resource can be found in Cool URIs for the Semantic Web.
- A URI structure will not contain anything that could change
- What it means: It is good practice that URIs do not contain anything that could easily change or that is expected to change, such as [MDinURI] and Architecture of the World Wide Web: URI Opacity.
Best Practices Checklist
High-level Considerations for Constructing URIs
The purpose of URIs is to uniquely and reliably name resources on the Web. According to Cool URIs for the Semantic Web (W3C IG Note), URIs should be designed with simplicity, stability and manageability in mind, thinking about them as identifiers rather than as names for Web resources.
Many general-purpose guidelines exist for the URI designer to consider, including Cool URIs for the Semantic Web, which provides guidance on how to use URIs to describe things that are not Web documents; Designing URI Sets for the UK Public Sector, a document from the UK Cabinet offices that defines the design considerations on how to URIs can be used to publish public sector reference data; and (3) Style Guidelines for Naming and Labelling Ontologies in the Multilingual Web (PDF), which proposes guidelines for designing URIs in a multilingual scenario.
The purpose of this subsection is to provide specific, practical guidance to government stakeholders who are planning to create systems for publishing government Linked Data and therefore must create sensible, sustainable URI designs that fit their specific requirements.
A "Checklist" for Constructing Government URIs
The following checklist is based in part on Creating URIs (short; on the Web) and Designing URI Sets for the UK Public Sector (long; in PDF).
- What will your proposed URIs name? Will they:
- Point to something downloadable? (e.g. PDF, CSV, RDF, TTL or ZIP files)
- Identify some real world thing? (e.g. school, department, agency)
- Point to information about a real world thing?
- Identify some abstract thing? (e.g. a position, a service, a relationship)
- Define a concept? (e.g. a vocabulary term or metadata element)
- Do you already have (non-URI) names for those things? (e.g. using other information systems)
- Do URIs already exist for naming these things?
- Are you sure that the existing URIs refer to the same thing as you intend?
- Will you or some other organization have control over the new URIs?
- Do you have any strong syntax preferences or requirements?
- Will your stakeholders need to easily write the chosen URI on a piece of paper, or remember it easily?
- Will you spell URIs on the phone?
- Will the URIs need to give hints about the content of the resource?
- Is it necessary for the URI structure to make guessing of related URIs easier?
- What are the long-term persistence requirements of your URIs?
- Should the URIs you create still make sense if the named resource evolves?
- How far into the future must your resolvable URIs lead to results (e.g. data, documents, definitions)
- Will you need to move the URI-named resources in the future?
- Will such moves be related to organizational changes and may need to be reflected in the URIs?
- Will these moves be technical only and should not need to be reflected in the URIs?
- Should the government sector (e.g. "Health," "Energy," "Defense") be included in the domain of the URI?
- Have these sectors been defined formally (e.g. by statute)?
- Will informal or equivalent sector names also be used?
- Is sensible resolution of partial/incomplete URIs necessary or anticipated?
URI Persistence
@@TODO@@ Expand this section (Bernadette)
Advice, info related to persistent URIs
As is the case with many human interactions, confidence in interactions via the Web depends on stability and predictability. For an information resource, persistence depends on the consistency of representations. The representation provider decides when representations are sufficiently consistent (although that determination generally takes user expectations into account).
Although persistence in this case is observable as a result of representation retrieval, the term URI persistence is used to describe the desirable property that, once associated with a resource, a URI should continue indefinitely to refer to that resource.
- Consistent representation
A URI owner SHOULD provide representations of the identified resource consistently and predictably.
URI persistence is a matter of policy and commitment on the part of the URI owner. The choice of a particular URI scheme provides no guarantee that those URIs will be persistent or that they will not be persistent.).
In addition, content negotiation also promotes consistency, as a site manager is not required to define new URIs when adding support for a new format specification. Protocols that do not support content negotiation (such as FTP) require a new identifier when a new data format is introduced. Improper use of content negotiation can lead to inconsistent representations.
For more discussion about URI persistence, see [Cool].
Internationalized Resource Identifiers: Using non-ASCII characters in URIs
Guidelines for those interested in minting URIs in their own languages (German, Dutch, Spanish, Chinese, etc.)
The URI syntax defined in RFC 3986 STD 66 (Uniform Resource Identifier (URI): Generic Syntax) restricts URIs to a small number of characters: basically, just upper and lower case letters of the English alphabet, European numerals and a small number of symbols. There is now a growing need to enable use of characters from any language in URIs.
The purpose of this section is to provide guidance to government stakeholders who are planning to create URIs using characters that go beyond the subset defined in RFC 3986.
First we provide two important definitions:
IRI (RFC 3987) is a new protocol element, that represents a complement to the Uniform Resource Identifier (URI). An IRI is a sequence of characters from the Universal Character Set (Unicode/ISO 10646) that can be therefore be used to mint identifiers that use a wider set of characters than the one defined in RFC 3986.
The Internationalized Domain Name or IDN is a standard approach to dealing with multilingual domain names was agreed by the IETF in March 2003.
Althought there exist some standards focused on enabling the use of international characters in Web identifiers, government stakeholders need to take into account several issues before constructing such internationalized identifiers. This section is not meant to be exhaustive and we point the interested audience to An Introduction to Multilingual Web Addresses, however some of the most relevant issues are following:
- Domain Name lookup: Numerous domain name authorities already offer registration of internationalized domain names. These include providers for top level country domains as .cn, .jp, .kr, etc., and global top level domains such as .info, .org and .museum.
- Domain names and phishing: One of the problems associated with IDN support in browsers is that it can facilitate phishing through what are called 'homograph attacks'. Consequently, most browsers that support IDN also put in place some safeguards to protect users from such fraud.
- Encoding problems: IRI provides a standard way for creating and handling international identifiers, however the support for IRIs among the various semantic Web technology stacks and libraries is not homogenic and may lead to difficulties for applications working with this kind of identifiers. A good reference on this subject can be found in "I18n of Semantic Web Applications" by Auer et al.
Working Notes
TWC RPI Draft
@@TODO@@ Format/Update URI Design Principals per TWC RPI Draft (JohnE)
TWC RPI has drafted URI Design Principles: Creating Unique URIs for Government Linked Data with an eye toward instance identifier URIs that may be easily re-hosted --- a syntactic design that can be modeled and demonstrated on one host (e.g. TWC's Instance Hub demonstrator) but can be easily re-hosted on another, such as a government agency responsible for a set of named entities.
URI Design Goals
The design principles should produce...
- URIs that are easily re-hosted (eg from)
URI patterns this form
http:// {sector}. yourdomain / or http:// data. http:// data. {sector}. {yourdomain} /, you may consider having the following pattern:
http:// data. {sector}. {yourdomain} / def/ {ontoDomain}}}. Note the keyword *def* for the vocabulary, {ontoDomain} can be the scope of the ontology, (geo, stat, service, transport, etc..)
- 2.2- Using this solution, instances are formed using the following scheme:
http:// id. {sector}. {yourdomain} / {ontoDomain}. Note the presence of the keyword *id* (individuals) at the beginning of the URI pattern
URI Design Template
'http://' BASE '/' 'id' '/' ORG '/' CATEGORY ( '/' TOKEN )+
In the example of TWC RPI's Instance Hub demonstration, BASE is logd.tw.rpi.edu
Notes on the RPI Design
- id
- This is required, to avoid polluting the top namespace of BASE with identifiers.
- id is preferred over other alternatives to keep the token as short as possible.
- The id token adds no semantics; it is merely a syntactic way of distinguishing instance identifier URIs from others.
- Some consistency with [data.gov.uk data.gov.uk] URIs is considered A Good Thing.
- ORG
- This is a short token representing the agency, government, or organization that has authority over
(Per TWC RPI URI Design Draft)
The URI Design Principles page provides examples of applying this template to:
- US Government agencies:
- States and Territories:
- Counties:
- US Postal Codes (Zip Codes):
- Congressional Districts:
- EPA Facilities:
OData Protocol URI Conventions
Government linked data providers using the Windows Azure Platform and implementing the Open Data Protocol (OData) specification may also wish to consider the OData: URI Conventions recommendation. That document "...defines a set of recommended (but not required) rules for constructing URIs to identify the data and metadata exposed by an OData server as well as a set of reserved URI query string operators, which if accepted by an OData server, MUST be implemented as required by (that document)..."
References
- [URI] T. Berners-Lee; R. Fielding; L. Masinter. Uniform Resource Identifiers (URI): generic syntax. January 2005. Internet RFC 3986. URL:
- [IRI] M. Duerst, M. Suignard. Internationalized Resource Identifiers (IRI). January 2005. Internet RFC 3987. URL:
- [HTTP] RFC2616
- [WEBARCH]
- [MDinURI]
- [LDPrinciples]
- [Cool] | https://www.w3.org/2011/gld/wiki/223_Best_Practices_URI_Construction | CC-MAIN-2018-26 | refinedweb | 2,057 | 51.58 |
From: jk_at_[hidden]
Date: 2001-06-08 23:42:06
8 Jun 2001 23:14:18 +0400 Douglas Gregor ÎÁÐÉÓÁÌ:
>> Like someone else said, we would have to avoid the name "nil" since it's
>> used as a macro in Apple system code. I like the name "null" better.
>> Also, shouldn't there be comparisons with pointers-to-member too? A
>> revised class could be:
>[snip]
>
>Yes, there should. Thanks.
Why not to leave type name as it is, nil_t? It is less to type :) and it seems
there is no name conflicts. Then everyone could define theirs own literals as
they wish:
namespace {
const boost::nil_t null;
const boost::nil_t NoOneWantsToUseThisVeryLongName;
const boost::nil_t nn; // for lazy people :))
}
and use temporary objects:
int f(boost::nil_t);
f(boost::nil_t());
-- jk
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2001/06/13080.php | CC-MAIN-2019-43 | refinedweb | 153 | 83.76 |
- NAME
- DESCRIPTION
- Notice
- Core Enhancements
- Security
- Incompatible Changes
- Modules and Pragmata
- Documentation
- Utility Changes
- Configuration and Compilation
- Testing
- Platform Support
- Internal Changes
- Selected Bug Fixes
- Known Problems
- Acknowledgements
- Reporting Bugs
- SEE ALSO
NAME
perldelta - what is new for perl v5.15.3
DESCRIPTION
This document describes differences between the 5.15.2 release and the 5.15.3 release.
If you are upgrading from an earlier release such as 5.15.1, first read perl5152delta, which describes differences between 5.15.1 and 5.15.2.
Notice
This release includes a rewrite of the perl OO docs which represent a significant modernization of the OO documentation. All of the old OO tutorials (perltoot, perlboot, etc.) have been replaced with pointers to the new docs.
Core Enhancements
More CORE subs are callable through references.
New debugger commands
The debugger now has
disable and
enable commands for disabling existing breakpoints and reënabling them. See perldebug..
Incompatible Changes
$[ has been removed.
Borland compiler
All support for the Borland compiler has been dropped. The code had not worked for a long time anyway.
Weakening read-only references
Weakening read-only references is no longer permitted. It should never hove worked anyway, and in some cases could result in crashes.
Modules and Pragmata
Updated Modules and Pragmata
AnyDBM_File has been upgraded from version 1.00 to version 1.01.
This is only a minor documentation update.
Archive::Extract has been upgraded from version 0.52 to version 0.56.
Resolved an issue where
unzipexecutable was present in
PATHon MSWin32
Archive::Tar has been upgraded from version 1.76 to version 1.78.
attributes has been upgraded from version 0.15 to version 0.16.
Attribute::Handlers has been upgraded from version 0.92 to version 0.93.
B::Deparse has been upgraded from version 1.07 to 1.08.
It now correctly deparses
$#{/}and
qq(${#}a).
Carp has been upgraded from version 1.21 to 1.23.
Carp is now a dual life module and several fixes have been make to make it more portable to older versions of perl.
CPAN::Meta has been upgraded from version 2.112150 to version 2.112621.
CPAN::Meta::YAML has been upgraded from version 0.003 to version 0.004.
CPANPLUS has been upgraded from version 0.9109 to version 0.9111.
CPANPLUS::Dist::Build has been upgraded from version 0.56 to version 0.58.
Devel::PPPort has been upgraded from version 3.19 to version 3.20.
diagnostics has been upgraded from version 1.24 to version 1.25.
It now strips out
SZ<><...>formatting codes before displaying descriptions [perl #94488].
Data::Dumper has been upgraded from version 2.133 to version 2.134.
The XS code for sorting hash keys has been simplified slightly.
Exporter has been upgraded from version 5.64_03 to version 5.65.
ExtUtils::ParseXS has been upgraded from version 3.03_01 to version 3.04_04.
The handling of
dVARand
PERL_EUPXS_NEVER_EXPORTpreSwill define a copy of the
XS_INTERNAL/
XS_EXTERNALmacrosnow properly strips trailing semicolons from inputmaps. These could previously trigger warnings (errors in strict C89 compilers) due to additional semicolons being interpreted as empty statements.
Now detects and throws a warning if there is a
CODEsection using
RETVAL, but no
OUTPUTsection (CPAN RT #69536).
Locale::Codes has been upgraded from version 3.17 to version 3.18.
The CIA world added non-standard values, so this is no longer used as a source of data.
File::Glob has been upgraded from version 1.12 to version 1.13.
On Windows, tilde (~) expansion now checks the
USERPROFILEenvironment variable, after checking
See also "Security".
Filter::Simple has been upgrade from version 0.87 to 0.88.].
IO has been upgraded from version 1.25_05 to 1.25_06, and IO::Handle from version 1.32 approprate I/O layers to the newly-opened file [rt.cpan.org #66474].
Math::BigFloat has been upgraded from version 1.995 to version 1.997.
Math::BigInt has been upgraded from version 1.996 to version 1.997.
Math::BigInt::FastCalc has been upgraded from version 0.29 to 0.30.
Math::BigRat has been upgraded from version 0.2602 to version 0.2603.
int()on a Math::BigRat object containing -1/2 now creates a Math::BigInt containing 0, rather than -0. Math::BigInt does not even support negative zero, so the resulting object was actually malformed [perl #95530].
Module::CoreList has been upgraded from version 2.55 to 2.56.
Updated module for 5.15.3, additionally it was missing a few entries: DB_File in 5.8.2, Errno in 5.6.0 and 5.6.1, and VMS::Filespec in 5.12.3.
Module::Metadata has been upgraded from version 1.000005_01 to version 1.000007.
Module::Load::Conditional has been upgraded from version 0.44 to version 0.46.
ODBM_File has been upgraded from version 1.11 to version 1.12.
This is only a minor refactoring of the XS code to bring it closer to the other
?DBM_Filemodules.
open has been upgraded from version 1.08 to 1.09.
It no longer turns of layers on standard handles when invoked without the ":std" directive. Similarly, when invoked with the ":std" directive, it now clears layers on STDERR before applying the new ones, and not just on STDIN and STDOUT [perl #92728].
perlfaq has been upgraded from version 5.01500302 to version 5.0150034.
Pod::Perldoc has been upgraded from version 3.15_06 to 3.15_07.
When rendering a file specified as an HTTP URL, it now use a manpage name based on the URL, instead of the name of the temporary file.
Pod::Simple has been upgraded from version 3.18 to version 3.19.
POSIX has been upgraded from version 1.24 to version 1.25.now defaults the third argument to
TCSANOW, instead of 0. On most platforms
TCSANOWis defined as 0, but on some 0 is not a valid parameter, which caused a call with defaults to fail.
Search::Dict has been upgraded from version 1.03 to 1.04.
Storable has been upgraded from version 2.31 to version 2.32.
XS code which duplicates functionality of ppport.h has been removed. Tests should now pass on older versions of Test::More. Storable now builds and passes tests back to perl 5.004.
Text::Abbrev has been upgraded from version 1.01 to 1.02.
threads has been upgraded from version 1.83 to 1.85.
threads::shared has been upgraded from version 1.38 to 1.40.
Destructors on shared objects used to be ignored sometimes if the objects were referenced only by shared data structures. This has been mostly fixed, but destructors may still be ignored if the objects still exist at global destruction time [perl #98204].
Unicode::UCD has been upgraded from version 0.34 to version 0.35.
UNIVERSAL has been upgraded from version 1.08 to version 1.09.
XSLoader has been upgraded from version 0.15 to version 0.16.
Documentation
New Documentation
perlootut
This a new OO tutorial. It focuses on basic OO concepts, and then recommends that readers choose an OO framework from CPAN.
Changes to Existing Documentation
perlobj
This document has been rewritten from scratch, and its coverage of various OO concepts has been expanded.
perlpragma
There is now a standard convention for naming keys in the
%^H, documented under Key naming.
Removed Documentation
Old OO Documentation
All the old OO tutorials, perltoot, perltooc, and perlboot, have been removed. The perlbot (bag of object tricks) document has been removed as well.
Development Deltas
The old perldelta files for development cycles prior to 5.15 have been removed.].
Configuration and Compilation will not be visible outside the build process.
Testing
t/porting/globvar.t has been added, to run a sanity check on globar.sym. globar.sym is not needed on most *nix platforms, but is for Win32, hence previously was it was possible to inadvertently commit changes that worked perfectly locally, but broke the build on Win32.
t/op/unlink.t has been added to test the
unlinkfunction.
Several tests were added in POSIX.
ext/POSIX/t/export.t added to test
@EXPORTand
@EXPORT_OK. ext/POSIX/t/sigset.t added to see if
POSIX::SigSetworks..
ext/XS-APItest/t/gotosub.t in XS::APItest tests
goto &xsuband hints.
t/io/shm.t was added to see if SysV shared memory works.
t/op/coreamp.t was added to test
&foo()calls for CORE subs.
Platform Support
Platform-Specific Notes
- VMS
Remove unnecessary includes, fix miscellaneous compiler warnings and close some unclosed comments on vms/vms.c.
Remove sockadapt layer from the VMS build.
Internal Changes the C files that make up the Perl core have been converted to UTF-8.
Selected Bug Fixes
In Perl 5.15.0].
Perl 5.10.0 introduced a similar bug:
defined(*{"foo"})where "foo" represents the name of a built-in global variable used to return false if the variable had never been used before, but only on the first call. This, too, has been fixed.
Various functions that take a filehandle argument in rvalue context (.
defined ${ $tied_variable }used to call
FETCHmultiple times, but now calls it just once.
Some cases of dereferencing a complex expression, such as
${ (), $tied } = 1, used to call
FETCHmultiple times, but now call it once.
For a tied variable returning a package name,
$tied->methodused to call
FETCHmultiple times (even up to six!), and sometimes would fail to call the method, due to memory corruption..
It used to be possible to free the typeglob of a localised array or hash (e.g.,).
Assignments.
Perl 5.15.1 inadvertently stopped
*foo =~ s/\*//rfrom working, as it would try to force the *foo glob into a string. This has been fixed [perl #97954].
If things were arranged in memory the right way, it was possible for thread joining to emit "Attempt to free unreferenced scalar" warnings if
callerhad been used from the
DBpackage prior to thread creation, due to the way pads were reference-counted and cloned [perl #98092].
CORE:: subs were introduced in the previous development release, but
defined &{"CORE::..."}did not return true. That has been rectified [perl #97484].
Lvalue subroutines were made to autovivify in 5.15.0, but it did not work in some cases involving an intervening list operator between the dereference operator and the subroutine call (
${(), lvsub()}) [perl #98184].].
Weakening the first argument to an automatically-invoked
DESTROYmethod could result in erroneous "DESTROY created new reference" errors or crashes. Now it is an error to weaken a read-only reference.
Under miniperl (used to configure modules when perl itself is built),
globnow clears %ENV before calling csh, since the latter croaks on some systems if it does not like the contents of the LS_COLORS enviroment variable [perl #98662].
++and
--now work on copies of globs, instead of dying.
The subroutines in the CORE:: namespace that were introduced in the previous development release run with the lexical hints (strict, warnings) of the caller, just as though the built-in function had been called. But this was not the case for
goto &CORE::sub. The CORE sub would end up running with the lexical hints of the subroutine it replaced, instead of that subroutine's caller. This has been fixed.
Stacked
-l(followed immediately by other filetest operators) did not work previously; now it does. It is only permitted when the rightmost filetest op has the special "_" handle for its argument and the most recent
stat/
lstatcall was an
lstat..
Known Problems
We have a failing test in op/sigdispatch.t on i386-netbsd 3.1
On Solaris, we have two kinds of failure.
Acknowledgements. | https://metacpan.org/changes/release/STEVAN/perl-5.15.3 | CC-MAIN-2015-18 | refinedweb | 1,946 | 61.73 |
Hi, > When ?!$@ are possible: why doesn't use Lua one of this chars for > special identifiers instead of <underscore><uppercase>? I would like to > use my own namespace*s* and this isn't easy because _... is the only > possibility to start with a non-letter? According to the documentation and source code, you can't use the characters ?!$@ in identifiers. There are probably sereval ways to implement different namespaces in Lua. I think that the most natural way is to create a table and put all protected symbols in it. Something like: local namespace = {} namespace.x = 2 print(namespace.x) Regards, Diego. | https://lua-users.org/lists/lua-l/2001-04/msg00145.html | CC-MAIN-2020-45 | refinedweb | 103 | 60.92 |
This tutorial will show you how to speed up the processing of NumPy arrays using Cython. By explicitly specifying the data types of variables in Python, Cython can give drastic speed increases at runtime.
The sections covered in this tutorial are as follows:
- Looping through NumPy arrays
- The Cython type for NumPy arrays
- Data type of NumPy array elements
- NumPy array as a function argument
- Indexing, not iterating, over a NumPy Array
- Disabling bounds checking and negative indices
- Summary
For an introduction to Cython and how to use it, check out my post on using Cython to boost Python scripts. Otherwise, let's get started!
Bring this project to life
Looping Through a NumPy Array
We'll start with the same code as in the previous tutorial, except here we'll iterate through a NumPy array rather than a list.)
I'm running this on a machine with Core i7-6500U CPU @ 2.5 GHz, and 16 GB DDR3 RAM. The Python code completed in 458 seconds (7.63 minutes). It's too long.
Let's see how much time it takes to complete after editing the Cython script created in the previous tutorial, as given below. The only change is the inclusion of the NumPy array in the for loop. Note that you have to rebuild the Cython script using the command below before using it.
python setup.py build_ext --inplace
The Cython script in its current form completed in 128 seconds (2.13 minutes). Still long, but it's a start. Let's see how we can make it even faster.
Cython Type for NumPy Array
Previously we saw that Cython code runs very quickly after explicitly defining C types for the variables used. This is also the case for the NumPy array. If we leave the NumPy array in its current form, Cython works exactly as regular Python does by creating an object for each number in the array. To make things run faster we need to define a C data type for the NumPy array as well, just like for any other variable.
The data type for NumPy arrays is ndarray, which stands for n-dimensional array. If you used the keyword int for creating a variable of type integer, then you can use ndarray for creating a variable for a NumPy array. Note that ndarray must be called using NumPy, because ndarray is inside NumPy. So, the syntax for creating a NumPy array variable is numpy.ndarray. The code listed below creates a variable named arr with data type NumPy ndarray.
The first important thing to note is that NumPy is imported using the regular keyword import in the second line. In the third line, you may notice that NumPy is also imported using the keyword cimport.
It's time to see that a Cython file can be classified into two categories:
- Definition file (.pxd)
- Implementation file (.pyx)
The definition file has the extension .pxd and is used to hold C declarations, such as data types to be imported and used in other Cython files. The other file is the implementation file with extension .pyx, which we are currently using to write Cython code. Within this file, we can import a definition file to use what is declared within it.
The code below is to be written inside an implementation file with extension .pyx. The cimport numpy statement imports a definition file in Cython named "numpy". The is done because the Cython "numpy" file has the data types for handling NumPy arrays.
The code below defines the variables discussed previously, which are maxval, total, k, t1, t2, and t. There is a new variable named arr which holds the array, with data type
numpy.ndarray. Previously two import statements were used, namely
import numpy and
cimport numpy. Which one is relevant here? Here we'll use need
cimport numpy, not regular
import. This is what lets us access the numpy.ndarray type declared within the Cython numpy definition file, so we can define the type of the arr variable to numpy.ndarray.
The maxval variable is set equal to the length of the NumPy array. We can start by creating an array of length 10,000 and increase this number later to compare how Cython improves compared to Python.
import time import numpy cimport numpy cdef unsigned long long int maxval cdef unsigned long long int total cdef int k cdef double t1, t2, t cdef numpy.ndarray arr maxval = 10000 arr = numpy.arange(maxval) t1 = time.time() for k in arr: total = total + k print "Total =", total t2 = time.time() t = t2 - t1 print("%.20f" % t)
After creating a variable of type
numpy.ndarray and defining its length, next is to create the array using the
numpy.arange() function. Notice that here we're using the Python NumPy, imported using the
import numpy statement.
By running the above code, Cython took just 0.001 seconds to complete. For Python, the code took 0.003 seconds. Cython is nearly 3x faster than Python in this case.
When the
maxsize variable is set to 1 million, the Cython code runs in 0.096 seconds while Python takes 0.293 seconds (Cython is also 3x faster). When working with 100 million, Cython takes 10.220 seconds compared to 37.173 with Python. For 1 billion, Cython takes 120 seconds, whereas Python takes 458. Still, Cython can do better. Let's see how.
Data Type of NumPy Array Elements
The first improvement is related to the datatype of the array. The datatype of the NumPy array
arr is defined according to the next line. Note that all we did is define the type of the array, but we can give more information to Cython to simplify things.
Note that there is nothing that can warn you that there is a part of the code that needs to be optimized. Everything will work; you have to investigate your code to find the parts that could be optimized to run faster.
cdef numpy.ndarray arr
In addition to defining the datatype of the array, we can define two more pieces of information:
- Datatype for array elements
- Number of dimensions
The datatype of the array elements is
int and defined according to the line below. The numpy imported using cimport has a type corresponding to each type in NumPy but with _t at the end. For example, int in regular NumPy corresponds to int_t in Cython.
The argument is
ndim, which specifies the number of dimensions in the array. It is set to 1 here. Note that its default value is also 1, and thus can be omitted from our example. If more dimensions are being used, we must specify it.
cdef numpy.ndarray[numpy.int_t, ndim=1] arr
Unfortunately, you are only permitted to define the type of the NumPy array this way when it is an argument inside a function, or a local variable in the function– not inside the script body. I hope Cython overcomes this issue soon. We now need to edit the previous code to add it within a function which will be created in the next section. For now, let's create the array after defining it.
Note that we defined the type of the variable
arr to be
numpy.ndarray, but do not forget that this is the type of the container. This container has elements and these elements are translated as objects if nothing else is specified. To force these elements to be integers, the
dtype argument is set to
numpy.int according to the next line.
arr = numpy.arange(maxval, dtype=numpy.int)
The numpy used here is the one imported using the
cimport keyword. Generally, whenever you find the keyword numpy used to define a variable, then make sure it is the one imported from Cython using the
cimport keyword.
NumPy Array as a Function Argument
After preparing the array, next is to create a function that accepts a variable of type
numpy.ndarray as listed below. The function is named
do_calc(). t1 = time.time() for k in arr: total = total + k print "Total = ", total t2 = time.time() t = t2 - t1 print("%.20f" % t)
import test_cython import numpy arr = numpy.arange(1000000000, dtype=numpy.int) test_cython.do_calc(arr)
After building the Cython script, next we call the function
do_calc() according to the code below. The computational time in this case is reduced from 120 seconds to 98 seconds. This makes Cython 5x faster than Python for summing 1 billion numbers. As you might expect by now, to me this is still not fast enough. We'll see another trick to speed up computation in the next section.
Indexing vs. Iterating Over NumPy Arrays
Cython just reduced the computational time by 5x factor which is something not to encourage me using Cython. But it is not a problem of Cython but a problem of using it. The problem is exactly how the loop is created. Let's have a closer look at the loop which is given below.
In the previous tutorial, something very important is mentioned which is that Python is just an interface. An interface just makes things easier to the user. Note that the easy way is not always an efficient way to do something.
Python [the interface] has a way of iterating over arrays which are implemented in the loop below. The loop variable k loops through the arr NumPy array, element by element from the array is fetched and then assigns that element to the variable k. Looping through the array this way is a style introduced in Python but it is not the way that C uses for looping through an array.
for k in arr: total = total + k
The normal way for looping through an array for programming languages is to create indices starting from 0 [sometimes from 1] until reaching the last index in the array. Each index is used for indexing the array to return the corresponding element. This is the normal way for looping through an array. Because C does not know how to loop through the array in the Python style, then the above loop is executed in Python style and thus takes much time for being executed.
In order to overcome this issue, we need to create a loop in the normal style that uses indices
for accessing the array elements. The new loop is implemented as follows.
At first, there is a new variable named arr_shape used to store the number of elements within the array. In our example, there is only a single dimension and its length is returned by indexing the result of arr.shape using index 0.
The arr_shape variable is then fed to the
range() function which returns the indices for accessing the array elements. In this case, the variable k represents an index, not an array value.
Inside the loop, the elements are returned by indexing the variable arr by the index k.
cdef int arr_shape = arr.shape[0] for k in range(arr_shape): total = total + arr[k]
Let's edit the Cython script to include the above loop. The new Script is listed below. The old loop is commented out.)
By building the Cython script, the computational time is now around just a single second for summing 1 billion numbers after changing the loop to use indices. So, the time is reduced from 120 seconds to just 1 second. This is what we expected from Cython.
Note that nothing wrong happens when we used the Python style for looping through the array. No indication to help us figure out why the code is not optimized. Thus, we have to look carefully for each part of the code for the possibility of optimization.
Note that regular Python takes more than 500 seconds for executing the above code while Cython just takes around 1 second. Thus, Cython is 500x times faster than Python for summing 1 billion numbers. Super. Remember that we sacrificed by the Python simplicity for reducing the computational time. In my opinion, reducing the time by 500x factor worth the effort for optimizing the code using Cython.
Reaching 500x faster code is great but still, there is an improvement which is discussed in the next section.
Disabling Bounds Checking and Negative Indices
There are a number of factors that causes the code to be slower as discussed in the Cython documentation which are:
- Bounds checking for making sure the indices are within the range of the array.
- Using negative indices for accessing array elements.
These 2 features are active when Cython executes the code. You can use a negative index such as -1 to access the last element in the array. Cython also makes sure no index is out of the range and the code will not crash if that happens. If you are not in need of such features, you can disable it to save more time. This is by adding the following lines.
cimport cython @cython.boundscheck(False) @cython.wraparound(False) The new code after disabling such features is as follows. import time import numpy cimport numpy cimport cython ctypedef numpy.int_t DTYPE_t @cython.boundscheck(False) # turn off bounds-checking for entire function @cython.wraparound(False) # turn off negative index wrapping for entire function)
After building and running the Cython script, the time is not around 0.4 seconds. Compared to the computational time of the Python script [which is around 500 seconds], Cython is now around 1250 times faster than Python.
Summary
This tutorial used Cython to boost the performance of NumPy array processing. We accomplished this in four different ways:
1. Defining the NumPy Array Data Type
We began by specifying the data type of the NumPy array using the
numpy.ndarray. We saw that this type is available in the definition file imported using the
cimport keyword.
2. Specifying the Data Type of Array Elements + Number of Dimensions
Just assigning the
numpy.ndarray type to a variable is a start–but it's not enough. There are still two pieces of information to be provided: the data type of the array elements, and the dimensionality of the array. Both have a big impact on processing time.
These details are only accepted when the NumPy arrays are defined as a function argument, or as a local variable inside a function. We therefore add the Cython code at these points. You can also specify the return data type of the function.
3. Looping Through NumPy Arrays Using Indexing
The third way to reduce processing time is to avoid Pythonic looping, in which a variable is assigned value by value from the array. Instead, just loop through the array using indexing. This leads to a major reduction in time.
4. Disabling Unnecessary Features
Finally, you can reduce some extra milliseconds by disabling some checks that are done by default in Cython for each function. These include "bounds checking" and "wrapping around." Disabling these features depends on your exact needs. For example, if you use negative indexing, then you need the wrapping around feature enabled.
Conclusion
This tutorial discussed using Cython for manipulating NumPy arrays with a speed of more than 1000x times Python processing alone. The key for reducing the computational time is to specify the data types for the variables, and to index the array rather than iterate through it.
In the next tutorial, we will summarize and advance on our knowledge thus far by using Cython to reduc the computational time for a Python implementation of the genetic algorithm.
Add speed and simplicity to your Machine Learning workflow today | https://blog.paperspace.com/faster-numpy-array-processing-ndarray-cython/ | CC-MAIN-2022-21 | refinedweb | 2,601 | 73.58 |
How can I extend the Graphics class to add a new drawing method?
Created May 7, 2012
Scott Stanchfield
Suppose you wanted to draw a Dragon (fractal) you could create
and from your paint method
You're not supposed to extend it.
The AWT manager creates and passes Graphics objects to you to use to paint. It's the only thing that really knows how to set them up properly.
If you'd like to create utility functions to paint different things, you can create a new class that you pass the graphics context to.
Suppose you wanted to draw a Dragon (fractal) you could create
public class Dragon {
public static void draw(Graphics g) {
g.draw...
}
}
and from your paint method
public void paint(Graphics g) {
Dragon.draw(g);
}
} | http://www.jguru.com/print/faq/view.jsp?EID=506523 | CC-MAIN-2018-13 | refinedweb | 130 | 70.73 |
1618156200
In this Neural Networks Tutorial, we are going to talk about Convolutional Neural Networks. We will cover what convolutional neural networks are and how they work. We will also cover the different elements of CNNs and talk about some of the parameters. We will see some applications of how CNNs are used in self-driving cars and how Tesla and Waymo use them.
The code example is available on my GitHub:
Subscribe:
#keras #tensorflow
1597323120
CNN’s are a special type of ANN which accepts images as inputs. Below is the representation of a basic neuron of an ANN which takes as input X vector. The values in the X vector is then multiplied by corresponding weights to form a linear combination. To thus, a non-linearity function or an activation function is imposed so as to get the final output.
Neuron representation, Image by author
Talking about grayscale images, they have pixel ranges from 0 to 255 i.e. 8-bit pixel values. If the size of the image is NxM, then the size of the input vector will be NM. For RGB images, it would be NM*3. Consider an RGB image with size 30x30. This would require 2700 neurons. An RGB image of size 256x256 would require over 100000 neurons. ANN takes a vector of inputs and gives a product as a vector from another hidden layer that is fully connected to the input. The number of weights, parameters for 224x224x3 is very high. A single neuron in the output layer will have 224x224x3 weights coming into it. This would require more computation, memory, and data. CNN exploits the structure of images leading to a sparse connection between input and output neurons. Each layer performs convolution on CNN. CNN takes input as an image volume for the RGB image. Basically, an image is taken as an input and we apply kernel/filter on the image to get the output. CNN also enables parameter sharing between the output neurons which means that a feature detector (for example horizontal edge detector) that’s useful in one part of the image is probably useful in another part of the image.
Every output neuron is connected to a small neighborhood in the input through a weight matrix also referred to as a kernel or a weight matrix. We can define multiple kernels for every convolution layer each giving rise to an output. Each filter is moved around the input image giving rise to a 2nd output. The outputs corresponding to each filter are stacked giving rise to an output volume.
Convolution operation, Image by indoml
Here the matrix values are multiplied with corresponding values of kernel filter and then summation operation is performed to get the final output. The kernel filter slides over the input matrix in order to get the output vector. If the input matrix has dimensions of Nx and Ny, and the kernel matrix has dimensions of Fx and Fy, then the final output will have a dimension of Nx-Fx+1 and Ny-Fy+1. In CNN’s, weights represent a kernel filter. K kernel maps will provide k kernel features.
#artificial-neural-network #artificial-intelligence #convolutional-network #deep-learning #machine-learning #deep learning
Image classification is the process of segmenting images into different categories based on their features. A feature could be the edges in an image, the pixel intensity, the change in pixel values, and many more. We will try and understand these components later on. For the time being let’s look into the images below (refer to Figure 1). The three images belong to the same individual however varies when compared across features like the color of the image, position of the face, the background color, color of the shirt, and many more. The biggest challenge when working with images is the uncertainty of these features. To the human eye, it looks all the same, however, when converted to data you may not find a specific pattern across these images easily.
Figure 1. Illustrates the portrait of the Author taken in 2014 and 2019 respectively.
An image consists of the smallest indivisible segments called pixels and every pixel has a strength often known as the pixel intensity. Whenever we study a digital image, it usually comes with three color channels, i.e. the Red-Green-Blue channels, popularly known as the “RGB” values. Why RGB? Because it has been seen that a combination of these three can produce all possible color pallets. Whenever we work with a color image, the image is made up of multiple pixels with every pixel consisting of three different values for the RGB channels. Let’s code and understand what we are talking about.
import cv2 import seaborn as sns import matplotlib.pyplot as plt %matplotlib inline sns.set(color_codes=True) ## Read the image image = cv2.imread('Portrait-Image.png') #--imread() helps in loading an image into jupyter including its pixel values plt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB)) ## as opencv loads in BGR format by default, we want to show it in RGB. plt.show() image.shape
#convolutional-network #deep-learning #machine-learning #computer-vision #keras #deep learning
1597594620
This post provides the details of the architecture of _Convolutional Neural Network _(CNN), functions and training of each layer, ending with a summary of the training of CNN.
3. First Convolutional Layer:
#convolutional-network #machine-learning #artificial-intelligence #deep-learning #neural-networks #deep learning | https://morioh.com/p/b10a68c70966 | CC-MAIN-2022-21 | refinedweb | 908 | 55.34 |
Visualization for joints. More...
#include <rendering/rendering.hh>
Visualization for joints.
Constructor.
Destructor.
Push a message for a child of this visual which hasn't been loaded yet.
Attach a vertex of a line to the position of the visual.
Attach a mesh to this visual by name.
Attach a renerable object to the visual.
Attach a visual to this visual.
Get the bounding box for the visual.
Clear parents.
Clone the visual with a new name.
Convert from msgs::Visual::Type to VisualType.
Convert from msgs::Visual::Type to VisualType.
Create an axis and attach it to the joint visual..
Fill an ignition::msgs::Material message based on this visual's material properties.
Get the ambient color of the visual.
Get the arrow visual which represents the axis attached to the child link.
returns Arrow visual.
Return the number of attached movable objects. JointVisual which is attached to the parent link.
returns Parent axis visual.
Get the root visual..
Returns true if an object with _name is attached.
Get whether this visual inherits transparency from parent.
Helper for the contructor.
Get the initial relative pose of the visual.
Check if this visual is an ancestor of another visual.
Check if this visual is a descendant of another visual.
Return true if the visual is a plane.
Return true if the visual is a static geometry.
Load the joint visual based on a message.
Load the visual with a set of parameters.
Load the visual with default parameters.
Reimplemented in SelectionObj, ApplyWrenchVisual, SonarVisual, TransmitterVisual, AxisVisual, LinkFrameVisual, OriginVisual, and ArrowVisual.
Load from a message.
Load a plugin.
Make the visual objects static renderables.
Move to a pose and over a given time.
Move to a series of pose and over a given time.
Get the name of the visual.
Get the position of the visual.
Process a material message.
Remove a running plugin.
Get the rotation of the visual.
Get the scale.
Set the ambient color of the visual. the layer this visual belongs to. a message specific for this visual type.
For example, a link visual will have a link message.
Set visibility flags for this visual and all children.
Enable or disable wireframe for this visual.
Set the world pose of the visual.
Set the world linear position of the visual.
Set the world orientation of the visual. an axis' arrow visual.
Update the joint visual based on a message.
Update a visual based on a message.
Get whether this visual uses RT shader system.
Get whether wireframe is enabled for this visual.
Get the global pose of the visual. | http://gazebosim.org/api/dev/classgazebo_1_1rendering_1_1JointVisual.html | CC-MAIN-2018-13 | refinedweb | 431 | 64.07 |
Opened 7 years ago
Closed 7 years ago
Last modified 3 years ago
#1872 closed defect (fixed)
Internal Error: TclError: bad screen distance "640.0"
Description
After using Tracmetrix I received a error message:
File "f:\programme\python25\lib\site-packages\Trac-0.11dev-py2.5.egg\trac\web\main.py", line 434, in dispatch_request dispatcher.dispatch(req) File "f:\programme\python25\lib\site-packages\Trac-0.11dev-py2.5.egg\trac\web\main.py", line 217, in dispatch resp = chosen_handler.process_request(req) File "build\bdist.win32\egg\tracmetrixplugin\mdashboard.py", line 416, in process_requestFile "build\bdist.win32\egg\tracmetrixplugin\mdashboard.py", line 537, in _render_viewFile "build\bdist.win32\egg\tracmetrixplugin\mdashboard.py", line 287, in create_cummulative_chartFile "F:\Programme\Python25\Lib\site-packages\matplotlib\pylab.py", line 2317, in cla ret = gca().cla(*args, **kwargs) File "F:\Programme\Python25\Lib\site-packages\matplotlib\pylab.py", line 883, in gca ax = gcf().gca(**kwargs) File "F:\Programme\Python25\Lib\site-packages\matplotlib\pylab.py", line 893, in gcf return figure() File "F:\Programme\Python25\Lib\site-packages\matplotlib\pylab.py", line 859, in figure figManager = new_figure_manager(num, figsize=figsize, dpi=dpi, facecolor=facecolor, edgecolor=edgecolor, frameon=frameon, FigureClass=FigureClass, **kwargs) File "F:\Programme\Python25\Lib\site-packages\matplotlib\backends\backend_tkagg.py", line 90, in new_figure_manager figManager = FigureManagerTkAgg(canvas, num, window) File "F:\Programme\Python25\Lib\site-packages\matplotlib\backends\backend_tkagg.py", line 274, in __init__ self.toolbar = NavigationToolbar2TkAgg( canvas, self.window ) File "F:\Programme\Python25\Lib\site-packages\matplotlib\backends\backend_tkagg.py", line 545, in __init__ NavigationToolbar2.__init__(self, canvas) File "F:\Programme\Python25\Lib\site-packages\matplotlib\backend_bases.py", line 1163, in __init__ self._init_toolbar() File "F:\Programme\Python25\Lib\site-packages\matplotlib\backends\backend_tkagg.py", line 585, in _init_toolbar borderwidth=2) File "F:\Programme\Python25\lib\lib-tk\Tkinter.py", line 2442, in __init__ Widget.__init__(self, master, 'frame', cnf, {}, extra) File "F:\Programme\Python25\lib\lib-tk\Tkinter.py", line 1930, in __init__ (widgetName, self._w) + extra + self._options(cnf))
code fragment:
1925. for k in cnf.keys(): 1926. if type(k) is ClassType: 1927. classes.append((k, cnf[k])) 1928. del cnf[k] 1929. self.tk.call( 1930. (widgetName, self._w) + extra + self._options(cnf)) 1931. for k, v in classes: 1932. k.configure(self, v) 1933. def destroy(self): 1934. """Destroy this and all descendants widgets.""" 1935. for c in self.children.values(): c.destroy()
local variables:
classes [] cnf {'width': 640.0, 'borderwidth': 2, 'height': 50} extra () k 'height' kw {} master <Tkinter.Tk instance at 0x027E2C60> self <matplotlib.backends.backend_tkagg.NavigationToolbar2TkAgg instance at ... widgetName 'frame'
Attachments (0)
Change History (9)
comment:1 Changed 7 years ago by khundeen
comment:2 Changed 7 years ago by didley@…
Sorry! But I cant try to install with python 2.4.4 . I'm using Python 25. But after a reinstall now I have an another error. Looks not so hard.
TclError: bad screen distance "640.0"
What means that?
comment:3 Changed 7 years ago by didley@…
Sorry! But there is no time to try to install with python 2.4.4 . I'm using python 2.5
Maybe in my private time.
comment:4 Changed 7 years ago by khundeen
didley, thanks for your interest. I will get to this bug once I completed the baseline functionality on Aug 20.
Please also tell me the version of the packages you use. (numpy, matplotlib, gtk) I will try to replicate your configuration and test it.
comment:5 Changed 7 years ago by didley@…
hi,
I'm using
- Python 2.5
- Python 2.5 numpy-1.0.3
- Python 2.5 pygtk-2.10.4
- Python 2.5 matplotlib-0.90.1
- Python 2.5 svn-python-1.43
comment:6 Changed 7 years ago by khundeen
- Resolution set to fixed
- Status changed from new to closed
comment:7 Changed 6 years ago by lexter
Hi!!
I'm getting the same error while running some code. The weird thing it's that I get the error only in some computers, in others works perfect. Even in the same computer, for one user works OK and for another one not.
Some help please, I've spent 2 days running in circles...
comment:8 Changed 5 years ago by anonymous
Hi!
I get also the problem of
tkinter.TclError: bad screen distance "320.0"
The error occurs, when I import
from matplotlib.backends.backend_qt4agg import FigureCanvasQTAgg as FigureCanvas from matplotlib.figure import Figure from matplotlib.pyplot import imshow
and proceed
imshow(matrix)
after
self.dpi = 100
self.fig = Figure(dpi=self.dpi)
self.canvas = FigureCanvas(self.fig)
I want to use imshow in an QT-Application to have fast look for intensity plots.
Would be great, if anybody could help me!
comment:9 Changed 3 years ago by John Doe III.
Hi,
just for completeness' sake and because this page was the first one Google came up with when searching for that error (and because it drove us nuts for two days): We have observed the same error message ("bad screen distance 640.0") with python 2.7.0, 2.7.1 and 2.7.2 when trying to plot with matplotlib 1.0.1 (all in Windows 64 bit).
The weird thing was that it worked on one machine, but not on the one next to it, which was set up in exactly the same way - or at least we thought so. We found the solution on: It turns out the numerical locale was different on the two machines. It worked when the locale was English, but failed for German. I.e., it seems the "640.0" is being converted from a string to float/integer at some point, and that only works in the English numerical locale, where the decimal point is the "." (in German it's the ",").
For us, it worked to manually set the locale just before initializing the matplotlib figure (from the linked bug report):
import locale locale.setlocale(locale.LC_NUMERIC, 'C')
Cheers, JD
Hi, | http://trac-hacks.org/ticket/1872 | CC-MAIN-2014-49 | refinedweb | 998 | 54.9 |
Rendering PDF pages with PDF.js and Vue
Building a PDF Viewer with Vue - Part 1
I remember a time not too long ago when the possibility of rendering PDFs inline on a web page would have sounded crazy. Then PDF.js came along and changed all that.
I was recently tasked with just this sort of project, and I leveraged PDF.js, Vue and Webpack to put it all together. This post is the first in a series which demonstrates how I used Vue to render PDF pages to
<canvas> elements. Later we'll explore conditional rendering and adding paging and zoom controls.
The latest source code for this project is on Github at rossta/vue-pdfjs-demo. To see the version of the project described in this post, check out the
part-1-simple-document branch. Finally, here's a link to the project demo.
Similar projects
Mozilla's PDF.js package ships with a web viewer (demo) For an alternative approach to PDF rendering with Vue, check out the vue-pdf package.
An incomplete intro to PDF.js
PDF.js is a JavaScript project by Mozilla that makes it easier to parse and render PDFs in HTML. It is comprised of three key pieces: Core, Display, and Viewer.
The Core layer is the lower level piece that parses and interprets PDFs for use by the other layers. This code is split out into a separate file,
pdf.worker.js, which will run in a web worker thread in the browser. Since we're using Webpack, it handles bundling, fetching, and configuration of the worker script behind the scenes.
The Viewer layer, as I mentioned earlier, provides a primary user interface for viewing and paging through PDFs in Firefox (or other browsers with included extensions). We won't be using this piece; in fact, this tutorial could be used as the basis for a Vue.js implementation of an alternative viewer.
Most of our interaction with the PDF.js library will be at the Display layer, which provides the JavaScript API for retrieving and manipulating PDF document and page data. The API relies heavily on Promises, which we'll be incorporating into our Vue.js components. We'll also take advantage of dynamic imports to code split our use of PDF.js, since, at least for my purposes, I only want to load the PDF.js library on demand. Keeping it out of the main application Webpack bundle helps keep the initial page load time small.
Using PDF.js
Here's a basic ES6 example of dynamically loading PDF.js to render an entire PDF document (without Vue):
import range from 'lodash/range' import('pdfjs-dist/webpack').then(pdfjs => { pdfjs .getDocument('wibble.pdf') .then(pdf => { const pagePromises = range(1, pdf.numPages).map(number => pdf.getPage(number)) return Promise.all(pagePromises) }) .then(pages => { const scale = 2 const canvases = pages.forEach(page => { const viewport = page.getViewport(scale) // Prepare canvas using PDF page dimensions const canvas = document.createElement('canvas') canvas.height = viewport.height canvas.width = viewport.width // Render PDF page into canvas context const canvasContext = canvas.getContext('2d') const renderContext = { canvasContext, viewport } page.render(renderContext).then(() => console.log('Page rendered')) document.body.appendChild(canvas) }) }, error => console.log('Error', error), ) })
The code above dynamically imports the PDF.js distribution with
import('pdfjs/dist'). Webpack splits the PDF.js code out into a bundle and loads it asynchronously only when that line is executed in the browser. This expression returns a promise that resolves with the PDF.js module when the bundle is successfully loaded and evaluated. With a reference to the modules,
pdfjs we can now exercise the PDF.js document API.
The expression
pdjs.getDocument('url-to-pdf') also returns a promise which resolves when the document is loaded and parsed by the PDF.js core layer. This promise resolves to an instance of
PDFDocumentProxy, which we can use to retrieve additional data from the PDF document. We used the
PDFDocumentProxy#numPages attribute to build a number range of all the pages (using lodash
range) and build an array of promises representing requests for each of the pages of the document returned by
PDFDocumentProxy#getPage(pageNumber). The key here to loading all pages at once is using
Promise.all to resolve when all pages are retrieved as PDFPageProxy objects.
Finally, for each page object, we create a separate
canvas element and trigger the
PDFPageProxy#render method, which returns another promise and accepts options for a canvas context and viewport. This render method is responsible for drawing the PDF data into the canvas element asynchronously while we append the canvas elements to
document.body.
Refactoring to Vue
Our little script works, and for some applications, this may implementation may be sufficient. However, let's say we need some interaction, like paging controls, zoom buttons, conditional page fetching and rendering while scrolling, etc. Adding complexity could get unwieldy quickly. For this next stage, we'll refactor to Vue components, so we can get the benefit of reactivity and make our code more declarative and more natural to extend.
In pseudocode, our component architecture resembles this:
<PDFDocument> <PDFPage : <PDFPage : <PDFPage : ... </PDFDocument>
Requirements
For my project, I used the following npm packages (installed using
yarn).
@vue/cli:
^3.0.0-beta.15
vue:
^2.5.16
pdfjs-dist:
^2.0.489
I would expect it to be straightforward to adapt the code for other relatively recent versions of these packages.
Fetching the PDF
Our
<App> component hard-codes default values for a PDF url and a rendering scale. A
<PDFDocument> child component receives this data as props.
<!-- src/App.vue --> <template> <div id="app"> <PDFDocument v- </div> </template> <script> export default { // ... data() { return { url: '', // a PDF scale: 2, } }, } </script>
The document component is responsible for fetching the PDF data through PDF.js and rendering a
<PDFPage> component for each
page object returned by the API.
Its
data will track the
page object in
pages.
// src/components/PDFDocument.vue export default { props: ['url', 'scale'], data() { return { pdf: undefined, pages: [], }; }, // ...
When the component is mounted, it will fetch the PDF data using the
pdfjs.getDocument function.
// src/components/PDFDocument.vue export default { //... created() { this.fetchPDF(); }, methods: { fetchPDF() { import('pdfjs-dist/webpack'). then(pdfjs => pdfjs.getDocument(this.url)). then(pdf => (this.pdf = pdf)); }, }, //...
We'll use a watch callback for the
pdf.getPage function provided by PDF.js. Since the return value of
getPage behaves like a promise, we can use
Promise.all to determine when all the
page objects have been fetched and set the resolved collection as the
pages data:
// src/components/PDFDocument.vue import range from 'lodash/range'; export default { // ... watch: { pdf(pdf) { this.pages = []; const promises = range(1, pdf.numPages). map(number => pdf.getPage(number)); Promise.all(promises). then(pages => (this.pages = pages)); }, }, };
The template simply renders a
<PDFPage> child component for each
page object. Each page component also needs the
scale prop for rendering the page data to
<canvas>:
<!-- src/components/PDFDocument.vue --> <template> <div class="pdf-document"> <PDFPage v- </div> </template>
Setting up the canvas
Now we can build out the
<PDFPage> element. We'll use a Vue
render function to create a
<canvas> element with computed attributes,
canvasAttrs.
// src/components/PDFPage.vue export default { props: ['page', 'scale'], render(h) { const {canvasAttrs: attrs} = this; return h('canvas', {attrs}); }, // ...
To render a PDF to
<canvas> with an acceptable resolution, we can take advantage of a browser property called
window.devicePixelRatio. This value represents the ratio of screen pixels to CSS pixels. Given a hi-resolution display with a
devicePixelRatio of
2, we'd want to give the canvas initial width and height attributes that are two times greater than its corresponding width and height in CSS. Otherwise, rendering our PDF pixels to canvas may appear blurry.
When the
<PDFPage> component is created, we can access the
viewport property of the
page object, via
PDFPageProxy#getViewport, to obtain the pixel width and height of the PDF. These are the width and height attributes of the
<canvas> element. For the actual size of the
<canvas>, we'll use CSS attributes.
Since the
scale prop is reactive and our
render function depends on
canvasAttrs, defining
canvasAttrs as a computed property based off the scale means our PDF pages automatically re-render when the scale changes. Future iterations allow changes to the
scale prop (using future zoom controls, for example). We'll calculate the width and height via CSS to update the rendered size of the canvas to avoid redrawing the canvas data from the
page object each time. For this, we use a clone of the original viewport, given via the
actualSizeViewport computed property, and the
devicePixelRatio to calculate the target width and height style attributes for the
<canvas>.
Here's the code that puts all that together:
// src/components/PDFPage.vue export default { created() { // PDFPageProxy#getViewport // this.viewport = this.page.getViewport(this.scale); }, computed: { canvasAttrs() { let {width, height} = this.viewport; [width, height] = [width, height].map(dim => Math.ceil(dim)); const style = this.canvasStyle; return { width, height, style, class: 'pdf-page', }; }, canvasStyle() { const {width: actualSizeWidth, height: actualSizeHeight} = this.actualSizeViewport; const pixelRatio = window.devicePixelRatio || 1; const [pixelWidth, pixelHeight] = [actualSizeWidth, actualSizeHeight] .map(dim => Math.ceil(dim / pixelRatio)); return `width: ${pixelWidth}px; height: ${pixelHeight}px;` }, actualSizeViewport() { return this.viewport.clone({scale: this.scale}); }, //... }, // ...
Rendering the page
When the
<canvas> element mounts, we can draw the PDF page data to it using the
PDFPageProxy#render method. It needs context from the
viewport and
canvasContext as arguments. Since that returns a promise, we can be notified when it's complete.
// src/components/PDFPage.vue export default { mounted() { this.drawPage(); }, methods: { drawPage() { if (this.renderTask) return; const {viewport} = this; const canvasContext = this.$el.getContext('2d'); const renderContext = {canvasContext, viewport}; // PDFPageProxy#render // this.renderTask = this.page.render(renderContext); this.renderTask. then(() => this.$emit('rendered', this.page)); }, // ... }, // ...
Cleaning up after ourselves
As we're working with JavaScript objects that keep state outside of Vue's control, we should be mindful of calling provided teardown methods. The PDF document and page objects provide
destroy methods to be called on teardown, such as, when our render promise fails, the
page object is replaced, or the Vue component itself is destroyed.
// src/components/PDFPage.vue export default { beforeDestroy() { this.destroyPage(this.page); }, methods: { drawPage() { // ... this.renderTask. then(/* */). catch(this.destroyRenderTask); }, destroyPage(page) { if (!page) return; // PDFPageProxy#_destroy // page._destroy(); // RenderTask#cancel // if (this.renderTask) this.renderTask.cancel(); }, destroyRenderTask() { if (!this.renderTask) return; // RenderTask#cancel // this.renderTask.cancel(); delete this.renderTask; }, }, watch: { page(page, oldPage) { this.destroyPage(oldPage); }, }, };
Wrapping up
We've now converted our original, imperative PDF rendering script with a declarative Vue component hierarchy. We've certainly added much code to make this work, but with a working knowledge of Vue, we've made it easier to reason about, easier to extend, and easier to add features to give our PDF viewer more functionality.
In the next post, we'll look at adding some conditional rendering; since all pages aren't visible when the document is initially loaded, Vue can help us design a system that only fetches and renders PDF pages when scrolled into view. | https://rossta.net/blog/building-a-pdf-viewer-with-vue-part-1.html | CC-MAIN-2019-13 | refinedweb | 1,850 | 50.33 |
Ruby hashes with custom objects as keys
When you're storing things in hashes, obviously you need a hash function to turn keys into numbers (or memory locations, or whatever), so you know which bucket gets which values. This hash function is nicely defined for Fixnums; two Fixnums give the same hash value no matter what, which makes sense since Fixnums are immutable, so two objects with the same Fixnum value are pretty much the same object in every way. Strings are mutable, but
String#hash always returns the same hash value for two strings even if they're different objects (i.e. have different object_id's), apparently by using the string's length and contents in some way.
Things becomes screwy if you have your own class and you want to use objects of that class as hash keys though. From what I can tell, if a class doesn't define it's own method called hash, then Object#hash defaults to using an object's object_id as the hash value.
Why would you want to use your own class's objects as hash keys? Well, I got in trouble because
Array#uniq happens to use that same hash function to determine uniqueness, and I want two objects with the same values for some subset of their instance methods to be considered non-unique. The default
Object#hash doesn't do this.
It's not as simple as defining your own hash method; the documentation for Object#hash says:
This function must have the property that a.eql?(b) implies a.hash == b.hash
So
Object#eql? is also apparently used by hashes somewhere along the way. The moral of this story is, if you want to use your objects as hash keys or ever plan to uniq an array containing them, you have to define a
hash and
eql? method. This code illustrates this:
def test(o1,o2) h = Hash.new h[o1] = true h[o2] = true puts "o1.object_id: #{o1.object_id}" puts "o2.object_id: #{o2.object_id}" puts "o1.hash: #{o1.hash}" puts "o2.hash: #{o2.hash}" puts "o1.eql? o2: #{o1.eql? o2}" puts "o1.value: #{o1.value}" puts "o2.value: #{o2.value}" puts "o1.value.object_id: #{o1.value.object_id}" puts "o2.value.object_id: #{o2.value.object_id}" puts "o1.value.hash: #{o1.value.hash}" puts "o2.value.hash: #{o2.value.hash}" puts "h.keys.length: #{h.keys.length}" puts "[o1,o2].uniq: #{h.keys.uniq}" puts "[o1,o2].uniq.length: #{h.keys.uniq.length}" puts end class Foo attr_reader :value def initialize(value) @value = value end end f1 = Foo.new('123') f2 = Foo.new('123') test(f1,f2) class Foo def hash @value.hash end end test(f1,f2) class Foo def eql?(other) @value.eql? other.value end end test(f1,f2) test = 123 test2 = 123 puts test.object_id puts test2.object_id | http://briancarper.net/blog/111.html | CC-MAIN-2017-26 | refinedweb | 476 | 68.67 |
February 2009
Volume 24 Number 02
.NET Matters - Ordered Execution With ThreadPool
By Stephen Toub | February 2009
Q Many components in my system need to execute work asynchronously, which makes me think that the Microsoft .NET Framework ThreadPool is the right solution. However, I have what I believe is a unique requirement: each component needs to ensure that its work items are processed in order and that, as a result, no two of its work items are executed at the same time. It's OK, though, for multiple components to execute concurrently with each other; in fact, that's desired. Do you have any recommendations?
AThis isn't as unique a predicament as you might think, as it occurs in a variety of important scenarios, including ones based on message passing. Consider a pipeline implementation that obtains parallelism benefits by having multiple stages of the pipeline active at any one time.
For example, you could have a pipeline that reads in data from a file, compresses it, encrypts it, and writes it out to a new file. The compression can be done concurrently with the encryption, but not on the same data at the same time, since the output of one needs to be the input to the other. Rather, the compression routine can compress some data and send it off to the encryption routine to be processed, at which point the compression routine can work on the next piece of data.
Since many compression and encryption algorithms maintain a state that affects how future data is compressed and encrypted, it's important that ordering is maintained. (Never mind that this example deals with files, and it'd be nice if you could decrypt and decompress the results in order to get back the original with all of the data in the correct order.)
There are several potential solutions. The first solution is simply to dedicate a thread to each component. This DedicatedThread would have a first-in-first-out (FIFO) queue of work items to be executed and a single thread that services that queue. When the component has work to be run, it dumps that work into the queue, and eventually the thread will get around to picking up the work and executing it. Since there's only one thread, only one item will be run at a time. And as a FIFO queue is being used, the work items will be processed in the order they were generated.
As with the example I provided in the January 2008 .NET Matters column, I'll use a simple WorkItem class to represent the work to be executed, shown in Figure 1. An implementation of DedicatedThread that uses this WorkItem type is shown in Figure 2. The bulk of the implementation is in a naive BlockingQueue<T> implementation (the .NET Framework 4.0 includes a BlockingCollection<T> type that would be a better fit for an implementation like this). The constructor of DedicatedThread simply creates a BlockingQueue<T> instance, then spins up a thread that continually waits for another item to arrive in the queue and then executes it.
internal class WorkItem { public WaitCallback Callback; public object State; public ExecutionContext Context; private static ContextCallback _contextCallback = s => { var item = (WorkItem)s; item.Callback(item.State); }; public void Execute() { if (Context != null) ExecutionContext.Run(Context, _contextCallback, this); else Callback(State); } }
public class DedicatedThread { private BlockingQueue<WorkItem> _workItems = new BlockingQueue<WorkItem>(); public DedicatedThread() { new Thread(() => { while (true) { workItems.Dequeue().Execute(); } }) { IsBackground = true }.Start(); } public void QueueUserWorkItem(WaitCallback callback, object state) { _workItems.Enqueue(new WorkItem { Callback = callback, State = state, Context = ExecutionContext.Capture() }); } private class BlockingQueue<T> { private Queue<T> _queue = new Queue<T>(); private Semaphore _gate = new Semaphore(0, Int32.MaxValue); public void Enqueue(T item) { lock (_queue) _queue.Enqueue(item); _gate.Release(); } public T Dequeue() { _gate.WaitOne(); lock (_queue) return _queue.Dequeue(); } } }
This provides the basic functionality for your scenario and it may meet your needs, but there are some important downsides. First, a thread is being reserved for each component. With one or two components, that may not be a problem. But for a lot of components, this could result in a serious explosion in the number of threads. That can lead to bad performance.
This particular implementation is also not extremely robust. For example, what happens if you want to tear down a component—how do you tell the thread to stop blocking? And what happens if an exception is thrown from a work item?
As an aside, it's interesting to note that this solution is similar to what Windows uses in a typical message pump. The message pump is a loop waiting for messages to arrive, dispatching them (processing them), then going back and waiting for more. The messages for a particular window are processed by a single thread. The similarities are demonstrated by the code in Figure 3, which should exhibit behavior very much like the code in Figure 2. A new thread is spun up that creates a Control, ensures that its handle has been initialized, and uses Application.Run to execute a message loop. To queue a work item to this thread, you simply use the Control's BeginInvoke method. Note that I'm not recommending this approach, but rather just pointing out that, at a high level, it's the same basic concept as the DedicatedThread solution already shown.
public class WindowsFormsDedicatedThread { private Control _control; public WindowsFormsDedicatedThread() { using (var mre = new ManualResetEvent(false)) { new Thread(() => { _control = new Control(); var forceHandleCreation = _control.Handle; mre.Set(); Application.Run(); }) { IsBackground = true }.Start(); mre.WaitOne(); } } public void QueueUserWorkItem(WaitCallback callback, object state) { _control.BeginInvoke(callback, state); } }
A second solution involves using the ThreadPool for execution. Rather than spinning up a new, custom thread per component that services a private queue, we'll keep just the queue per component, such that no two elements from the same queue will ever be serviced at the same time. This has the benefits of allowing the ThreadPool itself to control how many threads are needed, to handle their injection and retirement, to handle reliability issues, and to get you out of the business of spinning up new threads, which is infrequently the right thing to do.
An implementation of this solution is shown in Figure 4. The FifoExecution class maintains just two fields: a queue of work items to be processed, and a Boolean value that indicates whether a request has been issued to the ThreadPool to process work items. Both of these fields are protected by a lock on the work items list. The rest of the implementation is simply two methods.
public class FifoExecution { private Queue<WorkItem> _workItems = new Queue<WorkItem>(); private bool _delegateQueuedOrRunning = false; public void QueueUserWorkItem(WaitCallback callback, object state) { var item = new WorkItem { Callback = callback, State = state, Context = ExecutionContext.Capture() }; lock (_workItems) { _workItems.Enqueue(item); if (!_delegateQueuedOrRunning) { _delegateQueuedOrRunning = true; ThreadPool.UnsafeQueueUserWorkItem(ProcessQueuedItems, null); } } } private void ProcessQueuedItems(object ignored) { while (true) { WorkItem item; lock (_workItems) { if (_workItems.Count == 0) { _delegateQueuedOrRunning = false; break; } item = _workItems.Dequeue(); } try { item.Execute(); } catch { ThreadPool.UnsafeQueueUserWorkItem(ProcessQueuedItems, null); throw; } } } }
The first method is QueueUserWorkItem, with a signature that matches that exposed by the ThreadPool (the ThreadPool also provides a convenience overload that accepts just a WaitCallback, an overload you could choose to add). The method first creates a WorkItem to be stored and then takes the lock. (No shared state is accessed while creating the WorkItem. Thus, in order to keep the lock as small as possible, this capturing of the item is done before taking the lock.) Once the lock is held, the created work item is enqueued onto the work item queue.
The method then checks whether a request has been made to the ThreadPool to process queued work items, and, if one hasn't been made, it makes such a request (and notes it for the future). This request to the ThreadPool is simply to use one of the ThreadPool's threads to execute the ProcessQueuedItems method.
When invoked by a ThreadPool thread, ProcessQueuedItems enters a loop. In this loop, it takes the lock and, while holding the lock, it checks whether there are any more work items to be processed. If there aren't any, it resets the request flag (such that future queued items will request processing from the pool again) and exits. If there are work items to be processed, it grabs the next one, releases the lock, executes the processing, and starts all over again, running until there are no more items in the queue.
This is a simple yet powerful implementation. A component may now create an instance of FifoExecution and use it to schedule work items. Per instance of FifoExecution, only one queued work item will be able to execute at a time, and queued work items will execute in the order they were queued. Additionally, work items from distinct FifoExecution instances will be able to execute concurrently. And the best part is that you're now out of the business of thread management, leaving all of the hard (but very important) work of thread management to the ThreadPool.
In the extreme case, where every component is keeping the pool saturated with work, the ThreadPool will likely ramp up to having one thread per component, just like in the original DedicatedThread implementation. But that will only happen if it's deemed appropriate by the ThreadPool. If components aren't keeping the pool saturated, many fewer threads will be required.
There are additional benefits, such as letting the ThreadPool do the right thing with regard to exceptions. In the DedicatedThread implementation, what happens if the processing of an item throws an exception? The thread will come crashing down, but depending upon the application's configuration, the process may not be torn down. In that case, work items will start queueing up to the DedicatedThread, but none will ever get processed. With FifoExecution, the ThreadPool will just end up adding more threads to compensate for those that have gone away.
Figure 5shows a simple demo application that utilizes the FifoExecution class. This app has three stages in a pipeline. Each stage writes out the ID of the current piece of data it's working with (which is just the loop iteration). It then does some work (represented here by a Thread.SpinWait) and passes data (again, just the loop iteration) along to the next stage. Each step outputs its information with a different number of tabs so that it's easy to see the results separated out. As you can observe in the output shown in Figure 6, each stage (a column) is keeping the work ordered correctly.
static void Main(string[] args) { var stage1 = new FifoExecution(); var stage2 = new FifoExecution(); var stage3 = new FifoExecution(); for (int i = 0; i < 100; i++) { stage1.QueueUserWorkItem(one => { Console.WriteLine("" + one); Thread.SpinWait(100000000); stage2.QueueUserWorkItem(two => { Console.WriteLine("\t\t" + two); Thread.SpinWait(100000000); stage3.QueueUserWorkItem(three => { Console.WriteLine("\t\t\t\t" + three); Thread.SpinWait(100000000); }, two); }, one); }, i); } Console.ReadLine(); }
.gif)
Figure 6 Output from Demo Application
It's also interesting to note that there's a lack of fairness between the stages of the pipeline. You can see, for example, that stage1 in Figure 6is already up to iteration 21, while stage2 is still back on 13 and stage3 is on 9. This is largely due to my implementation of ProcessQueuedItems. The sample app is very quickly pushing 100 work items into stage1, and thus the thread from the pool that services stage1 will likely sit in the ProcessQueuedItems loop and not return until there's no more stage1 work. This gives it an unfair bias over the other stages. If you see similar behavior in your app, and it's a problem, you can increase fairness between the stages by modifying the implementation of ProcessQueuedItems to one more like the following:
Now, even if there are more items to be processed, ProcessQueuedItems won't loop around but rather will recursively queue itself to the ThreadPool, thus prioritizing itself behind items from other stages. With this modification, the output from the application in Figure 5now looks like that shown in Figure 7. You can see in this new output that scheduling is indeed treating stage2 and stage3 with more fairness than before (there's still some lag between the stages, but that's to be expected given that this is a pipeline).
.gif)
Figure 7 New Output with Fairer Scheduling
Of course, this increased fairness doesn't come for free. Each work item now incurs an extra trip through the scheduler, which adds some cost. You'll need to decide whether this is a trade-off you can make for your application; for example, if the work you're doing in your work items is at all substantial, this overhead should be negligible and unnoticeable.
This is just one more example of how it's possible to build systems on top of the ThreadPool that add functionality without having to build custom thread pools yourself. For other examples, see previous editions of the .NET Matterscolumn in MSDN Magazine.
Send your questions and comments to netqa@microsoft.com.
Stephen Toubis a Senior Program Manager on the Parallel Computing Platform team at Microsoft. He is also a Contributing Editor for MSDN Magazine.
Current Issue
Receive the MSDN Flash e-mail newsletter every other week, with news and information personalized to your interests and areas of focus. | https://msdn.microsoft.com/en-us/magazine/dd419664.aspx | CC-MAIN-2019-22 | refinedweb | 2,238 | 62.48 |
#include <wx/event.h>
A class that can handle events from the windowing system.
wxWindow is (and therefore all window classes are) derived from this class.
When events are received, wxEvtHandler invokes the method listed in the event table using itself as the object. When using multiple inheritance it is imperative that the wxEvtHandler(-derived) class is the first class inherited such that the
this pointer for the overall object will be identical to the
this pointer of the wxEvtHandler portion.
Constructor.
Add an event filter whose FilterEvent() method will be called for each and every event processed by wxWidgets.
The filters are called in LIFO order and wxApp is registered as an event filter by default. The pointer must remain valid until it's removed with RemoveFilter() and is not deleted by wxEvtHandler.
This function is similar to QueueEvent() but can't be used to post events from worker threads for the event objects with wxString fields (i.e. in practice most of them) because of an unsafe use of the same wxString object which happens because the wxString field in the original event object and its copy made internally by this function share the same string buffer internally. Use QueueEvent() to avoid this.
A copy of event is made by the function, so the original can be deleted as soon as function returns (it is common that the original is created on the stack). This requires that the wxEvent::Clone() method be implemented by event so that it can be duplicated and stored until it gets processed.
Reimplemented in wxWindow.
Binds the given function, functor or method dynamically with the event.
This offers basically the same functionality as Connect(), but it is more flexible as it also allows you to use ordinary functions and arbitrary functors as event handlers. It is also less restrictive then Connect() because you can use an arbitrary method as an event handler, whereas Connect() requires a wxEvtHandler derived handler.
See Dynamic Event Handling for more detailed explanation of this function and the Event Sample sample for usage examples.
See the Bind<>(const EventTag&, Functor, int, int, wxObject*) overload for more info.
This overload will bind the given method as the event handler.
Asynchronously call the given method.
Calling this function on an object schedules an asynchronous call to the method method showing this message dialog after the current event handler completes.
The method being called must be the method of the object on which CallAfter() itself is called.
Notice that it is safe to use CallAfter() from other, non-GUI, threads, but that the method will be always called in the main, GUI, thread context.
Example of use:
Asynchronously call the given functor.
Calling this function on an object schedules an asynchronous call to the functor function showing this message dialog after the current event handler completes.
Notice that it is safe to use CallAfter() from other, non-GUI, threads, but that the method will be always called in the main, GUI, thread context.
This overload is particularly useful in combination with C++11 lambdas:
Connects the given function dynamically with the event handler, id and event type.
Notice that Bind() provides a more flexible and safer way to do the same thing as Connect(), please use it in any new code – while Connect() is not formally deprecated due to its existing widespread usage, it has no advantages compared to Bind().
This is an alternative to the use of static event tables. It is more flexible as it allows to connect events generated by some object to an event handler defined in a different object of a different class (which is impossible to do directly with the event tables – the events can be only handled in another object if they are propagated upwards to it). Do make sure to specify the correct eventSink when connecting to an event of a different object.
See Dynamic Event Handling for more detailed explanation of this function and the Event Sample sample for usage examples.
This specific overload allows you to connect an event handler to a range of source IDs. Do not confuse source IDs with event types: source IDs identify the event generator objects (typically wxMenuItem or wxWindow objects) while the event type identify which type of events should be handled by the given function (an event generator object may generate many different types of events!).
wxPerl Note: In wxPerl this function takes 4 arguments: id, lastid, type, method; if method is undef, the handler is disconnected.}
See the Connect(int, int, wxEventType, wxObjectEventFunction, wxObject*, wxEvtHandler*) overload for more info.
This overload can be used to attach an event handler to a single source ID:
Example:
wxPerl Note: Not supported by wxPerl.
See the Connect(int, int, wxEventType, wxObjectEventFunction, wxObject*, wxEvtHandler*) overload for more info.
This overload will connect the given event handler so that regardless of the ID of the event source, the handler will be called.
wxPerl Note: Not supported by wxPerl.
Deletes all events queued on this event handler using QueueEvent() or AddPendingEvent().
Use with care because the events which are deleted are (obviously) not processed and this may have unwanted consequences (e.g. user actions events will be lost).
Disconnects the given function dynamically from the event handler, using the specified parameters as search criteria and returning true if a matching function has been found and removed.
This method can only disconnect functions which have been added using the Connect() method. There is no way to disconnect functions connected using the (static) event tables.
wxPerl Note: Not supported by wxPerl.
See the Disconnect(wxEventType, wxObjectEventFunction, wxObject*, wxEvtHandler*) overload for more info.
This overload takes the additional id parameter.
wxPerl Note: Not supported by wxPerl.
See the Disconnect(wxEventType, wxObjectEventFunction, wxObject*, wxEvtHandler*) overload for more info.
This overload takes an additional range of source IDs.
wxPerl Note: In wxPerl this function takes 3 arguments: id, lastid, type.
Returns user-supplied client data.
Returns a pointer to the user-supplied client data object.
Returns true if the event handler is enabled, false otherwise.
Returns the pointer to the next handler in the chain.
Returns the pointer to the previous handler in the chain.
Returns true if the next and the previous handler pointers of this event handler instance are NULL.
Processes an event, searching event tables and calling zero or more suitable event handler function(s).
Normally, your application would not call this function: it is called in the wxWidgets implementation to dispatch incoming user interface events to the framework (and application).
However, you might need to call it if implementing new functionality (such as a new control) where you define new event types, as opposed to allowing the user to override virtual functions.
Notice that you don't usually need to override ProcessEvent() to customize the event handling, overriding the specially provided TryBefore() and TryAfter() functions is usually enough. For example, wxMDIParentFrame may override TryBefore() to ensure that the menu events are processed in the active child frame before being processed in the parent frame itself.
The normal order of event table searching is as follows:
-1(default) the processing stops here.
A->ProcessEventis called and it doesn't handle the event,
B->ProcessEventwill be called and so on...). Note that in the case of wxWindow you can build a stack of event handlers (see wxWindow::PushEventHandler() for more info). If any of the handlers of the chain return true, the function exits.
Notice that steps (2)-(6) are performed in ProcessEventLocally() which is called by this function.
Reimplemented in wxWindow.
Try to process the event in this handler and all those chained to it.
As explained in ProcessEvent() documentation, the event handlers may be chained in a doubly-linked list. This function tries to process the event in this handler (including performing any pre-processing done in TryBefore(), e.g. applying validators) and all those following it in the chain until the event is processed or the chain is exhausted.
This function is called from ProcessEvent() and, in turn, calls TryBefore() and TryAfter(). It is not virtual and so cannot be overridden but can, and should, be called to forward an event to another handler instead of ProcessEvent() which would result in a duplicate call to TryAfter(), e.g. resulting in all unprocessed events being sent to the application object multiple times.
Processes the pending events previously queued using QueueEvent() or AddPendingEvent(); you must call this function only if you are sure there are pending events for this handler, otherwise a
wxCHECK will fail.
The real processing still happens in ProcessEvent() which is called by this function.
Note that this function needs a valid application object (see wxAppConsole::GetInstance()) because wxApp holds the list of the event handlers with pending events and this function manipulates that list.
Queue event for a later processing.
This method is similar to ProcessEvent() but while the latter is synchronous, i.e. the event is processed immediately, before the function returns, this one is asynchronous and returns immediately while the event will be processed at some later time (usually during the next event loop iteration).
Another important difference is that this method takes ownership of the event parameter, i.e. it will delete it itself. This implies that the event should be allocated on the heap and that the pointer can't be used any more after the function returns (as it can be deleted at any moment).
QueueEvent() can be used for inter-thread communication from the worker threads to the main thread, it is safe in the sense that it uses locking internally and avoids the problem mentioned in AddPendingEvent() documentation by ensuring that the event object is not used by the calling thread any more. Care should still be taken to avoid that some fields of this object are used by it, notably any wxString members of the event object must not be shallow copies of another wxString object as this would result in them still using the same string buffer behind the scenes. For example:
Note that you can use wxThreadEvent instead of wxCommandEvent to avoid this problem:
Finally notice that this method automatically wakes up the event loop if it is currently idle by calling wxWakeUpIdle() so there is no need to do it manually when using it.
Reimplemented in wxWindow.
Remove a filter previously installed with AddFilter().
It's an error to remove a filter that hadn't been previously added or was already removed.
Processes an event by calling ProcessEvent() and handles any exceptions that occur in the process.
If an exception is thrown in event handler, wxApp::OnExceptionInMainLoop is called.
Searches the event table, executing an event handler function if an appropriate one is found.
Sets user-supplied client data.
Set the client data object.
Any previous object will be deleted.
Enables or disables the event handler.
Sets the pointer to the next handler.
Reimplemented in wxWindow.
Sets the pointer to the previous handler.
All remarks about SetNextHandler() apply to this function as well.
Reimplemented in wxWindow.
Method called by ProcessEvent() as last resort.
This method can be overridden to implement post-processing for the events which were not processed anywhere else.
The base class version handles forwarding the unprocessed events to wxApp at wxEvtHandler level and propagating them upwards the window child-parent chain at wxWindow level and so should usually be called when overriding this method:
Method called by ProcessEvent() before examining this object event tables.
This method can be overridden to hook into the event processing logic as early as possible. You should usually call the base class version when overriding this method, even if wxEvtHandler itself does nothing here, some derived classes do use this method, e.g. wxWindow implements support for wxValidator in it.
Example:
Unbinds the given function, functor or method dynamically from the event handler, using the specified parameters as search criteria and returning true if a matching function has been found and removed.
This method can only unbind functions, functors or methods which have been added using the Bind<>() method. There is no way to unbind functions bound using the (static) event tables.
See the Unbind<>(const EventTag&, Functor, int, int, wxObject*) overload for more info.
This overload unbinds the given method from the event.. | http://docs.wxwidgets.org/trunk/classwx_evt_handler.html | CC-MAIN-2014-42 | refinedweb | 2,045 | 62.27 |
Scroll down to the script below, click on any sentence (including terminal blocks!) to jump to that spot in the video!
If you liked what you've learned so far, dive in!
video, code and script downloads.
Ok team: we need a new ship class - a
BountyHunterShip. Start simple: in the
model directory, add a new class:
BountyHunterShip. Once again, PhpStorm already
added the correct namespace for us:
Like every other ship, extend
AbstractShip. Ah, but we do not need a
use statement
for this: that class lives in the same namespace as us.
Just like with an interface, when you extend an abstract class, you usually need to implement some methods. Go back to "Code"->"Generate"->"Implement Methods". Select the 3 that this class needs:
Great!
Now, bounty hunter ships are interesting for a few reasons. First, they're never broken:
those scrappy bounty hunters can always get the ship started. For
isFunctional(), return
true:
For
getType(), return
Bounty Hunter:
Simple. But the
jediFactor will vary ship-by-ship. Add a
JediFactor property
and return that from inside
getJediFactor():
At the bottom of the class add a
public function setJediFactor() so that we can
change this property:
$this->jediFactor = $jediFactor:
Cool!
To get one of these into our system, let's do something simple. Open
ShipLoader.
At the bottom of
getShips(), add a new ship to the collection:
$ships[] = new BountyHunterShip() called 'Slave I' - Boba Fett's famous ship:
Ok, head back and refresh! Yes! Slave I - Bounty Hunter, and it's not broken. That was easy.
So, what's the problem? Look at
BountyHunterShip and also look at
Ship: there's
some duplication. Both classes have a
jediFactor property, a
getJediFactor()
method that returns this, and a
setJediFactor that changes it.
Duplication is a bummer. How can we fix this? Well, we could use inheritance. But in this case, it's weird.
For example, we could make
BountyHunterShip extend
Ship, but then it would inherit
this extra stuff that we don't really want or need. We could make it work, but I
just don't like it.
Ok, what about making
Ship extend
BountyHunterShip? That just completely feels
wrong: philosophically, not all
Ships are
BountyHunterShips - it's just not the
right way to model these classes.
Are we stuck? What we want is a way to just share these 3 things: the
jediFactor
property,
getJediFactor() and
setJediFactor(). When you only need to share a
few things, the right answer might be a trait.
Let's see what this trait thing is. In the
Model directory, create a new PHP class
called
SettableJediFactorTrait. Now, change the
class keyword to
trait. Traits
look and feel exactly like a normal class:
In fact, open up
BountyHunterShip and move the property and first method into the
trait. Also grab
setJediFactor() and put that in the trait too:
The only difference between classes and traits is that traits can't be instantiated directly. Their purpose is for sharing code.
In
BountyHunterShip, we can effectively copy and paste the contents of that
trait into this class by going inside the class and adding
use SettableJediFactorTrait:
That
use statement has nothing to do with the namespace
use statements: it's
just a coincidence. As soon as we do this, when PHP runs, it will copy the contents
of the trait and pastes them into this class right before it executes our code. It's
as if all the code from the trait actually lives inside this class.
And now, we can do the same thing inside of
Ship: remove the
jediFactor property
and the two methods. At the top,
use SettableJediFactorTrait:
Give it a try! Refresh. No errors! In fact, nothing changes at all. This is called horizontal reuse: because you're not extending a parent class, you're just using methods and properties from other classes.
This is perfect for when you have a couple of classes that really don't have that much in common, but do have some shared functionality. Traits are also cool because you cannot extend multiple classes, but you can use multiple traits. | https://symfonycasts.com/screencast/oo-ep4/traits-reuse | CC-MAIN-2020-05 | refinedweb | 685 | 66.23 |
Quick Links
RSS 2.0 Feeds
Lottery News
Event Calendar
Latest Forum Topics
Web Site Change Log
RSS info, more feeds
Topic closed. 50 replies. Last post 6 years ago by chippie.
Diane (or anyway you can find it)
Aunt
Thanks
aunt 145 918 941 179 797 3968
Diane 377 271 037 407 9120
good luck
IN LOVING MEMORY OF SPOOKYSOOZY AUGUST 17, 1949- APRIL 28, 2011 R.I.P. Dear Friend
Thanks so much huney
I love when I ask for pacific numbers and get numbers I like in return (KINDA MOTIVATES ME)
(I do my best to pick out what is important vs. asking for a bunch of numbers)
189 combo, been playing that in NY
149 combo, might just repeat in CH
179 combo, man oh man when will that drop in CH
127 combo, a favorite for me this month for CH
037 combo, my age from last month
LETS RUMBLE
Lets go!!! Be blessed!
Looking for more replies, thanking you in advance
Also waiting on a friend of mine to look in her book and I will post what numbers she comes up with
Thanks so much
May you be blessed as well.
Off Topic:
I posted 2 numbers in the GA thread 313 113
I also see 235 249 345
Im now waiting on the 113 but still playing 133 combo in case it returns.
Good Luck
Thanks chippie......truly this is the way it's suppose to be.....looking out for one another. This is how you get your blessings. May you prosper. Take good care
Diane 494 006 283 338
Good Luck~!~!~!~!~!~!~!~!~!~!
Be who you are say what u mean and mean what u say!!!!!
Thanks Bengy242
I did see 066 n 006 lurking around
GA mid 9102 boxed....wtg chippie~~
aunt '16'
Diane766 - 1379
[13 - 61 - 63 - 70 -76- 79 ] combo
013-113-133-1313-0130-813-8130-8138-1333-1311 etc
061-861-6161-0610-8610-6661-1611-616 etc
063-863-0630-613-6163
079-879-179-779-799-7979-7779-7999 etc
070
077-777-7777-870 etc
Firststep is to determine what you want, then describe yourself as if you already have it.
0 = 8 = 00 [SpookySoozy 0 = 1 = 0]
Be$t of Luck from the $tate
Holy smokes, I didnt even check the 4 ball.
Now thats some luck, hope someone was watching
As promised my friend gave me
aunt 415 143
Diane 283 338
338 atlanta tonite congrats
Thanks hun, you and my friend must have the same book
338 Late. | https://www.lotterypost.com/thread/234293 | CC-MAIN-2017-17 | refinedweb | 426 | 84.61 |
Alright! I've figured out how to do this in an efficient way to where the clients can connect and begin transmission to the server program quickly. For all of those who want to try out making an MMORPG, this is actually an easy way to start. Before going through my tutorial, please learn the basics of JAVA or you might get lost (however, you can at least try it out anyway but realize you may become a little lost).
I bet you wanna make the kinds of MMO games she plays all the time right?
Or maybe you just wanna special server and client system for something in particular? Well in JAVA it's much easier than even I thought! Pull up a chair and fire up your programming environment~
There are two JAVA objects in the API that are interesting to us. The ServerSocket and the Socket object. One to accept socket connections from the client and the other to connect to the server. Does it sound too simple? It's is! In fact, you'll run into problems with synchronization way more often than connecting the clients. Let's connect a simple client to a simple server! Compile both programs and run them as two different instances of a program.
import java.net.ServerSocket; import java.net.Socket; import java.io.BufferedReader; import java.io.InputStreamReader; import java.io.IOException; public class ServerTest { private static ServerSocket serverSocket; private static Socket clientSocket; private static BufferedReader bufferedReader; private static String inputLine; public static void main(String[] args) { // Wait for client to connect on 63400 try { serverSocket = new ServerSocket(63400); clientSocket = serverSocket.accept(); // Create a reader bufferedReader = new BufferedReader(new InputStreamReader(clientSocket.getInputStream())); // Get the client message while((inputLine = bufferedReader.readLine()) != null) System.out.println(inputLine); } catch(IOException e) { System.out.println(e); } } }
and the client...
import java.net.Socket; import java.io.PrintWriter; public class ClientTest { private static Socket socket; private static PrintWriter printWriter; public static void main(String[] args) { try { socket = new Socket("localhost",63400); printWriter = new PrintWriter(socket.getOutputStream(),true); printWriter.println("Hello Socket"); printWriter.println("EYYYYYAAAAAAAA!!!!"); } catch(Exception e) { System.out.println(e); } } }
Run the server... It looks like it's frozen but it's waiting for a socket from the client to connect. Run the client and it will finish. Take a look at the server program's console... There's your messages followed by an exception generated from the socket from the client disconnecting.
getInputStream() and getOutputStream() will hold the execution there until the client has established an output stream where the server establishes an input stream. Any sort of input or output stream can be created (like an ObjectInputStream). So you can send and receive data in any way you wish doing this. Whatever way is most comfortable to you (or the way you think is easiest to learn) is fine for now.
The parts of interest are the following lines:
socket = new Socket("localhost",63400);
serverSocket = new ServerSocket(63400);
To connect, the target is the local host on the same machine. It could be an IP address for a server if you want to connect over the internet. If you're on a LAN then you can try this on two separate machines. Use the IP of the machine running the server instead of "localhost" to try it out. Also, depending on your network structure, you may be able to do this over the internet with some other user.
But...
How do you connect multiple clients? I mean it is a server, it's got to serve multiple clients! What good is an MMORPG or a chat room with just you in there? You might want to have a loop accepting socket connections and save them in a list to track all the clients... But you'll encounter a problem! For sending, receiving, accepting, and connecting things, they all WAIT until the operation is complete! If you've got someone with a slow connection it will sit there and wait for them to complete the transaction or fail!
Think for a moment. You've got multiple people who want service and some are slower than others, but you gotta satisfy everyone. That is where you multitask! You need parallel operations for ALL clients. And... Thread will do just that for you!
Here's a cutout from my own server program.
import java.net.ServerSocket; import java.net.Socket; import java.net.InetAddress; import java.net.UnknownHostException; import java.io.IOException; import java.util.ArrayList; /** * This handles all of the Umbra clients in the room. * It accepts new connections and adds them to the room. * It is a runnable thread and when the start method is called, it will accept clients. */ public class UmbraRoom extends Thread { private static final int UMBRA_PORT = 30480; private static final int ROOM_THROTTLE = 200; private ServerSocket serverSocket; private InetAddress hostAddress; private Socket socket; private ArrayList<UmbraUser> users = new ArrayList<UmbraUser>(); /** * Creates a new Umbra room for clients to connect to. */ public UmbraRoom() { // Attempt to get the host address try { hostAddress = InetAddress.getLocalHost(); } catch(UnknownHostException e) { System.out.println("Could not get the host address."); return; } // Announce the host address System.out.println("Server host address is: "+hostAddress); // Attempt to create server socket try { serverSocket = new ServerSocket(UMBRA_PORT,0,hostAddress); } catch(IOException e) { System.out.println("Could not open server socket."); return; } // Announce the socket creation System.out.println("Socket "+serverSocket+" created."); } /** * Starts the client accepting process. */ public void run() { // Announce the starting of the process System.out.println("Room has been started."); // Enter the main loop while(true) { // Remove all disconnected clients for(int i = 0;i < users.size();i++) { // Check connection, remove on dead if(!users.get(i).isConnected()) { System.out.println(users.get(i)+" removed due to lack of connection."); users.remove(i); } } // Get a client trying to connect try { socket = serverSocket.accept(); } catch(IOException e) { System.out.println("Could not get a client."); } // Client has connected System.out.println("Client "+socket+" has connected."); // Add user to list users.add(new UmbraUser(socket)); // Sleep try { Thread.sleep(ROOM_THROTTLE); } catch(InterruptedException e) { System.out.println("Room has been interrupted."); } } } }
import java.net.Socket; import java.io.IOException; import java.io.ObjectInputStream; /** * This object handles the execution for a single user. */ public class UmbraUser { private static final int USER_THROTTLE = 200; private Socket socket; private boolean connected; private Inport inport; /** * Handles all incoming data from this user. */ private class Inport extends Thread { private ObjectInputStream in; public void run() { // Open the InputStream try { in = new ObjectInputStream(socket.getInputStream()); } catch(IOException e) { System.out.println("Could not get input stream from "+toString()); return; } // Announce System.out.println(socket+" has connected input."); // Enter process loop while(true) { // Sleep try { Thread.sleep(USER_THROTTLE); } catch(Exception e) { System.out.println(toString()+" has input interrupted."); } } } } /** * Creates a new Umbra Client User with the socket from the newly connected client. * * @param newSocket The socket from the connected client. */ public UmbraUser(Socket newSocket) { // Set properties socket = newSocket; connected = true; // Get input inport = new Inport(); inport.start(); } /** * Gets the connection status of this user. * * @return If this user is still connected. */ public boolean isConnected() { return connected; } /** * Purges this user from connection. */ public void purge() { // Close everything try { connected = false; socket.close(); } catch(IOException e) { System.out.println("Could not purge "+socket+"."); } } /** * Returns the String representation of this user. * * @return A string representation. */ public String toString() { return new String(socket.toString()); } }
This will accept Socket connections from a client. It also tries to establish an InputStream with the client. Look closely, notice how I start some threads for every client. The rest of the clients won't be effected when a different client is slow, it won't wait for it. Study carefully on how dead clients are closed and removed and how the threads quit on failure. Now, I purposefully left the threads to run even when the client is removed, but you can see where the connection can fail and where you should make the thread quit (using return is easy).
Lots of exceptions can occur for various reasons, it's important to do what is needed in their case. Try writing a client to connect to this server and connect lots of them. Then make the server remove them when they disconnect.
From here on out, I shouldn't tell you how to establish a stream pair going the other way (from server to client) or what the server and client should do to messages being received. This is because... It varies from what you need! Right in the thread process where you see the InputStream, you can read objects and act upon them, or with the OutputStream you can send objects. It's all up to you now, just be sure to use Threads or your program will sit there and wait, which isn't what you want!
OK, OK, you're losing your attention because you really want to make an MMORPG right?
You can't hide it from me! I'll make a Tutorial for that later when I have time so stick around. I'll get a little deeper in making a networked game with a server (which doesn't really need a and a client (which uses a video panel and everything).
While you wait, I suggest you practice just making the server echo Strings from the client and to periodically send Strings back to the client. If you can do this for multiple clients without errors or having the client or server freeze up and wait, then you're all set for my next tutorial
Some images from Lucky Star (Kyoto Animation) | http://www.dreamincode.net/forums/topic/38672-creating-a-server-to-serve-clients/page__p__285040 | CC-MAIN-2013-20 | refinedweb | 1,601 | 60.11 |
Provided by: alliance_5.0-20110203-4_amd64
NAME
namealloc - hash table for strings
SYNOPSYS
#include "mut.h" char ∗namealloc(inputname) char ∗inputname;
PARAMETER
inputname Pointer to a string of characters
DESCRIPTION
The namealloc function creates a dictionnary of names in mbk. It warranties equality on characters string if the pointers to these strings are equal, at strcmp(3) meaning. This means also that there is a single memory address for a given string. The case of the letters do not matter. All names are changed to lower case before beeing introduced in the symbol table. This is needed because most of the file format do not check case. namealloc is used by all mbk utility function using names, so its use should be needed only when directly filling or modifing the structure, or when having to compare an external string to mbk internal ones. This should speed up string comparisons. One shall never modify the contains of a string pointed to by a result of namealloc, since all the field that points to this name would have there values modified, and that there is no chance that the new hash code will be the same as the old one, so pointer comparison would be meaningless. All string used by namealloc are constants string, and therefore must be left alone.
RETURN VALUE
namealloc returns a string pointer. If the inputname is already in the hash table, then its internal pointer is returned, else a new entry is created, and then the new pointer returned.
EXAMPLE
#include "mut.h" #include "mlo.h" lofig_list ∗find_fig(name) char ∗name; { lofig_list ∗p; name = namealloc(name); for (p = HEAD_LOFIG; p; p = p->NEXT) if (p->NAME == name) /∗ pointer equality ∗/ return p; return NULL; }
DIAGNOSTICS
namealloc can be used only after a call to mbkenv(3).
SEE ALSO
mbk(1). | http://manpages.ubuntu.com/manpages/precise/man3/namealloc.3.html | CC-MAIN-2019-43 | refinedweb | 302 | 63.7 |
Ok, well I have an assignment, and I'll be needing some help.
First off, I need help on counting the number of from a text file.
I'm not sure if I should do a loop eg :
for (j=0, j<26; j++)
or there may be an easier way. I have looked around Google, and all I don't really get anything that speaks out clearly to me. However I have read mentions to the use of getline.
I know I need
#include <fstream> ifstream inData; inData.open(music.txt);
at the beginning, and considering this part of my program will not be in main (it'll be as a void), I'll need help with this type of thing too.
The little tidbit of the loop is to also deal with the alphabetic characters, however I'm doubting it deals with the numerics as well, if I stick with a loop, do I have to do something else apart from j<36?
This is just the first part of what I need done, I'll ask other questions later on. | https://www.daniweb.com/programming/software-development/threads/193510/some-help-counting-lines-from-an-input-file | CC-MAIN-2017-26 | refinedweb | 183 | 77.67 |
?.
- AWS Elastic Beanstalk Flask Application: Toolkit Can't Find Resource That Appears to be Present on EC2 Instance
I'm trying to deploy a flask/python app using AWS Elastic Beanstalk and getting a '500 internal server' error resulting from a missing resource. The app works locally but one of the backend components can't find a resource it needs when running on the EC2 instance that Elastic Beanstalk is managing.
I am using the Natural Language Toolkit which I include in my requirements.txt file to be downloaded as a pip package. The nltk package install seems to be have been successful as I'm not getting an error on the line:
import nltk
The line I am getting the error on in my application code is:
sent_detector = nltk.data.load('tokenizers/punkt/english.pickle')
The error I am getting in my log ending is:
[Wed Feb 14 22:17:10.731016 2018] [:error] [pid 13894] [remote 172.31.0.22:252] Resource \x1b[93mpunkt\x1b[0m not found. [Wed Feb 14 22:17:10.731018 2018] [:error] [pid 13894] [remote 172.31.0.22:252] Please use the NLTK Downloader to obtain the resource: [Wed Feb 14 22:17:10.731020 2018] [:error] [pid 13894] [remote 172.31.0.22:252] [Wed Feb 14 22:17:10.731023 2018] [:error] [pid 13894] [remote 172.31.0.22:252] \x1b[31m>>> import nltk [Wed Feb 14 22:17:10.731025 2018] [:error] [pid 13894] [remote 172.31.0.22:252] >>> nltk.download('punkt') [Wed Feb 14 22:17:10.731027 2018] [:error] [pid 13894] [remote 172.31.0.22:252] \x1b[0m [Wed Feb 14 22:17:10.731029 2018] [:error] [pid 13894] [remote 172.31.0.22:252] Searched in: [Wed Feb 14 22:17:10.731031 2018] [:error] [pid 13894] [remote 172.31.0.22:252] - '/home/wsgi/nltk_data' [Wed Feb 14 22:17:10.731034 2018] [:error] [pid 13894] [remote 172.31.0.22:252] - '/usr/share/nltk_data' [Wed Feb 14 22:17:10.731036 2018] [:error] [pid 13894] [remote 172.31.0.22:252] - '/usr/local/share/nltk_data' [Wed Feb 14 22:17:10.731038 2018] [:error] [pid 13894] [remote 172.31.0.22:252] - '/usr/lib/nltk_data' [Wed Feb 14 22:17:10.731040 2018] [:error] [pid 13894] [remote 172.31.0.22:252] - '/usr/local/lib/nltk_data' [Wed Feb 14 22:17:10.731043 2018] [:error] [pid 13894] [remote 172.31.0.22:252] - '/opt/python/run/venv/nltk_data' [Wed Feb 14 22:17:10.731045 2018] [:error] [pid 13894] [remote 172.31.0.22:252] - '/opt/python/run/venv/lib/nltk_data' [Wed Feb 14 22:17:10.731047 2018] [:error] [pid 13894] [remote 172.31.0.22:252] - '' [Wed Feb 14 22:17:10.731049 2018] [:error] [pid 13894] [remote 172.31.0.22:252]
When I added the line
nltk.download('punkt')
to my application in order to ensure that the resource I need would be downloaded, I get this message in the error log:
[Wed Feb 14 22:30:07.861273 2018] [:error] [pid 28765] [nltk_data] Downloading package punkt to /home/wsgi/nltk_data...
which is then followed by a series of errors that comes down to:
[Wed Feb 14 22:30:07.864521 2018] [:error] [pid 28765] [remote 172.31.0.22:55448] FileNotFoundError: [Errno 2] No such file or directory: '/home/wsgi/nltk_data'
So I SSH-d into my EC2 instance, entered the virtual environment that it seems like my app is running on from the opt/python/run directory using
$source venv/bin/activate
and opened up the python interpreter. When I ran
>>import nltk >>nltk.download('punkt')
I got back
[nltk_data] Downloading package punkt to /home/ec2-user/nltk_data... [nltk_data] Package punkt is already up-to-date! True
So I also tried
>>> nltk.data.load('tokenizers/punkt/english.pickle')
and got back:
<nltk.tokenize.punkt.PunktSentenceTokenizer object at 0x7fb8afd34080>
So, it seems like the nltk package on my EC2 instance knows where the nltk_data resource is as long as it's not being asked by my Flask application. I also tried entering
>>nltk.data.path.append('home/ec2-user/nltk_data')
and still got the same error as I posted above with no indication that my attempts append the list of paths to check for nltk_data had gone through.
I am not sure what I need to get nltk to locate where the nltk_data resource it is trying to find is located.
I have seen .ebextensions mentioned in reference to dependency issues and tried to read the AWS page about it, but am not sure exactly how it fits into the issue occurring with my application. Probably a learning-curve web dev literacy issue on my end.
Thanks for any clarity that can be provided regarding this situation!
- Flask-SQLAlchemy is not inserting data into database table
I am using Flask-SQLAlchemy and I'm trying to insert values taken from a form. I'm not getting an error, but for some reason, it's not inserting into the database and I don't know why.
In models.py:
class Appliance(db.Model): """""" __tablename__ = "appliances" id = db.Column(db.Integer, primary_key=True) name = db.Column(db.String) state = db.Column(db.String, default='Off') room_id = db.Column(db.Integer, db.ForeignKey("rooms.id")) room = db.relationship("Room", backref=db.backref( "appliances", order_by=id), lazy=True) class Door(db.Model): """""" __tablename__ = "doors" id = db.Column(db.Integer, primary_key=True) state = db.Column(db.String, default='Closed') room_id = db.Column(db.Integer, db.ForeignKey("rooms.id")) room = db.relationship("Room", backref=db.backref( "doors", order_by=id), lazy=True)
In forms.py:
class ApplianceForm(Form): types = [('Appliance', 'Appliance'), ('Door', 'Door') ] names = [('Light', 'Light'), ('Television', 'Television'), ('Air Conditioner', 'Air Conditioner'), ('Oven', 'Oven'), ('Curtain', 'Curtain') ] name = SelectField('Name', choices=names, validators=[validators.required()]) room = SelectField('Room', validators=[validators.required()]) type = RadioField('Type', choices=types, validators=[validators.required()])
In views.py:
@app.route('/new_appliance', methods=['GET', 'POST']) def new_appliance(): form = ApplianceForm(request.form) form.room.choices = [(r.id, r.name) for r in db_session.query(Room)] print form.errors if request.method == 'POST': name=request.form['name'] room=request.form['room'] type=request.form['type'] print name, " ", room, " ", type if request.method == 'POST' and form.validate(): if form.type.data == 'Appliance': appliance = Appliance() appliance.room = room appliance.name = form.name.data db_session.add(appliance) else: door = Door() door.room = room db_session.add(door) db_session.commit() flash('Appliance added!') return render_template('new_appliance.html', form=form)
I think that the problem is in the state column. But even if I write it explicitly in the
new_appliance()function, like this:
application.state = 'Off'
It still doesn't change anything. What might the problem be?
- ApScheduler does not share global variables shared across workers of the Flask app when running on Gunicorn
I'm relativelly new to Python whereas should have decent background in Java.
Struggling with the following use case:
We've got a microservice running on Gunicorn app server. Service is intended to be called by multiple clients to performs certain manipulations. In order to serve individual client's requests service need to load a heavy model (that takes ~ 10 seconds to load) and then it can re-use the same model to perform all subsequent requests. Thus in order to make all subsequent requests faster we decided to keep model loaded in memory for some time and age upon inactivity. As stated earlier, I'm relativelly new to python. And could not come up with a better way than storing model in a dictionary (Java's Map alternative) {clientId:model}. In order to age models was planning to use a scheduled task that would be able to access the dictionary and remove models that are not used any more.
Run into ApScheduler and in a short time was able to integrate it in my Flask application. Created an integration test that run perfectly fine(Test uses app.run() directly). However when I run the application on Gunicorn seems like the apScheduler task always gets an empty version of the shared dictionary resource.
Was wondering if anyone else run into this issue before... and how did you solve it if you did.
Created a quick code snipped that to highlight the problematic behavior:.
from flask import Flask from apscheduler.schedulers.background import BackgroundScheduler # GLOBAL STORAGE app = Flask(__name__); # imaginary heavy resource __global_count__ = 0; # FLASK SERVICES @app.route("/addOne") def addOne(): global __global_count__ __global_count__ = __global_count__ + 1; return str(__global_count__); @app.route("/count") def count(): global __global_count__ return str(__global_count__); # Scheduled Task def reset(): with app.app_context(): global __global_count__ # Always prints 0 here print("GlobalCount: %d"% __global_count__); __global_count__ = 0; sched = BackgroundScheduler(daemon=True); sched.add_job(reset,'interval',seconds=30); sched.start();
Would greatly appreciate any help.
- Deploying Django Project with Supervisor and Gunicorn getting FATAL Exited too quickly (process log may have details) error
Im trying to launch my django project using this tutorial. I am currently setting up gunicorn and supervisor. These are the configs...
Here is my Gunicorn config:
#!/bin/bash NAME="smart_suvey" DIR=/home/smartsurvey/mysite2 USER=smartsurvey GROUP=smartsurvey WORKERS=3 BIND=unix:/home/smartsurvey/run/gunicorn.sock DJANGO_SETTINGS_MODULE=myproject.settings DJANGO_WSGI_MODULE=myproject.wsgi LOG_LEVEL=error cd $DIR source ../venv/bin/activate export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE export PYTHONPATH=$DIR:$PYTHONPATH exec ../venv/bin/gunicorn ${DJANGO_WSGI_MODULE}:application \ --name $NAME \ --workers $WORKERS \ --user=$USER \ --group=$GROUP \ --bind=$BIND \ --log-level=$LOG_LEVEL \ --log-file=-
This is my supervisor config: [program:smartsurvey]
command=/home/smartsurvey/gunicorn_start user=smartsurvey autostart=true autorestart=true redirect_stderr=true stdout_logfile=/home/smartsurvey/logs/gunicorn.log
When Ive saved my supervisor, i run
sudo supervisorctl rereadfollowed by
sudo supervisorctl update. These throw no errors.
I then run
sudo supervisorctl status smartsurvey, which gives the error
smartsurvey FATAL Exited too quickly (process log may have details)
This is my first time putting my project on the internet so help would be appreciated!
Thanks
- git server and django program inside nginx
I want to run a git server inside my django program. my nginx config is like this:
server{ listen 192.168.1.250:80; root /var/www/html/git; location /server\.git { client_max_body_size 0; # Git pushes can be massive, just to make sure nginx doesn't suddenly cut the connection add this. auth_basic "Git Login"; # Whatever text will do. auth_basic_user_file "/var/www/html/html } location / { include proxy_params; proxy_pass; } }
my django program run correctly, but for git server, I cannot open that.
but when I change the location of django program, both of them work correctly.
location /user { include proxy_params; proxy_pass; }
I want to use just "/" and not to "/" + string. what should I do??
- gunicorn + django + nginx -- recv() not ready (11: Resource temporarily unavailable)
I am getting this issue. I am trying to setup a server and cannot get it running. I am using django, gunicorn and nginx. here are the logs
nginx log
2018/02/09 22:22:32 [debug] 1421#1421: *9 http write filter: l:1 f:0 s:765 2018/02/09 22:22:32 [debug] 1421#1421: *9 http write filter limit 0 2018/02/09 22:22:32 [debug] 1421#1421: *9 writev: 765 of 765 2018/02/09 22:22:32 [debug] 1421#1421: *9 http write filter 0000000000000000 2018/02/09 22:22:32 [debug] 1421#1421: *9 http copy filter: 0 "/?" 2018/02/09 22:22:32 [debug] 1421#1421: *9 http finalize request: 0, "/?" a:1, c:1 2018/02/09 22:22:32 [debug] 1421#1421: *9 set http keepalive handler 2018/02/09 22:22:32 [debug] 1421#1421: *9 http close request 2018/02/09 22:22:32 [debug] 1421#1421: *9 http log handler 2018/02/09 22:22:32 [debug] 1421#1421: *9 free: 000055D4A01ACBE0 2018/02/09 22:22:32 [debug] 1421#1421: *9 free: 000055D4A01C6FB0, unused: 0 2018/02/09 22:22:32 [debug] 1421#1421: *9 free: 000055D4A01B9F80, unused: 214 2018/02/09 22:22:32 [debug] 1421#1421: *9 free: 000055D4A01C9460 2018/02/09 22:22:32 [debug] 1421#1421: *9 hc free: 0000000000000000 0 2018/02/09 22:22:32 [debug] 1421#1421: *9 hc busy: 0000000000000000 0 2018/02/09 22:22:32 [debug] 1421#1421: *9 reusable connection: 1 2018/02/09 22:22:32 [debug] 1421#1421: *9 event timer add: 3: 70000:1518215022208 2018/02/09 22:22:32 [debug] 1421#1421: *9 post event 000055D4A01D8BD0 2018/02/09 22:22:32 [debug] 1421#1421: *9 delete posted event 000055D4A01D8BD0 2018/02/09 22:22:32 [debug] 1421#1421: *9 http keepalive handler 2018/02/09 22:22:32 [debug] 1421#1421: *9 malloc: 000055D4A01C9460:1024 2018/02/09 22:22:32 [debug] 1421#1421: *9 recv: fd:3 -1 of 1024 2018/02/09 22:22:32 [debug] 1421#1421: *9 recv() not ready (11: Resource temporarily unavailable) 2018/02/09 22:22:32 [debug] 1421#1421: *9 free: 000055D4A01C9460
gunicorn log:
[2018-02-09 22:21:35 +0000] [2514] [INFO] Worker exiting (pid: 2514) [2018-02-09 22:21:35 +0000] [2510] [INFO] Worker exiting (pid: 2510) [2018-02-09 22:21:35 +0000] [2523] [INFO] Worker exiting (pid: 2523) [2018-02-09 22:21:35 +0000] [2501] [INFO] Shutting down: Master [2018-02-09 22:21:36 +0000] [2556] [INFO] Starting gunicorn 19.7.1 [2018-02-09 22:21:36 +0000] [2556] [INFO] Listening at: unix:/var/www/myapp/application/live.sock (2556) [2018-02-09 22:21:36 +0000] [2556] [INFO] Using worker: sync [2018-02-09 22:21:36 +0000] [2563] [INFO] Booting worker with pid: 2563 [2018-02-09 22:22:31 +0000] [2556] [CRITICAL] WORKER TIMEOUT (pid:2563) [2018-02-09 22:22:32 +0000] [2598] [INFO] Booting worker with pid: 2598
what can it be? I am stuck for hours now.
here is my gunicorn service:
[Unit] Description=gunicorn daemon After=network.target [Service] User=www-data Group=www-data WorkingDirectory={{ app_dir }} ExecStart={{ virtualenv_dir }}/bin/gunicorn --workers 1 --bind unix:{{ app_dir }}/live.sock {{ wsgi_module }}:application --error-logfile /var/log/gunicorn.log [Install] WantedBy=multi-user.target
- Python thread worker - append main thread list
Hello is there any way to append main_thread list from another threads?
I need to thread infinite loop.
Something like:
Class Main(object): list = [] def __init__(self): Thread(target=self.thread, args=()).start() def thread(self): while True: self.list.append("test")
- How to order results from workers as if there are no workers used?
Suppose that I have the following code to read lines and multiple each line by 2 and print each line out one by one.
I'd like to use N workers. Each worker takes M lines each time and processes them. More importantly, I'd like the output to be printed in the same order as the input. But the example here does not guarantee the output is printed in the same order as the input.
The following URL also shows some examples. But I don't think they fit my requirement. The problem is that the input can be arbitrarily long. There is no way to hold everything in memory before they are printed. There must be a way to get some output from the workers can determine if the output of a worker is ready to be printed and then it is print. It sounds like there should be a master goroutine to do this. But I am not sure how to implement it most efficiently, as this master gorountine can easily be a bottleneck when N is big.
How to collect values from N goroutines executed in a specific order?
Could anybody show an example program that results from the workers in order and prints the results as early as they can be printed?
$ cat main.go #!/usr/bin/env gorun // vim: set noexpandtab tabstop=2: package main import ( "bufio" "fmt" "strconv" "io" "os" "log" ) func main() { stdin := bufio.NewReader(os.Stdin) for { line, err := stdin.ReadString('\n') if err == io.EOF { if len(line) != 0 { i, _ := strconv.Atoi(line) fmt.Println(i*2) } break } else if err != nil { log.Fatal(err) } i, _ := strconv.Atoi(line[:(len(line)-1)]) fmt.Println(i*2) } }
- How to design a NodeJs worker to handle concurrent long running jobs
I'm working on a small side project and would like to grow it out, but I'm not too sure how. My question is, how should I design my NodeJs worker application to be able to execute multiple long running jobs at the same time? (i.e. should I be using multiprocessing libraries, a load-balancer, etc)
My current situation is that I have a NodeJs app running purely to serve web requests and put jobs on a queue, while another NodeJs app reading off that queue carries out those jobs (on a heroku worker dyno). Each job may take anywhere from 1 hour to 1 week of purely writing to a database. Due to the nature of the job, and it requiring an npm package specifically, I feel like I should be using Node, but at the same time I'm not sure it's the best option when considering I would like to scale it so that hundreds of jobs can be executed at the same time.
Any advice/suggestions as to how I should architect this design would be appreciated. Thank you. | http://codegur.com/46685820/how-to-run-initialization-for-each-gunicorn-worker | CC-MAIN-2018-09 | refinedweb | 2,890 | 58.58 |
A brief introduction to bundling your Service Worker scripts.
Photo by Joyce Romero / Unsplash
// The simplest Service Worker: A passthrough script addEventListener('fetch', event => { event.respondWith(fetch(event.request)) })
The code above is simple and sweet: when a request comes into one of Cloudflare’s data centers, passthrough to the origin server. There is absolutely no need for us to introduce any complex tooling or dependencies. Nevertheless, introduce we will! The problem is, once your script grows even just a little bit, you’ll be tempted to use JavaScript’s fancy new module system. However, in doing so, you’ll have a little bit of trouble uploading your script via our API (we only accept a single JS file).
Throughout this post, we’ll use contrived examples, shaky metaphors, and questionably accurate weather predictions to explain how to bundle your Service Worker with Webpack.
Webpack
Let’s just say Webpack is a module bundler. That is, if you have code in multiple files, and you tie them together like this:
app.js
// Import the CoolSocks class from dresser.js import { CoolSocks } from './dresser' import { FancyShoes } from './closet'
Then you can tell webpack to follow all of those import statements to produce a single file. This is useful because Service Workers running on Cloudflare need to be a single file as well.
Show me the code
Remember when I said something about predicting weather? Let’s build a worker with TypeScript that responds with the current weather.
Make sure to have NodeJS installed.
# Make a new project directory mkdir weather-worker cd weather-worker mkdir src dist # Initialize project and install dependencies npm init npm install --save-dev \ awesome-typescript-loader \ typescript \ webpack \ webpack-cli \ workers-preview touch src/index.ts src/fetch-weather.ts webpack.config.js tsconfig.json
You should now have a file in your project called
package.json. Add the following code to that file:
"scripts": { "build": "webpack", "build:watch": "webpack --watch", "preview": "workers-preview < dist/bundle.js" }
Now edit the following files to match what is shown:
tsconfig.json
{ "compilerOptions": { "module": "commonjs", "target": "esnext", "lib": ["es2015", "webworker"], "jsx": "react", "noImplicitAny": true, "preserveConstEnums": true, "outDir": "./dist", "moduleResolution": "node" }, "include": ["src/*.ts", "src/**/*.ts", "src/*.tsx", "src/**/*.tsx"] }
webpack.config.js
const path = require('path') module.exports = { entry: { bundle: path.join(__dirname, './src/index.ts'), }, output: { filename: 'bundle.js', path: path.join(__dirname, 'dist'), }, mode: process.env.NODE_ENV || 'development', watchOptions: { ignored: /node_modules|dist|\.js/g, }, devtool: 'cheap-module-eval-source-map', resolve: { extensions: ['.ts', '.tsx', '.js', '.json'], plugins: [], }, module: { rules: [ { test: /\.tsx?$/, loader: 'awesome-typescript-loader', }, ], }, }
For newcomers, this file will seem incredibly cryptic. All I can say is just to accept it as magic for now. You’ll eventually understand what’s going on.
A note about
devtool: 'cheap-module-eval-source-map'. Specifying this type of sourcemap is fast, lightweight, and results in stacktraces much more representative of your source code. They’re not exact (yet!), but we’re getting there.
Cloudflare fiddle devtools uses source maps to point you to the correct source file. Click through to see the problematic line.
src/index.ts
import { fetchWeather } from './fetch-weather' addEventListener('fetch', (event: FetchEvent) => { event.respondWith(handleRequest(event.request)) }) async function handleRequest(request: Request) { const weather = await fetchWeather('austin') const body = ` ${weather.location.city}, ${weather.location.region}<br> ${weather.item.condition.temp} ${weather.item.condition.text} `.trim() return new Response(body, { headers: { 'Content-Type': 'text/html' }, }) }
src/fetch-weather.ts
/** * Fetch the current weather conditions and forecast for a particular location * @param location location string to fetch * @param unit temperature units (c or f) */ export async function fetchWeather(location: string, unit = 'f') { const url = ` * from weather.forecast where u='${unit}' AND woeid in ( select woeid from geo.places(1) where text="${location}" )&format=json` .split('\n') .join(' ') // yahoo's api doesn't like spaces unless they're encoded .replace(/\s/g, '%20') const res = await fetch(url) if (res.status >= 400) { throw new Error('Bad response from server') } const result = await res.json() return result.query.results && result.query.results.channel }
Now simply run:
npm run build && npm run preview
This ought to build your script and open a page very similar to:
This is great, but instead of returning the weather for every single resource request, maybe we should only return the weather on pathnames that match a particular pattern. Something like:
GET /weather/:city GET /weather/austin GET /weather/toronto
In that pattern,
city is a variable. Anything that starts with
/weather/ will match, and everything after will be our city. This shouldn’t match a path like
/weather/austin/tatious. Luckily there are off-the-shelf solutions on npm to handle exactly this sort of logic.
Webpack also understands how to import npm modules into your bundle. To illustrate this, we’re going to use the fantastic path-to-regexp module.
Install and save the module:
npm install -S path-to-regexp
The path-to-regexp module converts the url path pattern
/weather/:city to a regular expression. Using that regular expression, we can extract the variable
city out of a pathname string. For instance, in the string ‘/weather/toronto’, the city variable is ‘toronto’. However, for the string ‘/users/123’, there is no match at all.
Let’s modify our
src/index.ts file to include this new routing logic.
src/index.ts
import * as pathToRegExp from 'path-to-regexp' import { fetchWeather } from './fetch-weather' type TWeatherRequestParams = { city: string } const weatherPath = '/weather/:city' addEventListener('fetch', (event: FetchEvent) => { // Create a regular expression based on the pathname of the request const weatherPathKeys: pathToRegExp.Key[] = [] const weatherRegex = pathToRegExp(weatherPath, weatherPathKeys) const url = new URL(event.request.url) const result = weatherRegex.exec(url.pathname) // No result, return early and passthrough if (!Array.isArray(result)) return // Build the request parameters object const params = weatherPathKeys.reduce( (params, key, i) => { params[key.name as keyof TWeatherRequestParams] = result[i + 1] return params }, {} as TWeatherRequestParams, ) event.respondWith(handleWeatherRequest(params)) }) async function handleWeatherRequest(params: TWeatherRequestParams) { const weather = await fetchWeather(params.city) const body = ` ${weather.location.city}, ${weather.location.region}<br> ${weather.item.condition.temp} ${weather.item.condition.text} `.trim() return new Response(body, { headers: { 'Content-Type': 'text/html' }, }) }
Notice that after installing the module, all we have to do is import by its name on npm. This is because webpack knows to look inside of your node_modules directory to resolve import statement paths.
Run:
npm run build && npm run preview -- \ --preview-url
You should see the weather for Austin, TX displayed. Congrats!
Conclusion.
You can view the full source of our weather script here: github.com/jrf0110/weather-workers
If you have a worker you'd like to share, or want to check out workers from other Cloudflare users, visit the “Recipe Exchange” in the Workers section of the Cloudflare Community Forum. | https://blog.cloudflare.com/using-webpack-to-bundle-workers/ | CC-MAIN-2018-30 | refinedweb | 1,130 | 51.04 |
Details
- Type:
Improvement
- Status: Open
- Priority:
Major
- Resolution: Unresolved
- Affects Version/s: 0.6.0
- Fix Version/s: None
- Component/s: None
- Labels:None
Description
The Zebra storage layer needs to use distributed cache to reduce name node load during job runs.
To to this, Zebra needs to set up distributed cache related configuration information in TableLoader (which extends Pig's LoadFunc) .
It is doing this within getSchema(conf). The problem is that the conf object here is not the one that is being serialized to map/reduce backend. As such, the distributed cache is not set up properly.
To work over this problem, we need Pig in its LoadFunc to ensure a way that we can use to set up distributed cache information in a conf object, and this conf object is the one used by map/reduce backend.
Issue Links
Activity
- All
- Work Log
- History
- Activity
- Transitions
My worry in doing these kinds of job related updates in the Job in getSchema() is that currently getSchema has been designed to be a pure getter without any indirect "set" side effects - this is noted in the javadoc:
/** * Get a schema for the data to be loaded. * @param location Location as returned by * {@link LoadFunc#relativeToAbsolutePath(String, org.apache.hadoop.fs.Path)} * @param job The {@link Job} object - this should be used only to obtain * cluster properties through {@link Job#getConfiguration()} and not to set/query * any runtime job information. ...
We should be careful in opening this up to allow set capability - something to consider before designing a fix for this issue.
It's ok for us not to use getSchema() for this purpose since it's a pure getter method.
What we need is simply a setter method in LoadFunc through which we can set up distributed cache. Pig needs to ensure that this information is indeed in the job configuration variable that's being passed to hadoop backend.
Also, this setter method should be only invoked at Pig's frondend. In the case of one m/r job containing multiple LoadFunc instances, Pig may need to combine distributed cache configuration information from all instances.
Also, we note that using the UDFContext to convey information from frontend to backend is not working for this. We need the job configuration variable already contain all the distributed cache related information when it's being passed to the hadoop backend.
We may need to add a new method - "addToDistributedCache()" on LoadFunc - notice this is an adder not a setter since there is only one key for distributed cache in hadoop's Job (Configuration in the Job). So implementations of loadfunc will have to use the DistributedCache.add*() methods.
Why not just allow a loader (or storer) the ability to set things on a conf object directly? DistributedCache won't be the only thing that I'll want access to. I don't think Pig will want to add new functions every time a Hadoop feature comes along that one wants access to.
Right now, users can set anything they want with properties on the script command line, but have zero ability to set in compiled code! This seems backwards to me. A custom LoadFunc, or StoreFunc should just either have access to the configuration that gets serialized for the job, or, have the ability to return a Configuration object with settings it wishes Pig will pass on (Pig can then ignore or overwrite things that a user should never touch, similar to what happens from command line params).
Perhaps either a:
void configure(Configuration config);
method or
Configuration getCustomConfiguration();
method would be great. The name for the loader and storer may have to differ as to not collide for classes that implement both, and they should not share the method since the disambiguation would be a problem (a load and store may not both want distributed cache, for example).
The problem with allowing load and store functions access to the config file is that the config file they see is not the config file that goes to Hadoop. This is not all Pig's fault (see comments above on this). The other problem is that multiple instances of the same load and store function may be operating in a given script, so there are namespace issues to resolve.
The proposal for Hadoop 0.22 is that rather than providing access to the config file at all Hadoop will serialize objects such as InputFormat and OutputFormat and pass those to the backend. It will make sense for Pig to follow suit and serialize all UDFs on the front end. This will remove the need for the UDFContext black magic that we do at the moment and should allow all UDFs to easily transfer information from front end to backend.
So, hopefully this can get resolved when Pig migrates to Hadoop 0.22, whenever that is.
This may also relate to
"Hadoop should serialize the Configration after the call to getSplits() to the backend such that any changes to the Configuration in getSplits() is serialized to the backend"
But a cleaner solution from Pig's side is still worthwhile - so we can just rely on Pig's front end only calls, like getSchema() to do the setup job. | https://issues.apache.org/jira/browse/PIG-1337?focusedCommentId=12875858&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel | CC-MAIN-2015-48 | refinedweb | 874 | 57.5 |
There is one exception to the import rule. All classes in the
java.lang package are imported by default. Thus you do
not need to
import java.lang.*; to use them without
fully qualified names.
Consider the
System.out.println() method we've been
using since the first day of class.
System is really the
java.lang.System
class. This class has a
public static field called
out which is an instance of the
java.io.PrintStream class. So when you write
System.out.println(), you're really calling the
println() method of the
out field of the
java.lang.System class. | http://www.cafeaulait.org/course/week4/31.html | CC-MAIN-2016-22 | refinedweb | 102 | 80.28 |
Node:Common library functions, Next:Mathematical functions, Previous:Kinds of library, Up:Libraries
Common library functions
Checking character types. Handling strings. Doing maths.
The libraries in GCC contain a repertoire of standard functions and macros. There are many different kinds of function and macro in the libraries. Here are a few of the different kinds available, with the header files you can use to access them:
- Character handling:
ctype.h
- Mathematics:
math.h
- String manipulation:
string.h
You may find it useful to read the header files yourself. They
are usually found in the directories
/usr/include and its
subdirectories on GNU/Linux systems. The three header files listed
above can be found in
/usr/include; there is a second version of
ctype.h in
/usr/include/linux.1
Footnotes
The version of
ctype.hin the
/usr/includedirectory proper is the one that comes with
glibc; the one in
/usr/include/linuxis a special version associated with the Linux kernel. You can specify the one you want with a full pathname inside double quotes (for example,
#include "/usr/include/linux/ctype.h"), or you can use the
-Ioption of
gccto force GCC to search a set of directories in a specific order. See Building a library, for more information.) | http://crasseux.com/books/ctutorial/Common-library-functions.html | CC-MAIN-2017-43 | refinedweb | 209 | 58.79 |
What does that mean: type safety by design. Type safety by design just means, that you always initialise your variables, use std::variant instead of a union, or prefer variadic templates and fold expressions to va_arg's.
As in my first post to type safety C++ Core Guidelines: Type Safety, I will name the four missing types of type safety and add additional information, if necessary. Here we are:
The rules on is initialized to 0. This initialisation will not hold for n2, because it is a local variable and is, therefore, not initialised. But if you use a user-defined type such as std::string, T1, or T2 in a global or in local scope they are initialised.
If that is too difficult for you, I have a simple fix. Use auto. The c findompiler can not guess from an expression auto a of what type a has to be. Now, you can not forget to initialise the variable. You are forced to initialise your variables.
struct T1 {};
class T2{
public:
T2() {}
};
auto n = 0;
int main(){
auto n2 = 0;
auto s = ""s;
auto t1 = T1();
auto t2 = T2();
}
In this case, I can make it short. I have already written in my post C++ Core Guidelines: More Non-Rules and Myths about the initialisation of member variables.
First of all: What is a union? A union is a user-defined type that can hold one of its members at a time.
.
A std::variant is in contrast a type-safe union. We have it since C++17. An instance of std::variant has a value from one of its types. The type must not be a reference, arrayIts or void. A default-initialized std::variant will be initialized with its first type. In this case, the first type must have a default constructor. Here is a simple example based on cppreference.com.
// variant.cpp
#include <variant>
#include <string>
int main(){
std::variant<int, float> v, w; // (1)
v = 12;
int i = std::get<int>(v);
w = std::get<int>(v); // (2)
w = std::get<0>(v); // (2)
w = v; // (2)
// std::get<double>(v); // error: no double in [int, float] (3)
// std::get<3>(v); // error: valid index values are 0 and 1 (4)
try{
std::get<float>(w); // (5)
}
catch (std::bad_variant_access&) {}
std::variant<std::string> v2("abc"); // (6)
v2 = "def"; // (7)
}
I define in line (1) both variants v and w. Both can have an int and a float value. std::get<int>(v) returns the value. In lines (2) you see three possibilities to assign the variant v to the variant w. But you have to keep a few rules in mind. You can ask for the value of a variant by type (line 3) or by index (line 4). The type must be unique and the index valid. On line (5), the variant w holds an int value. Therefore, I get a std::bad_variant_access exception. If the constructor call or assignment call is unambiguous, a conversion can take place. That is the reason that it's possible to construct a std::variant<std::string> in line (6) with a C-string or assign a new C-string to the variant (line 7).
Variadic functions are functions such as std::printf which can take an arbitrary number of arguments. The issue is that you have to assume that the correct types were passed. Of course, this assumption is very error-prone and relies on the discipline of the programmer.
To understand the implicit danger of variadic functions, here is a small example.
// vararg.cpp
#include <iostream>
#include <cstdarg>
int sum(int num, ... ){
int sum{};
va_list argPointer;
va_start(argPointer, num );
for( int i = 0; i < num; i++ )
sum += va_arg(argPointer, int );
va_end(argPointer);
return sum;
}
int main(){
std::cout << "sum(1, 5): " << sum(1, 5) << std::endl;
std::cout << "sum(3, 1, 2, 3): " << sum(3, 1, 2, 3) << std::endl;
std::cout << "sum(3, 1, 2, 3, 4): " << sum(3, 1, 2, 3, 4) << std::endl; // (1)
std::cout << "sum(3, 1, 2, 3.5): " << sum(3, 1, 2, 3.5) << std::endl; // (2)
}
sum is a variadic function. Its first argument is the number of arguments that should be summed up. I will only provide so much info to the varargs macros that you can understand the program. For more information, read cppreference.com.
Inline (1) and line (2) I had a bad day. First, the number of the arguments num is wrong; second, I provided a double instead of an int. The output shows both issues. The last element inline (1) is missing and the double is interpreted as int (line 2).
This issue can be easily overcome with fold expressions in C++17:
// foldExpressions.cpp
#include <iostream>
template<class ...Args>
auto sum(Args... args) {
return (... + args);
}
int main(){
std::cout << "sum(5): " << sum(5) << std::endl;
std::cout << "sum(1, 2, 3): " << sum(1, 2, 3) << std::endl;
std::cout << "sum(1, 2, 3, 4): " << sum(1, 2, 3, 4) << std::endl;
std::cout << "sum(1, 2, 3.5): " << sum(1, 2, 3.5) << std::endl;
}
Okay, the function sum may look terrifying to you. C++11 supports variadic templates. These are templates that can accept an arbitrary number of arguments. The arbitrary number is held by a parameter pack denote by an ellipse (...). Additionally, with C++17 you can directly reduce a parameter pack with a binary operator. This addition, based on variadic templates, is called fold expressions. In the case of the sum function, the binary + operator (...+ args) is applied. If you want to know more about fold expressions in C++17, here is my previous post to it.
The output of the program is as expected:
Additionally to variadic templates and fold expression, there is another comfortable way for a function to accept an arbitrary number of arguments of a specific type: use a container of the STL such as std::vector as an argument.
// vectorSum.cpp
#include <iostream>#include <numeric>#include <vector>
auto sum(std::vector<int> myVec){ return std::accumulate(myVec.begin(), myVec.end(), 0);} int main(){ std::cout << "sum({5}): " << sum({5}) << std::endl;std::cout << "sum({1, 2, 3}): " << sum({1, 2, 3}) << std::endl;std::cout << "sum({1, 2, 3, 4}): " << sum({1, 2, 3, 4}) << std::endl; }
In this case, a std::initializer_list<int> is used as argument of the function sum. A std::initializer_list can directly initialise a std::vector. In contrast, to fold expressions, std::accumulate is performed at runtime.
Next time, I continue with the profile to Bounds Safety. This profile has four35
Yesterday 7029
Week 41059
Month 107725
All 7375565
Currently are 141 guests and no members online
Kubik-Rubik Joomla! Extensions
Read more...
Read more... | https://modernescpp.com/index.php/c-core-guidelines-type-safety-per-design | CC-MAIN-2021-43 | refinedweb | 1,127 | 73.78 |
Create a Common Facelets Tag Library: Share it across projects
Tutorial – Step By StepIf you’ve learned to use JSF Facelets to create on-the-fly, simple components using XHTML, then you probably have a whole slew of custom components that need to be copied between various projects, and can be somewhat painful to keep up to date. You may have tried to move them into a jar file, but Facelets can’t find them there (without some help from us.)he intent of this tutorial is to explain how to create a packaged jar file, which can be referenced from multiple projects. and which contains all of your tag components and classes for easier maintenance. (Please note that you may still include non-xhtml based components in this tag library, this does not limit you to use only xhtml facelets.)
Download the following archive
- Facelets Jar: jsf-facelets.jar (1.1.14 was used in this tutorial)
Instructions:
- added to the class path.
- Copy and paste the following source files into your project.
- Create your facelets-taglib.common.xml definition file.
- Make necessary additions to web.xml
- Create your first tags.
facelets-taglib-common/ +---JavaSource/ | | CustomResourceResolver.java | - | +---lib/ | | jsf-facelets.jar | - | +---META-INF/ | | facelets-common-taglib.xml | | facelets-common-taglib.tld | | MANIFEST.MF | | | +---taglib/ | | analytics.xhtml | | doctype.xhtml | | your_custom_tag.xhtml - -
—-
CustomResourceResolver.javaThis is a required utility class to allow Facelets to find resources that are not in your project folder, but instead, anywhere on the build path. Put this file in your source folder. The reason putting your XHTML custom components and tag XML files does not work out of the box is because Facelets uses a strict ResourceResolver. The default ResourceResolver looks only in the path of your current project / War. Fortunately, however, we can override the default behavior.
—-.
—-
facelets-taglib-common.tld (optional)This file describes your tag-library for your IDE’s autocompletion, and for Validation, so that you can check during development to ensure that your tags are being properly used. Note**: To enable autocompletion in your IDE, you probably need to copy this file into your own Web Application’s WebContent/META-INF/taglib/ directory, this does not affect Facelets.
—-
web.xmlWe need to make two additions to your Web Application’s web.xml file in order for this to work. Assuming that you have already installed Facelets into your faces-config.xml file, you also need to do two things here:
- Add your tag library XML file to the list of Libraries that Facelets will load.
- Override the default ResourceResolver
—-
I did something similar recently for a project I worked on. The cool thing with Tomcat is that you don’t need to declare the taglib in the target project at all; it will be loaded automatically during startup (now I have to check again if that was a Tomcat specific feature or not, I don’t remember :p).
That’s correct! I’ve packaged the taglib with the JAR file and although Facelets complains, it still find it:
Sep 11, 2008 12:14:32 AM com.sun.facelets.FaceletViewHandler initializeCompiler
SEVERE: Error Loading Library: /META-INF/taglib/facelets-common-taglib.xml
java.io.FileNotFoundException: /META-INF/taglib/facelets-common-taglib.xml
at com.sun.facelets.FaceletViewHandler.initializeCompiler(FaceletViewHandler.java:272)
at com.sun.facelets.FaceletViewHandler.initialize(FaceletViewHandler.java:161)
at com.sun.facelets.FaceletViewHandler.renderView(FaceletViewHandler.java:537))
———
But this later:
Sep 11, 2008 12:14:32 AM com.sun.facelets.compiler.TagLibraryConfig loadImplicit
INFO: Added Library from: jar:file:/WEB-INF/lib/facelets-common-taglib.jar!/META-INF/taglib/facelets-common-taglib.xml
Good tutorial.
But I can’t get my Taglib to work.
It doesn’t seem to find my TLD in Netbeans and when I deploy it, it won’t work too. It just doesn’t recognise my namespace.
Thanks for the tutorial, helped me a lot.
I had reRender problems with Tomcat 6 after packing templates in a JAR.
Here is the solution:
I was able to get it to work by not putting the facelets.RESOURCE_RESOLVER context-param in the web.xml. However, I had to put the CustomResourceResolver in my project source code. I would like to put this in a common project since it will be the same for every project. I dont want to have to copy and paste it into every new app and change the name to CustomResourceResolver.java. I could probably modify the build script but the target project would have the package name of the common project. This seems a little ugly. Any thoughts on how to resolve this?
It sounds like the .jar file containing CustomResourceResolver is not on the build path. | http://www.ocpsoft.org/opensource/create-common-facelets-jar/ | CC-MAIN-2018-39 | refinedweb | 783 | 58.48 |
Interactive cubic shape to control the orientation of a camera. More...
#include <Inventor/ViewerComponents/nodes/SoViewingCube.h>
Interactive cubic shape to control the orientation of a camera. class as the viewer when using a viewing cube. SceneOrbiter is a "mode less" viewer. A mouse click without moving the mouse is interpreted as a selection (for example triggering the viewing cube behavior), but a mouse click and "drag" is interpreted as controlling the camera. For convenience, the SceneOrbiter automatically adds a viewing cube to the scene graph.
The viewing cube is rendered in a corner of the render area. Its size and position in the render area are defined by the size and position fields.
Rendering customization
When the mouse cursor is moved on top of the viewing cube, the part that is under the cursor is highlighted using the color defined by the field selectionColor. The colors of the cube can also be personalized with 3 fields: faceColor, edgeColor field.
Note that the animation of the camera is done in such a way that the default text of each face Top/Bottom/Front/Back/Left/Right are always legible.
Possible up axes of the scene.
Different types of edges.
Possible positions of the viewing cube in the scene camera viewport.
Duration of camera movement to reach the desired position.
Any negative value of this field is equivalent to 0 (no animation). Default is 0.8 seconds.
Color used to render the corners of the cube.
Default is gray (0.8,0.8,0.8).
Color used to render the edges of the cube.
Default is gray (0.8,0.8,0.8).
Size of the edges, relative to the size of the faces.
The size of the corners are also adjusted according to this value. When the edgeSize value, the edgeSize defines either a radius (for ROUND style) or a width.
Below are images of a viewing cube with different edgeSize=0.1 on the left column and edgeSize=0.25 on the right column.
Color used to render the faces of the cube.
Default is gray (0.8,0.8,0.8).
Texture to customize the face's appearance which has a "Right" label by default.
This field defines the name of the file which contains this texture.
The standard image file formats are supported. See SoRasterImageRW for the list. The image format must handle an alpha channel in order to allow the selected face to be highlighted with the selectionColor.
Position of the viewing cube in the scene camera viewport.
Use enum PositionInViewport.
Default is TOP_RIGHT.
Camera.
qtViewerExaminer = new ViewerExaminer(NULL); SceneExaminer* sceneExaminer = qtViewerExaminer->getRenderArea()->getSceneInteractor(); SoViewingCube* viewingCube = new SoViewingCube; viewingCube->sceneCamera = sceneExaminer->getCamera(); sceneExaminer->addChild(viewingCube);
Color used to highlight the part of the viewing cube that is under the cursor.
Default is cyan (0,1,1).
The following image shows the highlighted edge behind the cursor when selectionColor = (1,0,0)
Size of the viewport, in pixels, in which the viewing cube is drawn.
Default is 150 pixels width, 150 pixels height.
Up). Use enum Axis. Default is Y. | https://developer.openinventor.com/refmans/latest/RefManCpp/class_so_viewing_cube.html | CC-MAIN-2021-25 | refinedweb | 513 | 51.85 |
In perfect code all eventualities will be anticipated and code written for all eventualities. Sometimes though we might want to allow for more general failure of code and provide an alternative route.
In the following code we try to perform some code on a list. For the code to work the list must be long enough and the number must be positive. We use try …. except to perform the calculation where possible and return a null value if not.
In Python an error is called an ’exception’.
import math x = [1, 2, -4] # Try to take the square root of elements of a list. # As a user try entering a list index too high (3 or higher) # or try to enter index 2 (which has a negative value, -4) test = int(input('Index position?: ')) try: print (math.sqrt(x[test])) except: print ('Sorry, no can do') OUT: Index position?: 2 Sorry, no can do
Try …. except may be used in final code (but it is better to allow for all specific circumstances) or may be used during debugging to evaluate variables at the time of code failure
One thought on “14. Python basics: try …. except (where code might fail)” | https://pythonhealthcare.org/2018/03/22/14-python-basics-try-except-where-code-might-fail/ | CC-MAIN-2020-29 | refinedweb | 199 | 75.3 |
Sum Product Four Numbers coderinme
Finding the sum/product of four numbers entered by the user manually would be very hectic, frustrating as well as a time taking process. To make it simple Let us now write a program in Java that reads four integers from the user in the main program. It passes these two integers and then finds the sum of all these integers by adding the four and returns the sum/product as an output.
Given any two numbers like a and b
sum = a + b
product = a * b
Write a Java Program which prompts the user to enter 4 numbers. The program will then computes and display their sum and their product.
Program:
import java.io.*; public class Q16 { public static void main(String[] args) throws IOException { BufferedReader inp=new BufferedReader(new InputStreamReader(System.in)); System.out.println("Enter any 4 number :"); double n1=Double.parseDouble(inp.readLine()); double n2=Double.parseDouble(inp.readLine()); double n3=Double.parseDouble(inp.readLine()); double n4=Double.parseDouble(inp.readLine()); System.out.println("The sum of 4 number is : "+(n1+n2+n3+n4)); System.out.println("The product of 4 number is : "+(n1*n2*n3*n | https://coderinme.com/sum-product-four-numbers-coderinme/ | CC-MAIN-2019-09 | refinedweb | 196 | 50.73 |
There are four GN target templates that should be used for Rust projects:
rustc_librarydefines a library and optionally a unit test target. The library can be depended on by other targets.
rustc_binarydefines an executable and optionally a unit test target.
rustc_testdefines a test-only target.
rustc_macrodefines a procedural macro target.
The examples/rust directory has some examples of Rust packages that use these targets, as do the Rust FIDL examples.
Note: The example Rust BUILD.gn file contains the line
group("rust"). In this instance,
rust refers to the directory the
.gn file is in, not the language.
See also: Build Fuchsia with a custom Rust toolchain
Fuchsia Rust targets are not built with cargo. That said, you can generate Cargo.toml files for use with external tooling. This functionality is not guaranteed to work.
Once you have a Cargo.toml for your target you can generate and browse HTML documentation for your target and its dependencies by running:
fx rustdoc path/from/fuchsia/root/to/target:label --open
You can run unit tests on connected devices using
fx, with the
fx test {package name} command. See Testing Rust code for information on adding and running tests.
Procedural macro targets are executed on the host at compile time. Therefore, they cannot depend on other crates that are only available on device, e.g. zircon.
Negative tests, e.g. asserting that a macro fails to compile with a specific error, are currently not supported.
By default our build configuration makes all Rust warnings into errors. This requirement can be onerous during development, and on your local machine you may wish to see warnings as warnings and let CQ enforce the hard boundary.
The
rust_cap_lints GN arg allows you to control this behavior in your development environment. Setting
rust_cap_lints = "warn" in
fx args or adding
--args='rust_cap_lints = "warn"' to your
fx set will allow you to develop locally without being blocked by warnings.
We don't currently have a style guide for Rust, but you should run
fx rustfmt or
fx format-code before submitting. We mostly use the rustfmt defaults, but have a couple custom settings.
If you're new to Rust, and would like someone to review your changes to validate that your usage of Rust is idiomatic, contact one of the following (or add them as a reviewer to your change.)
(To volunteer for this, please add yourself to the list above and upload the change with one of the above as the reviewer).
Public discussion happens on the rust-users@fuchsia.dev mailing list.
{% dynamic if user.is_googler %}
[Googlers only] For Googler-only channels, see go/fuchsia-rust-googlers.
{% dynamic endif %} | https://fuchsia.googlesource.com/fuchsia/+/refs/heads/main/docs/development/languages/rust/README.md | CC-MAIN-2021-31 | refinedweb | 443 | 58.69 |
view raw
I have an array of EKReminder, like so:
var currentReminders: [EKReminder]? = ....
import Foundation
import EventKit
class NNReminder: EKReminder {
var additionalNotes: String?
}
Provided you are sure that all members of
currentReminders are, in fact,
NNReminders, you can cast them all like this:
currentReminders = currentReminders.map { $0 as! NNReminder }
Edit: if only some of the reminders are of type
NNReminder, or you're not sure of the array's contents, you can use
flatMap to remove the nil values:
currentReminders = currentReminders.flatMap { $0 as? NNReminder }
If you are asking how to transform a bunch of objects that were initialized as
EKReminder, you should write a custom
init in
NNReminder that takes an
EKReminder, and use this
init in the above map method. | https://codedump.io/share/8F6RFPEQHYwB/1/casting-array-of-objects-to-array-of-subclassed-objects-in-swift | CC-MAIN-2017-22 | refinedweb | 122 | 62.78 |
How to pass input value from child component to parent component
If you’re coming from React, passing a value from a child component to the parent component requires some boilerplate code.
You can see an example here.
I’m going to replicate the the example in the link above, but in Svelte.
I’m going to create a Svelte component called input.svelte.
<script> // Creating a prop export let onChange; const handleBlur = e => { if ('function' === typeof onChange) { // Pass input value to the top onChange(e.target.value); } }; </script> <input on:blur={handleBlur}>
In my
Input component, I have a
handleBlur() function. The sole job of this function is to check if
onChange is a function, and if so, pass the input value through.
In my parent Svelte component, App.svelte, I’m going to import my Svelte
Input component, and attach a function handler to the
onChange prop.
<script> import Input from "./input.svelte"; let fullName = ""; // Update fullName when the value changes on blur event const handleChange = value => (fullName = value); </script> <label> Enter your name <Input onChange={handleChange} /> </label> <h1>Hi {fullName}!</h1>
handleChange() is responsible to update the state property
fullName.
The greeting message then displays the new name value.
This works fine, but there are 2 problems:
- It doesn’t feel real time because it happens when the input tag is out of focus
- Too much boilerplate code
To solve for the 2 problems above, I can use Svelte
bind:property feature to help our Svelte app stay reactive on change.
Use bind:property instead of handler functions
In Svelte data flows downward. But sometimes you need data to flow upwards to a parent component.
And the example above we did that by tossing event handler functions down, to get values to go up to the parent component.
But,
bind:property={variable} let’s us reduce that boilerplate code, and gives us that real-time update.
It continues to follow the “write less, do more” motto.
Let’s refactor the
Input component and use the
bind property.
<script> // Input prop called value export let value; </script> <input type="text" bind:value>
The only thing in the
<script> tag is an exported variable called
value.
I’m than updating the
input HTML element to bind with the variable,
value.
Since the export variable name was the same name as the bind property, I used the short-hand version.
<!-- long-hand version --> <input type="text" bind:value={value}> <!-- short-hand version --> <input type="text" bind:value>
Now I’m going to refactor the App.svelte component and make use of the new
bind:property feature.
<script> import Input from "./input.svelte"; let fullName = ""; </script> <label> Enter your name <input bind:value={fullName}> </label> <h1>Hi {fullName}!</h1>
Let’s do a quick breakdown on this file.
I deleted the handler function,
handleChange(), and left the
import statement and the state variable,
fullName.
<script> import Input from "./input.svelte"; let fullName = ""; </script>
The next update in App.svelte, was the
Input component directive.
<Input bind:value={fullName}>
On the directive, I also added
bind:value={fullName} so it keeps my state variable,
fullName, updated as the user is entering new characters.
Now every time you enter a new character inside the text field, it will update the output greeting message.
I like to tweet about Svelte and post helpful code snippets. Follow me there if you would like some too! | https://linguinecode.com/post/pass-input-value-from-child-component-to-parent-component | CC-MAIN-2022-21 | refinedweb | 570 | 65.62 |
![if !IE]> <![endif]>
The this Keyword
Sometimes a method will need to refer to the object that invoked it. To allow this, Java defines the this keyword. this can be used inside any method to refer to the current object. That is, this is always a reference to the object on which the method was invoked. You can use this anywhere a reference to an object of the current class’ type is permitted.
To better understand what this refers to, consider the following version of Box( ):
// A redundant use of this.
Box(double w, double h, double d) {
this.width = w; this.height = h; this.depth = d;
}
This version of Box( ) operates exactly like the earlier version. The use of this is redundant, but perfectly correct. Inside Box( ), this will always refer to the invoking object. While it is redundant in this case, this is useful in other contexts, one of which is explained in the next section.
Instance Variable Hiding
As you know, it is illegal in Java to declare the Box class. If they had been, then width, for example, would have referred to the formal parameter, hiding the instance variable width. While it is usually easier to simply use different names, there is another way around this situation. Because this lets you refer directly to the object, you can use it to resolve any namespace collisions that might occur between instance variables and local variables. For example, here is another version of Box( ), which uses width, height, and depth for parameter names and then uses this to access the instance variables by the same name:
// Use this to resolve name-space collisions.
Box(double width, double height, double depth) {
this.width = width; this.height = height; this.depth = depth;
}
A word of caution: The use of this in such a context can sometimes be confusing, and some programmers are careful not to use local variables and formal parameter names that hide instance variables. Of course, other programmers believe the contrary—that it is a good convention to use the same names for clarity, and use this to overcome the instance variable hiding. It is a matter of taste which approach you adopt.
Related Topics
Copyright © 2018-2020 BrainKart.com; All Rights Reserved. Developed by Therithal info, Chennai. | https://www.brainkart.com/article/The-this-Keyword---Java_10424/ | CC-MAIN-2020-24 | refinedweb | 379 | 66.13 |
External storage
The External Storage (ES) hosts are used to store the compressed text content of wiki page revisions in MariaDB databases. When a user asks for a page or a diff to an older version of a page, Mediawiki grabs the compressed content from the external store database and uses it to display the user's request. Compressed text is stored in a number of different formats.
ES data is sharded into multiple clusters. Originally this was to allow it to scale across multiple machines when one became unable to handle the load. Now those clusters have all been merged back onto one host (represented as separate tables inside each wiki's db) and the separation into clusters only serves to keep the table size down. Multiple clusters are used at any given time.
The Echo extension uses External Storage to store notifications. The Flow extension uses External Storage to store the Parsoid HTML of users' posts. They use the 'extension1' cluster in production as of 2014-07, see InitialiseSettings.php. The AbuseFilter extension uses ExternalStore as well.
Servers
The ES hosts are named es#. Eqiad hosts are numbered >=1000. Codfw hosts are numbered >=2000 They are all Dell r510 class machines with 12 2TB disks. Configured with RAID10, they have ~12TB available disk.
For a list of which servers are currently active serving which cluster, see db-*.php.
The ES servers use run MariaDB and use standard MariaDB replication to maintain consistent datasets. The replication topology is described in the image on the right - each colo has flat replication within the colo with one host per colo doing inter-colo replication. Of course this statement should be treated as immediately out of date and verified by querying the MariaDB servers directly.
Reading db.php
The section in db.php that lists out the ES hosts is called
externalLoads. As with other db arrays in db.php, the first in the list is the master, the rest are slaves. Since all but one of the ES clusters are read only, there is no difference between master and slave.
The default table name for an ES host is 'blobs'. the
templateOverridesByCluster section of db.php allows you to change that. Because most of the shards are coresident on the same server, most of them are overridden.
Nagios, Monitoring, and Health
Nagios
Nagios watches a few things on the ES hosts:
- stardards: ping, ssh, disk, etc.
- mysql: replication lag, master write status
Here are responses to some potential nagios alerts:
- Host down
- if it's a slave, comment out the host from db.php (How to do a configuration change)
- if it's the master, promote one of the other slaves in the same colo to be a new master using the switch-master script
- Disk full
- verify whether it's / or /a/ that is full
- delete replication logs, old backups, etc.
- escalate to sean or jaime
- if either / or /a/ reaches 100%, the database will need a reslave
- Replication fallen behind or stopped
- Do nothing - mediawiki will remove the host from rotation on its own
- check RAID for degraded disks
- wait and see if it gets better on its own
- if it doesn't get better on its own, escalate to sean or jaime
- RAID
- Go figure out which disk has failed Raid and MegaCli, put in an RT ticket to replace it.
Ganglia
In addition to the standard host metrics, each ES host has a number of mysql-specific metrics. The most useful of these:
- mysql_questions (how many queries are coming in)
- mysql_max_used_connections (how many connections mysql has open)
- mysql_threads_connected (the number of running threads)
- mysql_slave_lag (how many seconds behind the master the slave is currently lagging)
Health
Other generic commands to check the health of the databases
- show slave status \G
- look for the following 3 lines
- Slave_IO_Running: Yes
- Slave_SQL_Running: Yes
- Seconds_Behind_Master: 0
Backups and Snapshots
Daily copies of /a/sqldata are currently taken on es1004. these snaps are filesystem copies, not lvm snapshots. They should be discontinued as soon as we have regular LVM snapshots. There is a perl script called snaprotate.pl running around but it is not yet in use on the es hosts.
Taking a snapshot
The basic steps are: stop replication, record replication data, flush tables and sync the disks, take the snapshot, start replication.
- mysql> flush tables; # this step is optional but speeds up the next flush tables
- mysql> stop slave io_thread;
- mysql> show slave status\G
- note Read_Master_Log_Pos
- mysql> show slave status\G
- repeat until Read_Master_Log_Pos and Exec_Master_Log_Pos match and don't change twice in a row
- mysql> flush tables;
- $ mysql -e 'show slave status\G' > /a/slave_status_YYYY-MM-DD.txt
- $ sync
- $ device=$(lvdisplay | grep 'LV Name' | head -n 1 | awk '{print $3}')
- what this means - note the
LV Namefield for the first volume in lvdisplay - it probably looks like /dev/es1003/data
- $ lvcreate -L 200G -s -n snap $device
- mysql> start slave;
Making a new slave using snapshots
This process is useful when you are making a new slave of the cluster. The broad task is to take a snapshot (recording slave status), copy the data over to a new host, clean it up, start replication, then delete the snapshot. Here are the steps in detail using example hosts A (the current slave) and B (the new host).
Do all the steps listed above in #Taking a snapshot.
On host A, also do:
- $ mkdir /mnt/snap
- $ fs=$(mount | grep /a | cut -f 1 -d\ )
- what this means - note the filesystem mounted on /a; it probably looks like /dev/mapper/es1003-data
- $ mount -t xfs -o ro,noatime,nouuid ${fs/data/snap} /mnt/snap
- this mounts the snapshot at /mnt/snap. The snapshot is probably called something like /dev/mapper/es1003-snap
on host B:
- stop all running mysql processes, either /etc/init.d/mysql stop or kill them
- $ rsync -avP A:/mnt/snap/ /a/
- copy over all of /mnt/snap/ into /a, overwriting anything you find there.
- you should do this in a screen session; it will take about 2 days to complete.
- $ cd /a/sqldata; rm A* master.info relay-log.info
- by A* here I mean all the binary logs from the old host. There are a bunch of them; binlogs, relay logs, the slow query log, etc. eg. es1003-bin.000023, es1003-relay-bin.000004, es1003.err, etc.
- $ mysqld_safe&
- $ cat /a/slave_status_YYYY-MM-DD.txt (the file you created above).
- look for master host, user, log file, and position.
- mysql> change master to master_host='10.0.0.123', master_user='repl', master_password='xxxxx', master_log_file='host-bin.00012', master_log_pos=12345;
- mysql> start slave;
on host A:
- $ umount /mnt/snap
- $ lvremove $dev
- eg lvremove /dev/es1003/snap
Database Schema
The ES hosts have tables named blobs or blobs_cluster#. The schema is more or less the same: two columns, an ID (autoincrementing) and a blob store that contains the gzipped text.
CREATE TABLE `blobs` ( `blob_id` int(10) unsigned NOT NULL AUTO_INCREMENT, `blob_text` longblob, PRIMARY KEY (`blob_id`) ) ENGINE=MyISAM
In the main database schema, you get to the external store from the old_text column of the text table. It contains either the actual text for the page (for very few very old pages) or a pointer into the external store databases. The pointer looks like one of these:
DB://<clustername>/<integer> DB://<clustername>/<integer>/<integer>
clustername is resolved in db.php. There you find the hostname(s) of servers that have the content you're looking for, in addition to the table name.
Path in the database from a page name to its text (using as my example):
select page_latest as rev_id from page where page_namespace=0 and page_title='Defenestration'; select rev_text_id as old_id from revision where rev_id=$rev_id select old_text from text where old_id=$old_id
or, put together into one query:
select text.old_text from page, revision, text where page.page_namespace=0 and page.page_title='Defenestration' and page.page_latest = revision.rev_id and revision.rev_text_id = text.old_id;
OK, going back the other direction... In order to find a list of pages that exist on a specific cluster: (warning these are not efficient queries.)
select old_id as rev_id from text where old_text like 'DB://cluster6%' order by old_id desc limit 5; select rev_page as page_id from revision where rev_text_id=$rev_id; select page_title from page where page_id=$page_id;
or, put together as one query: select 20 pages from cluster 6 (warning this query is slow.):
select page.page_title, revision.rev_id, text.old_text from text,revision,page where text.old_text like 'DB://cluster6%' and text.old_id = revision.rev_text_id and page.page_id = revision.rev_page order by page.page_id limit 20;
These pages can be loaded in a browser with the rev_id alone with a URL like
See also
- Text storage data
- mw:Manual:External Storage
- There's interesting stuff on the page tracking the project to bring the ES service up to date, that ran from 2011-08 to 2011-11. | https://wikitech.wikimedia.org/wiki/External_storage | CC-MAIN-2020-10 | refinedweb | 1,485 | 61.46 |
26 August 2011 05:42 [Source: ICIS news]
By Felicia Loo
SINGAPORE (ICIS)--Asia’s naphtha backwardation is likely to strengthen on expectations that Taiwanese Formosa Petrochemical Corp (FPCC) would restart its long overdue 700,000 tonne/year No 1 naphtha cracker at Mailiao sometime in September, traders said on Friday.
The expectations of a restart have raised hopes of higher demand for naphtha in a market already quite short of molecules, traders said on Friday.
The time spread between the first-half October and first-half November contracts was assessed at $6/tonne (€4.20/tonne) on Thursday, the strongest since 16 May when the inter-month spread was at $7/tonne, according to ICIS.
The naphtha crack spread versus October Brent crude futures was valued at $135.60/tonne, up by $6.35/tonne on the previous week.
“Naphtha is on an uptrend. Demand is outpacing supply. The market is heartened with ?xml:namespace>
FPCC is expected to restart the No 1 naphtha cracker in September, with the unit's unplanned shutdown pushing on its fourth month.
An FPCC spokesman had earlier said that the cracker restart would be earlier than November but did not provide an exact date.
FPCC had shut the No 1 cracker for inspections following a pipeline fire at the firm’s Mailiao petrochemical complex on 12 May.
“
The fact that FPCC is gradually restarting its crude distillation units at the 540,000 bbl/day refinery at Mailiao, bodes well for the cracker to resume operations, traders said.
FPCC shut the refinery and related units entirely because of a fire in end-July.
Supply wise,
“The market seems to be strong because supplies are getting tighter. The arbitrage window is closed given a weak east-west spread. And
The east-west spread was at a weak level of $4.90/tonne, a far cry from at least $27/tonne that would enable arbitrage economics to work, they added.
“There is less (ethanol) output in
Amid the tight supply situation in
Indian refineries are curbing naphtha exports for September amid several plant turnarounds, with naphtha shipments being reduced to 850,000 tonnes from levels at above 900,000 tonnes in August, traders said.
Reflecting a bullish market, a series of spot tenders garnered strong premiums.
South Korea's LG Chem bought 75,000 tonnes of open-spec naphtha for delivery in the first-half of October, at premiums of $4.50/tonne and $5.00/tonne to Japan quotes CFR (cost & freight).
South Korea’s Samsung Total Petrochemicals bought 50,000-75,000 tonnes of open-spec naphtha for delivery in the first half of October at a premium of $5/tonne to Japan quotes CFR.
Indian refiner Oil and Natural Gas Corp (ONGC) has sold by tender 35,000 tonnes of naphtha for loading from Hazira on 11-12 September at a premium of $21-22/tonne to Middle East quotes FOB.
India’s Reliance Industries Limited (RIL) sold 55,000 tonnes of naphtha to trading firm Itochu for loading from Sikka on 10-20 September at Middle East quotes FOB plus $21/tonne.
Meanwhile, Qatar International Petroleum Marketing (Tasweeq) has sold by tender 50,000 tonnes of plant condensate and 30,000 tonnes each of full-range naphtha and Pearl GTL (gas-to-liquids) naphtha for loading in September, at premiums of $19-22/tonne to Middle East quotes Fan.
“Downstream (petrochemical) prices are holding. I don’t see why naphtha can’t be bullish,” said one trader.
Additional reporting by Lester Teo
($1 = €0.70)
Please visit the complete ICIS plants and projects database
For more information | http://www.icis.com/Articles/2011/08/26/9488132/asia-naphtha-backwardation-to-widen-on-firm-demand-tight-supply.html | CC-MAIN-2014-10 | refinedweb | 605 | 62.68 |
There is a special partnership between System.Transactions and Sql Server 2005, and no, it is not the fact that we begged the Enterprise Services team to ship this feature on whidbey and it is not (only) the fact that this is the only way to get distributed transactions to work inproc in Yukon (will talk about that in another blog). What makes the relationship special is that Sql Server 2005 understand Lightweight Transactions, uses Lightweight Transactions whenever possible, optimizes the use of Lightweight Transactions and does all this as transparently as we have been able to make it.
So what is a LightweightCommittableTransaction? The long answer is that it is what you get when you call System.Transactions.Transaction.Create() with the default DefaultTransactionManager set to LightweightTransactionManager. The short answer is that it is an in memory transaction that can be promoted to a full dtc. Neither of these definitions may be what you are looking for, so I have put this one together all by my lonesome:
Question>What is a LightweightCommittableTransaction as far as ado.net is concerned?
Angel>A transaction that looks like a distributed transaction, smells like a distributed transaction and tastes like a distributed transaction. Oh yeah, it can be a lot faster btw.
Let’s see this in action, bring up the trusty Component Services Transaction Statistics
(start->control panel->administrative tools->Component Services->Component Services->Computers->MyComputer->Distributed Transaction Coordinator -> Transaction Statistics)
This tool tracks distributed transactions as they happen, let’s give it a whirl. You should already be familiar with the code below from a previous blog, I have modified it slightly to show off a transaction being delegated, then promoted.
using System;
using System.Data.SqlClient;
using System.Transactions;
public class Repro {
public static int Main(string[] args) {
using(TransactionScope transactionscope1 = new TransactionScope()) {
//Delegation only works against Sql Server 2005, for this example to work this connection must point to one.
using (SqlConnection sqlconnection1 = new SqlConnection(SqlServer2005ConnectionString)) {
sqlconnection1.Open(); //The connection enlists, but does not promote
//do your work 1 here.
}
Console.WriteLine("Check your Transaction Statistics Active transactions here, then press enter");
Console.ReadLine();
//This connection can point to any Backend that supports DTC. Sql Server 7, 2000, 2005 or Oracle
using (SqlConnection sqlconnection2 = new SqlConnection(ConnectionString2)) {
sqlconnection2.Open(); //The connection enlists, automatically promotes the transaction.
//do your work 2 here.
}
Console.WriteLine("Check your Transaction Statistics Active transactions here, then press enter");
// Set the scope to commit by setting the following property:
transactionscope1.Consistent = true;
}// when the TransactionScope is disposed it will check the Consistent property. If this is true the DTC will commit, if it is false it will roll back.
return 1;
}
}
What just happened? Well if you run this code to the first ReadLine you will see that there are no Transactions Active showing! We have created a TransactionScope, we have opened a connection that enlists into this scope, but since we are connected to Sql Server 2005 and we don’t have the need for a full distributed transaction we have Delegated the promotion of the transaction. We have opened a local transaction with all of the performance implications that this implies, and all of the “work 1” will be done under this local transaction. After the first ReadLine we open a connection to a second server which could or could not be the exact same server that we are connecting to with sqlconnection1. On sqlconnection2.Open the Lightweight Transaction realizes that it no longer can remain “light” and converts into a full COM+ distributed transaction, at that point we will finally be able to see an Active transaction show in our Component Services tool.
Wait, wait! What about the local transaction that we are using against the first server? Nothing to worry about, we will promote the local transaction into the full distributed transaction, you will not even know that the first transaction was only local. Most of the time the MSDTC of the first server will own the distributed transaction from this point on. Why not all of the time? Well this depends on a heuristics feature that the Enterprise Service is working on, I really don’t know what the current state of this feature is, but the basic idea is that they will keep track of when a transaction is getting promoted and be able to promote at the optimal time to improve performance.
Bottom line, we want this feature to be completely transparent to the end user from the ado.net point of view. It should not matter whether delegation happens or not you should always get the same robust Distributed Transaction behavior you know and love, it’s just that sometimes, it will work a lot faster.
Standard Disclaimer, All information posted here is “AS IS” and confers no rights. This is not a finished article and it is very likely going to contain some errors.
Rambling out. | http://blogs.msdn.com/b/angelsb/archive/2004/07/12/181385.aspx | CC-MAIN-2014-49 | refinedweb | 821 | 52.49 |
Implementing equality of reference types by overriding the == operator with C# .NET
August 23, 2016 Leave a comment
Inthis post we saw how to implement the generic IEquatable interface to make two custom objects equatable using the static Equals method.
You’re probably aware of the equality operator ‘==’ in C#. Let’s see what happens if you try to use it for a custom object:
public class Person { public int Id { get; set; } public string Name { get; set; } public int Age { get; set; } }
Person personOne = new Person() { Age = 6, Name = "Eva", Id = 1 }; Person personTwo = new Person() { Age = 6, Name = "Eva", Id = 1 }; Console.WriteLine(personOne == personTwo);
This will print false just like it did in the post referenced above when we first tested the Equals method. The two person objects point to different positions in memory and in this case the Person objects are cast into their memory addresses that are simply integers. As the integers are different the ‘==’ operator and Equals methods will return false.
We can easily make the above code return true. We’ll say that two Person objects are equal if their IDs are equal. Here’s how you can override the ‘==’ operator in the Person object:
public static bool operator ==(Person personOne, Person personTwo) { return personOne.Id == personTwo.Id; }
You’ll now get a compiler error. As it turns out if the == operator is overridden then there must be a matching override of the != operator. OK, no problem:
public static bool operator !=(Person personOne, Person personTwo) { return personOne.Id != personTwo.Id; }
Rerun the equality example again…
Person personOne = new Person() { Age = 6, Name = "Eva", Id = 1 }; Person personTwo = new Person() { Age = 6, Name = "Eva", Id = 1 }; Console.WriteLine(personOne == personTwo);
…and will print true.
Note that you’ll get a warning from the compiler. The Person class should also override two other methods: object.GetHashCode and object.Equals. Refer back to the post mentioned above to see how to that.
View all various C# language feature related posts here . | http://126kr.com/article/3vbsl9fqkom | CC-MAIN-2017-22 | refinedweb | 335 | 56.05 |
libs/wave/ChangeLog
Boost.Wave: A Standard compliant C++ preprocessor library Copyright (c) 2001-2013'. ------------------------------------------------------------------------------- CHANGELOG Boost V1.55: - Fixed #8848: Wave driver improperly processes 0xFFFFui64 token - Fixed #9098: Wave driver option --c++0x invalid Boost V1.54: - Fixed #8478: Make Boost.wave compatible with Clang's -Wimplicit-fallthrough diagnostic. Boost V1.53: - Fixed a problem with context<>::add_macro_definition which sometimes appended a superfluous T_EOF to the macro replacement list. Boost V1.52.0: - Added util::create_directories() wrapper to account for new behavior of boost::filesystem::create_directories(). - Fixed an obscure problem when preprocessing directives wouldn't be recognized if the previous line contained nothing but an empty macro invocation (see new test case t_9_023.cpp) - Added a new command line option --license=<file> to the Wave driver tool which allows to pre-pend the content of a (license) file to every newly created file. This option simplifies the implementation of partial preprocessing as done on Phoenix, Fusion, etc. - Changed the effect of the -N command line option to simply not expand the macro by pretending it was not defined. Earlier the whole invocation sequence was skipped, now we skip only the macro itself, which still expands the arguments for the macro invocation. - Fixed a couple of compilation warnings Boost V1.51.0 - Fixed #7050: Invalid memory write bug in lexing_exception - Fixed #7159: Text-lines are processed as if they were preprocessing directives - Changed --c++0x command line option to --c++11. Boost V1.50.0 - V2.3.2 - Fixed #6758: not all members are initialized by base_iteration_context constructor. - Fixed #6838: Adding include file with force_include makes Wave fail to emit #line directive - Added support for test of --forceinclude option to testwave executable, added test case verifying #6838 is fixed. - Fixed #6870: build wave driver failed Boost V1.48.0 - V2.3.1 - Added the flag support_option_emit_contnewlines allowing to control whether backslash newline sequences are emitted by the Wave library. The default is as before: these tokens will be silently ignored (after the token positions have been properly updated). Note: this option is supported by the SLex lexer module only. - Fixed #5887: flex_string.hpp needs to include <ios> Boost V1.47.0 - V2.3.0 - After preprocessing the body of any #pragma wave option() the wave tool now concatenates all adjacent string literals into a single string literal. - Fixed whitespace handling, added a corresponding set of test cases (t_9_020.cpp) - Added a new preprocessing hook: locate_include_file allowing to customize the way include files are located. - Added new command line option --noexpand/-N to the Wave driver allowing to suppress macro expansion for a given macro name (works for both, object like and function like macros). This option has to be used very carefully as it not only leaves the whole macro invocation untouched in the generated output but also removes this macro from consideration for Wave itself. This can cause unexpected results if the suppressed macro would influence #ifdef's later on. - Fixed Wave driver to retain all macros defined on the command line in interactive mode. - Fixed problem #5554: wave slex parser eof without eol skips the last line. - Added the compile time configuartion option BOOST_WAVE_WCHAR_T_SIGNEDNESS, which can be set to BOOST_WAVE_WCHAR_T_AUTOSELECT, BOOST_WAVE_WCHAR_T_FORCE_SIGNED, or BOOST_WAVE_WCHAR_T_FORCE_UNSIGNED), it defaults to auto�select. - Fixed a problem in teh wave driver tool related to #pragma option(output). If wave was invoked in rapid succession this erroneously appended to an existing file instead of overwriting that file. - Fixed #5569: slex CONTLINE token works only for LF line endings Boost V1.46.0 - V2.2.0 - Added recognition of C++0x keywords to Re2C lexers. - Added --c++0x command line option to Wave to enabling the recognition of C++0x keywords, converting those to C++0x tokens. - Adapted all of the library to play well with Boost.Filesystem V3 (which is the default from now on). - Added support for extended character and string literals, added test case (which required to add C++0x support to the test application). - Added proper predefined macros fro --c++0x mode. __cplusplus is currently defined to 201101L, but this will change when the Standard is finalized. - Fixed a problem with object like macros, which when being adjacent to a parenthesis, caused the parenthesis to disappear when the macro expansion was suppressed by the expanding_object_like_macro() hook function. - Fixed a bug in pragma option(preserve), missing to set to preserve=1 if after the previous value was preserve=2. - Changed the --preserve option of the wave tool to interprete the integer argument in a slightly different way: 0: no whitespace is preserved, 1: only begin of line whitespace is preserved, 2: only begin of line whitespace and comments are preserved, 3: all whitespace is preserved The #pragma wave option(preserve) now supports these arguments: [0|1|2|3|push|pop]. Boost V1.45.0 - V2.1.0 - Token pasting is well formed only as long as the formed token(s) are pp_tokens as defined by the C++0x Standard. Until now, Wave allowed for non-pp_tokens to be formed in --variadics mode. - Fixed a problem, which prevented reporting /##/ in a macro definition as invalid token pasting. - Fixed problem preventing the skipped_token hook to be called for 'inactive' conditional preprocessing directive tokens. Improved overall consistency in reporting skipped tokens to the hooks function when processing conditional preprocessing directives. Added a new test case verifying the skipped_token hook gets called reproducibly (t_2_020.cpp). - Fixed a problem with the pp hooks 'expanding_object_like_macro' and 'expanding_function_like_macro', which when returning true were stopping all preprocessing instead of just inhibiting the expansion of the macro. - Fixed a duplicated call to the pp hook skipped_token for preprocessing directives inside inactive conditional branches. - Changing exception handling to fix clang++ regression errors. - Replaced assert() with BOOST_ASSERT to pacify the Boost inspect tool. Boost V1.44.0 - V2.0.6 - Added information about the file type to iteration context. This can be either main_file, system_header, or user_header depending whether the handled file is the main file to preprocess, a include file opened from `#include <>`, or a include file opened from `#include ""`. - Added support for new Boost visibility macros. Properly exported all exceptions, etc. Boost V1.43.0 - V2.0.5 - Fixed the wave driver application to strip leading and trailing whitespace from macro names specified on the command line using -U. - Fixed line number counting for lines containing nothing but whitespace followed by a C++ comment if the next line is a pp directive. - Fixed emitting of a #line directive after returned from an include file. - A couple of fixes allowing to properly report the current line number in #line directives for different whitespace preserve modes (see --preserve/-p). - Added new preprocessing hook: emit_line_directive, allowing to customize the format of the generated #line directive. - Changed --line/-l command line option of the wave driver application to accept 0, 1, and 2 as options. The option values 0 and 1 behave as before (disable/enable the generation of #line directives), while the option value 2 will generate the #line directive using the relative filename (instead of the absolute filename emitted from option 1). The default option is value 1. - Added new example: emit_custom_line_directives, demonstrating the use of the new preprocessing hook. - Added new preprocessing hook: found_unknown_directive, which is being invoked whenever an unknown preprocessor directive (i.e. '#' followed by some identifier) is detected. It allows to interpret the directive and to provide some replacement text. - Added new example: custom_directives demonstrating the usage of the new preprocessing hook. - Fixed #4113: cpp_lexer does not handle qualified backslashes correctly. - Fixed #3106: wave on VS2010 beta compiler generates error. Boost V1.42.0 - V2.0.4 - Fixed Wave for latest changes in multi_pass iterator. Boost V1.41.0 - V2.0.3 - Switched to Re2C V0.13.5 - Fixed --list_includes/-l command line option of the wave driver tool to correctly indent the generated list of included files. - Finally fixed all remaining examples. Everything seems to work fine now. - Specifying a custom token type now works as expected. The new lexer interface introduced in V2.0 broke this part. - Removed old code related to pre Boost V1.31 (related to V1 of iterator library). - Added a new commandline option --macrocounts/-c to the Wave driver application which lists all macro invocation counts to an optionally specified file (default is cout). - Fixed compilation problems caused by recent changes to the multi_pass iterator from Spirit V2.1. - Added the new preprocessing hooks detected_pragma_once() and detected_include_guard() which are getting called whenever either a #pragma once has been detected or if the include guard heuristics detected an include guard for a particular include file. - Added a new command line option to the wave driver tool: --listguards/-g allowing to trace all include files which are either contain a #pragma once or contain include guards. - Started to elminate g++ struct aliasing warnings (more to fix, mostly in flex_string). Boost V1.40.0 - V2.0.2 - Fixed a long standing race condition inhibiting to use Wave in multi threaded environments. - Incorporated the changes from latest version of the flex_string class (#2946). - Fixed another race condition triggering problems using Wave in multi-threaded environments. Boost V1.39.0 - V2.0.1 - Fixed Wave to compile with BOOST_FILESYSTEM_NO_DEPRECATED defined (i.e. the library doesn't use the deprecated filesystem interface anymore). Boost V1.37.0 - Updated examples to reflect the recent changes in the used multi_pass iterator. - Fixed documentation links still pointing to the old Boost CVS (thanks to J�rgen. See the documentation for more details. - Added an additional template parameter to the context object, allowing to specify any possibly derived type. This change propagates to the preprocessing hooks, which now get passed the most derived context type as its first argument allowing to access protected members in the original context type. This fixes ticket #1752. - Fixed a problem during parsing of #pragma wave directive, where the value sequence contained a closing parenthesis. This caused a premature end of the pragma value parsing. - Fixed handling of support_option_single_line, which was ignored under certain circumstances. - Fixed ticket #1766: Wrong evaluation of conditional preprocessor directives with predefined macros __FILE__, __LINE__ and __INCLUDE_LEVEL__. This bug triggered an error in constructs like #ifndef __FILE__. Thanks to Daniel Wadehn for reporting and supplying a patch. Added corresponding regression test: t_2_018.cpp. - Fixed a bug which reported a valid macro redefinition as invalid if the macro replacement text referred to a second or higher parameter of this macro. - Fixed a problem in the wave tool to allow two errors to occur while preprocessing two consecutive tokens. - to 'bool' allowing to force Wave to re-evaluate the current conditional expression. This was suggested by Felipe Magno de Almeida. - Added a wave::context object as first parameter to all pp hook functions. This is an interface compatibility breaking change. The new pp-hooks can be disabled by defining the BOOST_WAVE_USE_DEPRECIATED_PREPROCESSING_HOOKS compile time constant to something not equal to zero. By default this constant will be defined to zero for Boost V1.35.0 and newer, switching to the new interface by default. - Added optional support for the import keyword (needed for the C++ module proposal). The identifier import will be recognized as a keyword, if the compile time constant BOOST_WAVE_SUPPORT_IMPORT_KEYWORD is defined to something not equal zero. - Added new preprocessing hook functions: found_error_directive() and found_warning_directive() to be called when #error/#warning directives are encountered. This was suggested by Andreas S�bj�rnsen. - Added a new sample to Wave: hannibal, a partial C++ parser implementation initially written by Danny Havenith () who agreed to add this here. Thanks! - Added new preprocessing hook function: found_line_directive() to be called when a #line directive is encountered. This was suggested by Andreas S�bj�rnsen. - Improved command line handling for the wave applet. - Incorporated latest bug fixes for the Hannibal sample provided by Danny Havenith. - Added loading of a wave.cfg file from anywhere up the filesystem hierarchy starting from the main input file for the wave driver applet up to the root of the file system. - Added support_option_emit_pragma_directive to allow to control at runtime, whether unknown #pragma directives should be emitted or not. To maintain compatibility with earlier versions this option is by default on if the compile time constant BOOST_WAVE_EMIT_PRAGMA_DIRECTIVES was defined to be not equal to zero and it is off otherwise. - Enabled XML serialization support. - Added the throw_exception preprocessing hook, which gets called for every occurring error (whenever an exception would have been thrown). The default of this new hook function is to throw the corresponding exception, which reproduces the old behavior. - Implemented a new preprocessing hook: generated_token(), which get's called whenever a token is about to be returned form the library. This function may be used to alter the token before it gets returned to the calling application. - Added a new sample 'real_positions' demonstrating the new generated_token() preprocessing hook and showing how to use Wave with a new token type without using a new lexer type. - Factored out the pure lex_input_interface to simplify writing different lexer types for Wave. - Added the token_statistics sample showing how to use Xpressive to build a lexer for Wave. - Changed the list_includes sample to use a lexer which is based on the lexertl library written by Ben Hanson (). - Added a new support_option: insert_whitespace, allowing to switch off whitespace insertion which is normally (by default) in place to disambiugate C++ tokens, which would otherwise form different tokens in the output. - Added a new commandline option to the Wave applet: --disambiguate, allowing to control whitespace insertion. The default value for this option is --disambiguate=1, resembling the previous behaviour. Specifying the option --disambiguate=0 allows to suppress whitespace insertion alltogether. - Added pragma option values push and pop to the line, preserve and output options allowing to store and restore the current option. The syntax is: #pragma wave options(<option>: push) and #pragma wave options(<option>: pop) where <option> may be line, preserve or output. Thanks to Eric Niebler for suggesting this feature. - Added the possibility to use static pre-compiled DFA tables for the lexertl based lexer. - Incorporated the changes from Andrei's latest version of the flex_string class. - Added the is_macro_defined(name) function to the context object as described in the documentation. This function is usable with any string type compatible with std::string. - Changed behavior of the --force_include functionality, which now looks for the file to be (force-)included in the current directory first. - Switched to Re2C V0.11.2 - Added const specifiers to some of the context member functions. - Fixed a problem in the SLex C++ lexer (cpp_tokens example). - Fixed a runtime problem in the Re2C generated lexers when feeded with empty input files (thanks to Leo Davis for reporting and providing a patch). - Added the is_eoi() function to token classes, returning true if the token has been initialized to be the end of input token (T_EOI) (thanks to Ovanes Markarian for suggesting this). - Added missing #includes <cstring>, <cstdlib>, and <new> to flex_string.hpp. - Added missing #include <climits> to cpp_chlit_grammar.hpp. - Changed found_include_directive hook function to return a bool indicating, whether the file should be included (true) or skipped (false). Thanks to Felipe Magno de Almeida for suggesting this feature. - Added code to the wave driver applet ignoring a #import directive (the whole directive is passed through to the output) whenever the pp constant BOOST_WAVE_SUPPORT_MS_EXTENSIONS is defined to something not equal to zero. - Fixed the wave driver applet to correctly continue after a error or warning. - Added a macro introspection facility allowing to iterate on all defined macro names. - Added a new commandline option --macronames/-m to the Wave driver application which lists all defined macros and their definitions to an optionally specified file (default is cout). - Fixed configuration to take into account thread related build settings. - Added the BOOST_WAVE_SUPPORT_LONGLONG_INTEGER_LITERALS pp constant allowing to recognize large integer literals (larger in size than long/unsigned long) even if these do not have a 'll' suffix. This pp constant is effective only, if the target platform supports long long integers (BOOST_HAS_LONG_LONG is defined). - The following preprocessing hooks now return a boolean value, which when returning 'true' cause the Wave library to skip the execution of the related preprocessing action: . found_directive: allows to skip the whole directive it is called for . expanding_object_like_macro: allows to skip expansion of the given object like macro, the macro symbol is copied to the output . expanding_function_like_macro: allows to skip the expansion of the given function like macro, the whole macro invocation (including all macro invocation parameters) are copied to the output without any further processing. - Changed the interpretation of the return value of the found_include_directive preprocessing hook: a return value of 'false' now processes the file to be included normally and a return value of 'true' now skips the processing of the include file directive (the file doesn't get included). This change was necessary to make the return values of the preprocessing hooks consistent. Now return 'false' generally means: normal execution and return 'true' generally means: skip execution of the corresponding preprocessor action. - Fixed compilation problems on gcc, fixed ambiguity with boost code (detail namespace was ambigious). - Fixed predefined macro support to be thread safe. - Added missing file to real_positions example. Thanks to Ludovic Aubert for spotting the problem. - Unterminated C++/C comment diagnostics are now a warning and not an error anymore. - Applied patch provided by Jens Seidel making sure every header compiles on its own. - Updates to the documentation. - Fixed a problem in flex_string::compare() (#include_next was non-functional). - Fixed a bug in the pp hook expanding_function_like_macro(), where the seqend parameter was set to the first token after the closing parenthesis instead of pointing at it. - Added the BOOST_WAVE_SUPPORT_THREADING allowing to explicitely control whether the Wave library is built with threading support enabled. If not defined the build settings will be picked up from the Boost build environment (BOOST_HAS_THREADS). - Fixed a whitespace insertion glitch, where whitespace got inserted unconditionally between two operators even if one of these was a comma. - Fixed #line directive after a macro invocation containing newlines to correctly reference the line number. - Positions of macros defined on the command line now get properly reported as "<command line>":1:... - Added testing of the preprocessor hooks. Boost V1.34.0 - Wave Version 1.2.4 - Added the possibility to explicitly enable/disable the generation of #line directives. Added a corresponding commandline argument to the Wave driver tool (--line/-L) which takes either 0 or 1 as its parameter. - Added support for #pragma wave option(command: value) directives, which supports the following commands: . #pragma wave option(line: [0|1]) Enable/disable generation of #line directives . #pragma wave option(preserve: [0|1|2]) Control whitespace preservation . #pragma wave option(output: ["filename" | null | default]) Redirect output to the given file (or no output, if 'null' is specified, or output to the file as given on the command line, if 'default' is specified). The filename is resolved relative to the directory of the processed file. These new #pragma directives are implemented in the Wave driver tool only. It is possible to combine several options in one #pragma directive, i.e. #pragma wave option(line: 0, preserve: 2). - Changed the signature of the may_skip_whitespace() preprocessing hook to additionally take the preprocessing context as its first parameter. - Added the possibility to the Wave tool to disable initial output by specifying a '-' as the output file. This is useful for syntax checks only or in conjunction with the #pragma wave option(output: ...) to restrict the generated output. - Improved error reporting in the Wave tool on bad output file stream state. - Switched to Re2C V0.10.0 - Fixed some of the VC7.1 /W4 warnings. - The Wave tool now creates the directory hierarchy of output files as needed. - Applied some optimization, which causes skipping of the parsing for almost all preprocessor directives when the if block status is false. This gains upto 10-20% in speed for average applications. - Added error diagnostic for #elif without matching #if, which was missing under certain circumstances. - Avoided the evaluation of #elif expressions if one of the previous #if/#elif blocks of the same level was true. This gains upto another 5% of speed for average applications. - The MS specific integer suffix 'i64' is now correctly supported by the Re2C and SLex lexer components (only when BOOST_WAVE_SUPPORT_MS_EXTENSIONS is defined during compilation). - Changed the Wave tool to print the column number of an error/warning along with the line number. The new format is: 'filename:line:column: error text'. - It is now possible to recover from the unbalanced #if/#endif statement warning in a proper way. - The Wave library now automaticaly recognizes include guards in headers files and uses this information to avoid opening these header files more than once. This speeds up things by upto 10-20% depending on the concrete include files. - Fixed the failing test t_6_023.cpp. Error reporting for illformed #else directives was broken (because of some recent changes). - Fixed the failing test t_5_007.cpp. This was caused by the automatic include guard detection, which prevented the second inclusion of the specified include file the test was relying on. - Added the possibility to switch off the automatic include guard detection. - Added a new command line option to the Wave tool: --noguard/-G, which disables the automatic include guard detection. - Now a header with include guards will be included correctly for a second time after its include guard symbol gets undefined. - Added the generating platform to Wave's full version string. - Made the Wave tool fully interactive when started with input from stdin and and output to stdout. In this mode the Wave tool preprocesses the input line by line and not only after receiving the full input as normally. - Added serialization support for the wave::context object, which stores all information about defined macros and all #pragma once header information. - Added the command line option --state (-s) to the Wave tool, which tries to load the serialized information from the file given as the argument to --state and save the state information at the end to the same file. This option is available in interactive mode only. - Added possibility to verify the compatibility of the configuration used during compilation of the Wave library with the config info used for the application. Added a corresponding test to the Wave tool. - Added a new predefined macro __WAVE_CONFIG__ which expands to an integer literal containg the configuration information the library was compiled with. - Added proper versioning support to the serialization of state. - Fixed the macro tracing information to contain the column numbers of the macro definitions as well (the format used is the same as for error messages). - Fixed a memory leak in the flex_string copy on write code (thanks to Tobias Schwinger for reporting this bug). - Fixed a memory corruption bug in the Re2C scanner buffer management code (thanks to Andreas S�bj�rnsen for spotting the bug). - Fixed a major performance bottleneck in the lex_token class. This speeds up Wave for upto another 20-40% depending on the amount of macro expansions to perform. - Added the BOOST_SPIRIT_USE_BOOST_ALLOCATOR_FOR_TREES and the BOOST_SPIRIT_USE_LIST_FOR_TREES Spirit configration constants to the wave_config.hpp to allow to fine tune the generated Spirit tree code. VC7.1 gives best results when both are defined. - Fixed a memory corruption bug triggered by a possible dangling reference. - Fixed Wave tools startup crash when compiled with VC8. - Added the name of the generating compiler (BOOST_COMPILER) to the full Wave version info. - Fixed all Jamfile.v2 to correctly disable RTTI for VC7.1. - Added #pragma message("...") to be optionally supported by the Wave library. This may be enabled by defining the BOOST_WAVE_SUPPORT_PRAGMA_MESSAGE pp constant to some value different from zero. - Fixed a couple of typos in the file cpp.cpp preventing it to compile on gcc 4.1.0 (thanks to Richard Guenther for reporting these). - Richard Guenther fixed another buffer overrun problem in the Re2C scanner. - Fixed Jamfile.v2 files for all sample applications. - Fixed a bug which lead to reporting of an illegal preprocessing directive inside not-evaluated conditional blocks under certain circumstances (thanks to Tobias Schwinger for reporting). - Fixed '#define true ...', '#define false ...' and other constructs, i.e. the usage of the boolean keywords as identifiers during the preprocessing. Added a corresponding test case (t_9_017.cpp). Thanks to Andreas S�bj�rnsen for reporting. - Corrected the Jamfile[.v2] of the waveidl sample to refer to correct file names (thanks to Juergen Hunold for submitting a patch). - Fixed a bug which prevented the main iterator from returning a T_EOF token at the overall end of the input. - Fixed a problem, where non-evaluated #elif directives never got passed to the skippen_token() pp hook (thanks to Andreas S�bj�rnsen for reporting). - Fixed a problem in the get_tokenname() function. - Added a missing #define BOOST_WAVE_SOURCE 1 to the wave_config_constant.cpp file. - Fixed exception specifications to catch all exceptions by const&. - Fixed predefined macros to appear to be defined at a position referring to a file named "<built-in>". Thanks to Andreas S�bj�rnsen for reporting. - Fixed the Re2C lexer not to segfault on empty files anymore. - Stripped leading and trailing whitespace for all lines in a config file (Wave driver tool). - Fixed RTTI build issue for VC7.1/bjam --v2 (thanks to Rene Rivera for submitting a patch for the Wave Jamfile.v2). - Fixed certain problems reported by the Boost inspection tool. - Fixed a couple of SunPro5.8 warnings. - Fixed a bug resulting in a crash if a macro was redefined with a shorter expansion list as it was defined initially. Added a corresponding test case. - Fixed a bug causing an infinite loop when there was a missing endif in the main preprocessed file. - Improved error recovery for illegal preprocessing directive errors. - Improved error handling and error recovery for conditional expressions (#if/#elif expressions). - Wave now passes 160 out of 161 tests from the MCPP V2.6.1 validation testsuite! - Added new warning for invalid #line number and filename arguments - Improved error diagnostics for invalid #line directives containing arbitrary tokens after at the end of the line. - Improved error handling wrt the misuse of the __VA_ARG__ token in macro definitions. - The warning, that a file is not terminated by a newline is now issued for all files, not only for the main file (as previous). - Added a couple of new test cases to verify various diagnostics. - Fixed wave applet not to report missing #endif's when in interactive mode. - Cleaned up the Re2C lexer code. - Fixed a bug, where a empty line followed by an arbitrary token and followed by a preprocessing directive interpreted the preprcessing directive as if it were the first non-whitespace token on the line. This error occured only if the #line directive generation was suppressed. Thanks to Joan Grant for reporting this problem. - Fixed a problem in the Wave applet which prevented the correct recognition of Windows file paths in a configuration file, if this path was enclosed in quotes. - Extended the copyright notice to include the year 2007. - Fixed a problem in preserve=1 mode, when a C style comment triggered the generation of a #line directive. - Worked around a linker issue for the True64/CXX compiler, complaining about multiple defined symbols when using the flex_string class. - Added missing documentation for the context::get_macro_definition function. Sat Feb 18 2005 - Version 1.2.3 - Added a missing throw() specification to the function cpp_exception::get_related_name(). - Fix Boost bug ([ boost-Bugs-1395857 ] wave redefines BSIZE). - Added missing calls to the skipped_token() preprocessing hook which wasn't called for pp directives inside disabled #if blocks. - Made the context<> type noncopyable. - Introduced the --extended/-x command line option to the wave driver executable, which enables the #pragma wave system() directive. This directive is now disabled by default because it may cause a potential security threat. - Changed the what() function of the macro_handling_exception class, which now correctly returns the name of the exception type itself. - Added a diagnostic message to the wave driver executable, which is issued whenever a #pragma wave system() directive is found, but the -x (--extended) command line argument was not given. - Fixed long integer suffix to be allowed to be mixed case (1Ll or 2lL). - Fixed the BOOST_PP_CAT(1e, -1) pp-token bug. Wave now correctly recognizes pp-number tokens, which are converted to C++ tokens right before they are returned from the library. - Moved the implementation of the token_id query functions (get_token_name(), get_token_value()) to a separate source file. - Fixed a bug, which prevented to prefer pp-numbers in files preprocessed as a result of #include directives. - Fixed a bug, which prevented to open #include'd files specified by an absolute path. - Fixed a problem in the expression parser value type. - Fixed a dynaload compilation problem with VC7.1 of the re2c lexer tests. Sat Dec 24 13:33:53 CST 2005 - Version 1.2.2 - Added three new preprocessing hooks: 'found_directive', 'skipped_token' and 'evaluated_conditional_expression' (thanks to Andreas S�bj�rnsen for the suggestions). - Removed hook forwarding functions in the context_type. - Added missing include_next case branches for get_directivename() function. - Added new sample: advanced_hooks. - Fixed a possible buffer overflow in the cpplexer and cpp exception classes. - Made the cpp_grammar thread safe. - Removed the need for the get_directivename() function. Fixed typos in the predefined token table. - Removed assertions from get_token_name() and get_token_value() and replaced these with more meaningful code. - Added the BOOST_WAVE_USE_STRICT_LEXER configuration constant which allows to decide, whether the '$' character will be recognized as a part of identifiers or not (the default is BOOST_WAVE_USE_STRICT_LEXER == 0, i.e. '$' will be recognized as part of identifiers). - Added the possibility to testwave to extract a tagged comment based on a preprocessor constant (testwave V0.4.0). - Made the predefined_macros_grammar thread safe. - Added dll support for the generated Wave libraries. - Added the const_iterator based exlicit instantiations for the Re2C lexer to the built Wave library and dll. - Added the whitespace handling policy to the context object. This actually is no separate policy, it's a new preprocessing hook allowing to decide, whether a concrete token has to be skipped. - Changed the --preserve option of the wave tool to take a single integer argument (0: no whitespace is preserved, 1: only comments are preserved, 2: all whitespace is preserved) - Edited the command line option descriptions of the wave driver. - Fixed broken tags in documentation (magically inserted by DreamWeaver). - Merged the new whitespace_handling policy with the existing preprocessing hooks. The name of the new preprocessing hook ist may_skip_whitespace(). - Fixed compatibility issues for CW9.4 in the Wave test aplication. - Added get_errorcode() member to the wave exception classes allowing to get back the reason for the exception. - Added boost::wave::is_recoverable(cpp_exception const&) alowing to decide, whether it is possible to continue after a cpp_exception has been thrown. This is a temporary hack to overcome the current limitation of the library not allowing to do generic error recovery. It allows to recover from 75% of the generated errors types. - The --timer command line option for the Wave driver now prints the elapsed time correctly even if a preprcessing error occured. - Fixed an error recovery problem which skipped one token after continuing in case this was a pp directive. - Added the --autooutput (-E) option to the Wave driver applet which redirects the generated output to a file named after the input file changing the file extension to '.i'. - Changed all throw's to boost::throw_exception. - Added the possibility to configure the command keyword for the wave specific #pragma directives. It is now possible to define a string literal via BOOST_WAVE_PRAGMA_COMMAND, which will be recognized and all corresponding #pragma's are dispatched to the interpret_pragma() preprocessing hook. The default value for BOOST_WAVE_PRAGMA_COMMAND is "wave", just to ensure complete backward compatibility. - Added missing #pragma warning(pop) directives. - Fixed a bug wrt error propagation in the expression parser. - Fixed an assertion fired when cpp_token is used to process the quick_start sample. - Fixed a (Windows specific) bug which triggered a boost::file_system exception under certain conditions. - Switched to Re2C V0.9.11 - Fixed a problem with the new '-E' (--autooutput) option. - Added better error reporting for duplicate macro definitions to the Wave tool. Added the macro_handling_exception type containing the corresponding macro name via the new (virtual) get_related_name() function. - Added the get_severity() function to the exceptions thrown by the Wave library. - Extended the copyright notice to include the year 2006. Mon Dec 5 22:05:22 CST 2005 included;" statements accepted into Boost! With special thanks to Tom Brinkman, who volunteered to be the review manager. With thanks to David Abrahams, Beman Dewes, Reece Dunn, Larry Evans, Doug Gregor, Joel de Guzman, Baptiste Lepilleur, Andy Little, Paul Mensonides, Dan Nuffer, Andreas Pokorny, Vladimir Prus, Gennadiy Rozental, Michiel Salters, Jonathan Turkanis, Chris Uzdavinis, Pavel Vozenilek, Michael Walter for bug reports, fixes and hints. -) Mon May 24 10:02:47 WEDT 2004 Version 1.1.6 - Fixed a incompatibility with the new program_options version. Version 1.1.5 Version 1.0.6 - Fixed a bug, which reported an #include statement as ill formed, if it was followed by an empty C comment only. This was an error in the cpp.re regular expression for C comments. Additionally, since this change simplified the Re2C generated lexer a lot it was possible to remove the compiler workaround for the VC7.1 compiler which prevented the optimization of this lexer. Mon Mar 29 09:36:59 WEDT 2004 - Corrected the signature of the main() functions (was main(int, char const*[])). Sun Mar 28 12:55:59 WEDT 2004 Version 1.1.4 - Fixed a problem, where the first returned token was lost, whenever a --forceinclude file was given. - Adjusted the Wave driver and the other samples to use the new program_options library syntax (V1.1.x only). Mon Mar 1 19:14:21 WEST 2004 Version 1.1.2 Version 1.0.4 - Fixed a problem, which does not report an error, if in a #define statement in between a macro name and its replacement list were no whitespace given. - Fixed a bug, which generated an unexpected exception of the $ character in the input. - Macro definitions, which differ by whitespace only (one definition contains whitespace at a certain position, the other definition does not) are correctly reported as a warning now. - Fixed a problem, where different formal argument names during macro redefinition were not flagged as a warning. - A wide character string used in a #line directive wasn't flagged as an error. Sun Feb 29 19:10:14 WEST 2004 Used the test suite distributed with the mcpp V2.4 preprocessor to fix a bunch of mostly minor issues: - Fixed trigraph backslash followed by a newline handling (??/ \n) in the re2c (C/C++ and IDL) scanners. - Fixed a digraph/trigraph token type handling problem during macro expansion. - Fixed a digraph/trigraph token type problem during handling of the null preprocessor directive. - Fixed several signed/unsigned conversion bugs in the expression evaluator. - Fixed the || and && operators in the expression evaluator to stop evaluation, as only the outcome of the overall expression is determined. - Fixed the expression evaluation engine to detect divide by zero errors. - Fixed a bug with operator || and && arithmetic (the deduced type was wrong). - Fixed a bug with the unary operators ! and - which IN conjunction with an arithmetic operation yielded A wrong result type. - Fixed a bug, which reported a macro definition as an invalid redefinition, if it was different from the original definition only by different whitespaces. - Fixed a bug, which reported the redefinition of one of the alternative tokens as 'and', 'bit_and' etc. as invalid. - Fixed a bug in the character literal parser, which prevented the recognition of multibyte character literals. - Moved the cpp_token_ids.hpp header into the main wave.hpp header, because the values defined therein aren't changeable by the user anyway. - Fixed some spelling errors in the documentation (thanks to Rob Stewart). Tue Feb 3 20:20:16 WEST 2004 - Fixed the problem, that macro definitions in a config file were flagged as an error, if there was any whitespace in between the -D and the macro name (same problem existed for -P). Fri Jan 30 20:28:27 WEST 2004 - Fixed a missing boostification in the trace support header. - Added a missing std:: namespace qualification to the list_includes.cpp sample file. - Fixed line ending problems with the cpp.re and idl.re files. - Added quick_start sample. Sun Jan 25 20:26:45 WEST 2004 This version was submitted to Boost as the review candidate (V1.1.0) - Fixed invalid explicit instantiation syntax as reported by the Comeau compiler. - Added a missing header to flex_string.hpp. Sat Jan 24 19:47:44 WEST 2004 - Completely decoupled the used lexer from the preprocessor. - Unfortunately had to change the template interface of the context class. It now instead of the token type takes the type of the lexer to use. - Reintroduced the cpp_tokens, list_includes and waveidl samples. . cpp_tokens is based on the SLex lexer . list_includes shows the usage of the include file tracing capability . waveidl uses the Re2C based IDL lexer in conjunction with the default token type Tue Jan 13 20:43:04 WEST 2004 - Fixed several compilation issues under linux (gcc 3.2.3, gcc 3.3, gcc 3.3.2, gcc 3.4, Intel V7.1) - Fixed a compatibility problem with Spirit versions older than V1.7. Mon Jan 12 20:39:50 WEST 2004 - Boostified the code base: . Moved code into namespace boost. . Prefixed all pp constants with "BOOST_". . Refactured the directory structure. - Removed IDL mode and SLex lexer from the code base. These will be re-added as samples. - Changed the Wave configuration system to be more flexible (all #if defined(BOOST_WAVE_...) changed to #if BOOST_WAVE_... != 0), which allows to configure the library without changing the code base itself Sat Jan 10 18:17:50 WEST 2004 - Incorporated Andrei Alexandrescu's latest changes to the flex_string class, which resulted in an overall spedd gain of about 5-10%. Wed Jan 7 17:46:45 WEST 2004 - Found a major performance hole! The achieved general speedup is about 50-70%. - Added missing old MS specific extensions to the re2c lexer (_based, _declspec, _cdecl, _fastcall, _stdcall, _inline and _asm). - Added support for #include_next (as implemented by gcc). - Fixed compilation problems with gcc 3.3.1 - Avoid to look up in symbol table of a potential macro name twice. - Added the Spirit SLex lexer sample to the Wave source tree, because it was removed from the Spirit distribution. - Removed the configuration option, which allowed to reverse the names stored in the symbol tables. - Implemented experimental support for using a TST (ternary search tree) as the container for the symbol tables. Sun Jan 5 12:30:50 2004 - Released V1.0.0 Sun Jan 4 00:11:50 2004 - Removed tabs from the flex_string.hpp file. - Modified the input_functor.hpp file to sqeeze out some milliseconds at runtime. - The --timer option now prints the overall elapsed time even if an error occured. - Added support for #pragma once. Fri Jan 2 22:58:54 2004 - Fixed a bug in the code, which predefines the preprocessor constants. - Fixed a bug in intlit_grammar<> initialisation code. Thu Jan 1 21:15:03 2004 - Fixed a bug while predefining a macro with a value through the commmand line. - Fixed a bug, which reported a macro definition as illegal, if the redefined macro was a function like macro with parameters. - Fixed a bug, if concatenation of two tokens resulted in a C++ comment start token. Thu Jan 1 15:01:54 2004 - Finished license migration. Wed Dec 31 12:23:55 2003 - Changed the copyright and licensing policiy to be Boost compatible. Wed Dec 31 12:01:14 2003 - Fixed a problem while compiling certain headers from the Microsoft Windows SDK: ) where essentially is no whitespace between the parameter list and the macro replacement list. - Fixed a problem with the MS extension __declspec, which now is recognized correctly. Sat Dec 27 14:48:29 2003 - Fixed remaining problems with assign/assign_a. - Fixed some gcc warnings about signed/unsigned comparision mismatch. Tue Nov 11 20:51:41 WEST 2003 - Changed the IDL mode to recognize identifiers only. All keywords (except 'true' and 'false') are returned as identifiers. This allows for easy extension of the IDL language. The drawback is, that after preprocessing there needs to be just another lexing stage, which recognizes the keywords. - Fixed a possible problem, when in between a #if/#elif directive and a subsequent opening parenthesis Wave finds no whitespace: #if(_WIN_VER >= 0x0500) is now recognized correctly. (This problem was pointed out by Porter Schermerhorn). Sun Nov 9 21:05:23 WEST 2003 - Started to work on implementation of an IDL lexer for the TAO idl compiler. . Branched off the Re2C C++ lexer and related files as a starting point for the new IDL lexer. Added connfiguration means to allow compile time decision, in which mode to operatoe (C++ or IDL). . Implemented the Re2C based IDL lexing component. . Fixed all occurences of non-IDL tokens (as T_COLON_COLON and T_ELLIPSIS) Sat Nov 8 20:05:52 WEST 2003 - Version 1.0.0 - Munged the email addresses embedded within the source files. - Adjusted for the new actor names in Spirit (assign_a and append_a). Thu Aug 21 16:54:20 2003 - Removed the internally used macro 'countof()' to avoid possible nameclashes with user code. - Fixed a bug, which prevented the execution of the concatination operator '##' while expanding object-like macros. Tue Aug 5 10:04:00 2003 - Fixed a false assertion, if a #pragma directive started with some whitespace on the line. - Added the #pragma wave timer() directive to allow rough timings during processing. This is done on top of a new callback hook for unrecognized #pragma's, which allows to easily add new pragma commands without changing the Wave library. - Fixed a bug in the whitespace insertion engine, which prevented the insertion of a whitespace token in between two consecutive identifier tokens or a integer literal token followed by an identifier token. - Fixed a bug during macro concatenation, which allowed to concatenate unrelated tokens from the input stream: #define CAT(a, b) PRIMITIVE_CAT(a, b) #define PRIMITIVE_CAT(a, b) a ## b #define X() B #define ABC 1 CAT(A, X() C) // AB C CAT(A, X()C) // correct: AB C, was 1 - Fixed a 64 bit portability problem. - Added pragma wave timer(suspend) and wave timer(resume) - Fixed a ODR problem with static initialization data for predefined macros. - Ported the iterators to the new iterator_adaptors. - Updated the documentation to reflect the recent changes Sun Jun 29 12:35:00 2003 - Fixed 64 bit compatibility warnings. - Fixed a bug, which prevented the correct recognition of a #line directive, if only the filename part of this directive was generated by a macro expansion. - Fixed a bug during macro expansion of conditional expressions, which prevented the correct expansion of certain scoped macros. Fri Jun 27 09:50:14 2003 - Changed the output of the overall elapsed time (option --timer) to cerr. - Added a configuration constant WAVE_REVERSE_MACRONAMES_FOR_SYMBOLTABLE, which reverses the macro names while storing them into the symbol table, which allows to speed up name lookup especially, if the macro names are very long and if these share a common prefix. - Fixed a very subtle bug, which prevented the recognition of fully qualified macro names during the macro expansion of conditionals expressions (for #if/#elif). - Improved the error output for the illformed pp expression error. Thu Jun 26 08:20:30 2003 - Done a complete spell check of the source code comments. Wed Jun 25 20:33:52 2003 - Changed the conditional expression engine to work with integer numeric literals only. Distinguished signed and unsigned literals. - Importing a region twice is allowed now. - Fixed a bug, which does not removed all placeholder tokens from a expanded token sequence while evaluating conditional expressions (C++0x mode only). Wed Jun 25 15:01:51 2003 - Changed the conditional expression engine to respect the type of numeric literals, now expressions like '#if 1 / 10 == 0' evaluate correctly (to true :-) - Fixed a bug, where macro names referring to global macros (as ::A::B) were not correctly recognized under certain circumstances. - Empty parameter lists for macros with ellipses only sometimes generated a placemarker token in the output: #define STR(...) #__VA_ARGS__ STR() // resulted in "�" instead of "" . Wed Jun 25 08:35:06 2003 - Fixed several gcc compilation errors (missing typename's etc.) - Fixed a compilation problem, if Wave is built on top of the SLEX scanner. - Reformatted the --timer output from pure seconds to a more reasonable format. Fri Jun 20 19:33:30 2003 - Changed the enable_tracing function of the tracing_policies to take a trace_flags variable instead of a bool, to allow to control tracing with more granulation. - Added the tracing_enabled function to the tracing_policies, which returns the current tracing status. - Updated the documentation of the tracing policies. Thu Jun 19 21:45:39 2003 - Reactivated the list_includes sample with the help of the new include file tracing facility. Thu Jun 19 17:55:35 2003 - Eliminated the TraceT template parameter from the macromap<> template. - Added two hooks to the trace policy to allow to trace the opening and closing of include files. Thu Jun 19 14:08:10 2003 - Added the command line option --timer, which enables the output to std::cout of the overall elapsed time during the preprocessing of the given file. Fri Jun 13 09:11:29 2003 - Emitted an error message, if an ellipses was found as a formal macro parameter and variadics were disabled. - Fixed a false error message, that the last line was not terminated with a newline, which occured, if no output was generated by the last line of the source file. Thu Jun 12 15:20:22 2003 - Fixed the recent change in argument expansion for the variadics/C99/C++0x mode. - Fixed a problem, where an additional whitespace between _Pragma and the opening parenthesis resulted in a false error message. - Used a pool allocator for the token sequence containers (std::list<>'s), which gives a speed gain of more than 60% (while profiling the Order library). Wed Jun 11 22:18:54 2003 - Fixed a macro scoping/expansion problem, when a macro returned a full scope which is continued on the call site to form a full qualified name, the name wasn't recognized correctly: # region A # define MACRO 1 # region B # define MACRO 2 # endregion # endregion # define ID(x) x ID(A)::MACRO // 1 ID(A::B)::MACRO // 2, was expanded to A::B::MACRO - Changed the expansion of macro arguments such, that these will be expanded only, if the result is to be used for substitution during the expansion of the replacement list. Wed Jun 11 14:40:29 2003 - Included a whitespace eating finite state machine (FSM) for minimal whitespace in the generated output. This was suggested by Paul Mensonides. - Updated the acknowledgement section Wed Jun 4 08:03:04 2003 - Fixed a bug reported by Faisal Vali, which prevented the correct evaluation of conditional expressions, if these referenced macro names, which expanded to a sequence containing non-expandable tokens. - Fixed the above bug for #elif directives too (in the first place this was fixed for #if directives only) Mon May 26 22:15:40 2003 - Added missing copyrights in several files. - Fixed false output, if a unknown _Pragma were encountered. - Fixed a macro expansion problem with qualified names, were constructs like the following were not expanded correctly: #define ID(x) x #region SCOPE # define TEST 1 #endregion ID(SCOPE::) TEST // should expand to 1 - Changed #import semantics for macros from copy semantics to reference semantics, i.e. macros are now considered to be implicitly imported into the scope, where they are defined. If a macro is imported into another scope and the original macro is undefined, the imported macro still exists. Further, if the imported macro is expanded, then while rescanning the original macro is disabled too: #region A # define B(x) x #endregion #import A B (A::B) (*) // A::B(*) A::B (B) (*) // B(*) B (B) (*) // B(*) A::B (A::B) (*) // A::B(*) - Fixed a recently introduced problem, where placemarker tokens slipped through to the output under certain conditions (in variadics/C99/C++0x modes only). Mon May 19 16:30:49 2003 - Fixed a bug, which prevented the recognition of the __lparen__, __rparen__ or __comma__ alternative tokens, if these were the first token after an emitted #line directive (reported by Vesa Karvonen). - Added an optimization, that only those tokens are considered for a macro expansion, which may result in an expansion. Tue May 13 18:16:26 2003 - Fixed a newly introduced problem, where a omitted argument consisting out of whitespace only were failed to be replaced by a placemarker token. This lead to problems with constructs like the following: #define paste(a, b, c) a ## b ## c paste(1, , 3) // should expand to 13, but expanded to 1## 3 - Fixed a problem with the tracing support, which throwed an unexpected exception if there were too few arguments given while expanding a macro. - Allowed to open and to import the global scope ('#region ::' and '#import ::'). - Fixed a bug, if more than one file was given with a --forceinclude command line option. Sat May 10 21:30:29 2003 - Added __STDC_FULL_REGION__ and __STDC_CURRENT_REGION__ to the list of not undefinable macros. - In normal C++ mode and C99 mode the #ifdef/#ifndef and the operator defined() should not support qualified names. This is fixed now. - Updated the documentation. - Fixed minor gcc -Wall compilation warnings. - Added better error support for qualified names used as arguments for #ifdef, #ifndef and operator defined(). Sat May 10 09:51:18 2003 - Removed the feature, that the comma before the ellipsis parameter in a macro definition may be omitted. - Resolved an issue with the expansion of qualified macros, when these qualified names were partially generated by a previous macro expansion - Allowed to specify fully qualified names as arguments to the #region directive Wed May 7 22:44:21 2003 - Changed the names of __SCOPE__ and __FULL_SCOPE__ predefined macros to __STDC_CURRENT_REGION__ and __STDC_FULL_REGION__ resp. The names are subject to change if the #region keyword actually will be renamed to #scope/#module or whatever. - In C++0x mode it is now possible to omit the last comma before a variadics ellipsis in a macro definition: #define cat_i(a, b, c, d, e ...) a ## b ## c ## d ## e - Fixed a bug in the stringize code, where an ellipsis to stringize resulted in stringizing of the first ellipsis parameter only. Preserved the original whitespace delimiting in between the ellipsis arguments. - Introduced the wave::language_support enum for convenient switching of the supported language features throughout the library. - Fixed a bug, which prevented the definition of the predefined macro __WAVE_HAS_VARRIADICS__, if --variadics were given on the command line. Tue May 6 15:49:45 2003 - Made predefined macros available at every macro scope without qualification. - Predefined a new macro in C++0x mode: __STDC_GLOBAL__, which is defined at global macro scope only and equals to '1' (integer literal). - In C++0x mode there are two new predefined macros: __SCOPE__: expands to the last part of the qualified name of the current macro scope __FULL_SCOPE__: expands to the full qualified name of the current macro scope Mon May 5 23:02:48 2003 - Fixed a problem in the new well defined token pasting code, which occured for constructs like the following: #define is_empty(...) is_empty_ ## __VA_ARGS__ ## _other i.e. where two or more '##' operators were contained in the replacement text. - Implemented __comma__, __lparen__ and __rparen__ alternative pp-tokens, which may be used as the ',', '(' and ')' tokens during preprocessing. These are only converted to there respective string representation in a special translation phase after preprocessing. This was proposed by Vesa Karvonen. - Changed the macro scoping rules to: "If a qualified name does not find a nested name, it is not a qualified name to the preprocessor." This seems to be the simplest usable solution for the possible ambiguities. - Fixed a bug in the macro expansion engine in C++0x mode, where the skipping of whitespace inside of a qualified name wasn't consistent. Sun May 4 10:48:53 2003 - Fixed a bug in the expression grammar, which prevented 'not' to be recognized as a valid operator. - Qualified names are now supported as parameters to #ifdef and #ifndef too. - Remove one specialization of the macro expansion engine. It gets instantiated only twice now (for the main input iterator and for list<>'s of tokens. - Simplified the required explicit specialization of the defined_grammar template. It has to be explicitely instantiated by providing the token type only (just as for the explicit instantiations of the other grammars). Fri May 2 22:44:27 2003 - Qualified names are now allowed as parameters to the operator defined() in C++0x mode. - Separated the defined() functionality into a separate translation unit to work around a VC7.1 ICE. Fri May 2 15:38:26 2003 - The C++0x mode now has a special set of predefined macros. - The predefined macro __WAVE_HAS_VARIADICS__ is now defined in C99 and C++0x modes too (--variadics is implied for these modes). - Updated the documentation to reflect the recent changes and additions. - In C++0x mode Wave now supports macro scopes: - new keywords #region/#endregion/#import - qualified macro names - In C++0x mode Wave now supports token pasting of unrelated tokens. These are concatenated, the result is re-tokenized and inserted into the output stream. - Fixed a minor bug in the macro expansion engine, if a qualified function-like macro was found in an object-like context. - Fixed an issue with well defined token pasting of unrelated tokens. Tue Apr 29 08:47:37 2003 - Fixed a bug in the macro expansion engine, which prevented the expansion of a certain macro under specific conditions (if the left of two tokens to concatenate were a disabled one (T_NONREPLACABLE_IDENTIFIER), then the resulting token was disabled too). - Added additional diagnostics to the Wave driver to disambiguate the C99 and C++0x modes. - Implemented a new API function and a corresponding Wave driver command line option, which allows to specify one or more include files to be preprocessed before the regular file is preprocessed (the files are processed as normal input and all the resulting output is included, before processing the regular input file). The Wave driver command line option is --forceinclude (-F). - Wave now compiles the Order library from Vesa Karvonen. Mon Apr 28 07:57:10 2003 - Fixed a bug in the macro expansion engine. - Removed a lot of (not needed) whitespace in the generated output (but still not optimal). Sat Apr 26 20:30:53 2003 - Fixed a bug in the initialization code of the Slex lexer while working in C99 mode (reported by Reece Dunn). Fri Apr 18 08:37:35 2003 - Fixed the handling of option_value's inside of pragma directives: _Pragma("wave option(option_value)") inside which all all whitespaces were deleted. - Started to implement experimental macro scoping. Thu Apr 10 10:20:07 2003 - Fixed a problem with the #pragma wave stop(), where only the first token inside the stop directive was output, when the preprocessor stops in result of this pragma. - Implemented a new #pragma wave system(command), which spawns a new operation system command exactly as specified inside the system directive, intercepts the stdout output of this process, retokenizes this output and inserts the generated token sequence in place of the original #pragma or operator _Pragma. Please note that the generated output is _not_ subject to any macro expansion before its insertion as the replacement of the pragma itself. If you need to macro expand the replacement text, you always may force this by writing: #define SCAN(x) x SCAN(_Pragma("wave system(...)")) which re-scans the replacement once. - Replaced the Wave position_iterator with the boost::spirit::position_iterator (without any problems!). Mon Apr 7 10:45:30 2003 - Fixed macro_trace_policies::expand_object_like_macro not to be called with the formal arguments as one of its parameters. - Updated the documentation to reflect the changes needed for the tracing stuff. Mon Mar 31 19:07:05 2003 - Fixed variadics support in the trace output. - Fixed preprocessing of operator _Pragma() before it's execution. - Added _Pragma("wave stop(errmsg)") (#pragma wave stop(errmsg)) to allow diagnostics output from inside macro expansion. - Fixed operator _Pragma for unknown pragmas (these are simply put through to the output). - Implemented a maximal possible include nesting depth to avoid an out of memory error. The initial value for this is configurable through the compile time constant WAVE_MAX_INCLUDE_LEVEL_DEPTH, which defaults to 1024, if not given. Additionally this may be enlarged through a new command line option: -n/--nesting (Wave driver only). Sun Mar 30 20:40:17 2003 - Implemented the predefined macro __INCLUDE_LEVEL__, which expands to a decimal integer constant that represents the depth of nesting in include files. The value of this macro is incremented on every '#include' directive and decremented at every end of file. - Implemented the operator _Pragma(). It is recognized in C99 mode and whenever variadics are enabled. Sun Mar 30 08:30:12 2003 - Changed the tracing format to be more readable. - Changed the tracing #pragma's to enable tracing: #pragma wave trace(enable) disable tracing: #pragma wave trace(disable) or enable tracing: #pragma wave trace(1) disable tracing: #pragma wave trace(0) - Changed the semantics of the -t (--traceto) switch. Without any -t switch there isn't generated any trace output at all, even, if the corresponding #pragma directives are found. To output the trace info to a file, the '-t file' syntax may be used, to output to std::cerr, the '-t-' (or '-t -') syntax may be used. Fri Mar 28 17:27:25 2003 - Added a new template parameter to the wave::context<> object, which allows to specify a policy for controlling the macro expansion tracing. The default macro_trace_policy does no tracing at all. This way one can add specific macro expansion tracing facilities to the library. - #pragma directives starting with a STDC identifier are no longer not macro expanded in C++ mode, in C++ mode these are now expanded as usual, in C99 mode not. - The tracing can be enabled/disabled from inside the preprocessed stream by inserting a special #pragma directive: enable tracing: #pragma wave_option(trace: enable) disable tracing: #pragma wave_option(trace: disable) - The Wave driver now allows to specify a destination for the macro expansion tracing trough a new command line switch: '-t path' or '--traceto path'. If this option isn't given, the trace output goes to stderr. - The Wave driver now allows to specify the name of the file, where the preprocessed result stream is to be saved: '-o path' or '--output path'. If this option is not given, the output goes to stdout. Wed Mar 26 20:39:11 2003 - Fixed a problem with alternative tokens (as 'and', 'or' etc.) and trigraph tokens, which were not correctly recognized inside #if/#elif expressions. - Alternative tokens ('and', 'or' etc.) are no longer subject to a possible macro redefinition. - Fixed the special handling of 'true' and 'false' during the macro expansion of #if/#elif expressions. Tue Mar 25 12:12:35 2003 - Released Wave V0.9.1 Mon Mar 24 13:34:27 2003 - Implemented placemarkers, i.e. Wave now supports empty arguments during macro invocations. This must be enabled by means of a new pp constant: WAVE_SUPPORT_VARIADICS_PLACEMARKERS which must be defined to enable the placemarker and variadics code and by defining the command line option '--variadics' (Wave driver only). - Implemented variadics, i.e. Wave now supports macros with variable parameter counts. This must be enabled by means of the pp constant: WAVE_SUPPORT_VARIADICS_PLACEMARKERS which must be defined to enable the placemarker and variadics code and by defining the command line option '--variadics' (Wave driver only). - Implemented a C99 mode. This mode enables variadics and placemarkers by default and rejects some specific C++ tokens (as the alternate keywords and '::', '->*', '.*'). This mode must be enabled by the means of the pp constant WAVE_SUPPORT_VARIADICS_PLACEMARKERS (see above). The C99 mode is enabled by the command line switch '--c99' (Wave driver only). This involved some changes in the C99/C++ lexers. Fri Mar 21 16:02:10 2003 - Fixed a bug in the macro expansion engine, which prevented the expansion of macros, which name was concatenated out of a identifier and a integer followed directly by another identifier: #define X() X_ ## 0R() // note: _zero_ followed by 'R' #define X_0R() ... X() // expanded to: X_0R(), but should expand to ... This is a problem resulting from the fact, that the Standard requires the preprocessor to act on so called pp-tokens, but Wave acts on C++ tokens. Thu Mar 20 21:39:21 2003 - Fixed a problem with expression parsing (#if/#elif constant expressions), which failed to produce an error message for expressions like #if 1 2 3 4 5 i.e. where the token sequence starts with a valid constant expression, but the remainder of the line contained other tokens than whitespace. - Integrated the flex_string class from Andrei Alexandrescu (published on the CUJ site) to get COW-string behaviour for the token values and position filename strings. This resulted in a major overall speedup (about 2-3 times faster in dependency of the complexity of pp usage in the input stream). - Fixed a bug, which reported ill formed #if/#else expressions as errors, even if the current if block status (conditional compilation status) is false. - Added a warning, if the last line of a file does not end with a newline. - Improved error recognition and handling for malformed preprocessor directives Mon Mar 17 19:53:29 2003 - Fixed a concatenation problem: constructs like a##b##c where expanded incorrectly. - Optimized the recognition of pp directives: - the parser is used only, if the next non-whitespace token starts a pp directive - null directives now are recognized without calling the parser - the parser isn't called anymore, if the if_block_status is false and no conditional pp directive (#if etc.) is to be recognized. These optimizations give a speed improvement by upto 40%. - Removed adjacent whitespace during macro expansion (needs to be revised, since there is some whitespace left, which may be removed) Sun Mar 16 23:19:11 2003 - Fixed a problem with include paths given on the command line, if the file to preprocess was not given as a full path (driver executable). - Fixed a problem with path names containing blanks (driver executable). - Cleaned command line and argument handling (driver executable). - Fixed a severe memory leak. - Fixed a bug, if a C++ keyword was used as a macro name or macro parameter name, which prevented the macro recognition and expansion to function properly. - Implemented the WAVE_SUPPORT_MS_EXTENSIONS compiler switch for the re2c generated lexer too. - Fixed a problem, which caused an internal T_PLACEHOLDER token to show up outside the macro replacement engine. - Fixed a problem with macro #include directives, which prevents to find the file to include, if after the macro expansion the token sequence representing the filename began or ended with at least one whitespace token. - Fixed a problem, which caused a false error message if the '#' character was to be concatenated with an arbitrary other token. - The concatenation of a whitespace token with an arbitrary other token was reported as illegal token pasting (but it is certainly not). Sat Mar 15 21:43:56 2003 - Added a default constructor to the wave::util::file_position template. - Report the concatenation of unrelated tokens as an error. - Finished the documentation. Fri Mar 14 20:14:18 2003 - More work on documentation - Changed file_position to expose accessor functions (the member variables are marked as private now). This opens up the possibility to provide another file_position implementation, which may be optimized in some way. - Fixed a problem with the token name table, the alternate and trigraph token names were printed incorrectly. - Fixed a bug, which prevented the correct recognition of 'defined X' (without parenthesises). - Fixed a bug, which allowed to redefine and undefine the predefined name 'defined'. - Fixed a bug, which prevents the correct recognition of a macro based #include directive, if it expands to something like #include <...>. - Fixed a bug, which prevented the recognition of duplicate macro parameter names. - Removed the insertion of additional whitespace inside of string literals (during stringizing). Wed Mar 12 19:16:40 2003 - Fixed a bug, which prevented the instantiation of the wave::context object with auxiliary iterators. The token type isn't coupled anymore with the iterator type. This required some changes in the interface: - The wave::context object now has three template parameters (the iterator type, the token type and the input policy type) - The token type does not have the iterator type as it's template parameter anymore. - Implemented a new position_iterator template on top of the iterator_adaptor<> template to make it work even for input_iterator type iterators. - Fixed a bug in the regular expressions for the Slex lexer. - The function 'set_sys_include_delimiter()' was renamed to 'set_sysinclude_delimiter()' to better fit the naming scheme of the other functions. - Wrote more documentation - Unified the different token definitions of the lexers, so that there is only one token type left. This required some changes in the interface: - There is no need anymore to explicitly specify the namespace of the token type to use. - Added the command line option -P to the Wave driver program, which predefines a macro (i.e. defines it such, that is _not_ undefinable through an #undef directive from inside the preprocessed program). Sat Mar 8 07:46:43 2003 - Released Wave 0.9.0 Thu Mar 6 20:02:44 2003 - Compiled Wave with IntelV7.0/DinkumwareSTL (from VC6sp5) - Fixed new compilation problems with gcc -Wall - Fixed the list_includes and cpp_tokens samples to compile and link correctly. - Fixed a bug, where a wrong filename was reported by the generated #line directive. - Fixed a bug, where the __FILE__ macro was expanded without '\"' around the filename. - The generated #line directives and the expanded __FILE__ macro now report the filename in a native (to the system) format. Additionally the generated string literals are now escaped correctly. Wed Mar 5 21:11:14 2003 - Reorganized the directory structure to mirror the namespace structure of the library - Fixed a bug, where the complete input after the first found #include directive were eaten up. - Fixed a bug, where the __LINE__ macro expanded to a incorrect linenumber, if the __LINE__ macro was encountered on a line after a '\\' '\n' sequence. Tue Mar 4 11:50:24 2003 - The new name of the project is 'Wave'. - Adjusted namespaces, comments etc. to reflect the new name. - Added the command line option -U [--undefine], which allows to remove one of the predefined macros (except __LINE__, __FILE__, __DATE__, __TIME__, __STDC__ and __cplusplus) Sun Mar 2 20:10:04 2003 - Fixed a bug while expanding macros without any definition part (empty macros) - The pp-iterator will not emit a newline for every recognized preprocessing directive anymore. The generated output is much more condensed this way. - The pp-iterator now emits #line directives at appropriate places. - Added an additional parser to the library, which may be used to parse macros given in the command line syntax, i.e. something like 'MACRO(x)=definition'. - Added the possibility to the cpp driver sample, to add macros from the command line through the -D command line switch. - Martin Wille contributed a test script to allow automatic testing of the cpp driver sample by feeding all files contained in the test_files directory through the cpp driver and comparing the generated output with the corresponding expectations. - Added config file support to allow for predefined option sets (for instance for the emulation of other compilers) - Changed the way, how include paths are defined. It resembles now the behaviour of gcc. Any directories specified with '-I' options before an eventually given '-I-' option are searched only for the case of '#include "file"', they are not searched for '#include <file>' directives. If additional directories are specified with '-I' options after a '-I-' option was given, these directories are searched for all '#include' directives. In addition, the '-I-' option inhibits the use of the current directory as the first search directory for '#include "file"'. Therefore, the current directory is searched only if it is requested explicitly with '-I.'. Specifying both '-I-' and '-I.' allows to control precisely which directories are searched before the current one and which are searched after. - Added config file support to the cpp driver. - stored not only the current 'name' of a file (given eventually by a #line directive) but in parallel the actual full file system name of this file too. Tue Feb 25 21:44:19 2003 - Fixed the warnings emitted by gcc -Wall. - Fixed a bug in the cpp grammar, which causes to failing the recognition of certain preprocessor directives if at the end of this directive were placed a C++ comment. - Simplified and extended the insertion of whitespace tokens at places, where otherwise two adjacent tokens would form a new different token, if retokenized. Mon Feb 24 19:13:46 2003 - defined() functionality was broken - added missing typename keywords - added missing using namespace statements, where appropriate - added a warning, when a predefined macro is to be undefined (by an #undef directive) - removed the 'compile in C mode' hack for the re2c generated lexer (VC7.1 (final beta) is not able to compile it with optimizations switched on anyway :( ) - compiled with gcc 3.2 and Intel V7.0 (20030129Z) Sun Feb 23 23:39:33 2003 - Fixed a couple of 'missing typename' bugs (thanks to Martin Wille) - Added code to insert whitespace at places, where otherwise two adjacent tokens would form a new different token, if retokenized. - Fixed a severe macro expansion bug. - Added the handling of invalid or not allowed universal character values inside of string literals and character literals. Sat Feb 22 20:52:06 2003 - Bumped version to 0.9.0 - Added test for invalid or not allowed universal character values (see C++ Standard 2.2.2 [lex.charset] and Annex E) - Fixed a bug with newlines between a macro name and the opening parenthesis during the macro expansion and a bug with newlines inside the parameter list during the macro expansion. - Added the following predefined macros: __SPIRIT_PP__ expands to the version number of the pp-iterator lib (i.e. 0x0090 for V0.9.0) __SPIRIT_PP_VERSION__ expands to the full version number of the pp-iterator lib (i.e. 0x00900436 for V0.9.0.436) __SPIRIT_PP_VERSION_STR__ expands to the full version string of the pp-iterator lib (i.e. "0.9.0.436") Fri Feb 21 22:09:04 2003 (feature complete!) - Allowed to optionally compile the Re2c generated lexer in 'C' mode, because at least the VC7.1 (final beta) compiler has problems to compile it in 'C++' mode with optimizations switch on - Implemented #error and #warning (optional) directives (C++ standard 16.5). Additionally there are now allowed the following preprocessor configuration constants: CPP_PREPROCESS_ERROR_MESSAGE_BODY if defined, preprocesses the message body of #error and #warning directives to allow for better diagnostics. CPP_SUPPORT_WARNING_DIRECTIVE if defined, then the #warning directive will be recognized such, that a warning with the given message will be issued - Adjusted the error handling for the Re2c generated C++ lexer, so that any error inside the lexer is now propagated as an cpplexer_exception. - Implemented the #line directive (C++ standard 16.4) - Implemented #pragma directive (C++ standard 16.6) Additionally there are now allowed the following preprocessor configuration constants: CPP_RETURN_PRAGMA_DIRECTIVES if defined, then the whole pragma directive is returned as a token sequence to the caller, if not defined the whole pragma directive is skipped CPP_PREPROCESS_PRAGMA_BODY if defined, then the #pragma body will be preprocessed - Implemented #include directive with macro arguments (C++ standard 16.2.4) - Made the namespace structure finer granulated to leave only the main interface classes in the main namespace cpp. All other classes are moved into sub-namespaces to reflect the logical dependencies - Reorganized the public interface of the context<> template class, made all non relevant functions into the protected. - Implemented predefined macros (__LINE__ et.al.) (C++ standard 16.8) - Further documentation work Wed Feb 19 23:44:47 2003 - Corrected a lot of bugs in the macro expansion engine, which now should be conformant to the C++ standard. - # (null) directive (C++ standard 16.7) Sun Feb 16 08:40:38 2003 - Added a macro expansion engine which expands macros with arguments C++ standard 16.3 [cpp.replace] - Added a new sample: cpp_tokens. This sample preprocesses a given file and prints out the string representations of all tokens returned from the pp iterator - Added documentation (to be continued!) - Added a couple of small test files to test elementary functionality (the tests mainly were contributed by Paul Mensonides) - The main cpp sample is now a simple preprocessor driver program, which outputs the string representation of the preprocessed input stream. Use cpp --help to get a hint, how to use it. - Fixed a bug in the preprocessor grammar which failed to recognize a pp statement, if there was a C++ comment at the end of the line - Added '#' operator (C++ standard 16.3.2) [cpp.stringize] - Fixed a bug in the slex based C++ lexer to handle the concatenation characters correctly ('\\' followed by a '\n') Sun Feb 9 23:01:00 2003 - Improved error handling for #if et.al. - Fixed a pair of lexer errors - Implemented the #if/#elif statements, the sample now contains a complete C++ expression evaluation engine (for the calculation of the outcome of the #if/#elif statement conditions) - Implemented macro replacement (with parameters) - Implemented the '##' [cpp.concat] operator - Implemented the defined() [cpp.cond] operator Sun Feb 2 23:28:24 2003 - Implemented the #define, #undef, #ifdef, #ifndef, #else and #endif statements - Added optional parse tree output as xml stream (controlled through the config pp constant CPP_DUMP_PARSE_TREE) Fri Jan 31 21:30:55 2003 - Fixed different minor issues and a border case (#include statement at the last line of a included file) Wed Jan 29 21:13:32 2003 - Fixed exception handling to report the correct error position - Fixed another bug in the stream position calculation scheme - Added a more elaborate sample 'list_includes' which lists the dependency information for a given source file (see test/list_includes/readme.txt). Sat Jan 18 22:01:03 2003 - Fixed a bug in the stream position calculation scheme - Made cpp::exceptions more standard conformant (added 'throw()' at appropriate places) - Overall housekeeping :-) Wed Jan 15 21:54:20 2003 Changes since project start (still 0.5.0) - Added #include <...> and #include "..." functionality - pp directives are now generally recognized - Decoupled the C++ lexers and the pp grammar to separate compilation units (optionally) to speed up compilation (a lot!) Thu Jan 2 12:39:30 2003 A completely new version 0.5.0 of the C preprocessor was started. It's a complete rewrite of the existing code base. The main differences are: - The preprocessor is now implemented as an iterator, which returns the current preprocessed token from the input stream. - The preprocessing of include files isn't implemented through recursion anymore. This follows directly from the first change. As a result of this change the internal error handling is simplified. - The C preprocessor iterator itself is feeded by a new unified C++ lexer iterator. BTW, this C++ lexer iterator could be used standalone and is not tied to the C preprocessor. There are two different C++ lexers implemented now, which are functionally completely identical. These expose a similar interface, so the C preprocessor could be used with both of them. - The C++ lexers integrated into the C preprocessor by now are: Slex: A spirit based table driven regular expression lexer (the slex engine originally was written by Dan Nuffer and is available as a separate Spirit sample). Re2c: A C++ lexer generated with the help of the re2c tool. This C++ lexer was written as a sample by Dan Nuffer too. It isn't hard to plug in additional different C++ lexers. There are plans to integrate a third one written by Juan Carlos Arevalo-Baeza, which is available as a Spirit sample. ------------------------------------------------------------------------------- Tue Feb 12 22:29:50 2002 Changes from 0.2.3 to 0.2.4: - Moved XML dumping functions to the main Spirit directory - Fixed operator '##', it was not correctly implemented somehow :-( Sun Feb 10 21:07:19 2002 Changes from 0.2.2 to 0.2.3: - Implemented concatenation operator '##' (cpp.concat) - Removed defined() functionality for Intel compiler (it ICE's) until this issue is resolved - Separated code for dumping a parse tree to XML for inclusion in the main Spirit headers Thu Jan 17 23:51:21 2002 Changes from 0.2.1 to 0.2.2: - Fixes to compile with gcc 2.95.2 and gcc 3.0.2 (thanks Dan Nuffer) - Reformatted the grammars to conform to a single formatting guideline - Assigned explicit rule_id's to the rules of cpp_grammar, so that the access code to the embedded definition class is not needed anymore - Fixed a remaining const problem Tue Jan 15 23:40:40 2002 Changes from 0.2.0 to 0.2.1: - Corrected handling of defined() operator - In preprocessing conditionals undefined identifiers now correctly replaced by '0' - Fixed several const problems - Added parse_node_iterator for traversing one node in a parse_tree without going deeper down the hierarchy than one level (this is useful, if all inspected tokens arranged along a single node in the parse tree. The main difference to the parse_tree_iterator is, that the underlying iterator generally can be adjusted correctly after advancing the attached parse_node_iterator - Fixed a problem with gcc 2.95.2, which doesn't have a <sstream> header - Prepared usage of slex for lexer states Sun Jan 13 10:21:16 2002 Changes from 0.1.0 to 0.2.0: - Added operator 'defined()' - Added directive '#warning' - Corrected error reporting - Added command line option -I- for finer control of the searched include directories (-I and -I- should now work as in gcc, see readme.html for more info) - Corrected conditional preprocessing (should be fully functional now) - Fixed existing code base for changes made in parse tree support - Moved parse tree utility functions to a separate header (prepared for inclusion to the Spirit main library) | https://www.boost.org/doc/libs/1_61_0/libs/wave/ChangeLog | CC-MAIN-2018-17 | refinedweb | 13,018 | 55.34 |
Introduction
In this tutorial, you'll learn how to create a mobile 3D game using C# and Unity. The objective of the game is to score as many points as possible. You'll learn the following aspects of Unity game development:
- Setting up a 3D project in Unity
- Implementing tap controls
- Integrating physics
- Creating Prefabs select your preferred platform. I've chosen Android for this tutorial.
3. Devices
The first thing we need to do after selecting the platform we're targeting is choosing the size of artwork that we'll use in the game. I've listed the most important devices for each platform below and included the device's screen resolution and pixel density.
iOS
- iPad: 1024px x 768px
- iPad Retina: 2048px x 1536px
- 3.5" iPhone/iPod Touch: 320px x 480px
- 3.5" iPhone/iPod Retina: 960px x 640px
- 4" iPhone/iPod Touch: 1136px x 640px
Android
Because Android is an open platform, there are many different devices, screen resolutions, and pixel densities. A few of the more common ones are listed below.
- Asus Nexus 7 Tablet: 800px x 1280px, 216ppi
- Motorola Droid X: 854px x 480px, 228ppi
- Samsung Galaxy S3: 720px x 1280px, 306ppi
Windows Phone
- Nokia Lumia 520: 400px x 800px, 233ppi
- Nokia Lumia 1520: 1080px x 1920px, 367ppi
BlackBerry
- Blackberry Z10: 720px x 1280px, 355ppi
Remember that the code used for this tutorial can be used to target any of the above platforms.
4. Export Graphics
Depending on the devices you're targeting, you may need to convert the artwork for the game to the recommended size and resolution. You can do this in your favorite image editor. I've used the Adjust Size... function under the Tools menu in OS X's Preview application.
5. Unity User Interface
Before we get started, make sure to click the 3D button in the Scene panel. You can also modify the resolution that's being displayed in the Game panel.
6. Game Interface
The interface of our game will be straightforward. The above screenshot gives you an idea of the artwork we'll be using and how the game's interface will end up looking. You can find the artwork and additional resources in the source files of this tutorial. as3sfxr and Soungle.
9. 3D Models
To create our game, we first need to get our 3D models. I recommend 3Docean for high quality models, textures, and more, but if you're testing or still learning then free models may be a good place to start.
The models in this tutorial were downloaded from SketchUp 3D Warehouse where you can find a good variety of models of all kinds.
Because Unity doesn't recognize the SketchUp file format, we need to convert SketchUp files to a file format Unity can import. Start by downloading the free version of SketchUp, SketchUp Make.
Open your 3D model in SketchUp Make and go select Export > 3D Model from the File menu and choose Collada (*.dae) from the list of options.
Choose a name, select a directory, and click Export. A file and a folder for the 3D model will be created. The file contains the 3D object data and the folder the textures used by the model. You can.
12. 2D Background
Start by dragging and dropping the background into the Hierarchy panel. It should automatically appear in the Scene panel. Adjust the Transform values in the Inspector as shown in the next screenshot.
13. Hoop
The objective of the game is throw the ball through the hoop. Drag it from the Assets panel to the Scene and change its Transform properties as shown in the below screenshot.
14. Light
As you may have noticed, the basketball hoop is a bit too dark. To fix this, we need to add a Light to our scene. Go to GameObject > Create Other and select Directional Light. This will create an object that will produce a beam of light. Change its Transform values as shown in the next screenshot so that it illuminates the basketball hoop.
15. Hoop Collider
With the basketball hoop properly lighted, its time to add a collider so the ball doesn't go through when it hits the white area.
Click the Add Component button in the Inspector panel, select Physics > Box Collider, and change its values as shown in the next screenshot.
You'll see a green border around the basketball hoop in the Scene panel representing the box collider we just added.
16. Bounce Physics Material
If we were throw a ball at the basketball hoop, it would be stopped by the box collider, but it would stop without bouncing like you'd expect it to in the real world. To remedy this we need a Physics Material.
After selecting Create > Physics Material from the Assets menu, you should see it appear in the Assets panel. I changed the name to BounceMaterial.
Change its properties in the Inspector panel to match the ones in this below screenshot.
Next, select the box collider of the basketball hoop and click on the little dot to the right of the Material text, a window should appear where you can select the physics material.
17. Basket Collider
We'll use another collider to detect when the ball passes through the hoop. This should be a trigger collider to make sure it detects the collision without interacting with the physics body.
Create a new collider for the hoop as shown in step 15 and update its values as shown in the next screenshot.
This will place the collider below the ring where the ball can't go back upwards, meaning that a basket has been made. Be sure to check the Is Trigger checkbox to mark it as a trigger collider.
18. Ring Mesh Collider
Time to add a collider to the ring itself. Because we need the ball to pass through the center of the ring, we can't have a sphere or box collider, instead we'll use a Mesh Collider.
A Mesh Collider allows us to use the shape of the 3D object as a collider. As the documentation states the Mesh Collider builds its collision representation from the mesh attached to the GameObject.
Select the hoop from the Hierarchy panel, click on the triangle on its left to expand its hierarchy, expand group_17, and select the element named Ring.
Add a collider as we saw in step 15, but make sure to select Mesh Collider. Unity will then automatically detect the shape of the model and create a collider for it.
19. Hoop Sound
To play a sound when the ball hits the hoop, we first need to attach it. Select it from the Hierarchy or Scene view, click the Add Component button in the Inspector panel, and select Audio Source in the Audio section.
Uncheck Play on Awake and click the little dot on the right, below the gear icon, to select the sound you want to play.
20. Ball
Let's now focus on the basketball. Drag it from the Assets folder and place it in the scene. Don't worry about the ball's position for now, because we'll convert it to a Prefab later.
To make the ball detect when it hits the hoop, we need to add a component, a Sphere Collider to be precise. Select the ball in the scene, open the Inspector panel, and click Add Component. From the list of components, select Sphere Collider from the Physics section and update its properties as shown below.
21. Ball RigidBody
To detect a collision with the basketball, at least one of the colliding objects needs to have a RigidBody component attached to it. To add one to the ball, select Add Component in the Inspector panel, and choose Physics > RigidBody.
Leave the settings at their defaults and drag the ball from the Hierarchy panel to the Assets panel to convert it to a Prefab.
22. Hoop Sprite
To represent the baskets already made by the player, we use a 2D version of the basketball hoop. Drag it from the Assets panel and place it on the scene as shown below.
23. Score Text
Below the 2D hoop, we display the number of baskets the player has scored so far. Select GameObject > Create Other > GUI Text to create a text object, place it at the bottom of the basketball hoop, and change the text in the Hierarchy panel to 0.
You can embed a custom font by importing it in the Assets folder and changing the Font property of the text in the Inspector.
24. Force Meter
The force meter is a bar that will show the force used to shoot the ball. This will add another level of difficulty to the game. Drag the sprites for the force meter from the Assets panel to the Scene and position them as shown in the screenshot below.
25. Ball Sprite
We also add an indicator to the interface showing how many shots the player has left. To complete this step, follow the same steps we used to display the player's current score.
26. Basket Script
It's finally time to write some code. The first script that we'll create is the
Basket script that checks if the ball passes through the ring or hits the board.
Select the hoop and click the Add Component button in the Inspector panel. Select New Script and name it
Basket. Don't forget to change the language to C#. Open the newly created file and add the following code snippet.
using UnityEngine; using System.Collections; public class Basket : MonoBehaviour { public GameObject score; //reference to the ScoreText gameobject, set in editor public AudioClip basket; //reference to the basket sound void OnCollisionEnter() //if ball hits board { audio.Play(); //plays the hit board sound } void OnTriggerEnter() //if ball hits basket collider { int currentScore = int.Parse(score.GetComponent().text) + 1; //add 1 to the score score.GetComponent().text = currentScore.ToString(); AudioSource.PlayClipAtPoint(basket, transform.position); //play basket sound } }
In this script, we set two public variables that represent objects on the Scene and in the Assets folder. Go back to the editor and click the little dot on the right of the variables to select the values described in the comments.
We play a sound when the ball hits the basketball hoop and check if the ball passes through the ring. The
Parse method will convert the text from the GUI Text game object to a number so we can increment the score and then set it again as text using
toString. At the end, we play the
basket sound.
27. Shoot Script
The
Shoot class handles the rest of the game interaction. We'll break the script's contents down to make it easier to digest.
Start by selecting the Camera and click the Add Component button in the Inspector panel. Select New Script and name it
Shoot.
28. Variables
In the next code snippet, I've listed the variables that we'll use. Read the comments in the code snippet for clarification.
using UnityEngine; using System.Collections; public class Shoot : MonoBehaviour { public GameObject ball; //reference to the ball prefab, set in editor private Vector3 throwSpeed = new Vector3(0, 26, 40); //This value is a sure basket, we'll modify this using the forcemeter public Vector3 ballPos; //starting ball position private bool thrown = false; //if ball has been thrown, prevents 2 or more balls private GameObject ballClone; //we don't use the original prefab public GameObject availableShotsGO; //ScoreText game object reference private int availableShots = 5; public GameObject meter; //references to the force meter public GameObject arrow; private float arrowSpeed = 0.3f; //Difficulty, higher value = faster arrow movement private bool right = true; //used to revers arrow movement public GameObject gameOver; //game over text
29. Increase Gravity
Next, we create the
Start method in which we set the gravity force to
-20 to make the ball drop faster.
void Start() { /* Increase Gravity */ Physics.gravity = new Vector3(0, -20, 0); }
30. Force Meter Behavior
To handle interactions with the physics engine, we implement the
FixedUpdate method. The difference between this method and the regular
Update method is that
FixedUpdate runs based on physics steps instead of every frame, which might cause problems if the device is slow due to a shortage of memory, for example.
In the
FixedUpdate method, we move the arrow of the force meter using the
right variable to detect when to reverse the arrow's movement.
void FixedUpdate() { /* Move Meter Arrow */ if (arrow.transform.position.x < 4.7f && right) { arrow.transform.position += new Vector3(arrowSpeed, 0, 0); } if (arrow.transform.position.x >= 4.7f) { right = false; } if (right == false) { arrow.transform.position -= new Vector3(arrowSpeed, 0, 0); } if ( arrow.transform.position.x <= -4.7f) { right = true; }
31. Shoot Ball
The basketball is thrown when the player taps the screen. Whenever the screen is tapped, we first check if there's already a ball in the air and if the player has shots available. If these requirements are met, we update the values, create a new instance of the ball, and throw it using the
addForce method.
/* Shoot ball on Tap */ if (Input.GetButton("Fire1") && !thrown && availableShots > 0) { thrown = true; availableShots--; availableShotsGO.GetComponent().text = availableShots.ToString(); ballClone = Instantiate(ball, ballPos, transform.rotation) as GameObject; throwSpeed.y = throwSpeed.y + arrow.transform.position.x; throwSpeed.z = throwSpeed.z + arrow.transform.position.x; ballClone.rigidbody.AddForce(throwSpeed, ForceMode.Impulse); audio.Play(); //play shoot sound }
32. Remove Ball
In the following code block, we test if the ball reaches the floor and remove when it does. We also prepare for the next throw by resetting the variables.
/* Remove Ball when it hits the floor */ if (ballClone != null && ballClone.transform.position.y < -16) { Destroy(ballClone); thrown = false; throwSpeed = new Vector3(0, 26, 40);//Reset perfect shot variable
33. Check Available Shots
After removing the ball, we verify that the player has shots left. If this isn't the case, then we end the game and call
restart.
/* Check if out of shots */ if (availableShots == 0) { arrow.renderer.enabled = false; Instantiate(gameOver, new Vector3(0.31f, 0.2f, 0), transform.rotation); Invoke("restart", 2); } } }
34.
restart
The
restart method runs two seconds after the player runs out of shots, restarting the game by invoking
LoadLevel.
void restart() { Application.LoadLevel(Application.loadedLevel); } }_33<<
The settings are application and include the creator or company, application resolution and display mode, rendering mode, device compatibility, etc. The settings will differ depending on the platform and devices your application is targeting and also keep the requirements of store you're publishing on in mind.
37. Icons and Splash Images
Using the artwork about 3D models, mesh colliders, physics materials, collision detection, and other aspects of Unity game development._35<< | https://code.tutsplus.com/tutorials/create-a-basketball-free-throw-game-with-unity--cms-21203 | CC-MAIN-2017-13 | refinedweb | 2,461 | 63.29 |
#include <deal.II/base/function_lib.h>
Given a sequence of wavenumber vectors and weights generate a sum of sine functions. Each wavenumber coefficient is given as a \(d\)-dimensional point \(k\) in Fourier space, and the entire function is then recovered as \(f(x) = \sum_j w_j sin(\sum_i k_i x_i) = Im(\sum_j w_j \exp(i k.x))\).
Definition at line 730 of file function_lib.h.
Constructor. Take the Fourier coefficients in each space direction as argument.
Definition at line 1940 of file function_lib 1957 of file function_lib.cc.
Return the gradient of the specified component of the function at the given point.
Reimplemented from Function< dim >.
Definition at line 1975 of file function_lib.cc.
Compute the Laplacian of a given component at point
p.
Reimplemented from Function< dim >.
Definition at line 1993 of file function_lib.cc.
Stored Fourier coefficients and weights.
Definition at line 765 of file function_lib.h. | http://www.dealii.org/developer/doxygen/deal.II/classFunctions_1_1FourierSineSum.html | CC-MAIN-2016-07 | refinedweb | 149 | 61.02 |
This is the mail archive of the cygwin-xfree@cygwin.com mailing list for the Cygwin XFree86 project.
LessTif is a freely available Motif clone. It aims to be source level compatible with Motif 1.2 and to some extent 2.0/2.1. It is distributed under the terms of the GNU Library General Public License (LGPL).
Feedback about this issue is welcomed and should be directed to the cygwin-xfree mailing list mentioned at the end of this announcement.
* Implement Add Mode in XmText. This means that the selection doesn't disappear when moving the cursor around. You can toggle this by using Shift-F8. * Fix some memory leaks. * Fix for an invalid XmStringFree in XmList. * Fix an X server hang when clicking in an option menu. * Fix an extended selection problem in XmList. * Partial fix for gadget colour problems.
* Yet another attempt to fix nedit's grab behaviour problems. (that are caused by lesstif). * Rename a private function to avoid namespace pollution.
* Hopefully this fixes both of the nedit grab issues. * List hightlight problem fix. * Fixes for XmText(Field)GetSubstring bugs. * Fix for a 2.0 build problem.
* Fix for a memory leak in TextF.c * Quick fix to implement overstrike behaviour in XmText. * Fix for bug #736415 (XmText widget still buggy). * Fix for bug #721016 : use of an OptionMenu can cause the X Server to hang. * Fix for bug #721010 "text is displayed in the reverse order in TextWidget" * Patch provided by Herbert Xu : fixes comparison bugs on platforms with unsigned char as the default. * Patch by Peter Breitenlohner for various build/install problems. * Fix the problem reported by DIG in textf/test12 - TextField was not always drawing correctly..
-- Brian Ford Volunteer maintainer of Cygwin's lesstif package Senior Realtime Software Engineer VITAL - Visual Simulation Systems FlightSafety International | http://www.cygwin.com/ml/cygwin-xfree/2003-09/msg00441.html | crawl-002 | refinedweb | 302 | 62.14 |
#include <MPolyMessage.h>
This class is used to register callbacks for poly component id modification messages.
There is 1 add callback method which will add callbacks for the following messages:
Values passed in addPolyComponentIdChangeCallback's "wantIdModifications" array to indicate which component id changes should trigger the callback.
This method registers a callback that should be called whenever a poly component id is modified.
Currently, there are some cases where the component ids for a polygonal mesh can be modified without generating a callback or without generating a correct mapping. These cases are outlined below.
This method registers a callback that will be called whenever the topology of a meshShape changes.
This method returns a constant which is to be used to determine if a component id has been deleted. Compare component ids returned by the callback with the value return by this method. If they are the same then, the component id has been deleted. | http://download.autodesk.com/us/maya/2009help/API/class_m_poly_message.html | crawl-003 | refinedweb | 155 | 54.22 |
LocalResponseNorm¶
- class paddle.nn. LocalResponseNorm ( size, alpha=0.0001, beta=0.75, k=1.0, data_format='NCHW', name=None ) [source]
Local Response Normalization performs a type of “lateral inhibition” by normalizing over local input regions. For more information, please refer to ImageNet Classification with Deep Convolutional Neural Networks
See more details in local_response_norm .
- Parameters
size (int) – The number of channels to sum over.
alpha (float, optional) – The scaling parameter, positive. Default:1e-4
beta (float, optional) – The exponent, positive. Default:0.75
k (float, optional) – An offset, positive. Default: 1.0
data_format (str, optional) – Specify the data format of the input, and the data format of the output will be consistent with that of the input. An optional string from: If input is 3-D Tensor, the string could be “NCL” or “NLC” . When it is “NCL”, the data is stored in the order of: [batch_size, input_channels, feature_length]. If input is 4-D Tensor, the string could be “NCHW”, “NHWC”. When it is “NCHW”, the data is stored in the order of: [batch_size, input_channels, input_height, input_width]. If input is 5-D Tensor, the string could be “NCDHW”, “NDHWC” . When it is “NCDHW”, the data is stored in the order of: [batch_size, input_channels, input_depth, input_height, input_width].
name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.
- Shape:
input: 3-D/4-D/5-D tensor.
output: 3-D/4-D/5-D tensor, the same shape as input.
Examples:
import paddle x = paddle.rand(shape=(3, 3, 112, 112), dtype="float32") m = paddle.nn.LocalResponseNorm(size=5) y = m(x) print(y.shape) # [3, 3, 112, 112]
- forward ( input )
forward¶
Defines the computation performed at every call. Should be overridden by all subclasses.
- Parameters
*inputs (tuple) – unpacked tuple arguments
**kwargs (dict) – unpacked dict arguments | https://www.paddlepaddle.org.cn/documentation/docs/en/api/paddle/nn/LocalResponseNorm_en.html | CC-MAIN-2022-05 | refinedweb | 301 | 52.36 |
Ok, so first off I want to apologize if this is kind of a newbie question,
but we are making the migration from Tomcat to Geronimo, and I am having a
hard time moving some of our application's logging. We have written a
custom log4j appender that utilizes both our custom jar, and 3x 3rd party
jars. My original intent was to just add these three jars to
GERONIMO_HOME/lib and then configure the
GERONIMO_HOME/var/logs/server-log4j.properties file to make use of this
appender with a filter to our namespace; however, this failed with getting a
class not found error. I also tried adding these to the repository and it
didn't seem to work either. I am not sure whether this is the correct route
to take (as in digging further into the GBean -- it would seem nice to be
able to add our appender functionality by loading/unloading a GBean within
Geronimo), but seemed to be the easiest. I was wondering if anyone could
give an example of / knew how to do either of the following methods:
1) Adding the 3rd party jars to the j2ee-server module so that the custom
appender can be resolved when loading log4j
2) Getting a handle on the running log4j instance (I think the beans name is
"ServerLog"), and being able to add the appender/classes to the
configuration by means of a GBean.
Thanks anyone for your help, as I have been looking at this now for quite
some time, and can't seem to find an answer anywhere.
-Eric
--
View this message in context:
Sent from the Apache Geronimo - Users mailing list archive at Nabble.com. | http://mail-archives.apache.org/mod_mbox/geronimo-user/200709.mbox/%3C12647000.post@talk.nabble.com%3E | CC-MAIN-2014-23 | refinedweb | 281 | 58.15 |
How to return Multiple field id in search function by domain
i have age field in my student.student module,this is computed filed according to birthdate field
but while searching in odoo its not showing because its computed field so i write below code for eneble searching for this field but its not working
def _search_subscription(self, operator,value):
a=[]
b = self.env['student.student']
for x in b.search([]):
if x.age==value:
a.append(x[0])
return [('id', 'like' ,a)]
i want to return all field which age is like value,please give me solution...
@Hilar AK ,i know i can do by stored but, i want to do by function that's why i post,but now i got solution | https://www.odoo.com/it_IT/forum/help-1/question/how-to-return-multiple-field-id-in-search-function-by-domain-137348 | CC-MAIN-2019-30 | refinedweb | 124 | 70.13 |
I would suggest to assume false on everything else, and/or maybe to ignore the whole if/endif section in such cases.+1, it also halves the number of values we have to support later.
After giving it some thought, I revise a little bit my opinion:I think that if the value is evaluated to TRUE or FALSE, then fine. If it is anything else, then an error is raised (error message shown), which should also stop the script on "ON_ERROR_STOP", and if not the script continues with assuming the value was FALSE.
-- Fabien. -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: | https://www.mail-archive.com/pgsql-hackers@postgresql.org/msg303057.html | CC-MAIN-2018-43 | refinedweb | 111 | 70.94 |
Computer Science Archive: Questions from September 19, 2011
- Anonymous askedWrite a program that asks the user to enter the dimensions of a maze and the maze finds and prints o... More »0 answers
- Anonymous askedBuild an FA that accepts the language consisting of only those words that have an odd occurrence of ... More »1 answer
- Anonymous asked1 answer
- Anonymous asked0 answers
- Anonymous askedG... Show moreConsider the language EVENBB consisting of all words of even length that contain the
substring bb.
Give (i) an applicable universal set,
(ii) the generator(s),
(iii) an applicable function on the universal set, and
(iv) then use these concepts to write down a recursive definition for the language. • Show less0 answers
- OldKnight8612 askedWrite a program that reads in two hexadecimal numbers from a file, hex.dat, and prints out the sum o... Show moreWrite a program that reads in two hexadecimal numbers from a file, hex.dat, and prints out the sum of the two numbers in hexadecimal. (As noted in class, first do this without using a file and by reading using the cin>> command).”
For example, if the file contains:
45AF
12B3
your program will output (if you output the result in decimal):
The decimal sum of 45AF and 12B3 is 22626. • Show less1 answer
- Anonymous askedA cache designer wants to increase the size of a 4 KB virtually indexed, physically tagged cache. G... More »0 answers
- MellowEagle2912 asked1 answer
- Footballguy74 askedCreate a project for the local car rental agency that calculates rental charges. The agency charges... Show moreCreate a project for the local car rental agency that calculates rental charges. The agency charges $15 per day plus $0.12 per mile.
Form: Use text boxes for the customer name, address, city, state, zip code, beginning odometer reading, ending odometer reading, and the number of days the car was used. Use text boxes to display the miles driven and the total charge. Format the output appropriately.
Include buttons for Calculate, Clear, Print and Exit.
Code: Include an event procedure for each button. For the calculation, subtract the beginning odometer reading from the ending odometer reading to get the number of miles traveled. Use a constant for the $15 per day charge and the $0.12 mileage rate. Display a message to the user for any bad input data.
• Show less0 answers
- Anonymous askedEach of the flowchart segments in figure... Show more
book: programming logic and design sixth edition. page 127• Show less
Each of the flowchart segments in figure 3-35 is unstructured. Redraw each flowchart segment so that it does the same thin but is structures1 answer
- FancyRaven305 askedAn Internet service provider has three different subscription packages for... Show moreInternet Service Provider
An Internet service provider has three different subscription packages for its customer:
Package A: For $9.95 per month 10 hours of access are provided. Additional hours are $2.00 per hour.
Package B: For $14.95 per month 20 hours of access are provided. Additional hours are $1.00 per hour. Package C: For $19.95 per month 10 hours of access are provided. Additional hours are $2.00 per hour.
Write a program that calculates a customer’s monthly bill. It should ask which package the customer has purchased and how many hours were used. It should then display the total amount due.
Input Validation: Be sure the user only selects package A, B, or C. Also, the number of hours used in a month cannot exceed 744.
• Show less1 answer
- OldKnight8612 askedAn interesting Machine Learning Artificial Intelligence application is buildin... Show moreTwo Dimensional Arrays
An interesting Machine Learning Artificial Intelligence application is building “Recommender” Systems. An example of a recommender system is a system that will tell you whether or not you will like the NEW MOVIE that was just released. How can a system do this?
A collaborative recommender system constantly asks users to say whether they like or dislike movies. It builds up a database of information that looks as follows: (This is only for 5 movies and 6 people, but it could be much larger. An entry of 10 means likes a lot and 1 means dislikes).
User
Movie 1
Movie 2
Movie 3
Movie 4
NEW MOVIE
John
10
3
8
2
9
Tim
8
2
1
1
7
Mary
7
2
8
3
7
Sally
7
2
6
1
5
Emile
2
8
8
9
9
Bea
1
9
2
1
2
If you want the recommender system to tell you if you will like NEW MOVIE, the system will ask you whether or not you like Movie 1, Movie 2, Movie 3 and Movie 4. It then finds the user most similar to you (in AI language this is called your nearest neighbor!). If your nearest neighbor liked NEW MOVIE, it will be recommended to you, otherwise you will be told NOT to waste your time with it.
Write a program that declares a two–dimensional array and fills it with the “movie data base” above. It is a 6 by 5 two-dim array because it is storing information for 6 people, with 5 movies each.
When your program runs, it will ask the user to enter a score for the Movie 1, Movie 2, Movie 3, Movie 4, and will store those values in a one dimensional array. It then finds the user’s nearest neighbor and uses that line of the two dimensional array to output a message giving you a score for New Movie.
To find the nearest neighbor, you must compute the distance between the array (or vector) that holds the user’s numbers and each line in the two dimensional array. You can use the standard Euclidean distance (same as distance between two points, except in higher dimensions). For two arrays, p and q of length n, it would be:
EXTRA CREDIT: Instead of checking the nearest neighbor, have a vote between the three closest neighbors on whether the new user will like the NEW MOVIE or not. You can simply vote by taking the average rating of the three closest neighbors.
Something to think about: Instead of simply taking the average of the three nearest neighbors, take the weighted average. Weigh the vote by “how close” that neighbor is.
Something else to think about: Each person rates movies differently. Perhaps the highest rating that you ever give a movie is a 6; maybe the lowest rating that I ever give is a 5. So if I give a movie a 5 it means that I really don’t like it, if you give it a 5 it means that you really like it!
• Show less0 answers
- Anonymous asked1 answer
- Anonymous askedI have developed a good deal of code, which runs and all, and it does almost everything i... Show moreMY PROBLEM:
I have developed a good deal of code, which runs and all, and it does almost everything it should be doing, expect the adding part doesn't seem to be right at all. It won't even output any sum, when quite clearly the coding should be something (even if the adding is faulty). I am NOT allowed to use global variables in this program and for awhile that caused issues. However, the only issue remaining is if the adding is being calculated correctly, and how to make it output.
For example, when I run the program, if I input "123456789123456789" then hit enter and also input "12345" it should output:
"The sum of
123456789123456789 and 12345 is
1234567891234934"
Instead, that sum is totally blank. If you can get it to output the correct sum, while following the specifications from the textbook (shown below), that's all I need. Thanks in advance. Here's the instructions and below is the code I have so far.
An array can be used to store large integer’s big integer numbers. Your program will read the digits as values of type char (You must read the digits as individual characters because the largest big integer that could be read in would be: 99999999999999999999. There is not any numeric data type in C++ that could represent an integer that large.) then.
Given hints:
1. Because the big integers that could be input, you will NOT be able to read them into any kind of simple integer variable. Read each digit a character at a time, convert it to a single numeric digit, then store it in an array. Character digits are stored in ASCII format where: ‘0’ is 48, ‘1’ is 49, ‘2’ is 50 .. ‘8’ is 56 and ‘9’ is 57. To convert a character digit to an integer do the following operation: numVar = chDigit - '0';
2. Write the definition of the add function. The function has 6 parameters: the two arrays representing the integers to be added, along with the size of each, an array for the sum, and the size of the sum array.
3. Remember that the digits are stored in reverse order in the arrays so the units digit is in position 0. Hence you add starting at position 0.
4. Be sure to account for carrying from one position to the next.
5. After adding the first two big integers, remember the arrays will have old data stored in them. You do not want to add high order digits from previous data input accidentally.
6. Be sure to test for overflow (when the sum doesn't fit in the number of digits allocated). Note that the MAX_DIGITS constant is provided for you.
Here is the coding:
#include <iostream>
using namespace std;
const int MAXIMUM_DIGITS = 20;
void input_Large_Int (int a[], int& size_of_A); //input function for the two big integers
void output_Large_Int(int a[], int size_of_A); //output function for the two big integers and the sum integer
void add(int a[], int size_of_A, int b[], int size_of_B, int sum[], int & size_Sum); //add function for the big integers' sum
int main()
{
//Declare arrays
int a[MAXIMUM_DIGITS], b[MAXIMUM_DIGITS], sum[MAXIMUM_DIGITS];
//Declare arrays' size
int size_of_A, size_of_B, size_Sum;
//declare char type for program answer response
char answer;
for(int i = 0; i < MAXIMUM_DIGITS - 1; i++)
{
a[i] = 0;
b[i] = 0;
sum[i] = 0;
}
//perform do-while loop until user answers with "n" or "N"
do {
//accept first user integer input
input_Large_Int(a, size_of_A);
//accept second user integer input
input_Large_Int(b, size_of_B);
//add arrays
add(a, size_of_A, b, size_of_B, sum, size_Sum);
//display the integers that were added and their sum
cout << "The sum of \n";
output_Large_Int(a, size_of_A);
cout << " and ";
output_Large_Int(b, size_of_B);
cout << " is ";
output_Large_Int(sum, size_Sum);
cout << "\n\n";
//ask if user wishes to continue
cout << "Add two more? (y or n): ";
cin >> answer;
}
while (answer == 'y' || answer == 'Y');
system("pause");
return EXIT_SUCCESS;
}
//define input for the big integer function
void input_Large_Int(int a[], int& size_of_A)
{
char digit[MAXIMUM_DIGITS];
char change;
int i = 0;
cout << "Please enter a positive integer - no more than 20 digits: ";
cin.get(change);
if (change == '\n')
cin.get(change);
while (isdigit(change) && i < MAXIMUM_DIGITS)
{
digit[i] = change;
i++;
cin.get(change);
}
size_of_A = i;
int j = 0;
//convert ASCII values to actual values with while-loop
while (i > 0)
{
i--;
a[j] = digit[i] - '0';
j++;
}
}
//define output for the big integer with a for-loop to display each digit
void output_Large_Int(int a[], int size_of_A)
{
for (int i = 0; i < size_of_A; i++)
cout << a[size_of_A - i - 1];
}
void add(int a[], int size_of_A, int b[], int size_of_B, int sum[], int &size_Sum)
{
int i;
for(i = 0; i < MAXIMUM_DIGITS - 2; i++)
{
sum[i] = (a[i] + b[i]) % 10;
sum[i + 1] = (a[i] + b[i]) / 10;
i++;
}
if ((a[i] + b[i])/10 > 0)
{
cout << "INTEGER OVERFLOW in the sum, NOT accurate.\n"
<< endl;
}
}
• Show less1 answer
- Anonymous askedWally Catos is a professional house painte... Show moreI want to create an ER diagrom for the following scenario.
Wally Catos is a professional house painter who owns and operates a small company at Ypsilanti, Mi. In addition to himself, He employs a few full time workers and several part-time college students. One of the full time workers manages the store while Wally is not in the store. Wally has been in business for ten years and gets his business from repeat customers and referrals from other individual customer, building contractors, interior designers, and paint stores.
Customers often remember Wally better than he remembers them. He has decided that he needs a better record-keeping system that can be used to quickly retrieve information about his current and former customers, such as their ID number, name, phone number, address, number of purchases, last purchase date and pay date. In addition, Wally wants to store the ID number, name, employee type, hourly wage, phone number address, number of weekly working hours, wage and skill level on each of his employees. He already has detailed data about the individual jobs that his company has done for the current customers, such as the job number, job name, the beginning and end dates of the job, a brief description of the job, the amount billed and payment type. However, Wally would like to be able to easily relate this data to the job’s customer and to his employees who worked on the job.
Since many customers may be referred by other businesses, he also thinks it would be a good idea to store data about the referral sources with their company ID number, name, address, business code and type of referral so he can send a thank-you note to the referring business for each referral. In addition, he would like to be able to produce a report showing the customers and their referral sources. Wally does not recognize multiple referral sources for an individual customer, and he has no interest in storing data on the self-referrals.
• Show less0 answers
- Anonymous askedS# is the supplier number, and... Show more
Hii..i really need help..pz...• Show less
R is the name of your relation schema,
S# is the supplier number, and P# is the part number.
I need to show all the keys primary and foreign keys. Please show work in step by step 1st, 2nd, 3rd and BCNF.
Will really appreciate it.
1 answer
- Anonymous asked... Show moreB. In the previous problem consider the pair (1,4). Notice there are two routes – (1->2->3->4) and
(1->3->4) between node 1 and node 4 (each route can be used in either direction but you need
not consider that). Suppose the following:
• It is equally likely that a packet sent from node 1 to node 4 will use either of the two
routes.
• The probability that an error occurs during transmission over each link is a.
• If an error occurs on any link, the packet is discarded and it is resent from node 1.
a. What is the probability that a transmitted packet will successfully reach node 4 on its first
transmission from node 1?
b. Assuming that a = 0.1, what is the expected number of transmissions from node 1 before
the packet successfully reaches node 4? • Show less0 answers
-0 answers
- Anonymous asked1 answer
- cesareborgia1 askedexc... Show moreBackground Information:
- The program from the user perspective in this case is somewhat trivial
except for users who are programmers acquiring knowledge about ASCII codes
(you!). Certain non-printable characters will be very important for you in
later chapters.
- or page 923 ( C: how to program)
- Useful book content: 9.4, page 358, Fig. 9.1; 9.8, page 363; 9.9, page 366,
Fig. 9.10; 9.10, page 369, Fig. 9.16
Programming Problem Instructions:
1. Print “\nASCII Printable Characters\n\n”.
2. Print all of the ASCII printable characters in a table (32-126). The table
should have three descending value columns and each column should have four
sub-columns with the decimal, hex, octal, and the respective character
printed. The table should appear like it appears at except that you don’t have to print non-printable
characters and you don’t need to include the HTML column.
When you print an ASCII code, print with the following string “%3d %3x %3o
%3c “. This has 1 space in between conversion specifiers and 5 spaces
after the last conversion specifier. Optionally, add a header row that
matches with the columns and sub-columns.
hint: 3
rd
column, last row should be blank (no output for decimal value 127).
3. Print a newline and ask the user to “Press Enter”. After the user presses
enter, print “\nNon-Printable & Extended Characters\n\n”
4. Now print 5 columns in which the first column is composed of ASCII codes 0-
31 and 127. The remaining four columns print ASCII codes 128-255 and should
begin under the ‘E’ in Extended. The columns should have descending values
just like in step 2.
In this case the sub-columns are composed of the decimal value and the
corresponding character with some exceptions. For ASCII codes in column 1
replace the character with a question mark (‘?’) character.
When you print an ASCII code, print with the following string “%3d %3c
“. This has 1 space in between conversion specifiers and 5 spaces after the
last conversion specifier. Notice that additional spaces are needed when
moving across the first column on a single row in order to line up with the
‘E’. It is up to you to determine how to handle that. Optionally, add a
header row that matches with the columns and sub-columns.
THE OUTPUT SHOULD LOOK LIKE WHAT IS SHOWN ON THIS LINK:
• Show less3 answers
- Anonymous asked1 answer
- Anonymous askedif his or he... Show more
Write an application that calculates and displays the amount of money a user would have• Show less
if his or her money could be invested at 5% interest for one year. Create a method that
prompts the user for the starting value of the investment and returns it to the calling program.
Call a separate method to do the calculation and return the result to be displayed.1 answer
- Anonymous askedfor the range x is gr... Show moreWrite an assembly language program to nd the maximum of:
y = x^(6)-14x^(2)+ 56x
for the range x is greater than 2 and less than. • Show less0 answers
- Anonymous askedFruit Computer produces two types of computers: Pear computers and Apricot computers. Relevant data... Show moreFruit Computer produces two types of computers: Pear computers and Apricot computers. Relevant data are given below. A total of 3000 chips and 1200 hours of labor are available. Formulate an IP to help Fruit maximize profits.
Labor Chips Equipment cost Selling price
Pear 1 hour 2 $5000 $400
Apricot 2 hours 5 $7000 $900
• Show less1 answer
- Anonymous askedshow that, if c is a positive real number, then g(n) = 1 + c + c^2 + ... + c^n is:
(a) theta(1) if c... Show moreshow that, if c is a positive real number, then g(n) = 1 + c + c^2 + ... + c^n is:
(a) theta(1) if c<1
(b) theta(n) if c=1
(c) theta(c^n) if c>1. • Show less1 answer
- Anonymous askedWrite a function named dicepair that calls the die function to simulate the sum of a pair of dice. T... Show moreWrite a function named dicepair that calls the die function to simulate the sum of a pair of dice. Test the function by calling it from main 100 times and displaying the resulting numbers.
write a function named gameRound that implements a single round of the game. The function should call dicepair as necessary. The function should return a value of type bool, indicating whether the user won or lost.
the player rolls the two dice, if the sum of resulting die values is 6 or 10 the player wins, if the sum 2, 11 or 12 the player lose, if something else rolled, the player has to roll again to determine the outcome,if the sum of the second roll is the same as what the player obtained in the first roll, the player wins. Otherwise the player loses. • Show less3 answers
- Anonymous asked0 answers
- Anonymous asked1 answer
- Anonymous asked1 answer
- Anonymous askedCreate and application that displays a table of the Celsius temperatures 0 through 20 and their Fahr... More »1 answer
- Anonymous askedThe manager of a football stadium wants you to write a program that calculates the total ticket sale... Show moreThe manager of a football stadium wants you to write a program that calculates the total ticket sales after each game. The four types of tickets are box, sideline, premium, and general admission. After each game, data is stored in a file in the following form:
ticketPrice numberOfTicketsSold
.
.
.
Sample data are shown below:
250 5750
100 28000
50 35750
25 18750
The first line indicates that the box ticket price is $250 and that 5750 tickets were sold at that price. Output the number of tickets sold and the total sale amount. Format your output with two decimal places.
• Show less1 answer
- Anonymous askedIn managing risks in an organization, professionals in the information technology (IT)... Show moreIntroduction:
In managing risks in an organization, professionals in the information technology (IT) department conduct research to identify threats, vulnerabilities, and threat/vulnerability pairs. Then, the IT professionals determine the likelihood of each threat occurring. The IT professionals present this information to IT management, whose role in risk management is to determine and recommend approaches to manage these risks. IT management then presents these recommendations to the senior management, whose role is to allocate resources, specifically money and employees, to prepare for and respond to identified threats and vulnerabilities appropriately.
This activity allows a small group of students among you to fulfill the role of IT professionals in a small business tasked with identifying threats, vulnerabilities, and threat/vulnerability pairs; estimating the likelihood of these threats occurring; and present this information to IT management.
Scenario:
YieldMore is a small agricultural company, which produces and sells fertilizer products. The company headquarters is in a small town in Indiana. Outside its headquarters, there are two large production facilities—one in Nebraska and other in Oklahoma. Furthermore, YieldMore employs salespersons in every state in the U.S. to serve its customers locally.
The company has three servers located at its headquarters—Active Directory server, a Linux application server, and an Oracle database server. The application server hosts YieldMore’s primary software application, which is a proprietary program managing inventory, sales, supply-chain, and customer information. The database server manages all data stored locally with direct attached storage.
All three major sites use Ethernet cabled local area networks (LANs) to connect the users Windows Vista workstations via industry standard managed switches.
The remote production facilities connect to headquarters via routers T-1 LAN connections provided by an external Internet service provider (ISP), and share an Internet connection through a firewall at headquarters.
Individual salespersons throughout the country connect to YieldMore’s network via virtual private network (VPN) software through their individual Internet connections, typically in a home office.
Tasks:
Your instructor will assign you a group where you need to assume the roles of IT professionals assigned by YieldMore’s IT management to conduct the following risk management tasks:
1. Identify threats to the seven domains of IT within the organization.
2. Identify vulnerabilities in the seven domains of IT within the organization.
3. Identify threat/vulnerability pairs to determine threat actions that could pose risks to the organization.
4. Estimate the likelihood of each threat action.
5. Prepare a brief report or presentation of your findings for IT management to review.
• Show less0 answers
- Anonymous askedSuppose that... Show more
you found an exciting summer programming job for five weeks. It pays $15.50 per hour. Suppose that the total tax you pay on your summer job income is 14%. After paying the taxes, you spend 10% of your net income to buy new clothes and other accessories for the next school year and 1% to buy school supplies. After buying clothes and school supplies, you use 25% of the remaining money to buy saving bonds. For each dollar you spend to buy savings bonds, your parents spend $0.50 to buy additional savings bonds for you. Write a program that prompts the user to enter the pay rate for an hour and the number of hours you worked each week. The program then o utputs the following: your income before and after taxes from your summer job• Show less1 answer
- FluffyDog3360 askedsearch("hello",arr,0) should search for "hello" in the full ar... Show moreboolean search( T item,T[] arr, int i)
search("hello",arr,0) should search for "hello" in the full array ( begining at position 0)
whereas search("hello",arr,2) should search for hello in the subarray that begins at position 2 and goes to the end of the array. It must uses recursion instead of iteration. • Show less1 answer
- Anonymous askedA milk carton can hold 3.78 liters of milk. Each morning, a dairy frm ships cartons of milk to a loc... Show moreA milk carton can hold 3.78 liters of milk. Each morning, a dairy frm ships cartons of milk to a local grocery store. The cost of producing one liter of milk is $0.38, and the profit on each carton of milk is $0.27. Write a program that prompts the user to enter the total amount of milk produced in the morning.
Redo programming Exercise that is above so that the user can also input the cost of producing one liter of milk and the profit on each carton of milk. • Show less1 answer
- Anonymous askedA milk carton can hold 3.78 liters of milk. Each morning, a dairy frm ships cartons of milk to a loc... More »1 answer
- Anonymous asked1 answer
- Anonymous asked// AUTHOR : (Put your n... Show more/*-------------------------------------------------------------------------
// AUTHOR : (Put your name here)
// LAB LETTER : (Put your lab letter here)
// FILENAME : Salary.java
//----------------------------------------------------------------------*/
//Complete the following program to determine the raise and new salary for an employee by adding if ... else
//statements to compute the raise. The input to the program includes the current annual salary for the employee
//and a number indicating the performance rating (1=excellent, 2=good, and 3=poor). An employee with a rating of
//1 will receive a 6% raise, an employee with a rating of 2 will receive a 4% raise, and one with a rating of 3
//will receive a 1.5% raise.
// ***************************************************************
// Salary.java
// Computes the raise and new salary for an employee
// ***************************************************************
import java.util.Scanner;
public class Salary
{
public static void main (String[] args)
{
double currentSalary; // current annual salary
double rating; // performance rating
double raise; // dollar amount of the raise
Scanner scan = new Scanner(System.in);
// Get the current salary and performance rating
System.out.print ("Enter the current salary: ");
currentSalary = scan.nextDouble();
System.out.print ("Enter the performance rating: ");
rating = scan.nextDouble();
// Compute the raise -- Use if ... else ...
// Print the results
System.out.println ("Amount of your raise: $" + raise);
System.out.println ("Your new salary: $" + (currentSalary + raise));
}
}
// End of class • Show less1 answer
- Anonymous askedA trucking company called "ABC" picks up shipments from warehouses of a retail chain called WARE_BRO... Show moreA trucking company called "ABC" picks up shipments from warehouses of a retail chain called WARE_BROTHERS. Shipments are delivered to individual retail store locations of WARE_BROTHERS. There are currently 6 warehouse locations and 50 retail stores.
A truck can carry several shipments during a single trip, and each trip is identified by Trip# and delivers those shipments to possible multiple stores. Each shipment is identified by a shipment# and includes info about shipment volume, weight, destination, etc.
Trucks have different capacities for both the volume they can hold and the weight they can carry. The ABC company currently has 150 trucks, and each truck makes 3 to 4 trips each week.
A database to be used by both ABC and WARE_BROTHERS must be designed to keep track of truck usage and deliveries as well as for scheduling trucks and shipments. • Show less1 answer
- Anonymous asked1.Beam bending with supporte... Show more
• Show less
I am trying to write a program for a loaded beam deflection calculator.
1.Beam bending with supported on Both Ends Single Load at Center.
2. Cantilevered Beam Bending with One Load Applied at End.
The program will allow the user to enter all the pertinent information needed to calculate the deflection and stress on the beam. Once the user has entered all the information, and clicks on a button to "Calculate Deflection", the program will display, in a list box, the deflection for twenty points along the length of the beam. The program will also allow the user to obtain the stress at any one point along the beam. This will be accomplished by selecting one of the entries in the list box to indicate the point on the beam and clicking a calculate stress button.
For calculating stress, you will need the section modulus, z. This value is based on the type of beam used. The calculator, on the links above, used a value of 103 in^3. This is the section modulus for a W 14 68 beam.
Your program will also need to:
1 Provide radio buttons for the user to select the support condition(Support at both ends or cantilevered).
2 Validate all inputs to ensure that there will be no chance for any run time errors due to user error (did not provide all the needed information).
3 Properly identify units of values in code and for user.
4 Make use of class-level variables where applicable.
5 Use Function procedure to perform the deflection and stress calculation.
6 The program will be tested against the calculator provided in the links above.
This program should be like the one on this website answers
- Anonymous askedI am trying to write a method that returns an array containing all elements of students who have a g... Show moreI am trying to write a method that returns an array containing all elements of students who have a gpa LESS THAN 2.0. This array should not contain any null elements. That is, if there are 2 failing students, the array returned by this method should only contain two elements. Here is the other work that I've done that I assume you need to call in this method.
package People;
public class Student {
String name;
double gpa;
public Student(String n, double g) {
name = n;
gpa = g;
}
public String getName(){
return name;
}
public double getGpa(){
return gpa;
}
public void setName(String n){
name = n;
}
public void setGpa(double g){
gpa = g;
}
public boolean equals(Student s) {
if ((s.name == name) && (s.gpa == gpa)){
System.out.println("Passed!");
return true;
}
else {
System.out.println("Failed!");
return false;
}
}
}
Now, I need to write the fore mentioned method in a different class called Teacher, also in the package People. What I don't know how to do is to a) use the methods in Student that I've already written (which I know I need to do) and b) how to get the gpa's from the array and see if they are above a 2.0
package People;
public class Teacher {
public static Student [] getFailing(Student[] students){
}
}
HELP?! Thanks! • Show less1 answer
- DecisiveScarf3435 askedBackground: You have two implementations of a doubly linked list, SimpleList and SentinelList. For p... Show more
Background: You have two implementations of a doubly linked list, SimpleList and SentinelList. For purposes of discussion, let firstNode denote the Node holding the first element, and let lastNode denote the Node holding the last element.• Show less
• An instance of SimpleList has two fields: first = firstNode, and last = lastNode.
• An instance of SentinelList has two fields: header = new Node( ), and trailer = new Node( ). These are distinct, special nodes created by the SentinelList constructor, and they stand permanently at either end of the list. We have header.next = firstNode, and trailer.prev = lastNode.
A) If the list is empty, then firstNode and lastNode don't exist. Then what should go in SimpleList's first and last fields? How about SentinelList's header and trailer fields? How about header.next and trailer.prev?
SimpleList. first _______
SimpleList. last _____________
SentinelList.header ______________
SentinelList.trailer _______________
SentinelList.header->next _____________
SentinelList.trailer->prev _____________1 answer
- Anonymous askedI need to write a java program that does the following.
Write a program that asks the user to enter t... Show moreI need to write a java program that does the following.
Write a program that asks the user to enter three test scores. The program should display each test score, as well as the average of the scores. Use JOptionPane. • Show less1 answer
- AngryPepper4181 askedDraw a structured flowchart describing how your paycheck is calculated. Include at least two decisio... More »1 answer
- Anonymous asked1 answer
- uhgmo24 asked... Show more
This exercise is based on the ListNode2.java program, available from:• Show less
Complete the addSorted ( ) method, which takes a new string and adds it into the linked list while maintaining ascending order of the items in the list.
Use the main( ) method as shown below to test the addSorted( ) method. Sample output of running the program is shown in Figure 4.3.
public static void main(String args[]) {
ListNode2 list = new ListNode2();
System.out.println("Creating a linked list using the addSorted( ) method ...");
list = list.addSorted("Adam");
list = list.addSorted("Eve");
list = list.addSorted("April");
list = list.addSorted("Don");
list = list.addSorted("Syd");
list = list.addSorted("Mary");
list = list.addSorted("Peter");
list = list.addSorted("April");
list.displayAllNodes();
findAndRemove (list, "April");
System.out.println("After removing \"April\" ...");
list.displayAllNodes();
} //main
Figure 4.3: Sample NetBeans screen output2 answers
- Anonymous askedSuppose that Alice and bob share a 4-digit PIN number, X. To establish a shared symmetric key, Bob p... More »1 answer
- SilkyBank6955 asked1 answer
- SilkyBank6955 askedAlgorithm A executes on O(logn) time computation for each entry of an n-element array. what is the w... More »2 answers
- Anonymous askedMore of C code.
2. Translate the following C code into MIPS code. Loop: for (i = 0; i < 50; i =i+2)
{... Show moreMore of C code.
2. Translate the following C code into MIPS code. Loop: for (i = 0; i < 50; i =i+2)
{
j = 2* i + k;
k = B[i] - 4;
}
Assume the compiler associates the variables i, j, and k to the registers $t0, $t1, and $t2 respectively. Also, assume B is an array of integers and its address is stored at register $s1.
• Show less1 answer
- Anonymous asked0000 000... Show moreConsider the following positive integer which is represented in binary format using 32 bits.
0000 0000 0000 0101 0000 0000 0001 1111
Write a MIPS code which loads the above integer into the register $5. • Show less1 answer
- SilkyBank6955 asked) time method for compu... Show more
Let p(x) be a polynomial of degree n, that is, p(x) =
a) describe a simple O(
) time method for computing p(x).
b) now consider rewriting of p(x) as p(x) =• Show less
.1 answer
- Anonymous askedwhile in... Show moreCan someone please help me convert this into MIPS/MARS programming language?!?2 answers
- hypertc13 askedWrite a python program to process data from a file. Each line of the file will contain a last name,... Show more
Write a python program to process data from a file. Each line of the file will contain a last name, first name, and two exam scores. The items are separated by commas. Following are a few lines from such a file:• Show less
Potter, Harry, 78, 79
Weasley, Ronald, 68, 72
Granger, Hermione, 100, 110
Weasley, George, 74, 84
Your program should ask the user for the name of the file to read. It should then read the data, compute the average of each of the two exams, and output these two values. So the output should read something like "The average for exam 1 is: blank, the average for exam 2 is: blank. Make sure to close the file before your program terminates.2 answers
- Anonymous asked1 answer
- Anonymous asked1 answer
- Anonymous asked1 answer
- Anonymous asked2 answers
- Anonymous askedWrite a program that generates all the factors of a number entered by the user. For instance, the nu... Show moreWrite) b = b /.
• Show less1 answer
- Anonymous asked1 answer
- Anonymous asked1 answer
- Anonymous asked1 answer
- Anonymous asked1 answer
- daphonk asked1. a re... Show moreProject requirement:
Program specifics: members
Thanks in advance for the much appreciated help! • Show less0 answers
- Anonymous asked1 answer
- Anonymous asked1 answer
- Anonymous asked1 answer
- Anonymous askedI'm having problems on where to begin. I have to decrypt an encoded message. The message is :mmZ\dxZ... Show more
I'm having problems on where to begin. I have to decrypt an encoded message. The message is :mmZ\dxZmx]Zpgy. It is based on ASCII code. The encryption is as follows:
if(OriginalChar + Key > 126) then
EncryptedChar = 32 + ((OriginalChar + Key) - 127
Else
EncryptedChar = (OriginalChar + key)
the key is a number between 1-100. When you use the right number, the message will make sense.
I know I need a decrypt function. I set it up like this. This is in c++ by the way. Basically the message loops through a decrypt function until the key(which is can be 1-100) makes the message make sense.
#include "stdafx.h"• Show less
#include <iostream>
#include <cstdlib>
#include <cctype>
using namespace std;
void decrpyt(char encoded[], char decoded[], int key)
{
char encoded[] = ":mmZ\dxZmx]Zpgy";
int i;
char e, d;
for(i = 0; i < strlen(encoded); i++)
{
e = encoded[i];
if((e-key) < 32)
{
d = e + 127-32-key;
}
}
}
int _tmain(int argc, _TCHAR* argv[])
{
return 0;
}1 answer
- Anonymous askedABC Institute of Research has sensitive information that needs to be protected from its rivals. The ... More »1 answer
- uhgmo24 askedIn the assignmen... Show more
I have to design a web application that has a form that looks like the one below.
In the assignment I have to create 2 XML files for Sports and Concerts. When I click on the radio button for one of these categories, the dropdown box is supposed to load the XML file associated with it. I then have to be able to select the number of tickets. In addition, the user has to enter his name and address like in the boxes displayed above. The user can then see what his ticket looks like by clicking the View Tickets button which will take all the data entered by the user and display it like shown above. I am very new to web design and do not know where to begin. Please help!• Show less0 answers
- Anonymous askedSuppose we want to partition N items into G equal-sized groups of size N/G, such that the smallest N... Show moreSuppose we want to partition N items into G equal-sized groups of size N/G, such that the smallest N/G
items are in group 1, the next smallest N/G items are in group 2, etc. The groups themselves do not have to
be sorted. For simplicity, you may assume that N and G are powers of two. Give an O(N logG) algorithm to
solve this problem. • Show less1 answer
- Robby499 askederations on disjoint sets (i.... Show more
Show that the worst-case time required by Version 1 algorithms for op-
erations on disjoint sets (i.e., Algorithms 3.6.4, 3.6.5, and 3.6.8 of the textbook) is
θ(n + m * min{m, n}), where there are n makeset operations and a total of m union
and findset operations.
Algorithm 3.6.4 Makeset, Version 1, This algorithm represents the set {i} as a one-node tree
Input: i
Output: None
makeset1(i) {
parent[i] = i
}
Algorithm 3.6.5 Findset, Version 1, This algorithm returns the root of the tree to which i belongs
Input: i
Output: None
findset1(i) {
while (i != parent[i])
i = parent[i]
return i
}
Algorithm 3.6.6, Mergetrees, Version 1, This algorithm recieves as input the roots of two distinct trees and combines them by making one root a child of the other root
Input: i, j
Output: None
mergetrees1(i, j) {
parent[i] = j;
}
Algorithm 3.6.8, Union, Version 1, This alogithm receives as input two arbitrary values i and j and constructs the tree that represents the union of the sets to which i and j belong. The algorithm assumes that i and j belong to different sets
Input: i, j
Output: None
union1(i,j) {
mergetrees1(findset1(i), findset1(j))
}• Show less1 answer
- Anonymous asked1. Implement a class named... Show more
This homework is loosely based on Programming Challenge #12 in Chapter 7.
1. Implement a class named Auto that holds data about an automobile. The class should have the following
private member variables:
make - a string that holds the name of the manufacturor of the automobile
year - an int that holds the year that the automobile was made
speed - an int that hold's the automobile's current speed
a) The class should have at least one constructor.
b) Include mutator functions to assign values to the private member variables and accessor
functions that return the values of each of the member variables.
c) When assigning the current speed, do not allow a negative value. If the user tries to assign a
negative speed, set the speed to zero and write out an informative message to standard output.
d) The year the automobile was made must be between 1900 and 2011 (inclusive.)
Use the class in a source code that creates two Auto objects. The code whould request values from the user
and assign the values to the member variables using the mutator functions. The code should then use the
accessor functions to print out the assigned values.
For this homework, the class definition and the function definitions can be in the same file as the main function.
Hand in:
1. A listing of the code.
2. Display the output when (a) the user enters valid values for all the member variables
and (b) when the user enters either a negative speed or an incorrect year.
Assessment Concerns:
1. Is the program well-doccumented?
2. Is the code easy to read, i.e. well-structured?
3. Is there input validation within the class?
4. Are there any easily detected logic errors?
• Show less1 answer
- Anonymous asked0 answers
- Anonymous askedCon... Show moreConsider the knapsack cryptosystem. Suppose the public key consists of (18,30,7,26) and n = 47.?
Consider the knapsack cryptosystem. Suppose the public key consists of (18,30,7,26) and n = 47.
a) Find the private key, assuming m = 6
b) Encrypt the message M = 1101 (given in binary). Give your result in decimal • Show less2 answers
- Anonymous askedWrite a java program that reads an unspecified number of integers, determines how many positive and ... More »1 answer
- Anonymous askedWrite a C program to examine a text file and create a new file showin... Show more
How do I write the codes in C??• Show less
Write a C program to examine a text file and create a new file showing the contents of a text file as ASCII, hex and binary, ignoring new line of characters. The program should accept an input and output file names as command line arguments.
USAGE:
./fdump<input file><output file>
Sample Input:
Hello World!
I Have Arrived!
Sample Output:
LOC Char Hex Binary
0000 20 0010 0000
0001 20 0010 0000
0002 H 48 0100 1000
0003 e 65 0110 0101
0004 l 6C 0110 1100
0005 l 6C 0110 1100
0006 o 6F 0110 1111
0007 20 0010 0000
0008 W 57 0101 0111
0009 o 6F 0110 1111
0010 r 72 0111 0010
0011 l 6C 0110 1100
0012 d 64 0110 0100
.
.
.
.
.
With the output showing pretty much like that. How would I incorporate a UNIX system calls, open, read, close and write? ANy help would be appreciated.0 answers
- Anonymous asked1 answer
- Anonymous asked1. Write a program called Room that computes area of room with some specified dimensions. You have t2 answers
- Anonymous askedAda... Show moreGiven this line of data:
17 13 7 3
and these procedure calls:
Ada.Integer_Text_IO.Get (Item => E);
Ada.Integer_Text_IO.Get (Item => F);
Ada.Text_IO.Skip_Line;
a. what's the value of each variable after the Gets are completed?
b.what's happens to any leftover date values in the input?
c. where is the reading marker after these three calls? • Show less1 answer
-1 answer
- Anonymous askedIt also has to ask if... Show moreCan someone please help me convert this into MIPS/MARS programming language???
It also has to ask if the user would like to sort another sequence and I am extremely confused in this class...Thank You!1 answer
- Anonymous askedphilosopher has a plate of food... Show moreConsider a dinner table where n dining philosophers are seated. Each
philosopher has a plate of food; however, there is only a single eating-utensil placed
in the center of the table. Eating is done at discrete rounds. At the beginning of
each round, if a philosopher wishes to eat, she may attempt to obtain the utensil
from the center of the table. If the philosopher obtains the utensil, she eats for the
duration of the round (i.e., only one philosopher may eat during any given round)
and then places the utensil back at the table center at the end of the round. If
two or more philosophers attempt to obtain the utensil at the beginning of the
same round, then no philosopher will eat during that round (i.e., no one obtains the
utensil); thus, if all the philosophers try to access the utensil on every round, no
philosopher would ever eat.
One way to avoid the starvation of these philosophers is to use randomization. A
possible simple randomized algorithm is to have each philosopher attempt to obtain
the utensil at any given round with probability p > 0 (independently of the decisions
of the other philosophers).
a) What is the probability (in terms of p) that a philosopher is able to successfully
eat during any given single round?
b) What value of p maximizes this probability?
c) Using the probability from Part b, what is the probability that a philosopher
does not successfully obtain the utensil after k consecutive rounds? • Show less0 answers
- = 0 and b = -5, what will x become after the above statement is executed? • Show less1 answer
- Anonymous askedProve the following using a direct proof.
The negative of any even number is even. (Remember that yo... Show moreProve the following using a direct proof.
The negative of any even number is even. (Remember that you have to have an implication to have a direct proof: rephrase as If you have an even number, the negative of it is also even. • Show less1 answer
- = 1 and b = -1, what will x become after the above statement is executed? • Show less1 answer
- Anonymous askedthat converts Fahrenheit... Show moreWrite a function celsius prototyped by
double celsius( double fahrenheit );
that converts Fahrenheit to Celsius according to the formula
Celsius = 5.0 / 9.0 * ( Fahrenheit - 32 )
and returns the value in Celsius. Then write a driver program that gets a
temperature in Fahrenheit from the user, calls the function celcius and then
displays the conversion result, i.e., the temperature in Celsius obtained from
the function call. • Show less1 answer
- BrainyBagel377 askedWrite a recursive function CountUp which output the number from 1 to n, where n is the user'... More »2 answers
- Anonymous asked1 answer
- EducatedRabbit5024 askedIf x is c... Show moreif (a > 0)
if (b < 0)
x = x + 5;
else
if (a > 5)
x = x + 4;
else
x = x + 3;
else
x = x + 2;
If x is currently 0, a = 5 and b = 5, what will x become after the above statement is executed?
• Show less1 answer
- ImpoliteCatfish4999 askeda) P... Show moreAssume that for any two positive integers, a and b, the function, a mod b works only if b = 10.
a) Provide an algorithm to determine the sum of the digits of a positive integer, a (Note: The
sum of the digits must finally be an integer between 0 and 9. So, if you input a = 6372041485, your
algorithm should provide an output, 4 and not 40).
b) Does the ability to perform a mod 9 simplify your solution? If so, how?
c) Use the algorithms developed in part (3a) or (3b) to determine whether 6|y (i.e., 6 divides
y) for any positive integer, y.
please provide the algorithm, THANK YOU • Show less1 answer
- Anonymous askedFor the language that you have selecte... Show more
• Show less
ASSIGNMENT:
Select ONE of the following languages:
C
C++
Java
For the language that you have selected, discuss how this language handles various design considerations (like data types, scopes etc.) You should also discuss why you think this language handles these design considerations well or not so well. You may also discuss other aspects of the language you have selected that makes the language more or less useful. You may also briefly discuss the history of the language appropriately.
__________________________________________________________________________
Make sure the following things
The minimum length of the paper should be 10 pages. Minimum length is excluding the front cover page
The font must not be larger than 12 pt. for main body of the paper. Normal margins should be used i.e. 1” on each side.
Please use APA style for writing your paper. Resources for the same have been provided as a separate file.
You may cite sources and references. Any quotes MUST be attributed to the sources.
In addition to explaining the features and information about the language, the paper must represent your thoughts and analysis and should not simply be a copy and paste of lecture material or internet sources.1 answer
- Anonymous asked(You can either use classical Vectors or ArrayLis... Show moreWrite a program that performs the following tasks.
(You can either use classical Vectors or ArrayLists,
or "generics" Vectors or ArrayLists, for HW#1.)
* The program should ask the user to input 10 String items, one at a time.
For each inputted data item, store it into a Vector or ArrayList.
* The program should then output all of the user's data items,
by using the Vector class's or ArrayList class's .toString() instance method.
* The program should then output all of the user's data items,
but without using the class's .toString() method, and without
simply outputting the String variables that were used to receive the user's
original inputs.
• Show less1 answer
- Anonymous askedinteger in the range [1, 9999]. The prog... Show moreComplete the program which asks the user to enter a 4-digit
integer in the range [1, 9999]. The program should then display the number with the digits reversed. For example, if the input
number was 3418 then the program would display the number 8143. As another example, if the input number was 406 then the
program would display the number 6040 (there is an implicit leading 0 in front of the 4 in the number 406, i.e., we treat it as
0406). Hint: This can be done using only the integer division / and modulus % operators.
[001] #include <iostream>
[002] using namespace std;
[003]
[004] int main() {
[005] int n;
[006] cout << "Enter an integer [1-9999]: "; cin >> n;
[007] // put the required output statements here.
[008] return 0;
[009] } • Show less2 answers
- Anonymous asked1 answer
- Anonymous askedMatrix T looks like t... Show more
How do you create a n x n matrix, T, by using built-in functions eye and diag.
Matrix T looks like the following:
(so the diagonal is all 2, and above diagonal is -1 and so does below the diagonal.everywhere eles is 0.)• Show less1 answer
- EducatedRabbit5024 askedJ =... Show moreWhat is the correct output?
I = 5;
J = 4;
while (I < 10) {
if ( (I % 3 == 0) || (I % 3 == 1) ) {
J = J + I;
}
I = I + 2;
} • Show less1 answer
- Anonymous asked2 use for loop and 3 use do while that can i r... Show moreHelp i need source code for the following? number 1 & 2 use for loop and 3 use do while that can i run to my c++ .
1. design the logic for a program that prints every even number from 2 through 30.
2. design the logic for a program that prints number in reverse order from 10 down to 1.
3. design the logic for a program that prints every number from 1 through 10 along with its square and cube. | http://www.chegg.com/homework-help/questions-and-answers/computer-science-archive-2011-september-19 | CC-MAIN-2014-15 | refinedweb | 8,796 | 64 |
The back-end of the Genesis Smart Client Framework is the web services that provide
business logic and database access to the Smart Client, the databases, Meta-Data
and a file system. This article explains in detail the function of each of these
systems and how they work together to provide a smart client framework.
This article is part III of VII. Go to part I to
see the index.
Please refer to our web site
for the most up to date information about the Genesis Smart Client Framework or
any of our newsfeeds.
News feed
Development feed
View our blog
Follow our tweets
In order to pursue our active interest in releasing this version of the Genesis
Smart Client Framework as an open source product, we've created a profile on where we maintain the most recent copy of the source code.
If you want the latest version of the Genesis Smart Client Framework source code
go to our code plex profile
or download the latest release from Code Plex.
The Meta-Data is the configuration data for the Genesis Smart Client Framework when
combining the module and command configuration with the user access profile. The
Meta-Data has three distinct sections.
The diagram above illustrates the database tables and relationships that are used
for the security Meta-Data. The security Meta-Data defines the users and their security
access profiles.
The Security_Role table contains the definition of specific security roles within
your application. The Genesis Smart Client Framework compares your user's access
to one or more roles that have the required access to resources hosted by the Genesis
Smart Client Framework. The role's definition is simply a name and a description.
The Security_User table contains the definition of the user profile. The Genesis
Smart Client Framework uses this information to authenticate users when they first
sign in, and on some method calls to ensure that the current session is still valid.
The user definition contains a username, password, e-mail address, startup script
and a flag to indicate whether an account is active or disabled.
The Security_UserRole table contains information about the user's membership of
specific roles.
The diagram above illustrates the database tables and relationships that are used
for the application Meta-Data. The application Meta-Data defines the applications
and the application specific commands and files required to enable the application
to execute on the client computer.
The Module_Application table contains the definition of the application hosted in
the Genesis Smart Client Framework. The Genesis Smart Client Framework has the ability
to host multiple applications concurrently and allow user access to any of these.
The application's definition contains an application name, description and the order
in which the application must appear on the user's list of available applications.
There is also a flag to indicate whether and application is visible or not.
The Security_ApplicationRole table contains information about the roles that have
access to this application.
The Module_File table contains the definition of files that are required by applications
hosted in the Genesis Smart Client Framework. Files have a name, description, url
where the file can be downloaded, a local file path for database synchronization
with available resources, version information, a value indicating the type of file
and the order in which the file must be downloaded by the client application.
If the client application detects a difference between the online version of a file
and the local copy of the file, a new copy is downloaded. The client application
keeps a cache of files locally to prevent files from being repeatedly downloaded.
The Lookup_FileType table extends the information of a specific file by allowing
the Genesis Smart Client Framework to focus on specific files. For example, when
the client application loads a hosted application into memory it only queries the
system for files where the type of file is a 'Dynamic Link Library'.
The Module_Command table contains the definition of commands that can be executed
by the client application. The Genesis Smart Client Framework uses the information
in this table to locate the code in the local file cache and to execute the code
using Reflection. Commands have a name, a code that is used to reference the command
from scripts and menu items, a description and a .NET Type. The type is a full name
including the namespace to the implementation of the command in .NET code.
The Security_CommandRole table contains information about the roles that have execution
rights for the command.
The diagram above illustrates the database tables and relationships that are used
for the user interface Meta-Data. The user interface Meta-Data defines the menu
items that are visible to the authenticated user. * Our commercial client application
makes use of an Office 2007 styled menu, and therefore our database structure was
structured accordingly. There is no reason why the current system cannot be applied
to conventional menu and toolbar implementations.
The Module_Ribbon table contains the definition of the ribbon tabs that will appear
along the top line of the ribbon. A ribbon is linked to an application and has a
name and an order in which it is displayed.
The Security_RibbonRole table contains information about the roles that can show
this ribbon tab to authenticated users.
The Module_RibbonBar table contains the definition of the groups that appear on
a specific ribbon. The groups are containers that host buttons and other controls
that can be displayed on a ribbon. A ribbon bar is linked to a ribbon and has a
name and an order in which it is displayed.
The Security_RibbonBarRole table contains information about the roles that can show
this ribbon bar to authenticated users.
The Module_BarItem table contains the definition of the items that appear in a specific
ribbon bar. A bar item can be of any type declared in the Lookup_BarItemType table.
Bar items can host other bar items and are linked to commands. The command is executed
when the user interacts with the bar item on the front-end. Bar items have a name,
an image and an order in which it is displayed.
The Lookup_BarItemType table extends the bar items to define specific controls in
the user interface. Bar item types include buttons, text boxes, combo boxes and
containers.
The Genesis Smart Client Framework has an online File System that it exposes to
the server components and the Smart Client. The file system is accessible through
the Web Services. The File System is an ASP.NET web site with database backup, deployment
and Meta-Data update functions.
The File System is a shared web resource to enable global access to key resources
at run-time. Modules can upload files which can then be accessed by other users
or systems.
The database backup functions are contained in the /deployment/database/backup.aspx
file. When browsing to this file include the following parameters in the query string:
DatabaseName with the name of the database to back up and ConnectionStringKey with
the name of the key for the connection string in the web.config file. Provided with
these parameters, the database will be backed up to the disk using today's date
and current time.
The deployment functions are contained in the /deployment/default.aspx file. When
browsing to this file the user will be prompted to upload a deployment package.
Once uploaded the package will be extracted and the relevant updates will be applied
to the Genesis Smart Client Framework server, ready to be distributed to client
applications. * Deployment packages are created with the Genesis.Deployment utility.
The Meta-Data update functions are contained in the /update.aspx file. When browsing
to this file, the Genesis Smart Client Framework will review each file found in
the Module_File table. Each file will be checked on the server in the Path location
stored for each file, if the file is a Dynamic Link Library the update function
will use Reflection to analyze the file and update the related Meta-Data. This includes
adding any new commands, or updating existing commands.
This function is important during the development cycle because each time the developer
needs to force an update from his client application the file version is increased
in his/her project, a build is done and the Meta-Data update function is run. This
ensures that when the developer executes his/her client application, that the client
application will download the latest version of the library. * A client application
may choose to have a developer mode that automatically clears the local file cache
before authentication with the Genesis Smart Client Framework. This will ensure
that the client application always downloads the latest copy of the application
libraries, however in order to ensure that the server has the latest copies of files
available for download, the developer would still have to run the Meta-Data update
function.
As a post-deployment step after using the deployment functions operators should
also execute the Meta-Data update function.
The Genesis Smart Client Framework relies on the Web Services to provide the business
logic for the client application to access the database and other online resources.
The Web Services is an ASP.NET web services site with authentication, application,
file management and exception handling services.
The Exception web service provides methods for the client application to log its
exceptions and application events online to a central database.
The File Manager web service provides methods for managing the online file cache.
This includes file uploads and downloads.
The Genesis web service provides generic methods for use with the Genesis Smart
Client Framework. These methods include generic database command execution.
The Keep Alive web service provides methods for use by the client application to
test connectivity to the server.
The Lookup web service provides generic methods to use any table in any database
as a lookup table for other systems.
The Module web service provides methods to interact with the application and user
interface information stored in the Meta-Data. These methods take into account the
user access profile when providing information.
The Security web service provides methods to authenticate users, authenticate current
user sessions and other methods related to the security management of the Genesis
Smart Client Framework.
The Genesis Smart Client Framework can expose multiple databases to the Windows
Client through the Web Services. The only database that the Genesis Smart Client
Framework requires is the database that contains the user and security information,
Meta-Data and module configuration.
This article is part III of VII. Go to part IV
to read about the client.
Blue Marble is proud to be a Microsoft BizSpark startup company.
The Genesis Smart Client Framework is a commercial closed source application. It
is being provided on Code Project with all of the benefits that articles on Code
Project grant its users (ie. Free and Fair use), however it must be noted that some
modules contain a 3rd party control that we are unable to license via Code Project.
Once all of the articles are uploaded, the code will be extracted to a seperate
obfusicated library to protect our vendor's Intellectual Property. The full source
code will be provided for this library, excluding the few lines that could compromise
our License. An alternative will also be provided for developers who wish to use
a completely free version.
An implementation of a standard Microsoft.NET controls user interface has been created
and is available on Code Plex
as of release 1.30.1024.0. This release complies with all of the
Code Project article publishing terms and conditions.
This article, along with any associated source code and files, is licensed under The Microsoft Public License (Ms-PL)
General News Suggestion Question Bug Answer Joke Rant Admin
Man throws away trove of Bitcoin worth $7.5 million | http://www.codeproject.com/Articles/36636/Genesis-Hybrid-Smart-Client-Framework-part-III?fid=1541089&df=90&mpp=25&sort=Position&spc=Relaxed&tid=3091732 | CC-MAIN-2013-48 | refinedweb | 1,971 | 52.7 |
Revision history for MooseX-Has-Sugar 1.000001 2014-06-10T05:18:46Z [00 Trivial] - Packaging changes - dist.ini cannonicalisation. - perlcritic.rc normalisation. [Dependencies::Stats] - Dependencies changed since 1.000000, see misc/*.deps* for details - configure: (recommends: ↑1) - develop: +3 -3 (suggests: +2 -1) - test: (recommends: ↑2) 1.000000 2014-01-30T19:43:03Z misc/*.deps* for details - build: -1 - configure: +1 -1 (recommends: +1) - develop: +9 ↑1 -9 - runtime: +1 -1 - test: +5 ↓1 -3 (recommends: +6) 0.05070422 2013-11-20T08:32:12Z [00 Maint release] [Dependencies::Noteworthy] - Upgrade Module::Build to 0.4202 - Upgrade Test::More 0.98 to 1.001002 - drop File::Find - drop File::Temp [Dependencies::Stats] - Dependencies changed since 0.05070421, see misc/*.deps* for details - build: ↑1 - configure: ↑1 - develop: +50 -1 (recommends: -1, suggests: +1 -1) - test: +1 ↑1 -2 [Documentation] - Update © Year - Specify doc encoding. [Meta] - Bug tracker to github issues [Packaging] - Use test_requires with new MB [Tests] - Switch to Test::Compile::PerFile - Update ReportVersions::Tiny test - Update CPAN::Changes test - Update Kwalitee test 0.05070421 2012-08-03T10:25:23Z [Bugs] - Fixed use of a Test::Builder method that is going away. Thanks to Schwern for reporting and providing the patch. [Dependencies::Noteworthy] - Upgrade Module::Build to 0.4002 - Upgrade Test::More to 0.98 [Dependencies::Stats] - Dependencies changed since 0.05070420, see misc/*.deps* for details - build: ↑1 - configure: ↑1 - develop: (recommends: ↑1, suggests: ↑1) - test: ↑1 [TODO] - Thinking of adding Moo support of some kind, but Moo is notedly different. Esp: lazy/lazy_build. - I considered making a Moo-style-for-Moose version, but then considered that implementing 'lazy' would have to know what the attribute was called to set the respective Moose builder value, so that is Too Hard. - Please, if you're reading this and have suggestions/feedback, feel free to hit me up on IRC =). 0.05070420 2012-02-03T03:33:11Z - Maintenance/Packaging release. [Dependencies::Stats] - Dependencies changed since 0.05070419, see misc/*.deps* for details - develop: (suggests: ↑1) - runtime: +3 - test: -1 [Internals] - All namespaces provide $AUTHORITY - $VERSION moved to outside of BEGIN [Packaging] - Update LICENSE ( Year, Address, Indentation ) - Extra Tests moved to xt/ - Git URI's moved to https:// - Export x_authority - Git based versions instead of auto incrementing relative versions. 0.05070419 2011-04-07T02:03:19Z - Maintainence only release. No external changes. [Dependencies::Stats] - Dependencies changed since 0.05055616, see misc/*.deps* for details - develop: +1 -1 (recommends: +1 -1, suggests: +1 -1) [Packaging] - Moved to @Author::KENTNL - update gitignore and perlcritic rules. - ship perltidyrc - Normalize Changes. [Tests] - Add CPAN::Changes test - Remove portability test 0.05055616 2010-11-13T23:43:43Z - Replaced Test::Exceptions with Test::Fatal - Removed FindBin in tests. - Core Tests now 5% faster!. [Dependencies::Noteworthy] - use Test::Fatal instead of Test::Exception - drop use of FindBin [Dependencies::Stats] - Dependencies changed since 0.05046611, see misc/*.deps* for details - develop: +1 (recommends: +1, suggests: +1) - test: +1 -2 0.05046611 2010-08-16T18:30:39Z - Improved docs and tests for Saccharin. ( Alexandr Ciornii / chorny ) - Eradicated excess in xt/. [Dependencies::Noteworthy] - tests require MooseX::Types::Moose [Dependencies::Stats] - Dependencies changed since 0.05044303, see misc/*.deps* for details - test: +2 0.05044303 2010-07-24T10:03:50Z - Migrate to @KENTNL Dzil. - Rework t/ dirs. - Drop depend on MX::Types in tests. - Drop accidental dep on Test::Kwalitee [Dependencies::Stats] - Dependencies changed since 0.0405, see misc/*.deps* for details - build: ↑1 - configure: ↑1 - runtime: -6 - test: +7 0.0405 2009-12-04T09:20:43Z - Toolkit upgrade & rebuild. - Testsuite cleanup. - Documentation overhaul with ::Weaver - Dropped :allattrs from MXHS as its identical to :default - Tests drop Find::Lib; [Dependencies::Stats] - Dependencies changed since 0.0404, see misc/*.deps* for details - build: +1 - configure: +1 - runtime: +1 -2 0.0404 2009-07-06T03:34:10Z - Added Saccharin, experimental sugars. 0.0403 2009-06-30T13:56:07Z - Using Dist::Zilla's handy author-tests feature - Revised Docmentation a little to be more correct 0.0402 2009-06-29T19:43:05Z - Fixed missing META.yml in Dzil build 0.0401 2009-06-29T18:16:51Z - Fixed Dep on Moose Test. - Moved to Dist::Zilla. - Loads of edits for change 0.0400 2009-06-28T00:53:52Z - Improved Test cases - Improved meta dependency advertising - added 'bare' keyword. 0.0300 2009-05-29T16:22:57Z - export group :is/-is moved to ::Minimal. - MX::H::S::Minimal exports by default - MX::H::Sugar exports all list-flavours by default. - MX::H::Sugar croaks if group :is is requested. - Test/Documentation updated. 0.0200 2009-05-16T21:38:31Z - Fixed META.yml - Added weak_ref, coerce and auto_deref to -attrs - Added collision detection to complain if you use it wrong. - Removed Constant Folding based subs, too pesky at present. - Added A bunch of tests. 0.0100 2009-05-15T09:18:30Z - First version. | https://metacpan.org/changes/release/KENTNL/MooseX-Has-Sugar-1.000001 | CC-MAIN-2015-32 | refinedweb | 803 | 53.98 |
Let us learn about a simple and straightforward searching algorithm in Python.
The Linear Search Algorithm
Linear Search works very similar to how we search through a random list of items given to us.
Let us say we need to find a word on a given page, we will start at the top and look through each word one by one until we find the word that we are looking for.
Similar to this, Linear Search starts with the first item, and then checks each item in the list until either the item is found or the list is exhausted.
Let us take an example:
Theoretical Example of the Linear Search Algorithm
Consider,
- List: 19, 2000, 8, 2, 99, 24, 17, 15, 88, 40
- Target: 99
So, we need to find 99 in the given list. We start with the first item and then go through each item in the list.
- Item 1: 19, not found.
- Item 2: 2000, not found.
- Item 3: 8, not found.
- Item 4: 2, not found.
- Item 5, 99, target found, end loop.
So, we have found the given target after five checks at position 5.
If the given target was not in the list, then we would have gone through the entire list and not found the item, and after the end of the list, we would have declared the item as not found.
Note that we are looking at each item in the list in a linear manner, which is why the algorithm is named so.
A Note on Efficiency
Linear Search is not a very effective algorithm, it looks through each item in the list, so the algorithm is directly affected by the number of items in the list.
In other terms, the algorithm has a time complexity of O(n). This means that if the number of items in the list is multiplied by an amount, then the time it takes to complete the algorithm will be multiplied by that same amount.
There are better search algorithms out there like Sentinel, Binary, or Fibonacci Search, but Linear Search is the easiest and the most fundamental of all of these which means that every programmer should know how to use it.
Implementing Linear Search Algorithm in Python
def linear_search(lst, target): for i in range(len(lst)): if(lst[i] == target): return i return -1
Let us look at the code,
- We are creating a function for linear search that takes in two arguments. The first argument is the list that contains the items and the second argument is the target item that is to be found.
- Then, we are creating a loop with the counter
i,
iwill hold all the indexes of the given list, i.e.,
iwill go from 0 to length of the list – 1.
- In every iteration, we are comparing the target to the list item at the index
i.
- If they are the same, then that means that we have found the target in the list at that index, so we simply return that index and end the loop as well as the function.
- If the entire list is checked and no items are returned, then the control will move out of the list, and now we are sure that the target item is not in the list, so we return -1 as a way of telling that the item was not found.
Let us look at how the algorithm will behave for an item in the list and another item that is not in the list:
The Output
Here, we send two items as the target: 99, which is in the list at index 4, and 12, which is not in the list.
As we can see, the algorithm returned the index 4 for 99, and -1 for 12. which indicates that 99 is at index 4, and 12 is absent from the list, and hence the algorithm is working.
Conclusion
In this tutorial, we studied a very easy and simple searching algorithm called the Linear Search.
We discussed how Linear Search works, we talked about its efficiency and why it is named “linear”.
Then we looked at how the algorithm is written in Python, what it does, and confirmed that by looking at the output of the code.
I hope you learned something, and see you in another tutorial. | https://www.askpython.com/python/examples/linear-search-algorithm | CC-MAIN-2021-31 | refinedweb | 724 | 74.12 |
Example project: PIR sensor and Domoticz API
- Ralph Global Moderator
I've done a small project on my WiPy 1.0 with a PIR sensor to turn on a lamp in my home using my existing Domoticz setup. I'm using this sensor. This project can be found on this repo. Below a small report of the project and the code.
The setup
For every motion trigger, a message is sent to the domoticz api to update a 'user variable'. The domoticz installation, which is running on a RasPi, has a lua script that triggers when this variable changes and turns one lamp on for 2 minutes. (It uses the rfxcom 433 tranceiver and cheap pieces of hardware in my lamps, but that's not relevant now) My kitchen seemed like a good place to implement this, since the lights there only need to be on when there is motion.
To make it run from the 3.3v out of the WiPy, I soldered a wire to bypass the 5v to 3.3v stepdown on the back of the PIR board. When the sensor triggers, it pulls the pin up for about 5 or 6 seconds, after which it drops back, ready for more triggers. This means I don't have to build in any mechanism to prevent sending http requests too often, which is nice.
Main.py
I kept everything very simple, so it's easy for others to grab and adept for their own purposes. Besides the sensor, I also listen to a button press on the expansion board to break the main loop, which I found useful for testing. To make it work with your domoticz, just replace <ip> and <basic hash> with your values.
import time from network import WLAN from machine import Pin from domoticz import Domoticz wl = WLAN(WLAN.STA) d = Domoticz("<ip>", 8080 ,"<basic hash>") #flags running = True button_pressed = False pir_triggered = False #callbacks def pirTriggered(pin): global pir_triggered pir_triggered = True def buttonPressed(pin): global button_pressed button_pressed = True pir = Pin('GP4',mode=Pin.IN,pull=Pin.PULL_UP) pir.irq(trigger=Pin.IRQ_RISING, handler=pirTriggered) pir = Pin('GP17',mode=Pin.IN,pull=Pin.PULL_UP) pir.irq(trigger=Pin.IRQ_FALLING, handler=buttonPressed) # main loop print("Starting main loop") while running: time.sleep_ms(500) if pir_triggered: pir_triggered = False result = d.setVariable('Presence:LivingRoom','1') print("HTTP Status: "+str(result)) elif button_pressed: button_pressed = False running = False print("Exited main loop")
Domoticz class
This implements a simple http get request and 2 domoticz api endpoints. Implementation consists of setting values of devices and user-variables.
I tried implementing the micropython-http-client class from balloob (I simply copied the class into my project), but I found that after 4 successful http requests it stopped working for some reason. So for now, I'm just using the bare sockets in the simplest way possible.
import socket class Domoticz: def __init__(self, ip, port, basic): self.basic = basic self.ip = ip self.port = port def setDevice(self, idx, command): print("Setting device "+idx+" to "+command) return self.sendRequest("type=command¶m=switchlight&idx="+idx+"&switchcmd="+command) def setVariable(self, name, value): print("Setting variable "+name+" to "+value) return self.sendRequest("type=command¶m=updateuservariable&vtype=0&vname="+name+"&vvalue="+value) def sendRequest(self, path): try: s = socket.socket() s.connect((self.ip,self.port)) s.send(b"GET /json.htm?"+path+" HTTP/1.1\r\nHost: pycom.io\r\nAuthorization: Basic "+self.basic+"\r\n\r\n") status = str(s.readline(), 'utf8') code = status.split(" ")[1] s.close() return code except Exception: print("HTTP request failed") return 0
Boot.py
I slightly adapted the wifi code to support fixed IP on different wifi networks
import os import machine uart = machine.UART(0, 115200) os.dupterm(uart) known_nets = { '<net>': {'pwd': '<password>'}, '<net>': {'pwd': '<password>', ") wl.init(mode=WLAN.AP, ssid=original_ssid, auth=original_auth, channel=6, antenna=WLAN.INT_ANT)
Plans:
- Making the http library from Balloob wor properly, or implementing a new http wrapper class
- Add more sensors like temperature and lux to this same project and send these values to domoticz
- Expand domoticz class to support the full api
EDIT
The API on the esp32 devices for Pin has changed. The pir.irq method has been replaced by pin.callback. The correct implementation for these devices would be:
pir = Pin('G4',mode=Pin.IN,pull=Pin.PULL_UP) pir.callback(trigger=Pin.IRQ_RISING, handler=pirTriggered) pir = Pin('G17',mode=Pin.IN,pull=Pin.PULL_UP) pir.callback(trigger=Pin.IRQ_FALLING, handler=buttonPressed)
- Morningstar
Can someone help me out on how to setup a domoticz for this project? And also the changes that are required to be made to the code.
- robmarkcole
@Ralph OK great. Not sure if you took this project any further (?) but I am working on something almost identical.
Cheers
- Ralph Global Moderator
@robmarkcole Correct, thanks for pointing this out. I added a note to this post now with the correct code.
- robmarkcole
@Ralph Does this need to be updated since callbacks are not handled with pin.irq (deprecated?) but pin.callback?
- Ralph Global Moderator
@Xykon You are right, I might as well have left it on the 5V.
I'm running it without the expansion board and only soldered one connector to the 5v out of an old usb cable that I stripped. I could have added another connector on hindsight, but at that point it seemed easier to use the free 3.3v out pin on the wipy and bypass the stepdown.
- Xykon administrators
Thanks a lot for sharing the project
I soldered a wire to bypass the 5v to 3.3v stepdown on the back of the PIR board
I'm curious why you did that instead of powering the PIR sensor from 5V through VIN? The only reason I could think of is that you are using a 3.7V LIPO to power the Wipy2. But if it's installed in an area where you have power anyway I'm wondering why you would rely on a battery powered solution. | https://forum.pycom.io/topic/240/example-project-pir-sensor-and-domoticz-api/7 | CC-MAIN-2018-17 | refinedweb | 1,002 | 57.47 |
Hey, I'm writing a network application, in which I read packets of some custom binary format. And I'm starting a background thread to wait for incoming data. The problem is, that the compiler doesn't let me to put any code throwing (checked) exceptions into
run()
run() in (...).Listener cannot implement run() in java.lang.Runnable; overridden method does not throw java.io.IOException
Caveat: this may not meet your needs if you have to use the exception mechanism.
If I understand you correctly, you don't actually need the exception to be checked (you've accepted the answer suggesting an unchecked exception) so would a simple listener pattern be more appropriate?
The listener could live in the parent thread, and when you've caught the checked exception in the child thread, you could simply notify the listener.
This means that you have a way of exposing that this will happen (through public methods), and will be able to pass more information than an exception will allow. But it does mean there will be a coupling (albeit a loose one) between the parent and the child thread. It would depend in your specific situation whether this would have a benefit over wrapping the checked exception with an unchecked one.
Here's a simple example (some code borrowed from another answer):
public class ThingRunnable implements Runnable { private SomeListenerType listener; // assign listener somewhere public void run() { try { while(iHaveMorePackets()) { doStuffWithPacket(); } } catch(Exception e) { listener.notifyThatDarnedExceptionHappened(...); } } }
The coupling comes from an object in the parent thread having to be of type
SomeListenerType. | https://codedump.io/share/BT3e2vwITOsY/1/how-to-throw-a-checked-exception-from-a-java-thread | CC-MAIN-2017-17 | refinedweb | 261 | 52.19 |
I will show you a good tip when you want to measure the time or cycles taken for your code.
By means of CMSIS ( Cortex Micro-controller Software Interface and Standard) which is an application software interface for Cortex-M MCU, you can do it so easily.
Here is the easy way of using CMSIS interface to measure cycles. Let’s do it.
Contents
CMSIS Interface
There are seven software interface standards in total, which are specified in a common and consistent software interface for the use of Cortex-M core devices or the peripherals for it.
- CMSIS-CORE
- CMSIS-Driver
- CMSIS-DSP
- CMSIS-RTOS
- CMSIS-Pack
- CMSIS-SVD
- CMSIS-DAP
With above interfaces, you can access the Cortex-M core, DSP instructions, embedded functions and even can access debug interface in common way.
CMSIS is a runtime system that allows access to the core.
With using this CMSIS, as you can access any Coretex-M device with common interfaces, it makes porting easier.
CMSIS-CORE
This time, I will use CMSIS-CORE out of these interfaces for measuring cycles taken for my code.
CMSIS-CORE is comprised of a HAL(Hardware Abstraction Layer), System exceptions, interrupt vectors and system initialization and so on. Those are the interfaces that allows access to the core of the main system.
Here are the CMSIS-CORE.
- HAL (Hardware Abstraction Layer)
- System exceptions, Interrupt vector
- Configuration of header files
- Initialization method of system
- Embedded function of instructions that doesn’t support by C/C++ complier
- SysTick timer initialization and start it
In large, it consists of six parts.
This time, I will use the sixth of them.
SysTick Timer
Let’s use it first without my long explanation.
First, you need to download MCUXpresso IDE and stall it. Also you need to download SDK for your board.
You can build the SDK according to the board you want. Once you download the SDK, you can import it by drag-and-drop the SDK in the “Installed SDK…”.
Now you can import any sample code projects.
CMSIS comes with any projects in default as you can see below picture, left hand side of red colored rectangle.
This time, I will use SysTick timer. There is SysTick_config() which is an in-line function in core_cm0plus.h.
SysTick timer is clocked by core clock. SysTick_config() sets the counter value and start the counter.
The usage of the timer is just simple! All you need to do is to set the counter value. The counter value is decremented by core clock and if it comes to zero, it generates an interrupt on SysTick_IRQn.
This time, I only want to measure the counter value for measuring cycle count. So, the counter value can be any value, unless it is more than the time your code takes. Otherwise, the SysTick timer would be expired and generate an interrupt before your code finishes.
For the usage of the interrupt, it can be used for a context switch of operating system, or so.
Here is the usage of the SysTick timer in-line function.
uint32_t SysTick_Config (uint32_t ticks ) Parameter [in] ticks The number of ticks between interrupts. Return 0 - Success 1 - Error
Measuring cycle count
Now that I measure the cycle count.
You need to install SDK by drag-and-drop .zip file of SDK into MCUXpresso IDE. And, Hello World sample program is imported from the SDK. How-to install the SDK, please refer to the other articles.
I prepare a simple code for the purpose of measuring cycle count. You can use it if you want.
You need to modify it in Hello World main() function as below.
int main(void) { uint32_t cnt=0; volatile uint32_t i; uint16_t ret; /* Init board hardware. */ BOARD_InitPins(); BOARD_BootClockRUN(); BOARD_InitDebugConsole(); /* Cycle count start */ ret= SysTick_Config(16000000);/*<---- Here starts the cycle count */ if(ret){ PRINTF("SysTick configuration is failed.\n\r"); while(1); } for (i=0; i<1000;i++ ); cnt = 16000000 - SysTick->VAL;/*<--- Here stops the cycle counting */ PRINTF("Result : %d Cycles\n\r",cnt); /* Cycle count ended */ while(1); }
Result of measuring the cycle count
In above picture, It measures the number of loops in the for-loop. Of course, the measured cycle would increase in proportion to the number of loops you set.
It also counts the IF sentence.
The result is 12048 cycles in total.
SysTick_config() sets the counter value and start it. And, in this code, I didn’t stop the SysTick timer, but you can stop it by clearing SysTick.CTRL Enable bit register.
Additional info: SysTick interrupt
SysTick_Config generates an interrupt after the counter expires according to the tick you set.
Let’s see the how-to.
The sample code is written in the CMSIS document.
#include "LPC17xx.h" uint32_t msTicks = 0; /* Variable to store millisecond ticks */ void SysTick_Handler(void) { /* SysTick interrupt Handler. msTicks++; See startup file startup_LPC17xx.s for SysTick vector */ } int main (void) { uint32_t returnCode; returnCode = SysTick_Config(SystemCoreClock / 1000); /* Configure SysTick to generate an interrupt every millisecond */ if (returnCode != 0) { /* Check return code for errors */ // Error Handling } while(1); }
Summary
You can count cycles easy with CMSIS-CORE. For me, I sometimes want to measure time or cycles taken to operate a specific code.
In the case of Cortex-M4F, you can use CYCLE COUNT in CPU register when you debug, but it is not the case to Cortex-M0/Mo+. If you want to count the cycle on Cortex-M0/M0+, you can use this tips of SysTick Timer. | https://mcu-things.com/blog/cmsis-for-cycle-count/ | CC-MAIN-2020-05 | refinedweb | 912 | 65.83 |
Introduction to Natural Language Processing (NLP) using TensorFlow in Python
Before we begin, let’s consider a scenario where you want to communicate very important information to the machine but due to the limited vocabulary of the machine, the machine fails to understand you.
Feel stuck now? The answer to your problem here is Natural Language Processing!
Now one may wonder what is Natural language processing? You may have several questions regarding the same. I am sure all your concerns and questions are clear in this article!
Introduction to Natural Language Processing (NLP)
Natural Language Processing (NLP) gives computers the ability to do tasks involving the human language and comes with a diverse vocabulary.
NLP is used at a lot of places including Translation of text into various languages and extracting vital information from text.
The flowchart below represents a simple way to implement NLP:
Figure1: Flowchart representing basic Natural Language Processing
Let’s dive more into the NLP concepts.
Some Important NLP Concepts
There are various concepts involved when one talks about Natural Language Processing. The same are as follows:
- Tokenization & Stopwords Removal
- Stemming
- Building a Vocabulary
- Vectorization
We will cover each of them in brief.
1. Tokenization & Stopwords Removal
Tokenization is simply breaking down the text into separate words. And stopwords Removal implies the removal of the words which are not relevant for our processing. It also helps in making our dataset smaller and hence making it faster to be processed.
2. Stemming
Stemming is converting words into their base forms. For instance let’s say we have the word ‘sleeping’, to make our processing easier we can convert the word to ‘sleep’ (the base form of the original word). This process is also known as ‘Lemmatization’.
3. Building a Vocabulary
This concept helps in building a common vocabulary which includes a list of all the unique words in the text we originally had. The reason behind doing this is that it will be helpful when our code has to convert the words to numbers.
4. Vectorization
In this concept, we convert our words or sentences into vector form. The size of the vector is always greater than the actual length of the sentence as the vector size is equivalent to the size of the vocabulary. After doing this we can identify all the unique as well as the most frequent words in the vocabulary.
Implementing some NLP Concepts in Tensorflow with Python
The first step is to always import the necessary Python modules. The code for the same is shown below:
import tensorflow as tf from tensorflow import keras from tensorflow.keras.preprocessing.text import Tokenizer
Tokenization using TensorFlow
As said earlier, Tokenisation is simply breaking down sentences into words. The code to implement the same is shown below:
sample_texts=['ValueML clears all my concepts', 'I love clearing my coding concepts through ValueML', 'All Machine Learning and Deep Learning concepts are available here'] token = Tokenizer(num_words=1000) token.fit_on_texts(sample_texts) words=token.word_index print(words)
The code is described line by line below:
- 1: Created a list having a number of sample sentences that you need to process further.
- 4: Initializing tokenizer object which has a maximum word limit as a parameter.
- 5: We then fix the tokenizer created in line 4 on the same sentences created in line 1.
- 6: Assigning values to words so that you can process the words later.
- 7: Prints the dictionary having the words as keys and the values are the values assigned to the words by the tokenizer.
The output of the tokenizer is shown below:
{'concepts': 1, 'valueml': 2, 'all': 3, 'my': 4, 'learning': 5, 'clears': 6, 'i': 7, 'love': 8, 'clearing': 9, 'coding': 10, 'through': 11, 'machine': 12, 'and': 13,'deep': 14, 'are': 15, 'available': 16, 'here': 17}
Converting more sentences to numbers
For machines to understand the text, we have to convert the text into numbers to convert it to machine-understandable form. The code for the same is shown below:
input_text = ["I love to go through ValueML","ValueML is best for Machine Learning concepts"] text_to_num=token.texts_to_sequences(input_text) print(text_to_num)
The output of the code above is shown below where the input text sentences are converted to numerical form.
[[7, 8, 11, 2], [2, 12, 5, 1]]
BUT now the problem with this approach is that the words that are not included in the original vocabulary are not handled properly. The same is shown in the figure below:
Figure2: The problem of missing words in the vocabulary that remain unhandled
Handling the issue we are facing:
The issue can be handled in multiple ways:
- Having a huge vocabulary originally
- Using the out of vocabulary parameter when you initialize the tokenizer ( only efficient for small data )
The first approach is not manually possible without a proper dataset. So for now we will make use of the out of vocabulary parameter. The code for the same is shown below:
new_token = Tokenizer(num_words=1000,oov_token="<I am missing!>") new_token.fit_on_texts(sample_texts) words=new_token.word_index print(words)
The ‘oov_token’ is the out of vocabulary (OOV) token and it can be assigned anything one wishes to, unless and until it doesn’t confuse the machine with any other word from the dictionary. So, make it as unique as possible.
This time the output contains the default value for the OOV words as shown below:
{'<I am missing!>': 1, 'concepts': 2, 'valueml': 3, 'all': 4, 'my': 5, 'learning': 6, 'clears': 7, 'i': 8, 'love': 9, 'clearing': 10, 'coding': 11, 'through': 12, 'machine': 13, 'and': 14, 'deep': 15, 'are': 16, 'available': 17, 'here': 18}
Now when one converts the text into numbers, all the words that are missing from the vocabulary are assigned the value of the OOV word value from the dictionary shown above.
The output of the same is shown below:
[[8, 9, 1, 1, 12, 3], [3, 1, 1, 1, 13, 6, 2]]
Summing up what we learned!
Congratulations! You have reached the end of the article!
Now let’s revisit what we learned today. We first gained knowledge about what exactly Natural Language Processing (NLP) is and why is it needed. Then we learned about a few concepts or terms related to NLP and in the later sections, we learned to implement a few of them using TensorFlow.
And yes there is a lot more to explore in NLP!
Stay tuned to learn more!
Great content!
Very excited to learn more about NLP!
This is sooo knowlegleablee😍😍💕
Keep posting such things😍❤️
Definitely learned new things about NLP which I had not focused on . Very informative | https://valueml.com/introduction-to-natural-language-processing-nlp-using-tensorflow-in-python/ | CC-MAIN-2021-25 | refinedweb | 1,096 | 50.26 |
Query optimization plan node. More...
#include <sql_select.h>
Query optimization plan node.
Specifies:
Set the combined condition for a table (may be performed several times)
Sets join condition.
Keys checked.
multiple equalities for the on expression
Keys with constant part.
Subset of keys.
The set of tables that this table depends on.
Used for outer join and straight join dependencies.
Flags from SE's MRR implementation, to be used by JOIN_CACHE.
The set of tables that are referenced by key from this table.
Pointer to the associated join condition:
pointer to first used key
Join buffering strategy.
After optimization it contains chosen join buffering strategy (if any).
Used to avoid repeated range analysis for the same key in test_if_skip_sort_order().
This would otherwise happen if the best range access plan found for a key is turned down. quick_order_tested is cleared every time the select condition for this JOIN_TAB changes since a new condition may give another plan and cost from range analysis.
true <=> AM will scan backward
Keys which can be used for skip scan access.
We store it separately from tab->const_keys & join_tab->keys() to avoid unnecessary printing of the prossible keys in EXPLAIN output as some of these keys can be marked not usable for skip scan later. More strict check for prossible keys is performed in get_best_skip_scan() function.
points to table reference
The maximum value for the cost of seek operations for key lookup during ref access.
The cost model for ref access assumes every key lookup will cause reading a block from disk. With many key lookups into the same table, most of the blocks will soon be in a memory buffer. As a consequence, there will in most cases be an upper limit on the number of actual disk accesses the ref access will cause. This variable is used for storing a maximum cost estimate for the disk accesses for ref access. It is used for limiting the cost estimate for ref access to a more realistic value than assuming every key lookup causes a random disk access. Without having this upper limit for the cost of ref access, table scan would be more likely to be chosen for cases where ref access performs better. | https://dev.mysql.com/doc/dev/mysql-server/latest/classJOIN__TAB.html | CC-MAIN-2021-43 | refinedweb | 369 | 64.51 |
#include <c4d_baselist.h>
An array of C4DAtom objects.
Gets the number of atoms in the array.
Checks how many elements in the array match type and/or instance.
Gets the atom at the position idx in the array.
Appends obj to the array.
Clears the atom array.
Removes obj from the array.
Copies all atoms in
*this to *dest.
Copies all atoms in
*this to *dest filtered by type and/or instance.
Gets the user ID of the array.
Sets the user ID of the array to t_userid.
Gets the user data pointer stored with the array.
Store a user data pointer with the array.
The preferred object is the one to use for operations that require a single object.
For example, if the user drags many objects to a link field this is used.
Sets the preferred object.
Removes objects that do not match the filter given by type and instance.
Removes all objects that has a parent (or ancestor) in the array.
Appends all objects in src to the array.
Compares the array with cmp. | https://developers.maxon.net/docs/Cinema4DCPPSDK/html/class_atom_array.html | CC-MAIN-2021-17 | refinedweb | 177 | 79.56 |
Hi everyone,
im a beginer with flash pro so please be gently
I got a project in mind and I wanna know if it is possible or not.
I have 4 label in my project.
First Label got 2 text box (Date.text and Name.text)
Second label got 2 text box too( Adress.text and Reason.text)
Third label got 3 text box (Calltime.text, Arrivedtime.text and Leavingtime.text)
Fourth label got 1 text box called «Details.text»
I also have a Button named Savebutton (Viewable on he four label).
What I want is, When I press on the «Savebutton», I want flash to create a new .txt document in a certain folder on my computer. I also want it to name that document with the First textbox of my project (in this case, «Date.text») but also put this textbox in the «.txt» document.
So, when i'll open my «23june2012.txt» file, I'll see something like that:
Date:23june2012
Name:Bill
Adress:8765 Old street
Reason:Car accident
Calltime:17:05
Arrivedtime:17:50
Leavingtime:20:37
Details:A blue honda Civic 2010 ''364 VCG'' crashed on a Red Mazda 3 ''321 JHK''.
So I juste wanna know if we can create this kind of things with Flash - Actionscript 3.
Thanks a lot
Bambi
No one knows? Cmon Kglad or Ned Murphy! I know that you guys can answer this one easily
You can try the below code. It works for.text();
}
I took the textfields names as you mentioned. (.txt file is saved in local store of the program file. Program file location will be dependent on the storage directory you gave.)
Here I took applicationStorageDirectory for storing a file. You can store your file in different directories. For that once you check File class in adobe. You can get the total logic. And one more thing is when using File in your program you should set the publish settings to air.
Thanks bhargavi reddy,
ill try it as soon as I can
and ill tell you if it works fine for me too
Thanks again
Well, maybe it can work but im too noob with flash to make it work...When I copy paste your code, I get 13 error lol...And actually, I have no freaking idea of what are these errors and how to fix it... juste to let you know, on my ''Action keyframe'' (the one where we put the codes), I only have your script that your just gave me... nothing else... Am I suposed to create some variables or something else? Well... Here's my entire code
Retour.addEventListener(MouseEvent.CLICK, fl_ClickToGoToAndStopAtFrame_retour);
function fl_ClickToGoToAndStopAtFrame_retour7(event:MouseEvent):void
{
gotoAndStop();
}
So, any idea of what's wrong lol? :S If it's too complicated, just say so and ill let it go lol. Ho and by the way, I made an error in my description, the names I gave you aren't textbox named Date.text but only the Label name of my text box and they are called ''Date'' only, no .text after it....Maybe that's the problem? Well, thanks for your help
Yes You have to create some user interface for my code.
First create some labels and textfields as follows
Go to stage and click on the text tool. In properties select the type as static text and make all those labels as in the above screen. Now again select text tool and select input text as the type and give instance name for each text field as I mentioned in the code (for example, for the textfield beside the Date label should be named as Date like that) and under character properties, select show border around text. Then take button component and drag it to the stage. Give the instance name as Save and in properties change the label as Save. Now your user interface is completed.
I used Date.text in code means Date is the instance name of the textfield and Date.text is the text you entered in the textbox. Text fields names should be matched with the code.
You can also use FileReference to save the file in your system.
Damn, it wasn't on Adobe Air publication... Now it kind of work but still get this error when i press the ''Save'' button,,
Error: Error #2038: File I/O Error.
at flash.filesystem::FileStream/open()
at Textsavingtest_fla::MainTimeline/saveDetails()[Textsavingtest_fla.Mai nTimeline::frame1:16]
Any idea of what it is?
And another thing, Why can't I use any number in my textfields and some letter are missing too... The only letter I can use are the follow one (acdeilnoprstuv)
Second thing, how to chose Exactly where to put my .txt files? In exemple, I want to pu it in this location ''C:\Users\P-O\Desktop\Text test'', how to do it?
By the way, tnx alot for your help, I was giving up but now i'll do it to the end
You should select "Use Device Fonts" property from anti-alias dropdown list which is under Character in properties of textbox. So that you will be able to enter all chars.
Using FileReference you can put .txt file in your mentioned location. Use save method for FileReference instance, once search it in google (filereference in as3). You can understand how to use it.
Use FileReference instead of FileStream as follows:
import flash.events.MouseEvent;
import flash.net.FileFilter;
import flash.filesystem.File;
import flash.net.FileReference;
Save.addEventListener(MouseEvent.CLICK,saveDetails);
function saveDetails(e:MouseEvent):void
{
var file:FileReference = new FileReference();
var str:String = "Date:"+Date.text +"\n"+ "Name:"+Name.text +"\n"+ "Address:"+Address.text +"\n"+ "Reason:"+Reason.text +"\n"+ "Calltime:"+Calltime.text +"\n"+ "Arrivedtime:"+Arrivedtime.text +"\n"+ "Leavingtime:"+Leavingtime.text +"\n"+ "Details:"+Details.text;
str.replace("\\n","\n");
file.save(str,Date.text.replace(/^\s+| \s+$/g, "")+".txt");
}
I am not practically check this one. First understand FileReference and try this one. If there are any mistakes in my code you can find those easily.
Ok, I'll search for it and i'll try to understand, i'll give you feedback later
Tnx again
Hey! Well, actually I made some research and learn a littlebit about Filereference and now I understand how it works (basically)
So now, with this script, I can save things onto a textfile but I still got a little problem. The only way to save things is if I dont put anything in ''Date'' textfield... As soon as I put something, I cant save... it's telling me:
Error: Error #2087: The FileReference.download() file name contains prohibited characters.
at flash.net::FileReference/_save()
at flash.net::FileReference/save()
at Textsavingtest_fla::MainTimeline/saveDetails()[Textsavingtest_fla.Mai nTimeline::frame1:16]
So I know that the problem is on this line
file.save(str,Date.text.replace(/^\s+| \s+$/g, "")+".txt");
I dont realy know what it is suposed to do But I guess that this is the code that Put the file name as ''Date'' and add a ''.txt'' after the title... Because, when I save something, I have to choose a name for the file and I have to manually add ''.txt'' after the name.. So do you know where's the problem? Maybe a code problem.. Well, still trying to understand what this line''file.save(str,Date.text.replace(/^\s+| \s+$/g, "")+".txt");'' is suposed to do but whit your help, it's gonna be faster. Thanks again, and after this one, I wont have any other question
Well.. I guess
Thank's
Well, didn't found what's the problem but I Cutteed it
That was my old code '' file.save(str,Date.text.replace(/^\s+| \s+$/g, "")+".txt"); '' and replaced it with this one ''file.save(str);''.. Dont really know why but actually, I can write in Datetextfield so it's not a problem anymore... That thing is that i have to manually put my file name and add a ''.txt'' after it but, it's better than nothing
So, I thank you very much and I hope i wasn't too anoying
Thanks again for your help
file.save(str,Date.text.replace(/^\s+| \s+$/g, "")+".txt");
I will explain this line.
str is the data that you want to save in a file. Date is the instance name of the date textfield. Date.text is the text that you given in the date text field i.e., date. You want to save your file with the naem of date right? So that I took the text given in the datefield to set the title for file. First check whether you give the instance name of particular textfield as Date or not. If so change that one.
Here I used replace function to trim the right and left side spaces in the string i.e, in date. And I concat .txt extension for saving the file as txt file.
If you give the date as 3july2012. The above line should be
file.save(str,3july2012.txt);
As per the syntax 1st argument is the data and 2nd argument is the filename.
Simply I am using string functions.
It still doesn't work but it's ok, I'll manually put the filename and the ''.txt'' after it. Because, if i put this code (file.save(str,Date.text.replace+".txt");), it put the name ''function Function() {}'' and it save in .txt file but as soon as I change the name, the program doesn't save it in .txt file... so I have to manually put it so it's the same thing as the first option... Well, Great tnx to you!
file.save(str,Date.text.replace(/^\s+| \s+$/g, "")+".txt");
If it's not working, and giving you that odd file name with function in it, you might try breaking out that inner code to remove it from inside the save method like so:
var fName:String = Date.text.replace(/^\s+| \s+$/g, "")+".txt";
file.save(str, fName);
Thanks for you help too
But it still giving me this error when i press the save button
Error: Error #2087: The FileReference.download() file name contains prohibited characters.
at flash.net::FileReference/_save()
at flash.net::FileReference/save()
at Textsavingtest_fla::MainTimeline/saveDetails()[Textsavingtest_fla.
It's the samething with the original code... I really dont know what I am doing wrong...
I also checked it. But no use. I too get the same error. I didn't find the solution. I hope i will be back with solution or atleast help.
Once check behaviour property under paragraph of textfield. It must be single line. All are working fine but it doesn't detect \n. I am unable to find the reason. For that I put "," instead of "\n" in str. | http://forums.adobe.com/message/4524640 | CC-MAIN-2013-48 | refinedweb | 1,802 | 77.13 |
The Microsoftî . NET Framework versions 1. 0 and 1. 1 represented major changes in software development. However, one important thing that did not change much was support for distributed transactions.
John Papa
MSDN Magazine February 2005
View Complete Post
The System.Transactions namespace of the Microsoft .NET Framework makes handling transactions much simpler than previous techniques. Read all about it this month.
MSDN Magazine November 2006
Systems that handle failure without losing data are elusive. Learn how to achieve systems that are both scalable and robust.
Udi Dahan
MSDN Magazine July 2008
Here we build a solution that fits the Entity Framework into an n-tier architecture that uses WCF and WPF and the MVP pattern. | http://www.dotnetspark.com/links/2607-data-points-adonet-and-systemtransactions.aspx | CC-MAIN-2017-30 | refinedweb | 116 | 60.92 |
Using Cognitive Services to make museum exhibits more compelling and track user behavior.
Key technologies used
- Windows 10 IoT Core (herein referred to as Windows IoT)
- Raspberry Pi 3 with a USB webcam and speaker (RP3)
- Universal Windows Platform (UWP)
- Microsoft Cognitive Services Face API (Face API)
- ASP.NET Core Web API (Web API)
- Microsoft Azure App Service (App Service)
- Azure Table storage (Table storage)
- Microsoft Power BI (Power BI)
- Microsoft Office Excel 2016 (Excel)
Core team
- Martin Kearn – Senior Technical Evangelist, Microsoft
- Martin Beeby – Technical Evangelist, Microsoft
- Paul Foster – Senior Technical Evangelist, Microsoft
- Joe Collins – Head of Systems and Analysis, Black Radley
- Richard Wilde – Developer, Black Radley
- Ryan O’Neil – Developer, Black Radley
Source code
The full solution is open-sourced under the GNU General Public License v3.0 license on GitHub.
Customer profile
Black Radley is a consultancy organization that works with public services to inspire and invigorate systems and people. Black Radley works closely with museums within the United Kingdom on both technical and commercial projects.
For this project, we worked closely with the Shrewsbury Museum & Art Gallery (SMAG), a provincial museum in Shrewsbury, Shropshire, England. SMAG offers you the chance to discover a mix of the old, the new, and the curious all within an extraordinary set of buildings. From a medieval town house to an early Victorian Music Hall, exhibits span more than 750 years of history.
Problem statement
Provincial museums like SMAG are under increasing pressure to be more compelling and profitable, and to ultimately drive higher attendance from the general public.
Black Radley has had a desire to make museum exhibits more intelligent for some time. Joe from Black Radley was inspired to explore Microsoft technology after attending the Hereford Smart Devs community meetup in February 2017 where Martin Kearn presented on Microsoft Cognitive Services.
“We had attempted this project using Linux and several open source tools and found that setting up our development environment proved to be a challenge. Working with Microsoft and using Visual Studio, we were able to accelerate through to a working solution within a couple of days.” —Joe Collins, CTO, Black Radley
Black Radley saw two primary ways to help museums achieve this goal: make exhibits more compelling, and better understand patrons.
“Public funding for museums is likely to decline in the foreseeable future. To maintain their services, museums and galleries are trying to attract more visitors and understand the visitors’ experience. This solution provides a tool to help museums enhance the visitor experience and to understand how visitors respond to exhibits.” —Joe Collins, CTO, Black Radley
State of the art for exhibits
Most museum exhibits have a small amount of printed information accompanying them for patrons to read. This information has to be written in a plain, generic way that is accessible for all audiences; therefore, it is often very factual and not particularly compelling, especially for younger audiences.
Some exhibits may also have passive infrared (PIR) sensor-based, motion-triggered audio descriptions, videos, or other media that give some information about the exhibit as a patron approaches. While these multi-media descriptions do grab attention and give information in a more compelling way than printed text, they still have to be created in a generic way that is suitable for all audiences. Additionally, the PIR sensor systems are relatively “dumb” in that they cannot detect when a person walks off. This causes the descriptions to continue playing until completed, regardless of whether anyone is listening.
Patron data
Most museums capture relatively little information around visitors and their activity, such as:
- Patron age
- Patron gender
- Which exhibits are the most popular and attract the most interaction
Without this sort of data, it is up to the museum curators to estimate, based on their experience, how to best lay out the exhibits, and to make recommendations about which exhibits certain demographics may find more interesting.
“Museums usually count and segment visitors so they can report to funding bodies and understand their customers. Many museums conduct a rolling program of visitor surveys to understand the visitor experience. The data is used to guide design changes to the museum exhibitions and to support funding applications.” —Joe Collins, CTO, Black Radley
Solution and steps
There were two primary goals to the solution:
- Improve exhibits by using cognitive intelligence to give a tailored experience based upon demographic information.
- Track and report on how patrons interact with exhibits and the museum as a whole to allow the museum to report to funding bodies and better understand their customers.
Prerequisites
- Install Visual Studio Code.
- Obtain the key for the Cognitive Services Face API.
- Obtain an Azure subscription to use Azure App Service and Azure Table storage.
- Set up Windows 10 IoT Core on your Raspberry Pi 3.
- Set up an account for Power BI.
Intelligent Exhibit app and device exhibit. This device would do several things:
- Detect the presence of faces gazing at or interacting with the exhibit.
- Greet the patron/patrons appropriately.
- Take a photo and obtain the rough age, gender, and emotional state of each face by using Face API.
Play a suitable audio description to match the age and gender of the patron, or play a generic audio description for groups.
For example, when looking at an exhibit that is a panoramic painting of Shrewsbury, a patron in the 12–17-year-old demographic will hear audio that is pacey and enthusiastic, and the script will include age-appropriate cultural references to an iPhone. You can listen to a sample MP3 here: Teenage demographic audio.
The 55–64-year-old demographic will hear audio that is more formally delivered, and the script will include more specific details. Listen to a sample MP3: Senior demographic audio.
- Detect when patrons depart, and stop the audio description.
The IoT device was a Raspberry Pi 3 running Windows IoT Core, which runs a UWP app that does the initial face detection, takes the photo, and plays the audio descriptions. The Raspberry Pi is equipped with a webcam and a speaker, but no screen or other peripherals.
Following is a photo of the device and hardware (the screen is for debugging only).
For the purposes of the initial workshop, we used a mocked up cardboard box to contain the Raspberry Pi and other hardware when it is in situ with the exhibit, but going forward this would have a much better, more discreet finish.
Following is a photo of the cardboard casing.
This photo shows the cardboard casing in situ with an exhibit at SMAG.
Patron data capture and analysis
For museums to gain more insight into how patrons interact with exhibits and traverse the museum, the solution also tracks faces as they are sighted throughout the museum.
When the Intelligent Exhibit device captures the details of a face, we store the unique face ID and add it to a list of faces for that day. We store the face data with the time, exhibit location, and device details so that we are able to determine whether a face has been seen before on that day, and if so, at which exhibit and at which time.
The solution logs each sighting so that the data can be analyzed at a later date. Using this data, we can track key data about each patron, including:
- Approximate age, gender, and emotional state for each patron at each sighting.
- Which exhibits they visited and in which order.
- How long they lingered at each exhibit.
These data points are extremely valuable in terms of helping museums understand patrons so they can offer better services to them.
“Using a device like this takes visitor data collection to a new level, providing museums with much more detailed information about visitor behavior. This would allow museums to identify hotspots within the museum and to spot trends as they develop.” —Joe Collins, CTO, Black Radley
The following section explains in more detail exactly how the data is logged and searched and how the primary key is created.
Technical delivery
The entire solution is a relatively complex solution involving multiple technologies.
This diagram shows the high-level architecture that is explained in each of the following sub-sections.
The key steps of the solution were to create a UWP app that would run on an RP3 device running Windows IoT. This device would be integrated into exhibits and set to capture images when a person (face) is detected. The photo would then be sent to a proxy Web API that passes the image through the Face API to gather data on emotion, age, gender, and other data points before logging the data in Table storage for later analysis and passing the data back to the UWP app. The UWP app would then tailor the audio description about the exhibit based on the data gained from the API.
Windows 10 IoT Core and UWP
“Windows 10 IoT Core and UWP provided the fastest method of getting a working initial solution. It offered a familiar development experience so that our developers could be productive from day one.” —Joe Collins, CTO, Black Radley
The UWP app uses a webcam to capture faces, detect faces, post information to the Face API, and display a different audio file based upon the response from the API.
The main entry point for the application is WebcamFaceDetector.xaml inside of App.xaml.
This page has the following responsibilities:
- Initialize and manage the
FaceTrackerobject.
- Start a live webcam capture, and display the video stream.
- Create a timer and capture video frames at specific intervals.
- Execute face tracking on selected video frames to find human faces within the video stream.
- Acquire video frames from the video capture stream and convert them to a format supported by
FaceTracker.
- Retrieve results from
FaceTrackerand visualize them.
- Call the API to get demographic information from video frames that contain faces.
- Request audio to be played by the
VoicePlayerbased on demographic information.
The
FaceDetector is initialized in the
OnNavigatedTo method. The
FaceDetector class is part of the namespace
Windows.Media.FaceAnalysis and can be used to detect faces in a SoftwareBitmap (uncompressed bitmap).
if (faceTracker == null) { faceTracker = await FaceTracker.CreateAsync(); ChangeDetectionState(DetectionStates.Startup); }
We used this class as a fast on-device way of establishing the presence of faces. After faces are detected, the proxy Web API (explained in the following section) is used to get more in-depth information. This ensures that our device responds to faces quickly while also reducing the amount of network chatter that would be required to process every frame using off-device APIs.
Despite initial reservations about performance, we found that we could actively detect faces from a live video stream while keeping the RP3 CPU at well under 50% capacity.
During initialization, we changed the
DetectionState to
Startup. Throughout the code, there are references to the
DetectionState. This is a state machine that is used to determine if the
FaceDetector should be analyzing frames or not. We required this state machine because we wanted to have a way to stop processing frames while we made API calls; we also used it to determine if a patron moves away from an exhibit and pauses the audio.
Calling
ChangeDetectionState with
DetectionStates.Startup initializes the webcam and starts face detection. We also changed
DetectionState to
WaitingForFaces.
The
StartWebcamStreaming method creates a new
MediaCapture class and ties the output to
CamPreview, which is a
CaptureElement XAML control that is visible on the XAML page. We use this for debugging; however, to preserve system resources, you can switch it off by setting TurnOnDisplay to false in the resources file. Lastly, we call
RunTimer, which starts a timer that we use to process the video frames for face detection.
The timer uses
timerInterval, which is set by values retrieved from
TimerIntervalMilliSecs in Resources.resw (250 ms). On each tick of the timer, we call
ProcessCurrentStateAsync. This method uses
DetectionState to determine if faces are present and what to do in various circumstances.
Inside
ProcessCurrentStateAsync, if the
DetectionState is
DetectionStates.WaitingForFaces, we process the current video frame using
ProcessCurrentVideoFrameAsync(). If this API returns a face, we know that a face has been detected, so we change the
DetectionState to
DetectionStates.FaceDetectedOnDevice.
Inside
ProcessCurrentStateAsync, if the state is
FaceDetectedOnDevice, the following logic runs. We check to see if the API was called in the last 5 seconds; this avoids the device getting stuck in a loop if the user leaves and then re-enters the frame.
The code waits a few seconds and then plays a hello audio file called
VoicePlayer.PlayIntroduction(), and then posts the image data to the proxy API by using
PostImageToApiAsync. During this pause and the duration of the audio file, we would hope that we have received a response from the proxy API with the demographic information; we are then able to determine which audio playlist to play.
DetectionState is changed to
ApiResponseReceived.
Inside
ApiResponseReceived, we run some more logic, and then if we still have a face, we ultimately call the play method of the
VoicePlayer class. This method takes a
DetectionState argument that is used to determine the audio to play. We take the age of the oldest face in the image and play the audio file that corresponds to that face.
To play the WAV file, we use the
PlayWav method, which takes all the WAV files in the playlist group as an argument.
Inside
PlayWav, we use
MediaPlayer and
MediaPlaybackList to play our audio. Rather than an exhibit having a single audio file for each demographic group, we have multiple audio files per group. We add these to a playlist. That way if a user leaves the frame and walks away from the exhibit, we can stop the audio at the end of a sentence rather than randomly during playback.
To handle the event that the user has left the exhibit, we have a detection case called
WaitingForFacesToDisappear. When the audio is playing, this is the state that the system should be in. If it detects that faces disappear, it will call
VoicePlayer.Stop(). This method sets a variable called
StopOnNextTrack. When the playlist changes the item it is playing, it calls the
CurrentItemChanged event, which checks the
StopOnNextTrack event. If it finds it, it will stop the audio.
You can see the full code for the UWP app on GitHub.
There are naturally some concerns over privacy; however, the device makes no record of visitor images. No images are stored, so it is not possible to review the records of who visited the exhibit. It is only possible to determine the estimated age and gender of visitors.
Proxy API
Because we potentially needed to use several Cognitive Services APIs, we decided to create a custom proxy API that the Intelligent Exhibit app would interface with via a single call. This allows the logic of the UWP app to be simple and lightweight.
This API was created by using ASP.NET Core 1.1 Web API in C#, and is contained within the Server solution.
There is a single controller called
PatronsController with a single public method called
Post, which responds to
HttpPost requests to
<server>/api/patrons.
The API also accepts a body containing a byte array that represents an image containing faces. For this to work, the client must make the request with the
Content-Type:application/octet-stream header.
The API uses a repository pattern with ASP.NET Core Dependency Injection to interact with the Cognitive Services and Azure APIs. The use of patterns was chosen to separate the application logic from the logic required to interact with the third-party APIs.
The
Post function broadly follows this logical flow:
Cast the request body to a byte array representing the image passed from the client.
Get or create a face list for today (see the Face lists section).
Get the faces in the images by using the Cognitive Services Face API - Detect (see the Face detect section).
Enumerate the faces.
Get the best matching face in today’s face list if there is one (see the Similar face matching section); otherwise, add this face to the face list as a new face.
Create a
Patronobject to pass back to the client containing the face data and other data points required by the client.
Store the
Patronobject in Azure Table storage (see the Azure Table storage section).
Return the
Patronobject to the client.
The interactions with the Face API are very simple, and there are plenty of examples online for how to do this. You can see the full code for the
FaceApiRepository on GitHub.
You will also see more detail about the Face API in subsequent sections of this document.
ASP.NET Core user secrets and app settings
To avoid putting API keys into the public GitHub repository, we used a tool called Secret Manager to store local copies of API keys that can be exposed via the
IOptions<AppSettings> interface in the code and App settings on Azure App Service.
To use this tool, the Microsoft.Extensions.Configuration.UserSecrets NuGet package is required.
After it’s installed, you can store API keys locally on your machine by using this command:
`dotnet user-secrets set AppSettings:EmotionApiKey wa068ba186e145bw9a57cue59bd53799`
The name of the app setting as referred to in your code is
EmotionApiKey, and
wa068ba186e145bw9a57cue59bd53799 is the value (this is a made-up key for the sake of this write-up).
After the keys are stored, they can be accessed in code by using the ASP.NET Core configuration API with options and configuration objects. The code looks like this:
_appSettings.EmotionApiKey.
When the application is published to Azure App Service, the key can be stored in an application setting for App Service, which is accessible via the Azure portal. This Azure Friday video has some detail about how to configure this.
One thing that caused problems for us in the solution is that the key name in Azure must also include the
AppSettings prefix. It should look like this:
Face detect
The proxy API initially uses the Face API to detect faces within the image passed to it by the client device. You can read all about this function at Face API - Detect.
As with the entire solution, a repository pattern has been used to separate concerns between the proxy API and the services it calls. A
FaceApiRepository class contains a function called
DetectFaces, which performs the task of sending the image to the Face API and parsing results.
This operation is relatively straightforward and follows the following logical flow:
The image is sent to the Cognitive API as a POST request to this endpoint:
- The image is sent as a binary file in the body of the request.
The following headers are added to the request:
Ocp-Apim-Subscription-Key:<The API key for the Face API>
Content-Type:application/octet-stream
The Face API accepts several request parameters that allow additional data to be returned. By default, the proxy API requests the face ID and all attributes.
- The response is parsed as JSON and returned to the API controller.
You can see the full code for the
DetectFaces function on GitHub.
Face lists
To meet the objective of recognizing whether a patron has been seen before, the solution makes use of the Face API - Face List feature. A face list is a list of up to 1000 faces that have been detected by using the Face API - Detect function and permanently stored.
Face lists are a prerequisite to the Face API - Find Similar function.
The limitation of just 1,000 faces per list was considered in detail, and we felt that in the specific context in which the solution is intended, it is unlikely that more than 1,000 patrons would visit in a single day (at least for the museums that this solution is targeted towards). Therefore, a design decision was made to create a new face list for each day using the
ddMMyyyy naming schema.
On this basis, the first task that is performed is to get or create the face list for the current day. If the solution was ever to scale to a location that has 1000+ visitors in a day, the face lists could be created on an hourly or half daily basis instead.
As with the entire solution, a repository pattern has been used to separate concerns between the proxy API and the services it calls. A
FaceApiRepository class contains a function called
GetCurrentFaceListId, which checks if there is a face list for today, and if not, creates it and then returns the ID. This makes use of the Face API - Create a Face List function. The code attempts to create a face list by using the date ID detailed earlier. If one already exists, a 409 status code is returned, which the code ignores.
You can see the full code for the
GetCurrentFaceListId function on GitHub.
Similar face matching
A core part of the server-side solution is to check whether a face has been seen before. This is critical to the goal of tracking patrons as they traverse the museum.
To do this, the solution performs two operations at the controller level:
- Finds similar faces.
- Adds detected faces to today’s face list if there are no matches.
After the controller has detected faces, it enumerates each one and calls the
FindSimilarFaces function of the
FaceApiRepository class, which returns a list of faces that match the detected face from the current face list. The controller uses the best matching face for the next steps of the solution (Table storage).
If there are no matches, the controller considers the detected face to be a new face within the context of today’s face list and will add the detected face to the face list.
As with the entire solution, a repository pattern has been used to separate concerns between the proxy API and the services it calls. The
FaceApiRepository class contains two functions related to this feature,
FindSimilarFaces and
AddFaceToFaceList. You can see the full code for both functions on GitHub.
The face repository makes use of the Face API - Find Similar function and the Face API - Add a Face to a Face List function.
Azure Table storage
After the controller has done all the work in detecting faces and storing them in the face list, it creates a
Patron object that gets returned to the client.
The
Patron object is also stored so that it can be queried later on, thus fulfilling the objective of tracking patrons as they traverse the museum.
Several options existed around the best storage mechanism, including:
- Azure SQL Database
- Azure Table storage
- Azure Blob storage (store the
Patronobject as a JSON document)
- Azure Cosmos DB (store the
Patronobject as a JSON document)
One of the key criteria for the solution was that museum staff can easily analyze data in Excel 2016 without any specific technical knowledge or custom code. This requirement means that only SQL Database and Table storage were valid options because these can both be easily imported into Excel as a data connection. Table storage was ultimately chosen for the following reasons:
- The schema can be easily changed.
- It’s cost-effective compared to Azure SQL Database.
- It offers better performance at scale compared to SQL Database.
To facilitate interaction between the proxy API controller and Table storage, a repository was created called
StoreRepository. The repository has a single function called
StoreTable, which casts a
List<Patron> to a model called
PatronSighting, which is simply inserted into Table storage. The repository has a dependency on the Windows Azure Storage NuGet Package, which makes it very simple to interact with Table storage from C# code.
The key for the stored row is a combination of the persisted face ID, which is the ID for the face from the face list combined with a new GUID, thus making the entry unique. The other attributes that are stored are:
Device: The device where the sighting originated.
Exhibit: The exhibit where the device is housed (an exhibit could have multiple devices).
Gender: The gender of the patron in the sighting according to the Face API - Detect function.
Age: A rounded number (to the nearest year) representing the age of the patron in the sighting according to the Detect function.
PrimaryEmotion: The primary (highest scoring) emotion of the patron in the sighting according to the Detect function.
TimeOfSighting: The date/time that the sighting took place.
Smile: Whether the patron in the sighting is smiling or not according to the Detect function.
Glasses: Whether the patron in the sighting is wearing glasses or not according to the Detect function.
FaceMatchConfidence: A double indicating how confident the Face API is that the sighting matches an existing face according to the Face API - Find Similar function.
The storage can be searched by any attribute that is needed such as Device, Exhibit, or Persisted face ID. This was not implemented as part of the initial solution.
You can see the full code for the
StoreRepository on GitHub.
As well as using Excel to query the data, we found that Azure Storage Explorer was a superb tool for exploring the storage repository and seeing data coming in from the API.
Excel 2016 for data analysis
The ability to easily analyze the data stored in Table storage was a key requirement. It is very simple to connect Excel to Table storage to sort, filter, and analyze the data in Excel. In Excel 2016, you simply follow these steps:
- Go to the Data tab.
- Select New Query > From Azure > From Microsoft Azure Table Storage.
- Enter your Microsoft Azure Table Storage account name.
- Enter your Microsoft Azure Table Storage account key.
- Select the data source that you want to view.
This screenshot shows some of the test data stored in Table storage being viewed in Excel.
Power BI for data analysis
Given the volume of data being collected, we needed something more visual to explain the potential business impact of the data. We discovered that very quickly and without data conversion from raw table storage, we could use the data stored in Azure Table storage to produce a report using Power BI Desktop.
To create the report, we first imported the data using Get Data. After selecting Azure, we then selected Azure Table Storage. After the data source was configured, it was then just a case of dragging visualizations onto the report canvas to visualize the data.
We were then able to share the dashboard by selecting Publish. The report example can be found here.
While this example is very basic, it shows how very quickly we can convert data stored in an inexpensive storage solution into an effective report that stakeholders can use to act upon.
Conclusion
The challenge was to make museum exhibits more compelling for patrons and also to provide museums with more data about how patrons use the museums. The solution achieved these two primary goals of providing a more intelligent museum exhibit and tracking patron data, which allows museums to better understand how patrons interact with and traverse their museum.
The solution involved a wide range of Microsoft technologies but pivoted around three main areas:
- The Universal Windows Platform to capture patrons via an IoT device
- Cognitive Services to provide intelligence about patrons as they interact with exhibits
- Azure App Service for storage, hosting, and analysis
The solution is entirely open source on GitHub at Intelligent Museum Exhibits (Patron Interactive Engagement).
The next steps at a high level are to further develop the solution to make it more robust and flexible, and to take it to other museums with the goal of gaining sponsorship to make it a production system.
Next steps for the solution
Compare how visitors interact with exhibits with and without narration
Before audio narration is added to exhibits, it would be wise to determine how visitors interact with exhibits without narration. To this end, the initial solution will be deployed in a museum for a longer period of time to gather information without the narration running. The information gathered can be used to provide a summary describing the types of visitors who viewed the painting and their dwell times. This information can then be used to compare the effect of having the narration running.
Create an easy-to-use interface for making modifications to narrations
The device used off-the-shelf components and open source software. However, it does require some technical expertise to configure the device and to update the narration. If the devices are to see wider use, they will need an interface that allows curators and other subject matter experts to update the sound files of the narrative.
Adapt the software to allow for different narrations for various devices
For speed and convenience, the initial solution was constructed as a single stand-alone device. However, the low cost of the device would make it possible to have multiple devices, which either interact with or track visitors through a museum or gallery. For this to happen, the software will need to be adjusted to use different narrations for different devices. Currently, the design assumes that there is only one narration.
Allow for museum professionals to share performance information easily with other museums and galleries
Some attempt was made during the project to gather information about how visitors were responding to the painting. To be useful to museum and gallery curators, this information should be presented in a form that suits their needs. The museum and gallery community has a long history of sharing benchmarking and performance information, so such an interface could reasonably include comparison information so that the performance of exhibits can be compared across different museums and galleries.
“The solution works, for one exhibit, in one museum. It now needs to be generalised so it can be used with other exhibits and in other museums.” —Joe Collins, CTO, Black Radley | https://microsoft.github.io/techcasestudies/cognitive%20services/2017/08/04/BlackRadley.html | CC-MAIN-2019-13 | refinedweb | 4,989 | 51.99 |
Issue
I am totally new in python world. Here I am looking for some suggestion about my problem. I have three text file one is original text file, one is text file for updating original text file and write in a new text file without modifying the original text file. So file1.txt looks like
$ego_vel=x $ped_vel=2 $mu=3 $ego_start_s=4 $ped_start_x=5
file2.txt like
$ego_vel=5 $ped_vel=5 $mu=6 $to_decel=5
outputfile.txt should be like
$ego_vel=5 $ped_vel=5 $mu=6 $ego_start_s=4 $ped_start_x=5 $to_decel=5
the code I tried till now is given below:
import sys import os def update_testrun(filename1: str, filename2: str, filename3: str): testrun_path = os.path.join(sys.argv[1] + "\\" + filename1) list_of_testrun = [] with open(testrun_path, "r") as reader1: for line in reader1.readlines(): list_of_testrun.append(line) # print(list_of_testrun) design_path = os.path.join(sys.argv[3] + "\\" + filename2) list_of_design = [] with open(design_path, "r") as reader2: for line in reader1.readlines(): list_of_design .append(line) print(list_of_design) for i, x in enumerate(list_of_testrun): for test in list_of_design: if x[:9] == test[:9]: list_of_testrun[i] = test # list_of_updated_testrun=list_of_testrun break updated_testrun_path = os.path.join(sys.argv[5] + "\\" + filename3) def main(): update_testrun(sys.argv[2], sys.argv[4], sys.argv[6]) if __name__ == "__main__": main()
with this code I am able to get output like this
$ego_vel=5 $ped_vel=5 $mu=3 $ego_start_s=4 $ped_start_x=5 $to_decel=5
all the value I get correctly except $mu value.
Will any one provide me where I am getting wrong and is it possible to share a python script for my task?
Solution
Looks like your problem comes from the if statement:
if x[:9] == test[:9]:
Here you’re comparing the first 8 characters of each string. For all other cases this is fine as you’re not comparing past the ‘=’ character, but for $mu this means you’re evaluating:
if '$mu=3' == '$mu=6'
This obviously evaluates to false so the mu value is not updated.
You could shorten to
if x[:4] == test[:4]: for a quick fix but maybe you would consider another method, such as using the
.split() string function. This lets you split a string around a specific character which in your case could be ‘=’. For example:
if x.split('=')[0] == test.split('=')[0]:
Would evaluate as:
if '$mu' == '$mu':
Which is True, and would work for the other statements too. Regardless of string length before the ‘=’ sign.
This Answer collected from stackoverflow, is licensed under cc by-sa 2.5 , cc by-sa 3.0 and cc by-sa 4.0 | https://errorsfixing.com/modified-textfile-python-script/ | CC-MAIN-2022-27 | refinedweb | 429 | 73.68 |
Aucun résultat trouvé
December 19, 2017Peter Heinrich
Ask any non-programmer what it takes to write computer code and the tentative response might be something like, “You have to be good at math?”
As developers, of course we know this isn't true, but it's a common misconception that works in our favor. In a survey conducted for Change the Equation, 30 percent of Americans say they'd rather clean the bathroom than solve a math problem. As long as that's true, our reputation as lovable math geniuses is safe.
I talk to loads of developers just itching to try their hand at games, and the question of math comes up a lot. The bad news is that game development usually does involve math (at least more than some other kinds of programming). The good news is that game math isn't actually that hard (Mostly. Some of it is very, very hard). The rise of frameworks like Unity has had an overall calming effect on math anxiety among first-time game programmers, but the truth is that understanding the math at the heart of these frameworks makes for better games.
So where do we begin our quest for math literacy in pursuit of superior game creation? Let's bounce a ball! It's a good example because just moving something around the screen can be instructive, but it's also a problem of surprising depth. There are lots of interesting ratholes and tangents for us to explore as we tackle this simple exercise.
Let's start with the basics. We have a circle on the screen. We want it to move in straight lines at a constant speed. We want the screen edges to act as “walls,” reflecting the circle's motion.
To make all this happen, there are a few things we need to keep track of. The circle (our low-tech ball) has a position and a velocity. The screen has a width and height. We'll ignore complications like friction, elasticity, and ball diameter for now (but reserve the right to add them later, if we feel ambitious).
For simplicity, we treat the lower-left corner of the screen as the origin (0, 0) of our coordinate system, with x increasing to the right and y increasing as we move toward the top of the screen. This is the arrangement commonly used in mathematics—and has been adopted by many popular game development frameworks—so it feels natural.
Now we can represent the ball's position by the coordinate pair (x, y). Similarly, we represent the ball's velocity by a pair of deltas, vx and vy, that get added to x and y (respectively) every time we update the screen. Technically, the deltas vx and vy describe a vector, but we often write them as a point—such as (vx,vy)—because points and vectors are so intimately connected, and are frequently manipulated together.
Vectors are quantities with direction and magnitude. We draw them as arrows in order to convey both of these characteristics simultaneously, since an arrow (obviously) has a direction, and its length can indicate magnitude. We label them using bold or a little arrow hat, as in the diagram above.
Vectors have no inherent location, but they can be used to modify the location of objects that do, like our ball. We can add and subtract vectors to produce new vectors, and combine them with points to move those points around. They're also useful for figuring out angles and distances, and are critical to 96.57 percent of computer game graphics (approximately). Bare vectors are sometimes shown starting at the origin.
As you can see above, adding vectors together amounts to arranging them head-to-tail. Adding one to a point moves the point in the vector's direction by a distance equal to the vector's length. You can also multiply a vector by a scalar, which, as the name implies, is just a scaling factor. To multiply v by the scalar 6/5, multiply each component by the scalar value to yield (6/5 vx, 6/5 vy). Subtracting a vector is the same as flipping its direction with a scalar of -1, then adding.
Armed with nothing more than points and vectors, we can start to think about how our ball will actually bounce around a screen. Given an initial position (x, y) and velocity (vx, vy), we can always figure out where it will end up after the next update.
Of course, the new position may be off the screen, which is where the bounce comes in. If ynew is out of bounds, for example, we reposition the ball by a distance equal to the amount it overshot the boundary, but in the opposite direction. That is, if the new position would be n vertical pixels off the screen, we substract 2n from the new y component to move the ball back onscreen by the same amount. We also flip the sign of the vertical velocity so the ball will move in the opposite vertical direction from now on, away from the wall it just “hit.”
The next time the ball needs updating, we again add its current position and velocity—now (vx,−vy), since the last bounce changed its vertical direction of motion. This time perhaps xnew will be out of bounds, passing m pixels beyond the edge of the screen. We compensate in the same way we did for the vertical component: we adjust the horizontal position by −2m and flip the sign of the horizontal velocity.
Because all of the boundaries in this simplified scenario are perfectly horizontal or perfectly vertical, these straightforward bounce calculations are pretty easy to implement in code. For example:
// Basic bouncing ball public class Ball { private int x, y; private int vx, vy; public Ball( int x, int y, int vx, int vy ) { this.x = x; this.y = y; this.vx = vx; this.vy = vy; } // Called every frame with the current screen width and height. public void move( int width, int height ) { // Add the velocity to position. x += vx; y += vy; // Bounce off vertical walls, if necessary. if( 0 > x || x >= width ) { // Flip the horizontal velocity. vx = -vx; // If x is negative (the ball is off the screen to the left), // then simply flipping its sign is enough to move it to where // we want it to be. Otherwise (the ball is off the screen to // the right), we need to take the excess x - width and subtract // it from the screen width, yielding x = width - (x - width) = // width - x + width = 2*width - x. Either way, we negate x. In // the first case, we’re done; in the second, we just need to // add 2*width. x = -x; if( 0 > x ) // if now negative, must be 2nd case x += width << 1; } // Bounce off horizontal walls, if necessary. if( 0 > y || y >= height ) { // Follow the same logic as above. vy = -vy; y = -y; if( 0 > y ) y += height << 1; } } }
Note that in most cases, we probably won't see the actual bounce itself. This is because the ball will appear to have jumped from its starting position to its final “bounced” position between frames. (We'll see it actually touch the wall only if adding its velocity and position places it in direct contact with the boundary.) It turns out this isn't a problem, though, because our brain can “fill in” the missing action. It combines the old and new locations, plus what it knows of the ball's previous speed and trajectory, to infer that a bounce happened—even though we didn't actually witness it.
There's a wrinkle here that we haven't addressed, though. What happens when the ball's velocity is so great that it hits multiple walls during a single update? This isn't as pathological as it sounds—just picture a fast ball headed into a corner, or one constrained by some on-screen container.
We have to adjust our code a little bit to account for all bounces that may apply; we have to process the ball's position until the result is a valid screen location (see highlighted loop, below). Again, the bounces themselves won't necessarily be visible, since the ball simply moves from its starting position to its ending position.
public void move( int width, int height ) { // Add the velocity to position. x += vx; y += vy; while( 0 > x || x >= width || y > 0 || y >= height ) { // Same vertical bounce code as before. if( 0 > x || x >= width ) { vx = -vx; x = -x; if( 0 > x ) x += width << 1; } // Similarly for the horizontal bounce code. if( 0 > y || y >= height ) { vy = -vy; y = -y; if( 0 > y ) y += height << 1; } } }
Now that we have a basic understanding of what a vector is (direction and magnitude), how we represent one (as a point whose x and y coordinates each measure a delta along the corresponding axis), and why they're useful in games (they can model velocity and movement), we can explore common ways to manipulate them to achieve interesting effects. We've already seen how we can use vectors to reflect motion—at least, as long as the reflective surface is straight up and down or perfectly level—but what if we want to bounce our ball on a surface that's slanted? What if we don't necessarily want to move in a straight line? What if we want our velocity to change over time?
It turns out that vectors can help in all of these situations, which is good, because they are all common occurrences in a typical game. Over the course of the next few installments in this series, we'll cover these and many more useful applications of our new best friend, the vector. That's just the beginning, though.
Join us for future posts on other mathematical tidbits indispensable to game developers, like interpolation, bit manipulation, randomization, path finding, 3D, and much more.
Stay tuned!
-peter (@peterdotgames) | https://developer.amazon.com/fr/blogs/appstore/post/a07ab562-0609-4519-a2ba-9b15d69ea62b/introduction-to-game-math-raw-and-cooked | CC-MAIN-2022-21 | refinedweb | 1,673 | 68.91 |
'Declaration Public Class TabInfo Implements FarPoint.Web.Spread.Model.ISerializeSupport
public class TabInfo : FarPoint.Web.Spread.Model.ISerializeSupport
Since a Spread component may have more than one sheet, the tabs (or buttons) in the command bar contain the sheet names and provide a way to navigate to different sheets. These are called sheet name tabs.
The default sheet names are Sheet0, Sheet1, etc. You can specify other names for the sheets and these appear in the sheet name tabs.
You can set how many sheet name tabs are displayed. If the number of tabs exceeds the value specified, an ellipses is displayed. Click the ellipses to display the next (or previous) set of sheet names. You can also set the increment for advancing the sheet names. Be sure not to set the increment bigger than the number displayed if you want to be able to see all the sheet name tabs.
Note: The sheet changes when you click a different sheet name tab or when you click on the ellipses. When you click on the ellipses, the lowest number sheet in the set of sheet names is displayed.
System.Object
FarPoint.Web.Spread.TabInfo
TabInfo Members
FarPoint.Web.Spread Namespace
Tab Property (FpSpread)
Displaying the Sheet Names | https://www.grapecity.com/spreadnet/docs/v13/online-asp/FarPoint.Web.Spread~FarPoint.Web.Spread.TabInfo.html | CC-MAIN-2020-50 | refinedweb | 207 | 66.84 |
One often works with IP addresses in packed form which allows for a very efficient regex match:
sub is_private {
my ($packed_ip) = @_;
return $packed_ip =~ m{
^
(?: \x0A # 10.0.0.0/8
| \xAC[\x10-\x1F] # 172.16.0.0/12
| \xC0\xA8 # 192.168.0.0/16
)
}x;
}
[download]
Test:
use strict;
use warnings;
use Socket qw( inet_aton );
sub is_private {
my ($packed_ip) = @_;
return $packed_ip =~ m{
^
(?: \x0A # 10.0.0.0/8
| \xAC[\x10-\x1F] # 172.16.0.0/12
| \xC0\xA8 # 192.168.0.0/16
)
}x;
}
my $result = '';
for (qw(
9.255.255.255 10.0.0.0 10.255.255.255 11.0.0.0
172.15.255.255 172.16.0.0 172.31.255.255 172.32.0.0
192.167.255.255 192.168.0.0 192.168.255.255 192.169.0.0
)) {
my $packed_ip = inet_aton($_);
$result .= is_private($packed_ip) ? 1 : 0;
}
print("got: $result\n");
print("expect: ", "0110"x3, "\n");
[download]
got: 011001100110
expect: 011001100110
[download]
Update: Alternatives:
sub is_private {
my ($packed_ip) = @_;
return ($packed_ip & "\xFF\x00\x00\x00") eq "\x0A\x00\x00\x00"
|| ($packed_ip & "\xFF\xF0\x00\x00") eq "\xAC\x10\x00\x00"
|| ($packed_ip & "\xFF\xFF\x00\x00") eq "\xC0\xA8\x00\x00";
}
[download]
sub is_private {
my $nummy_ip = unpack('N', shift);
return ($nummy_ip & 0xFF000000) == 0x0A000000 # 10.0.0.0/8
|| ($nummy_ip & 0xFFF00000) == 0xAC100000 # 172.16.0.0/12
|| ($nummy_ip & 0xFFFF0000) == 0xC0A80000; # 192.168.0.0/16
}
[download]
use Inline CPP => <<'__EOI__';
// Assumes the arg had UTF8=0
IV is_private(const char* packed_ip) {
// XXX Alignment issues
const U16& hi = *(const U16*)packed_ip;
// The irrelevant branch will be optimised away.
if (*(const U16*)("\x12\x34") == 0x1234) {
// Little endian
if ( hi == 0xC0A8
|| (hi & 0xFFF0) == 0xAC10
|| (hi & 0xFF00) == 0x0A00
)
return 1;
} else {
// Little endian
if ( hi == 0xA8C0
|| (hi & 0xF0FF) == 0x10AC
|| (hi & 0x00FF) == 0x000A
)
return 1;
}
return 0;
}
__EOI__
[download]
In reply to Re: Regexp: Private IP Addresses
by ikegami
in thread Regexp: Private IP Addresses. | http://www.perlmonks.org/index.pl?parent=791164;node_id=3333 | CC-MAIN-2017-26 | refinedweb | 319 | 63.49 |
Case study:
We needed to test performance of various parts of our application. I wrote a metrics class containing all of the timing values we were intending to use, and stored that in the request. It worked OK, but was a very “brute force” approach, which I was not completely happy with.
public class Metrics implements Serializable { private long requestBegin; // where Spring interceptor first gets called private long controllerBegin; private long wurflBegin; private long wurflEnd; private long dataBegin; private long dataEnd; private long controllerEnd; // normally same as renderBegin private long renderBegin; private long renderEnd; private long requestEnd; }
Suddenly I realized there were sometimes two data query sections for some screens, eg. one to check the login details and another to query from the database. It was impossible to get the Metrics data structure to cover all cases. I didn't have a better answer, so I consulted with the team and my colleague had a smart idea:
“Create a StatsService that logged timing events for a conversation/request against ThreadLocal. Each timing event has a begin and an end, but the events are generic.”
Tip: If logic starts getting too complex or you find you have to do more and more workarounds to get the application to function the way you want, it's probably the wrong solution to the problem. Step back and consider if there is a better way. Discuss the approach with someone. | http://javagoodways.com/advanced_problem_Problems_can_indicate_poor_design.html | CC-MAIN-2021-21 | refinedweb | 236 | 58.42 |
0
Good day all,
I've been working on this code for a for a while now and I can't seem to find what's wrong.
I am supposed input two numbers and return if this is a twin prime number by just outputting true or false. My program compiles and runs but I am getting false for all my numbers. Twin prime numbers are numbers that are prime and differ by 2
for example, 3 and 5 are prime numbers and differ by 2
Can someone please check my code and tell me what part of it is wrong...
Much appreciated...
#include <iostream> using std::cout; using std::endl; using std::cin; bool isPrime(int a){ for (int i=1; i!=a / 2; i++) { int o=0; if (a % i == 0) { ++o; } if ( o != 1) return true; else return false; } } bool isTwinPrime(int a, int b){ if (isPrime(a) && isPrime(b) && 2 == b - a) return true; else return false; } int main() { int a, b, isPrime; cin >> a; cin >> b; using namespace std; cout<< boolalpha<< isTwinPrime (a, b)<<endl; return 0; } | https://www.daniweb.com/programming/software-development/threads/91477/twin-primes-not-quite-working-why | CC-MAIN-2017-47 | refinedweb | 182 | 72.5 |
.3: Exercises in Not Existing
About This Page
Questions Answered: Could I practice writing classes some more,
please? How can I make use of the
Option type? And
match?
Topics: The constructs from the previous chapter.
What Will I Do? The first half of the chapter is a series of small programming assignments. The second half contains some voluntary reading.
Rough Estimate of Workload:? Three hours.
Points Available: A140.
Related Projects: Miscellaneous, Stars (new), and Football3 (new). IntroOOP and MoreApps feature in optional assignments.
Assignment: Improve Class
VendingMachine
You may recall that after we created class
VendingMachine in Chapter 3.4, we
raised some questions about the quality of its
sellBottle method.
Task description
Locate
VendingMachine in project Miscellaneous and modify it. Edit
sellBottle so that
it no longer returns minus one to signify a failed purchase. Instead, the method’s return
type should be
Option[Int]; the method should return
None if no bottle was sold and
the amount of change wrapped in a
Some object if a bottle was sold.
Submission form
A+ presents the exercise submission form here.
Assignment: Fix Class
Member
Task description
The same project contains class
o1.people.Member, a separate example. Examine its
program code and documentation. Notice that the code doesn’t match the docs: the methods
isAlive and
toString are missing. Write them.
Hint
Submission form
A+ presents the exercise submission form here.
Assignment: Implement Class
Passenger
Task description
A class
Passenger is listed in the documentation for
o1.people. No matching code
has been given, however. Implement the class.
Instructions and hints
- The class uses another class,
TravelCard. That class has been provided for you. Use it as is; don’t change it.
- The documentation details
Passenger’s constructor parameters and their corresponding public instance variables. You won’t need to define any private instance variables for this class.
- As stated in the Scaladocs, passengers should have an instance variable of type
Option[TravelCard]; that is, each passenger has zero or one travel cards.
Optionworks as a wrapper for our custom class
TravelCard.
- You need to create a file for class
Passengerwithin
o1.people. Here’s how:
- Right-click the package in Eclipse’s Package Explorer and select . A small dialog pops up.
- Eclipse will set up the file for you if you enter some additional information in the dialog. Under Name, enter
o1.people.Passengeras the name of the class.
- There is no need to edit the other fields. Hit Finish. A new file will show up in the editor, with a bit of starter code.
- Refer to Chapter 4.2 for tools that you can use to implement the methods.
Submission form
A+ presents the exercise submission form here.
Assignment: Improve Class
Order
There’s nothing new about the problem below, but it provides further practice on
Options and
match. We recommend it especially if you had difficulty with the
above assignments. You can also come back to practice on this problem if you run
into trouble later on.
Additional practice
Return to class
Order in project IntroOOP.
There was a tiny optional assignment in Chapter 2.6 where the
description method in class
Order was replaced by a
toString
method. If you didn’t do that then, do it now: rename
description
to
toString and write
override in front.
Now edit the class as follows:
- Add a constructor parameter
addresswith the type
Option[String]. Also introduce a corresponding
valinstance variable. This variable will to store a postal address that is (possibly) assosiated with the order.
- Add a parameterless, effect-free method
deliveryAddress. It should return a
Stringthat indicates where the order should be delivered. This will be either the address associated with the order, if there is one, or the orderer’s personal address, if the order isn’t associated with an address.
- Edit the
toStringmethod so that there’s an additional bit at the end: a comma and a space
", "followed by the either
"deliver to customer's address"or
"deliver to X", where X is the address associated with the order.
A+ presents the exercise submission form here.
Assignment: Convenient Keyboard Control
The next optional assignment is meaningful only if you have done the
Trotter assignment
from Chapter 3.5. Of course, you could do that assignment now if you didn’t already.
Direction.fromArrowKey
Package
o1 contains not just the class
Direction but also a singleton object
of the same name. The singleton object has a convenient
method
fromArrowKey, whose behavior is illustrated below.
val exampleKey = Key.UpexampleKey: o1.Key.Value = Up Direction.fromArrowKey(exampleKey)res0: Option[Direction] = Some(Direction.Up) Direction.fromArrowKey(Key.X)res1: Option[Direction] = None
As you see, the method returns the direction that corresponds to the given key
on the keyboard. It wraps the return value in an
Option, because only some keys
correspond to a direction.
Edit
TrotterApp’s
onKeyDown method in project MoreApps. The method should do the
same job as before but you can write a simpler and less redundant implementation
with
fromArrowKey.
A+ presents the exercise submission form here.
Assignment: Star Maps (Part 1 of 4: Basic Star Info)
Introduction: stars and their coordinates
Let’s write a program that displays star maps: views of the night sky. A star map contains a number of stars; stars may also be linked together to form constellations.
In this first part of the assignment, we won’t be drawing anything yet. We’ll begin by creating some tools for representing individual stars.
Fetch the project Stars. For now, we’ll concentrate on classes
Star and
StarCoords:
you use the former, which is already implemented, to implement the latter. Study the
two classes’ documentation; don’t mind the other classes now.
As indicated in the Scaladocs, our program needs to work on two different sorts of two-dimensional coordinates:
In
StarCoords, each coordinate is between minus one and plus one.
For instance, if the
x of a star is +1.0 and its
y is 0.0, it’s
located at the middle of a star map’s right-hand edge.
- A star’s location on a two-dimensional star map is represented as a
StarCoordsobject.
- For this, we use a coordinate system like the one you know from math class, with values of y increasing towards the top.
- We assume that all values of x and y have been normalized so that they fall within the interval [-1.0...+1.0]. (See the illustration.) The two coordinates represent a star’s location in the visible sky independently of the size of any picture that may depict the sky.
- On the other hand, can use the method
toImagePosof a
StarCoordsinstance to produce a
Posthat represents the star’s location within a particular
Pic. These coordinates increase, like the other
Poscoordinates that we’ve used, right- and downward from the top left-hand corner. They indicate which pixel the star’s center should appear at within a larger
Picof the entire sky that has a known width and height.
Task description
- Read the above and the Scaladocs. Make sure you understand the two coordinate systems we need. Make sure you understand what
toImagePosin
StarCoordsaccomplishes.
- Then implement the missing methods of class
Starso that their behavior meets the Scaladoc specification.
Instructions and hints
Here is an example of how your
Starclass should work:
val unnamedStar = new Star(28, new StarCoords(0.994772, 0.023164), 4.61, None)unnamedStar: o1.stars.Star = 28 (x=0.99, y=0.02) unnamedStar.posIn(rectangle(100, 100, Black))res2: o1.world.Pos = (99.7386,48.8418) unnamedStar.posIn(rectangle(200, 200, Black))res3: o1.world.Pos = (199.4772,97.6836) val namedStar = new Star(48915, new StarCoords(-0.187481, 0.939228), -1.44, Some("SIRIUS"))namedStar: o1.stars.Star = 48915 SIRIUS (x=-0.19, y=0.94)
You don’t actually have to implement the required math yourself if you make good use of
StarCoords.
You also don’t need to round a star’s coordinates even though
toStringreturns them in rounded form. The
toStringmethod in
StarCoordsalready does that for you, so use it.
Submission form
A+ presents the exercise submission form here.
Assignment: Star Maps (Part 2 of 4: Seeing Stars)
Task description
The Stars project also includes the singleton object
o1.stars.io.SkyPic, whose methods
we’ll use to generate images of star maps.
In this short assignment, we’ll focus on
placeStar, a method that takes a picture and
a star and returns a new version of the picture with the star’s picture drawn on top.
For instance, let’s say we want a picture that contains pictures of these two stars:
val unnamedStar = new Star(28, new StarCoords(0.994772, 0.023164), 4.61, None)unnamedStar: o1.stars.Star = 28 (x=0.99, y=0.02) val namedStar = new Star(48915, new StarCoords(-0.187481, 0.939228), -1.44, Some("SIRIUS"))namedStar: o1.stars.Star = 48915 SIRIUS (x=-0.19, y=0.94)
The following code should do the trick.
val darkBackground = rectangle(500, 500, Black)darkBackground: Pic = rectangle-shape val skyWithOneStar = SkyPic.placeStar(darkBackground, unnamedStar)skyWithOneStar: Pic = combined pic val skyWithTwoStars = SkyPic.placeStar(skyWithOneStar, namedStar)skyWithTwoStars: Pic = combined pic skyWithTwoStars.show()
placeStar lacks an implementation. Read its specification in the Scaladocs and implement
it in
SkyPic.scala where indicated.
Instructions and hints
- This part-assignment should be very easy if you use the tools introduced in Part 1 above.
- After writing the method, you may reflect on what we have and have not accomplished so far:. And that is indeed what we will do soon enough.
- Feel free to take a look at the folders
testand
northernand especially the
stars.csvfiles in those folders.
- We’ll return to this program in Chapter 4.5.
Submission form
A+ presents the exercise submission form here.
Assignment: Football3
Our football-scores application (of Chapter 3.4) was already rewritten once (in Chapter 4.1), and now we’re going to mess with it again.
Task description
Fetch Football3. Study its documentation. Note that class
Match is now somewhat
different than before and there is a new class,
Season.
Implement the two classes so that they meet the new specification.
Instructions and hints
- You can copy and paste your
Matchimplementation from Football2 into Football3 and work from there. The only changes you need to make to that class are:
- In addition to
winnerName, there is a new method named simply
winner, which returns an
Option.
winningScorerNameno longer exists. It’s replaced by
winningScorer, which returns an
Option.
- Once you have
winnerworking, you can use it to simplify the implementation of
winnerName.
- Remember:
Matchwith a capital M is a programmer-chosen name for a class that represents football matches.
matchis a Scala command (that can’t be used as a name; it’s a reserved word). Mix up these two, and you may get some interesting error messages.
- You can again use
FootballAppfor testing. Once you’ve implemented
Season, the app’s main window will display season statistics and a list of matches.
- For implementing
Seasonyou may find it useful to review Chapters 4.1 and 4.2 and the GoodStuff application.
- For working out the biggest win, you can use an instance variable of type
Optionas a most-wanted holder, much like we did in class
Category.
- There are alternative techniques for solving the problem that we haven’t covered yet (such as loops; Chapter 5.3). You aren’t forbidden from using them if you happen to know how, but there is no need to.
- Did you notice that
goalDifferencecan return a negative number?
Submission form
A+ presents the exercise submission form here.
Something to think about
The documentation goes out of its way to say that matches added to
a
Season are assumed to have finished and no more goals will be
added to those matches. What happens if you go against this
assumption? What would it take to reimplement the program so that
we prevent that from happening?
Further Reading: The Flexible
match Command
The boxes below tell you more about the
match command, which we’ve used only for
manipulating
Option objects. This additional information isn’t required for O1 but
should interest at least those readers who have prior programming experience and wish
to explore Scala constructs in more depth. Beginners can perfectly well skip this section
and learn about these topics at a later date.
match is a pattern-matching tool
The general form of
match is:
expression E match { case pattern A => code to run if E’s value matches pattern A case pattern B => code to run if E’s value matches pattern B (but not A) case pattern C => code to run if E’s value matches pattern C (but not A or B) And so on. (Usually, you’ll seek to cover all the possible cases.) }
Noneand
Somepatterns, but many more kinds of patterns are possible. Some of them are introduced below.
Primitive
matching on literals
Suppose we have an
Int variable called
number. Let’s use
match to examine
the value of the expression
number * number * number.
val cubeText = number * number * number match { case 0 => "number is zero and so is its cube" case 1000 => "ten to the third is a thousand" case otherCube => "number " + number + ", whose cube is " + otherCube }
matchchecks the patterns in order until it finds one that matches the expression’s value. Here, we have a total of three patterns.
Intliterals. The first case is a match if the cube of
numberequals zero; the second matches if the cube equals one thousand.
otherCube. Such a pattern will match any value; in this example, the third case will always be selected if the cube wasn’t zero or one thousand.
Below is a similar example where we match on
Boolean literals rather than
Ints.
These two expressions do the same job:
if (number < 0) "negative" else "non-negative"
number < 0 match { case true => "negative" case false => "non-negative" }
We can accomplish all that by chaining
ifs and
elses (Chapter 3.3), too.
Matching on literals and variables has not yet demonstrated the power of
match.
But see below.
Question from student: Is
match roughly the same as Java’s
switch?
Java and some other programming languages have a
switch command that selects among
multiple alternative values that an expression might have. Scala’s
match is similar
in some ways. However,
switch can select only a case that corresponds to a specific
value (as in our
cubeText example), whereas
match provides a more flexible
pattern-matching toolkit. Perhaps the most significant differences are that
match
can:
- make a selection based on an object’s type; and
- “take apart” the object and automatically extract parts of it into variables defined in the pattern.
Examples of both appear later in this chapter.
Guarding a case with a condition
You can associate a pattern with an additional condition (a pattern guard) that needs to be met for a match to happen.
val cubeText = number * number * number match { case 0 => "the number is zero and so is its cube" case 1000 => "ten to the third is a thousand" case other if other > 0 => "positive cube " + other case other => "negative cube " + other }
ifkeyword, but this isn’t a standalone
ifcommand.
Underscores in patterns
An underscore pattern means “anything” or “don’t care what”. Here are a couple of examples.
number * number * number match { case 0 => "the number is zero and so is its cube" case 1000 => "ten to the third is a thousand" case _ => "something other than zero or thousand" }
matching on data types
In the preceding examples, the patterns corresponded to different values of the same type. In this example, the patterns match values of different types:
def experiment(someSortOfValue: Any) = someSortOfValue match { case text: String => "it is the string " + text case number: Int if number > 0 => "it is the positive integer " + number case number: Int => "it is the non-positive integer " + number case vector: Vector[_] => "it is a vector with " + vector.size + " elements" case _ => "it is some other sort of value" }
Any, which means that we can pass a value of any type as a parameter. (More on
Anyin Chapter 7.3.)
vectorhas the type
Vector, so we can use the variable to call the matching vector’s
sizemethod.
Using
match to take apart an object
One of
match’s most appealing features is that you can use it to destructure the object
that matches a pattern, extracting parts of it into variables. A simple example is the
“unwrapping” of a value stored within an
Option:
vectorOfNumbers.lift(4) match { case Some(wrapped) => "the number " + wrapped case None => "no number" }
Somewill have some value inside it. That value is automatically extracted and stored in the variable
wrapped.
This feature of
match can be combined with the others listed above. Below, we take
apart an
Option and, at the same time, try to match its possible contents to one of
several cases:
vectorOfNumbers.lift(4) match { case Some(100) => "exactly one hundred" case Some(wrapped) if wrapped % 2 == 0 => "some other even number " + wrapped case Some(oddNumber) => "the odd number " + oddNumber case None => "no number at all" }
You can destructure an object in this fashion only if the class defines how to do that
for objects of that type. Many of Scala’s library classes come with such a definition,
Some included. Similarly, O1’s own library defines that a
Pos object can be
destructured into two coordinates:
myPos match { case Pos(x, y) if x > 0 => "a pair of coordinates where x is positive and y equals " + y case Pos(_, y) => "some other pair of coordinates where y equals " + y }
Pos, two numbers can be extracted from it. Store them in local variables
xand
y. If
xis positive, this case matches.”
Pos. Discard the first and store the other in a variable
y.” Any
Poswill match this pattern and this branch will be chosen every time if the first one isn’t.
So how do you define how objects of a particular type can be taken apart? The easiest way is to turn a class into a so-called case class (tapausluokka). Here’s a simple example:
case class Album(val name: String, val artist: String, val year: Int) { // ... }
case, this is like a regular class definition.
matching.
You can then use the case class like so:
myAlbum match { case Album(_, _, year) if year < 2000 => "ancient" case Album(name, creator, released) => creator + ": " + name + " (" + released + ")" }
Search online for Scala pattern matching and Scala case class for more information. As noted, you aren’t required to use these language features in O1.
Further Reading:
Option.get
This optional section introduces a method on
Options that can seem deceptively handy
but that you should avoid.
The dangerous
get method
One way to open up an
Option wrapper is to call its parameterless
get method.
Try it.
We could have used
get instead of
match to implement
addExperience in
class
Category:
def addExperience(newExperience: Experience) = { this.experiences += newExperience val newFave = if (this.fave.isEmpty) newExperience else newExperience.chooseBetter(this.fave.get) this.fave = Some(newFave) }
isEmptychecks whether an old favorite exists.
getthe old favorite from its wrapper. Since we do this only in the
elsebranch, we know we’re not dealing with
None.
However, if we use
get, we’re faced with some of the same problems that we had with
null: if we call
get on
None, our program crashes. It’s up to the programmer to
ensure that
get is only called on a
Some. This is easy to forget.
As one student put it:
At least you shouldn’t just rush to open a wrapper, as
get does. It’s better to
be prepared for possible disappointment and avoid the tears.
It’s good to know that
get exists; you may see it used in programs written by
others. Avoid using it yourself. There is always a better solution such as
match,
getOrElse, or one of the methods from Chapter 8.
matchcan examine the value of any expression. In technical terms, this examination is a form of pattern matching (hahmonsovitus): the expression’s value is compared to... | https://plus.cs.aalto.fi/o1/2018/w04/ch03/ | CC-MAIN-2020-24 | refinedweb | 3,370 | 65.83 |
Interacting with docassemble over HTTP
If you are a software developer, you can develop your own front end for docassemble. You can use docassemble’s API to communicate with docassemble sessions. It is slightly more complicated, but just as effective, to communicate with docassemble the same way that the web browser does.
You can extract a machine-readable version of any docassemble
screen. If you add
&json=1 to the end of the URL, or include
json=1 as parameter of a POST request, a JSON representation of
the screen will be returned. (In fact,
json set to anything will
cause a JSON representation to be returned.)
To communicate back with the server, you will need to mimic the way that the browser communicates with the server. The easiest way to figure this out is use your browser’s developer tools and inspect the POST requests that the browser sends to the server.
docassemble uses cookies and 302 redirects, so if you are using a library to send HTTP requests to docassemble, you need to make sure that cookies are stored for the life of the session, and make sure that your library will follow redirects. If your library does not store cookies, you will encounter an infinite redirect, because the first thing that docassemble does when a user connects to it is send a self-referential redirect with a cookie.
In general, you can set any variable in the interview by sending a
POST request with parameters where the keys are base64-encoded
variable names and the values are the values you want to assign to the
variables. In Javascript, you can use the
atob() and
btoa()
functions to convert between base64 and text. However, when using
btoa(), you need to alter the results to remove newline and
=
characters when working with variable names. Thus, if your variable
name is
varName, call
btoa(varName).replace(/[\\n=]/g, ''). In
Python, you can use the
encode_name() and
decode_name()
functions to convert to and from base64.
For example, if you want to set the variable
favorite_fruit to
'apple', you would convert
favorite_fruit to base64 using
btoa('favorite_fruit').replace(/[\\n=]/g, '') or
encode_name('favorite_fruit'), to get
'ZmF2b3JpdGVfZnJ1aXQ'.
Then you would put the following key and value in your POST request:
ZmF2b3JpdGVfZnJ1aXQ:
apple
The POST request needs to go to the interview URL, which will look like.
In addition to including keys and values of variables, your requests
should include the parameter
json=1, so that the server knows to
respond with JSON. In addition, your requests should feed back the
following values from the previous JSON response you received:
csrf_token. This token is a security measure that protects against cross-site interference. See CSRF protection.
_question_name. This contains the name of the question to which you are providing data. In most cases, this is not used, but there are some question types for which it is important.
_datatypes. This is a way of telling the server the data types of the variables being set, so that the server knows which values are integers or dates rather than text values. The value is a base64-encoded JSON representation of a dictionary where the keys are base64-encoded variable names and the values are the names of variables’
datatypes.
_varnames. For certain types of questions, variable aliases are used. This base64-encoded JSON representation of a dictionary tells the server what this mapping is.
The
_datatypes field is important if you are setting non-text
values. For example, to set the variable
likes_fruit to
True,
a boolean value, you would run
btoa('likes_fruit').replace(/[\\n=]/g, '')
to get the key name
bGlrZXNfZnJ1aXQ, and then you would run
btoa('{"bGlrZXNfZnJ1aXQ": "boolean"}') to get
eyJiR2xyWlhOZlpuSjFhWFEiOiAiYm9vbGVhbiJ9. Then you would set
the following keys and values in your POST request:
bGlrZXNfZnJ1aXQ:
True
_datatypes:
eyJiR2xyWlhOZlpuSjFhWFEiOiAiYm9vbGVhbiJ9
If you are uploading a file, use the
multipart/form-data style of
encoding POST parameters, and include one additional parameter:
_files. This is a base64-encoded JSON representation of a list where each element is a base64-encoded variable name for a file being uploaded.
The “name” of an uploaded file should simply be the base64-encoded variable name.
For example, if you wanted to upload a file into a variable
user_picture, you would run
btoa('user_picture').replace(/[\\n=]/g, '') to get
'dXNlcl9waWN0dXJl', and then you would run
btoa('["dXNlcl9waWN0dXJl"]') to get
'WyJkWE5sY2w5d2FXTjBkWEpsIl0=', and you would set the following in
your POST:
dXNlcl9waWN0dXJl: the file you are uploading, using the standard method for attaching files to a
multipart/form-dataPOST request.
_files:
WyJkWE5sY2w5d2FXTjBkWEpsIl0=
There is also a second way to upload files, which uses data URLs.
To use this method, send a normal POST request, without
multipart/form-data and without a traditional uploaded file, in
which there is a key called
_files_inline, which is set to
base64-encoded JSON data structure containing the file or files
you want to upload, and some information about them.
For example, suppose you want to upload a file to the variable
user_picture. You would run
btoa('user_picture').replace(/[\\n=]/g, '')
to get
'dXNlcl9waWN0dXJl'. Then you would create a Javascript object (a
Python dictionary) with two key-value pairs. In the first key-value
pair, the key will be
keys and the value will be a list containing
the base64-encoded variable names of the variables to which you want
to upload files. In the second key-value pair, the key will be
values and the value will be an object (a Python dictionary) with
the following keys:
name: the name of the file being uploaded, without a directory specified.
size: the number of bytes in the file.
type: the MIME type of the file being uploaded.
content: a data URL containing the contents of the file, using base64 encoding.
Here is an example of the data structure you would need to create:
{"keys":["dXNlcl9waWN0dXJl"],"values":{"dXNlcl9waWN0dXJl":[{"name":"the_filename.png","size":8025,"type":"image/png","content":"}]}}
Assuming that this data structure was stored in a Javascript
variable
data, you would set the POST parameter
_files_inline to
the result of
btoa(JSON.stringify(data)).
In addition, when sending a POST request, include the parameter
json
and set it to
1, so that the response you get back is in JSON
format.
The format of the JSON representation should be self-explanatory.
Toggle the
json=1 URL parameter to compare the HTML version of the
screen to the JSON representation.
Example of “logging in” with JavaScript
If you want to programmatically “log in” to docassemble, you can use code like the following:
var myHeaders = new Headers(); var myInit = { method: 'GET', headers: myHeaders, mode: 'cors', cache: 'default', redirect: 'follow', credentials: 'include' }; var myRequest = new Request('', myInit); fetch(myRequest).then(function(response) { var contentType = response.headers.get("content-type"); if(contentType && contentType.includes("application/json")) { return response.json(); } throw new TypeError("Error: JSON not returned from sign-in site"); }).then(function(json) { var form = new FormData(); form.append('next', ''); form.append('csrf_token', json.csrf_token); form.append('email', 'someuser@example.com'); form.append('password', 'xxsecretxx'); var loginInit = { method: 'POST', headers: myHeaders, mode: 'cors', cache: 'default', credentials: 'include', redirect: 'follow', body: form }; var loginRequest = new Request('', loginInit); fetch(loginRequest).then(function(response) { var contentType = response.headers.get("content-type"); if(contentType && contentType.includes("application/json")) { return response.json(); } throw new TypeError("Error: JSON not returned after signing in"); }).then(function(json) { console.log(json) }); });
Sessions
Background information
If you are building a front end to docassemble, you do not need to know exactly how docassemble works internally, but it may help for you to know a little about its internal design.
The docassemble web application saves state in a two different ways.
First, there is the state that represents the relationship between the
web browser and the docassemble server. This is controlled by
Flask, using a standard Flask session ID. Redis is used as the
backend for this session tracking system. (The
flask_kvsession
and
simplekv.memory.redisstore packages are used.) This session
tracking is necessary for the user login system, which is provided by
the
flask_user package. It is used on all pages of a
docassemble site. It is necessary for CSRF protection. This
session tracking system works through a cookie called
session. This
session ID is generated by Flask and only lasts for the life of the
web browser session (unless the user chooses the “remember me”
option). Old session IDs are automatically purged after a time.
Second, there is the state that represents the current step of a
user’s docassemble interview. This is not related to Flask, and
is unique to docassemble. Each step of the interview process
(which is accessible to the user with the “Back” button, which reverts
the interview state to the previous step) is stored as a row in a SQL
table called
userdict. A state of the interview is represented as a
Python dictionary serialized with cPickle and stored in the
dictionary column of the
userdict table. This Python dictionary
is used as the namespace in which interview code is evalutated. A
stored interview session is uniquely identified by the interview file
name (e.g.,
docassemble.eviction:data/questions/complaint.yml) and a
session ID, which is a unique sequence of 32 random upper and
lowercase characters. These are the values referred to in the
i and
session parameters produced by
interview_url(). (In the
userdict table, the interview file name is in the
filename column
and the session ID is in the
key column.)
So, there are two session IDs, based on systems that are independent
from one another. As a front-end developer, you will probably never
need to worry about the Flask session ID, which is stored in the
cookie called
session; your HTTP library and the docassemble
server can work with this session ID without any manual intervention
from you. The session ID that matters is the docassemble session
ID that, along with the filename of the YAML interview file,
uniquely identifies an interview session.
By default, server-side encryption is used, which means that interview
answers are stored encrypted on the server, and only a user who
started the interview can access the interview answers. The
encryption key is stored in a cookie called
secret, and the value of
the
secret is not stored on the server anywhere. For a logged in
user, the
secret is based on the user’s password, and the cookie is
set when the user logs in. Users who are not logged in have a random
secret. When the user logs in or registers, existing stored
interview states are re-encrypted with the new encryption key. (The
same happens when a logged-in user changes his or her password.) If a
user forgets his or her password, there is no way to retrieve the
interview answers. If a user completes an interview without logging
in, and the cookie containing the
secret expires (as it will when
the user closes the browser), there is no way for the interview
answers to be retrieved again. Server-side encryption can be turned
off in the interview by setting
multi_user to
True.
The docassemble web application is designed to use Ajax calls as
much as possible. When a user first visits an interview, the browser
sends a GET request, and the server responds with HTML. However, when
the user interacts with the screen and moves from step to step in the
interview, the browser sends Ajax POST request, and the server
responds with JSON. Within this JSON is a string representing the
HTML of the
<body> of the new screen. The server knows it is
dealing with an Ajax request because the browser includes
ajax=1
among the parameters of every request.
How sessions work
All interviews take place using the root location
/ (unless the
root of the whole site is changed in the configuration to
something else).
A new user starts an interview by visiting
/ with an
i parameter
set to the interview name (e.g.,
docassemble.eviction:data/questions/complaint.yml).
If a new user visits
/ without an
i parameter, the user will be
directed to
/list. However, if there is no
dispatch directive
in the configuration, the user will be redirected to the
default interview.
A user can enter an on-going interview by visiting
/ with an
i
parameter and a
session parameter.
Once the user enters an interview, the interview filename
i and the
session ID are cached in the Flask session data, so the web browser
does not have to send them to the server with every request; the
session cookie is the key that the server uses to know which session
it is dealing with.
The current interview’s
session ID is primarily used internally, out
of sight of the front end, but it can be obtained by:
- Calling
interview_url(), which returns a URL that can be used to resume the interview, containing both the
iand
sessionparameters.
- Calling
docassemble.base.functions.get_uid(), which returns the session ID.
- Calling
interview_list()to get information about all of the sessions of a logged-in user.
- Calling
get_interview_variables()from JavaScript.
The web browser interface tries to make sure that the location bar
always displays the
i parameter. Users may bookmark the URL or copy
and paste the URL to share it with other people. As long as the
i
parameter is present in the location bar, the link will generally
behave in a way that meets the user’s expectations.
However, this is only done out of convenience to the user. Since the
interview file name is stored in the user’s Flask session, it is not
necessary to use the
i parameter; in fact, if you try navigating to
/ with a web browser while using an interview, this will simply have
the effect of refreshing the screen.
If the browser sends a request with a different
i parameter than the
one in the Flask session data, this has the effect of starting a new
interview session. The old interview session is not deleted; it is
still there, stored in SQL, and is accessible to a logged-in user on
the
/interviews page.
If the URL parameter
reset=1 is included along with an
i parameter
in a GET request, the current interview session is erased from the
server (if there is a current session), and the interview is started
fresh.
If the URL parameter
cache=0 is included in a GET request, the
interview file will be re-read from the disk. This is how the
Playground works; when you click the “Save and Run” button or the
“Run” button, the browser is directed to a link that contains
cache=0. (Parsing an interview YAML file is a computationally
intensive task, so docassemble keeps interviews in memory whenever
it can.)
If you are sending HTTP requests to docassemble using your own
software, you only need to worry about URL parameters when sending a
GET request to start an interview session or switch to a different
session. (If you send a GET request in order to start a new session,
you might want to always include the
reset=1 parameter along with
the
i parameter, because then you don’t need to worry about what
might be in the Flask session data.) Most of the requests you send
can simply be POST requests to
/. The
session cookie unlocks all
of the stored information that is used to handle every HTTP request.
While the web browser sends POST requests with
ajax=1, a custom
front end would probably not be interested in the information returned
from such requests. Instead, a custom front end would send POST
requests with
json=1 and get back a JSON representation of the
next screen of the interview.
Note that as a technical matter, a user does not need to go through
the
flask_user log-in process in order to resume an existing
interview. Even if server-side encryption is used, if the appropriate
secret cookie is present, the interview answers will be decrypted.
So if you have a mechanism for remembering
i,
session, and
secret, you can run interviews without logging in. And if
server-side encryption is turned off in an interview, all you need is
i and
session. So, if you have such a mechanism for storing these
values, the only benefit to going through the login process would be
that the user of the interview would be that the interview could use
functions like
user_info() to do different things based on who the
user is.
Cross-Origin Resource Sharing (CORS)
There are many security issues with sending sensitive data over HTTP. By default, HTTP libraries implement fairly strict security restrictions, which you can override by:
- Changing how you call the library; and
- Changing the headers returned by the web server.
In addition, docassemble uses Flask’s CSRF protection system.
This is why you need to include a
csrf_token with every POST
request. By default, Flask’s CSRF protection mechanism
scrutinizes the referer header of each request, and generates an
error if the referer header is not set to an appropriate value.
This security check can be turned off by setting
require referer
to
False in the Configuration. However, CSRF protection itself
cannot be turned off.
Depending on how your custom front end calls docassemble, you may
need to change the headers that the docassemble web server
returns. By setting the
cross site domains directive in the
Configuration to include, you will
activate the following headers:
Access-Control-Allow-Origin: Access-Control-Allow-Credentials "true"
In some circumstances you can set
cross site domains to include
* in order to allow connections from anywhere. But when cookies are
used, this is not allowed. See the “Credentialed requests and
wildcards” section of Mozilla’s Cross-Origin Resource
Sharing
documentation.
Using the API avoids issues with CORS. | https://docassemble.com.br/docs/frontend.html | CC-MAIN-2020-45 | refinedweb | 2,972 | 51.99 |
Parsing XML With SimpleXML.
Basic Usage
Let’s start with the following sample as
languages.xml:
<?xml version="1.0" encoding="utf-8"?> <languages> <lang name="C"> <appeared>1972</appeared> <creator>Dennis Ritchie</creator> </lang> <lang name="PHP"> <appeared>1995</appeared> <creator>Rasmus Lerdorf</creator> </lang> <lang name="Java"> <appeared>1995</appeared> <creator>James Gosling</creator> </lang> </languages>
The above XML document encodes a list of programming languages, giving two details about each language: its year of implementation and the name of its creator.
The first step is to loading the XML using either
simplexml_load_file() or
simplexml_load_string(). As you might expect, the former will load the XML file a file and the later will load the XML from a given string.
<?php $languages = simplexml_load_file("languages.xml");
Both functions read the entire DOM tree into memory and returns a
SimpleXMLElement object representation of it. In the above example, the object is stored into the $
languages variable. You can then use
var_dump() or
print_r() to get the details of the returned object if you like.
SimpleXMLElement Object ( [lang] => Array ( [0] => SimpleXMLElement Object ( [@attributes] => Array ( [name] => C ) [appeared] => 1972 [creator] => Dennis Ritchie ) [1] => SimpleXMLElement Object ( [@attributes] => Array ( [name] => PHP ) [appeared] => 1995 [creator] => Rasmus Lerdorf ) [2] => SimpleXMLElement Object ( [@attributes] => Array ( [name] => Java ) [appeared] => 1995 [creator] => James Gosling ) ) )
The XML contained a root
language element which wrapped three
lang elements, which is why the
SimpleXMLElement has the public property
lang which is an array of three
SimpleXMLElements. Each element of the array corresponds to a
lang element in the XML document.
You can access the properties of the object in the usual way with the
-> operator. For example,
$languages->lang[0] will give you a
SimpleXMLElement object which corresponds to the first
lang element. This object then has two public properties:
appeared and
creator.
<?php $languages->lang[0]->appeared; $languages->lang[0]->creator;
Iterating through the list of languages and showing their details can be done very easily with standard looping methods, such as
foreach.
<?php foreach ($languages->lang as $lang) { printf( "<p>%s appeared in %d and was created by %s.</p>", $lang["name"], $lang->appeared, $lang->creator ); }
Notice that I accessed the
lang element’s name attribute to retrieve the name of the language. You can access any attribute of an element represented as a
SimpleXMLElement object using array notation like this.
Dealing With Namespaces
Many times you’ll encounter namespaced elements while working with XML from different web services. Let’s modify our
languages.xml example to reflect the usage of namespaces:
<?xml version="1.0" encoding="utf-8"?> <languages xmlns: <lang name="C"> <appeared>1972</appeared> <dc:creator>Dennis Ritchie</dc:creator> </lang> <lang name="PHP"> <appeared>1995</appeared> <dc:creator>Rasmus Lerdorf</dc:creator> </lang> <lang name="Java"> <appeared>1995</appeared> <dc:creator>James Gosling</dc:creator> </lang> </languages>
Now the
creator element is placed under the namespace
dc which points to. If you try to print the creator of a language using our previous technique, it won’t work. In order to read namespaced elements like this you need to use one of the following approaches.
The first approach is to use the namespace URI directly in your code when accessing namespaced elements. The following example demonstrates how:
<?php $dc = $languages->lang[1]- >children(""); echo $dc->creator;
The
children() method takes a namespace and returns the children of the element that are prefixed with it. It accepts two arguments; the first one is the XML namespace and the latter is an optional Boolean which defaults to false. If you pass true, the namespace will be treated as a prefix rather the actual namespace URI.
The second approach is to read the namespace URI from the document and use it while accessing namespaced elements. This is actually a cleaner way of accessing elements because you don’t have to hardcode the URI.
<?php $namespaces = $languages->getNamespaces(true); $dc = $languages->lang[1]->children($namespaces["dc"]); echo $dc->creator;
The
getNamespaces() method returns an array of namespace prefixes with their associated URIs. It accepts an optional parameter which defaults to false. If you set it true then the method will return the namespaces used in parent and child nodes. Otherwise, it finds namespaces used within the parent node only.
Now you can iterate through the list of languages like so:
<?php $languages = simplexml_load_file("languages.xml"); $ns = $languages->getNamespaces(true); foreach($languages->lang as $lang) { $dc = $lang->children($ns["dc"]); printf( "<p>%s appeared in %d and was created by %s.</p>", $lang["name"], $lang->appeared, $dc->creator ); }
A Practical Example – Parsing YouTube Video Feed
Let’s walk through an example that retrieves the RSS feed from a YouTube channel displays links to all of the videos from it. For this we need to make a call to the following URL:
The URL returns a list of the latest videos from the given channel in XML format. We’ll parse the XML and get the following pieces of information for each video:
- Video URL
- Thumbnail
- Title
We’ll start out by retrieving and loading the XML:
<?php $channel = "channelName"; $url = "".$channel."/uploads"; $xml = file_get_contents($url); $feed = simplexml_load_string($xml); $ns=$feed->getNameSpaces(true);
If you take a look at the XML feed you can see there are several
entity elements each of which stores the details of a specific video from the channel. But we are concerned with only thumbnail image, video URL, and title. The three elements are children of
group, which is a child of
entry:
<entry> … <media:group> … <media:player <media:thumbnail <media:titleTitle…</media:title> … </media:group> … </entry>
We simply loop through all the
entry elements, and for each one we can extract the relevant information. Note that
player,
thumbnail, and
title are all under the
media namespace. So, we need to proceed like the earlier example. We get the namespaces from the document and use the namespace while accessing the elements.
<?php foreach ($feed->entry as $entry) { $group=$entry->children($ns["media"]); $group=$group->group; $thumbnail_attrs=$group->thumbnail[1]->attributes(); $image=$thumbnail_attrs["url"]; $player=$group->player->attributes(); $link=$player["url"]; $title=$group->title; printf('<p><a href="%s"><img src="%s" alt="%s"></a></p>', $player, $image, $title); }
Conclusion
Now that you know how to use SimpleXML to parse XML data, you can improve your skills by parsing different XML feeds from various APIs. But an important point to consider is that SimpleXML reads the entire DOM into memory, so if you are parsing large data sets then you may face memory issues. In those cases it’s advisable to use something other than SimpleXML, preferably an event-based parser such as XML Parser. To learn more about SimpleXML, check out its?
- rs
- John
- Fernando Benitez
- Rod
- Oscar
- dalip
- leon
- Danish
- Don | http://www.sitepoint.com/parsing-xml-with-simplexml/ | CC-MAIN-2015-22 | refinedweb | 1,123 | 52.6 |
DL_ITERATE_PHDR(3) BSD Programmer's Manual DL_ITERATE_PHDR(3)
dl_iterate_phdr - iterate over program headers
#include <link.h> int dl_iterate_phdr(int (*callback)(struct dl_phdr_info *, size_t, void*), void *data); struc- ture that is defined as: struct dl_phdr_info { Elf_Addr dlpi_addr; const char *dlpi_name; const Elf_Phdr *dlpi_phdr; Elf_Half dlpi_phnum; };. Future versions of OpenBSD might add more members to this structure. To make it possible for programs to check whether any new members have been added, the size of the structure is passed as an argument to callback.
ld(1), ld.so(1), dlfcn(3), elf(5)
The dl_iterate_phdr function first appeared in OpenBSD 3.7. MirOS BSD #10-current January 17, 2005. | http://mirbsd.mirsolutions.de/htman/i386/man3/dl_iterate_phdr.htm | crawl-003 | refinedweb | 107 | 55.54 |
Jun 13, 2018 09:39 PM|george77479|LINK
hi, I am working on a re-write of my employee self-service portal, which uses a dynamics gp back-end. GP does not use lookup tables, as the values are usually coded into the forms (yea..i know).
so there is a table for dependents in the hr/payroll module, and one field is "relationship" (integer). so I created a class for this:
public class Relationship { public static readonly byte Spouse = 1; public static readonly byte Child = 2; public static readonly byte Parent = 3; public static readonly byte Sibling = 4; public static readonly byte Guardian = 5; public static readonly byte Other = 6; public static readonly byte Self = 7; }
now, I am trying to create a viewmodel to show a list of dependents on the site, and instead of showing a "7" for example, I want it to display "Self".
is this class appropriate to make this happen, or should it be a class that has "name" and "value", and load up a collection of these values in the constructor of the viewmodel?
I'm still getting used to this EF stuff, so pardon if my question doesn't make sense.
thanks,
geo
Contributor
2081 Points
Jun 14, 2018 05:24 AM|DA924|LINK
Maybe, the link will help you.
Jun 14, 2018 10:07 AM|Yuki Tao|LINK
Hi george77479,
According to your requirement, I suggest you could use Enum.
I make a demo according to your code, you could refer to it:
Model:
namespace TestApplication1.Models { public class Relationship { public int Id { get; set; } public string RelationshipName { get; set; } public RelationshipType relationshipType { get; set; } } public enum RelationshipType : int { Spouse = 1, Child = 2, Parent = 3, Sibling = 4, Guardian = 5, Other = 6, Self = 7 } }
View:
@model TestApplication1.Models.Relationship @{ ViewBag.Title = "enum_Index"; } <h2>enum_Index</h2> @Html.EnumDropDownListFor(Model => Model.relationshipType,"--Select--", htmlAttributes: new { @class = "form-control" })
Controller:
public ActionResult enum_Index() { Relationship model = new Relationship(); return View(model); }
How it works:
george77479I'm still getting used to this EF stuff, so pardon if my question doesn't make sense.
More details, you could refer to link which post by @ DA924.
In Entity Framework, an enumeration can have the following underlying types: Byte, Int16, Int32, Int64 , or SByte.
Best Regards.
Yuki Tao
Jun 14, 2018 02:16 PM|george77479|LINK
da924-great! I will bookmark this, and go through this tutorial. the more tutorials I read and videos I watch the better.
i'm sure it will take a little while before this becomes second-nature like datasets are to me now.
Jun 14, 2018 02:21 PM|george77479|LINK
yuki, this is perfect! it is a real mind-bender to switch from ado.net datasets to entity framework. but, ultimately I know that EF is the way to go, but the transition isn't easy. especially, when you have been doing ado.net data access for like 15 years.
thanks a million!
geo
4 replies
Last post Jun 14, 2018 02:21 PM by george77479 | https://forums.asp.net/p/2142297/6212309.aspx?Re+question+on+how+to+do+a+viewmodel+with+lookup | CC-MAIN-2018-39 | refinedweb | 502 | 53.61 |
When a control is disabled, it can not be styled in IE, and the default style sucks. So, I searched for a long time trying to figure out the best way of creating a dropdownlist control that has a ReadOnly property, which would be much more readable to a user, while still maintaining all the benefits of a
DropDownList control. After much searching, I decided to try the easy road, and came up with this control.
A
TextBox set as
Enabled,
Disabled, or
ReadOnly will look three different ways. This allows a
DropDownList to have that same behavior.
Basically, I decided to create my own control derived from
DropDownList. I had it internally contain a
TextBox, and when it went to be rendered, it just rendered the selected item, or the dropdown list, depending on if the
ReadOnly property was set.
using System.Web.UI.WebControls public class DropDownListReadOnly : DropDownList { private bool _readOnly; private TextBox tb = new TextBox(); public bool ReadOnly { get { return _readOnly; } set { _readOnly = value; } } protected override void Render(System.Web.UI.HtmlTextWriter writer) { if (_readOnly) { tb.Text = this.Text; tb.ReadOnly = true; tb.RenderControl(writer); } else base.Render(writer); } }
The easiest way of using this project is to create your own Web Control Library, and then add this class to your library. You can then insert the control into the VS2005 Designer by the normal method (right click on the Toolbox, pick the Add tab; then, right click the tab and pick Choose Items, and browse to the compiled DLL).
To-Do: Might need some error checking in the
Render method.
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/webforms/DropDownListReadOnly.aspx | crawl-002 | refinedweb | 271 | 54.12 |
How To Display Custom SDA Extensions in the Clinical Viewer (Sneeziness Example)
How To Display Custom SDA Extensions in the Clinical Viewer
____________________________________________________________________________
Written by Sebastian Musielak, InterSystems Support, November, 2016
A new feature introduced in HealthShare Version 15 is the ability to create custom SDA extensions to store custom data in SDA. In some cases, it might be nice to display that new data in the Clinical Viewer. This is a step-by-step guide for how to do that.
To begin, we will make a few assumptions:
A – This guide was created running HealthShare version:
Cache for Windows (x86-64) 2016.1.2 (Build 208U) Thu Aug 18 2016 16:25:50 EDT [HealthShare Modules: Core:15.01.6402 + Linkage Engine:15.0.6402 + Patient Index:15.0.6402 + Clinical Viewer:15.01.6402 + Active Analytics:15.0.6402]
B - Assume that the custom SDA Extension has already been configured according to the documentation. For the purposes of this guide, we will be using the “sneeziness” extension, which is an example in the HealthShare 2015 documentation for adding a new field called “sneeziness” to the Allergy SDA.
For more information about customizing the Allergy SDA, please see the HealthShare Version 15 documentation on the subject of “Customizing the SDA” and specifically the subsection on “Using the Extension Classes to Customize the SDA”.
C - We will also assume that the TrakCare Layout Manager has already been configured to allow customization of the clinical viewer.
For more information about configuring the system to use the Layout Editor, please search for the HealthShare Version 15 documentation for the subject of “Required Setup for Clinical Viewer Customization”.
1) Open TrakCare Layout Editor
We will start by going into the TrakCare Clinical Viewer and opening the Layout Editor for the chart, which we want to customize. In this case, we want to customize the Allergies and Adverse Reactions Chart.
2) Change an existing column name to suit your needs
In the list of Columns available in the Layout Editor (Category, Allergen, Nature of Reaction, etc.), pick a column that is not used to store any data or that contains data you might not care about. This will be the column in which we display our “Sneeziness” value. For example, I chose to use “External ID”. Double click on the text to edit the column name to “Sneeziness”. Save the chart under the desired site.
When you reload the Clinician Viewer, it still will not contain a column for Sneeziness because the column is hidden. That must be changed.
3) If the Column is hidden by Default, Display It
Open the Column Editor by Double Clicking on the row of columns. Drag the Sneeziness tab from the bottom list of tabs (the hidden tabs) to the top list of tabs (the viewed list). Apply the changes.
The Sneeziness column should now appear in the row of columns. But there is no data listed. We need to write code to pull the data out of the Clinical Viewer cache. We will do this using a custom class and classmethod. Once we have created the class, we need to
4) Write Code to Extract the Custom Data
Create a new class and create a classmethod in the namespace you have your Access Gateway. For example, I created a class called “User.SDAExtension” and a classmethod called “GetSneezinessViewerTransform”.
Since not all implementations are exactly the same, it is a good idea to understand the underlying workflow that allows data to be displayed in the Viewer. The HTML code that displays the Clinical Viewer is actually generated by dozens of HealthShare routines with names like “GCOMx.y.mac”, where x and y are numbered between 10 and 50. The HTML code is also generated from code with the name like “gen.ComponentXRef.X.Y.LZmac”, where X, Y, and Z are other numbers.
The GCOM and ComponentXRef code is, in turn, generated by changes made in the Layout Editor. It is important to understand that changes in the Layout Editor will update the GCOM and ComponentXRef routines, which will change the generated HTML. To find which piece of code generated the HTML, you can use your Chrome, Firefox or IE browser tools to inspect the page (or frame) source. The page and frame source will tell us the routine which generated that piece of HTML. For example, when we inspect the frame source for the data in the Allergies and Reactions table, we see the following HTML code:
You will notice that our Allergy HTML code is generated by gen.ComponentXRef.56.1. Eventually, we see where we define the Row ID (covered in the next step). With the Row ID, id ,we can extract the patientid.
With the patientid, we can do a lookup for the aggregation key, agkey, for that patient using the ^CacheTemp.HS.ViewerStreamlet global. You can look at this global in the portal after loading a Patient into the viewer in another browser tab.
The agkey allows us to find the Streamlet ID, streamletID, assuming we know what kind of streamlet we want. In our example case, we want to find the streamletID for the Allergy Streamlet. The streamletID corresponds to the Object/Table row on disk, which stores our custom extension data.
We can bring the object into memory using the %OpenID() method. This allows us to pass the SDAString property of the Streamlet Object into a Utility method, XMLImportSDAString(), which will allow us to parse the SDA structure and extract the Sneeziness property.
This code is actually run by the Server-Side process which accepts commands from the Clinical Viewer UI. A lot of the variables are already in memory and we just need to write code to find the data we need.
Class User.SDAExtension Extends %Persistent { ClassMethod GetSneezinessViewerTransform(id) As %String { set patientid=$p(id,"||") set agkey= $g(^CacheTemp.HS.ViewerStreamlet("HSACCESS","P",patientid)) set streamletID= $g(^CacheTemp.HS.ViewerStreamlet("HSACCESS","V",agkey,"ALG",id)) set obj=##class(HS.SDA3.Streamlet.Allergy).%OpenId(streamletID) set allergySDA3=##class(HS.SDA3.Allergy).%New() set st=allergySDA3.XMLImportSDAString(obj.SDAString) set sneeziness=allergySDA3.Extension.Sneeziness quit sneeziness } }
5) Create a Transform which Calls your Custom Code
Now that we have code that can run on the clinical viewer to get the Sneeziness, we need to define a transform to run that code. Go back to the Trak Viewer Home Page and click on Tools
. Click on “Code Table Map”, open the “System Management” expansion box, and click on “Transformation” to bring up the Transform Wizard.
Click “New” to create a new Transform.
Set the following fields in the Transformation:
Code: GetSneeziness
Name: GetSneeziness
Description: Get the sneeziness from the SDA3 extension
Expression: set val=##class(User.SDAExtension).GetSneezinessViewerTransform($g(rs.Data("RowID")))
Owner: Site
The expression gets the ID of the row from the CSP Session data and sends to the Class for processing. We found this by digging through the ComponentXRef code and finding where we specify Row ID. From that, we can find the PatientID, StreamletID and eventually extract the Sneeziness.
Note that all expressions in the Layout Editor expect the format:
SET val=<insert some COS code>
Update the Transformation and confirm that it shows up in the list of transformations. Now, we need to configure the Clinical Viewer to call the transform to get the data to show up.
6) Specify your transform to be used in the Layout Editor
Go back to the Layout Editor for the Allergies and Adverse Reactions Chart. Right-Click on the Properties for the Sneeziness column (see Action 2 of this guide). Select the Search button next to the Transform setting. This should allow you to choose the Transform we created in the previous action.
At this point, assuming you already have data populated on disk in Information Exchange, you should be able to Log Out of the TrakCare Viewer and log back in to see the newly populated column.
| https://community.intersystems.com/post/how-display-custom-sda-extensions-clinical-viewer-sneeziness-example | CC-MAIN-2021-21 | refinedweb | 1,328 | 63.29 |
.6 Publishing: General Resources
Book Industry Statistics. Useful facts and figures.
Bookwire. Comprehensive portal of the book industry.
Journal of Electronic Publishing. More scholarly articles on e-publishing.
eContent Magazine. Content on the Internet: news, articles and resources.
General Publishing Resources. Long listing, rather a mixed bag.
Comparison of ebook readers. Wikipedia. Good table and listings.
Internet Publishing. Personal site, with an excellent listing of electronic publishers.
Author 's Guild. Advice on the book (and other) contracts.
Perfect Pages: Book design, typography, and Microsoft Word. Aaron Shepard. 2006. 140 pp. $15.
StyleWriter. Searches for thousands of writing faults and helps you write clear and simple English. $150.
Comparison of e-book formats. Wikipedia. Extensive tables and listings.
PDF Zone. Online hub for all things PDF.
Creating eBooks
eBooks are commonly made by:
1. Compiling webpages: search under 'webpage compiler', 'ebook creator', 'make your own ebook', etc. Many exist for the Windows platform, few or none for the Mac. Features to check for:
Layout precision required: program:
compiles simple HTML pages only.
css layouts preserved: rarely the case: check with pages concerned.
Nature of Input:
text only, text and graphics.
basic multimedia. Flash and/or videos. Pdf Acrobat files.
Functionality:
individual pages can be hidden/password protected.
printing can be disabled.
copying can be disabled.
indexes easily created.
search facility can be added.
Level of security required:
password protection of whole document.
password protection of individual pages.
time expiry of ebook
expiry after certain number of times used.
access restricted to single machine/user.
user tracking.
Most programs have free trials or demo versions.
2. Making PDF files with specific software from webpages and/or Microsoft Word etc. documents.
Search with 'pdf creation software ', 'pdf creation services ', 'word to pdf ', 'online pdf services ', etc. Apart from Adobe's Acrobat, the software is generally inexpensive, and online services even more so.
Preservation of layout and links (bookmarks and exterior links) can be difficult, claims notwithstanding. Test thoroughly, especially with large documents.
Security is a vexing matter as software is readily available to open locked pdf documents (and extract their data) if the whole document is not password-protected (and passwords are commonly passed on with the document). Several commercial options exist, but are expensive. The cheaper options are:
Mac platform: Book Guard.
Windows Platform: Softlocker, AftIndia, and Apinsoft.
3. Making Flash pages, either from scratch (with Adobe Flash, Swish or Toufee) or from previous documents usually pdf (e.g. with PageTurnPro ) or Word (e.g. with Print2Flash. Search with 'easy flash program ', 'word to flash ', 'pdf to flash '. Many companies offer a complete service: e.g. PageGangster and ePaperFlip.
File Handling Routes
Routines are essential. Files need to be properly backed up, and some thought given to consistent naming for easy identification and renaming later.
The starting material is text and graphics, and these must be kept in their pristine form. Graphics should be stored in some lossless format like TIFF so that repeated copying does not degrade them. An author's manuscript should also be saved safely in its original submission, beyond the reach of subsequent editing.
These file handling routes should cover most needs:
1. Text to HTML
any number of HTML editors exist, many free or shareware.
-
2. Text to Acrobat PDF format
use Adobe Acrobat
use one of the many (cheaper) clones available: Win2PDF, PrintToPDF, PDF-Xchange, PDF-Xchange, PDF Online, DaVince Tools,Create PDF, Sonic PDF or NitroPDF.
3. Text to Microsoft Reader format
use ReaderWorks
4. Microsoft Reader to other formats
use ConvertLit.
5. Text to Hiebook format
6. Text to Gemstar format
contact fellow users: Gemstar eBook Publisher no longer live.
7. Text to Rocket eBook format
contact fellow users: Rocket Librarian site is no longer live.
8. Text to Mobipocket format
use Mobipocket Reader 5.
9. Word to Acrobat PDF
use FinePrint, MakePDF, PDF Writer Pro, etc.
10. Word to HTML
save as HTML in Word and use HTML Tidy
use Flash Utility, Word2Web or ePrint Professional.
11. Word to Microsoft Reader
use Microsoft Reader or ReaderWorks
12. HTML to Text
13. HTML to Acrobat PDF
use Acrobat's webpage capture and number pages with Javascript coding.
-
14. HTML to Microsoft Reader
use ReaderWorks
convert to MS Word and use Microsoft Reader or ReaderWorks
15. Add graphic files to HTML documents
optimize file size in a graphics program like Fireworks, and then use an HTML editor.
16. Add graphic files to Acrobat PDF documents
insert in HTML or Word document: import into Acrobat, and save at appropriate resolution.
17. Add graphic files to Microsoft Reader Documents
import to MS Word and use
convert to file formats and then use
18. Add graphics to MS Word documents
use Word Autoshapes or WordArt tools
import in GIF, JPG, PNG, BMP, PCX formats
19. Text and graphics to Adobe InDesign
follow InDesign procedures or consult third-party manuals.
20. Text and graphics to Quark Xpress
follow Xpress procedures or consult third-party manuals.
21. Prepress for Adobe InDesign.
export as Postscript files, setting controls or use DeskPrint
preflight in Adobe Acrobat
use third-party software, e.g.: PitStop Professional, Crackerjack, PDF Robot
22. Prepress for Quark Express
distill using Postscript driver
preflight in Adobe Acrobat
use third-party software: PDF Robot, etc.
23. Convert HTML/Word/Text files to MP3 format
use Verbose and then convert WAV file to MP3, or use Text Aloud
speak into an MP3 encoder: DailyMP3, MP3 Machine, Hitsquad, NCH, MP3-Converter, Winamp, Musicmatch or Blaze Media Pro.
24. Convert audio files to MP3 format for web download
25. Convert between graphics file formats use graphics programs: PhotoShop, Illustrator, Paintshop Pro.
26. Convert PDF to flash: Swiftools.
27. Convert PDF to PageTurn: PDF 2 PageTurn.
28. Convert PDF to ePub: PDF2EPUB.
29. Convert PDF to Kindle: AutoKindle. | http://ecommerce-digest.com/publishing-resources.html | CC-MAIN-2017-04 | refinedweb | 964 | 60.51 |
This article is in the Product Showcase section for our sponsors at CodeProject. These articles are intended to provide you with information on products and services that we consider useful and of value to developers.
/n software () offers components that implement Internet Communications functionality into your Desktop, Server, and Mobile applications. This includes file and data transfer, messaging, email, authentication, encryption, network management, business integration, payment processing, and cloud storage integration. Part of our mission is to keep developers' toolkits up-to-date with the latest platforms, protocols, and security standards. Now that .NET Standard and .NET Core is here, our components are fully compatible.
This article will help you get started with any of our over 30 toolkits. Here are some of the /n software products you can use in .NET Core and .NET Standard applications:
See a full listing of our products at. They are all included in our Red Carpet Subscription, which delivers all of our components on a frequently updated yearly service. It is a great way to keep your applications up to date.
Each /n software product can be used as a free trial and includes samples and online support. This article will give you easy instructions for setting up our components in your development environment. To follow along, please select and install a product as instructed below.
Our products are available in various editions such as .NET, Delphi, Java, C++, and macOS. All .NET Editions support .NET Core and .NET Standard. You can download and install a product through NuGet or directly from our website.
The easiest way to install the library is through NuGet. You can find all .NET Edition packages in the NuGet gallery. For instance, you can search for IP*Works!.
If you would rather get components directly from our website, please select a product from our download page. When you click a product, you will navigate to a list of available editions. Download the .NET Edition.
In addition to NuGet, the libraries are included when running the Windows setup for the .NET Edition and are placed in the lib\netstandard2.0, lib\netstandard1.6 and lib\netstandard1.4 folders of the install directory.
The toolkit is compiled for .NET Standard 2.0, 1.6, and 1.4. When adding the package via NuGet, the correct version is automatically selected depending on your project settings. By supporting .NET Standard 2.0, 1.6, and 1.4, a wide range of platforms are supported. This includes:
No special steps are required to target a supported version. After creating your project, simply add the NuGet package and start using the components.
While the use of the components is the same in all platforms that support .NET Standard, to better illustrate how the components may be used, the below example uses the components in a .NET Core Console Application running on Linux.
To begin, add the nsoftware.IPWorks NuGet package as described above. Once added, add some simple code in the main method. For instance:
nsoftware.IPWorks
main
using System;
using nsoftware.IPWorks;
namespace MyNetCoreApp
{
class Program
{
static void Main(string[] args)
{
Http http = new Http();
http.Get("");
string myData = http.TransferredData;
Console.WriteLine("Transfer OK!");
}
}
}
To deploy, right click the project in the Visual Studio solution explorer and select Publish.... Follow the prompts to create the files necessary for deployment. Copy the files from the bin\Release\PublishOutput folder to the Linux machine.
Publish...
To install a trial license for the components on the Linux system, copy the files from C:\Program Files\nsoftware\IPWorks 2016 .NET Edition\lib\netstandard2.0 to the deployment machine and run:
dotnet ./install-license.dll
After installing the trial license, run the compiled .NET Core application:
dotnet ./MyNetCoreApp.dll
The output should look like:
Whether installing from NuGet or running the Windows .NET Edition setup licensing is handled in the same manner.
To activate a trial license, use the install-license application. The install-license application is a .NET Core console application included in the toolkit.
If the library was installed from a NuGet package, this is present in the tools folder in the package installation directory.
If the library was installed as part of the .NET Edition installer, this is present in the lib/netstandard1.X folders in the installation.
To use the install-license application, run the command: dotnet ./install-license.dll
dotnet ./install-license.dll\XXNXA.lic. Set the Build Action property for the file to Embedded Resource.
%USERPROFILE%\.nsoftware\XXNXA.lic
Build Action
Embedded Resource
In NuGet, you should be prompted to install a license during the install process. In the Windows .NET Edition setup, a license is automatically installed. Alternatively, the install-license application is a .NET Core console application included in the toolkit.
If the library was installed as part of the .NET Edition, this is present in the lib/netstandard1.X folders in the installation.
To use the install-license application, run the command: dotnet ./install-license.dll <key>
dotnet ./install-license.dll <key>
where key is your product key. This will install a license to this particular system. Read on for deployment instructions.
key
The RuntimeLicense property must be set before deploying your application. To obtain this value on a properly licensed development machine, output the current value of the property. For instance:
RuntimeLicense
Console.WriteLine(component.RuntimeLicense);
This will output a long string. Save this value and use it in your real application like so:
component.RuntimeLicense = "value_from_above";
Note: The same RuntimeLicense property value works for all components included in the toolkit.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) | https://www.codeproject.com/Articles/1214026/How-to-Get-Started-with-n-software-NET-Core-Intern | CC-MAIN-2019-47 | refinedweb | 943 | 52.87 |
50039/how-do-you-handle-https-website-in-selenium
Hi Namit, you can handle https websites by changing the Firefox profile this way:
Syntax:
public class HTTPSSecuredConnection {
public static void main(String[] args){
FirefoxProfile profile = new FirefoxProfile();
profile.setAcceptUntrustedCertificates(false);
WebDriver driver = new FirefoxDriver(profile);
driver.get("url");
}
}
I'm currently working on Selenium with Python. ...READ MORE
First, find an XPath which will return ...READ MORE
Actually, its pretty simple. Use this code ...READ MORE
So, for implementing Assert(), you need to ...READ MORE
You need to add the key and cert to the createServer function.
const options ...READ MORE
If you check your config file, it ...READ MORE
As per your snap you have selected ...READ MORE
At a small scale, i.e, with just ...READ MORE
Hey Ritika, you can handle multiple windows ...READ MORE
Hello Vartika, you can handle Untrusted SSL ...READ MORE
OR
Already have an account? Sign in. | https://www.edureka.co/community/50039/how-do-you-handle-https-website-in-selenium | CC-MAIN-2020-05 | refinedweb | 154 | 53.68 |
Patent application title: PEER-TO-PEER REDUNDANT FILE SERVER SYSTEM AND METHODS
Inventors:
Francesco Lacapra (Sunnyvale, CA, US)
Peter Wallace Steele (Campbell, CA, US)
Peter Wallace Steele (Campbell, CA, US)
Bruno Sartirana (Loomis, CA, US)
Ernest Ying Sue Hua (Cupertino, CA, US)
Ernest Ying Sue Hua (Cupertino, CA, US)
I. Chung Joseph Lin (San Jose, CA, US)
I. Chung Joseph Lin (San Jose, CA, US)
Samuel Sui-Lun Li (Pleasanton, CA, US)
Samuel Sui-Lun Li (Pleasanton, CA, US)
Nathanael John Diller (San Francisco, CA, US)
Nathanael John Diller (San Francisco, CA, US)
Thomas Reynold Ramsdell (Palos Verdes, CA, US)
Thomas Reynold Ramsdell (Palos Verdes, CA, US)
Don Nguyen (Tracy, CA, US)
Don Nguyen (Tracy, CA, US)
Kyle Dinh Tran (San Jose, CA, US)
Kyle Dinh Tran (San Jose, CA, US)
Assignees:
Overland Storage, Inc.
IPC8 Class: AG06F1730FI
USPC Class:
707822
Class name:
Publication date: 2013-01-10
Patent application number: 20130013654
Abstract:
Peer-to-peer redundant file server system and methods include clients
that determine a target storage provider to contact for a particular
storage transaction based on a pathname provided by the filesystem and a
predetermined scheme such as a hash function applied to a portion of the
pathname. Servers use the same scheme to determine where to store
relevant file information so that the clients can locate the file
information. The target storage provider may store the file itself and/or
may store metadata that identifies one or more other storage providers
where the file is stored. A file may be replicated in multiple storage
providers, and the metadata may include a list of storage providers from
which the clients can select (e.g., randomly) in order to access the
file.
Claims:
1-46. (canceled)
47. A storage system comprising a plurality of storage providers for distributed storage of files associated with a filesystem, wherein each storage provider maintains statistics regarding the files that it stores, and wherein the statistics are collected by a designated storage provider for processing.
48. A storage system according to claim 47, wherein the statistics include file access frequency.
Description:
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims priority from U.S. Provisional Patent Application No. 61/048,781 entitled PEER-TO-PEER REDUNDANT FILE SERVER SYSTEM AND METHODS filed Apr. 29, 2008 in the name of Francesco Lacapra (Attorney Docket No. 3319/101) and also claims priority from U.S. Provisional Patent Application No. 61/111,958 entitled PEER-TO-PEER REDUNDANT FILE SERVER SYSTEM AND METHODS filed Nov. 6, 2008 in the names of Peter W. Steele and I Chung Joseph Lin, each of which is hereby incorporated herein by reference in its entirety.
TECHNICAL FIELD
[0002] The present invention relates to large-scale computer file storage, and more particularly to storage of large numbers of computer files using peer-to-peer techniques that provide scalable, reliable, and efficient disk operations on those files.
BACKGROUND ART
[0003] Internet services, such as email, web browsing, gaming, file transfer, and so on, are generally provided using a client-server model of communication. According to the client-server model, a server computer provides Internet services to other computers, called clients. Familiar examples of servers include mail servers and web servers. A server communicates with the client computer to send data and perform actions at the client's request. A computer may be both a client and a server. For example, a web server may contact another computer to synchronize its clock. In this case, the computer providing the clock data is a time server, and the requesting computer is both a time client and a web server.
[0004] Conventionally, a service provider, such as a web site, is responsible for creating and making available content for people to consume. Web sites typically following this model include, for example: news sites like CNN.com or BBC.co.uk; sites offering retail sales like Amazon.com or BestBuy.com; search engines with indexed search data like Google.com or MSN.com; and so on. However, a usage model is emerging whereby the users of a service, rather than the service provider, produce content for others to consume. In this "Web 2.0" model, a service provider operates a content creation server, and invites users to create or upload content to be hosted there. Examples of this model include blog providers such as Blogger.com; news aggregators like Digg.com and Reddit.com; and video sharing sites such as YouTube.com. Some websites are a hybrid between the two, in that the website management provides subject matter for users to comment on. An example of a hybrid site is technology news discussion site Slashdot.org, where staff selects news stories from other sites for comment. Traditional websites that originate content seem to be migrating towards becoming such hybrids. News site MSNBC.com may allow readers to comment on posted news stories, for example.
[0005] The infrastructure behind the Internet is growing to adapt to these changes from the traditional client-server model. A traditional service provider may be a business, and as such have a limited staff that can create and publish only a relatively small amount of content in any given timeframe. With user-generated content, however, the amount of data that can be created over the same timeframe increases by several orders of magnitude. Thus, a server infrastructure may suffer from problems of scalability, as the volume of data that must be processed and stored grows exponentially. Simply buying larger data storage devices can be prohibitively expensive, as technological limitations typically cause the cost-to-capacity ratio of storage devices to increase as capacity increases. Service providers may instead look for more cost-effective ways to store their data, including purchasing larger numbers of devices with smaller storage capacity. Clusters of such smaller devices are known in the art. For example, techniques have been developed to control redundant arrays of inexpensive disks (RAID). Furthermore, service providers may require a storage solution that integrates tightly with their existing computer infrastructure, rather than a system purchased off-the-shelf. Service providers may also need the ability to deal with data storage interruptions. RAID systems may provide these benefits; however, service providers may require that a storage system be cost-effective to support and maintain. RAID systems tend to be expensive, complex, and require considerable expertise and patience to manage.
[0006] Storage systems arrange their data in a filesystem. A filesystem is a system for storing and organizing computer files in a storage system to make it easy to find and access the files. A file, in turn, is a collection of data. FIG. 1 depicts a filesystem directory tree as known in the prior art, for example, as in the UNIX® model (Unix). Files within a filesystem are organized into directories. As with almost everything else in Unix, a directory is a type of file; in this case, one that contains information about other files. As a directory may refer to both (data) files and other directories, directories may nest one within the other. As a result, a filesystem has a tree-like structure, where each directory acts as a branch. Continuing the analogy, a regular data file is sometimes known as a leaf Like a tree, each filesystem has a root--a root directory 110. The root directory 110 depicted in FIG. 1 contains two directories 120 and 122 (branches), and a file 124 (a leaf). Directory 120 has two files 130 and 132, while directory 122 has three files and a subdirectory 140.
[0007] All files in a filesystem may be accessed by specifying a path from the root directory 110 to the file. For example, the location in the filesystem of file 150 is uniquely determined by the path from root directory 110 to directory 122 to directory 140 to file 150. A path is ordinarily represented by a concatenation of the names of the intermediate files, separated by a special character. This written description follows the Unix convention of a forward-slash / as a path separator, although alternate operating systems such as Microsoft® Windows® may use a different path separator. The root directory 110 has the special name /. Thus, if the directories are named as they are labeled in FIG. 1, file 150 has the path /122/140/150. (The Windows equivalent is C:\122\140\150, where C:\ is the name of the root directory.)
[0008] FIG. 2 is a block diagram of various operations that may be performed on files located within a filesystem directory tree. There are four major types of operations performed on files: file creation, reading data, updating data, and file deletion. Together these are known as CRUD operations, and provide the core functionality required of any storage system. Operating system architects support these main operations with additional operations. For example, it may be inconvenient for a software developer to continually refer to the full path of a file for each file operation. Thus, an operating system may provide the ability to open a file (that is, to initialize certain data pertaining to the file, including the file path) before performing any of the four major operations. Similarly, an operating system may provide the ability to close the file, to free up system resources when access is no longer required. All of these CRUD and support operations define the capabilities of the filesystem. POSIX®, which is the Portable Operating System Interface, an industry standard (IEEE 1003; ISO/IEC 9945), defines these operations as well.
[0009] Different filesystem designers may wish to implement different filesystem capabilities. For example, some filesystems support very large files. Some filesystems support a log of file operations, which can be "replayed" to ensure data consistency in case of a system failure. Some filesystems store data to a network, rather than a hard drive in the local computer. Examples of filesystems with different capabilities include the Windows NT® filesystem NTFS, the Common Internet File System CIFS, the Unix File System UFS2, Sun Microsystems® ZFS and Network File System NFS, Linux filesystems EXT3 and ReiserFS, and many others. Each of these filesystems implements the various filesystem CRUD and support operations. Thus, an NTFS filesystem 210 implements an open function 212 for opening a file, a close function 214 for closing an open file, are ad function 216 for reading data from an open file, a write function 218 for writing data to an open file, and others. Similarly, a CIFS filesystem 230 implements an open function 232, a close function 234, a read function 236, and a write function 238. However, these filesystems differ in that NTFS filesystem 210 contains operations that access a local hard disk drive 220, while CIFS filesystem 230 contains operations that access a network 240, such as a local area network (LAN). In a CIFS filesystem 230, network 240 is connected to a file server 250 which may have a hard disk drive 260 that actually stores the file data. CIFS filesystem 230 creates network messages that contain instructions, such as "read one kilobyte of data from file F", and sends them to file server 250. File server 250 receives the network messages, and translates them into requests on its own filesystem, which may access hard disk drive 260. Once the requests have completed, file server 250 creates a response network message and sends it back to CIFS filesystem 230 using network 240. However, a software application running on a computer supporting CIFS may simply use re ad function 236 without concerning itself with the details of the underlying network communication. Filesystems other than NTFS and CIFS similarly differ in their implementations, but all POSIX-compliant filesystems provide at least the same minimum filesystem CRUD and support operations.
[0010] A computer may support several different filesystems simultaneously. However, this capability raises a problem. Users require a unified method to address files, regardless of the filesystem in which they are stored. The exemplary method to address files is to use a file path, as described above. However, there must be a way to distinguish between the two different root directories of the two filesystems--they cannot both be named /. A common solution to this problem is to attach one filesystem tree to the other, in a process known as mounting. The reverse process of detaching two filesystem trees is known as unmounting, or dismounting.
[0011] FIG. 3 shows the relationship between two filesystem directory trees involved in a filesystem mount operation. In a mount operation, one of the filesystems acts as the root of the tree, as before, and is called the root filesystem. Typically, the root filesystem will be one that accesses a local hard disk drive. In the example of FIG. 3, the root filesystem 310 is an NTFS filesystem 210, with associated NTFS filesystem operations that access local hard disk drive 382. The other filesystem is known as the mounted filesystem. Here, the mounted filesystem 340 is a CIFS filesystem 230, with associated CIFS filesystem operations.
[0012] As before, root filesystem 310 has several files in it: directory A 330, directory B 332, directory C 334, and so on to directory Z 336. These directories have subdirectories and contain files, as shown. One of these directories, say 336, is chosen by the filesystem user as a point of attachment (also known as a mount point). A user then mounts filesystem 340 onto this directory using an operating system command, such as the Unix mount command. Before mounting, directory path /Z refers to directory 336. After mounting, mounted directory 350 replaces directory 336 in the filesystem tree, so directory path /Z now refers to directory 350, not directory 336. Any files contained in directory 336, such as file 338, are now inaccessible, as there is no way to address them with a path. For this reason, mount points are usually chosen to be empty directories, and may be specially created for that purpose. A typical Unix example is the directory /mnt. A filesystem may simultaneously mount several filesystems. Thus, /mnt may be empty, or it may contain several empty subdirectories for use as mount points if multiple filesystems are to be mounted therein.
[0013] As an example, before the filesystem 340 is mounted, directory Z 336 is empty. After mounting, the directory /Z now contains two subdirectories, /Z/D1 and /Z/D2. Path /Z/D1 represents a path containing the root directory 320, the mount point /Z (which refers to the root directory 350 of the second filesystem), and the directory 360. As another example, files 370 and 372 are available after mounting using paths /Z/D2/F1 and /Z/D2/F2 respectively (passing through directory D2 362). When a user is finished, the unmount command is available to detach the two filesystems. Once the second filesystem is unmounted, files such as file 338 are accessible to the operating system again.
[0014] Which file operations apply to a given file depends on which filesystem the file is located in. This is determined, in turn, by the path of the file. For example, file 331 has path /A/F2, which is located in an NTFS filesystem. Thus, NTFS operations are used on the file. These operations access a person's local hard disk drive 382, according to the design of NTFS. However, file 372 has path /Z/D2/F2, which crosses the mount point /Z. Thus, CIFS file operations are used on the file. These operations send a CIFS message through LAN 392 to another computer 394. Computer 394 supports CIFS, and contains the root directory 350 of filesystem 340. Computer 394 receives the request, which it then applies to filesystem 340. The process then begins again on computer 394. The path of the file on computer 394 is /D2/F2, which may be seen from looking now only at filesystem 340. Computer 394 determines the proper file operation to execute based on this path, itself looking for mount points. Computer 394 may pass along the operation to its local hard disk drive 396, or even to another device using another filesystem type if /D2 is a mount point in filesystem 340. Thus, the operating system of computer 394 provides a further level of abstraction.
[0015] Filesystem mounting can be used to increase the amount of file storage space available to a web server. Thus, mounting may be used to alleviate a service provider's needs in this respect. There are generally three paradigms for expanding storage space: adding additional local hard drives, mounting a network-attached storage (NAS), and mounting a storage area network (SAN). A NAS is one or more hardware devices used solely for storage (and not for any other applications), accessible over a network, which may be mounted on a computer using a standard network filesystem such as CIFS or NFS. Under a NAS, a computer will recognize the remote nature of the file, and convert file operations into formatted network messages. A SAN is similar, except that the remote devices are mounted using a proprietary filesystem, such that the core operating system is unaware that the file data are stored remotely.
[0016] The first paradigm, adding additional local hard drives, does not scale very well. Modern computers only have a finite number of connections to which to attach additional devices. Thus, this paradigm is not generally used for very large business operations.
[0017] The second paradigm requires mounting a NAS. A NAS scales well hardware-wise, as any number of devices may form the NAS, and they may be added easily to an existing setup. (Several versions of Microsoft Windows limit the number of mounted filesystems. Unix systems generally do not have this limitation.) A NAS is also generally less expensive than a SAN, byte-for-byte. However, because CIFS and NFS access a remote computer for each file operation, they have performance penalties. The process of traversing a file path, for example, requires locating a directory, reading its contents, locating the next directory, reading its contents, and so on until the final file is located. In NFS, each of these operations is a network access. On large networks nearing bandwidth saturation, NFS request/response pairs may be delayed enough to cause user frustration. In addition, NFS does not react well to failure conditions. For example, if a server hosting an NFS filesystem becomes unresponsive for any reason, a client that has mounted the filesystem may wait for a considerable period of time to complete an NFS transaction. In some NFS implementations, this delay may spread to other parts of the operating system, causing the client computer to also become unresponsive. As a result, NFS network administrators may be very particular about the order in which computers are restarted or failure conditions addressed.
[0018] The third paradigm requires mounting a SAN. A SAN is a proprietary product that can take several different storage devices and pool them, so that a computer sees them as a single, large, local storage unit. Thus, a SAN does not have to rely on off-the-shelf protocols such as CIFS or NFS. For this reason, SAN providers may offer better support for their products than NAS providers, including services to better integrate their product into an existing network infrastructure. A SAN is generally more expensive than a NAS. Each SAN has its own method for dealing with data storage interruptions, and different vendors offer different guarantees and service-level agreements. Of course, using a SAN generally implies the presence of an "intermediary" in the form of a device that adapts the "block" view of the world the SAN provides to the application view (e.g., in the form of software running on one or more clients of the SAN that may coordinate access among clients and implement abstractions such as files, or others, for example mail repositories, DBMSes and so on). Thus a direct comparison between a SAN and NAS devices can be misleading as the two have inherently different capabilities.
SUMMARY OF THE INVENTION
[0019] In accordance with one aspect of the invention there is provided a file storage system for handling a standard file system request including a path name. The system includes a plurality of storage providers and a client, in communication with the storage providers, that accepts the file system request and generates, for fulfillment, a corresponding reformatted request to a selected one of the storage providers, the selected one of the storage providers being initially selected by the client on the basis of a hashing algorithm applied to at least a portion of the path name, so that the client serves as an interface between the standard file system request and the storage providers.
[0020] In various alternative embodiments, each storage provider may be a virtual server including a plurality of peer-to-peer computer processes forming a set of peer nodes. A specified request directed to a specified virtual server may be delivered to all peer nodes of the virtual server but the set may be configured so that only a single one of the peer nodes responds to the specified request. Each one of the peer nodes may be implemented as a distinct physical storage medium coupled to a distinct microprocessor. The system may include a plurality of physical storage servers, each physical storage server including a plurality of physical storage media and a microprocessor, wherein each virtual server is configured with a distinct storage server being associated with each peer node of the set.
[0021] In accordance with another aspect of the invention there is provided a method for locating a given file in a file storage system having one or more storage providers, where the given file is associated with a file pathname including a sequence of directory names and a file name. The method involves (a) applying, in a computer process, a hashing algorithm to a chosen one of the directory names to obtain an index number, wherein the hashing algorithm has the property that different index numbers may be obtained for different directory names; (b) identifying a selected storage provider associated with the obtained index number; and (c) contacting the selected storage provider number in order to obtain information maintained by the selected storage provider regarding the location of the given file within the file storage system, whereby the given file may be located whether the given file is stored by the selected storage provider and/or by one or more other storage providers.
[0022] In various alternative embodiments, each storage provider may be a virtual server including a plurality of peer-to-peer computer processes forming a set of peer nodes. The chosen directory name may be a parent directory for the file name. The hashing algorithm may obtain index numbers from zero up to, but not including, a number that is an integer power of a chosen base integer, such that the number is greater than or equal to the number of file servers in the file storage system, and the number divided by the base integer is less than the number of file servers in the file storage system. The chosen base integer may be two. The method may further involve changing the location of the given file within the file storage system and updating the information maintained by the selected storage provider to reflect the changed location. Multiple instantiations of the given file may be stored in the file storage system, in which case the information maintained by the selected storage provider may identify the locations of the instantiations. Identifying the selected storage provider associated with the obtained index number may involve using the obtained index number to index a table of storage providers.
[0023] In accordance with another aspect of the invention there is provided a method of providing access by a client to a file in a storage system, where the file associated with a file pathname. The method involves (a) storing an instantiation of the file in each of a plurality of storage providers; (b) storing metadata for the file in a target storage provider selected based at least in part on the pathname using a predetermined mapping scheme, the metadata including at least a list of the storage providers; (c) sending a request by the client to the target storage provider; (d) providing the list of the storage providers by the target storage provider to the client in response to the request; (e) selecting one of the listed storage providers by the client using a predetermined selection scheme; and (f) communicating with the selected storage provider by the client in order to access the file instantiation stored in the selected storage provider.
[0024] In various alternative embodiments, the predetermined mapping scheme may include a hash algorithm applied to a portion of the pathname. The predetermined selection scheme may include random selection from among the listed storage providers. The predetermined selection scheme may include a user-configurable policy. The target storage provider may be one of the plurality of storage providers in which an instantiation of the file is stored or alternatively may be a storage provider in which an instantiation of the file is not stored. The metadata may further include the pathname, a portion of the pathname, and/or a file version number. An instantiation of the file may be stored in each of a plurality of storage providers for redundancy and/or for distributing processing load across the plurality of storage providers.
[0025] In accordance with another aspect of the invention there is provided a storage system including a client and a storage provider in communication the client over a communication network, the storage provider including a plurality of storage nodes, each storage node managed by a different storage server, wherein the plurality of storage nodes are associated with a multicast address and requests are transmitted to the storage provider using the multicast address.
[0026] In accordance with another aspect of the invention there is provided a storage system including a client and a storage provider in communication the client over a communication network, the storage provider including a plurality of storage nodes and a distributed queuing mechanism allowing tasks to be queued for processing by one or more of the storage nodes.
[0027] In various alternative embodiment, each storage node may be managed by a different storage server. The storage nodes may be associated with a multicast address and tasks are queued using the multicast address. One of the storage nodes may be designated for processing queued tasks at any given time. The storage nodes may be assigned different roles for managing the processing of queued tasks, the roles including at least a primary that manages the processing of queued tasks by default and a secondary that manages the processing of queued tasks if the primary is unable to do so. The roles may be assigned using color designations.
[0028] In accordance with another aspect of the invention there is provided a storage system including a client and a storage provider in communication the client over a communication network, the storage provider including a plurality of storage nodes, wherein one of the storage nodes is designated to act as a proxy for the plurality of nodes for managing storage of data among the plurality of storage nodes and interacting with the client on behalf of the other storage nodes.
[0029] In various alternative embodiments, each storage node may be managed by a different storage server. The storage nodes may be associated with a multicast address, in which case the client may communicate with the storage system using the multicast address. The storage nodes may be assigned different roles, the roles including at least a primary that acts as the proxy and a secondary that acts as the proxy if the primary is unable to do so. The roles may be assigned using color designations.
[0030] In accordance with another aspect of the invention there is provided a storage system including a plurality of storage providers for distributed storage of files associated with a filesystem, wherein each storage provider maintains statistics regarding the files that it stores, and wherein the statistics are collected by a designated storage provider for processing.
[0031] In various alternative embodiments, the statistics may include file access frequency.
[0032] In accordance with another aspect of the invention there is provided a method of distributing processing load across a plurality of storage providers. The method involves (a) determining that multiple clients desire access to a file stored by a given storage provider; (b) replicating the file in at least one additional storage provider such each of storage providers, including the given storage provider, stores an instantiation of the file; and (c) allowing clients to access any of the instantiations of the file so as to distribute processing load across the storage providers.
[0033] In various alternative embodiments, allowing clients to access any of the instantiations of the file may involve providing a list of the storage providers to each of the clients and allowing each client to select one of the storage providers from which to access the file. Allowing clients to access any of the instantiations of the file may involve specifying a different one of the storage providers for each of the clients.
[0034] In accordance with another aspect of the invention there is provided a method for maintaining peer set nodes of a computer file storage system. The method involves identifying waiting nodes associated with a current peer set based on a node-selection algorithm, the node-selection algorithm producing, at a root node, in a first computer process, an updated list of the current peer set nodes, and in a second computer process, conducting a dialog among the identified nodes, the dialog establishing a hierarchy and role distribution among the nodes.
[0035] In various alternative embodiments, identifying the waiting nodes associated with the current peer set of nodes may involve receiving, by a waiting node, from the root node, a message containing descriptors of waiting nodes associated with the current peer set. Conducting the dialog may involve sending invitations, by each of node-inviters, to be received by nodes-invitees, each invitation triggering a node-invitee to respond by sending an acknowledgment to a corresponding node-inviter, and receiving at least one acknowledgment by at least one node-inviter, wherein a node-inviter and a node-invitee are waiting nodes identified as being associated with the current peer set. The dialog indicator may be positive if each of node-inviters received acknowledgments from each of node-invitees and otherwise may be negative. The method may further involve, in a third computer process, allocating replacement nodes for the current peer set if the dialog success indicator is negative. Conducting the dialog may further involve passing messages received from the root node by each of node-inviters to each of node-invitees and/or passing a message by at least one of node-inviters to be received by the node-invitees, the message containing descriptors of waiting nodes associated with the current set and received by the at least one of node-inviters from the root node.
BRIEF DESCRIPTION OF THE DRAWINGS
[0036] The foregoing features of the invention will be more readily understood by reference to the following detailed description, taken with reference to the accompanying drawings, in which like reference characters refer to like parts throughout the several figures, and:
[0037] FIG. 1 depicts a filesystem directory tree as known in the prior art;
[0038] FIG. 2 is a block diagram of various operations that may be performed on files located within a filesystem directory tree;
[0039] FIG. 3 shows the relationship between two filesystem directory trees involved in a filesystem mount operation;
[0040] FIG. 4A is a schematic block diagram showing relevant components of an exemplary client/server system having a client and multiple storage providers in communication over a network such as a LAN or WAN (e.g. the Internet) as known in the art;
[0041] FIG. 4B is a schematic block diagram showing relevant components of a client/server system in accordance with an exemplary embodiment of the present invention;
[0042] FIG. 5 is a block diagram showing relevant components of a storage server in accordance with exemplary embodiments of the invention;
[0043] FIG. 6 shows a possible physical layout of the storage network of FIG. 4B;
[0044] FIG. 7 is a schematic block diagram showing the relevant interaction between logical components that participate in handling a client file operation in accordance with an embodiment;
[0045] FIG. 8 is a conceptual representation of the process of converting a file path into a table index for determining a storage provider in an embodiment;
[0046] FIG. 9 shows a process for expanding a table of storage providers controlling file metadata, indexed by the table index created in the process of FIG. 8.
[0047] FIG. 10 is a representation of the contents of a storage metadata file;
[0048] FIG. 11 depicts the logical components of a peer set in accordance with an embodiment of the invention;
[0049] FIG. 12 depicts communications in an exemplary embodiment between a client and a peer set using the computer network of FIG. 4;
[0050] FIG. 13 shows a data storage area and a metadata storage area in a node within a storage server in an embodiment;
[0051] FIG. 14 is a schematic block diagram of the components comprising, and those communicating with, a queue in accordance with an embodiment of the invention;
[0052] FIG. 15 is a schematic timing diagram showing relevant actions taken by, and messages passed between, peer set nodes and an asynchronous queue in accordance with an exemplary embodiment of the invention during repair of the loss of a secondary node;
[0053] FIG. 16A and FIG. 16B show the peer set of FIG. 11 during the failure of a secondary storage node and after the peer set has been healed by the process of FIG. 15, respectively;
[0054] FIG. 17A and FIG. 17B show the peer set of FIG. 11 during the failure of a primary storage node and after the peer set has been healed, respectively;
[0055] FIG. 18 is a schematic diagram showing a representation of an exemplary namespace of two clients and two servers in accordance with an exemplary embodiment of the present invention;
[0056] FIG. 19 is a schematic diagram showing a representation of clients mounting exported directories in to their respective namespaces in accordance with an exemplary embodiment of the present invention;
[0057] FIG. 20 is a schematic diagram showing a representation of an exemplary hierarchical namespace in accordance with an exemplary embodiment of the present invention;
[0058] FIG. 21 is a schematic diagram showing a representation of the namespace of FIG. 20 implemented using a hashing approach in accordance with an exemplary embodiment of the present invention;
[0059] FIG. 22 is a schematic diagram showing a representation of the namespace of FIG. 21 after renaming of a directory in accordance with an exemplary embodiment of the present invention;
[0060] FIG. 23 is a schematic diagram demonstrating dynamic expansion of a hash table in accordance with an exemplary embodiment of the present invention;
[0061] FIG. 24 is a schematic diagram showing a representation of a small file repository in accordance with an exemplary embodiment of the present invention;
[0062] FIG. 25 is a state transition diagram for node initialization, in accordance with an exemplary embodiment of the present invention;
[0063] FIG. 26 is a state transition diagram for membership in a management server federation, in accordance with an exemplary embodiment of the present invention;
[0064] FIG. 27 is a state transition diagram for discovering and joining a management server federation, in accordance with an exemplary embodiment of the present invention;
[0065] FIG. 28 is a state transition diagram for merging a management server federation by a root node, in accordance with an exemplary embodiment of the present invention;
[0066] FIG. 29 is a schematic diagram showing a representation of lease-based failure detection in a management server federation, in accordance with an exemplary embodiment of the present invention;
[0067] FIG. 30 is a state transition diagram for joining a peer set, in accordance with an exemplary embodiment of the present invention; and
[0068] FIG. 31 is a logic flow diagram showing the relevant components of a peer set protocol in accordance with an exemplary embodiment of the present invention.
DETAILED DESCRIPTION OF SPECIFIC EMBODIMENTS
Definitions
[0069] As used in this description and the accompanying claims, the following terms shall have the meanings indicated, unless the context otherwise requires:
[0070] A file is a collection of data. According to the UNIX® model (Unix), a file may also be an interface to access a computer resource, such as a network card, hard disk drive, or computer memory. These are only some examples--a list of computer resources that may accessed as files may be found in the Portable Operating System Interface (POSIX®), an industry standard (IEEE 1003; ISO/IEC 9945) that defines the core of the Unix specification and is hereby included by reference.
[0071] A filesystem is a system for storing and organizing computer files in a storage system. A filesystem organizes files into lists called directories. Directories are themselves files, as they hold a collection of data pertaining to other files. Thus, a directory may be listed in another directory. This type of inclusion may be repeated to create a hierarchical directory structure. Filesystems have a root directory at the base of the hierarchy. A file's parent directory is the directory that contains the file (the root directory may be considered its own parent). A file or directory may be considered a child of its parent directory, and the other children of a file's parent may be considered the file's siblings. The set of directories between a file and the root directory in a hierarchy (inclusive of the root directory) may be considered the file's ancestors. The set of files in a hierarchy for which a given directory is an ancestor may be considered the given directory's descendants.
[0072] A file path (briefly, "path") is a textual representation of the location, within a filesystem hierarchy, of a target file or directory. An absolute path is formed by concatenating the names of all directories lying between the root directory and the target file or directory, inclusive. A relative path is formed between a source directory and a target file or directory by concatenating two paths: a first path from the source directory to a common ancestor directory through parent directories, and a second path from the common ancestor directory through its children to the target file or directory. Intermediate directory names are separated by a path separator, which may be represented by a forward slash "/". The root directory path may also be represented by a forward slash "/". A file's relative parent directory path may be represented by two periods " . . . ".
[0073] Mounting is the process of attaching the directory trees of two filesystems, a base filesystem and a mounted filesystem. First, a target directory, or mount point, is chosen in the base filesystem. Next, a command is issued to the operating system to associate the mount point with the root directory of the mounted filesystem. After mounting, the file path of the mount point represents the root directory in the mounted filesystem, and requests for this path will return data associated with the mounted filesystem. Unmounting is the process of detaching a mounted filesystem.
[0074] Storage metadata is information pertaining to the storage of a file. For example, storage metadata may include the path of a file within a filesystem and a list of servers on which copies of file data may be found.
[0075] A peer set is a set of peering services, or nodes, running on at least two storage servers, cooperating to control access and modifications to a file or its storage metadata.
[0076] A network switch (briefly, "switch") is a computer networking device that connects network segments in a local area network (LAN), and is able to direct network traffic to a specific segment based on a hardware address known by the switch to attach to that segment. Hardware addresses are assigned to network devices in the data link layer (layer 2) of the ISO Open Systems Interconnection (OSI) networking model and the TCP/IP networking model.
[0077] A storage provider is hardware, software, or a combination of hardware and software for providing storage. A storage provider may be embodied as a single server, such as that depicted in FIG. 5, or it may be any other hardware or software for providing storage, including network attached storage or a storage area network.
I. General Discussion
Hardware and Network
[0078] FIG. 4A is a schematic block diagram showing relevant components of an exemplary client/server system as known in the art. Among other things, the client/server system includes a storage client 410 in communication with a number of storage providers 430, 440, 450 over a communication network 420 such as, for example, a LAN or a WAN (e.g., the Internet). Storage client 410 is a computer that utilizes data storage services provided by the storage providers 430, 440, 450. While the storage client 410 is a client with respect to the storage providers 430, 440, 450, it should be noted that the storage client 410 may be a server for other purposes; for example, it may be a web server. One possible physical embodiment of storage network 420 is depicted in FIG. 6 and described below.
[0079] The storage client 410 includes an application 412 and a filesystem 414. The client application 412 running on storage client 410 generates file operation requests, for example, to create a new file, write to an existing file, or read from an existing file. Filesystem 414 manages file storage and interacts with both the application 412 (e.g., via an application programming interface, or API) and the servers (e.g., via a network file protocol such as NFS or CIFS). On the application side, the filesystem 414 receives file operation requests from the application 412, processes the requests, and generates replies to the application 412. On the server side, the filesystem 414 transmits file operation requests to the storage providers 430, 440, and 450, and receives responses generated by the storage providers. The application 412 and the filesystem 414 are typically implemented in software that is stored in a memory and executed on a microprocessor, although it should be noted that such components may be implemented in hardware and/or software, and the present invention is not limited to the way in which the application 412 and filesystem 414 are implemented.
[0080] Each storage provider 430, 440, 450 includes a storage processor 432, 442, 452 respectively as well as storage 434, 444, 454 respectively. The storage processors 432, 442, 452 process storage operation requests received from the storage client 410 and send responses back to the storage client 410. The storage processors 432, 442, 452 interact respectively with the storage 434, 444, 454 to store and retrieve file-related data. In typical embodiments, each storage 434, 444, 454 includes one or more hard disk drives (e.g., four hard disk drives), although other types of storage may be used in addition to, or in lieu of, hard disk drives (e.g., solid-state or optical storage). Each storage processor 432, 442, 452 is typically implemented in software that is stored in a memory and executed on a microprocessor within its respective storage system 430, 440, 450, although it should be noted that such components may be implemented in hardware and/or software, and the present invention is not limited to the way in which the storage processors are implemented.
[0081] FIG. 4B is a schematic block diagram showing relevant components of a client/server system in accordance with an exemplary embodiment of the present invention. In this exemplary embodiment, each storage client, including the storage client 410, includes an additional component 415 (referred to hereinafter as the "FS client"), which is logically between filesystem 414 and network 420. Similarly, in this exemplary embodiment, each storage provider, including the storage providers 430, 440, 450, includes an additional component 431, 441, 451 (referred to hereinafter as the "FS server"), respectively, that is logically positioned between its respective storage processor 432, 442, 452 and the network 420. The FS client 415 and the FS servers 431, 441, 451 interact to provide an additional layer of file storage functionality (discussed in more detail below) over that provided by the filesystem 414 and the storage processors 432, 442, 452, utilizing services provided by the storage processors to manage the storage of file-related data. In essence, the FS client 415 receives file operation requests generated by the filesystem 414, which in the prior art system would have been forwarded to one of the storage processors, and instead interacts with one or more of the FS server components to satisfy the file operation requests and provide appropriate responses back to the filesystem 414. Each of the FS server components interfaces with its respective storage processor to store and retrieve data based on its interactions with the FS client. In typical embodiments, the FS client 415 and the FS servers 431, 441, 451 are implemented in software, although it should be noted that these components may be implemented in hardware and/or software, and the present invention is not limited to the way in which these components are implemented.
[0082] It should be noted that, in embodiments of the present invention, a client/server system may include multiple clients, each having a FS client component, as well as multiple storage providers, each having a FS server components. It should also be noted that, in various embodiments, a storage provider may be implemented using a single storage server or a group of storage servers (e.g., operating in a cluster) and may be implemented using any of a variety of physical or logical storage constructs. Among other things, this kind of abstraction allows the filesystem 414 to interact with different implementations of storage providers in a heterogeneous storage network. For example, a first storage provider may be a single file server, a second storage provider may be a cluster of two or more file servers, and a third storage provider may be a virtual file server running on one or more servers.
[0083] FIG. 5 is a block diagram showing relevant components of a storage server in accordance with exemplary embodiments of the invention. Among other things, storage server 510 has a microprocessor 520 and memory 530. Microprocessor 520 and memory 530 may cooperate to run a storage processor and an FS server. In addition, storage server 510 contains one or more hard disk drives for storing files. In an exemplary embodiment, storage server 510 contains four such drives 540, 542, 544, and 546; however, it will be understood that any number of drives may be used. Storage server 510 may also contain one or more network interface cards (NICs) for communicating with storage network 420 (not shown here). In the embodiment shown, storage server 510 contains two such NICs 550 and 552 to provide redundancy in case of a hardware or network failure; however, it will be understood that any number of NICs may be used.
[0084] FIG. 6 shows a possible physical layout of the storage network 120 of FIG. 4B. Storage servers are represented individually in this figure, not storage providers which may be a storage processing layer added to storage servers. Storage client 410 communicates with storage servers 630, 640, and 650. The storage network consists of three switches 610, 620, and 622. Each storage server in this embodiment has two NICs, 550 and 552. Each NIC is connected to a switch. In FIG. 6 the NICs labeled 550 all connect to switch 620, while the NICs labeled 552 all connect to switch 622. The storage client 410 is directly connected to a switch 610, which in turn is connected to switches 620 and 622. Storage client 410 may communicate with storage server 440, for example, through two different data paths: the first passes through the switch 610, switch 620, and NIC 550 on storage server 440, while the second passes through the switch 610, switch 630, and NIC 552 on storage server 440.
[0085] The architecture shown in this embodiment is resistant to network failure and hardware failure. For example, if the communications link between the switch 610 and switch 620 is broken, the storage client 410 may still contact storage server 440 using switch 622. If the communications link between the switch 610 and NIC 550 on storage server 440 is broken, the storage client 410 may still contact storage server 440 using NIC 552. Similarly, if NIC 550 hardware fails, the storage client 410 may still contact a storage server using the other NIC 552. In an alternate embodiment, network 420 may include an additional switch, connected to both switches 620 and 622, while storage client 410 connects to both switches. In this way, a switch may fail and the storage servers may still be contacted. Those skilled in the art will recognize other network arrangements that preserve this type of redundancy, and it is understood that these embodiments are also within the scope of this invention. Advantageously, as the cost of disk drives decreases over time on a dollars-per-byte basis, the system becomes more cost-effective.
System Overview
[0086] From the storage client perspective, a client application 412 (for example, a web server) interacts with the storage system to manipulate files. Client filesystem 414 is the point of contact between the client application 412 and the rest of the storage system. Thus, a purpose of client filesystem 414 is to receive filesystem requests from client application 412 and respond with file data or operation results. The inner workings of client filesystem 414 are generally opaque to client application 412. Enforcing such an isolation restriction aids in software design and portability. Client application 412 may communicate with filesystem 414 using a specified interface that the latter implements. In this way, client applications such as 412 may be portable between different implementations of filesystem 414. In some embodiments, the filesystem interface is a set of POSIX application programming interfaces (APIs). Other embodiments may use other APIs defined by the storage client's operating system.
[0087] Client filesystem 414 interfaces with the FS client 415, which, in turn, interfaces with the FS servers to store and retrieve information. The FS client and the FS servers use various complementary techniques (discussed below) to determine where and how information is stored. Among other things, the complementary techniques allow the FS client to determine which storage provider (or storage providers) to contact for each storage transaction and also allow the FS servers to manipulate where and how information is stored, including, for example, balancing storage load across multiple storage providers, balancing processing load across multiple storage providers, replicating information in multiple storage providers for redundancy, and replicating information in multiple storage providers for load balancing, to name but a few. The FS servers are essentially free to store file information anywhere among one or more of the storage providers and to move the information around dynamically, but the complementary techniques employed by the FS client and FS servers ensure that the FS client can locate the file information no matter where it is stored.
[0088] In exemplary embodiments, the FS client determines a target storage provider to contact for a particular storage transaction based on a pathname provided by the filesystem and a predetermined scheme. For example, the FS client may determine the target storage provider using a predetermined hash function applied to a portion of the pathname. The FS servers use the same scheme to determine where to store relevant file information so that the FS client can locate the file information. The target storage provider may store the file itself and/or may store metadata that identifies one or more other storage providers where the file is stored. Such metadata essentially provides a level of indirection that allows the physical location of the file to be decoupled from the pathname. Since a file may be replicated in multiple storage providers, the metadata may include a list of storage providers from which the FS client can select (e.g., randomly) in order to access the file. Among other things, such a list may allow for load balancing of client accesses to a particular file (e.g., if multiple clients are watching the same movie at the same time, the movie file may be replicated and stored in multiple storage providers, and each client may randomly select one of the storage providers from which to access the movie so that, statistically, the accesses are likely to be distributed among the multiple storage providers).
[0089] Thus, in one exemplary embodiment, the FS client decides which provider(s) to contact for a particular storage transaction in a two-step process: first, the FS client may locate a list of storage providers that control the requested data; second, the FS client may determine the subset of those providers that it will contact with a file operation request. In the first step, the FS client may use a hashing algorithm, described below in connection with FIG. 8 and FIG. 9, to locate and retrieve a list of relevant storage providers. The structure of such a list is described in connection with FIG. 10. The second step may use a storage redundancy policy which is configured by a storage system administrator. The FS client may communicate with storage providers using any convenient message data format, as described in connection with FIG. 12.
[0090] A storage provider may provide enhanced service availability for requests made by the FS client. In exemplary embodiments, a storage provider is composed of a number of processes that run on various physical servers and cooperate to control a storage area spread out among those servers. These processes, or "nodes," may communicate with each other as a set of peers using a shared network protocol and message format. However, a node need not be aware of the inner workings of any of the other nodes, according to the portability principle. Thus, for example, storage servers having different operating systems may run nodes having operating system specific optimizations, while participating in a single peer set.
[0091] Each node may control one or more storage media on a given server. For example, a node may control hard disk drives 540 and 542. Alternatively, a node may control only a portion of one or more hard disk drives, or other persistent storage medium such as Flash RAM, CD, or DVD. A node may communicate with the operating system of its own physical server, in order to process filesystem requests in a manner appropriate to that operating system. For example, a node may use a POSIX API to request that the local operating system perform a filesystem transaction in response to a client request, or it may use another API. A logical layout of storage metadata and file data that a node may implement on its server is discussed in connection with FIG. 13.
[0092] A storage provider may also provide enhanced data availability for requests made by the FS client. A storage provider may access only a single physical storage server, such as that depicted in FIG. 5. However, as a storage abstraction, it may be advantageous if a storage provider can access a number of different physical storage servers, across which it may spread its storage area. A storage provider may coordinate filesystem requests across all of the physical storage servers that it monitors, so that the data contained on storage server physical media are kept synchronized. A storage provider may also detect failures in the physical hardware or software of its servers, and effect repairs to improve availability. Such repairs may include, for example, selecting another available server to take the place of a down server. Or, if a storage processor (e.g. processor 432) has failed, a storage provider may issue a network message to the affected server, requesting that the appropriate storage software or hardware be restarted. Other self-healing techniques, and alternate methods of implementing the techniques described herein, that fall within the scope of the invention should be apparent to those skilled in the art. In exemplary embodiments, repairs may be effected using a system-wide queuing mechanism. Such a mechanism allows individual storage providers to queue resource-intensive tasks, such as data replication, for later fulfillment by servers that have spare processing power. This queuing system is discussed below in connection with FIG. 14, and the process of self-healing peer sets is discussed in connection with FIG. 15 through FIG. 17.
[0093] FIG. 7 is a schematic block diagram showing the relevant interaction between logical components that participate in handling a client file operation in accordance with an embodiment of the invention. Application software running on a storage client or on another computing device generates file operation requests. These file operation requests are received by the filesystem, as in step 710, using an application programming interface (API) such as POSIX. The filesystem processes the request, and returns the results of the operation to the requesting application software using the same API, as in step 780. The intervening steps are discussed below in relation to the intermediate filesystem operations, as they pertain to embodiments of this invention.
[0094] File data may be stored in several different storage areas, each of which is controlled by a different storage provider. It thus becomes necessary to track which file data are in which storage areas to ensure data consistency. For this reason, storage metadata may be created and maintained by the file storage system. Storage metadata may include the file path of file data, a list of storage providers controlling the file data, a generation (version) counter to ensure that file data is synchronized in all of the storage areas, and other convenient or necessary information pertaining to file data storage. Advantageously, storage metadata may be stored as files within a filesystem residing on the same physical media as the file data to which it pertains. Thus, storage metadata may also be controlled by a storage provider.
[0095] With these preliminary matters in mind, the method of FIG. 7 may be described. The FS client 415 receives a file operation request for a given file, in step 710. In step 720, the FS client 415 determines which storage provider (i.e., which FS server) controls access to storage metadata for the file (referred to herein as the "target" storage provider) by, e.g., calculating a hash of the path (or portion of the path) of the requested file. In step 730, the FS client 415 contacts the FS server in the target storage provider (i.e., storage provider 430 in this example), which in turn interacts with the storage processor 432 in step 732 to obtain storage metadata for the file from storage 434. In step 734, the FS server 431 returns the storage metadata to the FS client 415. In an exemplary embodiment, the storage metadata includes a list of one or more storage providers that control access to the actual file data; the list may include storage provider 430 itself. It should be noted that, using methods described below in connection with FIG. 8, steps 720 and 730 may advantageously involve only a single network access in order to resolve the path, thereby reducing both the latency and the bandwidth of the storage system.
[0096] The FS client then chooses, in step 740, one or more of the storage providers to contact in order to access the file data. The choice may be made using any of a variety of criteria (e.g., randomly or according to user-configurable policies), and such criteria may be designed to optimize the operation of the storage servers, the storage client, or both.
[0097] Once the choice of storage areas has been made, the FS client may contact 750 the FS server in one or more of the chosen storage providers to begin a filesystem transaction (in this example, the FS client 415 is shown contacting FS server 441 and FS server 451). Specifically, the FS client creates a formatted network message containing the request and sends it to the FS server (in this example, the FS client 415 may send separate messages to the FS servers 441 and 451). In step 760, the FS servers 441 and 451 interact with storage processors 442 and 452, respectively, to access file data from storages 444 and 454, respectively. In step 770, the FS servers 441 and 451 return the file data to the FS client 415. The FS client 415 may collect results from all of the relevant storage providers, and may aggregate 772 them into a result compliant with the client operating system's API (for example, a POSIX-compliant function return value). This result finally may be returned to the filesystem 414 in step 780, completing the process. The steps of this process are now described in detail.
[0098] As discussed above, the storage metadata essentially provides a level of indirection that allows files to be dynamically distributed among the storage providers while still allowing the FS client 415 to locate one or more storage providers that have file data. In lieu of, or in addition to, such storage metadata, the target storage provider may store file data. For example, the target storage provider may store the file data for a particular file, in which case the FS server may return the file data rather than storage metadata to the FS client in response to a request from the FS client. Alternatively, the target storage provider may store a portion of file data along with storage metadata and return both to the FS client in response to a request from the FS client. Since the FS servers may dynamically replicate and move file data among the storage providers, file data for a particular file initially might be stored on the target storage provider (in which case the target storage provider might return file data, rather than storage metadata, to the FS client in response to a request from the FS client) and later the file data may be replicated on and/or moved to one or more other storage providers (in which case the target storage provider might then return storage metadata, perhaps along with a portion of file data, to the FS client in response to a request from the FS client).
Hash Function Applied to Directory Names
[0099] A storage system embodiment may distribute file paths across the entirety of the available storage, according to a storage pattern. An exemplary embodiment distributes paths across the storage under the assumption that a filesystem cannot predict the paths that applications will select for file operations. This distribution allows the work that must be done by a storage system to be distributed amongst the storage providers. However, if file paths are predictable, then this distribution of workload may not be optimal. Implementations within the scope of this invention may allocate storage to providers differently, to best meet other application requirements.
[0100] An embodiment may distribute file paths across the various storage providers using a hash function. Hash functions are known in the art as a tool for evenly sorting an input data set into an output data set, usually of smaller size. Thus, an embodiment may divide the total available storage into a number of storage units of roughly equal size. The embodiment may then create a table of the storage units, and sort the file paths into the table using a hash function. To select a storage area, FS client 415 applies a hash function to part of the path of the file to yield a table index. Since hash functions tend to evenly sort their inputs into their outputs, this process advantageously evenly sorts the set of file names into the set of table indices, and thus evenly into storage units.
[0101] However, an exemplary embodiment does not use the entire file path as input to the hash function. Hashing an entire file path gives rise to certain inefficiencies. Files may move within a filesystem, and directories may be renamed. In either of these situations, portions of the file path would change, and the hash value would change correspondingly. As a result, the storage provider for one or more files may change, and the associated data may need to be moved among the storage providers. Renaming or moving a directory, especially one near the root of the filesystem, would cause the hash of all descendant files to change and would trigger significant data transfer unrelated to client data access. In order to address this problem, when associating a file path to a storage provider, embodiments of the invention may hash only a portion of the file path. An exemplary embodiment hashes only the name of the parent directory of the requested file. In this way, if a directory is renamed, the only data that must be moved is that data associated with the directory. Such data may include the storage metadata for the directory itself, and may also include storage metadata for related files, such as the directory's children, which may be stored for efficiency of certain filesystem operations (e.g. listing the contents of the directory). Files with similar paths, such as sibling files, advantageously produce the same hash value and may be stored in the same storage unit.
[0102] Consider next the portability principle. FS client 415 contacts storage providers, not storage units, to access data. It is not necessary or desirable for FS client 415 to have knowledge of storage units, which properly should be the concern of the storage providers. For this reason, an entry in the table may contain the name of the storage provider that controls the corresponding storage unit, not the name of the storage unit itself. Each entry in the table should correspond to roughly the same amount of storage, but the amount of storage controlled by a storage provider may be the same or different from the amount controlled by any other storage provider. Thus, the table may be redundant, in that a storage provider may appear in multiple table entries. In one embodiment, each storage provider has a number of entries in the table approximately proportional to the size of the storage it controls. For example, if storage provider A controls half as much storage as storage provider B, then storage provider A has half the number of entries in the table as storage provider B. In this way, each table entry is associated with approximately the same amount of storage as any other table entry, while hiding storage provider implementation details from FS client 415. In other embodiments, storage system administrators may wish to assign more table entries to storage providers with more powerful microprocessors, more available bandwidth, or for other reasons.
[0103] FIG. 8 is a conceptual representation of the process of converting a file path into a table index for determining a storage provider in an embodiment. An embodiment begins with a file path 810, obtained during step 710. The path in FIG. 8 is /docs/papers/paper.doc. There are three directories in this path: the root directory / 818, the first-level directory docs 812, and the second-level directory papers 814. There is a file leaf in the path, paper.doc 816. These components are separated with path separators /. As there are three directories in FIG. 8, there are at least three different directory hashes that could be formed from this path.
[0104] As a first example, a client requests directory papers. Client FS client 415 hashes the parent directory docs 812 using a hash function 820 to produce a hexadecimal value 830, namely f67eba23. Next, the hexadecimal value is converted to a table index by reduction modulo the size of the storage table. For example, a table may have size 16, or 24. In such a case, a bitmask 840 may be applied to discard all but the four least significant bits of the hash. Thus, the hash value f67eba23 is masked to 3 hex, labeled 850. This value corresponds to a (decimal) table index of 3.
[0105] As a second example, a client requests file paper.doc. The parent directory papers 814 is hashed using the same hash function 820 to yield a hexadecimal value 832, namely 8c2ab15c. Applying the same bitmask 840 yields c hex, labeled 852. This value corresponds to a (decimal) table index of 12. The root directory / may be similarly hashed and bitmasked to arrive at a third table index, if a client made a file operation request for directory docs 812. Thus, each directory is uniquely associated with a table index that corresponds to a particular storage provider.
[0106] The approach taken by embodiments of this invention has an advantage over prior `file path resolution` protocols such as those found in NFS. In NFS, resolving a file path to a file consists of an iterative process. First, the NFS filesystem breaks a file path into its component parts: a root directory, intermediate directories, and a data file. The filesystem locates a directory file for the first NFS-mounted directory (the NFS root) and retrieves it from the network. NFS then locates the directory file for each subdirectory, and retrieves it from the network. NFS repeats this process until the file path is entirely resolved. This process accesses a network several times, once for each intermediate directory. In embodiments of this invention, step 720 advantageously does not require network access to locate a file. As the hashing function applies only to a portion of the file path, the system may locate the file in an amount of time that does not substantially depend on the number of directories in the file path, or even the number of storage servers in the storage system. To access the file requires a single network message to the appropriate storage provider, which may look up the particular file in its local filesystem without accessing the network.
Adding Storage Capacity: Expanding a Hash Table
[0107] From time to time, a storage system administrator may wish to add additional storage capacity to a system. She may purchase additional servers, such as the server depicted in FIG. 5, and add them to the storage system. As an embodiment may distribute file paths evenly across all storage, the system should account for the additional servers. The system may give full or partial control over the new storage areas to existing storage providers, or add additional storage providers that control the new storage areas. In the first case, the size of the area controlled by each storage provider changes. In the second case, the number of storage providers changes. In both cases, the storage table may need to be changed. For example, a storage system may begin with three storage providers. An administrator purchases additional physical servers that require two more storage providers to be added to the system (by a process described below in connection with FIG. 9). Some of the content controlled by the first three storage providers should be distributed to the two new storage providers in order to balance processing load.
[0108] A table having a number of entries equal to the number of providers would be inefficient, considering that a hash value must be reduced modulo the size of the table to produce a valid table index. If the table size were to change from three to five, as in the above example, the hash values for most files in the filesystem would change (only one in five would stay the same: those with hash values equal to 0, 1, or 2 modulo 15). Such a change typically would force 80% of the storage metadata files to be transferred from one storage unit to another. This result would cause considerable performance penalties, and is clearly disadvantageous.
[0109] Embodiments of the invention may restrict the table size equal to a power of an integer. This constraint enables the efficient expansion of the storage table, as described below. In exemplary embodiments, the table size is equal to a power of two, but other embodiments may use a different exponential base. The choice of base two allows for certain efficiencies, for example the use of a hardware bitmask primitive as in FIG. 8, which is found on most modern computer architectures.
[0110] FIG. 9 shows a process for expanding a table of storage providers controlling file metadata, indexed by the table index created in the process of FIG. 8. Table expansion begins with the table of storage providers 910 in phase I. Here, there are three storage providers, with table entries for providers A 951, B 952, and C 953. Provider A 951 appears twice in the table--perhaps due to having the most storage capacity of the three servers. Suppose now that two more storage areas are added to the storage system, controlled by providers D 954 and E 955. The storage system may be reconfigured by a system administrator to allow the system to recognize the additional storage. The storage system may then determine that the table of storage providers has fewer indices than the number of storage providers, and expand the table.
[0111] Updating the table occurs in two phases: phase II and phase III. In phase II, the table is expanded the next-higher power (e.g., from 2 squared=4 entries to 2 cubed=8 entries in the example shown in FIG. 9) by copying the existing table entries 940, so that the table appears as 920. During this phase, it is important that the table size is constrained to be a power of an integer. If the base integer is N, the existing table entries will be copied N-1 times. In the exemplary embodiment of FIG. 9, the base integer is two, so the existing entries 940 are copied once, as entries 942. Although the number of entries of any storage provider in the table is multiplied by this process, the ratio of occurrences of one entry in the table to each other remains constant. Thus, the ratio of storage allocated to each storage provider remains fixed, as it should. Also, the size of the table at the end of phase II remains a power of the exponential base.
[0112] The process of phase II does not change which storage provider controls a given directory name. To see why this is so, let the size of the table be Nk for some value of k and consider the base-N representation of the hash value of a given directory name. The operation of reducing this hash value modulo the table size as in FIG. 8 is equivalent to discarding the most significant base-N digits of the value, and retaining only the k least significant digits. After expanding the table by a factor of N, the table will have size Nk+1. The process of FIG. 8 will then yield a table index having the k+1 least significant digits of the hash value. But the existing entries of the table were duplicated, once for each possible positive value of the digit at location k+1, so this digit merely `selects` one of N identical copies of the pre-expansion table. The remaining k least significant digits of the index have not changed. Thus, the new computed table index still corresponds to the same storage provider and storage area as before. As a result, the expansion in phase II does not require migrating any data between storage areas.
[0113] In phase III some of the duplicate entries of table 930 are replaced by entries for new storage providers. In exemplary embodiments, replacements follow the proportionality rule between table indexes and storage space. In FIG. 9, table index 4 is changed from provider A 951 to provider D 954, and table index 7 is changed from provider A 951 to provider E 955. As a result of this process, some hash values will be reassigned from one storage provider to another. Here, directory names with a hash value equal to (4 modulo 8) are reassigned from provider A 951 to provider D 954, while directory names with a hash value equal to (7 modulo 8) are reassigned from provider A 951 to provider E 955.
[0114] Additional details of dynamic table expansion are included below.
Automatic Migration of Data between Storage Providers
[0115] After a new storage provider is added to the provider table, the storage metadata for each of the directories controlled by the original storage provider may be migrated to the new storage area. The process of migrating directories may take some time, so the storage provider may not implement it immediately, but instead may place a migration entry in an asynchronous queuing system, such as that described below in connection with FIG. 14.
[0116] While migration is ongoing, a provider table may store both a new provider and an old provider for a given index. If a filesystem operation has a file path that hashes to a directory being migrated, the new storage provider is first queried for the storage metadata for that path. If the metadata has been moved to the new storage area, it is returned. If the metadata has not been moved yet, the old storage provider is queried, and the storage metadata is returned.
[0117] The migratory table itself (with multiple providers for each migrating index) is first installed in the old storage provider. A client may request a file path that produces a hash value associated with the new storage provider, while migration is ongoing. When making the first request, the client will have an old version of the table of providers, and will request the file from the old storage provider. This storage provider may use a generation counter to detect that a client has an old version of the table, and return a newer table to the client. (The actual storage metadata may still reside on the old storage server, as discussed above. In this case, the storage provider may reduce network communication by returning the metadata immediately to the client.) The client may replay the storage metadata fetch request, if needed, using the correct storage provider. At this time, the client may detect that the `new` provider has an older version of the table, and update the provider. In this way, the migratory table may propagate throughout the system.
[0118] After migration is complete, the migratory table may be replaced by a non-migratory table having only one storage provider per index. Again, using a generation counter, any given storage provider may determine during a filesystem operation that the client's table of storage areas is stale, and refresh it. And the client may determine that a provider has a stale (migratory) copy of the table, and refresh it. In an embodiment, several migrations may occur at once, in which case the system may contain more than one migratory table. Each table, however, may have a different generation counter, so the system may maintain consistency.
[0119] In one embodiment, migration between storage providers of the storage metadata itself is lazy. Lazy migration transfers storage metadata for a directory from one storage area to another as a client application requests filesystem operations for that directory. Migration of the storage metadata between storage areas in another embodiment is immediate. In immediate migration, as soon as a new storage provider entry is added to the table 930, all of the directories controlled by the old storage provider are immediately rehashed by the old storage area, to determine whether to migrate them. The old storage provider transfers storage metadata for each of the migrating directories to the new storage area, without waiting for a file operation request from the client.
Storage Metadata and Multiple Storage Providers
[0120] A storage client, in the process of fulfilling a filesystem operation received in step 710, may determine in step 720 a file path and which storage provider controls the storage metadata for the path. In step 730, the storage client may create a storage metadata request message containing this information, using a convenient data format, and send it to the storage provider. The provider may then retrieve 732 storage metadata for the file and return it 734. In exemplary embodiments, the storage metadata fetch request is the only network access required by the FS client to locate the storage providers controlling access to a file having a given path.
[0121] FIG. 10 is a representation of the contents of a storage metadata file 1010. Each storage metadata file 1010 pertains to a data file. The methods by which a storage provider stores that file should be generally opaque to the FS client. Still, the client filesystem may use data used by a storage provider, so long as the client does not process that data in any way. In particular, storage providers in embodiments of the invention may store file data under a name different from the name known to a client application, and provide that name to the client filesystem. One possible naming convention is discussed below in connection with FIG. 13. Metadata file 1010 may contain such an `opaque` filename 1012 if an appropriate naming convention is used. Alternate embodiments may simply store file data using the filename known to the client application.
[0122] In addition to the filename, storage metadata file 1010 also may contain a list of storage providers that control access to the file data. In the exemplary depiction of FIG. 10 this list has four entries 1014, 1016, 1018, and 1020. Storing this list enables the system to replicate data among up to four providers--other embodiments may have more or fewer entries in the list, depending on their data replication needs. Each entry may have at least the number of a storage provider that controls access to the file data, and a generation (version) counter, as well as other useful information. Generally, the first entry 1014 of the list will contain the storage provider that controls access to the metadata file itself. The same storage provider may advantageously control both file data and its storage metadata. Other embodiments may use a different file layout. In this example, entry 1014 represents that storage provider #6 controls access to the storage metadata for the file stored as 123abc, as well as having access to version 1233 of the file data. Entry 1016 represents that storage provider #18 has access to version 1232 of the file data. Entry 1018 is blank. Blank entries may occur, for example, if a storage provider held another version of the file data in the past, but ceased to do so (perhaps due to hardware failure). Or, a blank entry may occur if the storage system administrator changed the inter-provider replication policy to store only three copies of file data, instead of four. Those skilled in the art may recognize other reasons why an entry may be blank. Entry 1020 represents that storage provider #23 contains version 1233 of the file data for this file.
[0123] In this example, not all of the storage providers have the same version of the file data. Providers #6 and #23 contain a later version than provider #18. Thus, the file data is unsynchronized. The storage provider that controls the metadata file (in this example, provider #6) may recognize this condition, and begin repairs. Depending on whether this is the only file that needs replicating, repairs may take some time. Thus, the storage provider may queue a file data replication request in an asynchronous queuing system, such as that described in connection with FIG. 14, upon recognizing this condition. A storage provider in accordance with an embodiment may undertake periodic sweeps of the storage metadata files it controls, in order to detect such conditions before a file operation request arrives for a file that is out of sync.
[0124] In an exemplary embodiment, metadata may be stored in symbolic links. A symbolic link is a special system file that does not contain file data, but other data which refers to file data stored elsewhere. Metadata may be stored in any convenient format. Different filesystems store, and allow access to, the data in symbolic links differently. Unix systems advantageously require only a single system call readlink( ) to read a symbolic link, instead of the three system calls open( ), read ( ), and close( ) required of regular files. Also, Unix systems provide greater guarantees of file integrity to symbolic links than to regular files. Exemplary embodiments take advantage of symbolic links to enhance the speed and reliability of storage metadata retrieval. Other embodiments may use other methods of physically storing metadata.
Aggregating File Data Stored in Multiple Storage Providers
[0125] In step 740 the FS client 415 may parse the storage metadata and choose storage areas having copies of the file data to interact with. Until this point, the storage system has dealt only with locating and retrieving file storage metadata. Step 740 is the first step in the process where the distribution of file data is relevant. Embodiments may distribute file data between storage areas in a number of different ways. For example, a storage system may distribute the data across the various storage providers using RAID techniques, such as striping, mirroring, and parity-keeping. Each of these techniques has different advantages and disadvantages, and in an exemplary embodiment a storage system administrator may select a technique appropriate to the storage problem at hand. Each of these techniques also requires a storage client to access storage providers differently. For example, in mirroring, each storage area contains a complete copy of the relevant file, so the storage client may select a storage provider based on factors such as server load and available bandwidth. However, with striping, each storage area contains only part of the relevant file, and some or all storage providers may need to be accessed in any given file operation. It should be noted that a file may be replicated (mirrored) on multiple storage providers for redundancy, for load balancing, or for other purposes. For determining when a file should be replicated on multiple storage providers for redundancy, criteria that may be useful in some contexts include file type (for example, all text documents or all word processing documents), file size (for example, all files greater in size than 1 GB), and file name (for example, all files having a name including the string "account"). In the case of redundancy, for example, a file may be replicated in multiple storage providers and, using the indirection techniques described above, the client may be provided with a list of the storage providers and may contact one or more of the listed storage providers successively as needed to obtain access to the file; in this way, if the first storage provider contacted by the client is unavailable, then the client will contact another storage provider in order to obtain access to the file. In the case of load balancing, a file that is being accessed by multiple clients may be replicated in multiple storage providers and, using the indirection techniques described above, the client accesses may be distributed among the multiple storage providers by providing the clients with a list of storage providers and having the clients randomly select one of the listed storage providers to contact for access to the file. A storage system embodiment may contain logic for detecting heavy user access for a particular file or files, and dynamically, automatically replicate the file or files among storage providers to provide system-wide load balancing.
[0126] Given the configuration of file replication within the storage system, a filesystem in step 740 may decide which storage providers to contact to gain access to the actual file data. In an exemplary embodiment, file data is mirrored between storage areas. Thus, the decision may be driven by a policy engine that considers factors such as: current storage network usage; storage server load, capacity, and processing power; file data replication techniques; and any other useful and relevant information. Other embodiments may use other techniques to decide which storage provider(s) to contact for file data.
[0127] Note that, regardless of which storage provider the client chooses to contact, the storage providers themselves may coordinate with one another in order to maintain the relevant replication configuration without client direction. For example, storage providers may migrate data between themselves after an increase in storage capacity, as described above in connection with FIG. 9. So long as the client has a consistent picture of the data for access purposes, the storage providers may perform other manipulations of the physical data.
[0128] Once FS client 415 decides the proper storage providers to contact, the process continues to step 750. In step 750, FS client 415 may forward file operation request messages to the various chosen storage providers using the storage network. These messages correspond directly the originally requested file operation: open( ), close( ), read( ), write( ) or other operation specified by the filesystem API such as stat( ). In step 760 the servers of the various storage providers process these messages, as described in more detail in the next section. In step 770 the filesystem receives the results from the storage network.
[0129] In step 772 the FS client 415 may analyze the various aggregated responses to determine a further course of action. There are four possibilities. First, if all storage providers reported that the file operation completed successfully, the filesystem 414 may return 780 a success value to the requesting application software 412. For example, if the application requested a listing of all files in a directory, each of the storage providers would execute the appropriate system calls or library functions such as opendir( ) and readdir( ) to obtain a directory listing, and the FS client 415 may then place all of those listings into a master list to return to the application software 412.
[0130] Second, the file operation may be asynchronous. Some filesystems support the ability to read or write data in a file in an asynchronous, non-blocking fashion, so that the requesting application may execute other instructions while waiting for the file operation to complete. This ability is important in applications where the file represents a communications channel such as a network device, file socket, or pipe. The POSIX method to accomplish non-blocking operations is to issue an open( ) or fcntl( ) system call with O_NONBLOCK argument. In cases such as this, the filesystem 414 may return 780 a value immediately, and communicate with the requesting application software 412 at a later time using out-of-band channels, such as signals, in accordance with the standards for asynchronous file operations.
[0131] Third, the file operation may be synchronous, but may have timed out. Some filesystems support the ability to wait for a set period of time for a communications channel, such as a network device, file socket, or pipe, to be ready to present or accept data. The POSIX method to wait for a file is to issue a select( ) system call. In an exemplary embodiment, the FS client 415 sets a timer and issues the select( ) command to the various storage providers, waiting for a reply. If none reply within the set time limit, the filesystem 414 is free to return 780 a timeout condition to the requesting application software. Given that embodiments may communicate using a network, a wait time less than the average storage network latency should be expected to timeout. Other embodiments may allow the individual FS servers to perform their own timeouts, but network latency must be carefully monitored to allow filesystem 414 to return a value to the requesting application software 412 in a timely fashion.
[0132] Fourth, a file operation may be properly executed on all storage providers, but an error condition arises on one or more of the storage providers. For example, a request to write data to a non-existent file may generate such a condition. Here, FS client 415 has several options. The filesystem 414 may return 780 a single error to the application software 412 that adequately summarizes the aggregate error conditions. The filesystem 414 may rank the error conditions in a priority order, and return the most serious error. Or filesystem 414 may return the error condition returned by the largest number of storage providers. A person having skill in the art may devise alternate ways to aggregate errors, while falling within the scope of the invention.
[0133] Alternatively, the FS client 415 may recognize the error or errors, and replay the file operation request on one or more storage providers returning the errors. Some errors may arise due to internal inconsistencies in file data replication, such as an out-of-sync condition. Storage servers in accordance with embodiments of the invention have mechanisms in place to deal with such conditions, as described below. Still, these conditions may occur from time to time, and FS client 415 may recognize these conditions as transient. In such cases, the FS client 415 may replay the file operation request at a later time. If a number of replay attempts fail, the filesystem 414 may return 780 an error condition to the application software 412, as described above.
Storage Providers as Peer Sets
[0134] It is convenient and advantageous for a storage provider to safeguard against hardware failure or network failure, by storing copies of file data and storage metadata on different storage servers. For this reason, a file storage system as embodied herein may create and maintain peer sets to act as storage providers. A peer set is a set of peering services, called nodes, running on several storage servers, cooperating to control access to a file or its storage metadata. A node may control one or more disk drives, or more generally a number of volumes (mountable filesystems), on the server on which it operates. A peer set may appear to client FS client 415 as a single storage provider having a single network address, in accordance with the portability design principle. It will be understood that in other embodiments, a storage provider may be a single storage server.
[0135] FIG. 11 depicts the logical components of a peer set in accordance with an embodiment of the invention. Each storage server in the embodiment, for example server 1 (1110), has several storage devices (hard disk drives) 1120, 1122, 1124, and 1126 as in FIG. 5. A peer set may be embodied as processes, or nodes, running in a number of the storage servers. In an exemplary embodiment, the number of nodes per peer set (referred to herein as "cardinality") is fixed at three, although other embodiments may have more or fewer nodes in a peer set, and the cardinality may be fixed for a particular embodiment (e.g., some embodiments may be fixed at two nodes per peer set while other embodiments may be fixed at three nodes per peer set) or configurable, perhaps within certain constraints (e.g., cardinality may be configured for either two or three nodes per peer set). In typical embodiments, all peer sets are required to have the same cardinality, although other embodiments may be adapted to support peer sets of mixed cardinality (for example, to support different storage tiers for different types of files or file backup purposes). The examples below describe peer sets with three nodes. As discussed below, when a peer set has three nodes (or, more generally, an odd number of nodes), it is convenient to structure some processes to occur when a majority of the nodes (e.g., two nodes out of three) operate in agreement with each other. However, when a peer set has just two nodes (or, more generally, an even number of nodes), and in a process there is no prevailing agreement, an outside entity (e.g., a designated management node) may be enlisted to resolve the disagreement.
[0136] In the three node embodiment of FIG. 11, the peer set 1130 consists of node 1 running on server 1 (1110), node 2 on server 8 (1112), and node 3 on server 6 (1114). For simplicity, each node here controls a single storage device, but in other embodiments, a node may control several storage devices on a single server. The peer set 1130 thus controls storage devices 1122 using node 1, 1132 using node 2, and 1134 using node 3. Each physical server may run, simultaneously, several nodes that participate in different peer sets, but each node may only belong to one peer set. Again for simplicity, only one peer set is depicted, although typical embodiments may run four peer sets using these three servers (12 nodes for 12 storage devices).
[0137] Each peer set may designate a primary node, such as node 3 running on server 6 (1114). The non-primary nodes in a peer set are designated secondary nodes. The primary node may be responsible for coordinating a number of functions that should appear to a client as if they were performed by a single storage provider. The primary node may be the only node in the peer set that communicates with the client, as described in connection with FIG. 12. The primary node may also ensure that storage metadata and file data is properly synchronized across all of the nodes in the peer set, so that file operations are consistent. A primary node may use RAID techniques (striping, mirroring, parity-keeping) to distribute file data among the servers of the peer set, in accordance with an intra-set data replication policy. The advantages and disadvantages of using such policies are described above in connection with step 740, but it will be understood that replicating data between nodes of a peer set has certain advantages over replicating data between storage providers. One such advantage is isolation of the details of the process from the client. The primary node within a peer set may control authoritative data to which the other nodes synchronize, as described below in connection with FIG. 15.
[0138] In an exemplary embodiment, each peer node is assigned a label or other designation (referred to hereinafter as "color") that is used to distinguish that node in a peer set from all the other nodes. For example, one node may be designated red, one node may be designated blue, and the third node may be designated green, as represented by labeling storage media 1122, 1132, and 1134 as "R", "G", and "B" respectively. In an exemplary embodiment, colors are used to arbitrate the choice of the peer set member that has to fulfill a given request so that requests are distributed among the nodes of the peer set, although colors may be used for other purposes. (The choice of color may be entirely arbitrary, so long as each node in the peer set has a distinct color.) Each request sent to a peer set (e.g., using IP multicasting, as discussed below) receives initial processing by each member of the peer set to determine which member of the set will handle the processing. This determination may be performed for example, using a hashing scheme on a portion (such as the message ID or the IP address of the client or some combination of these items) of the request. Thus each member of the peer set can determine what "color" peer will be doing the processing of each request without any need for communication among the members of the peer set. If a request is determined by a peer to be one to be processed by the peer based on its color, then the peer performs the processing; and otherwise, the peer can ignore the remainder of the request. It should be noted that, in an exemplary embodiment, the color designation is separate from the primary/secondary role designation. A node can switch roles from primary to secondary or vice versa, but the node would not change color. Similarly, a node that replaces a crashed node in a peer set inherits the color of the crashed node but does not necessarily inherit the role of the node it replaces.
Using IP Multicasting to Communicate with a Peer Set
[0139] The exemplary peer set above controls three nodes. To provide enhanced availability, embodiments may place only one node belonging to a peer set on any given storage server. In this way, if a physical server fails for any reason, or if the node on that server fails, the peer set may still contain other nodes for processing file operation requests. According to the principles of portability and isolation, it is advantageous that the filesystem 414 on a storage client be unaware of the number of physical storage servers. Yet in order to provide service efficiency, a storage client may contact all of the physical storage servers controlled by a storage provider with a single network message.
[0140] Thus, in an exemplary embodiment, the storage system may assign each storage provider a multicast IP address, and the client may send file operation requests to this address. IP multicasting is known in the art--it is described in Internet Society, RFC 1112: Host Extensions for IP Multicasting (August 1989), and Internet Society, RFC 3170: IP Multicast Applications Challenges and Solutions (September 2001), which documents are hereby incorporated by reference. IP multicast addresses use the same format as, but a different address range than, unicast addresses. Other embodiments may contact a storage provider using a unicast (single-host) IP address, contact each physical server controlled by the provider using a unicast address, or have another communication model.
[0141] As additional servers are added to the storage system, perhaps to increase storage or processing capacity, more peer sets may be added to the system. In one embodiment, a system administrator may reconfigure the storage system to recognize the additional servers and to add peer sets. In another embodiment, the storage system may automatically detect new servers, and reconfigure the list of peer sets automatically. For example, a system may employ Dynamic Host Configuration Protocol (DHCP). DHCP is described in Internet Society, Request for Comments (RFC) 2131: Dynamic Host Configuration Protocol (March 1997), which is hereby incorporated by reference. In such an embodiment, storage servers may request configuration parameters, such as a host IP address, from a DHCP server automatically, with no additional configuration by a system administrator. A peer set IP (multicast) address is assigned to the members of the peer set using a membership protocol described below.
[0142] FIG. 12 depicts communications in an exemplary embodiment between a client and a peer set using the computer network of FIG. 4. Storage client 410 may access FS client 415, which communicates with a peer set 1210 via network 420. In particular, a storage system administrator may assign an IP multicast address, such as 227.0.0.1, to the peer set 1210. Each of the nodes 1222, 1232, and 1242 in the peer set may be configured to listen for client storage messages sent to this multicast address. However, the primary node 1242 may be the only node configured to respond to such a message. Thus, each message sent by FS client 415 may be answered by a single message sent by a primary node 1242, simplifying network communications between FS client 415 and the peer set.
[0143] The distributed processing arrangement of this embodiment is both efficient and simple. In terms of efficiency, the client need send only a single message for handling of a request. Multicasting of the request permits handling of each class of request with great efficiency, since all members of the group are sent the request simultaneously, yet there is only a single reply. The switch configuration of FIG. 6 handles traffic on the client network efficiently, because packets are replicated only when the switch closest to the nodes is reached. The arrangement of this embodiment is simple because it avoids the need for pinpointing failures that would be required by a centrally supervised system; the distributed embodiment herein avoids the need for centralized failure detection.
[0144] The following are some additional references relating to multicasting:
[0145] [CISCO-99] Cisco Systems, Inc., "Multicast Deployment Made Easy", 1999..- pdf
[0146] [CISCO-02] Cisco Systems, Inc., "Cisco IOS Profile Release 12.1(13)E7 and 12.2(12)b--System Testing for Financial Enterprise Customers," 2003.- ccont--0900aecd80310d60.pdf
[0147] [CISCO-05] Cisco Systems, Inc., "Cisco 7600 Router: Resilience and Availability for Video Deployments", Whitepaper, 2005.- t--0900aecd80322ce1.pdf
[0148] [QUINN-03] Michael Jay Quinn, "Parallel Programming in C with MPI and OpenMP", McGraw-Hill Professional, 2003.
[0149] [DEMIRCI-02] Turan Demirci, "A Performance Study on Real-Time IP Multicasting", Thesis, Dept. of Electrical and Electronics Engineering, Middle East Technical University, September 2002. Also in Proceedings of the Eighth IEEE International Symposium on Computers and Communications. IEEE, 2003.
[0150] [GRANATH-06] Derek Granath, "How to Optimize Switch Design for Next Generation Ethernet Networks", Network Systems Design Line, Jun. 14, 2006.;jsessioni- d=2GUIWZFYBGDIOQSNDLRSKH0CJUNN2JVN?articleID=189401062
[0151] [RFC-1112] S. Deering, "Host Extensions for IP Multicasting", STD 5, RFC 1112, August 1989.
[0152] [RFC-1700] J. Reynolds, J. Postel, "Assigned Numbers", ISI, October 1994.
[0153] [RFC-2113] D. Katz, "IP Router Alert Option", Standards Track, February 1997.
[0154] [RFC-2236] W. Fenner, "Internet Group Management Protocol, Version 2", RFC 2236, November 1997.
[0155] [RFC-3376] B. Cain, "Internet Group Management Protocol, Version 3", RFC 3376, October 2002.
[0156] [SSM-02] Bhattacharyya, S., et. al., "An Overview of Source--Specific Multicast (SSM)", Internet Draft, March 2002.
Layout of Data within a Node of a Peer Set
[0157] The first issue to address is that of the namespace of storage metadata files within a storage area. Two different directories may store their metadata in the same storage area if they have identical names. As an example, given the path /docs/joe/pdf/file.pdf, an embodiment may hash the parent directory name pdf to determine a table index and a peer set. Given a path /apps/adobe/pdf/pdfviewer, the client may hash the parent directory name pdf to find the same table index and peer set. Although the last two directories differ in their file paths, an embodiment may determine the same peer set for both, if it used the same input to the hash function: the parent directory name pdf. Thus, the directory name pdf is not enough information to assign a location to /docs/joe/pdf and /apps/adobe/pdf in the same storage area. To avoid collisions, embodiments may save the storage metadata using entire path names. Thus, while the two directories ending in pdf may be controlled by the same peer set, they may be stored within the peer set's storage area based on their full, absolute paths.
[0158] There are several advantages to this scheme. First, if a directory is renamed, only it and its immediate children may need to be rehashed and possibly moved to another storage area. As only storage metadata must be transferred, and not file data, such service disruptions use a minimal amount of bandwidth. Next, each node may use its native filesystems to look up paths, and to guarantee that path name collisions cannot happen. Also, renaming a directory may be done in parallel on each of the nodes in a peer set. However, other embodiments may store metadata in other ways more appropriate to different applications, and a person of skill in the art should recognize how to make changes to the implementation of the redundant namespace as required.
[0159] The next issue to address is that of the namespace of data files within a storage area. File data need not be stored using the name requested by a client. Flat directory structures require fewer directory lookups than deep structures. However, lookups within a directory become slower as the directory stores more files, due to the mechanics of accessing the relevant data structures. Thus, the most rapid file lookups occur in directory trees wherein each directory contains a fixed, finite number of enumerated subdirectories, where the fixed number may be adjusted based on hardware and software capabilities to adjust response time. A common scheme, and that of an exemplary embodiment, assigns a unique file ID to each file (irrespective of its possible renames or moves through the global file system hierarchy). The file may be stored in a directory path based on the unique ID.
[0160] FIG. 13 shows a data storage area and a metadata storage area in a node within a storage server in an embodiment. Each storage server runs one or more nodes, such as node 1310. Each node may control one or more storage volumes. Node 1310 controls two directory trees 1320 and 1330 for storing metadata and file data, respectively. In some embodiments, the directory trees 1320 and 1330 are independently mountable filesystems, while in others they are not. One tree may be a root filesystem, and the other tree may be located within a directory of the root filesystem, or both trees may be mounted in a third filesystem.
[0161] Directory tree 1320 contains a storage metadata repository (MDR). In a storage system in accordance with an exemplary embodiment of the invention, storage metadata may be placed in a filesystem and given the same absolute path as the file requested by the client filesystem 414. Storage metadata is stored in this manner to facilitate its rapid retrieval. Thus, when a client makes a file operation request for a file having a given path, a storage server may retrieve the storage metadata for that file by applying that path to its metadata filesystem. As with any filesystem, the metadata filesystem contains a root directory 1322, several directories 1324 arranged in a hierarchy, and several files such as file 1326. In some embodiments, the storage metadata repository is not the root filesystem, but is contained within a directory such as /MDR in the root filesystem. In this way, a storage server may segregate the storage metadata repository from other files, such as operating system files and a file data repository.
[0162] Directory tree 1330 contains a file data repository, and has a simple structure. The base of the tree is a root directory 1332. Up to 256 directories, enumerated in hex from 00 1334 through FF 1336, may be contained in each directory in the tree. For example, directory 1338, named B3, contains a subdirectory 1340, named 1A. The name of each leaf file, such as file 1350, may contain the complete hash value, in this case B31A.
[0163] In some embodiments, a generation counter may be stored as part of the file name. This counter can be used by a peer set to ensure that each file controlled by the peer set is properly synchronized in each file data storage hierarchy. Thus, a data file's full path from the root directory of the repository may be, for example, /B3/1A/B31A-17, the path of file 1350. The counter may be incremented any time the data in the file is written or rewritten. This counter enables data files to move between peer sets coherently--when a file is copied to the new peer set, its counter is incremented, so the copy does not overwrite any older file data already stored in the new peer set. In some embodiments, the file data repository is not the root filesystem, but is contained within a directory such as /DR in the root filesystem. In this way, a storage server may segregate the file data repository from other files, such as operating system files and the storage metadata repository.
[0164] The generation counter may also be used to simplify the operation of another embodiment. For example, file read-write locking has certain implementation challenges that can be entirely avoided by using a generation counter. One embodiment may permit only creates, reads, overwrites, and deletes, but not updates. These file operations in practice may be easier to implement than the full set including updates, due to the avoidance of race conditions. Such an embodiment may implement this functionality as follows. Create operations may check for the existence of a file of the appropriate name having any version, creating version 1 or returning an error if the file is already present. Read operations may locate the latest version of a file and return its data. Delete operations may mark the metadata for deletion, without disturbing ongoing read operations. Overwrite operations may locate the latest version of a file, create a new version, write the new version, and update the metadata (if it still exists), also without disturbing ongoing read operations. Such an embodiment may run a `garbage collector` process on a regular basis to compare files in the filesystem against their metadata, and permanently delete files and their metadata if there are no ongoing read/write operations.
[0165] Storage metadata in directory tree 1320 may be associated with file data in directory tree 1330 as follows. In an exemplary embodiment, each time a file is created by the client, the controlling peer set assigns the file a unique file identifier. For example, the unique identifier may be formed by combining the ID of the peer set that created (and will initially control) the file, with a counter of files created within the peer set. This algorithm may be used to create the opaque file data storage name discussed in connection with FIG. 10.
[0166] Once a peer set creates a storage name, it may create the data file 1350 itself, and create a storage metadata file 1326 which is associated with the data file 1350, as indicated in FIG. 13. The peer set may then replicate the storage metadata and data file throughout the storage servers in its own peer set according to the storage metadata replication policy (in exemplary embodiments, mirroring) and the intra-set file data replication policy. As replication may be resource-intensive, the primary node may queue a request to do so in an asynchronous queue, as described below.
Small File Optimizations
[0167] In some applications, a storage system may provide very fast access to small files. For example, a web bulletin board system may allow users to select small images to represent their on-line personas, called "avatars." These avatars are typically no larger than a few kilobytes, with some bulletin boards having a maximum size restriction. In addition, posts made in web bulletin boards and blogs are typically textual, and of a few kilobytes in size. For these applications, a storage system that provides rapid access to the small files representing a post or avatar, has clear advantages in system response time and may have improved user satisfaction.
[0168] An embodiment may provide rapid access to small files by employing flat storage. In a flat storage embodiment, a storage medium (such as a hard disk drive or an area of a hard disk drive) is partitioned into equally-sized storage areas, or "extents." Each extent may be, for example, 1 kilobyte, 4 kilobytes, or another appropriate size. For example, an extent may be equal in size to a physical disk block. A "small file" is then any file whose data occupies a limited number of extents, up to a maximum file size. In such an embodiment, a particular extent's number may be mapped onto a physical location by a simple multiplication. Thus, if an extent is 4 kilobytes (0x1000 in hexadecimal), then the first extent begins at byte 0x0000 of the storage medium, the second extent begins at byte 0x1000 of the storage medium, and so on. In another embodiment, one or more of the extents may be used as a bitmap for the storage system, so that it may determine which of the remaining extents contain small file data. In this embodiment, the physical location may be found from a multiplication followed by an addition (to offset the size of the bitmap). Thus, if the first two extents are used as a bitmap, then the second file data may be located at, for example, byte 0x1000 (second file)+0x2000 (offset)=0x3000. Such multiplications followed by additions exist in some modern computer architectures as low-level hardware primitives, the use of which may increase the speed of the storage system in locating files on disk. An embodiment may create a small file storage area upon the request of a system administrator, or under direction from system configuration data.
[0169] It is advantageous to use a naming scheme for small files that does not directly related to the physical location at which the file is stored for several reasons. If the number of an extent were used directly, an application could directly access physical storage, regardless of whether data is stored there or not. This type of access may lead to data corruption. Also, if a file is modified in-place using the same name, there is no historical data regarding prior versions of the file data. And if a file name is tied to a physical storage offset, it may be difficult to identify which server manages the small file repository where this particular file is kept. Thus, each small file should have a globally unique ID within a storage system embodiment.
[0170] Thus, small files within a storage system may be named according to the following exemplary scheme. A file name may contain the ID of the peer set that created the file. In one embodiment, other peer sets may take over management of the file, although this ID will not change for the life of the file. In another embodiment, only the peer set that created the file may manage it. A file name may also contain the number of the extent on disk at which it starts. In embodiments including this name component, the file must reside at a fixed location on disk, and cannot be moved (for example, to defragment the disk). A file name may contain the number of consecutive extents that it occupies on disk. In embodiments including this name component, the size of the file cannot grow beyond this number of extents. Such embodiments may store the actual number of bytes consumed by the file in a special portion of the physical disk, or in a storage metadata file. Also, a file name may include a generation number for the file, to ensure that two files using the same extent at different times can be distinguished from each other. A complete file name may incorporate any or all of this information, for example by concatenating it together. A complete file name may be embedded in a URL to allow direct access by a web browser or other application for retrieving small files.
[0171] An embodiment may deal with a large number of such small files, and may name them for convenience using an alphanumeric string, a hexadecimal string, or use another naming convention. Small files in an embodiment may be accessed using artificial paths. For example, a fictitious directory may be designated as an access point for small files. Such a directory may be named, e.g., /smallfiles. Thus, a request for a small file named XYZ, on a storage filesystem mounted on a storage client as /storage, might be accessed by a client application as /storage/smallfiles/XYZ. However, this file path may not correspond to an actual directory structure in the storage system; instead, an embodiment may interpret the path /smallfiles/CD3A to mean `access the 4 kilobytes of data starting at byte 0x0CD3A000 from the flat storage medium`. Alternatively, the embodiment could treat CD3A as an index into a table containing the beginning physical offsets of small files on the storage medium.
[0172] These small file optimizations may be combined in an embodiment with further optimizations. Any given disk drive has a maximum number of I/O operations per second it can accomplish. This number is basically independent of the amount of data being read or written to the drive. Since individual seeks to reposition the drive head count as independent operation and take up the most relevant portion of a drive's access time, having contiguous files is advantageous as they can be read with a single operation rather than via multiple seeks. Generally, most file system require first a seek to access the directory that references a file, then another one to access the file metadata, that tells where the file data can be found and finally a seek to access the data. This entails 3 operations for a single read.
[0173] If the file metadata is contiguous to the data and the location of the file is embedded within the file name, the first two operations are unneeded and the metadata+data can be read in with a single I/O op. This reduces the I/O count per drive by at least a factor of 3 and therefore allows drives to serve more requests. This is very important for very randomly accessed small files which, because of the randomness, cannot be cached. For such files (i.e., thumbnails, etc.) reducing the number of I/O operation reduces the number of drives a storage infrastructure needs to achieve a certain throughput. For example, a node may receive a request for metadata for a certain file. The storage metadata for that file could contain an indicator that this file is a small file, and also contain the small file's path, such as /smallfiles/CD3A. The node may then retrieve the file using this path from its local storage media, and return it with the storage metadata, or instead of the storage metadata. Referring to FIG. 7, steps 740 through 772 may be avoided by this optimization, decreasing response time and network bandwidth, and increasing performance. In another embodiment, the node may have logic for deciding whether to immediately return the small file or the storage metadata for the file. Such logic could be useful, for example, where small files change rapidly, and any given node may not be able to determine whether it contains the most recent version of a particular file.
[0174] In another embodiment, the small file optimization may be combined with the read-write lock avoidance functionality. Rather than creating a new generation number each time a given small file is written, as described above in connection with FIG. 13, an embodiment may simply assign a new name to the file. In this case, a node may update a bitmap of small files with the new extents to use, and mark the old extents for deletion.
[0175] An exemplary small file repository is described below.
Asynchronous Queuing
[0176] Embodiments of a storage system may include a highly scalable, system-wide, asynchronous, atomic queuing mechanism backed by a persistent store. From time to time, a storage system may execute resource-intensive operations. These operations include, for example, replicating file data, replicating storage metadata, and resolving file data differences to ensure data consistency. Executing such operations should not significantly reduce the performance of the storage system, by reducing either processing power, bandwidth, or storage available to a client. By placing such resource-intensive operations in a persistent queue, storage servers may advantageously fulfill these operations when sufficient processing capabilities become available. Thus, system performance will not be significantly degraded.
[0177] FIG. 14 is a schematic block diagram of the components comprising, and those communicating with, a queue in accordance with an embodiment of the invention. A queue is known in the prior art as a mechanism for processing data records in a First In, First Out (FIFO) manner. Exemplary queue 1410 contains a first record 1420, a second record 1430, and a third record 1440. A queue may contain no records, or any number of records, and the number of records in the queue may change over time as a storage system requires. Records may be taken from the head of the queue for processing, as indicated by arrow 1412, and added to the tail of the queue, as indicated by arrow 1414. Thus, first record 1420 was added to the queue before second record 1430, and second record 1430 was added to the queue before third record 1440.
[0178] A queue in accordance with an embodiment may allow any system component to enqueue a record, and may allow any system component to dequeue a record. In this way, the producer of a record may be decoupled from the record's consumer. In one embodiment, one node of a peer set manages queue operations for the peer set. This node could be the primary, or it could be the member of a particular color. This allocation is advantageous in that there may be several queue requests in a queue at any given time, and processing those requests may consume considerable system resources. Other embodiments may allow each node in a peer set to interact with the queue.
[0179] A queuing system may support the creation, maintenance, and deletion of more than one queue 1410. Each queue in a queuing system may have a name. In one embodiment, a name may be composed of file path name components. Such a naming scheme is advantageous in a storage system having tasks that are associated with paths, such as copying storage metadata or file data in a directory from one node to another node in a peer set. Other queuing system embodiments may use any consistent naming scheme for uniquely identifying queues, such as the POSIX ftok( ) function.
[0180] A queuing system may employ a system of leases. Data inconsistencies could result if a node took a task from a queue, such as a data migration task, and crashed before completion. Thus, queuing leases may be used to guarantee that tasks are completed before they are dequeued. In FIG. 14 the first record 1420 is leased 1422 to a first node running on server 1424, while third record 1440 is leased 1442 to a second node running on server 1444. As records in a queue are processed in FIFO order, this diagram is consistent with a third node (not shown) taking a lease on the second record 1430 before lease 1442 was granted, but failing to complete its task. Record 1430 thus remains in the queue for another node to process at a later time. A queuing lease may contain information such as an identification of the record and the leasing node, the time of the lease, and the lease duration.
[0181] A queuing system may have several capabilities. The system may allow a user to create a new queue having a given name. The system may also allow a user to flush, or empty, a queue of all of its old entries. Or, the system may allow a user to delete a queue entirely. Once an appropriate queue has been located, the system may allow a user to read a record in the queue non-destructively, optionally waiting for a period of time for a record to become available if the queue is empty. Or, the system may allow a user to make a queue record invisible to other users by taking out a lease, optionally waiting for a period of time for a record to become available. A record may become visible again for processing by other nodes if, for example, the lease expires. The system may also allow a user to adjust the length of a lease already taken. Such a function may be useful if processing the record is taking longer than the user expected. The system may allow a user to append a record to the end of a queue, optionally waiting until the record has been transferred to persistent storage.
[0182] Advantageously, queue records may be stored using the persistent storage providers of the storage system itself. In this way, records may be preserved in case some of the physical storage servers fail for any reason. Should this situation occur, the storage system may treat a queue record as any other type of data file, and schedule it to be copied to another node, as described below in connection with FIG. 15. In an embodiment, queue records pertaining to a particular peer set should not be stored by that peer set, in order to avoid the system losing queuing tasks related to that peer set in case of server failure. Queue records may be stored in a filesystem hierarchy separate from that of storage metadata and file data. Records may be named in any convenient fashion.
[0183] In one embodiment, records for a particular path are stored in append-only files. As records for that path are enqueued, the data for the records is appended to the file. A record file may include an index entry, containing information about the records located in that file. An index entry may include, for example, each record's name, offset within the file, time of creation, start time of lease, and length of lease. Records may be updated or deleted from a queue by appending a new index entry with updated information. Further, each directory may contain an index entry that keeps track of the offset of the index entries in the record files of the directory's children. When a new index is stored at the end of a record file, a new index entry may be added to the end of the parent directory's record file with this new information. As the offset of the parent file's index record has now changed, its own parent may be updated, and so on to the root of the hierarchy. In this manner, records may be removed from a queue without deleting any files or any data within any files. At some point, as a record file becomes large and filled with a proportion of stale data that exceeds a given percentage, the queuing system may create a new record file and update the parent record file to reflect the new record file's name.
[0184] An exemplary queuing system is described below.
Node Failures and Self-Healing
[0185] A storage node may fail to fully function for a number of reasons, including hardware failure, software failure, network outages, or power failure. When a failure occurs, peer sets may replace the failed nodes automatically, without the need for administrator intervention, if appropriate hardware is available. A storage system in accordance with embodiments of the invention may take four steps to recover from a storage node failure: detection, selection, replication, and replacement. As replication may be resource intensive, the asynchronous queue may be used to distribute load.
[0186] FIG. 15 is a schematic timing diagram showing relevant actions taken by, and messages passed between, peer set nodes and an asynchronous queue in accordance with an exemplary embodiment of the invention during repair of the loss of a secondary node. Before a failure can occur, the storage system must be stable. A storage system administrator starts the system at the top of the timing diagram. The queuing system first initializes 1510, which may include verifying the consistency of the queue data stored throughout the storage system. Each of the servers initializes 1512, a process which may include booting an operating system, verifying network connectivity, initializing storage software or hardware, and other routine tasks. The storage system forms peer sets, and three of the nodes join 1514 the same peer set. Joining a peer set may involve sending synchronization messages and health messages between the various peers. In particular, each peer may take a lease from one or more other peers, as described below. Once the nodes have established a stable peer set, they may begin 1516 servicing filesystem requests from a storage client, as represented by the timelines of heavier weight.
[0187] At some time later, one of the secondary nodes experiences 1520 a system failure. Detection of a node failure is a critical first step in the recovery process. A storage system incurs a substantial penalty for restructuring peer sets by adding and removing nodes. Any data stored only on that node's server is lost. All storage metadata and file data that was controlled by the node must eventually be replaced, using the configured file replication policy. Selection of a replacement node, data replication, and restoration of service can be expensive operations in terms of disk I/O, network traffic, and latency.
[0188] A storage system may distinguish transient failures from permanent failures using a system of health leases, similar to the system of queue leases. The lease period may be adjusted by a storage administrator to optimize the performance of the system, based on such criteria as the mean time between server failures, the number of servers in the system, average network latency, the required system response time, and other relevant factors. Or, the lease period may be determined automatically by the storage system, using information about the dynamic performance of the system such as current system load, actual network latency, and other relevant factors.
[0189] Each primary node of a peer set may request a lease of each of the secondary nodes for a period of time. In an exemplary embodiment, each secondary node requests a lease only of the primary node. In other embodiments, each node in a peer set may request a lease of all other nodes. When the lease time is one-half expired, each node may attempt to renew its lease or leases. If all is well, the lease will be renewed will before it expires. If a lease expires before it is renewed, a lease-holder may attempt to directly contact the lease-grantor, using standard network query tools, such as ping or traceroute, or software written specially for this purpose may be employed. Such software may be of simple design, and its implementation should be clear to one having skill in the art. If a number of connection retries are unsuccessful, the lease-holder may conclude that the lease-grantor is unreachable or inoperative, and complete the first step of healing, detection. The node may then proceed to the second step: selection of a replacement node.
[0190] A replacement node is selected in process 1522. This second of four steps aims to determine a suitable replacement for a lost node. A principal concern in this step is avoiding a particular race condition. Suppose that a primary node and a secondary node are unable to contact each other due to a network outage, but both nodes are otherwise fully operational. Each node will assume that the other node has failed, and wish to select a new node to replace it. If each node succeeds, the storage system will have two peer sets that each lay claim to a third operational node. However, this situation is unacceptable, as a node may participate in only one peer set. Thus, an arbitration system may be used.
[0191] In an exemplary embodiment, each peer set has a supervising peer set, assigned in a round-robin fashion, which acts as an arbitrator during node replacement. Peer set #0 supervises peer set #1, which in turn supervises peer set #2, and so on. The last peer set added to the system supervises peer set #0. When a node determines that another node is unresponsive, it may contact a supervising peer set for permission to replace the other node, as in 1522. The primary node of the supervising peer may determine a replacement node and respond, but it may respond only to the first request it receives. Thus, a supervising peer may respond to only one of the remaining nodes in the broken peer set. This node may then become the primary for the peer set.
[0192] In the exemplary embodiment above, if the requesting node is a secondary, then the other node was a primary, and a new primary is needed. In this case, the first node to contact the supervising peer set becomes the new primary node. (All secondary nodes should make the request, as they each hold an expired lease from the primary.) If the node making the request is a primary node, then the other node was a secondary, so the new node will be a secondary. (In the exemplary embodiment, only the primary node makes the request. In other embodiments, all nodes may make the request, and a secondary may beat the primary. In this case, the primary becomes secondary to the requestor.)
[0193] In this example a secondary node failed, so the original primary remains primary. Once permission is granted, the primary node may send 1524 the new node a join message. The spare node may then join 1526 the peer set. The spare node is not yet a fully functioning member of the peer set, as it contains none of the peer set data. Thus, the primary node may send 1528 a replication task to the queue, which is then enqueued 1529. The primary node of the peer set may also increment a generation counter to alert any client or server that its membership has changed. The node may now proceed to the third step: replication.
[0194] Replication proper begins when the primary node notifies 1530 a remaining secondary node to begin replication. Although the exemplary peer set contains three nodes, other embodiments may contain more nodes, and in such embodiments the primary may select a secondary to control replication by any appropriate criteria, such as computational load. The selected secondary node may then query 1532 the queue for an appropriate task to perform. There it will find the task enqueued by the primary, and may find other tasks as well. The secondary node may then lease 1534 the synchronization task from the queue, as described in connection with FIG. 14. A lease which is not long enough may expire before synchronization completes. Thus, the node may determine the length of the lease from the size of the task. Or, the node may take only a relatively short initial lease, and renew the lease each time renewal is required to avoid lapse.
[0195] Once the node has leased the task from the queue, it may begin to synchronize 1536 the storage metadata and file data on the joining node. Replication of storage metadata and replication of file data may proceed with slight differences. Each node in an exemplary embodiment may contain a complete, mirrored, metadata repository for files controlled by its peer set. This policy requires more space than would a less redundant policy, but is better for two reasons: first, storage metadata files are small, so the difference in storage requirements is minimal; and second, this policy enables faster rebuilding of the storage metadata on a new node. When building a joining node, the primary may thus direct a secondary to copy its own metadata repository (which should be complete and up-to-date) onto the new node. This kind of delegation advantageously balances load between the primary and secondary, reducing overall system response time. In an exemplary embodiment, migration of storage metadata between nodes in a peer set is immediate, not lazy, because the joining node should have a complete metadata repository.
[0196] Requests to update storage metadata, such as a file rename operation, may be received by a node while metadata migration is ongoing. Migration may be accomplished by traversing a metadata repository recursively. The traversal may be performed depth-first or breadth-first--the only requirement is that the copying node keeps track of which metadata it has processed and which it has not. If a request for a metadata change arrives, the copying node may check to see whether it has already copied this metadata to the joining node. If not, it may simply make the change to its own metadata--it will copy the updated metadata to the joining node eventually. If it has already copied the metadata, the copying node may send the change to the joining node so the latter node may update itself.
[0197] File data, by contrast, tends to be much larger than storage metadata--kilobytes or megabytes instead of bytes. For storage efficiency, file data may be stored on less than the full complement of servers participating in a peer set. File data replication is similar to storage metadata replication, but the copying node need not always copy the file data. Only file data that was stored on the unresponsive node may need to be duplicated onto the joining node. Thus, as the active node traverses its metadata tree, it may also check whether the storage metadata indicates that the file was stored on the lost node. If so, the copying node also copies the file data to the passive node. If the copying node does not have the file data, it may make a request to another node that does. If no other node has the file data, the data may be marked lost and further client storage requests for the data will fail. Thus, to ensure availability of file data in an exemplary embodiment, the data is stored on at least two nodes, or replicated across peer sets using redundancy techniques such as RAID.
[0198] If replication fails for any reason, the queue lease for the task will expire and the task may become visible again in the queue for later retrial. Also, if the failure occurs in the secondary node, the primary node may detect this condition through its system of health leases and join another node to the peer set. Assuming no failures, after some period of time replication will be complete, and the secondary node may send 1538 a completion message to the queue. This message may instruct the queue data structures to dequeue 1539 the completed task.
[0199] Once storage metadata and file data have been copied to the joining node, the peer set enters the final stage: replacement. Until this point, the joining node has not been responding to metadata change requests or file data access requests to avoid race conditions. Instead, the other nodes have been responding to such requests. When the joining node's metadata and file data are current, the secondary node may notify 1540 the primary that it has finished replication. The primary is then free to issue 1542 a start-up message to the joining node, which then may begin 1546 to provide filesystem services. Once activated, the joining node is a full member of the peer set, and replaces the lost node in all functions. In particular, the new node may take out one or more health leases with the primary node or any of the other nodes. The two original nodes may thus continue 1544 providing filesystem services, joined now by the third node to make a complete peer set.
[0200] To facilitate replacement, nodes within the system may keep track of a generation counter for their peer set. If a client requests a peer set using an out-of-date counter, the primary node in the peer set can send a current copy of the peer set membership information to the client. Alternatively, if a client receives a file operation response from a peer set with a newer counter, the client may request a new copy of the peer set membership.
[0201] FIG. 16A and FIG. 16B show the peer set of FIG. 11 during the failure of a secondary storage node and after the peer set has been healed by the process of FIG. 15, respectively. FIG. 16A shows the same servers 1110, 1112, and 1114 and peer set 1130 as in FIG. 11. However, node 2 (server 8) has suffered an outage, indicated by shading. Once the peer set detects a failure condition, a replacement server 1610, server number 3 in FIG. 16B, is selected. This `joining` server runs the new node 2. Any unused hard disk drive or storage volume in the new server may be controlled by the peer set. One of the old nodes copies storage metadata and file data to the new node. In an exemplary embodiment, a secondary node performs this process, to efficiently balance load between the nodes in the peer set. Once all of the data has been copied, the new node may begin responding to client file operation requests as a full member of the peer set 1130A. The new node takes the color of the node that was lost. In FIG. 16A a "green" node was lost, so in FIG. 16B the new node is colored "green."
[0202] FIG. 17A and FIG. 17B show the peer set of FIG. 11 during the failure of a primary storage node and after the peer set has been healed, respectively. FIG. 17A is similar to FIG. 16A except that now the primary node on server 6 (1114) is unresponsive, as indicated by shading. The process for replacing a primary node is similar to that in FIG. 16B, except that one of the other nodes may become the new primary using the selection process described above in connection with step 1522. In FIG. 17B, the old node 1, running on server 1110, has become the new primary. A new server 1710, server number 4 of the storage system, has been added. A "blue" node was lost in this example, so the node running on new server 1410 is designated a "blue" node, as indicated. This node joins newly constituted peer set 1130B.
Exemplary Storage Scenarios
[0203] Various operations that can be performed by the above-described storage systems are now described with reference to various exemplary storage scenarios based on the exemplary storage system shown in FIG. 4B. In these scenarios, storage provider 430 is the target storage provider for the file.
[0204] File Data Stored in Target Storage Provider. In this scenario, file data for the file is stored in the target storage provider. Upon receipt of a request from the FS client, the FS server 431 may return the file data or may return storage metadata listing the storage provider 430 as the storage provider that is responsible for the file data.
[0205] File Data Moved from Target to Provider 440.
[0206] In this scenario, file data for the file is moved from the target storage provider to the storage provider 440. The FS server 431 maintains storage metadata indicating that the file data is stored in storage provider 440. Upon receipt of a request from the FS client, the FS server 431 returns storage metadata indicating that the storage provider 440 stores the file data. The FS client then contacts FS server 441 in storage provider 440 to access the file data.
[0207] File Data Moved from Provider 440 to Provider 450.
[0208] In this scenario, file data for the file is moved from storage provider 440 to storage provider 450, specifically by making a copy of the file data in storage provider 450. The storage metadata maintained by FS server 431 is then updated to reflect that storage provider 450 is responsible for the file data. Upon receipt of a request from the FS client, the FS server 431 returns storage metadata indicating that storage provider 450 stores the file data. The FS client then contacts FS server 451 in storage provider 450 to access the file data. The copy of file data stored in storage provider 440 may be marked for deletion.
[0209] File Data Replicated in Multiple Storage Providers.
[0210] In this scenario, file data is replicated in multiple storage providers (e.g., in storage providers 430 and 440; in storage providers 440 and 450; in storage providers 430 and 450; or in storage providers 430, 440, and 450, e.g., for redundancy or load balancing). The storage metadata maintained by FS server 431 includes a list of all storage providers in which the file data is stored. Upon receipt of a request from the FS client, the FS server 431 may return storage metadata that lists one or more of the storage providers in which the file data is stored. If only one storage provider is listed, then the FS client contacts the listed storage provider to access the file data; if multiple storage providers are listed, then the FS client selects one of the storage providers (e.g., randomly or according to a predefined policy) and contacts the selected storage provider to access the file data.
[0211] Modifying File Data by Replacement.
[0212] stores the modified file data as a separate file marked as version 2. For subsequent requests, the FS server 441 provides access to file data version 2. The file data version 1 may be marked for deletion.
[0213] Modifying File Data by Appending.
[0214] appends file data to the existing file data and marks the file data as version 2. For subsequent requests, the FS server 441 provides access to file data version 2.
[0215] Adding Storage Providers.
[0216] As discussed above, storage providers may be added as desired or needed. The hashing scheme described above is expandable without requiring rehashing and re-storing data across the entire namespace.
II. Description of a Specific Embodiment
[0217] The following is a description of a specific embodiment that is referred to hereinafter as MaxiFS.
1 Introduction
[0218] MaxiFS is the name of a file storage infrastructure targeted to Web 2.0 companies. MaxiFS is designed for implementing a high performance, highly resilient, indefinitely scalable File System as a pure software solution on top of a single storage pool built out of commodity 1U servers, each containing its own storage devices. The characteristics of the 1U servers in an envisioned embodiment are as follows:
[0219] 1. Dual-core CPU.
[0220] 2. 4 GB of RAM.
[0221] 3. 4 SATA drives with the capacity of 750 GB each.
[0222] 4. Dual 1 Gb/s Ethernet NICs built into the motherboard.
[0223] Systems of this nature can be purchased with a cost of goods of about $3,000.
[0224] In an exemplary embodiment, each such server node runs FreeBSD 6.2 or later (e.g., FreeBSD 7.0) and deploys an UFS2 file system. The latter has very desirable characteristics, as it supports Soft Updates [1] that give the speed of asynchronous writes for system data structures, guaranteeing at the same time that the file system transitions occur from consistent state to consistent state. Therefore, in case of a crash, access to the file system can occur almost immediately after the system reboots and it should only be necessary to garbage collect orphan disk blocks in the background. All the communications between clients of the infrastructure and the server nodes, as well as those among server nodes, occur in terms of IP networking, whether they are simple storage-oriented requests or administrative queries or commands. The following discussion often uses the terms "client" and "sever." For this discussion, the term Server (Or Server Node) identifies any of the 1U servers that are part of the file storage array while the term Client refers to a client of the file storage infrastructure. In the target market where the systems are expected to be deployed, the clients are not web clients but rather the web servers or the application servers that the customer uses. The following attributes of the MaxiFS system are among those that allow for scalability:
[0225] 1. The servers that implement the infrastructure are loosely coupled, instead of being part of a clustered file system built around a Distributed Lock Manager.
[0226] 2. Each server added to the system expands it in three directions: amount of storage, processing power and aggregated network bandwidth.
[0227] 3. The MaxiFS software running on each of the infrastructure's clients interfaces with the infrastructure itself and interacts directly with the servers. This client component can aggregate as much bandwidth as it needs, by directly interacting with as many server nodes as is appropriate, and without additional devices in band between client and server.
[0228] Some key driving principles in the MaxiFS architecture are the following:
[0229] The system must be lightweight and the consistency scheme it supports is that of eventual consistency. This implies that it is not guaranteed that all the redundant versions of a given file are all identical, as long as: 1. All the copies will converge to an identical version in a finite and limited amount of time. 2. MaxiFS can always discriminate more up-to-date versions of a file from previous incarnations. 3. A client process will never be given access to inconsistent copies of the same file at the same time. 4. A file that is being accessed by a client in read mode, will always be available to the client until the client closes the file, even if that version of the file is replaced by a newer version.
[0230] As a result of server failures and crashes, inconsistencies may develop over time. However, the system is expected to be self-healing, by treating such inconsistencies gracefully (i.e., avoiding panics or crashes) and by logging and repairing them, as soon as it detects them.
[0231] MaxiFS implements the POSIX file system interface. Some APIs may be optimized with respect to others, in order to guarantee the best performance for applications targeting the market segment MaxiFS addresses, whereas other APIs are allowed to be inefficient, if deemed rarely used in the market segment of interest. It is also possible for APIs that are of extremely limited use to be implemented only partially, if at all when their implementation would cause a negative performance impact on the parts of the system that need to be optimized. In addition to that, the system must be self-healing. This implies that any inconsistencies detected as the system is running, should be promptly corrected by the system, without affecting the clients. The files clients create and access are stored in the file system of the individual server nodes and are replicated according to policies the customer sets up.
2 The Network Infrastructure
[0232] Although MaxiFS is designed to provide scalability and availability, proper network wiring is a prerequisite to fully achieve these capabilities. Ideally, MaxiFS would be built within its own subnet. In this subnet the two NIC interfaces available within each of the server nodes should be connected to separate switches. This increases the redundancy for each node, regardless of whether a switch or some cabling might be disrupted.
[0233] Clearly, when the switch structure is hierarchical, it would always be desirable to make sure that the NICs in the same node are attached to independent branches of the tree. The existence of two NICs in the server nodes should possibly lead to trunking them up for maximum availability. This may be in conflict with having the NICs attached to different switches. However, since the network structure for MaxiFS is part of the MaxiFS setup, appropriate detailed instructions should be provided to make sure the highest achievable levels of availability compatible with the network infrastructure are achieved.
3 The Structure of the MaxiFS Name Space
[0234] This section of the document describes the structure of the namespace MaxiFS offers to its clients, as well as the way this abstraction is implemented across multiple server nodes. The MaxiFS infrastructure creates a global namespace distributed across all the servers that compose the infrastructure. This namespace has a global root. The MaxiFS clients use the MaxiFS software to "mount" the root directory (or directories) of the trees of interest in the MaxiFS namespace.
[0235] The mount operation is a key operation in that it accomplishes the following: It establishes the connection between a client and the MaxiFS infrastructure. Note that this is done by using the name assigned to the infrastructure, so that it is possible for the same client to access multiple MaxiFS infrastructures and the associated namespaces. It also fetches all the relevant information the client needs to operate within the infrastructure. This way the client learns where to address the requests for files stored within the infrastructure.
[0236] Users of the infrastructure need not be restricted to exporting only the global root. They should have the flexibility to export whatever subtree of the name space they want to export. Essentially the only constraint MaxiFS imposes in this regard is that any MaxiFS client should not mount locally any two exported directories, when one of them is an ancestor of the other (i.e., if the intersection of the two trees is not null).
[0237] FIG. 18 shows an example in which there are two clients, a MaxiFS infrastructure and an NFS server. The MaxiFS infrastructure exports directories "dirx" and "a" to its clients. NFS server "Z" exports directory "z0" to its clients.
[0238] FIG. 19 shows what happens when Client 1 "mounts" directory "dirx" and directory "a" exported by the MaxiFS infrastructure to its own directories "/d1" and "/d2", respectively. The directories "/d1" and "/d2" are known as "client mount points". After the mount operation, Client 1 sees the entire original file system hierarchy under the exported "dirx" logically accessible as the content of directory "/d1". Likewise, the hierarchy underneath exported directory "a" appears beneath "/d2". Therefore, the pathname "/d1/dirz/fileb" refers to the "fileb" in the MaxiFS infrastructure, in a totally transparent fashion. Similar considerations hold for file "/d2/b/d".
[0239] In FIG. 19, Client 2 mounts the exported "a" directory from the MaxiScale infrastructure, along with exported directory "z0" from the NFS server "Z", under its own directories "/W" and "/T", respectively. The result of the mounts in this case is that "/W/b/d" within Client 2's file system refers to the same file as "/d2/b/d" for Client 1, while "/T/z2/z3" refers to file "z3" on the NFS server.
[0240] Note the following: Clients can selectively mount only the directories they want to access, as long as they do not overlap in the global name space. The ability to mount directories exported by MaxiFS does not preclude access to other file servers installed before MaxiFS, such as the NFS server "Z", in this example. The mount operations performed with respect to MaxiFS and the NFS server are carried out through different software modules the clients run.
[0241] From the point of view of an application running on one of the clients, once the appropriate mounts have been performed, access to files in the MaxiFS infrastructure, rather than on an NFS server, is absolutely transparent. In other words, the application need not be written in a special way, nor does it require the invocation of special APIs. It continues to access the remote files through the file system, as it would in NFS. The appropriate MaxiFS software layers to be used to access MaxiFS are automatically involved every time the pathname the application specifies is beyond the client mount point associated with a directory exported by MaxiFS, much as this happens for NFS exported directories.
[0242] Whereas in the case of an NFS server, clients know how to interact with the server to mount its exported directories, in the case of a distributed infrastructure like MaxiFS, it is harder to see how a client would go about requesting exported directories to be mounted. To simplify the picture, assume for the time being that all the servers in the MaxiFS pool have 100% availability. This is clearly untrue, but the constraint will be removed in the further discussion.
[0243] The following describes a solution chosen to distribute the namespace across the server nodes, using the name space in FIG. 20 to illustrate the proposed scheme. MaxiFS distributes the file system hierarchy across the server nodes by hashing directory pathnames. This could be done by hashing pathnames below a client's mount point to a particular server, which would store the corresponding file system object. Such a scheme has the benefit that the resolution of a pathname to a server name can occur in constant time regardless of the number of servers that implement the namespace and of the depth of the pathname. A disadvantage is that any rename of an intermediate directory in a pathname would produce a different hash value, would imply the need to rehash all the children (direct or indirect) and to move them to the locations associated with the new hash codes. Thus it would be an extremely disruptive operation, involving a large amount of network traffic.
[0244] It is interesting to consider the fact that in Amazon S3 (Amazon S3 targets a market segment similar to the one addressed by MaxiFS, although its functionality is available in the form of a service, rather than as a product), objects are completely immutable (even in terms of name) and their hierarchy is constrained to two levels. This completely circumvents the problem for a hashing scheme. Something similar occurs for GoogleFS, where files are identified by immutable numeric IDs for the same reasons. It is a fact that in the particular market sector MaxiFS targets, efficient handling of rename operations is not a requirement. Nevertheless, even if this is the case, given that MaxiFS supports the POSIX semantics, it is at least desirable that a rename operation be non-disruptive for the entire system. Therefore, a hashing scheme should have the following characteristics:
[0245] 1. It should distribute files and directories uniformly across all the servers.
[0246] 2. When a directory is renamed, it should avoid the need for all files and directories that are direct or indirect children of the directory being renamed to be moved to new locations, on the basis of rehashed pathnames, as this would suddenly cause major bursts of system activity and would totally disrupt the system performance clients perceive.
[0247] 3. It should avoid rehashing and moving entire file system trees when the number of servers in the system changes.
[0248] Item 1 above can be dealt with by relying on a suitable choice of a hashing algorithm and should be fairly straightforward. Item 2 is harder to fix, when the pathname of a file or directory is used to generate a hash. Item 3 is also hard to tackle in the context of a hashing scheme. Given a hash table in which each hash bucket is mapped to a peer set, once hashes are computed, the server node to be used for each file or directory is fixed. If the number of nodes changes (and the size of the hash table changes accordingly) the mappings between files/directories and nodes change as well. As for item 2 above, this would require files and directories to be all moved to implement the new mappings. The two subsections that follow tackle the latter two problems.
3.1 Hashing and Renames
[0249] This section deals with item 2 above. The problem to solve consists of finding a way to avoid the redistribution of files and directories mapped to server nodes when their pathnames change. A few issues to be considered are:
[0250] a) The first concern is that of avoiding the need to relocate lots of files, because this would absorb most of the bandwidth and computing resources of the server nodes for a purpose that strictly relates to internal MaxiFS bookkeeping and would be perceived by the customer as having little to do with performing useful work. Therefore, all this work preferably should be eliminated. The most destructive case to be considered is the one in which a top level directory name changes. This would affect the entire file system hierarchy beneath it. This means that lower parts of the hierarchy should, as much as possible, not depend on the pathname of their parent directory.
[0251] b) It is desirable that whatever scheme is used, the enumeration of a directory should not be an extremely expensive operation. A pure hashing scheme based on pathnames would make directory enumeration extremely inefficient.
[0252] c) Having to move a file just because its name changes is, once again, very undesirable. Although renaming files and directories is not going to be an extremely common activity, it is necessary to make sure that relatively more common actions should have less impact than more unlikely ones. So, since the rename of a file is more likely than a directory rename, this case should be optimized with respect to a directory rename.
[0253] If the hashing, instead of being performed on the entire pathname, is performed just on the name of the file or directory, the hash value obtained would be independent of the rest of the pathname. This makes file system objects distributed across the server nodes insensitive to what happens as a consequence of renames of their parent or ancestor directories and would eliminate the main concern (item a above).
[0254] However, this would create problems with item b. Files that would be otherwise contained in a single directory would be scattered all over the distributed system. This would make a directory enumeration an extremely inefficient operation.
[0255] It would also create problems with item c because renaming a file would likely cause it to be moved elsewhere. A better alternative relies on hashing only the names (not the entire pathnames) of directories. This would mean that all the files that clients see as children of the same directory, would also be stored within a single directory on the same server where the directory resides.
[0256] The implications are the following: The enumeration of a directory would still be efficient because each directory would still contain all of its children. This solves any issues with item b. Since any time the name of a file is changed, this only amount to a name change within the same directory, this also solves any problem with item c.
[0257] A consequence of this approach is that since directories are always placed on the basis of their hash code, a subdirectory is generally not stored with the directory that is its parent in the global name space: it is normally allocated elsewhere (even when it is stored on the same server node). Yet, in order to continue to satisfy item b, at least a placeholder for the directory within its parent would have to exist. This placeholder (that would have the name of the directory it represents) would point to the location where the actual directory is placed.
[0258] For the time being, we ignore the hash function to be used and the way the hashing produces a mapping to a server. This will be discussed in more detail in the following subsection. We then examine this scheme in more detail.
[0259] A further consideration has to do with how the directories whose hash code is mapped to a given server should be stored within that server. It is certainly neither convenient, nor possible to simply store all the hashed directories within the same parent directory. The reason for this is two-fold: this would create humongous directories, with an extremely high number of subdirectories and this would have an impact on the speed of access to any child directory, and the likelihood of name collisions would increase.
[0260] Because of the above, one could think to proceed in a different way: each directory hashed to a given server would be stored within a subdirectory whose name is based on the hashing of the entire pathname of the parent (In reality the hashing would generate a number that can be represented as a hexadecimal string. The latter, instead of being used as a directory name, could be broken down into fixed length segments that would constitute the actual directory hierarchy to go through to reach the target directory. This approach, if implemented on two or more levels, significantly reduces the number of items in the parent directory.). This allows better partitioning of the namespace. This has the implication that the hashed directory is not completely free from the hierarchy it belongs to and therefore renames of intermediate directories in a pathname still have some impact. However, in this case, when the rename of an intermediate directory occurs, directories need not be moved from one server to another one because the server where they reside is only determined by the directory's name.
[0261] However, all the (direct or indirect) children of the directory being renamed must end up in a different directory on the same server, on the basis of hash code for the new pathname. This requires a recursive scan of all the children of the directory being renamed. Special care must be used to make sure that the overall client view of the directory being renamed and of all its children remains consistent while this operation is in progress.
[0262] The renaming of the directories proposed above is clearly not as disruptive as a relocation of entire directories across the distributed system. Nevertheless, it may cause a couple of negative side effects. Depending on the structure of the namespace, the necessary readjustments might still require a significant amount of time, as they entail recursively scanning the entire subtree of the directory being renamed so that the hashes of the directory pathnames can be updated. This adjustment is local to each server, in other words, it only involves the renaming of directories within the same server, but not the moving of files. Nevertheless the activity potentially affects all of the servers and may have to be performed sequentially. And while the rehashing and renaming takes place, client requests involving pathnames that contain the directory being renamed have to be deferred until the adjustment is complete.
[0263] In this scheme one problem has been properly addressed so far: two directories with the same name and different pathnames hash to the same value and therefore to the same server. Hence both directories should appear in the same parent directory on the server. This is impossible to do because the directory names are identical. A strategy to handle such name collisions needs to be devised.
[0264] Possible collision handling strategies could consist of creating a single directory with the colliding name, prefixed by a character that would be unlikely as the first character in the name of a directory, such as a blank (This "special" character should be properly handled, by allowing an escape sequence to be used in the unlikely case that a user names a directory using the special character in its first position.). At this point this "collision directory" would contain the colliding directories that would be stored with a different name and with additional information that allows discriminating between them (for example, storing the number of components and the string length of the pathname). However, as discussed below, even this scheme does not fully solve the problem. The real issue depends on the fact that the name collision strategy chosen needs to cope with the following constraints:
[0265] 1. As stated earlier, when a client tries to access a file or a directory, the only piece of information it provides to the system is the pathname of the object it intends to act upon.
[0266] 2. To disambiguate between two file system objects within the namespace on the basis of information coming from the client, the only possibility is using the absolute pathnames of the file system objects.
[0267] 3. It is desirable for the hashing scheme to hash as little as the pathname as possible, because this restricts the scope of a readjustment of hashes after a directory rename.
[0268] Since the hashing entails just a directory name, the name collisions would potentially increase with respect to the case in which the hash applies to larger portions of a pathname. Therefore, each directory should store somewhere its absolute pathname to handle name collisions. This makes the hashing of just a portion of the pathname not very advantageous because, even if the readjustment would involve only the direct children of the directory being renamed, the pathnames stored with all the direct and indirect children of the directory being renamed would have to be updated. So, we would back to the initial hashing strategy and to its drawbacks.
[0269] Because the only effective way to disambiguate file system objects through client-provided information is through absolute pathnames, it is possible to envision a variant of the scheme described so far in which directories are still hashed to server nodes on the basis of their name and in which the actual pathname within the server node where the directory is stored is the absolute pathname of the directory.
[0270] The scheme still retains the property that a directory rename only causes the renamed directory and the files in it (as it will be explained, the move of the files is not as disruptive as it may sound, because the files to be moved are metadata files, generally much smaller than regular files) to be moved around, without affecting its child directories. Therefore a directory rename is not disruptive for the whole system.
[0271] There are no longer name collisions because within each server, the directories are reachable through their absolute pathnames in the hierarchy. So there is no need for a separate repository for pathnames for each directory to deal with name collisions.
[0272] A directory rename requires at most a single directory on each server to be renamed, to reflect the new hierarchy and this can be done locally within each server and in parallel across all the servers, thus minimizing the time interval over which operations under that directory need to be deferred.
[0273] However, all servers must be informed of a directory rename and many of them may have to perform the rename, depending on the relative position of the directory in namespace.
[0274] A significant part of the namespace is replicated in all the servers. Although files are stored only in the node where the directory is hashed, directories have replicas.
[0275] When a pathname branch that does not exist in a server needs to be created, this may entail the creation of a number of intermediate placeholder directories.
[0276] The access to a directory within a server node may no longer be an operation that involves a very short local hierarchy, depending on the position of the directory in the hierarchy.
[0277] Nevertheless, this last scheme satisfies all the requirements. Its most negative aspects are the first two in the list above. However, since the rename to all the servers can be performed in parallel across all of them, the time disruption can be kept to a minimum. This has to be coordinated by a single entity (the most likely candidate for this is the node where the directory is hashed).
[0278] The propagation of a directory rename needs be neither instantaneous, nor atomic across all the peer sets. In practice, if a file needs to be accessed within the directory that is the object of the rename, only the owner peer set needs to deal with such a request. That peer set is aware of the rename and can operate consistently. Any other pathname operation in the subtree below the directory being renamed and hosted on other peer sets can be safely performed whether the directory had the old or the new name. If an operation is requested to a peer set that has not received the change, everything behaves as the latter request had been performed before the rename was issued, otherwise, the requested operation would occur as if the rename had been received before the new request. The propagation of the change to the other peer sets is handled as follows:
[0279] 1. The peer set to which the original rename is requested performs the rename.
[0280] 2. When the rename is completed, the peer set that is now hosting the directory sends a rename request to all the peer sets that host the directories immediately below the directory originally being renamed.
[0281] 3. This is performed recursively for all the directories below.
[0282] This has some positive attributes. The change propagates with the parallelism implied by the average fan-out of the directory being originally renamed and would insure a fairly rapid propagation because this would happen with a speed proportional to the logarithm of the average number of subdirectories per directory. Also, this would also insure that a directory would be notified only if its parent is already aware of the change.
[0283] Another aspect (the partial replication of the namespace) has one main implication in the storage space that would be "wasted" in doing this. However, replicating a directory means using one i-node per directory and a variable number of disk block that depends on the size of the directory. Since the "placeholder" directories do not have to store files, but only other directory names, the amount of storage used is likely to be a small portion of the storage available. Moreover, each node shares its storage space in a volume between user data files and the structure of the namespace. The former can be reduced by migrating user data files. The latter can be reduced by increasing the number of nodes that are part of the system and by migrating directories according to new hashes. To clarify the mechanism, it is worthwhile to go through an example.
[0284] FIG. 21 shows how the hierarchy in FIG. 20 can be distributed across three server nodes ("X", "Y" and "Z") using the latest scheme described. In order to understand the figure, the following should be kept in mind: The thick arrows labeled "hash to" indicate that a hash function is applied to the names of the directories listed above them and that this maps the names to the specified servers. The thick, broken ellipses include the hierarchy each server implements. Note that this hierarchy is similar to the hierarchy clients see (FIG. 20), although some items in it are missing. The underlined names (i.e., in Node X, "docs," "docs," and "java" are underlined; in Node Y, "a," "powerpoints" and "perl" are underlined; in Node Z, "papers" and "source" are underlined) are those of the directories stored within their host servers. The names shown with an italic font (i.e., in Node X, "a," "powerpoints," "papers," and "source" are in italic font; in Node Y, "docs" and "source" are in italic font; in Node Z, "a," "docs," "docs," "perl," and "java" are in italic font) are directory placeholders (these are real directories in each server, but their role is that of placeholders for the actual directories, to allow the underlying portions of the file system to be stored with their pathnames.). They never contain files because the files are stored in the copy of the directory kept on the node the directory hashes to. As such, they can be seen as references to the real directories they represent. These references are shown as broken arrows that are labeled with the name of and point to their target directories.
[0285] Assume a client has mounted MaxiFS at the mount point "/mnt/shared" and requests the opening of file "/mnt/shared/a/source/java/y.java". The sequence of steps to be performed is the following:
[0286] 1. First of all, the MaxiFS module running in the client performing the request would be requested to perform an open with the pathname beyond the mount point, in this case: "a/source/java/y.java".
[0287] 2. The first thing the client module should do is hashing the name of the parent directory for the leaf node in the pathname. This would be: h("java"). Assume that (according to the figure), this produces a mapping to server node X.
[0288] 3. The next step for the client module is to talk to node X, asking for access to "/a/source/java/y.java". The server node would then perform the local file system tree traversal to get to "/a/source/java" and the subsequent lookup and open of "y.java".
[0289] This exemplifies how the scheme shown here allows fast access to files by avoiding multiple network hops or lookups.
[0290] Also look at a case in which a client requests a directory to be renamed. Assume that the client requests the rename of "a/docs/powerpoints" into "a/docs/presentations" and that whereas "powerpoints" hashes to Node Y, "presentations" hashes to Node Z. The sequence of steps to be performed would be the following:
[0291] 1. The MaxiFS module running in the client performing the request would issue the request: "rename("a/docs/powerpoints", "a/docs/presentations")".
[0292] 2. The client would then hash the source directory to its target node Y.
[0293] 3. The client then would request Node Y to perform the rename (and relocation) to Node Z.
[0294] 4. Node Y would relocate the directory and the underlying files to Z and would issue a parallel request for all the nodes to update the name of the directory.
[0295] 5. At the end of this, the client request would be acknowledged.
[0296] The resulting state of the file system is then the one shown in FIG. 22 (in Node X, "docs," "docs," and "java" are underlined while "a," "presentations," "papers," and "source" are in italic font; in Node Y, "a" and "perl" are underlined while "docs," "presentations," and "source" are in italic font; in Node Z, "presentations," "papers," and "source" are underlined while "a," "docs," "docs," "perl," and "java" are in italic font). In principle the directory placeholders "docs" and "presentations" are no longer needed. However, since they are already in place, they do no harm and can simplify the creation of additional branches under them if that is needed sometime later. Also note that the files previously under "powerpoints" are now under "presentations" on node Z.
[0297] One thing that needs to be emphasized is the fact that the relocation of a directory and of the underlying files per se should not require a large amount of bandwidth because, as will be seen in the following, the files are not the real data files but small metadata files that point to them.
[0298] Note that in case a client requested that a given directory be opened, as in the case of a directory enumeration, the client should hash the directory name, rather than that of its parent. For example when "a/source/java" is opened, "java", rather than "source" should be hashed. However, for a directory like "java" that appears as the leaf of the requested pathname, this would be a two-step process. In this case, the parent directory would be hashed and the client would access the appropriate server node to open it. The server, knowing that the item being opened is a directory, would know that the server to be used would be the one where "java" resides and would return an error indication to the client that would cause the latter to repeat the previous step using the proper node. The extra access is undesirable. Yet, compared to an NFS access that requires a round-trip interaction for every component of the pathname, this way of operating is by far more streamlined and efficient.
3.2 Hashing and Dynamic Scaling
[0299] This section deals with item 3 above and is meant to add more details on the hashing scheme to be used. The scheme is straightforward and can be described as follows: Given M server nodes, a hash table is constructed with a number of entries T , M<=T. Each of the table entries stores a pointer to the server node associated with that entry. A suitable function is chosen to provide a uniform distribution of hash values over the file names. If such function is f( ), then the hash value for string s will be computed as: h(s)=f(s) mod T. The computed value h(s) will be used as the index of the hash table entry that points to the server to be used for string s.
[0300] The difficulty with this approach is that, in a system like MaxiFS, the number of servers can and should grow dynamically. So, if the number of servers grows beyond T, a new, larger table must be created and its entries must be initialized again to point to server nodes. However, in general, this might require all the directories to be moved on the basis of their new hash values, which is considered to be unacceptable for MaxiFS.
[0301] Thus, in MaxiFS, a dynamically scalable hashing scheme is used to get around this limitation. Assume that T is constrained to be a power of 2. Also assume that h is the hash value obtained for a given file or directory name. In general, any such number can be expressed as: h=q2n+r. Hence:
h mod 2n=(q2n+r)mod 2n=r
[0302] It can be shown that there is a consistent relationship between the value of h mod 2n and the value of h mod 2n+1. There are two cases to be considered: one for an even value of q and another one for an odd value. For q even:
h mod 2 n + 1 = ( q 2 n + r ) mod 2 n + 1 = ( q / 2 2 n + 1 + r ) mod 2 n + 1 = ( q / 2 2 n + 1 ) mod 2 n + 1 + r mod 2 n + 1 = r mod 2 n + 1 = r ##EQU00001## h mod 2 n + 1 = r , for q even ##EQU00001.2##
[0303] For q odd:
h mod 2 n + 1 = ( q 2 n + r ) mod 2 n + 1 = ( ( q - 1 ) 2 n + 2 n + r ) mod 2 n + 1 = = ( ( q - 1 ) / 2 2 n + 1 + 2 n + r ) mod 2 n + 1 = ( ( q - 1 ) / 2 2 n + 1 ) mod 2 n + 1 + ( 2 n + r ) mod 2 n + 1 = = ( 2 n + r ) mod 2 n + 1 = 2 n + r ##EQU00002## h mod 2 n + 1 = 2 n + r , for q odd ##EQU00002.2##
[0304] Therefore:
h mod 2n+1=h mod 2n(for q even)
h mod 2n+1+2n (for q odd)
[0305] Using these relationships, the hash table can be dynamically expanded by doubling the size of the hash table and copying the first half of the hash table to the newly created second half of the table (assuming the size of the hash table is a power of 2 and that the hash table is expanded by doubling its size).
[0306] Therefore, assuming that one starts out with 3 servers and a hash table with 4 entries, the situation could be depicted as in FIG. 23 (Phase I). Note that since there are 3 servers and 4 slots, the last slot points to Server A, just as the first slot.
[0307] If we imagine that we need to increase the number of servers to 5, the original hash table would no longer be adequate. So, the next possible size for the hash table is 8. To create a situation that does not change anything with respect to the original mapping, the second half of the expanded table should have the same content as the first half (see Phase II in FIG. 23). Note that Server A now appears in 4 of the table slots, whereas the other servers appear only twice.
[0308] The following step is that of including the new servers (D and E) into the picture. This can happen by replacing them in slots 4 and 7 with these new servers (see Phase III in FIG. 23). However, this cannot stop at this point, otherwise all the names that were hashed to slots 4 and 7 would no longer be found.
[0309] So, whereas Phase II is totally benign, in that it has no unwanted side effects, Phase III must be completed by other actions to still map the same namespace.
[0310] The additional actions to be performed include migrating all the directories previously on Server A that were mapped to entry number 4 of the table to server D. Likewise, all the directories on Server A whose names were mapped to entry 7, would have to be moved to Server E. The algorithm to be followed would amount to processing each of the directories on Server A, checking their hash value, so as to verify which slot of the hash table it would point to. Whenever slots 4 or 7 would be the target entries, the corresponding directory would have to migrate to the proper server. Since it would be highly impractical to suspend operations while all the directories are being migrated, both the old and the new server are stored in the slot being affected. This way, during the migration any access would look at the new server first and would then resort to the old one in the cases when the target is not found.
[0311] The updates to the hash tables would have to be propagated across the entire infrastructure because each client of the infrastructure needs one such table. By allowing the table entry to co-host both the old and the new server within the table slot being changed, clients would have the option to look up the item in which they are interested in both locations before concluding that the item does not exist. This reduces the time it takes to replace a table entry with respect to the case in which case one had to wait for an update of the entire infrastructure before allowing new requests to go through. When such an update is needed, the infrastructure should be aware of it. However, the nodes that must be aware first are the node being replaced and the replacing node. This way, the first time a client tries to access the old node, as the migration is occurring, or after it has occurred, the client is told to replace its table with the new one that co-hosts the old and the new node in the affected slot. For this reason it is useful to add a generation number for the table being used. The client will store the generation number of the table in all of its requests, so when one of the two servers involved in the update is accessed, it will notice that the table is not up to date and will tell the client to use the new table. A further increase in the generation number is needed when the migration is complete. This will replace the two co-hosted entries in the slot being modified with the ID of the new server. The system will take care of serializing such changes so that only a change at a time will be allowed. This does not mean that a change should only involve a single slot. However, independent changes will be serialized by blocking a new one, until the previous one is complete. In any case, there is no need to update the table of a client until the time when it tries to access one of the servers corresponding to slots that have been changed. Moreover, it is not necessary for all the clients to receive all of the updates since it is sufficient for them to be updated with the latest version in a lazy fashion, even skipping intermediate ones, as long as they have no need to access entries that have been changed. To optimize the table sharing by minimizing the amount of information exchanged, it may even be desirable to have all the servers and all the clients share a common algorithm and to push only the minimal information necessary to the clients to locally update their table.
[0312] If the number of hash buckets in a table is much larger than the number of servers in the system, this data structure lends itself to a very elegant way to balance the computational/network load and capacity across servers. As shown in FIG. 23, several hash buckets within the same table may reference the same server. If the number of such buckets is much larger than the number of servers, each server will appear in many buckets and only a relatively small subset of directories a given server owns will be hashed to a given bucket. This allows the system to monitor the number of references to each such bucket. The total count per server can also be computed as the sum of the counts of the buckets that are associated to each server, so that the servers that are referenced most often can be spotted very easily. Once this is done, it is possible to look at the individual buckets for the servers that are heavily loaded and it is possible to decide to move directories associated to a given bucket to servers less loaded, having the bucket point to a less loaded server. This achieves the purpose.
[0313] Note the following: The use of the "MapReduce" distributed algorithm [6] that can compute the most heavily used servers is beneficial, as it performs the computation in a distributed fashion. The system should make sure that the move of directories has some hysteresis, so that MaxiFS does not waste cycles continuously moving directories back and forth. The actual move of directories should never affect the count of the most used servers, otherwise all the statistics would be inaccurate.
[0314] So far hash tables are assumed to have a number of entries that is a power of 2. Constraining a hash table's size to powers of 2 is considered to be suboptimal. It is a well known fact that when a hash table contains a number of buckets that is a prime number and the hash value is computed modulo that number, this produces the best distribution among the slots. Nevertheless, it must be kept in mind that unlike normal hash tables, the hash tables used to implement the distributed namespace do not contain pointers to a linked list of colliding items. They contain references to servers. As explained, it is convenient for the number of servers in use to be much smaller than the size of a table; therefore, as in the case of FIG. 23, some servers would appear in the table more than once. By replacing items in the table, when necessary, through some appropriate algorithm, the suboptimal distribution of items through the table induced by the table size would be counterbalanced.
[0315] The scheme described so far is quite flexible. However, in its present form, it does not allow directories mapping to the same hash bucket to be distributed across server nodes. Also, cases in which the storage space in a given server node is exhausted can only be dealt with by trying to change the content of individual table entries, so that they can map different servers. However, since a mechanism already exists to handle transitions from a server to another one as directories are migrated and this consists of allowing clients to access both the server a directory is being moved away from and the server that is the target of the move, the same mechanism could be used in case of storage overflow. In other words, if directory X currently on server A cannot host any more files, a backup server B can be designated so that one or more directories can be moved to B without having to move all the directories that would hash to a given table entry. In any case, directories are never allowed to be split across different servers. They are entirely on one server or on another one.
[0316] This way, if a client is unable to access a directory that should be on A through the hash bucket to which the directory hashes (such hash bucket would now list both the primary server A and the backup server B), it could always look up the directory not found on server B. This works well only if the backup servers are used for extreme cases in which little else is available until the infrastructure is expanded by adding more server nodes. Otherwise, the impact on performance could become noticeable. Nevertheless, even an impact on performance resulting in graceful degradation is much more desirable than outright outages.
3.3 Servers and Volumes
[0317] Server nodes in MaxiFS have 4 physical drives available (see above). It would be possible to aggregate them together into a single logical volume via RAID-5. This has a couple of positive aspects: The boundary between physical volumes is removed, which allows using the logical volume obtained this way as a single storage pool. The logical volume has built-in redundancy and is resilient to the loss of one disk drive.
[0318] On the other hand, it also has some disadvantages: The redundancy needed for the RAID-5 set effectively removes 1/4 of the total storage available. The loss of two drives would make the entire server unavailable, whereas if the volumes were managed individually, only the loss of four drives would make the server completely unavailable.
[0319] Note that the redundancy internal to one server obtained via RAID-5 would not eliminate the need for redundancy across servers because if the CPU, the power supply or any other single point of failure ceases functioning, the data stored on the redundant logical drive is not accessible anyhow. Therefore it is more convenient for MaxiFS to make use of the individual drives, rather than of a single RAID-5 drive.
3.4 Redundancy in MaxiFS
[0320] The previous sections only describe how the MaxiFS namespace is structured, and provide a logical view of how the data can be accessed.
[0321] One important fact about the expected access patterns to MaxiFS is that all files are handled as essentially immutable (the single exception is that of files used as logs that cannot be modified, except by appending new records). In other words, a file can be created and written to. However, when a file exists, it will never be partially updated. It will be either deleted or replaced completely. This is the way Web 2.0 applications work and the limitation greatly simplifies the complexity of MaxiFS. The previous sections rest on the idea that the server nodes are 100% available. This is clearly not the case. The following explains how redundancy is factored into the picture. MaxiFS is a distributed file system built by aggregating the local file systems of multiple servers. In principle, once it is possible to distribute the namespace across multiple nodes the way that has been described in the previous section, it could be possible to have the file themselves contain the user data. However, the problem MaxiFS solves is that of building availability and scalability through redundancy and of doing so with a level of redundancy that can be set depending on the nature of the file, of the frequency with which it is accessed, and so on. This makes it impossible for a file to exist in a single location and MaxiFS has to make sure that the loss of even multiple nodes would not bring the system to a grinding halt. This is even more important as the individual MaxiFS nodes are low cost, commodity servers, with no intrinsic hardware redundancy.
[0322] So, MaxiFS must necessarily rely on additional data structures that describe where redundant copies of a file are kept. In normal file systems the data structures needed to support the file abstraction are file system metadata. In MaxiFS, it is necessary to store MaxiFS metadata in addition to the metadata of the native file system (the latter is the responsibility of the file system local to each node). Because MaxiFS is built on top of the local file system, this metadata can only be kept in a file (There is actually a slightly better approach that will be described ahead. However, this does not change the essence of the present discussion).
[0323] This means that two options arise: The metadata could be stored with the file itself, in a special MaxiFS area adjacent to the user data. The metadata could be stored in a file that points to the actual file(s) where the user data is stored. Therefore, the client view of a file stored within MaxiFS is different from reality, in that the file containing the data, when multiple mirrors exist, must also contain "pointers" to the locations where the additional mirrors are kept.
[0324] All this is realized by means of the Remote File Access Service, active on each server node. Its purpose is two-fold: It supports the ability to read or write the user data. It also identifies where, in the distributed infrastructure, a file or directory resides, allowing a client to access it. The service makes use of the local file system hierarchy on each server, in order to implement the MaxiFS hierarchy (as explained in "The Structure of the MaxiFS Name Space"). This means that any directory visible to clients is a directory that exists as such in the hierarchy of a local file system on at least one server. Any user-visible file is represented by a metadata file with the same name that contains metadata of use to MaxiFS (this includes the locations of the data files the metadata file is associated with and other relevant attributes) along with (in most cases) file data.
[0325] So, in MaxiFS the individual client-perceived directories contain files with the client-perceived names. These files certainly contain MaxiFS metadata (pointers to where the copy or copies of the user data is stored and more). To achieve the appropriate levels of availability, the file system hierarchy, the MaxiFS metadata and the user data need to be replicated. The file system hierarchy and the metadata are replicated by making sure that a fixed and predefined number of copies exist. However, the level of redundancy of the user data is supposed to be chosen by the end users of the system.
[0326] This allows the following possibilities: Some files may not be replicated at all. This makes sense for files that can be easily rebuilt, such as temporary files. Some files may have a fixed degree of replication, for example, mirroring by 3. Some files may have a minimum level of replication and a dynamic replication scheme so that the number of copies is increased or decreased on the basis of demand. This is useful especially for streaming media files that, by being replicated multiple times can be more readily accessible by more users, taking advantage of the additional processing power and network bandwidth that each server keeping a copy can add.
[0327] Therefore, whereas the number of replicas for the file system hierarchy and the metadata files is fixed, individual files may have a number of replicas that is below the replication factor used for the MaxiFS metadata, equal to it and even higher than it. In principle, metadata files could be allowed to include user data, the consequences would be that: In the case in which the replication factor for a file is lower than the standard number of replicas for the metadata, some of the metadata files will only contain the metadata, but not the user data. When the replication factor for metadata files and user files is the same, all metadata file may contain user data. And when the replication factor for user data is higher than that for the metadata files, there will be additional files that store the user data. This implies that in addition to the portions of the local file systems where the file system hierarchy and the MaxiFS metadata are kept, other areas need to exist, where copies of files beyond the replication factor of the metadata can be stored.
[0328] If, however, metadata files are not allowed to contain user data, then the metadata portion of the name space is completely decoupled from the handling of the copies of the user data. The latter is the model that is followed in MaxiFS. This suggests that any server should have its local file system structured in terms of a hierarchy/MetaData Repository and of a Data Repository that are independent of each other. In the following they will be identified as MDR and DR, respectively.
3.4.1 Server Nodes and Peer Sets
[0329] The requests MaxiFS clients send to the MaxiFS servers have the following purposes:
[0330] 1. Lookup of file and directory names.
[0331] 2. Directory enumeration.
[0332] 3. Setting and retrieval of file and directory attributes and protections.
[0333] 4. Creation and deletion of files, directories and symbolic links.
[0334] 5. File reads and writes.
[0335] All such requests start out with the identification of the file system object of interest and this is done through a pathname. So, all such requests stem from some pathname request. Pathname requests are mapped to operations performed on the MDR of some server node. The discussion on the structure of the namespace has been conducted in the previous sections, assuming individual servers implementing portions of the namespace. This is fine to illustrate the overall architecture and the concepts it is based on. However, in order for MaxiFS to be highly available, its services must remain available in the presence of server crashes and failures. Therefore, the functionality must be made redundant through the use of mirror nodes. This is particularly important for the MDR, as it constitutes the repository that implements the file system hierarchy. Therefore the loss of a portion of the MDR implies that some portions of the namespace would be no longer accessible and is not acceptable.
[0336] In MaxiFS, servers that replicate the same MDR are said to be members of the peer set that implements that MDR. Thus the basic building blocks of MaxiFS become peer sets, rather than individual server nodes and all the considerations related to the implementation of the distributed namespace (see above) need now be reinterpreted by replacing the notion of a server node with that of a peer set. The number of nodes that are members of a peer set ("cardinality of the peer set") is a key attribute of such sets. The trade-off is between having fewer members (that simplifies the management of the set and reduces the interactions among the members) and having more members (that increases the redundancy of the metadata peer sets support). Even if one assumes the very low reliability figure of 0.99 for an individual node, using 2-way redundancy, the resulting reliability for a peer set would be 0.9999. For 3-way redundancy, the reliability goes up to 0.999999. This is enough to satisfy the most demanding enterprise-level requirements. So, replicating the MDR (and the associated peer set membership) by 3 is certainly desirable and, although this need not be a strict requirement, MaxiFS uses 3 as the cardinality of peer sets for the distributed file system namespace and the associated metadata.
3.4.1.1 Nature of a Peer Set
[0337] One important decision taken has to do with whether peer sets members should be individual servers or <server, volume> pairs or <server, subtree> pairs, such that each subtree is a subset of an entire volume. Whichever of the previous choices is made, the three members of given peer sets must manage independent storage resources, otherwise the high redundancy peer sets need to accomplish would be lost. We now examine the above alternatives.
[0338] If members of a peer set are entire servers, there is a significant reduction in complexity and bookkeeping and all the resources on the member are dedicated to the peer set the server belongs to. The number of peers sets would be lower and with it the number of multicast addresses (or virtual IP addresses) to be assigned to them. However, peer set members in this case could simultaneously belong to one and only one set. This is clearly a disadvantage in that it makes it more difficult to make use of some servers, unless the number of servers is appropriate.
[0339] In case a finer granularity is chosen for peer set members (<server, volume>, or even <server, directory subtree>), then the same server, as long as it is associated with different volumes or subtrees, could simultaneously belong to more than one peer set. This requires more bookkeeping, but has the advantage that a smaller number of servers can constitute a useful redundant configuration and that if a drive should become unavailable the situation would be easier to manage with respect to one in which a peer set should transition to a form of degraded behavior.
[0340] To explain how the two cases above have implications on the efficacy of additional servers, assume that each server has four drives and that there are 3 servers available in total. With the first scheme, only a single peer set can be constructed. In the same situation, using to the second scheme, with <server, volume> pairs as peer set members, it is possible to create 4 peer sets, across which the namespace can be distributed. So, despite a bit of additional complexity, the second scheme allows the construction of a more flexible framework and a better distribution of the namespace across all the servers. It could be argued that a possible choice could be that of adopting the second mode as long as the system is made of few servers, whereas the first mode could be used when a certain threshold in node count is passed. However this would lead to further complexity and therefore is not a convenient path to take.
[0341] In general, given the nature of the servers used in MaxiFS (see above) that have M disk drives each and given the choice of having 3 members in each peer set, using set members defined as server/volume pairs, the number of peer sets p that can be generated out
N M 3 ##EQU00003##
[0342] With respect to the case of 2 members per peer set, having 3 members has the slight drawback that for all the server/volume pairs to be used, the product of the number of servers by the number of drives per server should be divisible by 3. When this is not the case, one or even two server/volume combinations that could be potential peer set members cannot carry out this role.
[0343] However, this does not mean that such "spares" would be unused because they can always host user data, even if they do not store metadata. Moreover, they can be kept in stand-by, ready to replace server/volume pairs that go offline. Volumes peer set members associate with the peer sets to which they belong are very similar in structure and contain an MDR whose structure essentially identical for all set members.
[0344] This concept could be generalized by allowing multiple MDRs to coexist within the same physical volume. This could be useful because without it, if a node could only be associated to a peer set on the basis of a volume, essentially each node could at most be member of 4 peer sets (the number of disk drives). Allowing multiple "logical volumes" to co-exist within the same drive (the system takes care of avoiding that members of the same peer set are implemented on the same node), even if each node already has 4 memberships and in case another node fails, it is still possible to reassign the role of the failed node to one of the healthy nodes.
3.4.1.2 Member Recovery and Replacement in Peer Sets
[0345] The possibility that a member of a peer set may crash or become unreachable is far from remote, especially considering that the servers MaxiFS runs on are inexpensive. As such they do not provide hardware redundancy of any sort. The idea is that when a server node dies or some of its vital components fail, the server must be replaced, but this must not affect the operation of MaxiFS. There could be various reasons why the member of a peer set may cease to function properly. These include hardware breakage, software faults and network outages. MaxiFS must be able to deal with such events making sure the reductions in data redundancy may only last for a very limited time, to prevent resources from becoming inaccessible. So, the steps necessary to properly deal with such issues are the following:
[0346] Detection.
[0347] MaxiFS must be able to realize that a system is no longer available, so that appropriate actions can be taken. The difficulty here is in reliably detecting that a node is down, because premature replacement of a node impacts the costs caused by the amount of load and network traffic needed to reconstruct the missing redundancy (when it had no need to be reconstructed in the first place, because the diagnosis was premature and inaccurate). This implies that the choice of the time period after which a node is considered lost must minimize the likelihood of having performed useless work and the temporal window over which the data redundancy is reduced.
[0348] Selection.
[0349] Once a system is no longer a member of a peer set, it is necessary to select a new node that will take over the role of the lost member. The node should not be overloaded already and, possibly, very similar to the remaining peer set member, in terms of performance and capabilities. The remaining peer set member should perform the selection as soon as it is authorized to do so by the peer set supervisor.
[0350] Replication.
[0351] This phase entails the selected node to synchronize the metadata with the surviving member of the peer set. This phase is complex and critical. The entire MDR managed by the peer set must be replicated on the candidate member. Since the MDR is limited to containing only the MaxiFS metadata (no user data), the quantity of information to be copied would not be massive. On the other hand, this is very much a metadata driven activity and therefore it will involve a fair amount of I/O operations.
[0352] Replacement.
[0353] Once the data replication is complete, the new member of the peer set should start operating as a full member of the set.
[0354] The above sequence is necessary once it is clear that a member of a peer set is unavailable. However, before reaching that conclusion, it is possible to attempt simpler recovery strategies, such as a restart of the MaxiFS subsystem running on the server. If this is unsuccessful, the server could be rebooted. Nevertheless, it would be worthwhile to proceed with the sequence previously described, as soon as possible, to avoid reducing the redundancy for a significant amount of time.
3.4.1.3 Peer Set Identity
[0355] Each server node that joins the infrastructure is assigned a unique and permanent ID. Also, each peer set, when created, is assigned an ID that is unique for that peer set and is not changed even if the members of the set change (This peer set ID could be associated with a multi-cast address for the peer set (if multi-casting is used), or it might be a virtual IP address that is assigned to the primary set member and migrates with the primary role. The unique peer set ID could also be used as the least significant portion of the multi-cast (or virtual) IP address). The namespaces of node IDs and peer set IDs are disjoint. Also, for each set another peer set is designated as its supervisor. Its role will be clarified below. The algorithm used to choose a supervisor peer set is simple. If there are N peer sets in the system, the supervisor of set i is set i-1. Set 0 has set N-1 as its supervisor. This implies that a single peer set is not admissible for a MaxiFS system to function: at least two are needed. When a peer set is established, a counter is initialized to zero. This number is called the peer set generation counter. Members of the same set always have to have the same generation counter and embed it within any message they send to clients or to other server nodes. This way, clients are capable of detecting whether the information they have on the peer set is stale and can request updates. One out of the 3 members of a peer set is identified as the primary member. The others are secondary members. The primary member is the authoritative node, meaning that its state and MDR are always the reference point for the entire peer set. Members of a set perform a sort of heartbeating, so that it is always known whether they are all reachable. Rather than pure heartbeating, as in traditional clusters, the mechanism in place is lease-based. This is only marginally different from many traditional heartbeat implementations, except for the fact that cluster heartbeating is normally performed over redundant connections some of which are dedicated to this function. The primary member of the set requests a lease of the secondary members. The secondary members only request a lease to the primary, but not to each other. After half of the lease time has expired, any member has to renew its lease. If this does not happen within the lease period, the member that does not receive the lease requests tries to query its peer directly. If a number of retries are unsuccessful, the member concludes that its peer is down or unreachable.
[0356] When the latter occurs, the peer set is in a degraded state and its original cardinality must be reestablished, by adding a new member. Typically a situation of this nature, if due to hardware failure of a node or to loss of connectivity, may cause the same problem to occur in all the peer sets to which the node belongs.
[0357] In case connectivity issues (if a hardware fault is involved that takes down a node, there would be just one or two subsets of the original peer set), it may well happen that a peer set breaks into two or even three subsets (in the first case one subset would contain two members and the other only one, whereas in the second case, each subset would contain just one member). Any subset may then try to add new members to the peer set. To avoid races, a member that has detected the loss of a peer requests its supervisor peer set for permission to delete the unavailable member of the set and to add another one. The supervisor peer set will authorize only one of the subsets to delete its peer node from the peer set and to replace it with another one. The fastest subset to reach the supervisor (the slower node may in fact have crashed and restarted) wins. The act of authorizing the winning member to elect a new peer, also allows it to bump the peer set's generation counter. From that point on any packets the other former members of the peer set send to servers or to clients are labeled with an old generation counter and this allows the detection of stale servers. The new primary is aware of the existence of another secondary member and updates it with the new status (including its new role and the new generation number). At this point the peer set enjoys full membership, but needs to reconstruct the set cardinality by updating the new set member with the MDR associated with the peer set. When this is completed, heartbeating fully resumes and the set is no longer degraded. A server that could no longer communicate with the peer may have crashed or disconnected. Whether it could communicate with the supervisor set and saw its request to be the new primary denied, or whether it was totally unable to communicate with its supervisor, it should consider itself free and available to join another peer set needing a new member. In any case, it should not delete its prior MDR until the time when it joins another set. In case the member authorized to become the primary, used to be a secondary member, it may be true that the previous primary became unavailable. The other possibility is that the other secondary disappeared. In the former case, the ex-primary node now changes its role to that of secondary member.
3.4.1.4 The "Color" Property of Peer Set Members
[0358] Independently of the primary and secondary roles in a peer set, each member of a peer set is also assigned a color property. It can assume three values: Red, Green or Blue. The color is totally unrelated with the primary or secondary role in the peer set. Its value is assigned when a member joins a peer set and never changes, even if for members that transition from the primary role to a secondary one, or vice-versa. The color property loses its value when a node leaves a peer set. Also, when a new member replaces a previous peer set member, it receives the color of the member it replaces.
[0359] The purpose of the color attribute is that of allowing the partitioning of tasks to be carried out only by one or two of the members of the peer set, in such a way that the tasks can be assigned by hashing to a color. For example, when a file needs to be created in a single copy, depending on the file name, the file might be stored only within the peer set member that has the color to which the file name is hashed. Likewise, in reading there would be no need to have the members of the peer set interact to verify which member should serve the file because this would be determined by the hashing of the name to the appropriate color. Likewise, specific node management tasks could always be carried out by the node with a given color.
3.4.1.5 Interactions between Clients and Peer Sets
[0360] Interactions between the clients and the peer sets can be implemented in one of two ways: A) By relying on multi-casting and assigning a permanent multi-cast address to each peer set. B) By assigning a virtual IP address to the primary node of a peer set. This IP address would have to migrate with the role of peer set primary member. The first option is attractive in that it simplifies the protocol and greatly simplifies the process of having one IP address tied to a peer set. For multi-casting, new members of the set should merely join the multi-cast group associated to the peer set and members leaving the group should disassociate themselves. Whereas, if the second option is adopted, making sure that the virtual IP address for the set is bound to the new primary member must rely on the clear indication that the old primary is definitely out of business.
[0361] Also, multi-casting greatly reduces the message traffic between clients and servers by leaving the replication of the packets to the appropriate nodes in the network infrastructure. On the other hand, multi-casting may have impact on the customer's network or may be perceived as a potential source of additional and unwanted traffic. The MaxiFS design relies on the multi-casting based scheme. In addition to the advantages outlined above, the negative aspect of multi-casting (the reliance on packet replication by network switches) is not very limiting as the replication would only occur within the MaxiFS infrastructure and not between clients and the infrastructure. The range of multi-cast addresses can be chosen, so as to avoid unwanted interactions with the customer's network infrastructure. Effectively each peer set will be associated to a multi-cast address and members of the peer set will join or leave the multi-cast group associated to a peer set at the time they join or leave the peer set. Given the one-to-one mapping of peer sets onto multi-cast addresses, effectively clients only need to interact with the infrastructure in terms of multi-cast addresses. So, client requests will never be addressed to one server, but rather to a peer set. Note that within a peer set, the members need to have a closer level of integration and must be aware of each other's identity and IP address, in order to properly coordinate the activities peer sets are asked to carry out.
[0362] Non-destructive operations (the expression destructive operation is used to identify any operation that alters the state of the namespace or the content of a file) requested to a peer set can be distributed among all the members. This allows the members to share the load. In order to allow the distribution of such requests in a way that is fair among all the peer set members, either the primary member of the set needs to pre-allocate tokens to set members so that each member knows which requests it should deal with, or an appropriate algorithm should be defined that obtains the same effect. This is much more effective than having the set members negotiate to decide who should handle each request. When destructive operations come into play, they need to make sure the evolution of the state of the members of the peer set occurs in lockstep, so that it would be impossible to obtain different outcomes as the result of a request, depending on the node the client is interacting with. Very often applications tend to use files as semaphores. This reliance on the atomicity of pathname operations emphasizes the need for all the destructive pathname operations to always operate consistently across all the members of a set.
[0363] One possible option to allow destructive operations to be performed in lockstep among all the members of a peer sets is explicitly managing the redundancy, by creating a service layer that insures that the servers mirroring one another are always in sync. This entails a "logical" form of mirroring, in that it is necessary and sufficient to replicate only what is needed to make sure that the client view is consistent between members of groups of servers that work together.
[0364] A disadvantage of this approach is in the fact that this scheme is very much dependent on the MaxiFS architecture, so it is an ad hoc design that has to be implemented from scratch. The fact that the scheme is specific for the MaxiFS architecture is also an advantage because this provides a logical view of the world, rather than a physical one. Therefore it can minimize the amount of information that has to be transferred and streamlines the server interactions. Since it is based on a logical view, it better accommodates physical differences in the servers (such differences would undoubtedly develop in any system, due to the gradual replacement of servers over time).
[0365] Another option is using mechanisms of automatic block replication in which the actual disk writes to a node can be forwarded automatically to other nodes to keep them in sync. This scheme operates on a physical level and is available in standard packages for Linux and other Operating Systems (for example, see NBD (), DR:BD () and DoubleTake ()).
[0366] Here a major advantage consists of the fact that this software is available off-the-shelf and needs no special adaptation. This approach requires the configurations of the servers involved to be very well matched, if not identical. Sector-by-sector replication may have to replicate data structures inessential with respect to the client view. This may require more bandwidth and processing than in the other case. Packages based on this type of scheme require a traditional clustering infrastructure, in which it is possible to detect the state of the other members of the cluster via redundant network connections, at least one of which needs to be dedicated to this function.
[0367] The second scheme may in fact be overkill, because it would probably require the transfer of much more information than it is strictly needed, thus causing waste of network bandwidth. Therefore, MaxiFS uses the first scheme. As a general criterion, it is desirable to let the MaxiFS clients perform as much work as possible, with respect to the server nodes, for all matters in which they have direct knowledge. This has two positive effects. It allows the entity that is most knowledgeable about a given issue to exercise the appropriate decisions in cases in which the server nodes might have to resort to generic behavior. And it reduces the amount of load on the server nodes.
[0368] When a client requests a peer set to perform a destructive operation, the primary member of the set coordinates the actions to be performed with its peers by receiving their acknowledgments for any operation the client requests. It also manages the retries and the error recovery, in case one or both secondary members of the set are unable to successfully complete. Finally, the primary is only member of the set that sends an acknowledgement packet back to the client. There are other cases in which the server nodes are the ones that should perform the necessary actions because they might be the best informed entities. All the actions that relate to resynchronization of a peer set and the like fall into this class.
[0369] An appropriate System Management service exists to perform the resynchronization of the file systems of the secondary members (or of their subsets) with the primary (see below). Since the system cannot be expected to remain idle while the resynchronization is in progress, it should still be possible to perform destructive operations in the peer set being regenerated, at least within the portion of the hierarchy that has been resynchronized. This is relatively easy to do if the active peer keeps track of where in the tree the resynchronization is occurring.
[0370] The algorithm works as follows: the peer set member (active member, which can be any member of the set that is in charge of the reconstruction and it need not be the primary member) that is replicating its MDR to another joining member (passive member) performs a recursive traversal of the MDR tree to be replicated and copies the items it scans one at a time. As it processes files and directories, it keeps track of where it is in the tree. Whenever a it receives a client request to change any portion of the MDR, the active member checks whether the request relates to an item that is part of the portion of the tree already processed. If it is, the request is forwarded to the member being updated. If it is not, the update is only performed to the member's MDR because the updated version will be replicated when the scan reaches that item. The active member need not be the primary. In fact, it is convenient that this is avoided, to avoid overburdening the primary.
3.4.2 The MDR and the Structure of Metadata Files
[0371] An MDR is always associated to a peer set, in the sense that all the members of a peer set are expected to have identical MDRs at all times that should always evolve in lockstep. When this is not the case, it is an inconsistency that must be repaired immediately.
[0372] An MDR only exists in those server/volume pairs that are members of a peer set. However, it is conceivable to have multiple MDRs to coexist within the same volume. This could be useful because without it, if a node could only be associated to a peer set on the basis of a volume, each node could at most be member of 4 peer sets (the number of disk drives). Allowing multiple peer sets to co-exist within the same volume (the system takes care of avoiding that members of the same peer set are implemented on the same node), even if each node already has 4 memberships, in case another node fails, it is still possible to reassign the role to one of the healthy nodes. Metadata files hosted within MDRs are used to describe where the data associated to a file is stored within an infrastructure. Such files could just contain metadata or could contain user data, as well. However, since MaxiFS can have a variable number of mirrors per file across the entire infrastructure, even if user data is stored in the metadata files, there is the need for separate mirrors when their number exceeds the cardinality of the peer set.
[0373] Therefore two options exist: to store user data in metadata files, until the peer set cardinality is exceeded, and to always store files separately from the metadata. An advantage of the first option is that, especially for small files, once the metadata file is opened, the client could read the user data, instead of having to open a separate data file. On the other hand, two aspects suffer: more complexity needs to be built into the product, to cope with two separate cases and the process of copying a portion of the file system hierarchy to another node is more expensive in time and complexity. The second alternative seems far more attractive for the reasons discussed. Thus, metadata files will merely be descriptors of where the actual user data is stored.
[0374] When a file is created, its metadata file is hosted by the peer set that also hosts the parent directory. If the file has multiple mirrors, the mirrors can be hosted on other peer sets as well. The latter peer sets, however only store the file, but not its metadata. In a sense, the first peer set is the one that owns the file.
[0375] A second aspect to be discussed is whether it should be possible to stripe files across multiple nodes. The advantage here would be that of allowing the most effective use of space. The disadvantage is the resulting complexity. Because of the latter, at least in the first release of the product the striping of files across nodes will not be supported, although the architecture is open to this evolution.
[0376] Metadata files contain two kinds of information. First is a generation number for the metadata file. This starts at 0 when the file is created and is increased by 1 for every time the content of the metadata file is changed. The reason for this is that of allowing the verification of the consistency of the metadata files across the members of a peer set. Second is a list of <peer set ID, file name> pairs that identify where copies of the file are kept. The file name identifies the way to reference the file in the DR of each of the peer sets where a copy of the data file is stored.
[0377] The first peer set listed in the metadata file is always the one that owns the file, in the sense described above. The actual name of the data file need not be correlated to the name of the metadata file. The latter is the name by which clients of the infrastructure know the file. The former is the name used to access the file within the appropriate member(s) of the specified peer set. A consistent naming scheme throughout the infrastructure is necessary to make sure that file names are unique, so that moving a file from one peer set to another does not entail the risk of name collisions.
[0378] Thus the name can be made of two components: First is a unique per-file ID expressed as a hexadecimal string. This ID could be made of a portion that relates to the peer set where the file is created initially and by a counter incremented each time a new file is created within the peer set. The peer set ID component of the name is only to partition the unique ID space to avoid that the same name may be generated at the same time on different peer sets. However, once the file is created, it can migrate to any peer set, if need be, without having to change that portion of its name. The second component is a generation number that starts at 0 when the file is initially created and is bumped every time the file is rewritten. The generation number must be returned to the client for any transaction that involves the file (see below for details).
[0379] The full pathname of the directory where each such file resides need not be listed explicitly in the metadata file, because it can be chosen to be that of the root of the DR, followed by the names of subdirectories obtained by breaking the hexadecimal string representing the unique ID for the file into a number of segments, to limit the number of data files in each directory (for example, given that the ID is a hexadecimal string, if each segment is 8-bit long, then each directory corresponding to a segment can contain no more than 256 children) in the DR. As an example, assume that we are looking at a certain file, whose metadata file contains the following information:
TABLE-US-00001 File ID: 12ab34cd56ef 1st peer set: 6, 1233 2nd peer set: 18, 1232 3rd peer set: -- 4th peer set: 23, 1233
[0380] This means that the file whose name is "12ab34cd56ef" in the Data Repository is stored on three out of 4 possible peer sets (the list need not be limited to 4 peer sets).
[0381] Peer sets 6, 18 and 23 host copies of the file. For each peer set that contains the file, the ID of the peer set is listed, along with generation number of the copy it stores. The first Peer Set in the list is also the owner of the file (note that to make room on a peer set that is approaching full capacity and "owns" a certain file, it might be necessary to migrate the data file away from its owner peer set. In this case, an appropriate marker in the table would indicate the situation), i.e., the peer set that stores the file metadata. The other peer sets host only additional copies of the data file (not of the metadata). In this example, given the name of the file ("12ab34cd56ef"), the copies on peer sets 6 and 23 are up to date, as they contain the latest generation number (1233), whereas those on peer set 18 are behind by one generation and need to be updated. Assuming that the DR for the peer sets has the pathname "/DR" and that the intermediate directories are chosen by dividing the ID string so that each directory covers one byte of the unique ID, the actual pathname for the file would be: "/DR/12/ab/34/cd/56/ef/12ab34cd56ef-1233" for peer sets 3 and 23 and "/DR/12/ab/34/cd/56/ef/12ab34cd56ef-1232", for peer set 18.
[0382] When a file needs to be created, the identity under which it will be created will be that of the client process requesting it. This implies that the ownership of the metadata files will be associated with the identity used by the client process performing each request (this allows the client to rely on each local system's protection subsystem to validate the operations requested, rather than forcing to a reimplementation of the protection mechanisms in the MaxiFS layers). The way open file requests should be handled is the following. Every time the peer set is asked to open a file, it opens the corresponding metadata file. It then checks the consistency among the generation numbers in the <peer set ID, file name> pairs. In other words, it makes sure that the generation numbers for all the mirrors are the same. Should this not be the case, the peer set is responsible for the resynchronization of the copies. In this case, the peer set should only return the subset of the members of the mirror list that is in sync and start offline operations to resynchronize the stale copies. The peer set returns the list of <peer set ID, file name> pairs to the client. The latter then decides which peer set should be accessed and how.
[0383] The hypothesis of using regular files as metadata files is certainly acceptable. On the other hand, there is another possibility that can have some advantages: the information that would be stored within a metadata file could be encoded and stored within symbolic links. Symbolic links are simply implemented as files whose special type is recognized by the file system. They contain pathnames that point to nodes in the local file system hierarchy. Being symbolic, they do not have the same restrictions that hard links have. Specifically, they are not constrained to be interpreted only within the file system volume to which they belong and can point to directories, not just to files. They also have the characteristic that, unlike hard links they are not reference counted and may become dangling references whenever the target object they point to is deleted.
[0384] Because of the fact that dangling symbolic links are normal, it is certainly possible to think of encoding the metadata information into them. As any other pathnames, the pathnames stored in a symbolic link must be made of components that do not contain the slash character, nor the null character (C language string terminator), are no longer than 255 bytes and are separated by slashes. There is also a limit to the length of a symbolic link that is system dependent.
[0385] The pathname stored in a symbolic link can certainly be used to encode whatever information MaxiFS needs to keep in a metadata file. The length limit, however, could be a problem, especially for files that have many mirrors. In any case, the length limitation can be extended with a minimum of programming. So, assuming symbolic links are used as metadata files, a peer set member would set the content by creating the file through the "symlink( )" system call and would read the content via the "readlink( )" system call.
[0386] It is attractive to think of symbolic links as repositories of metadata information. A symbolic link uses as little room as needed. If the string it stores is short, it is entirely contained within the i-node that represents it on disk. Otherwise, it can expand to direct data blocks associated to the symbolic link. This means that for files that have a limited amount of metadata, it is possible to limit the amount of storage used to the size of one i-node, that is generally much smaller than the size of a data block. Since a symbolic link is a system file, the guarantees the system offers on the integrity of its content are higher than for any user data file. And the number of system calls needed to create and write and to read the content of a symbolic link is limited to one. The "symlink( )" call creates the link with the specified content. The "readlink( )" call retrieves the content. Both of them do not require prior "open( )" and subsequent "close( )" calls.
[0387] For all the above reasons, the MaxiFS metadata is stored within symbolic links. The next section describes the how files in the DR are managed.
3.4.3 The DR and the Structure of User Data Files
[0388] The concept of a DR is logically disjoint from that of the MDR and from that of a peer set. It is certainly possible to associate the DRs to individual server/volume pairs. However, this tends to make DRs less robust with respect to MDRs. The reason is that MDRs are associated to peer sets. This is an abstraction that is independent of the physical nodes that are members of a peer set at any one time. Therefore, when the MDR within a given peer set is referenced, this reference is always accurate over time regardless of how the peer membership may evolve. Moreover the peer set concept makes the MDR more available because the likelihood of all peer set members crashing before new members are added to the set is very small. In the case of DRs attached to individual servers, this would not be the case. In addition to this, interactions at the MDR level could always be managed abstractly via peer sets, whereas for DRs, clients would have to talk to individual nodes. However, if some minor restrictions are introduced, most of the advantages of peer sets can be made available to DRs. To avoid introducing entirely new abstractions, it is possible to tie DRs to peer sets. In other words, each peer set would then manage one MDR and one DR. In principle, this becomes even easier when one constrains the cardinality of mirrors to multiples of the size of a peer set (i.e., when a file is stored in a particular peer set, then a copy of the file is stored in each node of the peer set). Given that a peer set is made of 3 members, this would mean that a file could exist in 3, 6, 9 . . . , 3×N copies, where N is the number of peer sets in which the file is stored, and N can be selected based on various rules or policies and may be different for different types of files. With this limitation, we can have better conceptual economy and simplify the system. The clear drawback of this scheme is that this systematically multiples the amount of storage used by at least a factor of 3, which may be undesirable, especially when the MaxiFS infrastructure must also store files that require no mirrors or files for which mirroring by 2 is more than adequate.
[0389] A way out to allow only the peer set that owns a file to store not only a number of mirrors equal to the cardinality of the peer set, but also a single or just 2 copies (this is an optional optimization that is not required to be implemented). This breaks a bit the full symmetry of DRs with respect to peer sets, nevertheless, in case a peer set member is lost, the remaining members would get a new member and would make sure both the MDR and the DR are updated on the new member. There is always the case of files that existed as the only copy on a member that died. However, if they existed in a single copy, the customer must have decided that those files were in fact disposable. The decision on how many mirrors a file should have (if any) is a configuration decision that depends on the file suffix, the file size, the directory where it resides and so on. Large files are decoupled from their metadata counterparts and can have as few or as many mirrors as needed. In any case, these files will be managed in the DR.
[0390] When a server/volume fails, one of the first responsibilities of MaxiFS is that of restoring the redundancy of the files that were in the server/volume that failed. At that point scanning the entire global name space hierarchy would be time-consuming and would generate additional load, at a time when considerable load may be induced by the failure. However, on the basis of the fact that peer sets manage both MDRs and DRs, after a member leaves a peer set and a new one is elected to replace it, it is sufficient that as the MDR replication proceeds, the active member replicating the MDR should trigger a file replication every time a metadata file is encountered that had a mirror on the crashed node. Clearly this is impossible for files that only existed on the crashed node, but this would be the case of a file not replicated because it was not deemed important. Each data file has itself a header at the beginning of the file. The header contains the following:
[0391] The offset at which the actual user data is stored, following the header. Client-level read and write operations can only be performed starting at that offset. File offsets specified by the client should always be incremented by the data offset before a read, write or truncation is performed.
[0392] The ID of the peer set and the pathname that clients use to reference the file (this would be problematic if MaxiFS had to support hard links, which it does not). This allows the system to find out which metadata files point to the data file and to access other copies of the file if needed. Note however, that this pathname is to be considered as a hint, rather than as an absolutely accurate reference. The reason is that if this reference were to be accurate, any rename of a directory in the pathname of the file should cause all the pathnames in all data files below the renamed directory to be updated. This is far from desirable. On the other hand, since renames are not frequent, the pathname can be updated the first time the file itself is updated.
[0393] As mentioned earlier, data files are immutable (the only exception are data files used as logs, which will be discussed in more detail later). Therefore, a file with the new generation number replaces the previous version atomically at the time the new file is closed, after being modified. The generation number for a file is chosen by the peer set that owns the file at the time the file is opened for writing. Secondary members of the set will use the same number and this will be true of any other mirrors. One question that needs to be addressed is how writes should be handled. In a way, having clients directly writing to all the servers that are supposed to store the mirror copies of a file appears the best way to go, since it allows the creation of parallelism and redundancy right away and again it concentrates the "intelligence" within the component that is most knowledgeable about the file: the client.
[0394] On the other hand, this might not be the best policy when the number of mirrors a file needs is higher than 3. In this case, the writes would not only affect the peer set, but also external members and the coordination of the writes might become problematic. Whereas, if the writes only go to the members of the peer set that "owns" the file (in the sense that the file is part of a directory hashed to that peer set (see above)), the peer set has internal mechanisms that allow the writes to proceed in lockstep. The compromise chosen in MaxiFS is that, since DRs are tied to peer sets, when a file is to be updated, the client directly interacts with the members of the peer set where the parent directory for the file is stored, including up to three members. If the number of mirrors goes beyond three, the peer set will schedule the creation (or the update) of additional copies in an asynchronous fashion, when the client closes the file.
[0395] Note that writes behave pretty much like metadata operations. In both cases, clients send their requests only to the primary member of the set. This is appropriate despite the fact that metadata operations normally carry a minimal data payload, whereas data write packets may carry much larger payloads. In the case of metadata operations all members of the peer set need to receive the request. In the case of writes, even if the payload is large and just one copy of the file exists (which means that just one server would need to perform the write), the packet is replicated by the last switch and therefore, the impact should be contained. Moreover, the general case will be that of a file that has more than one copy, in which case more than a single server must process the write. The case of reads is a bit different. Multicasted read requests have a minimal payload. So, even the replication of the packet has minimal impact. In any case, by having a read request reach all of the server in a peer set, mechanisms internal to the peer set may properly distribute the read accesses among the servers that have a copy of the file (the others would ignore it). Clients that want to operate by performing striped reads from multiple files would do so for files that have mirrors on at least two peer sets and would split the multi-cast read requests appropriately.
3.5 Special Handling of Small Files
[0396] In the kind of application environments MaxiFS targets, there are many situations in which the ability to provide extremely fast access to files that are very small is mandatory. This is typically the case for files that contain thumbnails or small pictures. In such cases the overhead implied in the access of such files is excessive. To open one such a file, even discounting the time it takes for NFS to lookup the intermediate components of a pathname, it would be necessary to lookup the file i-node from the directory, to read in the i-node for the file and finally to read the data block for the file. This entails at least 3 I/O operations. In many systems, most accesses are of this nature and the files to be accessed are very random, so that no advantage can be obtained by using front-end caches. Therefore, special facilities to minimize the number of I/O operations to access such small files are desirable.
[0397] A way to do this is to keep files in this class within file systems implemented on the server nodes as an array of extents all of the same size (in an actual implementation, this restriction might be relaxed by allowing files to span multiple fixed size extents in a volume, up to a pre-established maximum) (see FIG. 24). Access to the individual extents would occur by simple indexing into the array. A bitmap could keep track of the extents that have been allocated.
[0398] To understand how this could be used in practice, assume that a special top level directory in the namespace of MaxiFS could be dedicated to this functionality. Assume that this directory does not really exist on any local file system but is interpreted by the client software in such a way that all accesses to names that encode an index under that directory are managed as special accesses to a short file via its index. For example, assume "/sfr" is such a directory. Then opening "/sfr/CD3A" would in fact request access to a small file on an optimized repository that has 0xCD3A as its hexadecimal index. This would be implemented within dedicated volumes that would have to be allocated upfront. The reason for the dedicated volumes is that either a very simple file system could be implemented to deal with such volumes or the volumes themselves could be used through a specialized service that accesses these volumes as raw devices.
[0399] A possible layout of the volumes dedicated to this function is shown in FIG. 24, where the bitmap (alternative structures without a bitmap could be devised as well) is stored in the initial portion of the volume and the array of extents follows. The color red in FIG. 24 is used to mark the allocated extents (and the corresponding bits in the bitmap). The other extents are free.
[0400] Giving clients direct access to the small files via their index would be impractical. An index by itself would always provide access to an extent, without regard to whether it is still allocated or has been freed. There would be no way to discriminate among successive incarnations of small files stored in the same location. It would be difficult to identify which server manages the specific small file repository where the small file of interest is kept.
[0401] For these reasons, each such file should have a globally unique ID within MaxiFS, instead of just an index. The Unique Small File ID ("USFID") could be structured as the concatenation of four components, as in: USFID=<ps><s><b><g>. Each component of the unique ID is within angle brackets. Their meanings are as follows: <ps> This field is the ID of the peer set where the small file resides. Note that by embedding the peer set ID in the USFID, the file is permanently tied to the peer set and cannot be freely relocated from a peer set to another one. <s> This is the slot ID or, in other words, the index of the logical volume block where the file is stored. By making this piece of information part of a USFID, the file can only reside at a specified logical offset within a volume. <b> This is the number of logical blocks that the file uses. By embedding this piece of information into the USFID, the file cannot change length. Note that the actual length of the file in bytes is stored in the file metadata region that precedes the actual user data on disk. <g> This is the generation number for the file. It is used to make sure that two different files occupying the same slot at different times cannot be confused with each other. With a large enough number of bytes devoted to this function, the recycling is practically impossible to achieve, within a given time frame.
[0402] So, with respect to FIG. 24, assuming <ps> is 0xABCD ("0000ABCD", 4 bytes), <s> is 5 ("00000000005", 6 bytes), <b> is 16 ("10", 1 byte) and the generation number is 0xB87F81692 ("B87F81692", 5 bytes), the USFID for the file, expressed in hexadecimal, would be:
[0403] 0000ABCD 00000000 000510B8 7F181692
[0404] This information could be made available to applications through system calls of the stat( ) family, broken down into two components: the device number and the i-node number ).
[0405] Information such as the generation number should also be stored as file metadata, along with other information, such as the actual file length (amount of storage space used for the filer can be smaller than the entire extent), ownership data, access permissions, creation time and more. This metadata would be stored in the first portion of the extent, followed by the actual data. The POSIX file interface does not have a way to create anonymous files, to later assign names to them. However, MaxiFS allows the same to be accomplished through a sequence of POSIX calls similar to the following:
TABLE-US-00002 1. fd = creat("/MaxiFS_mp/sfr/smallfile", 0777); 2. n = write(fd, buff, bytes); 3. ... 4. sfn.buffer = name, sfn.length = sizeof(name); 5. fcntl(fd, MAXIFS_GETUSFID, &sfn); 6. close(fd);
[0406] In statement 1, the name supplied is purely conventional. It is made of a stem that is the mount point of MaxiFS on the client where the creation of the file is requested (in this case: "/MaxiFS_mp") and by a pathname relative to the mount point ("sfr/smallfile"). The latter identifies the MaxiFS-wide small file directory ("sfr") and a conventional name ("smallfile"). Use of the directory (the special directory "sfr" is the directory under which all small files are accessible. It has no subdirectories, nor any subdirectory.
[0407] From statement 2 onward, the caller writes data to the new small file. In statement 5 the client invokes a fcntl( ) operation ("MAXIFS_GETUSFID") specific to MaxiFS. The execution of this call entails the following:
[0408] 1. The client informs MaxiFS that the small file has now been copied completely.
[0409] 2. The client requests the USFID the system generated for the file. The name of the file will be returned as a string that is stored in the data structure fcntl( ) takes as an argument (`sfn`). For this reason the caller sets the buffer where the name will be stored and the buffer's length in statement 4.
[0410] 3..
[0411] Finally (statement 6), the client closes the file. From now on, the file can be accessed in reading via its name. Assuming that the fcntl( ) invocation returned the USFID "0000ABCD00000000000510B87F181692", the new small file would be opened as: "/MaxiFS_mp/sfr/0000ABCD00000000000510B87F181692" (in order to support this functionality at the application level, it may be necessary to develop packages, libraries and so on for the prevalent programming languages used for Web 2.0 applications (Java, Perl, Python, etc.)).
[0412] Typically, such files are opened for reading. However, there is an important case when such a file may have been.
[0413] In any case, after a small file is created, MaxiFS supports read access to it via a single I/O operation. Therefore such USFIDs can become part of URLs, so that access to such files, even if extremely random, need not cause the servers to perform lots of I/O operations.
[0414] The enumeration of the small files contained in the special namespace directory merely requires identifying the allocated extents (from the bitmap, in this example) and reconstructing their unique IDs. To enumerate all such files across the entire MaxiFS infrastructure one such enumeration should be performed within the small file volume in each of the peer sets in the system.
[0415] Deletion of small files would be possible through their USFIDs.
[0416] Such files would have to have redundancy. For simplicity, this would be done make sure any such files exists in three copies: one on each of the small file volumes in each member of the peer set the files belong to.
[0417] A departure between replications across file systems of this nature and the ones that have been discussed previously is that the previous discussions focused on a logical replication, in which the actual layout of files across replicas is totally immaterial. The only thing that matters is for the copies to be synchronized.
[0418] In this case, instead, not only must the files be replicated, but it is also necessary to store each file exactly at the same location in each replica of the small file volumes. Were this not the case, the same ID could not apply to different copies of the same file.
[0419] (Some of the limitations due to this form of partitioning could be easily circumvented if the file system running on the server nodes were ZFS. In this case it could be possible to always allocate such partitions and to include them within the ZFS file system whenever they are unused and extra space is needed, since ZFS would allow such partitions to be seamlessly and dynamically added to a running ZFS file system).
3.6 System, Node and Client Initialization
[0420] Since multiple MaxiFS infrastructures could potentially coexist within the same network, it is necessary to assume that each such infrastructure would have its own name and identifier. They would be used by clients when they mount exported MaxiFS directories to a local file system directory. The name of the infrastructure and its ID are stored within all the servers that are members of the infrastructure.
3.6.1 Initial Setup of a MaxiFS Infrastructure
[0421] The initial setup of a MaxiFS infrastructure with multiple nodes is an iterative process. This is a task that is essentially handled by System Management after a System Administrator has identified the servers that should be part of the infrastructure. This involves the creation of the initial peer sets. The first peer set to be created should be peer set 0. This is a special peer set, in that the procedure followed for its initial set up is not the standard one. This is so because the standard automatic procedure requires a supervisor set to be present and there is no supervisor set available for set 0 initially. After this is done, other node/volume combinations can be assembled together into peer sets using the standard procedure.
3.6.2 Addition of a Node to a MaxiFS Infrastructure
[0422] When a server node initially joins an infrastructure there are the following possibilities, which must each be handled differently:
[0423] 1. The node may be rejoining the infrastructure after a crash.
[0424] 2. The node may be rejoining after an orderly shutdown of the infrastructure and the subsequent reboot.
[0425] 3. The node may be joining the infrastructure for the first time.
[0426] In case 1, when the node is rejoining the infrastructure after a crash, on reboot it should be able to identify the infrastructure it belongs to. Assuming this is the case (if it is not, the situation is handled in case 3), then for each of its volumes, the node should first identify whether it was a member of a peer set before crashing.
[0427] If it was a member of a peer set, it should send a message to the peer set primary, asking them to rejoin the set as a secondary member. If the primary member refuses the request, the node should delete the information regarding its previous peer set, it should delete the MDR relative to the set and should simply make itself known to System Management as a node that can operate as a DR server (a mechanism should be included to reclaim storage for stale DR data that is no longer usable) and peer set member. If it was not a member of a peer set, it should simply advertise its presence to System management and wait for peering requests or for DR requests to come in.
[0428] In case 2, when the node is rebooting after an orderly shutdown, it should have stored this piece of information and the time of the shutdown. Thus on the reboot it should have all the information it needs, including which peer sets, if any, the node was a member of.
[0429] If the node was a member of a peer set, it should try and rebuild the peer set or should try to rejoin it. In normal conditions this should possible and everything should be pretty smooth. Note however that, in case the entire infrastructure is restarting, there are some critical issues to be managed. For example, rebuilding a peer set requires the permission of a peer set that is the supervisor of the peer set being rebuilt and the latter may not be available yet. Therefore, the node should be aware of the situation and should be periodically polling its supervisor until the latter is able to grant the permission or until another member of the set being reassembled gets in touch with the node and invites it to join the peer set. As before, if the node was not a member of a peer set, it should only make itself known to System Management as a potential DR server and peer set member.
[0430] In case 3, there are two possible subcases. However, in both cases, an operator must explicitly request a standalone node to become part of the infrastructure. This could be done through a GUI interface that would identify server nodes (this means: "server nodes that are running MaxiFS software") that are accessible in the network and do not belong to a MaxiFS infrastructure yet and would show them in a standalone pool. The operator should be able to select one or more of such nodes and request them to join an existing MaxiFS infrastructure.
[0431] If the node never belonged to a MaxiFS infrastructure, it should just make itself known to system management, update the version of software it is running from the infrastructure code repository, if needed, and make itself available to System Management as a potential DR server and peer set member. In case the node never belonged to the MaxiFS infrastructure it is going to join, yet was a member of another infrastructure, before falling back into the previous subcase, an explicit acknowledgement to do so should be provided by a system administrator. In other words, the migration of a node from a MaxiFS infrastructure to another one should only be allowed by explicit operator request.
3.6.3 Initial Setup of a MaxiFS Client
[0432] The other part of the initialization of a MaxiFS infrastructure is the initialization of clients. To obtain this, the following steps should be followed:
[0433] 1. First of all, the MaxiFS infrastructure a client is going to join should be up and running
[0434] 2. The system administrator should then be able to use the MaxiFS node administration GUI and point to the client node it wants to make part of the infrastructure. It would then upload a software package to such client.
[0435] 3. The setup function of the package would then be executed on the client and would be given the ID of the MaxiFS infrastructure to be used. This would allow a number of things, including the mount point(s) for exported MaxiFS directories, to be configured.
[0436] 4. At this point the client should be able to take the MaxiFS client loadable module, to install it, and load it. This might involve the reboot of the client.
[0437] 5. Finally, the client should be able to mount the exported directories of interest and to start operations.
4 Details on the Implementation of File Operations
[0438] This section of the document provides more details on the file operations performed on the basis of client requests.
4.1 Details on Non-Destructive Operations
4.1.1 File Lookup, Stat, Open, Read and Write Operations
[0439] File lookup operations are not directly invoked by applications. In general applications either operate on a file descriptor returned by a successful open call, or perform pathname-based system calls. Traditional network file system designs rely on a lookup operation that is used to translate a pathname into some kind of an opaque handle. Most such file systems need to convert a pathname one component at the time, i.e., translating step-wise the entire pathnames into the handle that identifies the leaf of the pathname. Generally, each such translation requires a network roundtrip between client and server.
[0440] In order to make MaxiFS very efficient and to avoid unwanted network round trips, the resolution of a relative pathname (The expression "relative pathname" is used to emphasize that it is not an absolute pathname that needs to be looked up, but that the lookup operation only needs to be performed for the portion of a pathname that refers to file system objects in the MaxiFS namespace, i.e., below a MaxiFS "mount point") is performed a single network interaction. This is at the core of the hashed approach to pathname resolution.
[0441] This is possible according to the scheme described in "The Structure of the MaxiFS Name Space" because the MaxiFS name space is self-contained and because MaxiFS operates on homogeneous servers, in terms of hardware and software, MaxiFS can make stronger assumptions than those other types of distributed file systems can make. For example, it can assume that the volumes each server exports do not contain mount points for other file systems and that the file system type in use does not change across directory boundaries. The result of a lookup operation is that, in case of success, the requesting client is given a handle to the file system object of interest that can be subsequently used to access the file. The client also receives a list of the peer sets where the corresponding data file resides.
[0442] However, the internal behavior of MaxiFS is different from what the application patterns might suggest. MaxiFS implements some file system operations by first retrieving a file handle and then operating on the handle via other primitives, or it directly requests pathname-based operations to be performed by servers. From the point of view of MaxiFS, the functionality needed to open a file is similar to what is needed to gather file system metadata with regard to a file (this is generally done via the stat( ) family of system calls). This is so because a MaxiFS client needs to fetch the file system metadata for the file of interest at open time, just as it does for stat. So, a single type of request performs both activities. The only difference is that open requires that a reference to a file be made available to the client so that subsequent read or write calls may operate on that file, whereas stat does not.
[0443] In case the request is performed in order to open a file, a stateful session between a client and a peer set is established. This session has a time-out associated with it and effectively behaves as a lease. The peer set that "owns" the directory where the file metadata resides opens the metadata file for the file of interest and returns a handle that characterizes the session. The handle is valid until the client relinquishes it by closing the file. However, it is possible that a client may crash after opening a file. In this case, after a suitable time-out, the peer set pings the client to check whether the latter is still alive. If it is no longer alive, it closes the handle. The client also receives a list of up to four peer sets that contain copies of the data file that is associated to the metadata file. Then the client is allowed to use the handle on any of the peer sets that have a copy of the file available. The handle is sufficient to let the server access the data file, if available. The client may also decide to stripe the reads from multiple peer sets in order to increase the available bandwidth, as needed. It can also make use of the data file redundancy to continue reading from a different peer set in case the server from which it was originally reading the data file becomes overloaded or crashes. An open in read-only mode clearly identifies a non-destructive operation. Should the client go away or crash, the peer set can simply reclaim the file handle. When a file is opened in write-only or read-write mode, MaxiFS introduces some restrictions. The lookup process for the file is still identical to the one performed for an open in read-only mode. However, the client is granted the access in write only if no other client is accessing the same file in write mode. This effectively enforces a form of locking such that changes to a file can only be performed via serialized open-(read)-write-close sessions. The file being modified is effectively a private copy only the writer sees. This allows other read requests to be still satisfied by the current file copies. Only when the session terminates, the modified file replaces the original one. However, clients that had the older file open will continue to access the same file until they close the file. This differs from the semantics of typical file systems. Nevertheless, it is fully acceptable in the market segment MaxiFS targets where the likelihood of multiple processes writing to the same file is extremely remote. MaxiFS also supports another mode of operation that is very useful especially in the handling of log files, where there can be multiple readers and multiple writers, yet data is only appended to the end of the file. This behavior is different from that of the previous case because it is necessary that the file be shared among readers and append-mode writers.
[0444] In order to make use of such a behavior, opens in read-only mode are always allowed. However, if a process opens a file in append mode (Using the POSIX open flag O_APPEND), then no other process is allowed to open the file in write mode, unless it also sets the append mode flag. Conversely, if a file is already opened in write mode, it cannot be opened in append mode.
[0445] In any case, the clients (in this context what is meant by "client" is not the physical machine that is requesting the open, but the actual process on any machine requesting the file to be opened) that open a file in append mode have the guarantee that each individual write up to a system-defined length (the maximum length of an append mode write is anticipated to be 1 Mbyte) will be atomically appended to the file. This means that parts of this write will not be interleaved with those coming from other clients and that such append-mode writes will be serialized, although the order of serialization is not predefined. In any case, when a file open is open in append mode and it has mirrors, all the mirrors are guaranteed to be identical, i.e., the order in which the individual records appended appear in the file is always identical. Files are not intrinsically usable in append mode or write mode. Any file can be opened in write mode or append mode. However, if it is open in append mode, nobody can open it in write mode and if it is open in write mode, it cannot be opened in append mode. Unlike files open in write mode, each append mode writer appends its records to the same physical file.
4.1.2 File Close Operations
[0446] Close operations have minimal semantics for files open in read-only or in append mode. Basically, the close goes to the peer set that "owns" the directory where the file resides and the latter makes the associated handle invalid. However, in the case of files open in write or read-write mode, the close operation has also the effect of increasing the generation number of the file and replacing the previous generations with the new one. In any case, the client closing a file has no need to perform a close of the data file, since the close sent to the owner peer set will take care of the metadata file and this is all that is needed. The server that was serving the data file will perform an automatic close of the data file.
4.1.3 Write-Back Mode, Write-Through Mode and Fsync
[0447] A standard POSIX flag for the open call (O_SYNC) allows clients to choose to perform writes in write-through mode, rather than in the default write-back mode. Write-through mode allows applications to have better control over what is really on disk in that the client receives control back only after the data written out is committed to disk. The negative aspect of this is that the client perceives a write latency that is much higher than in write-back mode. Nevertheless, for specialized applications that need to implement checkpointing and similar mechanisms, this is highly desirable. POSIX also supports a file system primitive called fsync( ). This is useful for files that normally operate in write-back mode. Whenever the latter primitive is invoked, passing the file descriptor of the open file of interest as an argument, the caller is blocked until the system acknowledges that all the file writes buffered in the system have been committed to disk. Besides write-back mode, MaxiFS also implements write-through mode and fsync( ) when a file is open for writing (either in regular write mode or in append mode).
4.1.4 File Locking
[0448] MaxiFS supports the implicit locking of entire files, when open for writing. This has been discussed above. Effectively files open also for writing are implicitly opened with the O_EXCL POSIX flag. Explicit file or byte-range locking primitives are not supported in MaxiFS, as they have no use because the only files shared across multiple clients are files open in read-only mode and files open in append mode. The files that are open in append mode provide implicit locking in the sense that the individual writes of clients are serially appended.
4.1.5 Attribute Setting
[0449] There is no special behavior to be associated with the explicit setting of file attributes, file ownership, access bits, etc. etc.
4.1.6 File Extension and Truncation
[0450] File extension and truncation are fundamental operations that need to implement the appropriate semantics. It is very important to always satisfy the requirement that garbage data should never be returned to the user. This means that when a file is extended, first the additional blocks for the file should be allocated (generally using blocks that have been zeroed) and then the length of the file should be updated accordingly. The reverse is true for truncation: first the length of a file should be reduced and then the blocks of the data file(s) should be released. Since these operations alter a file, they implicitly operate on a private copy of a file. At the end of such modifications, on close, the updated file replaces the original version and increments the generation number.
4.1.7 File Renames
[0451] File renames are in principle trivial. Unlike directory renames (see below), they entail no name rehashing or file relocation and are completely local to the file system of the peer set that owns the parent directory. As for all pathname-related operations, the only complication is in the fact that the primary member of the peer set must coordinate the update across the peer set, to prevent discrepancies among the members.
4.1.8 Directory Creation and Deletion
[0452] The creation and deletion of directories has fairly straightforward semantics. However, some caveats apply, especially when the namespace is distributed according to the hashing scheme because in this case these operations always span two peer sets.
[0453] Such operations are coordinated by the primary member of the peer set across all members because any inconsistency, even temporary, might result in incorrect application behavior.
[0454] The process of creating a directory affects both the parent directory (and the peer set where it resides) and the MDR where the directory would be stored. The primary member of the peer set that owns the directory to be created is in charge of the coordination of the peer set that owns the new directory's parent. Should the request fail, the system should implement the appropriate semantics, by returning an error to the client. In case the system detects any inconsistency, it should try and repair it right away.
[0455] In case all the checks succeed, the operation would occur in two steps: first a reference to the new directory would have to be created within the parent directory and then the directory should be created within the target MDR. Because of the fact that in the creation phase the checks are performed in the same order, it would not be possible to have collisions between requests, even though the operation spans two peer sets.
[0456] In case of the deletion of a directory, the order of the checks should be reversed with respect to the creation, and the target directory must be removed before the reference in the parent directory is deleted.
4.1.9 Hard Link Creation and Deletion
[0457] Hard links are not supported in MaxiFS but could be added if necessary or desirable for a particular implementation.
4.1.10 Symbolic Link Creation and Deletion
[0458] Unlike hard links, depending on the evolution of product requirements, MaxiFS may support symbolic links. In any case, the client platforms that support symbolic links can always create symbolic links to files or directories stored in MaxiFS.
4.1.11 Directory Renames
[0459] Directory renames are in principle complicated because in the general case they involve four objects: the old and new parent directory and the old and new name. There are three classes of directory renames.
[0460] If a directory rename does not change the name of the directory, but simply moves the directory to another area of the file system name space, the directory has to move but only within the same local file system. This entails no other peer sets and can be handled internally to the peer set by invoking the rename primitive of the underlying file system. However, since a portion of the name space changes shape, these changes need to be reflected across all the peer sets that contain that portion of the name space (see above). This can be done in parallel to the rename, for the reasons previously explained (see above).
[0461] If a rename changes the name of a directory so that its new hash value still maps the new name to the same peer set, the operation is once again local to the file system and peer set. It is trivially implemented by using the underlying file system rename. In any case, as in the case of directory creation or deletion a change in the reference from the parent directory is needed and this can be handled in a way that is similar to the one discussed for directory creation and deletion.
[0462] If a rename causes the directory to hash to a different peer set, then the operation is much more complicated, because it entails the coordination across two peer sets. In this case, a coordinator for the rename need be chosen and it would be the peer set that owns the old directory names. As the rename progresses, all the files in the directory need to be physically moved to the new peer set, along with their parent. However, the coordinator must be able to intercept all operations that relate to the directory being moved, to make sure that directories entries are managed consistently (an example of this could be the case in which a request to delete a file is received in a directory being moved and the file itself has already been relocated to the new peer set. If the file were looked up only in the old directory, the delete would fail. Conversely, a client could be capable of creating a directory entry that already exist but has been moved to the new peer set. Clearly all such checks need to be managed atomically and therefore the need for a single reference point (i.e., the rename coordinator) is needed). In any case, it should be kept in mind that even the rename of a large directory in such circumstances should not take an inordinate amount of time because in reality it is not the data file, but only the much smaller metadata files need to be moved and this is far less expensive. As the rename is completed, as for the first case examined above, the coordinator also needs to inform all the peer sets that contain a name space subtree in which the directory renamed is included of the change so that the peer sets may take the change into account and correct the shape of the subtree. As in the first case of directory renames, this need not be completed before the rename returns success, as explained in a preceding section of this document.
[0463] With respect to a traditional rename, greater complexity stems from the need to update the peer sets that know about the directory. Nevertheless, directory renames are not expected to be frequent operations in the target market MaxiFS is addressing. So this is an acceptable cost.
5 Issues in Crash Recovery
[0464] This section briefly explores some general criteria MaxiFS employs in managing node and system failures. The common underlying criteria are the following:
[0465] 1. The system must be as self-healing as possible.
[0466] 2. Each node and each peer set must be as autonomous as possible.
[0467] 3. Decisions must never be centralized within a single entity.
[0468] 4. There must never be a need for a complete consistency check/repair of the entire name space, except for the case of disaster recovery.
[0469] 5. In case of inconsistencies within a peer set, the primary member is the authoritative entity.
5.1 Peer Set Member Resynchronization Revisited
[0470] Whenever a peer set member goes offline, the state of its MDR, DR and small file repository may no longer faithfully reflect that of the other set members. However, such outages are characterized as belonging to different classes:
[0471] 1. Intermittent outages: these are outages that last no more than S seconds and repeat more than N times within a time window W.
[0472] 2. Transient outages: these are outages that occur occasionally and last no more than S seconds.
[0473] 3. Permanent outages: these are outages that occur and take down a node for more than S seconds.
[0474] On the basis of the above classifications, MaxiFS implements the following policies. If a peer set member experiences outages that can be classified as intermittent, the other members of the set expel the faulty member from the set and have another join in. In such cases, it is likely that the responsibility for these outages is that of the network connections or of the node hardware itself. If a peer set experiences a transient outage, then the other members log the operations they carried out during the outage and play them back to the member when its functionality is restored. If a peer set member experiences a permanent outage, that member is removed from the set and replaced.
[0475] This means that operational members of a peer set must log the operations that occur in case one of the members has an outage. The operations to be logged should span no more than S seconds, because above that limit an outage is considered persistent.
[0476] When a peer set member is to be replaced, if it was the primary set member, a new primary must be elected. After which a new member is selected and it receives the color property of the member that left the set. At that point, the MDR of the peer set is replicated from the remaining secondary member to the new member. When the MDR replication is completed (this should take a relatively brief amount of time as it only entails creating directories and copying small metadata files), the files in the DR are replicated. In parallel the small file repository can be replicated, via a volume to volume copy. As an optimization, the replication of the MDR can occur in such a way that whenever a client requests a destructive operation, the new member receives the request and operates on it if the object of the operation is in a portion of the MDR that has been replicated already. Otherwise, the request is ignored and the change will occur when the area of the MDR where the object resides is updated.
5.2 Reinitialization after a Complete System Crash or Reboot
[0477] A catastrophic system crash should never occur. Nevertheless, MaxiFS must be ready to cope with such an unlikely event. This can be treated in a way that is similar to a complete system reboot. MaxiFS implements a federation protocol that is able to reconstruct the configuration of the entire system (including peer set membership) to the last valid state for the system. This occurs gradually with the reconstruction of peer set 0, and then with the reassembly of all the peer sets. In case, a member of a peer set is no longer available, the remaining member will elect a new member.
5.3 MaxiFS Integrity and Checking
[0478] It is always possible that as a consequence of some unexpected event the MDR of one peer set member may become inaccurate. The same is possible for the DR. The MaxiFS implementation is such that as discrepancies are detected at runtime, one of the following alternatives is taken. If the entity that detected the inconsistency has enough redundant information to restore what is missing in a very limited amount of time, it does so right away. But if the information available at the time of the detection is insufficient to restore the integrity, or if this is known to be an expensive operation, in terms of time, the entity that detected the problems marks the file system object as partially inconsistent and queues up a request to repair the object via a queuing mechanism as discussed below. This will trigger a system daemon to intervene to restore the consistency.
5.4 Power Loss and Disk Sector Corruption
[0479] The root file system on any MaxiFS node is essentially immutable, in that the areas that get modified are transient in nature, as in the case of the swap device. The system also forces periodic snapshots of the file system volumes. In case a volume becomes corrupted because of a bad sector in an area where a file system data structure is stored, the volume is recreated with the image of the last valid snapshot. The use of ZFS would make this issue a moot point.
REFERENCES
[0480] [1] McKusick, M., K., Ganger, G. "Soft Updates: A Technique to Eliminate Most Synchronous Writes in the Fast Filesystem", Usenix 99 Proceedings,.- html. [0481] [3] Knuth, D. "The Art of Computer Programming Volume 1: Fundamental Algorithms", 2nd Edition (Reading, Mass.: Addison-Wesley, 1997), pp. 435-455. ISBN 0-201-89683-4. [0482] [6] Dean, J., Ghemawat, S., "MapReduce: Simplified data Processing on Large Clusters", Google, 2004 ().
Iii. Queuing Service for MaxiFS
1 Introduction
[0483] This section describes an exemplary robust queuing service for MaxiFS referred to hereinafter as MaxiQ. MaxiQ is resilient to individual server failures and allows the decoupling of consumers from producers. The need for a queuing facility in MaxiFS stems from the fact that services such as those that asynchronously replicate files and manage the infrastructure must be able to work asynchronously with the components requesting such services. The queuing service must also be robust, so as not to lose records that have been enqueued, even across system crashes, and must be scalable with the infrastructure itself. The queuing facility described here is a real queuing facility, i.e., it should not be confused with a data repository or a data base management system. It is targeted to allowing producers to queue records so that consumers can later dequeue them, to act on them. The terms consumer and producer are used in a loose sense in this document. The producer or the consumer can be any thread or process executing within any server node in the MaxiFS environment that has access to the queuing facility to enqueue or dequeue records to/from it. The following sections highlight the requirements for this facility, a proposed high level semantics and a brief description of a possible implementation.
2 Requirements
[0484] The requirements for MaxiQ are the following:
[0485] 1. The queue is a global data structure accessible from any server node part of MaxiFS, regardless of where the queued records are physically stored.
[0486] 2. Records to be put into the queue facility should be persistently stored until they are explicitly extracted or removed, or until their life span expires, even in the presence of server failures.
[0487] 3. Each record appended to the queue is to be appended to the end of the queue.
[0488] 4. Records are not guaranteed to be extracted from the queue in a FIFO order.
[0489] 5. Records are associated with a specification (a description of what a specification amounts to is provided ahead) that identifies their nature. The extraction of records from the queue is done on the basis of the specification the consumer provides.
[0490] 6. Each record appended to the queue should preserve its identity, i.e., it should always be possible to treat separate records independently and without crossing boundaries between one record and the next.
[0491] 7. The action of appending or removing a record to/from the queue should be atomic, i.e., the addition of partial records, removal of partial records and/or interleaving of portions of separate records must not be possible.
[0492] 8. Atomicity in the addition or removal of individual records to/from the queue should be guaranteed in the presence of multiple producers and multiple consumers, without any need for explicit locking by producers and consumers.
[0493] 9. A consumer should delete a record from the queue if and only if it has been acted upon. Node failures should not allow records queued up to be lost.
[0494] 10. The queue implementation should be highly scalable.
3 Theory of Operation
[0495] Before proposing possible primitives to operate on the queue, it is necessary to give at least a high level picture of how the facility should operate. This is the purpose of this section. The MaxiQ facility should allow any system components to enqueue records, so that whenever a consumer of the record is available it can remove it from the queue and process it. The typical operations to be expected on such a queue facility should then be the following:
[0496] 1. Enqueuing a record.
[0497] 2. Reading a record without removing it from the queue, i.e., copying a record from the queue.
[0498] 3. Retrieving a record and deleting it from the queue.
[0499] A difficulty with this has to do with the fact that in case a consumer thread takes a record out of a queue and then the server where the thread is executing dies or hangs, the record would be effectively lost. Therefore, the facility and its primitives should be structured in such a way that the crash of a node cannot cause the loss of any records in the queue. In addition to this, to achieve the ability to distribute the queue facility across multiple nodes and to achieve scalability, it should be possible to identify subsets of the queue facilities where certain records are kept. The "specification" associated with each enqueued record has this purpose.
4 Primitive Queue Operations
[0500] To operate on the queue in the way just described, appropriate primitive operations must be available. These are loosely modeled on the facilities the Linda kernel [1] makes available. A first attempt to meet the requirements could be that of providing the following primitives:
[0501] mq_put(record)--this primitive enqueues the record passed as an argument into the queue. Note that records do not have to be all of the same size, nor do they have to share some abstract type definition. The invocation of this primitive never blocks the caller.
[0502] mq_read(spec, record)--this primitive reads one record that matches the specification (spec) from the queue, without extracting it.503] mq_take(spec, record)--this primitive reads one record that matches the specification (spec) from the queue and removes it from the queue. As in the previous case,504] The primitives just listed, in theory, allow proper management of the queue records. However, in the case where a consumer uses the mq_take( ) primitive to extract and read one record from the queue and subsequently dies before it is able to post a result of the operation performed, the record is effectively lost. A way to solve this problem is through the following enhancements to the previously described set of primitives:
[0505] Each record in the queue is assigned a unique ID. This ID is automatically assigned by the queue infrastructure and returned on a successful mq_read( ) or mq_take( ) call.
[0506] The mq_take( ) primitive takes one additional mandatory parameter that specifies the time the caller expects is needed to process the record. This time should be in excess of the actual time needed, in order to cope with possible delays. This is effectively a lease. If the lease expires without a renewal, the record becomes visible again to every other consumer.
[0507] An additional primitive (mq_reset(ID, lease)) operates on the record in the queue whose ID is ID and has different behaviors depending on the value of lease. There are three cases:
[0508] 1. If lease is set to the constant MQ_TMINFINITE, the "taker" informs the queuing system that the record whose ID is specified was fully processed. So, it can be deleted.
[0509] 2. If lease is set to the value 0, the "taker" informs the queuing system that the record whose ID is specified was not processed and that the caller has no more need for it, so the record should become visible to everybody again.
[0510] 3. If lease is positive, the "taker" informs the queuing system that it needs to extend the lease for the record whose ID is specified. So the record remains invisible for the time of the requested extension.
[0511] With the above changes, the possible loss of a consumer would be avoided, as follows:
[0512] 1. The consumer would invoke mq_take( ) to extract a record from the queue, specifying the time needed to process the record. This time would be converted into a lease by the system.
[0513] 2. At this point the consumer would have access to the record that would be leased and therefore only logically deleted from the queue. This way no other consumer would be able to take it or read it, until its lease expires.
[0514] 3. If the lease expires, the record is resurrected and becomes available again for any other consumer. This would be the case if a previous consumer died or hung as it was processing the record.
[0515] 4. In the case where the consumer decides it cannot or does not want to complete the processing, it should invoke mq_reset(ID, 0). This would make the record available in the queue once again, for processing by other consumers.
[0516] 5. In the case where the consumer completes its processing, it should indicate the completion of its processing by invoking mq_reset(ID, MQ_TMINFINITE). This would permanently remove the processed record from the queue.
[0517] 6. In the case where the consumer needs additional time to process the record, before its lease expires, it would invoke mq_reset(ID, extension), so that the lease would be extended for an additional time equal to extension and the record the lease relates to would continue to remain hidden for the requested amount of time.
[0518] What remains to be addressed is what the specifications of enqueued records should be like. A specification is represented using a name, expressed as a variable length, null-terminated string made of individual substrings, each of which is separated by slashes (`/`) from the next. Each such substring can only contain any 8-bit character (with the exception of `/` and of the null character that is used to terminate C language strings) and cannot be longer than 255 characters.
[0519] A specification identifies a "hive": the portion of the queuing system repository that contains homogeneous records (this does not imply that all the records within a hive have the same size) that can be described by the specification itself. Specifications obey some rules:
[0520] 1. They are names of hives, not templates and they live in the same name space.
[0521] 2. A specification cannot exceed 1024 characters in length.
[0522] 3. A specification cannot be incomplete and the prefix of a hive's specification cannot be another usable specification. For example, if "a/b/c" specifies a hive, "a/b" cannot specify a hive, whereas "a/b/d" and "/a/b/e/f" can.
[0523] 4. No form of pattern matching or use of wild cards is supported in a specification.
[0524] 5. A specification is to be taken literally, meaning that the case of any alphabetic character is significant and that hive names can differ just in the case of the specification. Moreover, blanks embedded in a specification are significant and are not stripped by MaxiQ.
[0525] 6. Optionally, the hive specification can be of the form: [0526] N:a/b/c . . .
[0527] where the N prefix that precedes the `:` character is a decimal string that represents the ID of a peer set and tells MaxiQ that the hive stores information of importance to peer set N. When this is the case, the hive itself will not be stored on peer set "N" (see below). The "N:" prefix is an integral part of the hive name. The only difference with respect to names that do not include such a prefix is that the MaxiQ system associates semantics to the "N:" prefix. For example: [0528] 729: marketing/inquiries/log
[0529] specifies that the hive named "729: marketing/inquiries/log" (note the trailing blank after the colon) is of relevance to peer set 729. One or more such blanks are effectively part of the name. Thus: "729: marketing/inquiries/log" is a different hive from: "729:marketing/inquiries/log". However non-decimal strings or blank characters preceding the colon would not adhere to the previous syntax. So: "729:marketing/inquiries/log" would specify a hive name, but the blank character before the colon prevents this hive to be considered of relevance for peer set 729.
[0530] One additional issue to be addressed relates to the fact that in the case where a consumer just wants to go through records in the queue, since an mq_read( ) would not cause any changes to the queue, subsequent reads would return the same record over and over, until a mq_take( ) operation is performed. To be able to enumerate the queue records, a small change to the mq_read( ) call is necessary. This consists of adding one argument to mq_read( ) that is the ID of the queue record that should be skipped. Effectively, by setting the ID to MQ_NULLID, the primitive would read the first record available. By setting it to the ID of the last record read, it would return the next record. If the record with the specified ID does not exist any longer within the queue, the behavior would be identical to that of invoking the primitive, by setting the ID argument to 0. Finally, two more primitives are needed:
[0531] 1. The mq_create(spec) primitive takes a hive specification as an argument and creates such a hive, if it does not exist.
[0532] 2. The mq_delete(spec) primitive takes a hive specification as an argument and deletes such a hive, if it exists.
5 Design
[0533] MaxiQ is implemented as a facility available to MaxiFS services. The logical model of this is that the basic distributed file system functionality would be available as an infrastructure on which to implement MaxiQ, however, MaxiQ would be available to the higher level distributed file system services that take care of replication, reconstruction of redundancy and so on. Therefore, the MaxiQ functionality can be easily superimposed to the file system name space MaxiFS supports. Thus a hive could be mapped to a file. This would clearly offer MaxiQ the redundancy and scalability MaxiFS offers. The MaxiFS name space is implemented through a hashing technique that distributes directories across multiple servers so that a sufficiently homogeneous distribution of the name space across all the nodes allows for the distribution of the workload across nodes (scalability) and for keeping redundant repositories for data (availability). Therefore, the availability and scalability attributes of MaxiFS can be easily inherited by MaxiQ.
[0534] The design of MaxiFS already supports the notion of an append-only write mode for files (without need for explicit synchronization). This is the basic facility needed to implement the mq_put( ) primitive. The additional functionality to be supported is the ability to retrieve records from a file (conditionally deleting them, when necessary through the lease and life span mechanisms described earlier).
[0535] The design of MaxiQ thus builds on the strengths of MaxiFS and supports the replication and exception management needs of MaxiFS. This may appear to be somewhat conflictual in the sense that MaxiQ uses MaxiFS while MaxiFS uses MaxiQ. However, the reality is that MaxiQ uses the MaxiFS data path components, while the MaxiFS management uses MaxiQ. So a real problem would only occur if the MaxiFS Management System were to use a certain hive on a peer set to which the hive information pertains. The solution is that of identifying along with the hive also the peer set a hive relates to. This peer set ID becomes part of the hive specification, as explained above. This way the system will insure that the hive will be stored within a peer set that has no relationship to the hive content. The individual MaxiQ hives are implemented as files in a special branch of the global MaxiFS name space. This branch is invisible through the file system name space and can only be accessed indirectly via the MaxiQ primitives. Such files are 3-way redundant (one copy on each member of the peer set where they reside) and access to them is in reading or in writing. The latter however only occurs in append mode. In other words, such hives only change because of new records appended at the end. Otherwise, their content is unchanged.
[0536] One member of the peer set at a time manages the hive. Clients send their requests to the hive manager via a specialized protocol that is used by the MaxiQ primitives. The peer set member that runs the manager is the primary member of the peer set. It provides a thread pool used to carry out user requests. These are appropriately synchronized so as to guarantee consistency of the hive. In case the peer set member that is managing a hive goes offline, the member of the set that takes the role of the new primary also takes over the management of the hive, to guarantee the continued availability of the hive. The hives themselves are structured as balanced trees that keep reference to all the records and allow prompt access to each of them. Index records contain pointers in memory for subordinate index pages, along with their file offset on disk. They also contain references for data records in the form of file offsets. Each data record is stored on disk as it is received and its offset is recorded within the balanced tree. The tree allows the deletion of records from anywhere in the hive and the addition of new records to the end of the hive.
[0537] Attributes of individual data records, such as their ID, their lease time and their size are stored with the index pages that reference the data records themselves. This allows changes to the lease time of a record (These are caused by the invocation of primitives such as mq_take( ) and mq_reset( ) to be performed by only updating the referencing index page. The scheme relies on a deleting existing data records in purely logical fashion. In other words, a record is deleted by removing the reference to it from the tree page that points to it, rather than through a physical deletion of the record. As an index pages is modified, it is appended to the end of the file that is the backing store for the hive. This causes the file offset for the last incarnation of the modified index page to be updated in the parent index page, which then is appended to the file and so on all the way to the root page of the tree. When the new root is appended, the hive file contains the entire updated tree. When the hive manager opens the hive file, it reads in memory the entire index hierarchy starting from the last incarnation of the root page at the end of the file and working its way through the rest. In case a tree update was incomplete (in the sense that the root or an intermediate page is missing), the hive manager automatically recovers the previous version of the tree. This is not critical because the MaxiQ primitives that modify the hive file update it synchronously, before returning control to the caller. Therefore, the only items that can be lost are those for which the execution of a primitive did not complete normally. The caller would be aware of this and would be unable to assume that the update reached stable storage. The fact that hive files are redundant makes the probability of an unrecoverable bad sector read very small. Over time hive files may end up containing a fair amount of stale records and stale index pages, along with current ones. When the ratio of active records to stale records passes a given threshold, the hive manager restructures the hive, by creating a new file that is purged of the stale data.
6 Conclusions
[0538] MaxiQ implements a robust facility that can be used to store information for off-line processing. It supports the following functionality:
[0539] 1. Ability to append records within a replicated hive that survives the failure of up to two members of the peer set that implements the hive.
[0540] 2. Transparent failover among peer set managers to properly handle the failover of the service.
[0541] 3. Ability to traverse the entire list of records.
[0542] 4. Lease-based extraction of records from the head of the hive for a predefined amount of time. This supports the survival of the record if the leaser crashes.
[0543] As such, MaxiQ is expected to be the foundation for many System management services in MaxiFS. The Appendix details exemplary C language syntax of the primitives available to clients of the MaxiQ facility.
APPENDIX
Specifications of the MaxiQ Primitives
[0544] This section of the document provides details on the APIs the MaxiQ facility supports in the form of a C language library in an exemplary embodiment of the invention.
[0545] The C language header file that contains the constants, type definitions and function prototypes for MaxiQ is mq.h and needs to be included by the C programs that use the facility. At link time these applications need to link in the MaxiQ library.
[0546] Constants
[0547] MQ_TMINFINITE This constant is used to specify a lease of infinite length for mq_reset( ) (effectively equivalent to permanently removing a record leased via mq_take( ) from the queue) and to set an infinite lifespan for a record via mq_put( ).
[0548] MQ_MAXTMO This constant specifies the maximum length of a time-out expressed in seconds.
[0549] MQ_MAXBUF This constant specifies the maximum number of bytes for an individual data record appended to a hive.
[0550] MQ_NULLID This is the null value for a variable of type rid_t (see below).
[0551] Types
[0552] A number of data structures are defined here. They are used with the primitives in the MaxiQ library.
[0553] uint8_t Unsigned byte.
[0554] uint64_t Unsigned 64-bit long.
[0555] rid_t This type is used to define a variable that is to contain the unique identifier for a queue item. Note that IDs are unique only across records associated with a given specification.
[0556] rdmode_t This enumeration type is used in mq_read( ) to choose whether the mode of operation is that of retrieving a record whose ID matches the ID in input to the primitive or whether, the primitive should retrieve the first record after the one whose ID is specified. The values of the type are: RDM_EXACT (to be used when an exact ID match is being sought) and RDM_NEXT (to be used when the record that follows the one whose ID is provided is expected).
[0557] mqr_t This type is used to define a variable length structure that contains a pointer to a component of a record specification and one to its actual value once it is retrieved via mq_read( ) or mq_take( ). The data structure contains the following fields: [0558] rid_t mqr_id; [0559] int mqr_lease; [0560] int mqr_bufsize; [0561] int mqr_size;
[0562] uint8_t mqr_buffer[ ];
[0563] The field mqr_id is always set to MQ_NULLID, by the caller of any primitive that takes a pointer to an mqr_t structure in input. It is set by the called primitive.
[0564] The field mqr_lease is the duration of the lease for the record; it can be set to MQ_TMINFINITE, or it can be a positive number of seconds.
[0565] The field mqr_bufsize specifies the size in bytes for the mqr_buffer[ ] array and is always set by the caller.
[0566] The field mqr_size specifies the number of bytes for the mqr_buffer[ ] array that are in use. For a mq_put( ) call, the caller sets both mqr_bufsize and mqr_size to the bytes in use in the buffer. For a mq_read( ) or mq_take( ) call, the caller sets mqr_bufsize to the size of the buffer and mqr_size to 0. The primitive sets mqr_size to the number of bytes actually in use in the buffer.
[0567] The field mqr_buffer[ ] is a variable length buffer in which the actual record is stored. Its length cannot exceed MQ_MAXBUF bytes.
[0568] Utilities
[0569] The MaxiQ infrastructure makes available a utility macro that can be used to allocate a variable length mqr_t structure capable of storing `b` bytes: [0570] MQR_ALLOC(p, b)
[0571] The macro takes a first argument (p) that is of type mqr_t* and a second argument (b) that is a length in bytes. The first argument is the name of a pointer variable to a new record. the second argument is the size in bytes of the buffer for the record to be allocated. If successful, the macro assigns a pointer to the newly allocated structure to p. Otherwise, the assigned value is a null pointer. The structure allocated this way can be freed via the standard library routine free( ).
[0572] Return Codes
[0573] The codes returned by the primitives to indicate success or failure are defined here. They are:
[0574] MQ_OK The primitive was successfully executed.
[0575] MQ_INIT MaxiQ not initialized.
[0576] MQ_BADID No such record exists.
[0577] MQ_SIZE The size of the buffer was insufficient to retrieve the record.
[0578] MQ_BADSIZE Invalid buffer size of record length.
[0579] MQ_TMO No record found. This can happen when the primitive was invoked specifying a time-out and at the expiration of the time-out no record matching the specification existed.
[0580] MQ_BADREC Invalid or null record pointer.
[0581] MQ_BADSPEC Invalid record specification.
[0582] MQ_BADREQ Invalid or unimplemented request.
[0583] MQ_NOSPEC No such specification exists.
[0584] MQ_BADLEASE Invalid lease value.
[0585] MQ_BADTMO Invalid time-out value.
[0586] MQ_OPEN Hive already open.
[0587] MQ_NOTFOUND Item not found.
[0588] MQ_NOMORE No more items to look at.
[0589] MQ_SYSERROR Internal system error.
[0590] MQ_BADARG Invalid argument.
[0591] MQ_EXISTS The hive already exists.
[0592] MQ_ALLOC Unable to allocate memory.
[0593] MQ_BADIO I/O operation failed.
[0594] MQ_NOHIVE Inexistent hive.
[0595] MQ_NOFLUSH Unable to flush out hive.
[0596] MQ_NODEL Unable to delete hive.
[0597] MQ_ENET Network error.
[0598] MQ_SHUTDOWN System undergoing shutdown.
[0599] MQ_ECONN Connection error.
[0600] MQ_NETDOWN Network access error.
[0601] MQ_EMSG Invalid message received.
[0602] mq_create( )
[0603] Name
[0604] mq_create--create a new hive
[0605] Synopsis
[0606] #include <mq.h>
[0607] int mq_create(const uint8_t *spec);
[0608] Arguments
[0609] spec This argument is the pointer to a string that contains the specification for the hive of interest. The string is not allowed to start with a slash character (`/`).
[0610] Description
[0611] The purpose of this primitive is that of creating a new hive within MaxiQ.
[0612] The only argument to this call (spec) is used to identify the specification for the hive to be created (as described above).
[0613] The new hive will be initially empty, until data records are appended via mq_put( ).
[0614] Return Values
[0615] MQ_OK The primitive was successfully executed.
[0616] MQ_INIT MaxiQ not initialized.
[0617] MQ_NOSPEC Null hive specification.
[0618] MQ_BADARG Hive specification starts with a `/` character.
[0619] MQ_ALLOC Unable to allocate memory.
[0620] MQ_EXISTS The specified hive already exists.
[0621] MQ_SYSERROR Unable to create hive.
[0622] MQ_ENET Network error.
[0623] MQ_SHUTDOWN System undergoing shutdown.
[0624] MQ_ECONN Connection error.
[0625] MQ_NETDOWN Network access error.
[0626] MQ_EMSG Invalid message received.
[0627] mq_delete( )
[0628] Name
[0629] mq_delete--create an existing hive
[0630] Synopsis
[0631] #include <mq.h>
[0632] int mq_delete(const uint8_t*spec);
[0633] Arguments
[0634] spec This argument is the pointer to a string that contains the specification for the hive of interest. The string is not allowed to start with a slash character (`/`).
[0635] Description
[0636] The purpose of this primitive is that of deleting an existing hive from MaxiQ.
[0637] The only argument to this call (spec) is used to identify the specification for the hive to be deleted (as described above). Deletion of a hive implies permanent deletion of the data records it contains.
[0638] Return Values
[0639] MQ_OK The primitive was successfully executed.
[0640] MQ_INIT MaxiQ not initialized.
[0641] MQ_NOSPEC Null hive specification.
[0642] MQ_BADSPEC Invalid hive specification.
[0643] MQ_ALLOC Unable to allocate memory.
[0644] MQ_SYSERROR Unable to delete the hive.
[0645] MQ_ENET Network error.
[0646] MQ_SHUTDOWN System undergoing shutdown.
[0647] MQ_ECONN Connection error.
[0648] MQ_NETDOWN Network access error.
[0649] MQ_EMSG Invalid message received.
[0650] mq_read( )
[0651] Name
[0652] mq_read--read the next available record in the queue that matches the specification
[0653] Synopsis #include <mq.h>
[0654] int mq_read(const uint8_t*spec, rid_t id, rdmode_t rdm, mqr_t*precord, int tmo);
[0655] Arguments
[0656] spec This argument is the pointer to a string that contains the specification for the hive of interest. The string is not allowed to start with a slash character (`/`).
[0657] id This argument specifies the ID of a record previously read. It can also be set to MQ_NULLID.
[0658] rdm This argument specifies whether an exact match of the record ID with the ID provided in id is sought for the record to be read in (in this case, this argument should be set to RDM_EXACT) or whether the record that follows the one whose ID is specified as id should be read in (in this latter case, this argument should be set to RDM_NEXT).
[0659] precord This is a pointer to the data structure that contains the record specification and will be filled with the record content on return.
[0660] tmo This argument specifies the maximum number of seconds the primitive should wait if no record is available, before returning with an error message. The argument can be set to 0, if immediate return is requested when no record matching the specification exists, or to a number of seconds that cannot exceed MQ_MAXTMO, if the call must suspend until one such record becomes available.
[0661] Description
[0662] The purpose of this primitive is that of reading a record from the queue, without removing it.
[0663] The first argument to this call (spec) is used to identify the hive whence the record should be retrieved (as described above).
[0664] The second argument to this call (id) is used to identify a record that has been already processed, so that, depending on the value in the third argument (rdm) the invocation returns the record with the specified ID or the first record following that record. When id is set to MQ_NULLID, the rdm argument should be set to RDM_NEXT and the first available record in the hive is returned. When id is set to a non-null record ID, the rdm argument should be set to RDM_EXACT if the record with the specified ID is to be retrieved, or to RDM_NEXT if the record to be retrieved is the one that follows the one whose ID was specified. When the rdm argument is set to RDM_EXACT and the record with the specified ID no longer exists in the hive, the error MQ_NOTFOUND is returned. This could happen if the record was "taken" (see mq_take( ), while the caller was scanning all the records.
[0665] The fourth argument (precord) points to the data structure into which a record is to be. The members of this structure are used as follows: The caller of the function always sets the field id to MQ_NULLID. The called primitive updates this field to the ID of the record retrieved. The field mqr_lease is the duration of the lease for the record and is always 0 when a record is read in. The field mqr_bufsize is set by the caller to specify the size in bytes for the mqr_buffer[ ] array. The caller also sets mqr_size to 0. The primitive sets mqr_size to the number of bytes actually in use for the record. of the structure precord points to is set to the ID of the record and the field mqr_size is set to the actual length of the record. By checking the return code, the caller can identify the situation, allocate a large enough buffer and reissue the request with the ID of the record that could not be read in, specifying the read mode as RDM_EXACT. The field mqr_buffer[ ] is the buffer into which the actual record is retrieved.
[0666] The fourth argument (tmo) specifies whether the caller should be suspended for tmo seconds in case a record matching the specification is unavailable. This argument can be set to 0, in case immediate return is requested, or to a positive value not exceeding MQ_MAXTMO for calls that should be suspended until either a record meeting the specifications becomes available or the specified time-out expires.
[0667] A typical invocation of this primitive, to retrieve and process all the records associated with a hive is along the lines of the following code fragment:
TABLE-US-00003 rid_t id; mqr_t *pr; /* 1024 is just a randomly chosen size for the buffer */ MQR_ALLOC(pr, 1024); if (!pr) exit(1); id = MQ_NULLID; while ((ret = mq_read("a/b/c", id, RDM_NEXT, pr, 0)) == MQ_OK) { id = pr->mqr_id; processrecord(pr); }
[0668] An invocation like the one above reads all the existing records stored in hive "a/b/c", but leaves them in the hive for other processes. In a case like this, a null time-out is specified in order to go through all the items in the list. Had an infinite time-out been used, the caller would have blocked after the last item in the queue, waiting for another one to be appended. This code snippet does not highlight the fact that the return code should be looked at in more detail because the invocation may have not been successful for other reasons. For example, in case one of the invocations returns the error MQ_NOTFOUND, it means that the item that was previously retrieved is now no longer available and that the loop should be re-executed. This may entail that the application may have to skip the items it already processed.
[0669] Return Values
[0670] MQ_OK The primitive was successfully executed and one record was retrieved.
[0671] MQ_NOHIVE Null hive specification.
[0672] MQ_BADARG Null record buffer pointer. MQ_BADIO Unable to read the record.
[0673] MQ_BADREC Invalid record.
[0674] MQ_SIZE Buffer too small for the record. In this case, the "mqr_size" field of the record buffer contains the actual length of the record that could not be retrieved. However, the data buffer ("mqr_size") is returned empty and should not be accessed.
[0675] MQ_BADSIZE Invalid buffer size.
[0676] MQ_TMO Time-out expired before a suitable record could be retrieved.
[0677] MQ_BADTMO Invalid time-out value.
[0678] MQ_ENET Network error.
[0679] MQ_SHUTDOWN System undergoing shutdown.
[0680] MQ_ECONN Connection error.
[0681] MQ_NETDOWN Network access error.
[0682] MQ_EMSG Invalid message received.
[0683] mq_take( )
[0684] Name
[0685] mq_take--read and remove the next available record that matches the specification, from the queue
[0686] Synopsis
[0687] #include <mq.h>
[0688] int mq_take(const uint8_t*spec, mqr_t *precord, int lease, int tmo);
[0689] Arguments
[0690] spec This argument is the pointer to a string that contains the specification for the hive of interest. The string is not allowed to start with a slash character (`/`).
[0691] precord This is a pointer to the data structure that contains the record specification and will be filled with the record content on return.
[0692] lease This argument specifies the duration of the lease for the record being sought. The lease duration is expressed in seconds. The requested lease time must be a positive value and is not allowed to be set to MQ_TMINFINITE.
[0693] tmo This argument specifies the maximum number of seconds the caller should wait if no record is available, before returning with an error message. The argument can be set to 0, if immediate return is requested for the case when no record matching the specification exists, or to a number of seconds that cannot exceed MQ_MAXTMO, if the call must suspend until one such record becomes available.
[0694] Description
[0695] The purpose of this primitive is that of extracting a record from a specified hive in the queue.
[0696] The first argument to this call (spec) is used to identify the hive whence the record should be retrieved (as described above).
[0697] The second argument (precord) points to the data structure that will store the record being. In this case, the call operates like an mq_read( ) operation in that the record is not removed from the queue. The members of the mqr_t structure are used as follows: The caller always sets the field id to MQ_NULLID, before invoking this function. The called primitive updates this field to the ID of the record retrieved. The field mqr_lease is the duration of the lease for the record in seconds; it is not allowed to be set to a non-positive value, nor to MQ_TMINFINITE. The field mqr_bufsize is set by the caller to specify the size in bytes for the mqr_buffer[ ] array. The caller also sets mqr_size to 0. The primitive sets mqr_size to the number of bytes actually used to copy the data record into the buffer. is set to the ID of the record and the field mqr_size is set to the actual length of the record. By checking the return code, the caller can identify the situation, allocate a large enough buffer and reissue a request (which may not yield the same record, if, in the meanwhile, the latter had been extracted by another client). The field mqr_buffer[ ] is the variable-length buffer into which the actual record is retrieved. The third argument (lease) specifies the number of seconds the caller expects to use to process the record. For the specified time duration the record will be unavailable in the queue. The caller has then the following options: [0698] If it lets the lease expire (this could be due to the death of the thread that performed the call), the record reappears in the queue. [0699] It may invoke mq_reset(ID, MQ_TMINFINITE) to permanently erase the record from the queue. [0700] It may invoke mq_reset(ID, 0) to make the record available in the queue, before the lease obtained when mq_take( ) was invoked expires.
[0701] The fourth argument (tmo) specifies whether the caller should be suspended for tmo seconds in case a record matching the specification is unavailable. This argument can be set to 0, in case immediate return is requested, or to MQ_TMINFINITE for calls that should be suspended until either a record meeting the specifications becomes available or the specified time-out expires.
[0702] Return Values
[0703] MQ_OK The primitive was successfully executed and one record was retrieved.
[0704] MQ_NOHIVE Null hive specification.
[0705] MQ_BADARG Null record buffer pointer.
[0706] MQ_BADLEASE Bad lease value.
[0707] MQ_NOMORE No more records available.
[0708] MQ_BADIO Unable to read the record.
[0709] MQ_BADREC Invalid record.
[0710] MQ_SIZE Buffer too small for the record.
[0711] MQ_BADSIZE Invalid buffer size.
[0712] MQ_TMO Time-out expired before a suitable record could be retrieved.
[0713] MQ_ENET Network error.
[0714] MQ_SHUTDOWN System undergoing shutdown.
[0715] MQ_ECONN Connection error.
[0716] MQ_NETDOWN Network access error.
[0717] MQ_EMSG Invalid message received.
[0718] mq_put( )
[0719] Name
[0720] mq_put--append a record to the end of the queue
[0721] Synopsis
[0722] #include <mq.h>
[0723] int mq_put(const uint8_t*spec, mqr_t *precord, int wait);
[0724] Arguments
[0725] spec This argument is the pointer to a string that contains the specification for the hive of interest. The string is not allowed to start with a slash character (`/`).
[0726] precord This is a pointer to the data structure that contains the record specification and will be filled with the record content on return.
[0727] wait This argument is set to 0 if the caller does not want to wait until the new record is on stable storage before receiving control back from the call.
[0728] Description
[0729] The purpose of this primitive is that of appending a record to the end of the queue within the specified hive.
[0730] The first argument to this call (spec) is used to identify the hive to which the record should be appended (as described above).
[0731] The second argument (precord) points to the data structure containing the record to be appended. Such a data structure can be allocated via the MQR_ALLOC( ) utility. The members of the mqr_t structure precord points to are used as follows: The caller always sets the field id to MQ_NULLID, before invoking this function. After the successful execution of the call, the primitive will set it to the ID assigned by the system. The field mqr_lease is the duration of the lease for the record in seconds, it should be set to 0 and is ignored by this primitive. The field mqr_bufsize is set by the caller to specify the size in bytes for the mqr_buffer[ ] array. The caller also sets mqr_size equal to mqr_bufsize.
[0732] The field mqr_buffer[ ] is the buffer into which the caller stores the record to be appended. If the last argument (sync) is set to 0, i.e., it is a null argument, this call is non-suspensive for the caller and the caller gets control back as soon as the record is cached. Otherwise, the caller is given back control only when the record is on stable storage.
[0733] Return Values
[0734] MQ_OK The primitive was successfully executed and one record was appended to the queue.
[0735] MQ_NOHIVE Null hive specification.
[0736] MQ_BADARG Null record pointer or invalid record size.
[0737] MQ_BADSIZE Invalid record length.
[0738] MQ_BADIO Unable to write the record.
[0739] MQ_ENET Network error.
[0740] MQ_SHUTDOWN System undergoing shutdown.
[0741] MQ_ECONN Connection error.
[0742] MQ_NETDOWN Network access error.
[0743] MQ_EMSG Invalid message received.
[0744] mq_reset( )
[0745] Name
[0746] mq_reset--reset the lease for a specified record in the queue
[0747] Synopsis
[0748] #include <mq.h>
[0749] int mq_reset(const uint8_t*spec, rid_t id, int lease);
[0750] Arguments
[0751] spec This argument is the pointer to a string that contains the specification for the hive of interest. The string is not allowed to start with a slash character (`/`).
[0752] id This argument specifies the ID of an existing record previously "taken".
[0753] lease This argument specifies the number of seconds after which the record lease expires, with respect to the time when this call is performed. Admissible values are 0 (the record becomes visible instantaneously), a positive value (the lease will expire in that many seconds from the time of this call) or MQ_TMINFINITE (the record is permanently removed from the queue).
[0754] Description
[0755] The purpose of this primitive is that of resetting either the lease time or the lifespan of an existing record.
[0756] The first argument to this call (spec) is used to identify the hive to which the record should be appended (as described above). The second argument to this call (id) is used to identify the record that will be affected by the execution of the primitive. The third argument (lease) is the new number of seconds the record lease should last from the time this primitive was last invoked. Admissible values are 0, a positive value or MQ_TMINFINITE. The following cases occur: [0757] If the new value of lease is 0, the record affected will become immediately visible in the queue. [0758] If the new value is a positive value, the record will remain invisible for the specified additional time interval from the time this primitive is invoked. [0759] If the new value is MQ_TMINFINITE, the record is permanently erased from the queue.
[0760] Return Values
[0761] MQ_OK The primitive was successfully executed.
[0762] MQ_NOHIVE Null hive specification.
[0763] MQ_BADID Invalid record ID.
[0764] MQ_BADLEASE Invalid lease value.
[0765] MQ_NOTFOUND Record not found.
[0766] MQ_BADIO Unable to write out modified record.
[0767] MQ_ENET Network error.
[0768] MQ_SHUTDOWN System undergoing shutdown.
[0769] MQ_ECONN Connection error.
[0770] MQ_NETDOWN Network access error.
[0771] MQ_EMSG Invalid message received.
REFERENCES
[0772] [1] Carriero, N., Gelertner, "Linda in Context", Communications of the ACM, Vol. 82, No. 4, April 1989, pages 444-458.
IV. Exemplary Membership Protocols
1 Introduction
[0773] MaxiFS infrastructure consisted of an aggregation of storage nodes. There are two logical memberships of the storage nodes in the infrastructure. One is the Management Server Federation (MSF). The MSF is to facilitate system management activities in the MaxiFS infrastructure. The other logical membership is the peer set. A peer set is used to facilitate file system related operations.
This document describes the membership protocol used to construct the MSF and peer sets. We also present a simulation framework serving as a development and validation framework for the protocol.
2 Persisted States
[0774] A storage node exercises the membership protocol for MSF and peer set joining During the process, the node persists milestone states for crash recovery or normal restart. In additional to the states, the following information is also persisted: [0775] The MSF group view. There can be 0 or 1 view. [0776] 0 or more peer set views.
2.1 The MSF Group View
[0777] The MSF group view consists of the following: [0778] The ID of the MaxiFS infrastructure. [0779] The version of the MSF group view last known to the node. [0780] The timestamp of view (used to make a heuristic decision, as discussed below). [0781] The MSF group vector containing the ID of the nodes in the view. [0782] The IP address of the root of the MSF.
2.2 The Peer Set View
[0783] The Peer Set view consists of the following: [0784] The ID of the peer set. [0785] The version of the peer set view. [0786] The timestamp of the view. [0787] The ID of the nodes belonging to the peer set. [0788] The IP address of the primary of the peer set.
3 Node Membership State Transition
[0789] When a node joins the MaxiFS infrastructure, it always joins the MSF before the attempt to join a peer set is made. Therefore, as shown in FIG. 25, the membership state of a node transits as follows: [0790] INIT: The initialization state, no membership is obtained. [0791] MSF-JOINED: The node has joined the MSF. [0792] PEER_SET-JOINED: The node has joined one or more peer sets.
[0793] The membership protocol, therefore, consists of a protocol for MSF and a protocol for peer set formation. Exemplary protocols are described below
4 MSF Membership Protocol
[0794] The MSF membership protocol consists of the following sub-protocols: [0795] Discovery/Join: The protocol for a node to discover and join the MSF. [0796] Merge: The protocol that allows a MSF root to synchronize the group view to the rest of the members and allow several MSF trees to merge after a network partition. [0797] Failure Detection (FD): The protocol to ensure the integrity of the MSF group view.
[0798] FIG. 26 shows the state transition of a node during MSF joining Details of the sub-protocols are discussed in the following sections.
4.1.1.1 Discovery/Join Protocol
[0799] FIG. 27 shows the state transition of the discovery/join protocol.
[0800] When a node initializes, it remains in the "thawing" state for a time ranging from tmin to tmax. Setting the node in a dormant state initially prevents a "packet storm" condition when the entire storage infrastructure is restarting (maybe after a power failure). The time it takes for it to time out from the state is a function of the ID of the node. The ID is a persistent identification for the node (the ID could be, for example, a number based on the MAC address of the first network interface of the node). The fact that the time is a deterministic function of the node's ID helps in resolving contention for the MSF root during this state and helps in achieving fast convergence.
[0801] The node enters the "join-req" state after it wakes up from the "thawing" state if there is any persisted MSF view stored. It sends request to the root of the MSF. If the request is granted it is considered a member of the MSF and starts the FD sub-protocol. If there is no previously persisted MSF view or the node times out from the "join-req" state, it enters the discovery state and starts IP multicasting discovery packets (e.g., using TTL, local link multicast addresses 224.0.0.0/25, or limited scoped addresses 239.0.0.0-239.255.255.255 to confine multicast packet within the MaxiFS system).
[0802] In the discovery state, the node listens for incoming and determines a candidate root to join. The information of a candidate root can come in one of the two forms: 1) suggestion packets sent by other nodes addressed to the node or 2) group synchronization packets sent by the root on the group multicast address.
[0803] If the node reaches timeout in the discovery state, the node assumes the root responsibility and starts the merge protocol.
4.1.1.2 Merge Protocol
[0804] When a node assumes the responsibility of the root, it enters the merge state and starts the merge protocol. It periodically performs limit scoped IP multicast of the group synchronization packet that contains the following: [0805] The MaxiFS ID (an ID assigned to the entire infrastructure upon creation time) [0806] The version of the view. [0807] The time elapse in milliseconds a receiver should expect for the next synchronization packet. [0808] A list of the node IDs in the MSF. [0809] The hash table indicating peer set allocation to facilitate namespace resolution.
[0810] The version of the view should be embedded in all calls involving intra-node communication, especially calls performed via EJB. Any version mismatch can be detected and can help in view synchronization. To avoid modifying the EJB interface, this can be implemented using the Interceptor provided in EJB 3.0. The information contained in the synchronization packet serves the following purpose: [0811] It provides a synchronized view for all nodes. A node should consider itself shunned from the MSF and be required to re-join if its version is out-of-sync. [0812] It serves as a lease of the root to the hierarchy. [0813] It provides a mechanism to accelerate convergence of the hierarchy during system startup. [0814] It provides a mechanism to merge MSF trees (and peer sets) after a network partition.
[0815] FIG. 28 illustrates the state transition of the merge protocol.
[0816] A node can transit from the merge state to the "join-req" state in which it exercises the joining protocol to merge its federation with another federation. This event can occur when the root of a MSF receives a suggestion or a group view from other nodes that contains information indicating the existing root with lower ID.
[0817] Another important aspect of the merge protocol is to merge peer sets. A peer set can be broken up into two degraded peer sets due to network partition. We will define the process in the following section.
4.1.1.3 Leased Based FD Protocol
[0818] A node enters the FD state and starts the FD protocol once it joins the MSF. In additional to a possible FD protocol that runs within a peer set after the node has joined one or more peer sets, an FD protocol that runs at the MSF level is included, since it is possible for a node to not be a member of any peer set.
[0819] As shown in FIG. 29, to perform MSF level failure detection, the MSF is typically organized as a circular link list, sorted by node ID. The smaller ID node establishes a lease with its adjacent node. With each lease renewal, the requestor supplies the duration to extend the lease, and it is the requestor's responsibility to renew the lease in due time. A node is suspected if it fails to renew the lease.
[0820] It should be noted that if any node is suspected, an event will need to be generated to notify the MSF root to keep in-sync the peer set hash table and MSF group view.
[0821] However, it is not impossible for the root of the MSF to experience failure. This should be dealt with in the following fashion: [0822] The node with the lowest ID is always the root of the MSF. [0823] The root periodically propagates the group view throughout the infrastructure. The data contains the elapse time a node should expect for the next view propagation. If a node does not receive the message within the specified time for n times, the root should be suspected. [0824] If the root is suspected, a node should try to elect the next root by going through all the nodes in the MSF in ascending ID order, one at a time. It stops at the first node that accepts the election. [0825] The new root responds to the election request and includes the requesting node to the MSF. Noted when a node sends an election request it includes its peer set information, therefore, the new root learns about peer set composition during the election process.
5 Peer Set Joining Protocol
[0826] After a node joins the MSF, it should proceed for peer set joining There are essentially two possibilities: [0827] The node was not a member of any peer set. [0828] The node was a member of one or more peer sets.
[0829] In the first case, the node is a candidate for joining any peer set or it can simply become a data repository node. The MSF should determine proper action based on the state of the infrastructure. If there are degraded peer sets in the system, the node will be instructed to join a degraded peer set at a later time.
[0830] In the second case, the node should resume its previous peer set membership with the primary nodes of all the peer sets to which it belongs, one at a time. The primary node of a peer set chooses to either grant or deny the request. The protocol outcomes will be sent to the root of the MSF such that the current peer set view is informed to the root. The primary of the peer set does the following: [0831] If the request is denied. [0832] Notify the joining member about the decision. [0833] If the request is granted: [0834] Notify peer set secondary about the new view. [0835] Collect acknowledgements from the members. [0836] Persists the outcome and update the root of the MSF about the new peer set view.
[0837] As shown in FIG. 30, in terms of the joining node, the protocol proceeds as follows: [0838] Sends unicast requests to the primary IP address (not necessary if the node was the primary). The IP address of the primary is given by the MSF root when the node is joining the MSF. If the address is not given, then the address would be the one that is persisted previously by the joining node. [0839] If time out occurs, sends the request to the multicast address own by the peer set. [0840] If time out occurs in this state, there are two possible actions: [0841] If the node was a peer set primary, it sends request to the root of the MSF to become the primary (This task potentially could be coordinated by the supervisor set, although it is not guaranteed that the supervisor set is available especially during system startup. Therefore, it may be more reliable to have the root of the MSF coordinate the process). There are several outcomes: [0842] The request is granted and the node becomes the primary. The reply contains the information of any existing secondary nodes in "join-wait" state. [0843] A The request is denied and the node remains in a "join-wait" state. [0844] The root replies with a peer set primary information. The node then resumes the joining process. [0845] If the node was not a peer set primary, it will enter the "join-wait" state.
[0846] When a node is in a "join-wait" state for a peer set, it will wait for events to resume joining process. It is possible that the primary of the peer set has failed. The peer set is in a faulty where all secondary nodes are just waiting for the primary to come up.
[0847] One heuristic decision that the MSF root can make is that if the peer set is in this state for up to a limit it may go ahead and instruct the secondary nodes to form the peer set. With this, the peer set will be at least back to a degraded state. The protocol proceeds as follows: [0848] The MSF root instruct one of the node (with the smaller ID of the two) to become the primary giving the information of the secondary node. [0849] The primary bumps the version of the view and invites the other node to join the peer set. [0850] The primary receives the acknowledgement from the secondary. [0851] The primary saves the protocol outcome. [0852] The primary updates the MSF root about the new peer set information.
[0853] The peer set is now still in a degraded state in that it has only two members. The MSF will recover the peer set back to normal state as the system evolves and volumes become available.
6 Peer Set Protocols
6.1 Peer Set Protocol 1
[0854] The management system (MS) persists the set of nodes that are part of the federation in each local database, along with all required information describing the allocated peer sets. One key structure that the system maintains is the nodes table, which the system shares with the federation protocol engine (FPE). When the FPE on a given node (which may be referred to hereinafter as a "tuple") starts, it retrieves a copy of the nodes table from the system and operates on this copy as the protocol logic progresses, synchronizing changes to the table with the system at each merge cycle. The description in this section focuses mainly on a peer set protocol, and the federation protocol, and describes how the peer set protocol engine (PPE) interfaces with the FPE.
[0855] The peer set protocol (i.e., a dialog among the members of a given peer set) is used to confirm that individual members of the set are able to communicate with each other. The selection of the members into a peer set is done by the MS, and neither the FPE nor PPE have any direct control over that process. (The member selection algorithm of the MS considers various criteria, such as volume size and health of the peers as well as other business rules, and this information is not available at a level of the protocol engine.)
[0856] Whenever the MS runs its selection algorithm and allocates new potential peer sets, the FPE uses the member changes, produced by the MS, at the next merge cycle and reflects these changes in its own copy of the nodes table. The updated nodes table is then distributed to the other members of the federation as part of the Merge messages sent out by the root node. If the nodes table indicates that a peer set member has changed since the last Merge message was sent, then the arrival of a new Merge message reflecting the changes in the nodes table signals to the PPE to initiate its peer set dialog and confirm whether the members of a given peer set can or cannot communicate with each other. Next, after the PPE completes the dialog among the members of a peer set (whether successfully or not), the PPE passes on the results of the dialog to the MS, with indication of success or failure of the dialog. If the results of the dialog convey failure in member communication, then the MS uses the passed on information and runs through its selection algorithm yet again, allocating replacements of the members that failed to communicate, as necessary.
[0857] The FPE also informs the MS when new nodes have joined the federation or existing nodes have left (for example, due to a node failure). Such information also triggers the MS to run its member selection logic. A detailed description of the inner workings of the peer set protocol follows.
[0858] The processing flow discussed below is schematically illustrated in FIG. 31. When a node first starts up, multiple threads are spawned to handle each of the peer set tuples represented by the node/volume pairs of a node. Each tuple thread enters a "wait" state, waiting for a Merge message to arrive. When such a Merge message arrives, a tuple first examines, 3100, the peer set membership data contained in the Merge message to determine if this particular tuple has been assigned to a peer set. If, based on the contents of the Merge message, a given tuple does not belong to a peer set, such tuple goes back into a "wait" state in which it continues to examine each arrived Merge message to determine if it has been assigned to a peer set.
[0859] However, when a Merge message indicates that a given tuple does belong to a peer set, the tuple determines which other tuples are in the same peer set and starts a conversation with them, sending Invite messages to each member and waiting for InviteAck acknowledging messages to be returned, 3105. According to the peer set protocol, the tuple will try initiating a conversation with other tuples, associated with the same peer set, several times before giving up. An InviteAck message contains a member's current color, role, checkpoint number, and a peer set generation number (if the member belongs to an existing peer set) or "unassigned" indicator (if the peer set is new). Each of the tuples retrieves this information from the MS, which has persisted it in its local database. The overall result of such communication among the tuples is that, when the Invite/InviteAck exchange is complete, each member should know the other members' color, role, checkpoint number and peer set generation.
[0860] Generally, any discrepancy in the data exchanged by the tuples indicates some kind of system failure, which has to be resolved at step 3110. An example of such situation may be a case when each of two or more tuples, associated with the same peer set, indicates that it is primary. In general, disagreements should be resolved by the tuples, for example, by choosing the information associated with the highest peer set generation. If there is a discrepancy and the generation numbers are the same, the tie is resolved, for example, by using the peer set member with the highest checkpoint number. In a case of discrepancy when both the generation number and the highest checkpoint number are the same, a tie-breaking mechanism may be provided, for example, by selecting the peer set member with the lowest node id.
[0861] Assuming each of the three members of the peer set receives replies from the other members, the peer set protocol engine (PPE) proceeds on to the confirmation state. In this state, a designated tuple (e.g., the tuple with the lowest id) sends a ConfirmPeerSet message, 3115, to the root node indicating that all three members have successfully exchanged Invite/InviteAck messages, and then each of the tuples enters a "wait" state for the next Merge message. (On receiving a ConfirmPeerSet message from a peer set tuple, the root, in turn, sends a PEER_SET_CREATED event to the primary MS including in this event the list of tuples that have successfully exchanged invite messages. The MS updates its nodes table accordingly, indicating which peer set members have been confirmed. The root node, then, synchronizes these changes with its own nodes table at the next merge cycle, updating the federation view id in the process, and distributes these changes to the other nodes.) When a new Merge message arrives, the waiting tuple threads check if their peer set entries have been updated.
[0862] One practical failure scenario may include an inadvertent loss of a Merge message during the transfer of the message (for example, due to a UDP transmission error). In a case when all three tuples of a given peer set lose the Merge packet, each of theses tuples simply continues to wait further and no harm will be done. However, if at least one member does not receive the packet and the other tuples do receive it, the tuples will be coerced to become out-of-sync with each other. To prevent this from happening, when a tuple in the confirm wait state receives a Merge message, such tuple makes a remote method invocation (RMI) call, 3120 to every other node in its peer set, redundantly passing to these nodes the Merge message it has just received. The handler for the RMI call receives the Merge message and injects it into the message queue for a target tuple, thus guaranteeing that each tuple will receive the Merge message. (If a given tuple has already received the Merge message through the normal process, it simply rejects any duplicate packets.) The overall result, therefore, is that utilizing an RMI call warrants that all tuples will receive a Merge message even if only one of the tuples receives it. Consequently, all tuples proceed to the next state in unison.
[0863] If such update has occurred, the tuples send a PEER_SET_CONFIRMED event to the local MS announcing the confirmed peer set. Prior to sending such event, however, the tuples may perform additional activities, 3125. In particular, in the specific case when a new peer set has been created, before sending the event, the tuples negotiate color assignments, for example, based on their relative node id ordering. In particular, red may be assigned to the member having the lowest id, green to the member having the second lowest id, and blue to the third, remaining member. Furthermore, the roles of the members would also be selected, for example, based on the relative node id ordering and on the modulus of the peer set id by 3. For example, if the modulus of the peer set id is 0, the member with the lowest node id is selected as primary; if the modulus is 1, the member with the second lowest id is selected to be the primary member, and so on. Moreover, each of the non-primary members would be assigned the role of a secondary member. Finally, the generation number of the new peer set would be set to 0. All this information is then passed to the local MS as part of the PEER_SET_CONFIRMED event, 3127. It should be noted that color assignments and primary/secondary roles can be determined for the peer set nodes in other ways.
[0864] On occasion, however, the peer set information, which is distributed according to the FPE to the tuples in a Merge message based on the nodes table of the MS, may be not updated. Possible reasons for not having the nodes table updated include: (i) a loss of the initial ConfirmPeerSet message on its way to the root node, (ii) a decision, by the MS, not to confirm the peer set members, or (ii) a timing issue, and the tuples will have to wait for the next Merge message before proceeding with their operations. If the peer set information in the Merge message has not been updated when the Merge messages arrives, the tuple with the lowest node id will again send, 3130, a CreatePeerSet message to the root and enter into another "wait" state. As currently envisioned, the peer set tuples will wait indefinitely for a Merge message with their entries updated as "confirmed." However, such timing interval may be adjusted as required, and a different timing interval, precisely defined for this purpose, is also within the scope of the invention.
[0865] Another possible failure scenario may arise during the Invite/InviteAck exchange in that, after several Invite attempts, a given tuple has not received InviteAck replies from at least one of its peers, 3135. (It can be said that the "inviting" tuple does not receive replies from "missing" members, in which such case the "inviting" tuple enters a recovery state.) The reasons that a member could fail to reply falls into two major categories: there is either a physical failure of a node or volume, or there is a network partition. Although the "inviting" tuple does not differentiate between these two kinds of failures, the responses to these failures by the system are quite different. In the following description, the case involving an actual member failure is addressed first, and then the matter of how the protocol engine handles a network partition is elaborated on.
[0866] A case of a failure of an isolated member presents several possibilities. First, a peer set may lose either one or two members. Second, a "missing" member of the set may be a currently designated primary. Neither of these situations can be immediately resolved. The first action taken in either case is to proceed with sending, 3140, the usual ConfirmPeerSet message to the root node, in the exact same way as it would in the case of a peer set without missing members. This message is sent by a designated tuple from among the tuples that have responded to the Invite message, for example, the tuple that has the smallest node id. The sent message indicates which of the peer set members responded to the Invite messages. After sending the message, the sending tuple enters a "wait" state, waiting for the next Merge message. On receiving the ConfirmPeerSet message, the root node will perform actions similar to those it would perform having received a ConfirmPeerSet message about a fully populated peer set. As described above, these actions include: sending a PEER_SET_CREATED event to the MS and, in response to the changes introduced by the MS into a node table, adjusting its own node table accordingly. In the particular case of "missing" members, the MS will recognize, based on the PEER_SET_CREATED event, that some members have not responded to the peer set invitations. In response to receiving such a PEER_SET_CREATED event, the MS will flag as "confirmed" only the responding members. With respect to the other, "missing" members, the MS will either leave these "missing" members as is for the time being (thus allowing for the case of late arrivals), or, perhaps, will select replacement members if it decides that the members are missing because they are indeed faulty. In either case, the root node will synchronize any changes, made by the MS to the MS nodes table, with its own nodes table at the next merge cycle.
[0867] The tuple threads that wait for a Merge message from the root node will examine the message to confirm that their own entries have been confirmed, and will also check if any replacement members have been selected by the MS. Since one or more members are missing, some additional operations will be performed, 3145: the generation number of the peer set in the message will be increased by 1, and if one of the missing members was previously the primary, a new tuple will be selected to assume the role of primary, using the same modulus based selection mechanism discussed above. However, the color designation of the new primary will not change but will remain the same. Regardless of whether or not new members have been selected at this Merge cycle, if the nodes table passed to the tuples by the root node indicates that the existing members have been flagged as "confirmed", each tuple will send a PEER_SET_CONFIRMED event to the local MS, 3150. When a local MS receives the message, it will flag, 3155, the peer set as a "degraded" or "faulty" peer set and take appropriate actions. In the case of a faulty peer set, for example, the MS typically will start the fast path service (i.e., the file system service implemented by individual file server processes running on each member of the peer set) in "read only" mode (i.e., the MS will start the file system service on a member of the peer set in such a way that it will not be allowed to update the partition of the distributed file system residing on the local disk until the MS switches the mode to read-write).
[0868] If new replacement members have been selected, after sending a PEER_SET_CONFIRMED event to the local MS (and having an generation number increased), the tuple threads will start, 3160, a new Invite/InviteAck exchange similar to their original boot-up exchange. If all members respond as expected, the now fully populated peer set is ready to be confirmed and each tuple sends another ConfirmPeerSet message to the root node, where the root node performs the exact same actions as described above, i.e. it notifies the MS of the now fully populated peer set, retrieves the updated table from the MS, and sends out the updated nodes table in its next Merge cycle. Again, when the waiting tuples receive the new Merge message, they will renegotiate color/role assignments as needed (e.g., each existing member retains its current color and each new member is assigned an unused color, and a new member is typically assigned a secondary role) and increase the peer set generation by 1. The new peer set members will then, again, send PEER_SET_CONFIRMED events to the local MS while the original peer set members will send PEER_SET_UPDATED events to the MS. The PEER_SET_CONFIRMED event will include an additional flag to tell the local MS to start the volume synchronization workflow before it brings up the fast path service, and the PEER_SET_UPDATED event will include a flag to instruct the MS not to publish the member information to the fast path service until after the volume synchronization is done.
[0869] After a tuple has sent a PEER_SET_CONFIRMED or PEER_SET_UPDATED event to the local MS, it goes back into a wait state. Each time a new Merge message arrives, it checks if there has been any change to its last known peer set membership. If any change has occurred, it repeats the Invite/InviteAck exchange with the newly identified members and goes through the same process as described above. There is a possibility that, when a tuple receives a Merge message, it will discover that it is itself no longer a member of a peer set. If this happens, the local MS will have to be notified and then the tuple will enter a "wait" state.
[0870] As a result of a network partition, there is a possibility that each of two different root nodes owns some subset of the full set of nodes. Each partition will see complementary versions of the existing peer sets. For example, a partition could leave two peer set members on one side and a single peer set member on the other. The peer set protocol sees only the smaller picture and each side will report a corresponding complimentary picture to the root node of a respective partition, which, in turn, will pass it on to the root MS of that partition. In a split like this, the simultaneous replacements of "missing" members of the same peer set by both root MS's cannot be afforded because such simultaneous replacements would result, when the partition is resolved, in two versions of the same peer set with potentially different data. How this situation is handled is left for the MS to decide. The FPE simply reports the change in a cluster topology to the MS, and the MS decides how to resolve the matter. The key to remember here is that there are two (or more) partitions each with their own root and primary MS instance.
[0871] On sensing that a network partition has occurred, the rule engine in the root MS of each partition will take appropriate actions, with the following stipulations: 1) not to perform an irreversible action (e.g., migration or replacement) when the system is in flux, and 2) not to take any action when a large percentage of existing nodes have been lost, unless instructed to do so administratively. In the case of n-way partitions, the protocol engines in each partition continue to operate as described above. The root MS in each partition continues to receive events (many of which will be failure events), continues to evaluate the topology, and continues to update its internal status. It should be appreciated, however, that in its operation under network partition conditions, the root MS is limited so as to allocate new peer sets only in a partition associated with the majority of members. In other words, the root MS should not create new peer sets in a partition with the smaller number of nodes. If this condition is not satisfied, i.e., if each of partitions is allowed to create new peer sets, then peer set id collisions may occur when the partitions re-integrate. The following example of a two-way partition illustrates this principle. Peer set ids have to be unique. However, if the highest number of a peer set id in a partition A with the smaller number of modes is N, and the highest number of a peer set id in a partition B with the greater number of modes is N+M, then, should a new peer set be allocated in the partition A, such new highest number will be N+1. The partition B already has its own peer set having an id number of N+1. Therefore, when the two partitions eventually remerge, the two root node MS instances would each have a peer set numbered N+1, which violates the uniqueness of the peer set id and cannot be allowed. It should be emphasized that the above-stated restriction on the operation of the root MS is a restriction on the allocation of new peer sets. Existing peer sets can still solicit for new members, with some conditions.
[0872] Any two-member (degraded) peer set (i.e., a peer set having two functioning members) in any partition can have its missing member replaced regardless of which partition such peer set is associated with and continue to operate in a fully functional state within that partition. However, missing members should be replaced only after an appropriate timeout period elapses, which allows the network partition to be resolved. Missing members of a particular degraded peer set are eventually replaced according to the peer set protocol as described above (which includes the peer set generation being increased by 1, PEER_SET_UPDATED events being sent by the existing members, and a PEER_SET_CONFIRMED event being sent by the new member).
[0873] A single-member (faulty) peer set (i.e., a peer set having only one functioning member) operating in a partitioned environment cannot have its missing members replaced. Instead, the peer set protocol will signal the local MS that the peer set has lost two members and the root MS of that partition will place such peer set in a "read-only" state. The lost members could potentially exist as a two-member peer set in a second partition, and if the partition lasts long enough the missing member will likely be replaced. When the network is eventually resolved and the partitions re-integrate, the MS evicts the old "read-only" member and the volume occupied by such member is reclaimed. The generation of the peer set is defined based on the following: If before a (two-way) partition occurred a given peer set had a generation number N, and after the partition the generation numbers of corresponding peer sets in two partitions are M and L, respectively, then re-integration of partitions the generation number assigned to the "restored" peer set is max(M, L)+1. If the network failure time was short and no replacement member had been selected yet, then the single member will rejoin its old peer set, synchronizing its data with the other two members (in case any changes have occurred). The effective operation that is performed by the MS is to evict the member, throwing away its out-of-sync data, and then select the same member as a replacement.
[0874] With regard to merging an n-way partitioned cluster, this will happen as a part of normal operation of the federation protocol. When the network problem causing the partition is resolved, the roots of each respective partition will receive the next Merge message sent out by the other root nodes. The roots of the clusters created as a consequence of the network partition will see a Merge message from node with a lower id (the original root node) and will request to join that node. The original root will subsequently resume its status as sole root, with the nodes making up the other partition automatically rejoining the root as part of the other root nodes turning over root status to the original root.
6.2 Peer Set Protocol 2
[0875] This section describes an alternative version of the peer set protocol that may be used in certain embodiments of the present invention.
[0876] This version of the peer set protocol has four main states, with certain substates that may impact the how the protocol transitions from one state to the next. The main states are INIT_WAIT, PEER_SYNC, CONFIRM_PEERSET, and FD_WAIT. Under normal conditions, the members of a peer set will transition through each of these states when a cluster starts, finally landing in the terminal FD_WAIT state. Once in this state, the peer set members will wait for some external event to trigger a transition to another state. These states are described in detail below.
[0877] The INIT_WAIT State
[0878] When a node starts for the first time, threads are spawned by the protocol engine to manage each of its disk volumes. Each of these threads represents a potential member in a peer set. These member threads enter the INIT_WAIT state, waiting for the arrival of a Merge message from the root node of the cluster.
[0879] A Merge message contains a Federation object, and this object contains the complete membership information of all peer sets in the cluster. When a Merge message arrives, each member thread examines the Merge message to see if it has been assigned to a peer set and if so who its peers are. If the member has not been assigned to a peer set, it simply remains in the INIT_WAIT state, waiting for the next Merge message. It will do this indefinitely.
[0880] On the other hand, if a member discovers that it has been assigned to a peer set, it checks whether the nodes where its peers reside have joined the federation. A node's joined state is contained in the same Federation object that is passed to a member thread via the Merge messages. If one or more of its peers have still not joined the federation (for example because a node was late being started up), the member simply stays in the INIT_WAIT state, waiting for the next Merge message. It will stay in this state until the nodes to which its peers belong have all joined the federation.
[0881] Once all members have finally joined, they proceed, as a group, to the PEER_SYNC state. This applies to peer sets of any cardinality N, where N is greater than or equal to 2. The case of a singleton peer set (N=1) is covered as well, with the difference being that the member does not have to wait for its peers to join the federation and can proceed directly to the PEER_SYNC state as soon as it has been assigned to a peer set.
[0882] The PEER_SYNC State
[0883] The purpose of the PEER_SYNC state is for the members of a peer set to exchange information as it pertains to their own view of the world. Upon entering this state, each peer set member asks the local Management System for details regarding the peer set of which it is a member, such as the peer set generation, and the role, color and state of each member. Since each peer resides on a separate node and each node has its own local data describing the properties of the peer sets it hosts, there is a chance that the peers could be out of sync with respect to this data (due to perhaps some kind of system failure). The PEER_SYNC state provides a mechanism for the peer set members to reconcile any differences in their data.
[0884] In an exemplary embodiment, the exchange of information between the peer set members is accomplished using UDP packets, although other embodiments may use other protocols. UDP is a convenient mechanism to exchange information across separate systems, but it has one major drawback--there is no guarantee that a packet once sent will actually reach its intended target. As a result, any protocol designed around UDP or similar unreliable protocol should have sufficient redundancy built into it to minimize of risk of packet loss.
[0885] The "peer sync" exchange consists of multiple rounds. In the first round, each member constructs a list of PeerSet objects, consisting in this first round of a single object describing its own peer set data. Each member then sends this list to each of its peers via PeerSync packets, which is basically a container for a list of PeerSet objects, and then after a brief wait checks for incoming PeerSync packets from its peers. If no packets have arrived, it sends out another round of PeerSync packets and then waits again before checking for additional incoming packets.
[0886] If a member receives a packet from a peer, it adds the PeerSet objects contained in this packet to its peer set list, and sends out another round of PeerSync packets to its peers with this updated list. When a member has received packets from all of its peers (specifically, when the length of its peer set list is equal to the cardinality of the peer set it is a member of) it sets an "is Synchronized" flag in the next PeerSync object it sends to its peers, signaling that it has collected all of the peer set objects for its peers. When a member receives a PeerSync packet with the is Synchronized flag set, it notes this, recording which member sent the packet.
[0887] This exchange of information between the members of the peer set continues until all members have received PeerSync packets with the is Synchronized flag set. This guarantees that each peer knows about every other peer. If after some predetermined number of rounds a member still has not received is Synchronized packets from one or more of its peers, the member reverts back to the INIT_WAIT state. If this was to happen, all of the peers of that member should be in the same situation and will also revert back to the INIT_WAIT state.
[0888] If the peer sync was successful, then the peers transition to the CONFIRM_PEERSET state. As with the case of the INIT_WAIT state, the peer sync exchanged described here works with peer sets of any cardinality N, where N>=2. The degenerate case of N=1 is handled as well, but no exchange of information is needed, and the member can proceed directly to the CONFIRM_PEERSET state.
[0889] The CONFIRM_PEERSET State
[0890] Upon arriving in the CONFIRM_PEERSET state, the members of the peer set can begin to process the data that has been exchanged. At this point, each member should have an identical list of peer set objects collected from its peers, where each individual peer set object in this list describes a given peer's view of the peer set. The purpose of this state is to reconcile this data, with the outcome of the processing of these separate peer set objects being a new peer set object on which all of the members agree, including the role and color each member is assigned, the generation of the peer set, and other status information associated with the peer set. There are several cases to consider.
[0891] For example, the members could be part of a newly formed peer set, in which case the peers would exchange peer set objects with no properties defined--no role, color, or status, and the peer set's generation would be set to a default value (e.g., -1) to indicate a new peer set. In this scenario, the members of the peer set have to assign a role and color to each member of the peer set. One peer will be assigned the role of Primary, while the others will be Secondaries, and each member will be assigned a unique color, e.g. Red, Green, and Blue in the case of a three member peer set. The selection of the primary in a new peer set is a key step in this process, and this is discussed further below with reference to a Primary Select Protocol.
[0892] Another possibility is the members are part of a previously created peer set that is being restarted after a cluster reboot. In this case, the peer synchronization should leave each member with a list of identical peer set objects, assuming each of the nodes is in agreement about the attributes of the peer set to which the members belong. If for some reason the peer set objects do not match, rules are defined to determine whose peer set object is selected as the winner. This selection is usually based on the generation of the peer set, where the peer set with the highest generation wins. If the generations are the same but there are other differences (such as role or color mismatches), additional rules are used to select the winning peer set.
[0893] A third possibility is that a new member is being added to a peer set. For example, a two member peer set could have a third member added, so when the peer synchronization is completed, each member will have a list of three peer set objects, with the object for the newly added member having undefined values for its properties. This newly added member will always become a Secondary (because one of the existing members will already be a Primary), and it will be assigned a unique color attribute. With regard to the generation of the peer set, whenever a change in the topology of a peer set occurs, its generation is increased by one.
[0894] A fourth possibility is that a peer set has lost a member, for example, a three member peer set could be reduced to a two member peer set. In this situation, the remaining members simply retain the role and color they already have assigned. A special case in this scenario though is where the member that has been lost was previously the Primary member of the peer set. In this case, one of the remaining two members is selected to be the new primary. In an exemplary embodiment, the new primary is selected first based on which node has the fewest number of primaries currently assigned, and if both members are hosted on nodes with the same number of primaries, then the member with the lowest ID is chosen. For example, each node in a cluster has N disk volumes, and a given peer set is made up of volumes from M different nodes. At any given time, some number of volumes belonging to a node will be primaries, some will be secondaries, and some will possibly be unassigned. When a two member peer set has to decide which of its members to make the primary, the member with the fewest number of primaries already assigned to its host node is selected. This information is readily available to the members of the peer set making this decision, since it is one of the additional bits of information that is exchanged during peer synchronization.
[0895] The Primary Select Protocol
[0896] As mentioned above, one scenario in which a peer set has to select one of its members to be the primary is the case of a new peer set. Because the members exchange the number of primaries already assigned to their host nodes as part of the peer synchronization process, one potential solution to pick the primary in a new peer set is simply to select the member with the lowest number of primaries assigned to its host node. This approach would work fine if a single new peer set was created at some point on a cluster that is already up and running The problem is that when a cluster is being created for the first time, there are no primaries or anything else assigned. All of the peer sets are coming up at more or less at the same time, and when the peer synchronization exchanges take place, the primary counts for all of the nodes are zero. This would mean the members would have to revert to using the member with the lowest ID to be the primary, but this could lead to a poor distribution of primaries, with some nodes have four primaries assigned and some nodes having none. It is desirable for primaries to be balanced across a cluster, to help improve the performance of the cluster.
[0897] The Primary Select Protocol is a substate that the members of a peer set enter to select which member of the peer set is to be the primary. The protocol is designed to try to pick a member that keeps the total number of primaries across the cluster reasonably balanced. Optimal balancing is desirable but not essential.
[0898] The protocol works as follows. Each node maintains a count of the number of primaries assigned to that node. When a cluster is coming up for the first time, this count is zero for all nodes. As primaries are selected, this count increases. The protocol works on a first come first served approach. For example, in the case of a three member peer set, unlike the peer synchronization protocol where all members start the protocol at the same time, in the primary select protocol, the members agree on who will start the protocol and who will enter a wait state. In an exemplary embodiment, the selection is based on the modulus of the ID of the peer set by the size of the peer set. So, if the ID of the peer set is say 15 and the peer set size is 3, the modulus of 15 by 3 is 0, so member 0 will start the protocol, assuming the members are ordered by their node IDs. Members 1 and 2 will enter a wait state, waiting for messages to arrive to tell them what to do next.
[0899] The member that starts the protocol looks for a very specific condition to decide how to proceed. In an exemplary embodiment, it checks its hosting node to see if no primaries have been assigned to this node. If this is the case, then it increases the primary count of this node to 1 and elects itself to be the primary of the peer set. It then exits the primary select protocol and starts the primary sync protocol (discussed below). The check of the node's primary count and its subsequent increment is implemented as an atomic operation (e.g., through a "test-and-set" or locking mechanism), since in the case of a node A with N disk volumes that can host peer sets, there are potentially N-1 other members of other peer sets also checking the primary count for node A at the same time. By making this an atomic operation, only one member will have a successful zero check. The other members will all see that the primary count is already 1 on this node, and instead of selecting this node to host another primary, the peer set members will "hand the torch" to their immediate peer for it to continue the protocol, and the members handing the torch off enter a wait state.
[0900] This hand off is accomplished by a PrimaryCheck packet. This packet includes the primary count that the receiving member is supposed to test against, which in this first pass is zero. On receiving one of these packets, the members exit their wait state and take over as the lead in the protocol. At this point, the protocol proceeds identically for these new members. They each will check if their hosting node has the primary count indicated in the PrimaryCheck packet, and as before only one of the members making this check will get a positive outcome because the test and increment is a single atomic operation. The one member that wins the check elects itself as the primary for its peer set and proceeds to the primary sync step.
[0901] The members failing the test perform the same hand off to their immediate peers via another PrimaryCheck packet, and the process repeats itself with these new members. When the last member of a peer set receives a PrimaryCheck and again fails the primary count test, it sends the next PrimaryCheck packet to the original member that started the primary select protocol, which at this point is in a wait state. On receiving the PrimaryCheck packet, it learns that it is being asked to test against a primary count of zero again, which it has already tested. This signals the member to increase the value being tested against by 1, which during this second pass would increase it from 0 to 1. From here, the protocol continues in this fashion, with each successive member testing against the requested primary count and either electing itself as the primary or handing the check off to the next member in the list. Eventually, all peer sets on all nodes will pick a primary, with the results being a reasonably well balanced distribution, possibly even optimum.
[0902] The primary select protocol has to potentially deal with UDP packet loss, and in an exemplary embodiment, it does this with built-in timeouts. For example, when a node sends a PrimaryCheck to its peer, it knows that it will either receive a signal that a primary has been selected (by means of a PrimarySync packet, described below) or that it will receive another PrimaryCheck packet as the protocol loops around and back. If no new packets are received within an expected timeout period, it resends the last PrimaryCheck packet it sent out. It has no way of knowing if its last packet was received, or if the reason it has not received a new packet is because a packet that was sent to it was lost. So, it simply sends the same packet again. When the target peer receives this packet, it will know whether this is a duplicate of the last primary check packet or a new one. If the packet is new, it simply proceeds with the protocol as discussed above. If the packet is a duplicate, it in turn resends the last PrimaryCheck that it sent to its peer, and this ensures that the protocol will continue to advance. If after some number of retries the protocol fails to elect a primary, all members eventually revert back to the INIT_WAIT state.
[0903] The Primary Sync Protocol
[0904] When a member elects itself to be the primary of a peer set as the outcome of the primary select protocol, that member advances to the primary sync protocol. This protocol is designed to make sure all of the members of a peer set know when a member has elected itself as the primary. Initially, only one member advances to this new substate, with the other members remaining in a wait state, waiting for a packet from a peer to tell them how to proceed.
[0905] When the elected primary starts the primary sync protocol, it sends PrimarySync packets to each of its peers, indicating that it has assumed the role of the primary. When these waiting members receive this packet, they break out of their wait state and transition to the primary sync substate. In this state, they in turn proceed to send PrimarySync packets to each of their peers, including in this packet the ID of the member who has elected itself as the primary. From here the primary sync protocol proceeds essentially identically to the peer sync protocol, where each member continues to send primary sync packets to its peers and receive in turn packets from its peers. The difference here is instead of exchanging peer set objects, the members simply exchange the ID of the member who they believe has been selected as the primary. This exchange of packets continues until all members have received packets with the "is Synchronized" flag set, signaling that all members have received packets from everyone else.
[0906] When this point is reached, each member should have a list of IDs given to it by its peers indicating who they believe has been selected as the primary member. These IDs should all be the same, but if they are not, it indicates the primary select and sync protocols have for some reason failed and all members will revert to the INIT_WAIT state, where they will try the whole process over again when the next Merge packet arrives.
[0907] Membership Acknowledgement
[0908] All members eventually transition to the Membership Acknowledgement substate. They get here either as the next step after completing the primary sync exchange, or as the next step after completing whatever processing has had to be performed on the peer set objects that were collected during the peer sync step. On entering this substate, all peer set members will be in agreement with regard to the specifics of the peer set object that has to be confirmed, including the role, color, and state of each member and the generation of the peer set.
[0909] Before proceeding to the FD_WAIT state, the protocol engine has to get confirmation from the root Management System (MS) that it has acknowledged and approved the peer set object that the members of the peer set have agreed on. To get this approval, the members of the peer set with the smallest ID is selected to send a MembershipAcknowledged message to the root MS. The subsequent acknowledgement comes by way of the normal Merge message broadcast that is sent out by the root MS on a regular interval. The peer set members will wait indefinitely for this acknowledgement to come. When an acknowledgement is finally received, the peer set can either be approved or disapproved. If the peer set is approved, the members will proceed to the FD_WAIT state; if the peer set is disapproved, the members revert to the INIT_WAIT state. There are numerous reasons why a peer set could be disapproved, but from the perspective of the protocol engine, it does not matter why the peer set was disapproved as it simply acts on the data it receives.
[0910] The Merge Sync Protocol
[0911] As mentioned above, the acknowledgement of the MembershipAcknowledged message is sent from the root MS by way of its Merge message broadcasts. As is always the case, the peer set members have to deal with potential packet loss. If all three members lose the same Merge packet, then they will simply continue to wait and no harm is done. If all members receive a Merge packet, then they can proceed on to their next state in sync. However, there is a chance that one or more members of a peer set may miss a Merge packet, potentially leaving them out of sync with their peers. For that reason, another variation of the peer sync exchange is used when a Merge packet is received while the members are in their membership acknowledged wait state. This merge sync exchange again works very similarly to the peer sync and primary sync exchanges. In this case, the members exchange the sequence number of the latest Merge packet they have received.
[0912] For example one member may miss the Merge packet due to packet loss. On receiving this packet, the other members immediately start the merge sync protocol, sending a MergeSync packet to each of their peers. The MergeSync packet contains the sequence number of the most recent merge sync packet that was received. When the member that missed this last merge packet receives this packet, it will break out of its wait state and also start the merge sync protocol. However, because it missed the last merge packet, it will not be able to send the same sequence number that the other members are including in their MergeSync packets. As a result, when the Merge sync completes, the members will see that one of their peers missed the merge packet that the others received and cannot proceed to the next state. As a result, all members simply agree to remain in the membership acknowledge wait state, and will try to sync up again on the next merge cycle. Eventually, all members should receive the same Merge packet and they will all be able to proceed as a group to either the INIT_WAIT or FD_WAIT state.
[0913] The FD_WAIT State
[0914] On successfully completing the CONFIRM_PEERSET state, the members of a peer set transition to the FD_WAIT state. This is considered a "terminal" state. The members of a peer set will remain in this state indefinitely and will only transition to another state when some event occurs signaling a state change is needed.
[0915] There are two main mechanisms that will trigger a state change. While in FD_WAIT, the members periodically monitor their queue for incoming packets. If a merge message is received, they check if anything important has changed with respect to their peer set. For example, a three member peer set could discover that a member has been removed from the peer set, referred to as a topology change. If this happens, the remaining members transition immediately to the PEER_SYNC state to exchange their latest peer set objects and have the new peer set acknowledged by the MS. At the same time, the member that was removed will receive a merge message and will discover that it has been removed from its peer set. In this case, the member sends a MemberEvicted message to the local MS and then transitions to the INIT_WAIT state where it will stay indefinitely until it gets added again to a peer set.
[0916] A second mechanism that can trigger a member to transition out of the FD_WAIT state is via a restart request sent by the MS. This is done in cases where the MS knows there is no change to the topology of a peer set that would cause the members to transition to a new state but it needs to force the members of a peer set to revert to the INIT_WAIT state to recover from certain kinds of failure scenarios. In this case, the peer set members simply proceed through each phase of the peer set protocol and will eventually return to FD_WAIT.
V. Exemplary Small File Repository
1. Introduction
[0917] The maximum number of I/O operations that can be performed on a disk drive in a given time interval is generally much more limiting than the amount of data that can be transferred or the transfer rate of the drive. The characteristics of modern disk drives are such that in the relevant markets, traditional file systems typically cause the number of I/O operations to reach their maximum when disk drives are far from being full, which can lead to proliferation of disk drives even when additional storage capacity is not needed. This in turn, can cause costs to rise more than expected. The relevant application environments generally require extremely efficient access to small files, by minimizing the number of I/O operations a file server needs to perform. This is typically the case for such things as thumbnails or small pictures. To open one such a file, even discounting the time it takes for traditional network file systems like NFS to lookup the intermediate components of a pathname, it typically would be necessary to look up the file i-node from the directory that references it, to read in the i-node for the file, and finally to read the data block for the file. This typically entails at least 3 I/O operations. In many of the relevant environments, it is expected that most accesses will be to files that have an average size of about 64 Kbytes. Besides, such files are accessed in an extremely random fashion, so that it is likely that no advantage will be obtained by using front-end caches. Therefore, special facilities to minimize the number of I/O operations to access such small files are desirable.
[0918] On the other hand, through judicious placement of the blocks in a file, ad hoc file system designs can limit the number of actual I/O operations and guarantee higher disk bandwidth. To achieve this, an exemplary embodiment implements a Small File Repository (referred to hereinafter as "MaxiSFR"). MaxiSFR is designed to reduce the average number of I/O operations for reading such files to one.
2. The Basic Scheme
[0919] A way to deploy a subsystem capable of addressing the needs outlined in the previous section is that of storing small files within file system server volumes used as arrays of extents of the same size (the maximum size of a small file). Access to the individual files could then occur by simple indexing into such arrays.
[0920] To understand how this could be achieved in practice, assume that a special top level directory in the namespace of MaxiFS is dedicated to this functionality. Assume that this directory does not really exist anywhere, but is interpreted by the client software in such a way that all accesses to names that encode an index under that directory are managed as special accesses to a short file via its index. For example, assume "/sfr" is such a directory and assume that "/MaxiFS--1" is its mount point on the client. Then, opening, say, "/MaxiFS--1/sfr/CD3A" would in fact request access to a small file on an optimized repository that has 0xCD3A as its hexadecimal index. This can be implemented within dedicated volumes that would have to be allocated as each server disk drive is provisioned. Clearly, in an infrastructure like MaxiFS, made of up to thousands of servers, just an index would be adequate to fully identify the location of a file within a repository, although additional information typically would be used to identify the repository of interest.
3. Requirements For This Exemplary Embodiment
[0921] This section captures requirements the MaxiSFR facility needs to satisfy for this exemplary embodiment, namely:
[0922] R0. The Small File Repository must be global to each MaxiFS infrastructure and the files stored in it must be uniquely identifiable across the entire name space of a MaxiScale system.
[0923] R1. Small files need to be accessed in such a way that the entire open( ) read( ) close( ) sequence takes no more than a single I/O operation on the server. Enumerating, creating or writing such files need not be as efficient.
[0924] R2. The Small File Repository must enforce limitations on the maximum size of files it stores and that can be accessed according to requirement R1. However, MaxiSFR should allow for any file within such size constraint to be stored within MaxiSFR.
[0925] R3. The caller must be able to specify a file suffix for a small file being created (for example, to distinguish the type of a file: JPEG, GIF, MPEG, . . . ). The suffix can be null. A non-null suffix is an integral part of the file name and shall be retrieved when the content of the volume is enumerated.
[0926] R4. Clients must be able to create small files either by letting the MaxiFS choose a name or by letting the requesting client specify a name (the latter capability may be particularly useful for the restoration of backups).
[0927] R5. It must be possible to enumerate the content of the small file repository and to retrieve attributes associated to small files. The name space for small files should be partitioned in such a way that no more than about 1,000 files per directory would be enumerated.
[0928] R6. A block copy facility that allows to remotely replicate a small file repository must be available, to simplify the backup and restore of the repository itself.
[0929] R7. The small file repository of a MaxiFS infrastructure must be scalable proportionally to the number of nodes that are members of the infrastructure.
[0930] R8. Small files must support all the attributes of other files, such as the identity of the owner, access protection privileges, creation and modification date, etc. Access protection at the file level should be enforced, as for any other file.
[0931] R9. A library function that creates a small file, writes to it and retrieves its name must be available for the C language, as well as for the languages most often used for web applications (Java, Python, Perl, PHP, . . . ).
[0932] The following describes a more detailed design of the facility and the way the above requirements are met.
4. Theory of Operation
[0933] This section provides a detailed view of how the MaxiSFR is expected to be used.
[0934] The approach described earlier conveys the general idea, although giving clients direct access to small files via their indexes is impractical for the following reasons:
[0935] An index by itself would always provide access to an extent, without regard to whether it is still allocated or has been freed.
[0936] It would be difficult to identify which server manages the specific small file repository where the small file of interest is kept.
[0937] For this reason, each such file should not be addressed just via an index, but should rather have a globally unique ID within MaxiFS. Such a Unique Small File ID ("USFID") could be structured as the concatenation of four components, as in: USFID=<psid><sid><bn> Each item within angle bracket is a component of the unique ID, as follows:
[0938] <psid> This field is the ID of the Peer Set (a Peer Set in MaxiFS is the minimal unit of metadata redundancy; it is a mini-cluster made of three servers, each of which manages one drive dedicated to the peer set, where MaxiFS metadata is replicated) where the small file resides. By embedding the peer set ID in the USFID, the file is permanently tied to the peer set and cannot be freely relocated from a peer set to another one while keeping the USFID unchanged.
[0939] <sid> This is the slot ID or, in other words, the index of the logical volume block where the file is stored. By making this piece of information part of a USFID, the file can only reside at a specified logical offset within a volume.
[0940] <bn> This is the number of logical blocks that the file uses. By embedding this piece of information into the USFID, the file cannot change the number of logical disk blocks it spans. Note that the actual length of the file in bytes is stored in the file metadata region that precedes the actual user data on disk.
[0941] So, assuming <psid> is 0xABCD ("ABCD", 2 bytes), <sid> is 5 ("0000000005", 5 bytes) and <bn> is 16 ("10", 1 byte, which indicates that the file is stored in 17 logical blocks), the USFID for the file, expressed in hexadecimal, would be: [0942] ABCD0000 00000510
[0943].
[0944] This information is expected to be made available to applications through the standard POSIX interface via a MaxiFS-specific fcntl( ) call (see below), although alternative mechanisms may be used. The choices with respect to the length of each of the fields within an USFID are justified as follows:
[0945] Devoting two bytes to the Peer Set ID is sufficient. A MaxiFS infrastructure with 64 K possible peer sets, with nodes containing 4 drives each would cover about 50,000 nodes. This should be adequate for a long time.
[0946] Devoting 1 byte to the length of a file in blocks is adequate. A logical block amounts to 1 Kbyte. If the number of blocks that appears in the USFID is equal to the total number of logical blocks in the file minus 1, this would cover files up to 256 Kbytes in length, which is the maximum length expected for a file that qualifies as small.
[0947] Devoting 5 bytes to address the starting logical block number for a small file implies that 240 (≈1012) 1 Kbyte blocks can be covered. This corresponds to a partition of up to 1 Pbyte per drive, which is three orders of magnitude beyond the currently achievable drive capacity.
[0948] Information stored within the file metadata includes the actual file length in bytes (the amount of storage space used for the file can be smaller than the entire extent), ownership data, access permissions, creation time and more. Such metadata would be stored in the first portion of the extent, followed by the actual data.
[0949] The POSIX file interface does not have a way to create anonymous files, to later assign names to them. However, MaxiFS allows the same to be accomplished through a sequence of POSIX calls. So the application code would be similar to the following:
[0950] 1. fd=creat("/MaxiFS--1/sfr/*", 0777);
[0951] 2. n=write(fd, buff, bytes);
[0952] 3 . . . .
[0953] 4. sfn.buffer=name, sfn.length=sizeof(name);
[0954] 5. fcntl(fd, MAXIFS_GETNAME, &sth);
[0955] 6. close(fd);
[0956] virtual small file directory ("sfr") and a conventional file name. Use of the special directory name file name ("*") is not a wild character or a regular expression (Unix system calls do not interpret wild card or regular expressions: any character is interpreted literally because expansion of wild cards or regular expression is performed within libraries or applications before the system is invoked). It is just a conventional way to tell MaxiFS that the system must create a small file and pick the appropriate name for it.
[0957] From statement 2, on, the caller writes data to the new small file.
[0958] Then, in statement 5 the client invokes a operation specific to MaxiFS ("MAXIFS_GETNAME"). The execution of this fcntl( ) call entails the following:
[0959] The client informs MaxiFS that the small file has now been copied completely.
[0960] The client requests the USFID the system generated for the file. The name of the file will be returned as a string that is stored in the data structure fcntl( ) takes as an argument ("sfn"). For this reason in statement 4 the caller initializes the fields of the structure, specifying the buffer where the name will be stored and the buffer's length.
[0961].
[0962] Finally (statement 6), the client closes the file. From this point on, the file can be accessed in reading via its name. Assuming that the file had the USFID: "ABCD000000000510", the fcntl( ) invocation would return the pathname: "/MaxiFS--1/sfr/ABCD/000000000510". To fully support this functionality at the application level, it is expected that packages, libraries and so on will be developed for the prevalent programming languages used for Web 2.0 applications (Java, Perl, Python, etc.).
[0963] Notice that beneath "sfr", the entire pathname of the file includes a parent directory name ("ABCD"). This name matches the ID of the peer set where the file is stored. The reason for the intermediate directory between "sfr" and the rest of the file name is to simplify the aggregation of such files. This avoids the need to list all the small files in the infrastructure as if all of them had the same parent directory ("sfr").
[0964] The pathname in this form appears as a pathname in the traditional sense. However, "sfr" and "ABCD" do not exists as real directories in the MaxiFS name space. Whenever the client component of MaxiFS sees a pathname of this form under the MaxiFS mount point, it transforms the portion under "sfr" in a USFID and sends the request with this USFID to the peer set (in this case 0xABCD) where the file is expected to be stored.
[0965] Typically, such files are opened for reading. However, there is an important case when such a file may have to be.
[0966] In any case, after a small file is created, MaxiFS supports read access to it via a single I/O operation. Therefore the USFID-based pathnames can become part of URLs, so that web access to such files, even if extremely random, need not cause the servers to perform lots of I/O operations.
[0967] The enumeration of the small files contained in the special namespace directory merely requires identifying the allocated extents and reconstructing their unique IDs. To enumerate all such files across the entire MaxiFS infrastructure one such enumeration should be performed within the small file volume in each of the peer sets in the system.
[0968] Deletion of small files is possible through their USFID-based names.
[0969] Small files would have to have redundancy. For simplicity, this would be done make sure any such files exists in three copies: one on each of the small file volumes in each member of the peer set the files belong to.
[0970] Note that whereas MaxiFS implements logical replication of files, in that the actual layout of files across replicas is totally immaterial, for small files, not only must the files be replicated, but it is also necessary to store each file exactly at the same location in each replica of the small file volume. Were this not the case, the same ID could not apply to different copies of the same file.
[0971].
[0972] small file directory ("sfr") and a conventional name made of two subcomponents. The stem of the file name ("*") is not a wild character or a regular expression (Unix system calls do not take wild card or regular expressions: any character is interpreted literally because expansion of wild cards or regular expression is performed within libraries or applications before the system is invoked); it is a conventional way to tell MaxiFS that this is not a real file name, but that the system must create a small file and pick the appropriate name for it. The suffix of the name (".jpg") is one possible suffix, any others (including a null suffix) can be chosen. However, the suffix is stored with the file and the file name generated and retrieved with statement 5 will be made of the string representation of the USFID with the suffix selected (in this case, ".jpg"). Use of the directory . The key points from the above description are the following ones:
[0973] 1. Each file stored in a small file repository has a pathname under the virtual directory named "sfr", under the mount point of a MaxiScale storage infrastructure. This name refers to a virtual entity that is accessible to MaxiFS clients via an abstraction the MaxiFS client software implements.
[0974] 2. The above directory has virtual subdirectories: one for each peer set in the infrastructure. Each such subdirectory has a name that is represented by an 4-character long hexadecimal string that corresponds to the numeric ID of a peer set (in the general case, such subdirectories will contain leading zeroes in their name). The enumeration of one such virtual subdirectory yields the list of files stored in the small file repository of the corresponding peer set. Further virtual subdirectories exist, to limit the number of entries in each, as explained ahead.
[0975] 3. With respect to normal files, small files that adhere to this design have some restrictions that have been briefly mentioned, namely:
[0976] a. Their length cannot exceed a system-wide predefined limit.
[0977] b. Any rename within MaxiSFR is only possible if the name complies with the USFID-based conventions and implies relocation of the file to the area to which the new name points.
[0978] c. They can only be extended to fill the last logical block of the file, if not already full (i.e., so that the number of logical blocks the file uses does not change, although the file's length in bytes may change). Otherwise, the name (that contains the count of blocks used would have to change as well).
[0979] d. Existing small files can be overwritten, as long as the number of logical blocks they span is not increased.
[0980] e. The creation of a small file by name (used mainly to restore dumps) is generally only possible if the physical storage implied by the name within the small file repository is available. This name will include the name of the virtual directory that identifies the peer set where the file is to be stored
5. Design
[0981] This section of the document has the purpose of describing the design of the exemplary Small File facility for MaxiFS in more detail.
[0982] The MaxiFS small file repository is made of the collection of all the small file repositories each peer set in the system makes available. The aggregation of the individual repositories is called Small File Repository (or SFR, for short) and is global to the name space of a MaxiScale system, as required by R0. Each individual repository stored on a peer set is called Peer Set Repository (or PSR, for short). Each PSR is replicated across all members of a peer set, in the sense that each copy on each member of the set is identical in size and content to those of the other members of the set and they all evolve in lockstep. The individual PSRs are fully independent entities, each associated to a "virtual subdirectory" of the global SFR whose name is the hexadecimal string that represents the peer set ID of the peer set hosting the PSR. When a new peer set members joins a peer set, the new member needs to copy the content of its small file repository from its peers. The copy of the PSR stored within each peer set must be identical to that of the other members of the set. This does not require that the file system volumes used for this purpose need to be identical, but implies that the actual space available will be the smallest available among the members of the set (all have to adhere to the most restrictive constraints) and that existing members cannot be replaced with new members that have file repositories smaller than the highest block number used by a file already allocated within the PSR.
5.1 Layout of the PSRs
[0983] Within each individual peer set member, a portion of the disk drive is set aside as a partition to be used as a member of the local PSR. Since the three repositories in each of the members of a peer set are identical and evolve in lockstep, in the following all the discussions relative to PSRs are meant to apply to each of the three members.
[0984] If the PSRs had to contain files all of the same length, the management of the each PSR would be very straightforward, in that the entire PSR could be subdivided into slots all of the same length and one would only have to keep track of which slots are full and which are empty. The small file facility for MaxiFS enforces a maximum length for small files (requirement R2). Files exceeding this length cannot be stored making use of the facility and should rely on the general purpose file system.
[0985] When variable-length files come into play, a simplistic implementation could allocate space for each file as if all the files had the maximum allowed length, regardless of each file's actual length. However, given that small files go from one to a predefined maximum number of blocks, this would result in a very poor space allocation, with major waste of storage due to internal fragmentation.
[0986] Hence, in an exemplary embodiment, space is allocated as a multiple of the "logical block size". This value is set to 1 Kbyte, so that small files can make efficient use of the space available, limiting internal fragmentation. So, the smallest file in a PSR will take 1 Kbyte on the storage repository. The actual storage space used on disk will be a multiple of 1 Kbyte. The initial portion of the storage area for each file contains all the relevant file system metadata, as in any other FreeBSD file. This includes creation time, modification time, user and group ID of the owner, access permission bits, file length in bytes, etc. (requirement R8). In addition to this, the metadata portion of a file also contains other information of relevance for the PSR, such as the string that represent the suffix of the file and a checksum for the file.
[0987] Since each of the files stored in the SFR is going to take up a variable number of logical blocks, it is necessary to do some bookkeeping to do this. Namely, the software that manages each PSR must be able to:
[0988] 1. Find a number of contiguous blocks needed to store a file of given length.
[0989] 2. Identify the number of blocks that a file spans, without having to read the file's metadata.
[0990] There are various ways to manage the empty space for variable length files. However, the most efficient is a bitmap in which each bit is associated to a logical block. When the bit is set to 1, the logical block is in use; otherwise, the logical block is empty. A bitmap is convenient in that it allows to easily find regions of contiguous free space large enough.
[0991] In addition to this, each PSR also needs to keep track of the suffixes of the files stored in the PSR. This speeds up the enumeration of files in the repository. Therefore, a table must be associated to the repository, where such suffixes are stored.
[0992] Finally, each PSR must contain a header that stores information that is global to the PSR and defines its structure. The following information is stored in this area:
[0993] The version of the PSR. Over time, newer layouts may be necessary and this field allows discriminating among them.
[0994] The size of a logical block in the PSR. This might differ for different PSRs.
[0995] The size of the PSR in blocks.
[0996] The index of the block where the free space bitmap for the PSR is stored and the bitmap's length in blocks.
[0997] The index of the first available block in the repository where small files can be allocated, along with its length in blocks.
[0998] The number of files stored in the PSR.
[0999] The PSR is partitioned into three regions:
[1000] 1. The PSR header that describes the characteristics of the PSR, as explained above.
[1001] 2. The free space bitmap.
[1002] 3. The actual file repository.
[1003] Since each member of a peer set has a mirror copy of the PSR, the information stored in the three regions must be identical among the peer set members.
5.2 Small File Operations
[1004] This section and its subsections describe the operations that can be carried out over the small file repository and the way they are implemented.
5.2.1 Generalities
[1005] In the SFR, directories can be neither created, nor deleted, nor renamed, nor can directory attributes (including access permission bits) be changed. In reality, these are "virtual directories" made visible only to ease the enumeration of the files they contain. However, it is desirable to support the ability of clients to change the current directory of a process to any of these virtual directories.
[1006] Each PSR corresponds to a virtual subdirectory of the global SFR whose name is the hexadecimal string corresponding to the ID of the peer set that hosts the PSR. As will be seen in the following subsection, these PSR directories have child virtual directories, as well. Keep in mind that the system gives a view of the SFR in terms of such virtual directories, which, however, have no corresponding data structures on disk. They are visualization abstractions, only needed to give a more structured view of the SFR and of the PSRs.
[1007] The only pathname operations possible in any of the virtual directories are the enumeration of the content of the directory itself, along with the creation and deletion of files. Note that files are and can only be created and deleted at the lowest level of the PRS directory hierarchy, which is balanced.
[1008] As for files, creation (anonymous and with a name) and deletion are supported. Renames within the SFR are allowed only if the new name corresponds to the number of blocks that constitute the file and the range of blocks spanned by the new name is free. Otherwise, the rename operation will be rejected. Clearly, it must be possible to open a small file by name for reading, writing or both.
[1009] The ownership of the virtual directories that appear in the SFR name space is attributed to the system. All such directories have standard access rights that allow all users read, write and execute permissions.
[1010] The file operations that entail updates to data and metadata are managed in the same fashion as they are for regular files.
5.2.2 Client-Side Interactions with the SFR
[1011] The MaxiFS client driver has to behave specially in interacting with the SFR. Whereas for normal files, the client driver uses a system-wide hash table to determine which peer set is responsible for managing a given file or directory, on the basis of the pathname, in the case of the SFR the client needs to identify the fact that the target is a small file from the pathname. This is easy to detect in that the pathname of the object of interest must have the name of the SFR as its first component. Then the client driver has to look at the name of the first level directory under the SFR name expressed as a hexadecimal string and must translate it into the ID of the peer set to which it needs to send its request. The entire pathname must then be sent to the PSR of the appropriate peer set, along with the request to be processed.
[1012] In addition to this, the client needs to interact with the SFR in one of two modes. Some interactions are identical in nature to those used for other types of files. These include opens in write-only mode, file creates performed by name, file deletions, directory enumerations, normal reads and writes, etc. These types of interactions hide all the peculiarities of small files on the SFR server side. A special set of interactions is specific to the SFR and implements the special semantics needed to guarantee the 1 I/O operation in reading small files. There are two interactions in this set:
[1013] 1. The creation of files performed by leaving the choice of the name to the server (on the basis of the location and of the size of the file). The reasons why this interaction is special are essentially captured by the previous example and consist of identifying the peer set whose PSR will contain the new file, of performing a request to create a file whose name is not specified, passing along all the file data and then retrieving the name the SFR server generated for the file.
[1014] 2. The aggregation of opening a file for read, reading its content and closing it, by reducing it to a single I/O operation on the server. This consists of forwarding an open request that includes the read mode, whose reply (in case of success) contains all the small file data. The latter is cached on the client until a subsequent read from the client fetches the data itself to the requesting application.
[1015] Details on the individual operations on the server side are in the following subsections.
5.2.3 PSR Enumeration
[1016] Enumerating of all the files in the PSR corresponding to a given virtual subdirectory of the SFR and associated to the peer set ID reduces the number of items to be enumerated with respect to a global enumeration at the SFR level. However, given that 40 bits in an USFID are devoted to identifying files within a PSR, there is still the possibility of having to enumerate up to 240 (≈1012) files, which would create problems to user-level utilities and would be in contrast with requirement R5. Therefore, this 40-bit name space (this corresponds to using 5 bytes in the file's USFID) is further partitioned in such a way that each virtual subdirectory has no more than 210 (1024) entries. This entails that within a PSR there is a virtual hierarchy made of 4 levels of directories and that files only appear on the bottom level of such hierarchy. The consequence is that in a case like the one shown in the previous example, the file corresponding to USFID: "ABCD000000000510" (note that each of the pathname components below the virtual directory associated to the PSR is constrained to spanning the hexadecimal range 0x000-0x3FF, which is not true of the name of the file itself that includes two extra characters that encode the file length), would have the actual pathname: "/MaxiFS--1/sfr/ABCD/000/000/000/00510", rather than: "/MaxiFS--1/sfr/ABCD/000000000510".
[1017] According to this arrangement, all files whose starting block is within the range of a given block range of the entire PSR corresponding to a virtual subdirectory only appears in that virtual directory, although the file might include blocks associated to a subsequent virtual directory. For example, a file starting at block 0x3FE and 3-block long could have a USFID of "ABCD00000003FE03" and would be listed as "ABCD/000/000/000/3FE03" under directory "ABCD/000/000/000", despite the fact that the last of its blocks is in the block range that falls under directory "ABCD/000/000/001".
[1018] The enumeration of intermediate virtual directories (all the directories in the SFR, including those associated to the PSR and excluding the leaf directories that may contain actual files) is trivial and purely virtual. It simply consists of enumerating the full hexadecimal range available (0x000-0x3FF), excluding the items that would correspond to blocks beyond the size of the volume containing the PSR. So, this is purely computational and requires no disk access.
[1019] The enumeration of the leaf directories requires access to the disk. A way of enumerating the files within a given virtual subdirectory of a PSR is that of starting at the location of the PSR bitmap that corresponds to the virtual subdirectory being enumerated, looking at the next bit that is in use, accessing the metadata information in the corresponding block and reconstructing the file name from the offset of the starting block and by the length of the file. However, since the file suffix should be reported (requirement R3) and this is not implicit in the file location, it is necessary to do two things:
[1020] If the file has a non-null suffix, this should be retrieved from the file metadata that would store it when the file was created.
[1021] The suffix would then be added to the file name built out of its location, length, etc.
[1022] Because of the need to traverse the bitmap and to read the metadata for each file, in order to reconstruct its name, enumerating a directory would not be a very fast operation. In order to enumerate files on the basis of a bitmap, the PSR management software must know at which offset the files start in the volume. The simple indication of the fact that a logical block is in use is not sufficient for this. Effectively, a special marker is needed for the blocks that start a file.
[1023] Also, the same data structure used to identify the starting block for a file would lend itself to optimizing the enumeration for files with no suffix. This can be done by transforming the PSR bitmap to use a pair of bits for each block instead of a single one. This doubles the size of the bitmap. However, the size would still be contained. In the case of a 1 TByte PSR, the bitmap so extended would take just 256 Kbytes.
[1024] The extended bitmap would then mark the various blocks with two bits per block, according to the following signatures:
[1025] 00 Free block.
[1026] 01 Busy intermediate block. This is a block within the body of a file.
[1027] 10 Busy block that starts a file that does not have a suffix.
[1028] 11 Busy block that starts a file with a suffix.
[1029] The enumeration algorithm should then simply look at the extended bitmap starting from the offset corresponding to the range of blocks belonging to the virtual directory to be enumerated and operate as follows:
[1030] 1. Examine the bitmap until as many files as counted in the PSR header are encountered.
[1031] 2. Skip free blocks (signature: `00`) and busy blocks in the middle of a file (signature: `01`).
[1032] 3. For busy blocks that start a file and have no suffix (signature: `10`), reconstruct the file USFID from the location of the starting block and from the length of the file (computed from the first free block or the next header block after the current header block) and transform it into a file name string.
[1033] 4. For busy blocks that start a file and have a suffix (signature: `11`), reconstruct the file USFID from the location of the starting block and from the length of the file (computed from the first free block or the next header block after the current header block), read the file header to retrieve the file suffix and transform the USFID and the suffix into a file name string.
[1034] File operations are dealt with in a slightly different fashion, depending on whether they entail metadata or data updates. If they do not, the requests are carried out in a round-robin fashion by the available members of the peer set. However, if they entail metadata or data updates (as in the case of create, delete, rename, write and fsync), it is the primary member of the set that carries out the requests by coordinating the updates that affect all the copies of the PSR on each of the peer set members and by acknowledging the requesting client only when all the peer set members are in sync.
5.2.4 File Creation
[1035] File creation requests are carried out by the primary member of the peer set.
[1036] To create a file in the SFR, there are two possibilities: either the file is created by specifying its name (this would be mostly done by restore operations), or the name must be chosen by the system (this is the prevalent mode of operation and the caller is allowed to specify at most the file suffix).
[1037] In the first case, the client has chosen a name: the name encodes the number of logical blocks in the file, along with the offset of its starting logical block. Therefore, the system can decode this information from the file name and use it to check that none of the logical blocks between the starting offset and the last logical block of the file to be created is currently in use.
[1038] At this point, if the logical blocks are free, they are allocated to the file and the client is allowed to write up to the file length encoded in the file name. In case one or more blocks are in use, the outcome depends on the identity of the client and the permission bits for the affected files. If the effective identity of the client is compatible with the overwriting of all the files in the block range used by the new file, the blocks in use are freed (by automatically deleting the files to which they belong). Otherwise, the request is rejected. The same applies when the new file completely overlaps an existing file.
[1039] When the new file is created, in case a close occurs before the file could be written, all the blocks are zeroed out. In case communications with the client are lost or no close is performed within a reasonable time period, the file is deleted and the allocated blocks are freed.
[1040] A previous example highlighted the sequence of calls that a client needs to perform to create a new small file by letting the system choose its name. In this case, the file cannot be created right away because the name is tied to its size and the server needs to receive the indication that all the data is available before allocating the necessary space, committing the data to disk and returning the file name. On return from the fcntl( ) invocation (statement 5 in the example), the file name is returned to the client that closes the file and can make its content available.
[1041] Note that in allocating space for a file in the SFR, various strategies can be envisioned. One possibility is that the first time since reboot a client invokes the target peer set in a totally random fashion among the available peer sets. In case the peer set cannot grant the request because not enough space is available in its PSR, the client goes to the peer set that has an ID higher by 1 (modulo the number of peer sets) to repeat its request, until a suitable PSR is available. Each client keeps track of the last per set to which it addressed the last creation request (excluding the ones that specify a file name explicitly) so that the following request chooses a target according to the same scheme used to reiterate a failed creation request. This allows the distribution of files in a random fashion.
[1042] Another possibility is that of having the client keep track of the PSRs which have larger unused capacity and of addressing the next request to the first in the list, to the following one if the request is rejected, and so on.
5.2.5 File Deletion
[1043] File deletion requests are carried out by the primary member of the peer set.
[1044] The deletion of a small file is a fairly straightforward process. Assuming that the effective identity of the requesting client is compatible with the access rights of the file to be deleted with regard to the deletion operation (since all the virtual directories offer write access to all users, the only discriminating item is whether the file itself is writable by the caller), the operation is performed and the relevant file blocks are returned to the free pool.
5.2.6 File Rename
[1045] File rename operations involving the SFR are not supported. If a file needs to be moved out of the SFR, it can be copied and the original can be deleted. The reverse is also possible, as long as either the approach used in the example is used, or the caller has chosen a file name that corresponds to free areas of the relevant PSR and the file is large enough to contain the amount of data to be copied. However, these operations are not performed by the SFR infrastructures and applications need to perform these steps explicitly.
5.2.7 File Open
[1046] A file open is always by name. For the SFR to deliver its intended performance, open and read are performed as a single action. Other open modes relate to the read-write, write, create, truncate and append mode.
[1047] The create mode is treated as for a create request (see above). The truncate and the append mode are not supported for small files (the truncation could be supported by keeping the blocks allocated to the file and reducing its length in the file metadata).
[1048] For read-only, read-write and write modes, the PSR service behaves as follows. The open is successful if the file name exists and the access permissions are compatible with the read request. However, to reduce the number of I/O operations to 1, the target PSR service (that caches the bitmap for the PSR it manages) proceeds as follows:
[1049] 1. It verifies from the bitmap that a file corresponding to the name exists, starts at the specified block offset and has the specified length (the suffix is ignored, initially).
[1050] 2. Then it performs the single I/O operation needed from the disk to read the contiguous file blocks into a buffer of appropriate length.
[1051] 3. Then it checks the file access bits against the identity of the requestor. If the request is not compatible, the data read in is discarded and the requestor receives a negative acknowledgement.
[1052] 4. Then it checks that the suffix (if any) corresponds to the one specified in the request. If there is no match, the data read in is discarded and the requestor receives a negative acknowledgement.
[1053] At this point the behavior differs depending on the open mode.
[1054] 1. In case of opens in read-write or write mode, the primary member of the peer set needs to coordinate the request.
[1055] 2. In case of opens in read-only or read-write mode, if all the above succeeds, the PSR service returns the data to the client with a positive acknowledgement for the request. The client caches the data so that the subsequent read requests on the file can be satisfied from the cached data.
[1056] 3. If the open is in write-only mode, the data is not returned to the client, but the PSR service keeps it in memory, so that subsequent write requests can be merged with the existing file data before they are written out.
[1057] 4. If the O_SYNC mode is requested, this has an impact on the behavior of write operations (see below).
5.2.8 File Read
[1058] File read operations are possible and are expected to be used when a file is opened in read-write mode. The inclusion of the read mode in the open causes the small file data to be returned to the requesting client with the open acknowledgement. So, theoretically, isolated reads should be of very little use. Nevertheless, the SFR service honors them.
5.2.9 File Write
[1059] File write operations are coordinated by the primary set member because it must make sure the other members of the set are in sync before an acknowledgement is returned to the requesting client.
[1060] Writes are limited to the length of the file specified in the file name. They can actually exceed the file length at any time as long as they do not go beyond the last block of the file.
[1061] If the O_SYNC flag is set in the open request, all writes are committed to disk as they are received and the client is sent back an acknowledgement only when the data is on stable storage. If the above flag is not set, the client request is acknowledged as soon as the peer set members have received the data to be written and the coordinator is aware of this.
5.2.10 File Sync
[1062] This primitive must be coordinated by the primary set member. It makes sure all the data cached in the server for a given file is written out and an acknowledgement is sent back only when all the members of the set have the cached data on stable storage.
5.2.11 File Close
[1063] File close has no practical effect for files open in read mode. However, in the case of files open in ways that include the write mode, it causes any data cached in the server that pertains to the given file to be scheduled for being flushed out. Acknowledgements to clients are asynchronous with respect to the flushing of the data. However, if the O_SYNC flag is set in the open request, the acknowledgment is synchronous to the close, although because of the flag, the data must have already reached stable storage.
6 SFR Backups and Restores
[1064] This section provides some details on how the files stored within the SFR can be backed up and restored to MaxiFS platforms or to other systems.
[1065] Performing backups and restores of the SFR is expected not to require special utilities. The purpose is that customers should be able to use whatever utilities they have available without having to adapt to ad hoc programs.
[1066] This is possible for the following reasons. The SFR is seen as an integral part of the hierarchical MaxiFS name space. Therefore, whether a backup utility targets the SFR portion of the name space, one of its subdirectories or the entire MaxiFS name space, the ability to traverse the entire name space and to read and write files in the SFR is part of the design.
[1067] The names of files stored in the SFR are artificial and cryptic. Nevertheless, the entire SFR hierarchy can be copied to other file systems that are large enough to contain it because the names are compatible with those used in the standard Unix file systems.
[1068] The restoration of other types of hierarchies to the SFR is not possible, unless the names the files and directories use are compatible with those used within the SFR and the names map to locations and peer sets that exist in the target SFR name space.
[1069] The restoration of backups to an existing SFR is possible if the number of peer sets the target SFR has available is not smaller than that of the SFR (or portion thereof) that was backed up and the size of the drive volumes used in the target SFR is not smaller than that of the source SFR. This is possible because, with appropriate privileges, any utility can overwrite existing files in the SFR. The best practice, however, is that of wiping out the content of an SFR or of a subset being restored before overwriting it with the content of the backup.
7 Peer Set recovery and Remote Replication of an SFR
[1070] In the normal case, peer sets have three active members. It is possible that during the normal system operation, some nodes may become unavailable and may have to be replaced by others. For this to work properly, the following is envisioned:
[1071] The metadata that implements the normal MaxiFS name space hierarchy must be copied to a new member of the set so that it is completely in sync with the other members. This is a logical operation that does not imply any specific restrictions on the file systems and volumes that implement such metadata hierarchy, as long as the available space in the new peer set member allows this hierarchy to be copied.
[1072] Since the members of a peer set have identical copies of their PSRs, it is necessary to make sure that new members brought into the set are updated with respect to their copy of the PSR. As mentioned, a new member cannot have a PSR volume that is not large enough to contain the file that uses the block with the highest number.
[1073] Assuming that the size requirement of the new peer set member is met, the fastest way to synchronize the new member of the set is that of providing a volume copy facility integrated with the PSR service. What this entails is the following. When the PSR needs to be updated, the source PSR initiates a volume copy to the target peer set member. As long as at least two members of the peer set are fully operational, update operations in the PSR can progress normally. Read-only operations are only supported by the members that are in sync. Whenever a new update operation coordinated by the peer set primary member is requested, the member being updated should take a look at the disk offset the copy has reached. Any operation that relates to portions of the volume that have been updated already can be updated with the new operation requested. The ones beyond the location being copied need not be updated because they will be updated when that section of the volume is copied.
[1074] The volume copy facility can be used to update remote copies of the infrastructure, by copying the individual volumes.
VI. Conclusion
[1075] All of the references cited above are hereby incorporated herein by reference in their entireties.
[1076] Although certain embodiments of the invention are described above with reference to FIG. 4B, which shows a single client, it should be understood that a storage system may include multiple clients, each having a FS client component that communicates with the FS server components over the network. Each FS client operates independently to service requests received from the filesystem in its respective client device.
[1077] In the embodiments described above, the FS client and the FS server components are additional components that are installed respectively into the client and the storage provider. It should be noted, however, that some or all of the FS client functionality may be integrated into the filesystem 414 or other client component (e.g., a client operating system), and some or all of the FS server functionality may be integrated into the storage processor or other storage provider component (e.g., a storage provider operating system). Thus, for example, embodiments of the present invention may include a filesystem with integrated FS client functionality, a storage processor with integrated FS server functionality, and an operating system with integrated FS client and/or FS server functionality.
[1078] It should be noted that, because the FS client components and the FS server components communicate with one another, such communications do not need to comply with a standard network file protocol such as NFS or CIFS. In a typical embodiment, such communications utilize a specialized protocol that allows for interchange of storage management information such as, for example, the locations of files within the storage system, movement of files within the storage system, replication of files within the storage system (e.g., for redundancy or load balancing), and tasks to be performed by the various storage providers, to name but a few. The specialized protocol provides for communication between FS client and FS server (e.g., for satisfying application requests) as well as between FS servers (e.g., for managing storage and reporting statistics).
[1079] It should also be noted that, because the FS clients and FS servers resolve pathnames based on a hashing scheme, the storage system does not need a separate metadata server for translating pathnames. Furthermore, pathnames are resolved in a single operation.
[1080] It should also be noted that, when multiple instantiations of a file are stored in different storage providers (e.g., for load balancing), rather than having the target storage provider return to the client a list of storage providers having copies of the file and allowing each client to select one of the storage providers (e.g., randomly or via a policy-based scheme), the target storage provider may return a different one of the storage providers to different clients so that each of such clients accesses the file through a different storage provider.
[1081] It should also, client, server, computer, or other communication device.
[1082] It should also be noted that the term "packet" is used herein to describe a communication message that may be used by a communication device (e.g., created, transmitted, received, stored, or processed by the communication device) or conveyed by a communication medium, and should not be construed to limit the present invention to any particular communication message type, communication message format, or communication protocol. Thus, a communication message may include, without limitation, a frame, packet, datagram, user datagram, cell, or other type of communication message.
[1083] It should also be noted.
[1084], the FS client and the FS server components are implemented in software that is converted into a computer executable form, stored as such in a computer readable medium, and executed by a microprocessor under the control of an operating system.
[10.
[1086] The computer program may be fixed in any form (e.g., source code form, computer executable form, or an intermediate form)87]).
[1088]89] The present invention may be embodied in other specific forms without departing from the true scope of the invention. Any references to the "invention" are intended to refer to exemplary embodiments of the invention and should not be construed to refer to all embodiments of the invention unless the context otherwise requires. The described embodiments are to be considered in all respects only as illustrative and not restrictive.
[1090] The embodiments of the invention described above are intended to be merely exemplary; numerous variations and modifications will be apparent to those skilled in the art based on the above teachings. All such variations and modifications are intended to be within the scope of the present invention as defined in any appended claims.
Patent applications by Bruno Sartirana, Loomis, CA US
Patent applications by Don Nguyen, Tracy, CA US
Patent applications by Ernest Ying Sue Hua, Cupertino, CA US
Patent applications by Francesco Lacapra, Sunnyvale, CA US
Patent applications by I. Chung Joseph Lin, San Jose, CA US
Patent applications by Kyle Dinh Tran, San Jose, CA US
Patent applications by Nathanael John Diller, San Francisco, CA US
Patent applications by Peter Wallace Steele, Campbell, CA US
Patent applications by Samuel Sui-Lun Li, Pleasanton, CA US
Patent applications by Thomas Reynold Ramsdell, Palos Verdes, CA US
Patent applications by Overland Storage, Inc.
User Contributions:
Comment about this patent or add new information about this topic: | http://www.faqs.org/patents/app/20130013654 | CC-MAIN-2014-52 | refinedweb | 75,936 | 57.2 |
That's pretty much what I want. I need to create my own base with this program, I need to edit this information and in the end, before I go out of the program I want it to be saved (not during...
That's pretty much what I want. I need to create my own base with this program, I need to edit this information and in the end, before I go out of the program I want it to be saved (not during...
You say I should do this line by line or I misunderstood?
Thanks for help! There are also few good ideas that I could use BUT:
first of all I'd like it to be fully-working program. For now I have two classes: movie and customer. I can add and delete...
I use counter which role is to remember how movies are in base, I write an example so You could see what I mean:
int id_movie=0;
(...)
movie_base[id_movie].setMovie();
id_movie++;
Because it's not the first time today I see someone's mentioned these vectors I decided to give it a shot... and I'm confused a litte.
So I have my class movie, I declare vector:
vector...
Hello there!
I'm in a middle of my DVD rental project. It's my first bigger thing in c++ and I had my c++ classes long time ago, so I have a few issues and difficulties.
So, I have a class:
...
Hey, I'm reading now a C++ tutorial from cprogramming.com, lesson about a structures. And I see now and example of pointers in structures:
#include <iostream>
using namespace std;
struct xampl...
Ohhh god, I'm so stupid. I've never considered that function can be something else than void or int :o
Although not everything is okey.
case 1: system("cls");
...
So, in this example it will be?
int
DSWbalance(struct
tree *root)
{
/*algorithm*/
return root;
}
I'm thinking of something like:
struct tree *mytree;
/*code code*/
head=DSWbalance(head);
int DSWbalance(struct tree *root)
{
What I have in mind is this: x=x+y (x is from now old x plus change)
My line of thought goes like this:
1. My tree points to sth, DSW changes what my tree point to -> DSW changes sth.
2. Now I...
Understood but my mind cannot convert it into C. I think I'm to sure what's mine. My balance function should contain return balance()?
Ah that's my bad, I must have missclicked something: that's what I have:
int DSWbalance(two *p){
struct tree *q;
int i,k=0,nodecount;
for(q=p,nodecount=0;q!=0;q=q->right)...
Yes I did, and it's confusing. All I need now from that article is vine_to_tree and tree_to_vine functions right? It's probably because of C/C++ differences, but there are some things I don't get....
int leftrotate(two **p) {
struct tree *tmp;
int data;
if(*p==0 || (*p)->right==0)
return 0; //no right node
tmp=(*p)->right;
(*p)->right=tmp->left; //move node
tmp->left=(*p);
p->right=tmp->right;
Right my bad, obviously, there should be
@edit: No, I'd rather kill myself than finish that project. Now rotations doesn't work...
p->right=tmp->left;
anduril462, that...
I must admit pointers are my pain in the ass. In my previous assignment I had like 1000 lines of code and all I did was avoiding pointers (I had 0 of them). But this time is no way to accomplish my...
I changed rotation a bit, so it's now:
int leftrotate(two *p) //rotates once
{ struct tree *tmp; int data;
if(p==0 || p->right==0)
return 0; //no right node
...
Yes I know it's not AVL. That was my first concept (so i.e. comment You see it's what's left), but I changed my mind 'cause I'm can't manage AVL. All I want now is two simple binary trees and one of...
Hmm... when I do so, new root is 0, and height of the balanced tree is 0 as well, so still something is not correct right?
Ye, it's already changed to
typedef struct tree
{
int val;
struct tree *left;
struct tree *right;
}one,two;
Hi! I got this little assignment on fundamentals of computer programming. Task is about creating two binary trees (simple binary tree and balanced one) and comparision of it. I've got a whole code....
In another part of my code which You can't see, there is a edition tool. It bases on strcmp(). Actually it's like line 55, but it goes this:
printf("What is the tiitle of the book?\n");...
Is there any idea to print it nicely?
There are unused variables because this is not a whole code, just 1/7.
I'm aware that there is better way to do this, but my deadline is Monday, and with my C knowledge it would take a week to... | https://cboard.cprogramming.com/search.php?s=b2fd929d72bd51c0372fa8a64e2d8bc0&searchid=4922524 | CC-MAIN-2020-24 | refinedweb | 838 | 82.75 |
Closed Bug 281988 Opened 18 years ago Closed 17 years ago
Stop sharing DOM object wrappers between content and chrome
Categories
(Core :: DOM: Core & HTML, defect, P1)
Tracking
()
mozilla1.8beta2
People
(Reporter: jst, Assigned: brendan)
References
Details
(Whiteboard: extension buster? needed for b2)
Attachments
(22 files, 14 obsolete files)
Since the the beginning of time we've always shared JS wrapper for our native DOM objects between content and chrome, and this has always caused security problems, some we've found, others I'm sure we haven't. Our approach in tackling these problems so far has been to use Caillon's XPCNativeWrapper() helper in JS to reach the actual DOM properties from chrome and not properties the page set. But that means that developers need to be aware of this problem, and lots are, but not all, and we're all human so occatinally we forget to use the wrapper and we introduce potential security problems. Not sharing JS wrappers between content and chrome is fairly easy, but it has its consequences. Like XBL bindings must not be attached more than once just because a content DOM object is accessed from chrome. And not only that, but XBL properties etc that were attached to the content DOM object won't be visible to chrome. I imagine that it's ok for us (at least from a technical point of view) to make content XBL unreachable from chrome, and that's what my patch does, and it doesn't appear to break anything obvious (that I've seen so far), but it may break other apps, who knows... I'll attach a patch for others to look at that makes us not share JS wrappers between content and chrome (but we'll still share among chrome windows, and among content windows, of course). Please comment and share thoughts on this issue. I believe that at some point in the future we're going to have to make a change like this, and the sooner we do that the easier it'll be for us. But yeah, it would've been even easier to do this 3 years ago... We could, though it won't be easy, provide a mechanism for chrome code to specify on a per JS stack frame basis (or whatever) that it wants to temporarily share JS wrappers, but I don't know yet that we'll need to do that. Thoughts?
Oops, that can't be the right patch -- it looks like a s3kr3t plugindir one I reviewed recently! ;-) /be
The change, as described, seems pretty reasonable to me. The XBL thing may bite some extensions, but the really popular ones that I can think of that bind XBL to content (flashblock, specifically) should be ok...
Yeah, duh, that was the right patch filename, but wrong directory. This is what I meant.
Attachment #174106 - Attachment is obsolete: true
Does this include things like frame names reflected into the window object? This might break a few odds and ends such as the XUL directory listing.
If chrome accesses an object and sets properties on the wrapper, can content later access those properties? I can't quite tell from the patch, but the code in the patch seems asymmetric wrt chrome and content...
What about things like DOMI? The JS object browser in DOMI would be seriously affected by this, correct?. --BDS
If it turns out that we do need to be able to access the content-wrapper from chrome, maybe one option would be to have some XPC method that allowed you to manually fetch the content-wrapper given a chrome one. Rather then doing it per window or some such which seems like it's easy to forget that you've done for a certain window.
To clarify comment 6, if content can access the chrome-wrapper (say chrome ends up doing JS access first, then content does JS access), then we still have a problem...
(In reply to comment #6) > If chrome accesses an object and sets properties on the wrapper, can content > later access those properties? I can't quite tell from the patch, but the code > in the patch seems asymmetric wrt chrome and content... Yeah, the code is asymmetric, but we want that. We only ever want to "prevent" chrome from seeing JS wrapper changes that were done in content, the other way around we don't want to prevent anything, but in reality all access is prevented the other way around since content can generally never access chrome. In the odd case where content code has requested UniversalBrowserRead/Write then IMO content code *should* be able to see chrome wrappers n' all, since there's nothing we want to hide going that way, IMO. (In reply to comment #7) > What about things like DOMI? The JS object browser in DOMI would be > seriously affected by this, correct? Correct, this screws DOMI in a big way. Need to think about what to do there... >. I don't think so. This is not about protecting one domain from seeing or not seeing something in another domain, this is strictly about making chrome development safer. And no matter what wrapper is created for any given domain, there's much more to protect than what's on the wrapper, most of what we're protecting is in the underlying object that's the same no matter what wrapper is used.
i gave bz a list of things i expected to be screwed when i first saw this bug reported, (i didn't list domi, i presumed someone else would cover it for me), domi-sidebar, xul error pages (bz said biesi would check on it), my company's product, my previous company's product, loading chrome in a tab (chrome://navigator/content/, chrome://messenger/content/, chrome://chatzilla/content/, or ...), loading *any* chrome url in winEmbed/mfcEmbed/gtkEmb .... :)
timeless, I think you have the wrong bug... This isn't the bug about dropping chrome privs for chrome:// uris in a content docshell.
So I thought some more about how to do this w/o breaking DOM Inspector etc, and how to provide a way for chrome to access the content wrappers in the rare case when it really *needs* to, which AFAIK won't ever happen in our apps (except in the DOMI case). Ideally we'd have a privilege-like mechanism that we could use in chrome to enable wrapper sharing only in a given scope, but unfortunately that'll be *really* hard to do w/o a lot of suck. We'd need to make XPConnect aware of this and make it check for this *every* time a wrapper is accessed from JS, even in the case where the wrapper already exists (which needs to be as fast as possible, now it's basically just a locked hash lookup). I don't want to mess with that code, it's performance critical. And approaches like providing a method that returns the content wrapper to chrome only goes so far, since any code (read XBL methods/getters/setters) on the content wrapper won't work as expected when accessed in the content scope, even if it's accessed through the right wrapper. So the alternative to me looks like a two-sided approach. 1: For apps like DOMI (and Venkman?) we'll add a global function on chrome windows that it can set that will from then on enable XPConnect wrapper sharing (of wrappers that have not been created thus far) between this chrome window and content windows. This function can only enable sharing, disabling it gets tricky (since XPConnect is caching the already created wrappers etc, so a predictable on/off is not easily doable). 2: For trancient access to a content wrapper from chrome code we'd introduce a new eval()-like method that would evaluate an expression in the given scope, and on the JS context of the given scope (i.e. pass it a content window and the code will be run on the content windows context, just as if it was accessed from the content window). The JS that is run in this method runs with the priveleges of the given scope, i.e. no elevated rights for the code that runs from within this new uber-eval(). Maybe this should be evalInContext(), or something like that, and I think we need a way to not only eval a string of JS, but also a way to call methods (and access properties) on objects from chrome, that means we'd need a way to pass arguments here. Anyone see anything obviously wrong with an approach like this? Or is there an easier approach here that I'm just not seeing? I'll start hacking on this, yell if I should stop.
i'd argue domi and venkman should probably fall into #2. i don't want someone to write a page that waits to attack a domi or venkman user. the code should be careful. and i do believe i meant this bug when i commented in it, i hadn't read or heard about the other bug at the time. and my implementations of all these things has always involved exiting the local <browser/> region and sticking things back into the <xul:window/> or vice versa.
(In reply to comment #11) [...] > xul error pages (bz said biesi would check on it), I can't imagine how this would break XUL error pages... [...] > loading chrome in a tab (chrome://navigator/content/, > chrome://messenger/content/, chrome://chatzilla/content/, or ...), nor would I expect any of these to break. > loading *any* chrome url in winEmbed/mfcEmbed/gtkEmbed/..., I can't see how this would change things in any way for embedd This shouldn't change anything for embedders, we're only changing how *chrome* JS sees content JS objects, embedders are in no way impacted by that that I can see. > .... :) I guess I'm not getting what you're saying here...
(In reply to comment #14) > i'd argue domi and venkman should probably fall into #2. i don't want someone to > write a page that waits to attack a domi or venkman user. the code should be > careful. The whole point of DOMI and Venkman is to see what's on the webpage, what's on the JS objects etc, this change won't open up any new security exploits that we don't already have, so noone gets more vulnerable due to this change. And this is merely about preventing content from shadowing DOM properties from chrome, no matter what a webpage does should ever give content code any elevated priveleges, and if there's ever a bug that does that, this won't save us. I'd be all for only doing #2, but I don't have (nor has anyone else I know) the resources to make DOMI and Venkman use it, so #1 makes a whole lot of sense to me.
my version of xul error pages reached from a chrome page into a content page to play with the pretty error.
(In reply to comment #17) > my version of xul error pages reached from a chrome page into a content page to > play with the pretty error. Well then your version of XUL error pages is free to use either of these approaches here to get around any possible problems. And the only such problems would be if your code relied on XBL properties/methods in the content page (which seems rather far-stretched to me).
I claim we want this for the 1.8b2 milestone, to iron out any problems so it ships in Firefox 1.1. /be
Flags: blocking1.8b2+
Flags: blocking-aviary1.1+
While testing my extensions with this (for related tracking bug 289231), I noticed something unusual. My extensions broke, but *not* when they were triggered by the first page load in a new tab. Reloading a tab, or visiting some other site first in the new tab did result in breakage. Since breakage is probably the (unfortunately) expected result of this bug, I wonder if the current patch is missing some special case for a new tab. FYI, whoever is testing this. Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7.7) Gecko/20050405 Firefox/1.0.3
So, just to be clear: . If chrome-code stores a property/object on a contentWindow, will the property + wrapper be lost as soon as the chrome-function returns? . Presently, Adblock (among others) is broken, because it stores a per-page list of all blocking-metadata in each webpage's scope. It does this by adding an array on contentWindow._AdblockObjects, from chrome. The array now "disappears". If this is expected breakage, with no possible workaround, then I have to ask: what kind of security problems, besides "occatinally we forget to use the wrapper" is this patch really addressing?
(In reply to comment #22) > If this is expected breakage, with no possible workaround, [...] The patch is incomplete and there would most definitely be a workaround before this was landed for real. But since no existing code is written to use the workaround it's valid to test for breakage with this half of the patch. And boy, have we seen breakage! Far more than I expected, though we knew it was risky. > then I have to ask: what kind of security problems, besides "occatinally > we forget to use the wrapper" is this patch really addressing? Just that. The current model is the chrome author needs to remember to get content properties safely each time. The intent is to switch to a safe default and require the places that really mean it to explicitly code "get me the javascript property". In that model the downside of forgetting is you get nothing, as opposed to the current downside of forgetting being a potential security issue depending on how the data is used. Obviously this isn't going to fly on the branch. It'll have a bumpy landing on the trunk, too :-(
I vote to kill this patch entirely, before it does more harm. . Breaking the Intuitive + useful scoping between chrome and content, just because a few people "forget to use the wrapper", is a very poor choice. Perhaps borderline absurd. It: a) throws numerous extensions + descriptions + tutorials out the window; b) breaks a statistical majority of code against the future minority gain; c) inreases the chrome/extension-dev learning curve; d) doesn't appear qualified by any real security issues (unless forgetfulness counts). . Caillon's wrapper works. So why not leave well-enough alone?
My viewpoint is always about keeping the bar low for people trying to learn the scripting layer around Mozilla, so I put my "struggling learner" hat on while reading this. I vote against this fix because it just makes Mozilla-the-platform harder to learn and harder to use. Can anyone point to a fixed and a not-fixed script example for me to munch on, even if it's old code? - N.
This patch makes it easier, not harder, to write good code for the Mozilla platform. At the moment it is very easy to write highly insecure code, and learning how to work around it is hard. This patch makes it easy to write secure code, and it would only be hard to learn how to write potentially insecure code.
(In reply to comment #24) > just because a few people "forget to use the wrapper" You clearly have no idea what the scope of the problem is. I'd estimate that the wrapper is used in no more than 30-40% of the places it _should_ be used in in well-audited Firefox code (all of which postdates it). In most extensions it's not used at all. In SeaMonkey code that predates the existence of the wrapper it's rarely used. > a) throws numerous extensions + descriptions + tutorials out the window Yes, this is a problem. But looking at the code that is broken by this wrapper (various extensions mostly), it's broken because it's susceptible to being exploited by the web page. In other words, this patch is doing exactly what it should be -- preventing people from writing extensions with security holes, which is what they are doing now. > b) breaks a statistical majority of code against the future minority gain; It would fix about half a dozen existing open bugs that I know of. It would allow vast simplification of the code in both Firefox and extensions that care about security. It would keep extensions that don't care about security from being exploited. I'm not sure where you got the idea that there is "future minority gain". There is a very distinct gain here, if nothing else in a safer browser for our users and a lot less engineering time spent on whacking this issue in every single place it shows up. If it's of any interest to you, I saw another dozen or two places in Firefox context menu code that aren't using the wrapper and should be while I was looking up the code example for Nigel. This is code that's been audited to death already and uses the wrapper stuff extensively. > c) inreases the chrome/extension-dev learning curve; Frankly, no. What it does do is make it a lot harder to write extensions or chrome that have security holes by design. It makes it _easier_ to write safe code. That's the whole point, in fact. > d) doesn't appear qualified by any real security issues (unless forgetfulness > counts). Forgetfulness, just like any other human factor, is a real security issue. Having security policies in place that are easy to follow is a much better security architecture than having hard to follow ones (the latter is what we have now). (In reply to comment #25) > I vote against this fix because it just makes Mozilla-the-platform harder to > learn and harder to use. This seems to be a common misconception. Can you, or someone else, give a reasonable example that is: 1) Harder to do with this proposal. 2) Doesn't open one up to security holes. ? > Can anyone point to a fixed and a not-fixed script example for me to > munch on, even if it's old code? I'm not sure what you mean by "fixed" and "not-fixed". I can point you to an example of safe code as it has to be written now and code that would become safe with the proposed patch. The relevant code is in the contentAreaClick in browser/base/content/browser.js Current code (I left out some parts not needed to illustrate the point; the density of XPCNativeWrapper calls is not quite this high in the original code, because there's other logic in between them): wrapper = new XPCNativeWrapper(linkNode, "href", "getAttribute()", "ownerDocument"); target = wrapper.getAttribute("target"); if (!target || target == "_content" || target == "_main") { var docWrapper = new XPCNativeWrapper(wrapper.ownerDocument, "location"); var locWrapper = new XPCNativeWrapper(docWrapper.location, "href"); if (!webPanelSecurityCheck(locWrapper.href, wrapper.href)) return false; } Code as it could be written: var target = linkNode.getAttribute("target"); if (!target || target == "_content" || target == "_main") { if (!webPanelSecurityCheck(linkNode.ownerDocument.location.href, linkNode.href)) return false; } Note the vastly improved clarity and ease of authoring of the code.
Stupid question. Looking at the "broken" extensions, so we know if there are other ways to do what they are trying to accomplish? Or are some of these extensions relying on things that will never work?
See comment 13.
Boris: Unfortunately, since you did mention comment13: . Toggling wrapper-sharing at the window-level will only result in the popular extensions turning it on, making things "insecure" again. . As for evalInContext(), throwing away calling-privileges would kill access to functionality like: setting an in-page eventlistener that runs privileged code. Or: clobbering a native deliberately, from chrome, with an intelligent getter that returns the native code for all chrome-calls (but something else, for in-page calls). . After considering the contentAreaClick example: . Why not flag everything set by in-page code, at the time it's set. Whenever a flagged / potentially-clobbered value is accessed: a) the code in charge of retrieval would check for a "preferNative" flag in the calling scope-chain -- allowing window-level or isolated toggling; b) if the flag is found, the retrieval code would lookup the native value (if any). This would permit existing extensions to continue unbroken (most don't need to access in-page clobbers), while securing the greater areas of concern. The DOMI would simply need the "preferNative" flag, at window-level; something a scripted-overlay could externally lend.
make that: "preferLocal", sorry. Native-lookup would be default.
> Toggling wrapper-sharing at the window-level will only result in the popular > extensions turning it on, making things "insecure" again. That's the choice of those extension authors. They're free to write insecure code if they really want to (and others are free to publicise this fact, of course)... Note that the window-level thing is there for cases when explicitly annotating every access would be too hard. It's NOT the preferred method of doing things. > throwing away calling-privileges would kill access to functionality jst, would the proposal actually remove that functionality? I'm not up enough on my XPConnect and JSeng to tell.... For similar reasons, deferring to brendan and jst on the counterproposal in comment 30.
My proposal to jst this morning is this: Give chrome and content separate wrappers, but make a one-way binding from chrome to content whereby any property set on a chrome wrapper for native instance x sets the same property on x's content wrapper -- but not vice versa (content sets a property on its wrapper for x, nothing happens to chrome's wrapper for x). Comment 30 describes something in terms that are hard to implement. Where is this flag, and how does trusted code find the "native" value that was clobbered? In the prototype object? That too could be overwritten. You end up needing two wrappers for each native instance x, in order to avoid a pigeon-hole problem. Wherefore my counterproposal. /be
Brendan: Since your proposal likewise solves the breakage, it sounds good. . Per your question: Caillon's XPCNativeWrapper uses Components.lookupMethod(), to determine what a wrapped-native's original property/method was. See xpccomponents.cpp#2004 -- I was thinking my proposal would do the same. The flag itself could just be a variable (__preferLocal__), which would make checking up the chain extremely simple; all top-level contexts (window) would have it default-defined on their prototype: chrome:false, content:true. . Also: with(){..} statements seem the direct equivalent of jst's evalInContext(), from comment13. Could you clarify why usage is more recently deprecated? It's a very useful thing.
.append (for completeness): The other flag -- "potentially clobbered / in-page" -- would be set by the backend, at parse-time, and not script-accessible. Member lookup would always check for it.
)?
i was explicitly asked to determine whether jst's proposed patch breaks our app, the answer is yes. the spider is 100% hosed since it starts out as chrome and then pokes through looking at the web page content to get things like document.links. (and yes, we're aware of the security concerns, for the most part it isn't an issue for our app, although we do use xpcnativewrapper and were screwed when it was moved.) i haven't tested the rest of our code, i decided to take a detour and see how i can work around this security measure (it's possible). i'm 99.9% certain the code will also break our other modules, as they all behave the same (some use xpcnativewrapper more than others, but they all start out as chrome and spend their time interacting with content).
So a Web page could trick your spider into running arbitrary code? That seems like something you _should_ be concerned about.
what i'm concerned about is whatever my manager and his manager tells me to be concerned about. note that as we're a commercial product, we don't license people to spider random web pages, only their own apps, as such, if their own apps decide to attack mozilla, well... fine then. and no, it's not ideal, yes we're working on it, but our app is not a generic web browser, it has a special domain where the rules and realm work differently.
rue: we're working on something sort of like what you sketched, but not exactly. Regarding with statements, they are deprecated because in function f(o) { with (o) return x; return null; } f({x:42}) returns 42, while f({}) returns null -- but setting a global x = 42 before calling f({}) also returns 42. Only at runtime do we know what names bind to which properties of what object. One way to keep with (for utmost compatibility) is to extend JS (a la VB, IIRC) to allow with users to specify that they mean "in the object named by this with statement": function f(o) { with (o) return .x; return null; } (Obviously a contrived example, since it's so short.) With is not the same as evalInContext, however, since it extends the scope chain whereas evalInContext replaces the scope chain. /be
ok, fwiw domi has the ability to violate the new model jst created in the preceding patches, i can make my code use that approach.
(In reply to comment #36) > )? It depends on where they were compiled. If evalInContext is like eval, you pass it a string containing JS program source. If that contains the listener function it will be compiled in the content context, and have content privs. Adding a chrome-loaded (therefore chrome-privileged) function as a listener would work, but would not use evalInContext by itself since there would be no property path to the function. The chrome script would have to set the function as the value of a content (wrapper) property first, then evalInContext("thatContentNode.addEventListener(thatContentNode.listenerRef)") or some such. Ugly. We're trying now to avoid separate wrappers, to preserve compatibility while restoring by-default security, more or less along the lines rue sketched. The devil is in the details, though. /be
(In reply to comment #37) > i was explicitly asked to determine whether jst's proposed patch breaks our > app, the answer is yes. the spider is 100% hosed since it starts out as chrome > and then pokes through looking at the web page content to get things like > document.links. document.links should work just fine even with separate wrappers so I don't understand why that should break your spider. The only thing that is affected is *js* set properties on native objects. So the only way it'd be affected is if the webpages you're crawling overrides document.links and makes it point to some custom js-object. Is that really what you are doing?
try it yourself, use domi, pick any page with at least one link. 0. open navigator to a page w/ at least one link [perhaps verify that it works, javascript:void(alert(document.links.length))] 1. open domi 2. in domi, file>inspect window>navigator 3. in the left pane, select document-dom nodes 4. expand this path: #document/window/hbox/vbox[2]/hbox/tabbrowser/xul:tabbox/xul:tabpanels/xul:browser 5. in the right pane, select object-javascript object 6. expand this path: Subject/contentDocument/links/length with the patch from this bug, the result i get is 0. note that there is a way to get domi to give you the other answer, and if necessary i will use it and complain when someone breaks it. (fwiw links.length is the only thing that's broken, contentDocument.documentElement.innerHTML pretends the document is empty, among other properties that are relatively unhappy but willing to lie.) i was starting to investigate what it would take to do what domi did, but venkman crash :)
Comment 44 needs investigation, but it doesn't prove a general problem (note that we are not trying to make jst's separate-wrappers patch in this bug land for 1.0.3, so don't fret). In fact it explicitly says something odd is going on with document.links.length. Timeless: are you gonna try to debug again? /be
Actually, I have a related question. In our proposed setup, how do array-like properties on node lists work? So if I do document.getElementsByTagName("head")[1], say? Those aren't on interfaces, really...
bz: [] is a shorthand for .item(...). the same had to apply to XPCNativeWrapper before it.
Thanks bz: I see how retaining simple-but-currently-insecure syntax plus a fix yields simple-secure-syntax at the cost of less access to content js properties and/or more complex syntax with no cost. Re comment 33: if s/set/set or get/ applies, what happens if you watch() a content property from the chrome? More generally: I don't believe content-set js properties are interesting only to the special cases of DOM Inspector, spiders and compiled apps. Chrome scripting is useful as a general purpose macro-like language when it manipulates living, running Web applications, just as VBA-for-Excel does for spreadsheets containing formulae. That large class of chrome uses shouldn't be made obscure or impossible to do. - N.
jst just checked in branch patches for bug 289074 that implement what Brendan was talking about yesterday in this bug. Initial testing seems to show the extensions broken by attachment 179766 [details] [diff] [review] are not broken by the new patch (AdBlock, chatzilla, dom inspector,...). Both Firefox and Suite 1.7.7 nightlies are available for testing, look for today's evening builds.
How much more do we expect here for 1.8b2? We're coming into the end game for b2 so we need to make quick work of remaining issues or push them off to 1.8b3.
We need to fix this for 1.8b2. Nigel's right, we shouldn't break getting content-set, non-overriding DOM properties from chrome -- we just need to sandbox them rigorously. To do that without adding the cost of a thunk per content-set getter, setter, and method, we should use jst's separate wrappers approach, with a magic mirror between the chrome and content wrappers, such that content can't affect chrome's wrapper, and chrome runs content-defined getters/setters/methods in the content scope, on the content context (necessary for native method invocation to result in the content principal being found by caps code). The magic mirror will need to thunk methods, but only when chrome script gets a content-set native method on the content wrapper. Content-set scripted methods carry their principals in their script if uncloned function objects, or in their immutable scope chain if cloned function objects. More tomorrow. /be
> content-set, non-overriding DOM properties from chrome -- we just need to > sandbox them rigorously. Just to be crystal clear: we must bypass, not sandbox, content-set DOM-overriding properties. /be
Work in progress, does not compile even (nsTHashtable.h and friends are not so friendly to opaque struct types). But the idea is not only to wrap content native method getter/setter/values safely, but (this part is not even started here) to set content property foo when the corresponding chrome wrapper for the same DOM native content object has foo set. /be
(In reply to comment #53) This seems reasonable to me, and with the code to set properties across scopes I think this could go in with a really low risk of breaking extensions. As for the hash usage, you could just use pldhash directly as there's already other code in nsDOMClassInfo.cpp that does that...
Assignee: jst → brendan
Status: NEW → ASSIGNED
Priority: -- → P1
Target Milestone: --- → mozilla1.8beta2
Whiteboard: extension buster? needed for b2
And testing. Please help. jst, I see that nsGlobalWindow::OnFinalize assertion. Do I need your Aviary version, or some part of it? /be
Attachment #181807 - Attachment is obsolete: true
This patch picks up the old JS_SetObjectPrincipalsFinder API I introduced in late 2003 when working with caillon on eval-based native function exploits, and revises it incompatibly to locate the findObjectPrincipals callback in the JS runtime, not in a particular JSContext. Besides economizing and matching the other principals callback (for XDR), this guarantees coverage: I am worried about a JS component loading and running on a "safe context" that doesn't have a findObjectPrincipals callback configured. This patch beefs up the belt-and-braces checks in obj_eval and script_exec to throw an invalid indirect call error in any case where the eval function object, or the script object, has object principals different from its calling script's subject principals. But the main change here is to nsScriptSecurityManager::GetFunctionObjectPrincipals, so it no longer skips native frames when walking the stack to find subject principals. Skipping natives is a very old design decision (Netscape 4 at least, possibly 3), which assumes that natives are trusted code that have no trust identity, as scripts (with their codebase or certificate principals) do. This assumption is flawed, as the eval and script object exploits recently fixed demonstrate. Evil script can store a native function object reference somewhere (in the DOM, or in a JS object created to masquerade as a DOM subtree), knowing that the native will be called in a certain way, and run with chrome scope and principals. We've been patching the symptoms of this general flaw in caps for 1.0.x, but this fix addresses the root of the problem. I'm attaching optimistically, still recompiling and testing, but an earlier version of this patch that didn't touch jsapi.h fixed the known testcases. /be
Attachment #182036 - Attachment is obsolete: true
Attachment #182537 - Flags: superreview?(jst)
Attachment #182537 - Flags: review?(shaver)
Attachment #182537 - Flags: approval1.8b2+
Attachment #182537 - Attachment description: no shared wrappers, but general native function security (prerequisite to any future patch) → no unshared wrappers, but general native function security (prerequisite to any future patch)
Comment on attachment 182537 [details] [diff] [review] no unshared wrappers, but general native function security (prerequisite to any future patch) This appears pretty sane, and has some portions pretty similar to a patch that I think I worked on with you a while ago (not sure why that fell on the floor). r=caillon
Attachment #182537 - Flags: review+
Comment on attachment 182537 [details] [diff] [review] no unshared wrappers, but general native function security (prerequisite to any future patch) sr=jst
Attachment #182537 - Flags: superreview?(jst) → superreview+
Comment on attachment 182537 [details] [diff] [review] no unshared wrappers, but general native function security (prerequisite to any future patch) >Index: caps/include/nsScriptSecurityManager.h > // Returns null if a principal cannot be found. Note that rv can be NS_OK > // when this happens -- this means that there was no script associated > // with the function object. Callers MUST pass in a non-null rv here. > static nsIPrincipal* >- GetFunctionObjectPrincipal(JSContext* cx, JSObject* obj, nsresult* rv); >+ GetFunctionObjectPrincipal(JSContext* cx, JSObject* obj, JSStackFrame *fp, >+ nsresult* rv); Can you document what fp is used for, if it can be null, etc.? >+ // No chaining to a pre-existing callback here, we own this problem space. >+ ::JS_SetObjectPrincipalsFinder(sRuntime, ObjectPrincipalFinder); Should we warn if we find a pre-existing one? (And if we exit with one that's not us, perhaps?) > * All eval-like methods must use JS_EvalFramePrincipals to acquire a weak > * reference to the correct principals for the eval call to be secure, given > * an embedding that calls JS_SetObjectPrincipalsFinder (see jsapi.h). > */ > extern JS_PUBLIC_API(JSPrincipals *) > JS_EvalFramePrincipals(JSContext *cx, JSStackFrame *fp, JSStackFrame *caller); I'd like a Get in that name to keep me from reading "Eval" as a verb, but that's not what this patch is about. r=shaver on the JSAPI stuff.
Attachment #182537 - Flags: review?(shaver) → review+
I checked in attachment 182537 [details] [diff] [review]. /be
Brendan, sorry, but this Checkin have caused a serious Regression. I am using Windows-CREATURE-Tinderbox-Build 20050400 at moment. First Regression: Unable to install *.xpi with this Build. JS-Console throws: Error: uncaught exception: Permission denied to create wrapper for object of class UnnamedClass and XP-Install is aborted without installation. Second Regression: At Mozilla Startup JS-Console throws: Error: uncaught exception: Permission denied to get property UnnamedClass.Constructor without doing anything exept starting JS-Console. Third Regression: MailNews is unuseable at Moment, Thredpane is complete grey. Startup MailNews give a lot of Errors in JS-Console, e.g.: Error: uncaught exception: Permission denied to get property UnnamedClass.Constructor and using Mnenhy installed in Profile with older Build: Error: goMnenhy is not defined Source File: chrome://mnenhy-headers/content/mnenhy-headers-msgHdrViewOverlay-loader.js Line: 79 Error: goMnenhy is not defined Source File: chrome://mnenhy-headers/content/mnenhy-headers-msgHdrViewOverlay-loader.js Line: 49 and some more. Also Error: Error: uncaught exception: Permission denied to get property HTMLDivElement.nodeType was shown up. Formerly I have used Windows-CREATURE-Tinderbox-Build 2005050322 without this Patch, and it worked fine. Can you please backout this or fix it soon? TIA.
Add Screenshot from broken (regressed) Threadpane in MailNews
Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.8b2) Gecko/20050504 Firefox/1.0+ 02:16pdt this checkin is causing serious trouble If you try to install any extension FF locks up the moment the filecheck starts
I do not know if all possible regressions should be reported here. Opening external links in a new tab is also broke, it opens a blank new tab. (Options -> tabs -> Open links from other applications in -> A new tab in most recent window)
Apparently some of our UI needs to call a content native from chrome, yet have chrome privileges. I haven't debugged to see why. This means we can't use object principals for content natives, for now. It also says we have to change *something* -- we can't have it both ways. /be
I checked in a change to nsScriptSecurityManager.cpp to make it match the branch patch (which already landed on AVIARY_1_0_1...BRANCH and MOZILLA_1_7_BRANCH). I changed nsScriptSecurityManager::GetFunctionObjectPrincipals to skip native frames again. I also re-ordered the tests to avoid preferring cloned function object principals to eval or new Script principals (reviewers take note). That was an independent bug in yesterday's checkin, but not (I believe -- need to test more) relevant to the regression. Testing more today should show why our chrome counts on calling content-scoped natives with chrome privileges. XBL seems implicated. Builds should be restaged, I'll talk to people who can help do that in a bit. /be
Ok, the profile manager works again.
Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.8b2) Gecko/20050504 Firefox/1.0+ 10:02pdt build Adblock still causes Error: [Exception... "'adblockEMLoad: TypeError: adblockEntry has no properties' when calling method: [nsIDOMEventListener::handleEvent]" nsresult: "0x8057001e (NS_ERROR_XPC_JS_THREW_STRING)" location: "<unknown>" data: no] but appears functional. everything else seems to work again again
comment 68 is an unrelated problem caused by changes in Extension Manager's DOM. The exception appears in earlier builds as well.
The initial checkin caused bug 292864, bug 292860.
(In reply to comment #70) > The initial checkin caused bug 292864, bug 292860. And bug 292871.
Also bug 292902 I guess.
jst should look too. In thinking about native function hazards more deeply, it's clear that if we have only one bit (0 for scripted, 1 for native), we don't have enough information to decide whether to use the native function object's scope to find its principals. (We do have enough bits to decide how to handle the scripted case, of course; that is easy because scripts carry their own compiler-sealed principals, and cloned function objects have scope-sealed principals that override their prototype's script's principals.) But if the bit is 1, i.e., we have a native frame, we can't just skip it as we've done for ages -- that leaves us open to attacks involving dangerous natives. Nor can we use the native function object's scope-sealed principals, either -- that obvious broke a bunch of XBL-ish stuff yesterday. We need another bit. That bit is the condition (native function is a bound method and this frame is a call of it on a different initial |this| parameter than the native function object's parent [to which it is bound by native code using the JS API]). Both LiveConnect and XPConnect use the JSFUN_BOUND_METHOD flag to force |this| to bind to a reflected method's parent. That's good, it makes methods able to know the type of their |this| (obj in the JS API JSNative signature) parameter statically. But references to dangerous methods, though the methods are |this|-bound to their immutable parent scope objects, may nevertheless be extracted and called on a different |this| param. The JS engine will override that nominal param with the method's parent, and do the call. This patch adds a frame flag, JSFRAME_REBOUND_METHOD, that the JS invocation code sets in this case, and only this case (whether the function object is scripted or native, but that's just an aside). This patch then modifies nsScriptSecurityManager::GetFunctionObjectPrincipal to test, when about to skip a native frame, that the frame is not for a rebound call to a bound method. If it is, then the object principals for the method are returned, instead of the frame being skipped in search of a scripted caller. This defeats all native attacks using evil bound methods. Unbound methods (JS native functions by default are unbound) must be individually protected from misuse. I've done that in the last patch, on the trunk, for eval and new Script. I don't know of other dangerous unbound native methods, but welcome comment. After this patch is tested and goes in (I expect it will not break anything but exploits), we still have to consider the shared vs. split wrapper issue about which this bug is nominally concerned. /be
Attachment #182713 - Flags: superreview?(shaver)
Attachment #182713 - Flags: review?(caillon)
Attachment #182713 - Flags: approval1.8b2+
Comment on attachment 182713 [details] [diff] [review] distinguish rebound method calls from unbound and default-bound calls jst pointed out a flaw that can be fixed -- not sure why my testing didn't show the same hole he saw... new patch in a minute. /be
Attachment #182713 - Attachment is obsolete: true
Attachment #182713 - Flags: superreview?(shaver)
Attachment #182713 - Flags: review?(caillon)
Attachment #182713 - Flags: approval1.8b2+
Two comments, of which the last one is mostly me brainstorming and might not be right at all :) Using the native function object's scope-sealed principal really feels like the best solution so it sucks that we apparently can't use that. Would be good to investigate what exactly is hindering this. What I feel a bit uneasy about is that we detect the case that the |this| pointer was changed. When in reality the danger isn't that the |this| pointer is tampered with but that we're tricked into calling a native function without knowing. Could we make it so that when a native function is set as a member we flag the _function_ as REBOUND and then always give the frame calling that function the treatment you are now no matter if the parent was changed or not. Ideally we should really flag the member rather then the function, but I'm not sure if that's possible.
shaver can do second sr, I'd be delighted. jst's point was that one of the attacks rebinds the bound method in the same object that it is bound to already, so my parent != thisp check was not enough. This patch checks for a name change in the case where the this parameter is not being changed (is already the bound method's parent). The interdiff -w output is easy to read. I'll attach that next. /be
Attachment #182729 - Flags: superreview?(jst)
Attachment #182729 - Flags: review?(caillon)
Comment on attachment 182729 [details] [diff] [review] fixed version of last patch Fixed, but gdb is misleading me, and for some reason my tests are passing. jst helped a ton and we now see that this patch can't work. The idea is sound, but XPConnect (unlike LiveConnect) does *not* make bound methods for each wrapped native instance. We need a different approach in detail, but the same in general. In the mean time, I believe this is good for LiveConnect safety. I'll test that in another bug (and seek testing help, since FC3 gdb is pretty damn broken). /be
Attachment #182729 - Flags: superreview?(jst)
Attachment #182729 - Flags: review?(caillon)
jst has taken XPCNativeWrapper and rewritten it in C++ with a backward-compatible API, but with extra smarts: it wraps deeply (lazily), and if the second argument is not a string, it is taken as a constructor with which to do an instanceof test (so you can make sure you're getting what you expect). After talking at length today, he and I agreed to move his C++ version into XPConnect. We intend to automate it as a wrapper around XPCWrappedNatives, when chrome is operating on content, with auto-unwrapping when flowing back into a content native method, and with identity/equality ops in JS working as before. More on that soon. /be
This isn't done yet, but it works fairly well. This replaces the current XPCNativeWrapper that's written in JS in a backwards compatible way. It does that by looking at the arguments the constructor gets and if all arguments following the first one are strings then it defaults to working like the current one, only it's all dynamic and doesn't care what the string arguments to the constructor are. This new XPCNativeWrapper implementation exposes the wrapped object through a "wrappedJSObject" property (same name that's used with double wrapping in XPConnect today). In addition to that this also does automatic wrapping and unwrapping for XPCNativeWrapper, that is, you can pass an XPCNativeWrapper to any XPCOM method that expects an interface pointer and we'll get the wrapped object and pass it in stead of the XPCNativeWrapper, and also when chrome is accessing content code this patch does automatic wrapping with XPCNativeWrapper. The part that's really lame in this patch is the detection code that figures out if chrome is accessing a content object so that it knows to wrap the result in an XPCNativeWrapper. Brendan's got some ideas on that, the code is in xpcconvert.cpp. The other part that's lame is that the implementation of this new XPCNativeWrapper is far from ideal. It doesn't use getters and setters, and for every wrapping operation if always creates a new wrapper when it could re-use existing ones. We'd need a hash from XPCWrappedNative to XPCNativeWrapper to make that work. Should be pretty easy to do. I'll be out of town (and out of reach, mostly) for the coming week, so this is pretty much where I leave this for others to look at and hopefully continue working on.
Oh, and I forgot to mention that I intentionally left a bunch of printf()'s in the code to possibly help show this work or not work.
The main point here is to get people trying the patch out. I think I persuaded dveditz to try it. It's working for me so far. /be
Attachment #183880 - Flags: review?(shaver)
benjamin, can you attach that followup patch we crave to set xpcnativewrappers=yes for Firefox's five (or so) chrome URI prefixes? Thanks again, /be
My design notes on the patch: -- someone suggest a place where this should live for good. /be
With the patch applied I crash if I open (suite) browser and then mail from the window->mail menu or via ctrl+2. Starting mail from the commandline works fine.
The other things missing from the first attempt, besides the crash fix, are: 1. Code to configure the five chrome packages in firefox with bsmedberg's fine new xpcnativewrappers=yes option. 2. A call or calls to JS_FlagSystemObject to mark chrome windows and their descendants as "system". Bed soon, more tomorrow. /be
Comment on attachment 183880 [details] [diff] [review] jst's + bsmedberg's + my patch to optimize the former and enable the latter >Index: browser/base/content/browser.js >+ // content.wrappedJSObject.defa...? Nix the comment? >@@ -4411,18 +4407,17 @@ nsContextMenu.prototype = { >- var wrapper = new XPCNativeWrapper(this.link, "href", "baseURI", >- "getAttributeNS()"); >+ var wrapper = this.link; Fix indent? >Index: chrome/src/nsChromeRegistry.cpp >+static PRBool >+CheckFlag(const nsSubstring& aFlag, const nsSubstring& aData, PRBool& aResult) Document what aResult is and what the return value is? And maybe what the function does? > nsChromeRegistry::ProcessManifestBuffer(char *buf, PRInt32 This only changes "content" packages, right? But can't all non-"skin" packages can execute script (looking at nsChromeRegistry::AllowScriptsForPackage here)? More precisely, would it make sense to set all non-"content" stuff in a package to unconditionally require wrapping? >+ xpc->WantXPCNativeWrappers(urlp.get()); If this fails, we should bail out or something. We don't want to be starting if things that expect to be security-checked are not. >Index: js/src/jsdbgapi.c >+JS_GetScriptedCallerFilenameFlags(JSContext *cx, JSStackFrame >+ if (!fp) >+ fp = cx->fp; fp can still be null here. >Index: js/src/jsdbgapi.h Could we document these methods somewhere? I know we don't do it in the JS headers usually; do we have any in-tree API docs on JS? >Index: js/src/xpconnect/src/xpcconvert.cpp >+ printf("Content accessed from chrome, wrapping wrapper in XPCNativeWrapper\n"); Let's stick #ifdef DEBUG_XPCNativeWrapper around these? This is as far as I got so far; more tomorrow.
after applying the patches firefox seems to run fine, dhtml tests pass, normal browsing. the dom abuse seems stopped at first sight, but it is still possible chrome to call content when a javascript object is directly passed to chrome - as in sidebar.getInterfaces(m) where "m" is a luser js object.
bug 289074 testcases attachment 179946 [details] and attachment 180189 [details] (both abuse navigator.preference) are not fixed by this patch. The other testcases appear to be fixed. At least two of my extensions (Web Developer 0.9.3 and ConQuery 1.5.4) are broken, and break the browser. Most of the window comes up, but bookmarks aren't loaded and various tabbrowser things like filling in the location bar and the security UI don't work. The __proto__ change to browser.js also needs to be made to tabbrowser.js and nsContextMenu.js. At startup I get an error about a redeclaration of XPCNativeWrapper. Is that expected since it's now implemented in xpconnect itself? "JavaScript error: chrome://global/content/XPCNativeWrapper.js, line 56: redeclaration of const XPCNativeWrapper"
Similar to the changes made to browser.js, I think we want this.
Apply on top of previous patches. The earlier nsChromeRegistry patch only reads the xpcnativewrappers from app-chrome.manifest, but doesn't put that notation there. You can't even hand-edit for testing because that file gets blown away every start (in a debug build). This bit adds contents.rdf processing to the chrome registry, and xpcNativeWrappers="true" to most of the Firefox content rdf-manifest files. More will have to be done to cover the suite. Adding this does not fix the navigator.preference exploit. I'm not awake enough to decide if that means this patch isn't working or if that's just a different vulnerability.
Please do not use "the rest of the chrome registry patch"... I have the *.manifest build automation in my tree and am testing now.
This patch is ready-to-land by itself; it registers all the ffox content packages using .manifests instead of contents.rdf and fixes a logic error in my last patch.
Please make sure to NOT check in the xpfe changes from the "get rid of more __proto__" patch. The chrome registry changes need to be ported to suite first; at the moment (as of last patches posted in this bug) suite is NOT doing auto-wrapping.
(In reply to comment #95) > at the moment ... suite is NOT doing auto-wrapping. It really ought to, eh? We don't want to ship 1.8b2 with the mfsa2005-41 holes anymore than the Deer Park alpha.
If we're actually doing a suite release based on Gecko 1.8b2 (which it's not clear to me that we are, given the current suite situation), then yes, we need to make similar chrome registry changes there. Neil, do you think you could do that? Or know someone who could?
fwiw, with comment 82 and comment 87 on winxp, the jsshell crashes on every test in the js library, firefox won't start with a specified profile via -P, and it appears to not be able to load chrome urls from the command line.
Comment on attachment 183880 [details] [diff] [review] jst's + bsmedberg's + my patch to optimize the former and enable the latter >Index: js/src/xpconnect/src/XPCNativeWrapper.cpp >+ReWrapIfDeepWrapper(JSContext *cx, JSObject *obj, jsval v, jsval >+ // Re-wrap non-primitiv values if this is a deep wrapper (deep "non-primitive" >+ if (!rvalWrapper) { >+ return ThrowException(NS_ERROR_UNEXPECTED, cx); NS_ERROR_OUT_OF_MEMORY seems to make more sense. >+XPC_NW_GetOrSetProperty(JSContext *cx, JSObject *obj, jsval id, >+ // Be paranoid, don't let people use this as another objects "object's" >+ printf("Mapping wrapper[%d] to wrapper.item(%d)\n", JSVAL_TO_INT(id), #ifdef DEBUG_XPCNativeWrapper >+ printf("Calling setter for %s\n", >+ ::JS_GetStringBytes(JSVAL_TO_STRING(id))); Same. >+ printf("Calling getter for %s\n", >+ ::JS_GetStringBytes(JSVAL_TO_STRING(id))); Same. >+XPC_NW_NewResolve(JSContext *cx, JSObject *obj, jsval id, uintN >+ // Be paranoid, don't let people use this as another objects "object's" Probably file a followup on the "XXX make sure this doesn't get collected" comment? >+ printf("Wrapping function object for for %s\n", >+ ::JS_GetStringBytes(JSVAL_TO_STRING(id))); DEBUG_XPCNativeWrapper >+ // member. This new functions parent will be the method to call from "function's" >+XPC_NW_Construct(JSContext *cx, JSObject *obj, uintN argc, jsval >+ printf("Wrapping already wrapped object\n"); DEBUG_.... >+ printf(" %s\n", ::JS_GetStringBytes(JSVAL_TO_STRING(argv[i]))); Same. >+XPC_NW_toString(JSContext *cx, JSObject *obj, uintN argc, jsval *argv, >+ // Be paranoid, don't let people use this as another objects "object's" >+ resultString.Append(NS_REINTERPRET_CAST(jschar *, I'm pretty sure you need to cast to PRUnichar* here to avoid build bustage on some platforms... >+XPCNativeWrapper::AttachNewConstructorObject(XPCCallContext &ccx, >+ JSObject *class_obj = >+ ::JS_InitClass(ccx, aGlobalObject, nsnull, ... >+ NS_ASSERTION(class_obj, "Can't initialize XPCNW class."); That could be null on OOM or whatever. Change to warning, esp. since we bail out next line if it's null. With those utter nits picked and the chrome registry changes I asked for, sr=bzbarsky
Attachment #183880 - Flags: superreview?(bzbarsky) → superreview+
Comment on attachment 183880 [details] [diff] [review] jst's + bsmedberg's + my patch to optimize the former and enable the latter >- reference = focusedWindow.__proto__.getSelection.call(focusedWindow); >+ reference = focusedWindow.getSelection.call(focusedWindow); Why not just reference = focusedWindow.getSelection(); ? >+ /* Objects always require "deep locking", i.e., rooting by value. */ >+ if (lock || type == GCX_OBJECT) { >+ if (lock == 0 || type != GCX_OBJECT) { So in this case, if I have a non-object which is already locked, and I lock it again, I will have |lock| but type != GCX_OBJECT. So I'll fall into this code, which nets out to: >+ if (!rt->gcLocksHash) { >+ rt->gcLocksHash = >+ JS_NewDHashTable(JS_DHashGetStubOps(), NULL, >+ sizeof(JSGCLockHashEntry), >+ GC_ROOTS_SIZE); >+ if (!rt->gcLocksHash) > goto error; > } else { >+#ifdef DEBUG >+ JSDHashEntryHdr *hdr = > JS_DHashTableOperate(rt->gcLocksHash, thing, > JS_DHASH_LOOKUP); >+ JS_ASSERT(JS_DHASH_ENTRY_IS_FREE(hdr)); >+#endif > } >+ lhe = (JSGCLockHashEntry *) >+ JS_DHashTableOperate(rt->gcLocksHash, thing, JS_DHASH_ADD); >+ if (!lhe) >+ goto error; >+ lhe->thing = thing; >+ lhe->count = 1; > } else { and end up with a count of 1 on the lhe. Then if we do a single Unlock on the same non-object, we'll hit a count of 0 and remove it from the hash table, when we should still have an entry for it. Or we'll blow the JS_DHASH_ENTRY_IS_FREE(hdr) assertion instead if the gcLocksHash is already allocated and we're locking a second time. I must be missing something here. Spell it out for me? > JSBool > js_UnlockGCThingRT(JSRuntime *rt, void *thing) > { >+ uint8 *flagp, flags; > JSGCLockHashEntry *lhe; > > if (!thing) > return JS_TRUE; > >+ flagp = js_GetGCThingFlags(thing); > JS_LOCK_GC(rt); >+ flags = *flagp; > >+ if (flags & GCF_LOCK) { >+ lhe = (JSGCLockHashEntry *) >+ JS_DHashTableOperate(rt->gcLocksHash, thing, JS_DHASH_LOOKUP); Are we guaranteed that gcLocksHash is allocated by this point? If we only did one lock on a non-object, we'll just have flagged it as GCF_LOCK, and never had to allocate the hash, I think. (Update: mconnor just crashed here, I think.) >+ * Try to inherit flags by prefix. We assume there won't be more than a >+ * few (dozen! ;-) prefixes, so linear search is tolerable. >+ * XXXbe every time I've assumed that in the JS engine, I've been wrong! I didn't mentally execute the code, so forgive me if this is a naive question, but: what keeps us from having a prefix entry per web-loaded script? Is it simply that we will tend to have one script prefix per loaded page, and trust our other limiting factors to keep us to a few dozen? Do we care about the DOS attack that comes from a bunch of <script src="data:a = <random>"></script> entries? (It's always been fun to watch you recover from those O(small-enough) assumptions, though!) >+/* void wantXPCNativeWrappers (in string filenamePrefix); */ >+NS_IMETHODIMP >+nsXPConnect::WantXPCNativeWrappers(const char *aFilenamePrefix) This function seems strangely named to me, in that we want wrappers when manipulating objects that _don't_ come from a system prefix, and Want here makes me think that we want wrappers "for" the provided prefix, rather than that the script running in this prefix wants (Expects?) to get wrappers when reaching out. The IDL doc comment doesn't help much either, since it doesn't really explain which side of the line we're "matching" for. nsXPConnect::addSystemPrefix ? >+ if (wrapper->GetScope() != xpcscope) >+ { >+ // Cross scope access detected. Check if chrome code >+ // is accessing non-chrome objects, and if so, wrap >+ // the XPCWrappedNative with a XPCNativeWrapper to >+ // prevent userdefined properties from shadowing DOM >+ // properties from chrome code. >+ >+ uintN flags = JS_GetScriptedCallerFilenameFlags(ccx, nsnull); >+ if((flags & JSFILENAME_SYSTEM) && >+ !JS_IsSystemObject(ccx, wrapper->GetFlatJSObject())) This looks nice, but I can't figure out where the system flag gets set on objects. I thought it might come from being created while a system-prefixed script was running, but I don't see that flag inheritance, and I don't see any calls to FlagSystemObject anywhere else. >+ printf("Calling setter for %s\n", >+ ::JS_GetStringBytes(JSVAL_TO_STRING(id))); >+ printf("Calling getter for %s\n", >+ ::JS_GetStringBytes(JSVAL_TO_STRING(id))); #ifdef DEBUG_xpcwrappednative or some such, pls. >+ // Be paranoid, don't let people use this as another objects "object's" >+ } else if (member->IsAttribute()) { >+ // An attribute is being resolved. Define the property, the value >+ // will be dealt with in the get/set hooks. >+ >+ // XXX: We should really just have getters and setters for >+ // properties and not do it the hard and expensive way. Agreed; is there a bug on file? >+ // XXX: make sure this doesn't get collected before it's hooked in >+ // someplace where it's kept around. Mmm, yes. File a bug? r-, I think the gcLockHash issues are probably real pain, and I'd like to understand the GCF_SYSTEM stuff better, even if it isn't really a bug.
Attachment #183880 - Flags: superreview+
Attachment #183880 - Flags: review?(shaver)
Attachment #183880 - Flags: review-
Comment on attachment 183905 [details] [diff] [review] The rest of the chrome registry patch bsmedberg says not to use this one.
Attachment #183905 - Attachment is obsolete: true
bz: I've fixed those nits, mostly jsts unique possessive unpunctuation style ;-). shaver: thanks, I was sleepy during that lock bit reclamation. But you then got sleepy too -- the prefix list is not one per script filename, but one per call to the new JS_FlagScriptFilenamePrefix API (see the wiki), and that is governed by chrome -- so there's no DOS-from-web-content hazard. Still, the list could get large if extensions go crazy, but that would be a signal to flip the default to xpcnativewrappers=yes. I'll have a better all-in-one patch shortly. /be
It turns out that attachment 183912 [details] [diff] [review] messes up Thunderbird more than a little bit because thunderbird repackages chrome in its own inimitable way and uses a hand-written installed-chrome.txt. I've been planning on getting rid of that chrome repackaging for a while, and I've finally done it. win32 installer continues to work correctly as well. Scott, you don't need to review the browser/* or chrome/* bits, just the mail/* and various xpfe/* changes which affect tbird.
Attachment #183912 - Attachment is obsolete: true
Attachment #183946 - Flags: review?(mscott)
(In reply to comment #97) >Neil, do you think you could do that? Or know someone who could? It wouldn't be easy to use with the provided xpconnect model. Toolkit converts any contents.rdf files to .manifest files; these are then parsed on every launch and any xpcnativewrapper flag is used to notify xpconnect. Suite on the other hand simply maintains two RDF data sources, one for the install data source and one for the current profile data source. It would have to manually scrape the data sources for xpcnativewrapper flags on every profile switch. Fortunately we can probably hack something in at the end of AddToCompositeDataSource(). Note that there's no way to turn the flag off, so if you switch to a profile with an installation of an extension that needs them turned off you're out of luck...
(In reply to comment #96) > (In reply to comment #95) > > at the moment ... suite is NOT doing auto-wrapping. > > It really ought to, eh? We don't want to ship 1.8b2 with the mfsa2005-41 holes > anymore than the Deer Park alpha. Who is "we"? Suite releases are being done by volunteers now, not by mozilla.org staff or MF employees. My brain hurts enough without having to worry about the suite, so I am not going to worry about it. It's someone else's turn... they're "it" :-/. /be
I give up on doing an all-in-one patch. I'll attach a patch to the infrastructure and let bsmedberg track the shaver-induced nsIXPConnect method name change, and attach a follow-on patch. I'm also not going to worry about rolling up dveditz' __proto__-policing patch, although I've applied it locally. /be
This does include bsmedberg's chrome code changes, but not the manifest changes all over the place (but I've got those in my tree, I think). I can't debug -- XPCOM autoreg runs *every* time and horks gdb badly. Anyone else see this re-registration bug? What's the bug # tracking it? Mainly this needs testing and debugging help. Boris is gonna do that, but more are welcome to join in. /be
Attachment #183880 - Attachment is obsolete: true
Attachment #183887 - Attachment is obsolete: true
Patch in comment 103 makes a suite build die with: error: file '../../../mozilla/toolkit/components/cookie/content/contents.rdf' doesn't exist at ../../../mozilla/config/make-jars.pl line 428, <STDIN> line 207. Given that some of our tinderbox tests only run on suite tinderboxen, we might want to not check that in as-is...
bsmedberg: bz and I both see XPCOM registration every single time, in our debug firefox builds. I also crash trying to run with -profileManager. No time to debug that. I use -P test (and my test profile has worked well for trunk builds in general), but when darin said what he was using, -profile <name>, I tried that and got WARNING: NS_ENSURE_TRUE(NS_SUCCEEDED(rv)) failed, file nsAppRunner.cpp, line 1325 followed by app exit. Is there a bug filed on any of this? /be
Firefox starts up fine, and this latest patch fixes the navigator.preference testcase again. Don't have extensions in this profile, will have to quit and try those separately.
Web Developer and ConQuery extensions still mess up browser chrome. Web Developer actually appears to work (haven't tried all features), but having it enabled seems to interrupt browser initialization. Bookmarks aren't loaded, securityUI isn't initialized, doesn't load the start page, etc. Menus and toolbar are there, though. The very first time I ran with this patch I got a PR_assert in the JS_AQUIRE_LOCK() in js_SaveScriptFilenameRT(), ultimately called from bsmedberg's new chrome stuff but everything looked kosher on his end. Haven't been able to reproduce that.
This deals with the zero-contexts-in-runtime condition bz saw during the double (!) XPCOM component registration madness that afflicts both his and my build at startup, every time. For him, this led to prefix losses that were not covered up by later chrome re-registration. Something about his .mozconfig differs from mine because I didn't note any prefix lossage. Anyway, this patch keeps script filename prefixes added by the new jsdbgapi.h entry point JS_FlagScriptFilenamePrefix around till runtime destruction. If we need to unload extensions, switch profiles, or otherwise unflag prefixes, we can add the obvious counterpart API. /be
OK, I tracked down the issue I was seeing. In SaveScriptFilename, when we're handling the |flags != 0| case, we screw up any time a filename that is strictly shorter than all already-inserted filenames is inserted, as long as it's the third or later one. Proof: sfp will get set to non-null any time head->next != head (so we have more than one thing in the list already). The only time it's set to null after this point is if |sfp->length <= length|, which will never happen if the new filename is shorter than all existing ones. So if we had > 1 filenames inserted and insert a new short one, we'll end up not actually adding it to the list and instead just munging the flags of whatever the last thing in the list was some more. I added an |sfp = NULL;| at the very end of the loop, and now I actually get XPCNativeWrappers created when I open the context menu. On a related note, it seems to me that if there is only one thing in the list we'll never enter the loop, so if someone adds it again we'll have two entries for it in the list.
(In reply to comment #113) > sfp will get set to non-null any time head->next != head (so we have more > than one thing in the list already). I pointed out that this is a list with one or more entries, not two or more. Then I realized bz and I were looking at a dump of the circular list where the list head (a JSCList) was being mis-cast to ScriptFilenamePrefix. So we were chasing a phantom. The real bug here, which we're hunting now, is that bz gets no prefix for chrome://browser/ -- but I do. /be
(In reply to comment #114) > The real bug here, which we're hunting now, is that bz gets no prefix Duh, bz pointed out the real bug -- terminating that for loop by reaching the end of the circular list (wrapping to the header) without nulling sfp. This patch fixes that bug, and optimizes/cleans-up XPCNativeWrapper.cpp slightly. Getting close. Interdiff of last patch and this one next. /be
For bc and others following the bouncing patch-ball. /be JS_FlagScriptFilenamePrefix(JSRuntime * 0x0215a348, const char * 0x0012ee64, unsigned long 1) line 1299 + 17 bytes nsXPConnect::FlagSystemFilenamePrefix(nsXPConnect * const 0x020f3690, const char * 0x0012ee64) line 1287 + 16 bytes nsChromeRegistry::ProcessManifestBuffer(char * 0x02150830, int 162, nsILocalFile * 0x020f25e8, int 0) line 1920 nsChromeRegistry::ProcessManifest(nsILocalFile * 0x020f25e8, int 0) line 1780 + 24 bytes nsChromeRegistry::CheckForNewChrome(nsChromeRegistry * const 0x020ef4c8) line 1141 + 19 bytes nsChromeRegistry::Init() line 541 nsChromeRegistryConstructor(nsISupports * 0x00000000, const nsID & {...}, void * * 0x0012f404) line 50 + 128 bytes nsGenericFactory::CreateInstance(nsGenericFactory * const 0x020ef480, nsISupports * 0x00000000, const nsID & {...}, void * * 0x0012f404) line 82 + 21 bytes nsComponentManagerImpl::CreateInstanceByContractID(nsComponentManagerImpl * const 0x020d1ac0, const char * 0x013cc9a4, nsISupports * 0x00000000, const nsID & {...}, void * * 0x0012f404) line 1987 + 24 bytes nsComponentManagerImpl::GetServiceByContractID(nsComponentManagerImpl * const 0x020d1ac4, const char * 0x013cc9a4, const nsID & {...}, void * * 0x0012f470) line 2414 + 50 bytes CallGetService(const char * 0x013cc9a4, const nsID & {...}, void * * 0x0012f470) line 95 nsGetServiceByContractID::operator()(const nsID & {...}, void * * 0x0012f470) line 278 + 19 bytes nsCOMPtr<nsIToolkitChromeRegistry>::assign_from_gs_contractid(nsGetServiceByContractID {...}, const nsID & {...}) line 1272 + 17 bytes nsCOMPtr<nsIToolkitChromeRegistry>::nsCOMPtr<nsIToolkitChromeRegistry>(nsGetServiceByContractID {...}) line 678 ScopedXPCOMStartup::SetWindowCreator(nsINativeAppSupport * 0x01aaf6f8) line 651 ProfileLockedDialog(nsILocalFile * 0x01aa7618, nsILocalFile * 0x01aa7618, nsIProfileUnlocker * 0x00000000, nsINativeAppSupport * 0x01aaf6f8, nsIProfileLock * * 0x0012fac8) line 1095 + 12 bytes SelectProfile(nsIProfileLock * * 0x0012fac8, nsINativeAppSupport * 0x01aaf6f8, int * 0x0012fab4) line 1472 + 40 bytes XRE_main(int 1, char * * 0x01aa7028, const nsXREAppData * 0x0123901c kAppData) line 1823 + 51 bytes main(int 1, char * * 0x01aa7028) line 61 + 18 bytes mainCRTStartup() line 338 + 17 bytes KERNEL32! 7c816d4f() in PR_Lock: + lock 0x00000000 me->flags 136 in js_SaveScriptFilenameRT: + filename 0x0012ee64 "chrome://browser/" flags 1 + rt 0x0215a348 rt->scriptFilenameTableLock 0x00000000 - sfe 0x0012ee64 - next 0x6f726863 next CXX0030: Error: expression cannot be evaluated keyHash CXX0030: Error: expression cannot be evaluated key CXX0030: Error: expression cannot be evaluated value CXX0030: Error: expression cannot be evaluated keyHash 792356205 key 0x6f72622f flags 1919251319 mark 47 '/' + filename 0x0012ee75 ""
FWIW, the flags are -P <name> or -profile <path>... and I'm pretty sure you have to create the <path> folder before you call -profile <path>.
(In reply to comment #117) > Thanks, dveditz saw this too (bz and I do not). Perils of early init. Fixing for the next patch. Easy inline interdiff preview below. /be ----- cut here ----- diff -u js/src/jsscript.c js/src/jsscript.c --- js/src/jsscript.c 19 May 2005 05:56:38 -0000 +++ js/src/jsscript.c 19 May 2005 16:45:51 -0000 @@ -1112,11 +1112,16 @@ { ScriptFilenameEntry *sfe; + /* This may be called very early, via the jsdbgapi.h entry point. */ + if (!rt->scriptFilenameTable && !js_InitRuntimeScriptState(rt)) + return NULL; + JS_ACQUIRE_LOCK(rt->scriptFilenameTableLock); sfe = SaveScriptFilename(rt, filename, flags); JS_RELEASE_LOCK(rt->scriptFilenameTableLock); if (!sfe) return NULL; + return sfe->filename; } diff -u js/src/xpconnect/src/xpcprivate.h js/src/xpconnect/src/xpcprivate.h --- js/src/xpconnect/src/xpcprivate.h 19 May 2005 05:56:47 -0000 +++ js/src/xpconnect/src/xpcprivate.h 19 May 2005 16:46:01 -0000 @@ -139,7 +139,7 @@ #define DEBUG_xpc_hacker #endif -#if defined(DEBUG_brendan) || defined(DEBUG_jst) +#if defined(DEBUG_brendan) || defined(DEBUG_bzbarsky) || defined(DEBUG_jst) #define DEBUG_XPCNativeWrapper 1 #endif
I looked into the web developer extension issues. First off, the security UI issue is not caused by this patch. It's caused by bug 294815 and has been around on tip for a while now (since April 11). That exception is what breaks every single other thing listed as going wrong in comment 111 (as in, when I commented out the throwing line in browser.js I got bookmarks, start page, etc). The line in question is and since it only runs when the securityUI is null (due to bug 294815), it's just obviously wrong.... mconnor says we have a bug on that already somewhere, with patch even.
Brendan, the reason we saw wrapping with the webdeveloper extension is that it gets its documents like so: const documentList = webdeveloper_getDocuments(getBrowser(). browsers[mainTabBox.selectedIndex].contentWindow, new Array()); The thing is, getting the contentWindow property of a <xul:browser> goes through XBL, so ccx.mJSContext->fp->down->script->filename is "chrome://global/content/bindings/browser.xml" (and ccx.mJSContext->fp->script is null as usual). If I look at ccx.mJSContext->fp->down->down->script->filename then it's "chrome://webdeveloper/content/css.js" as expected, but that doesn't help us, of course. This is likely to bite other extensions too, I would bet, since getting the window for the "current tab" is pretty common. Perhaps the answer is to not set the "want wrappers" flag on chrome://global/ for now? What lives in that package anyway?
The issue we were seeing with webdeveloper.js failing on "headElementList[0] has no properties" is due to the following code in XPC_NW_GetOrSetProperty: if (!member->IsAttribute()) { // Getting the value of a method. Just return and let the value // from XPC_NW_NewResolve() be used. return JS_TRUE; } The problem is that in this case |member| is "item()", since |id| is an integer. And "item()" is a function, not an attribute. So we bailed. At the same time, XPC_NW_NewResolve expects us to handle the "|id| is an integer" case in this code. Changing this block to say: if (!member->IsAttribute() && methodName == id) { // Getting the value of a method. Just return and let the value // from XPC_NW_NewResolve() be used. Note that if methodName != id // then we fully expect that |member| is not an attribute and we need // to keep going and handle this case below. return JS_TRUE; } makes things happy over here.
> Perhaps the answer is to not set the "want wrappers" flag on > chrome://global/ for now? What lives in that package anyway? tabbrowser.xml and contentAreaUtils.js do, and both have had security issues in the past. ViewSource and PrintPreview are also, dunno if that's any kind of problem (if so it's probably currently a problem).
One other option if a lot of extensions break is to hack the contentWindow (and contentDocument?) props on these bindings (browser/tabbrowser) to return an unwrapped object. I just tried that, and it does work (give the unwrapped object to the webdeveloper extension), but even code that wants wrapping gets an unwrapped object (since we don't go back out of JS when returning here). So we'd need to audit all calling code for these two getters and use XPCWrappedNative manually for them in Mozilla code... Shouldn't be too bad, really, if we have to do this.
One interesting thing I ran into -- the context menu never sees the content wrappers for some reason, just chrome wrappers for the nodes. Not sure why, really. But in any case, when the context menu comes up this.target.ownerDocument is not an XPCNativeWrapper and isn't equal to the wrappedJSObject of getBrowser().contentWindow.document (which _is_ wrapped).
(In reply to comment #119) > > Thanks, dveditz saw this too (bz and I do not). Perils of early init. Fixing > for the next patch. Easy inline interdiff preview below. Ok, I start up without crashing now. On the first run with firefox -P Debug and set MOZ_NO_REMOTE=1 it says that -P is not recognized but appeared to select the profile anyway. On subsequent starts, the -P message does not appear.
So I talked with bz about the problem in comment 121 (.contentWindow etc being a wrapper when accessed by an extension). I wonder if making <browser>s have an idl-interface with the contentWindow and contentDocument as attributes would solve the problem. Then we should unwrap into a native object and then rewrap without the XPCNativeWrapper when extensions access it. The interface would still be implemented in xbl-js of course, we'd just add it to the 'implements' list.
(In reply to comment #120) > The line in question is > and > since it only runs when the securityUI is null (due to bug 294815), it's just > obviously wrong.... mconnor says we have a bug on that already somewhere, with > patch even. bug 292604
I thought about it some more, and I don't think the XBL-idl approach will work. The basic problem is that the call from extension JS into the XBL JS will just be a JS call. No XPConnect involved unless the XBL-bound thing is passed to a native method or something like that.). Does that sound reasonable? If so, I think I'll try implementing it; I'm having no other bright ideas.
(In reply to comment #128) > > mconnor says we have a bug on that already somewhere, with patch even. > bug 292604 I applied that patch and the securityUI exception went away but my symptoms didn't clear up. Still no bookmarks or start page if Web Developer is enabled. Separate error: Clicking on an error link from the Javascript console opens viewsource but does not take you to the line containing the error. You get the error "pre has no properties" from chrome://global/content/viewSource.js line 292
Will it really be a js-to-js call even if it's declared as an interface? Won't it just look like any other interface to XPConnect, which once we call it happen to call into a XPConnect-created vtable? Or are we 'clever' enough to notice when js calls a js-implemented interface on an XPCOM object and optimize that?
(In reply to comment #131) > Will it really be a js-to-js call even if it's declared as an interface? JS-to-JS calls do not involve natives (whether wrapped or double-wrapped by XPCNativeWrapper around the usual XPCWrappedNative [names suck, don't change them]). (In reply to comment #129) >). Hmm, why wouldn't we make XPCNativeWrapper do this automagically, all the time? When it is called from "system" chrome, it does its thing, but when called from non-"system" chrome, it simply forwards get/set property calls and other hooks to its wrappedJSObject. /be
> Hmm, why wouldn't we make XPCNativeWrapper do this automagically, all the time? Depends on how expensive the "is caller system?" check is. I was trying to minimize the number of such checks, I guess. I agree that if this is not an issue, then it's simpler to just do this in XPCNativeWrapper.
(In reply to comment #130) > I applied that patch and the securityUI exception went away but my symptoms > didn't clear up. Hmm... odd... they cleared up for me when I did basically that... > Separate error: Clicking on an error link from the Javascript console This is fixed by the change described in comment 122.
This is looking good for b2/fx1.1a1. /be
Attachment #183982 - Attachment is obsolete: true
Attachment #184058 - Flags: review?(bzbarsky)
Attachment #184058 - Flags: approval1.8b2+
Comment on attachment 184058 [details] [diff] [review] infrastructure patch, v5 >Index: browser/base/content/browser.js > function checkForDirectoryListing() >- content.defaultCharacterset = getMarkupDocumentViewer().defaultCharacterSet; >+ content.wrappedJSObject.defaultCharacterset = >+ getMarkupDocumentViewer().defaultCharacterSet; This change is good. The rest of the changes in this file shouldn't go in until we're flagging our UI as wanting wrappers. >Index: chrome/src/nsChromeRegistry.cpp I already made some comments on this part; I'd really like them to be addressed, but Brendan's not the one to do it, seems to me. Benjamin, could you fix that stuff up? >Index: js/src/xpconnect/src/xpcwrappednative.cpp Do we need to worry about updating the "system" flag when changing scopes? What about wrapper reparenting? Probably followup-bug fodder here. Other remaining followups: 1) Sort out whether we need to do something for contentDocument/Window. Need to test adblock and friends. 2) Sort out the scope thing with context menus I ran into (working on this). r=bzbarsky
Attachment #184058 - Flags: review?(bzbarsky) → review+
I checked in just the js and dom changes, plus the gutting of the XPCNativeWrapper.js files. It's up to bsmedberg to land the rest, to make this patch actually auto-wrap content when accessed from app chrome. /be
Brief summary of problem: To find the right XPC scope to look for a wrapper in, we get the parent object and get the XPC scope from that. But in this case the parent got a XPCNativeWrapper stuck on it, so we were coming our with the wrong scope. So we created wrappers for the target node, etc all in the XPC scope of the chrome window instead of in the scope of the content window. All the patch does is fix up the "get the XPC scope from that" step to know about XPCNativeWrapper
Attachment #184070 - Attachment is obsolete: true
Attachment #184071 - Flags: superreview?(brendan)
Attachment #184071 - Flags: review?(brendan)
One other comment on the chrome reg changes, in addition to my other ones. What is + entry->flags |= PackageEntry::XPCNATIVEWRAPPERS; actually doing? I'm not seeing that flag tested for anywhere...
Comment on attachment 184071 [details] [diff] [review] Same, but with right include. r/sr/a=me, thanks -- bz is a tough patch's best friend. /be
Attachment #184071 - Flags: superreview?(brendan)
Attachment #184071 - Flags: superreview+
Attachment #184071 - Flags: review?(brendan)
Attachment #184071 - Flags: review+
Attachment #184071 - Flags: approval1.8b2+
Comment on attachment 183946 [details] [diff] [review] Stop repackaging tbird chrome, and use the same flat manifests as the rest of the world uses [checked in] r=mscott for the mail, mailnews and toolkit changes. I'm doing this under duress as I think it will take a couple days at least to shake these changes out for Thunderbird which means I won't be able to release alpha one for 1.1 when Firefox is ready (if it is ready tomorrow). Also, make sure you tweak the installer to remove qute.jar and messenger.jar. Also, I'm assuming you'll back me up when I start adding jar.mn ifdefs to xpfe, editor and toolkit to stop packaging files that my repackaging work was avoiding for me. Once this lands I'll start going through the JAR files by hand to identify the new files that we are building with that we weren't before so e can take them back out. I"d like to do that before we land the patch but I understand the pressures to get this in for Firefox supercede that.
Attachment #183946 - Flags: review?(mscott) → review+
I wanted to say that if we need a day or three to get this right, that's ok -- the state of the trunk is such that we can't predict Firefox and Thunderbird alpha 1 will be on the same day, although I agree they should have about the same versions of common code. I also wanted to back mscott's position that we should support "minimal linking" in the sense that good compiled code linkers do, but for chrome packaging. We shouldn't require Thunderbird to pick up pieces it doesn't want. But if there is a "what's in the common code platform" issue underlying the surface conflict here, let's have that out in a separate venue. /be
(In reply to comment #144) > Ah, I noticed that too, and filed bug 294893 for it. Are you / is anyone sure that this checkin is causing it?
Comment on attachment 183946 [details] [diff] [review] Stop repackaging tbird chrome, and use the same flat manifests as the rest of the world uses [checked in] This is checked in with an additional fix of other-licenses/branding and with some additional comments to match bz's review.
Attachment #183946 - Attachment description: Stop repackaging tbird chrome, and use the same flat manifests as the rest of the world uses. → Stop repackaging tbird chrome, and use the same flat manifests as the rest of the world uses [checked in]
Attachment #184112 - Flags: superreview?(bzbarsky)
Attachment #184112 - Flags: review?(bzbarsky)
Attachment #184112 - Flags: approval1.8b2-
This is probably already reviewed, I'm just looking for a sanity check. /be
Attachment #184113 - Flags: superreview?(bzbarsky)
Attachment #184113 - Flags: review?(benjamin)
Attachment #184113 - Flags: approval1.8b2+
Attachment #184112 - Flags: superreview?(bzbarsky)
Attachment #184112 - Flags: superreview+
Attachment #184112 - Flags: review?(bzbarsky)
Attachment #184112 - Flags: review+
Comment on attachment 184113 [details] [diff] [review] cleanup/fixup patch for various chrome files sr=bzbarsky, but I'm going to debug why this stuff broke things, exactly. All these changes should be no-ops except the first hunk.
Attachment #184113 - Flags: superreview?(bzbarsky) → superreview+
Attachment #184112 - Attachment description: Move xmlprettyprint to the "global" package, rev. 1 → Move xmlprettyprint to the "global" package, rev. 1 [checked in]
Attachment #184112 - Flags: approval1.8b2?
Ah, the following line breaks: var searchStr = focusedWindow.__proto__.getSelection.call(focusedWindow); since there proto of the XPCNativeWrapper has no getSelection or anything on it. I guess that's more or less expected.
Two more issues that we found: 1) We can end up with an XPCWrappedNative as the __parent__ of an XPCNativeWrapper. This is wrong. I'll post a patch to fix..
Brendan, I think this is much cleaner than fixing every single PreCreate method...
Attachment #184115 - Flags: superreview?(brendan)
Attachment #184115 - Flags: review?(brendan)
Comment on attachment 184115 [details] [diff] [review] Fix parenting r+sr+a=me with conforming brace and if( style. /be
Attachment #184115 - Flags: superreview?(brendan)
Attachment #184115 - Flags: superreview+
Attachment #184115 - Flags: review?(brendan)
Attachment #184115 - Flags: review+
Attachment #184115 - Flags: approval1.8b2+
(In reply to comment #151) > Two more issues that we found: > > 1) We can end up with an XPCWrappedNative as the __parent__ of an > XPCNativeWrapper. This is wrong. I'll post a patch to fix. It's easy to do, and here bz did indeed transpose XPCWrappedNative and XPCNativeWrapper in this sentence. Let's say it again, with shorter names: a wn should never have an nw as its parent. >. I believe the nw linkage up the tree, via __parent__ (and parentNode, but that's automated for the deep nw case) should mirror the wn up-linkage. The "down" linkage is isomorphic already for the deep nw case, which is the main point of this automation layer. At the top of the __parent__-linked ancestor line is a nw wrapping a content window, and its object principal should be the same as the content window's. To make this work, we either (a) teach caps about XPCNativeWrappers; or (b) make XPCNativeWrapper objects have private nsISupports that can QI in the way that caps requires to find object principals. Thoughts on the last choice? /be
After building with these changes, I am no longer able to: 1) Open the account Manager 2) Open the Filter Dialog 3) Open the Junk Mail Dialog I'm not seeing any JS errors or console messages. Could it be related to the security fix? I'll keep digging.
I also removed some ThrowException calls that were redundant (when a fallible JS API entry point returns false or null, an exception has already been thrown, or an OOM error reported). /be
Attachment #184129 - Flags: superreview?(bzbarsky)
Attachment #184129 - Flags: review?(bzbarsky)
Attachment #184129 - Flags: approval1.8b2+
Comment on attachment 184129 [details] [diff] [review] XPCNativeWrapper.cpp patch to make deep wrapper __parent__ also deep >Index: XPCNativeWrapper.cpp >+ // JS_NewObject already thread (or reported OOM). s/thread/threw/ r+sr=bzbarsky
Attachment #184129 - Flags: superreview?(bzbarsky)
Attachment #184129 - Flags: superreview+
Attachment #184129 - Flags: review?(bzbarsky)
Attachment #184129 - Flags: review+
My patches at attachment 184113 [details] [diff] [review] and attachment 184129 [details] [diff] [review] are checked in. /be
Since the private of our XPCNativeWrapper's JSObject is an XPCWrappedNative anyway, we may as well flag ourselves as JSCLASS_PRIVATE_IS_NSISUPPORTS. That removes the need for the GetScopeOfObject() hack, fixes object principals, and likely some other places in the code that look for the natives for a JSObject.
Attachment #184130 - Flags: superreview?(brendan)
Attachment #184130 - Flags: review?(brendan)
Comment on attachment 184130 [details] [diff] [review] Object principal fixup BRILLIANT! (best Basil Fawlty voice ;-). /be
Attachment #184130 - Flags: superreview?(brendan)
Attachment #184130 - Flags: superreview+
Attachment #184130 - Flags: review?(brendan)
Attachment #184130 - Flags: review+
Attachment #184130 - Flags: approval1.8b2+
I'm also no longer able to switch folders in the folder pane. I keep getting a JS error saying "Component is not Available" when we call: GetMessagePaneFrame().location
Got as far as listing the URL prefixs that want wrappers. Also needs chrome:xpcNativeWrappers="true" added to a few contents.rdf files.
the original change to mail\base\jar.mn removed (accidentally?) three files that Thunderbird needs to run. I've added those back in and I also needed to register an additional chrome package to avoid errors complaining about not being able to find contextHelp.js. This fixes the problems with: 1) dialogs in thunderbird not opening. Still having problems with: 1) start page no longer loads in the message pane's iframe. 2) unable to switch folders, JS exception saying Component is not Available.
Attachment #184139 - Attachment is obsolete: true
Attachment #184151 - Flags: superreview?(brendan)
Attachment #184151 - Flags: review?(benjamin)
Neil, don't we need to flag xpfe/communicator as needing wrappers too? Also, I'd use NS_ARRAY_LENGTH() instead of sizeof() - 1.
(In reply to comment #166) > Also, I'd use NS_ARRAY_LENGTH() instead of sizeof() - 1. these two have different meanings...
Comment on attachment 184151 [details] [diff] [review] suite changes checked in including communicator sr=bzbarsky if the communicator package is also marked as wanting wrappers.
Attachment #184151 - Flags: superreview?(brendan) → superreview+
Comment on attachment 184151 [details] [diff] [review] suite changes checked in including communicator Gets this security stuff working in Suite builds. Local version of patch contains update for review comments.
Attachment #184151 - Flags: approval1.8b2?
Attachment #184151 - Attachment description: suite changes → suite changes checked in including communicator
Sorry Neil, but your last Checkin for this Bug has caused another regression. When I try to compose a Message (Posting in Newsgroup) Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8b2) Gecko/20050523 Mnenhy/0.7.2.10001 {Build ID: 2005052302} crashes imediatly and reproduceable almost for me. Mozilla Build 2005052221 without this patch works fine.
(In reply to comment #170) >Sorry Neil, but your last Checkin for this Bug has caused another regression. Nah, it just detected a previously existing problem (filed as bug 295200). If you want a workaround, then exit Mozilla, edit dist/bin/chrome/chrome.rdf and add a line chrome:xpcNativeWrappers="true" under urn:mozilla:package:editor.
(In reply to comment #172) >. > > I am seeing the same thing. It works if you click back and forth to another folder a couple of times. That, or click on another message, then back to the folder you want.
Scott, I don't see a bug on the start page issue. Please file one, and file bugs for any other issues you run into? This bug is far too unwieldy to put more patches or discussion here...
Flags: blocking1.8b2+
(In reply to comment #172) > With a clean clobber build from this morning, I am still unable to do the following: > > 1) Switch mail folders (make sure you've loaded a message then try to switch) I filed a separate bug 295222 before I found this discussion. I suppose it could be marked a duplicate.
Trying to wade through all my backed up bugmail. So is this bug fixed and much of this just fall out that should have been filed as separate bugs, or is this issue still being addressed?
The last couple of patches here were just fallout, yes. This bug is fixed. So are all the regressions we know about. ;)
Status: ASSIGNED → RESOLVED
Closed: 17 years ago
Resolution: --- → FIXED
But we still track dependencies such as bug 295937 here. Good way to be sure all followup-bugs are fixed. /be
Please remove the following entries from allmakefiles. configure log message: creating toolkit/components/passwordmgr/resources/content/contents.rdf sed: ./toolkit/components/passwordmgr/resources/content/contents.rdf.in: No such file or directory creating toolkit/content/buildconfig.html creating toolkit/components/passwordmgr/resources/content/contents.rdf sed: ./toolkit/components/passwordmgr/resources/content/contents.rdf.in: No such file or directory creating gfx/gfx-config.h
Comment 179 seems to be unrelated to this bug... crot0@infoseek.jp, did you get the wrong bug number?
(In reply to comment #180) > Comment 179 seems to be unrelated to this bug... crot0@infoseek.jp, did you get > the wrong bug number? In the chekin log, it is bug281988...
Oh, that part... Please file a separate bug on that, make it block this one, assign it to Benjamin.
Component: DOM → DOM: Core & HTML | https://bugzilla.mozilla.org/show_bug.cgi?id=281988 | CC-MAIN-2022-33 | refinedweb | 15,318 | 64.61 |
This article is about a solution to a variation of 894D - Ralph And His Tour in Binary Country, with (almost) complete binary trees replaced by arbitrary trees. The solution turned out to be more complicated than I thought, so I decided to post it as a separate blog.
The main idea is to fix the LCA, just like the original problem.
Note: We are calculating the sum of all Hi - L, such that
where x is any destination vertex and dv is the distance from the root to the v-th vertex.
Therefore, we could calculate the answer using the following functions:
// dist[v] = sorted list of d_x, where x is any node in v's subtree // prefsum[v] = prefix sum array of dist[v] typedef long long i64; i64 helper(int v, i64 val) { if(v == 0) return 0; auto x = std::upper_bound(dist[v].begin(), dist[v].end(), val); if(x == dist[v].begin()) return 0; i64 cnt = std::distance(dist[v].begin(), x); return cnt * val - prefsum[v][cnt - 1]; } i64 solve(int v, int c, i64 happiness, int start) { // par[root] = 0 if(v == 0) return 0; i64 val = happiness - d[start] + 2 * d[v]; return helper(v, val) - helper(c, val) + solve(par[v], v, happiness, start); } i64 query(int v, i64 happiness) { return solve(v, 0, happiness, v); }
We could use this code to calculate the answer for an arbitrary tree, however queries would take
time.
We can speed this algorithm up by using centroid decomposition. We will create a "centroid tree", and create a vector for each node that stores the distances from each node to the nodes in its subtree.
Let d(x, y) be the distance from node x to node y. Then, we need to calculate the sum of all Hi - L such that
, where v is an ancestor of Ai in the centroid tree, and x is a node in v's subtree, but not in the subtree containing Ai.
Note that we can use the exact same helper function to calculate the answer for a single vertex, however we also need to find a way to subtract the answer for v's children. We can reduce subtree queries to range queries on an array using the Euler tour technique.
We also need a data structure that supports the following operations:
1. count(l, r, x): count the number of elements in [l, r] such that ai ≤ x.
2. sum(l, r, x): calculate the sum of elements in [l, r] such that ai ≤ x.
We can use a wavelet tree/persistent segment tree/simple segment tree (offline).
In the following code, let
idx[v] be the index of vertex v in the DFS order, and
sz[v] be the size of v's subtree in the centroid tree. Then, the code can be adapted for the centroid tree like this:
// we build a separate data structure for every vertex data_structure DS[MAXN]; i64 helper(int v, int l, int r, int val) { if(l > r) return 0; return DS[v].count(l, r, val) * val - DS[v].sum(l, r, val); } i64 solve(int v, int c, int happiness, int start) { if(v == 0) return 0; int x = idx[c] - idx[v] + 1; int y = x + sz[c] - 1; i64 val = happiness - d[v][idx[start] - idx[v] + 1]; return (c == 0 ? helper(v, 1, sz[v], val) : helper(v, 1, x - 1, val) + helper(v, y + 1, sz[v], val)) + solve(par[v], v, happiness, start); } i64 query(int v, i64 happiness) { return solve(v, 0, happiness, v); }
This solution takes
time per query. (
solve is called
times per query, and every
helper query takes
time.)
Disclaimer: This is the first time I have used centroid decomposition. Feel free to point out any flaws in this approach. I will try to post a full solution later. Update: The initial version stated that this was a solution for arbitrary binary trees. It is actually a solution for any tree. | http://codeforces.com/blog/entry/55917 | CC-MAIN-2017-51 | refinedweb | 671 | 67.38 |
As a native New Yorker, I would be a mess without Google Maps every single time I go anywhere outside the city. We take products like Google Maps for granted, but they’re an important convenience. Products like Google or Apple Maps are built on foundations of geospatial technology. At the center of these technologies are locations, their interactions and roles in a greater ecosystem of location services.
This field is referred to as geospatial analysis. Geospatial analysis applies statistical analysis to data that has geographical or geometrical components. In this tutorial, we’ll use Python to learn the basics of acquiring geospatial data, handling it, and visualizing it. More specifically, we’ll do some interactive visualizations of the United States!
Environment Setup
This guide was written in Python 3.6. If you haven’t already, download Python and Pip. Next, you’ll need to install several packages that we’ll use throughout this tutorial. You can do this by opening terminal or command prompt on your operating system:
pip3 install shapely==1.5.17.post1 pip3 install geopandas==0.2.1 pip3 install geojsonio==0.0.3
Since we’ll be working with Python interactively, using the Jupyter Notebook is the best way to get the most out of this tutorial. Following this installation guide, once you have your notebook up and running, go ahead and download all the data for this post here. Make sure you have the data in the same directory as your notebook and then we’re good to go!
A Quick Note on Jupyter
For those of you who are unfamiliar with Jupyter notebooks, I’ve provided a brief review of which functions will be particularly useful to move along with this tutorial.
In the image below, you’ll see three buttons labeled 1-3 that will be important for you to get a grasp of: the save button (1), add cell button (2), and run cell button (3).
The first button is the button you’ll use to save your work as you go along (1). Feel free to choose when to save your work. line of code should correspond to a cell.
Lastly, there’s the “run cell” button (3). Jupyter Notebook doesn’t automatically run it.
Introduction
Data typically comes in the form of a few fundamental data types: strings, floats, integers, and booleans. Geospatial data, however, uses a different set of data types for its analyses. Using the shapely module, we’ll review what these different data types look like.
shapely has a class called geometry that contains different geometric objects. Using this module we’ll import the needed data types:
from shapely.geometry import Point, Polygon
The simplest data type in geospatial analysis is the Point data type. Points are objects representing a single location in a two-dimensional space, or simply put, XY coordinates. In Python, we use the point class with x and y as parameters to create a point object:
p1 = Point(0,0) print(p1) POINT (0 0)
Notice that when we print
p1, the output is
POINT (0 0). This indicated that the object returned isn’t a built-in data type we’ll see in Python. We can check this by asking Python to interpret whether or not the point is equivalent to the tuple (0, 0):
print(p1 == (0,0)) False
The above code returns
False because of its type. If we print the type of
p1, we get a shapely Point object:
print(type(p1)) <class 'shapely.geometry.point.Point'>
Next we have a Polygon, which is a two-dimensional surface that’s stored as a sequence of points that define the exterior. Because a polygon is composed of multiple points, the shapely polygon object takes a list of tuples as a parameter.
polygon = Polygon([(0,0),(1,1),(1,0)])
Oddly enough, the shapely Polygon object will not take a list of shapely points as a parameter. If we incorrectly input a Point, we’ll get an error message remind us of the lack of support for this data type.
Data Structures
GeoJSON is a format for representing geographic objects. It’s different from regular JSON because it supports geometry types, such as: Point, LineString, Polygon, MultiPoint, MultiLineString, MultiPolygon, and GeometryCollection.
Using GeoJSON, making visualizations becomes suddenly easier, as you’ll see in a later section. This is primarily because GeoJSON allows us to store collections of geometric data types in one central structure.
GeoPandas is a Python module used to make working with geospatial data in python easier by extending the datatypes used by the Python module pandas to allow spatial operations on geometric types. If you’re unfamiliar with pandas, check out these tutorials here.
Typically, GeoPandas is abbreviated with gpd and is used to read GeoJSON data into a DataFrame. Below you can see that we’ve printed out five rows of a GeoJSON DataFrame:
import geopandas as gpd states = gpd.read_file('states.geojson') print(states.head()) adm1_code featurecla \ 0 USA-3514 Admin-1 scale rank 1 USA-3515 Admin-1 scale rank 2 USA-3516 Admin-1 scale rank 3 USA-3517 Admin-1 scale rank 4 USA-3518 Admin-1 scale rank geometry id scalerank 0 POLYGON ((-89.59940899999999 48.010274, -89.48... 0 2 1 POLYGON ((-111.194189 44.561156, -111.291548 4... 1 2 2 POLYGON ((-96.601359 46.351357, -96.5389080000... 2 2 3 (POLYGON ((-155.93665 19.05939, -155.90806 19.... 3 2 4 POLYGON ((-111.049728 44.488163, -111.050245 4... 4 2
Just as with regular JSON and pandas dataframes, GeoJSON and GeoPandas have functions which allow you to easily convert one to the other. Using the example dataset from above, we can convert the DataFrame to a geojson object using the
to_json function:
states = states.to_json() print(states)
Being able to easily convert GeoJSON from one format to another gives us more freedom as to what we can do with our data, whether that be analyzing, visualizing, or manipulating.
Next we will review geojsonio, a tool used for visualizing GeoJSON on the browser. Using the states dataset above, we’ll visualize the United States as a series of Polygons with geojsonio’s
display function:
import geojsonio geojsonio.display(states)
Once this code is run, a link will open in the browser, displaying an interface as shown below:
On the left of the page, you can see that the GeoJSON displayed and available for editing. If you zoom in and select a geometric object, you’ll see that you also have the option to customize it:
And perhaps most importantly, geojsonio has multiple options for sharing your content. There is the option to share a link directly:
And to everyone’s convenience the option to save to GitHub, GitHub Gist, GeoJSON, CSVs, and various other formats gives developers plenty of flexibility when deciding how to share or host content.
In the example before we used GeoPandas to pass GeoJSON to the
display function. If no manipulation on the geospatial needs to be performed, we can treat the file as any other and set its contents to a variable:
contents = open('map.geojson').read() print(contents) { "type": "Point", "coordinates": [ -73.9617, 40.8067 ] }
The format is still a suitable parameter for the
display function because JSON is technically a string. Again, the main difference between using GeoPandas is whether or not any manipulation needs to be done.
This example is simply a point, so besides reading in the JSON, nothing necessarily has to be done, so we’ll just pass in the GeoJSON string directly:
geojsonio.display(contents)
And once again, a link is opened in the browser and we have this beautiful visualization of a location in Manhattan.
And That’s a Wrap
That wraps up an introduction to performing geoSpatial analysis with Python. Most of these techniques are interchangeable in R, but Python is one of the best suitable languages for geospatial analysis. Its modules and tools are built with developers in mind, making the transition into geospatial analysis must easier.
In this tutorial, we visualized a map of the United States, as well as plotted a coordinate data point in Manhattan. There are multiple ways in which you can expand on these exercises & state outlines are crucial to so many visualizations created to compare results between states.
Moving forward from this tutorial, not only can you create this sort of visualization, but you can combine the techniques we used to plot coordinates throughout multiple states. To learn more about geospatial analysis, check the resources below:
If you liked what you did here, follow @lesleyclovesyou on Twitter for more content, data science ramblings, and most importantly, retweets of super cute puppies. | https://www.twilio.com/blog/2017/08/geospatial-analysis-python-geojson-geopandas.html | CC-MAIN-2020-45 | refinedweb | 1,453 | 53 |
Hello everyone,
I’m using stanalone Psychopy v3.0.1. With the last beta releases of Psychopy3, I could change audioLib, but now, changing prefs.general[‘audioLib’] has no impact. I cannot use sounddevice, because timing is important in my experiment. Is this a bug in psychopy.prefs or a mistake in my code?
from psychopy import prefs prefs.general['audioLib']=['pyo'] from psychopy import sound sound.audioLib
Output
'sounddevice'
I posted this before, but I didn’t isolate the problem from the rest of my code, so I didn’t get an answer. I apologise if anyone is reading this a second time. | https://discourse.psychopy.org/t/setting-prefs-general-audiolib-to-pyo-doesnt-work/6288 | CC-MAIN-2021-39 | refinedweb | 104 | 62.14 |
Server-Side Rendering For Client-Side Apps.
Here at OddBird, we’re lucky enough to mostly work on greenfield projects – which means we choose our own tech stack. One of the first questions is how to render templates for the initial page-load. There are many reasons to prefer server-side rendering over a “pure” single-page app which always renders content in the browser – it’s better for SEO, users don’t have to wait for the JavaScript to initialize before seeing content on the page, etc. But it’s also more work to convince a client-side MV* framework to play nicely – and efficiently – with server-rendered markup.
Kit has already laid out some of the options for sharing templates between client and server, and outlined one way we’ve tried to reduce code/logic duplication in the API layer. That’s a great start, but we still need to turn that server-rendered markup into an interactive single-page application.
Getting the DataCopy permalink to “Getting the Data”
There are a few ways we could transfer data from the server to our client-side application:
- Request JSON from the API via an XHR
- Embed JSON in a
<script>tag (either
type="text/javascript"or
type="application/json")
- Embed JSON in data-attributes on DOM elements corresponding to models or collections
The first option is the “cleanest” (allowing the JS to consistently fetch data through the API), but adds an unnecessary XHR and wait-time before the page is ready for user interaction.
The second and third options are similar. Using a
<script> tag is
probably the most efficient (using only one DOM interaction to acquire
the entire data set), but requires careful namespacing and patterns to
know which data should be attached to which existing markup. In cases
involving large collections, this is my preferred approach.
Storing JSON in data-attributes on individual DOM elements has the advantage of coupling the data and markup together for each component, but requires consistent markup patterns if the JS is to be reusable for various pieces of the app. It necessitates DOM interactions to fetch the data, which could easily cause performance issues with larger collections. In our case – with a relatively small data set for each page – this option provides both reasonable performance and a clear relationship between the data and its corresponding markup.
For example, let’s say that we want to attach models and views to a server-rendered list of comments. Using Jinja2/Nunjucks, our markup might look like this:
<div class="comment-list">
<article class="comment" data-
<p>{{ comment.body }}</p>
<p>{{ comment.author }}</p>
</article>
</div>
Brief
Copy permalink to “Brief <aside>”
<aside>
You’ll note that we’re using a custom
json filter to convert an object
into a JSON string. One downside of sharing templates between front-end
and back-end (Nunjucks written in JS on the front-end, and Jinja2
written in Python on the back-end) is that any custom filters used in
shared templates must be written in both languages. So for this to work,
we have a
json filter added to our Nunjucks environment:
import nunjucks from 'nunjucks';
const env = new nunjucks.Environment();
env.addFilter('json', (val) => JSON.stringify(val));
And a corresponding filter added to our Jinja2 environment:
from json import dumps
from jinja2 import Environment
def environment(**options):
env = Environment(**options)
env.filters.update({
'json': json,
})
return env
def json(val):
"""Return given value as a JSON string."""
return dumps(val)
This isn’t ideal, but seems like a reasonable trade-off since it allows us to avoid duplicating all the template files themselves.
Ok,
</aside>.
Using the DataCopy permalink to “Using the Data”
So we’ve made the model/collection data available in the DOM without requiring an additional XHR. Now we need to add our JS layer, turning the data into actual models or collections that are managed by views.
The details differ here from one framework to another. Since we’re using Backbone.js and Marionette (^3.0.0), let’s look at one approach with those frameworks.
import BB from 'backbone';
import Mnt from 'backbone.marionette';
const ViewWithModel = Mnt.View.extend({
initialize () {
// Only run this code if an ``el`` option is passed in, signifying
// that the view is being attached to existing markup in the DOM.
if (this.options.el) {
this.attachModel();
}
},
// Find the existing [data-js-model] element, adding a model to the view.
attachModel () {
const child = this.$('[data-js-model]');
const modelData = child.data('js-model');
this.model = new BB.Model(modelData);
// Trigger any onRender handlers attached to the view.
this.triggerMethod('render', this);
}
});
const myView = new ViewWithModel({ el: $('.comment') });
Or for a view with a collection of models:
import BB from 'backbone';
import Mnt from 'backbone.marionette';
// Create a child view (used for each individual model).
const MyChildView = Mnt.View.extend({
// ...
});
const ViewWithCollection = Mnt.CollectionView.extend({
collection: new BB.Collection(),
childView: MyChildView,
initialize () {
// Only run this code if an ``el`` option is passed in, signifying
// that the view is being attached to existing markup in the DOM.
if (this.options.el) {
this.attachChildren();
}
},
// Look through existing child [data-js-model] elements, adding models
// to the collection, and attaching views to the models.
attachChildren () {
const view = this;
const collection = view.collection;
const children = this.$('[data-js-model]');
children.each((idx, el) => {
const $el = $(el);
const modelData = $el.data('js-model');
// Check to see if this model already exists in the collection.
let model = collection.get(modelData.id);
if (!model) {
// Create the new model, and add it to the collection.
model = collection.add(modelData, { silent: true });
}
const childView = new view.childView({ model, el });
view.addChildView(childView, idx);
});
// Prevent the collectionView from rendering children initially.
view._isRendered = true;
// Trigger any onRender handlers attached to the view.
view.triggerMethod('render', view);
}
});
const myView = new ViewWithCollection({ el: $('.comment-list') });
Now we have a model (or collection of models) instantiated with data from our server-rendered markup, all being managed by Marionette views! 🎉
Where Do We Go From Here?Copy permalink to “Where Do We Go From Here?”
In the end, we’re moving toward the best of both worlds: a server-rendered page (easily indexable by search engines, with content immediately visible to users), with the client-side benefits of a single-page app (live-updating components, and no page refreshes).
There are a number of improvements we could make – prioritizing the most important pieces of interactivity and lazy-loading the rest, abstracting our code into a Marionette behavior that can be added to any view where we want to pre-load with data from the DOM – but this is a good start. Every step of the way, we strive to minimize the amount of duplicated code or logic – no need for a JavaScript process on the server, and no duplicated templates.
We have a number of other tricks for sharing canonical data – global settings, third-party API keys, minified asset mappings, and even color maps generated directly from SCSS – but those will wait for a later installment in this series.
How have you tackled the problem of wiring up a single-page application with server-side rendering? What are we missing, or where could we improve our methods? Drop us a line via Twitter! | https://www.oddbird.net/2017/02/06/server-side-rendering-client-side-app/ | CC-MAIN-2022-40 | refinedweb | 1,211 | 55.03 |
The story so far: you’ve got a mountain of data coded in XML, you’ve got schemas to make sure the data is valid, and you’ve written tools to manipulate the XML documents in a sensible way.
Now your boss has asked you to generate a report for the board meeting tomorrow. You know just what needs to be done: you have to convert your XML into something that can be displayed and printed–like HTML or a PDF file.
You could whip out your favorite programming language and write a tool to format the information, but data conversion tools are tedious to write. It would be nice if there was a better way. What you need is a tool that lets you describe how to transform your XML into “something else” and not have to write the code that does the transformation.
The Extensible Style Language (XSL) family of specifications can help save the day.
Making it Presentable
XSL makes it easy to change an XML document into a readable on-screen display or an attractive printed presentation (or another XML document based on a different schema).
Instead of writing your own tools, you can use XSL to define rules — change this element to that element, ignore this attribute, sort these sub-elements into this order, etc. — and use pre-built tools to interpret the rules and process the document.
XSL is composed of three complementary technologies:
Unfortunately, the acronym “XSL” is used to refer to the single Extensible Style Language specification as well as the collection of all three specifications. Even worse, the acronym “XSL” is also used to refer to various elements found in the XSLT language. In this article, we’ll always refer to XSL Transformations as XSLT.
Let’s take a closer look at XSLT and XPath in the context of two specific tasks: a transformation from XML to HTML and a simple XML-to-XML transformation. This article won’t cover transformation from XML to XSL-FO; however, once you understand XSLT, you can emit HTML, XML, or XSL-FO equally well. Likewise, it’s impossible to cover all of the features of XSLT and XPath in such a short space, but the topics covered here will get you started.
The XSL Processing Model
To transform an XML document to another form (e.g., XML, HTML, raw text, PDF, etc.) you need an XML document (the source), a set of transformation rules, and an application to interpret the rules, read and process the XML source, and emit a new representation of the data (the result).
In XSL, the set of transformation rules is collectively called an XSLT stylesheet. An individual rule is referred to as an XSLT template. The application that interprets the stylesheet and performs the actual transformation is called an XSLT processor.
You already have the source XML document, and there are several Open Source and commercial XSLT processors available: Saxon (), Xalan (), and xsltproc (), to name just two. Your work will focus on creating a set of templates to control the XSLT processor.
Every template has two parts: an XPath expression that selects elements in your source document, and a set of instructions to transform those elements to the target format. Valid XPath expressions are element names, such as “address“; paths to elements, such as “address/firstname” which specifies the element “firstname” contained in the element “address“; and wildcards such as “*“, which means “all elements.”
Some instructions in your stylesheet may be very simple: copy an element verbatim from the source to the result. Other template instructions might rename an element or condense a hierarchy of elements into a single element. Of course, you might want to ignore elements in the source XML. In that case, you can omit the template for that element or create a template without instructions (both can be done in XSLT, but which technique to apply depends on the transformation you want to achieve). Because templates can also emit whatever format you want (text, HTML, or XSL-FO elements) simply write the instructions to produce the target format.
How does the XSLT processor work? First it takes your source XML document and converts it into a tree composed of nodes. This is known as the source tree. (You can think of this tree as a Document Object Model (DOM) view of your source document; however, XSLT processors are not required to implement any specific object model. For more on converting an XML document into a tree, see “DOM-inating with XML” in the October 2001 issue, available online at). The processor then uses your XSLT stylesheet, matching templates to nodes, performing a tree-to-tree transformation; the transformation yields what’s known as the result tree.
If you’re converting one XML document to another, the result tree may need no additional processing except to capture a text-based representation of the tree to a file. If you’re converting an XML document to HTML, the result tree (most likely a tree of HTML elements) might simply be sent to the browser. Or, when transforming an XML document into an XSL-FO “document,” the resulting XSL-FO tree would be passed directly to the next-stage XSL-FO processor to emit the tree as a PDF file.
Converting XML to HTML
To start learning XSL we’ll create an XSL stylesheet to transform an address book stored as XML into HTML. (The full text of this stylesheet, and all the stylesheets discussed in this article, can be downloaded from) We’ll reuse the address book we created in the article “XML Schema Languages” in the February 2002 issue (). The document will look like Figure One. Illustration One shows the same document as a tree.
Figure One: Sample address book document
<addressbook>
<address>
<name>John Smith</name>
<street>123 Any Street</street>
<city>Anytown</city>
<state>MA</state>
<zip>01004</zip>
</address>z
<address>
<name>Jane Smith</name>
…
</address>
</addressbook>
Most stylesheets begin with the <stylesheet> element (or its synonym, <transform>, used in the xsl namespace):
<xsl:stylesheet xmlns:xsl=
“”
version=”1.0″>
This stylesheet directive uses the xsl: prefix to identify elements in the XSL namespace, but as always with namespaces, the prefix is arbitrary. It also uses XSLT version 1.0, which is declared with the version attribute.
Next, we specify what the output should look like. The <xsl:output> element chooses what sort of serialization to use (XML, HTML, or text), the form of character encoding, whether elements should be indented, and a number of other parameters. To produce HTML output, we say:
<xsl:output method=”html” indent=”no”/>
The indent attribute tells the processor whether it should make the resulting XML “pretty” when it’s serialized. In general, it’s a bad idea to enable indenting if your output document isn’t pure data. For example, in the case of HTML, turning on indenting sometimes results in extra whitespace in <pre> elements, where white space could be significant.
As mentioned above, the heart of any stylesheet is its templates. To transform a document, the XSLT processor starts at the root of the source tree and performs an in-order travesal. For each node it encounters, the processor looks for a matching template in the stylesheet. If it finds a template, it follows the template’s instructions to construct a part of the result tree. Any node without a matching template is copied into the result tree as-is.
Figure Two shows a sample template. All templates starts with the <xsl:template> element. The match attribute specifies the type of node it should apply to. The “/” pattern is special: it matches the root of the document, which “occurs” before the first element. Thus, the “/” template is always processed first.
Figure Two: The root template
<xsl:template match=”/”>
<html>
<head>
<title>Address Book</title>
</head>
<body>
<xsl:apply-templates/>
</body>
</html>
</xsl:template>
Normally, everything between the <xsl:template> and </xsl:template> elements is copied into the result tree. This includes elements which are not in the XSLT namespace, such as the HTML elements <html>, <head>, and so on. Any elements that are in the XSL namespace are usually XSLT template instructions; those are not copied, but are processed in the order in which they appear.
In Figure Two, <xsl:apply-templates> is an example of a processing instruction. <xsl:apply-templates> element tells the processor to interrupt its current traversal of the tree, and start a new “sub-traversal.” This sub-traversal begins at the current node and proceeds down through the sub-tree rooted at the current node in the same in-order fashion.
While the sub-traversal progresses, the processor continues searching for nodes to process and transform. Finding additional template matches along the way can result in more sub-traversals.
Once a sub-traversal finishes, the traversal it interrupted is restarted. However, when the processor resumes the interrupted walk, it will ignore all nodes traversed by the sub-traversal. In XSLT, every node in the source tree is examined only once.
An important concept to understand in XSLT is context. The “context node” is the node currently being processed. Many XSLT instructions change the context. More precisely, the processor always compares the context node to the list of available templates. The processing instruction <xsl: apply-templates/> tells the processor: “Select each child of the current context node in document order; for each one, make it the current context node; find the template that matches it; and process that template.”
More Templates
The template in Figure Three matches <addressbook> elements in the source tree (in our sample document there is only one since <addressbook> is the root element of our simple schema). The most common type of match pattern is a simple element name like this one (“addressbook“). But match patterns can be more complicated. It’s possible to match elements only in a certain context or elements that satisfy certain criteria. We’ll see how to do that later on.
Figure Three: An <addressbook> template
<xsl:template match=”addressbook”>
<p>
<xsl:text>There are </xsl:text>
<xsl:value-of select=”count(address)”/>
<xsl:text> addresses in the address book.
</xsl:text>
</p>
<xsl:apply-templates/>
</xsl:template>
The template in Figure Three is only processed when the context node is <addressbook> (as defined by the <xsl: template> match pattern).
This template introduces a couple of new XSLT instructions. An <xsl:text> element simply wraps text. Using an explict wrapper avoids some whitespace-stripping issues. (For example, there’s a space after the word “are” that will not be stripped.)
The <xsl:value-of> element evaluates the expression given in its select attribute, converts the result to a string, and returns the string. count() is an built-in XSLT function. In this case, the expression “count(address)” counts how many children of the context node are <address> elements. So, this expression will count all the addresses in the address book. If there are four addresses in the address book, this template will insert the following text into the result tree.
<p>There are 4 addresses in the address
book.</p>
After processing the <xsl:text> and <xsl:value-of> instructions, we now process the <xsl:apply-templates> instruction. <xsl:apply-templates> suspends the current traversal and starts a sub-traversal over each of the children of the current context node. Here, the context node is <addressbook>. That node contains a number of <address> elements, so those are processed next. The template in Figure Four matches each <address> element.
Figure Four: An <address> template
<xsl:template match=”address”>
<p>
<xsl:apply-templates
select=”name|pobox|street”/>
<xsl:apply-templates select=”city”/>
<xsl:text>, </xsl:text>
<xsl:apply-templates select=”state”/>
<xsl:text> </xsl:text>
<xsl:apply-templates select=”zip”/>
</p>
</xsl:template>
This time, the <xsl:apply-templates> instruction has a select attribute. The expression given in select identifies the particular child nodes to process (recall the default selection is all the children of the context node).
The syntax of a select expression is governed by the XPath Recommendation. We’ll talk about XPath in just a little while. For now, we’ll use select expressions to simply list which elements we want to process. (Match patterns are also XPath expressions, but their syntax is limited to a subset of XPath.)
The first select expression, “name|pobox|street” chooses all of the <name>, <pobox>, and <street> elements, in the order that they occur in the document. The vertical bar in a select expression represents the union (“OR”) operator.
After all of the <name>, <pobox>, and <street> elements have been processed, the remaining <xsl:apply-templates> elements and select expressions select the <city>, <state>, and <zip> elements, respectively. (Technically, they select all of each element, but the schema dictates that there will only be one of each.)
The text between the state and the zip code, , is two unbreakable spaces (the Unicode value for an unbreakable space is 160 decimal).
By placing our select expressions in a specific order, we force the matching elements into the corresponding order in the result tree. Even if the elements were “out of order” (assuming our schema allowed it), they would now be “in order” after processing.
Also notice that if we’d mistakenly written the first select as “name|street” (forgetting about pobox), then any <pobox> elements would be ignored. Once the sub-traversal is completed, the parent traversal will not traverse the elements which were ostensibly handled by the <xsl:apply-templates> instruction. A mistakenly skipped node is a bug in the stylesheet.
Still More Templates
There are two templates in Figure Five to match the address elements. The first matches <name>, <pobox>, and <street> elements. The body of this template processes the contents of each element by calling <xsl:apply-templates/>. This causes the text of each element to be copied directly into the result tree. Remember the text of each element is contained in a node in the source tree and the default is to copy any node for which there is no matching template.
Figure Five: Matching address elements
<xsl:template match=”name|pobox|street”>
<xsl:apply-templates/>
<br/>
</xsl:template>
<xsl:template match=”city|state|zip”>
<xsl:apply-templates/>
</xsl:template>
After that, an empty <br> element is placed in the result tree. Note that the XSL stylesheet is itself an XML document, so the proper XML “empty tag” syntax must be used (if the result tree is serialized as HTML, the proper “<br>” tag will be produced). Finally, the <city>, <state>, and <zip> elements are processed the same way as the previous three elements.
Figure Six shows the HTML result of processing the sample address book in Figure Five.
Figure Six: Address book formatted as HTML
<html><head>
<meta http-equiv=”Content-Type” content=”text/html; charset=utf-8″>
<title>Address Book</title></head><body><p>There are 4 addresses in the address book.</p>
<p>John Smith<br>123 Any Street<br>Anytown, MA 01004</p>
<p>Jane Smith<br>123 Any Street<br>Anytown, MA 01004</p>
<p>John Bigboote<br>314 Pi Ave<br>Grovers Mills, NJ 08648</p>
<p>Jane Doe<br>1 Missing St<br>Someplace, NY 10130</p>
</body></html>
Understanding XPATH
After that quick walkthrough of XSLT, let’s take a closer look at some of the more technical details and examine how elements can be more precisely matched.
The general structure of an XSL stylesheet is a set of templates that describe how to turn “these things” in the source document into “those things” in the result document. To use XSLT, you must be familiar with both the source and the result: how to point to “these things” and how to describe “those things.” XPath is the language used to describe “these things.”
XPath was designed to be used in both attribute values and URIs, so it’s a string-based expression language. Most XPath expressions locate or identify elements in the source tree. Other, more general expressions are covered later in the section on predicates.
An XPath expression is known more formally as a location path, a set of one or more location steps separated by slashes. The slashes separate levels of hierarchy. Each step, in turn, is composed of three parts: an optional axis specifier, a node test, and an optional predicate. Although they are specified in that order, let’s take a look at the node test first, as the other two modify the meaning of the node test.
NODE TEST
The node test chooses a specific kind of node based on its type (there are six node types: element nodes, attribute nodes, text nodes, comment nodes, processing instruction nodes, and namespace nodes). The simplest node test, and therefore the simplest location step and location path, is just the name of an element (e.g., address): it selects element nodes with only that name (e.g., <address>). Some other examples are:
AXIS SPECIFIERS
Once you’ve specified a node test, the next thing to qualify is the location, in relation to the context node, where the node test should apply. That’s the role of the axis specifier. An axis specifier acts as a filter by identifying a segment of the document tree and restricting the node test to match nodes only in that segment. The axis specifier precedes the node test and is separated from it with a double colon. There are seven axes in XPath that completely subdivide the tree:
There are six additional axes that will select some other useful subsets of the tree.
One important aspect of an axis specifier is the order in which it returns nodes. In general, nodes are returned in the order they appear in the document. The exceptions are preceding, preceding-sibling, ancestor, and ancestor-or-self. These return nodes in reverse document order.
Consider our earlier example of John Smith’s address. In the template that matches <city>, following-sibling::* would return <state> and <zip>, in that order. But if you asked for preceding-sibling::*, you’d get <street> and <name>, in that order.
PREDICATES
The combination of an axis specifier and a node test selects all the nodes of a specific type from a specific segment of the source tree. But sometimes you need even finer control over the selection of nodes. This is the purpose of the predicate.
A predicate is an XPath expression in square brackets following a node test. The simplest predicate is a number: it selects the nth node. For example, the following expression selects the second <para> child of the context node:
child::para[2]
When using predicates with axis specifiers that return nodes in reverse document order, the lower the number, the closer the node is to the context node. For instance, preceding-sibling[1] is the closest immediate preceding sibling, preceding-sibling[2] is the next one further away, and so on.
In general, the XPath expressions that occur in the predicate can be arbitrarily complex. Take a look at this expression:
descendant::listitem[parent::list[not(@mark)
or @mark != "bullet"]
and position() > 4
and size() < 20)]
This selects all of the listitem elements that are among the descendants of the context node that also satisfy the following additional constraints:
position(), size(), and not() are other XSLT functions.
When a predicate is being evaluated, the context node is temporarily set to the node that matches the node-test with which the predicate is associated.
LOCATION PATHS
As specified earlier, location steps (an axis specifier, node test, and predicate) can be strung together to form location paths. Location paths can be used to choose nodes that satisfy all of the conditions of each step. For example, the following expression selects all of the <title>s in <figure>s that occur in<chapter>s below the context node.
chapter/figure/title
The single slash selects one level of hierarchy. On the other hand, a double slash (“//“) indicates arbitrary levels of hierarchy. The following expression selects all the titles of figures that occur at any level of chapters below the context node.
chapter//figure/title
For example, in Figure Seven , if the context node is the <book> element, the first expression above would select only the “F1″ title (“E1″ is not a figure title and “F2″ and “F3″ are not the titles of figures that are the direct children of <chapter>). The second expression would select both “F1″ and “F2″ (but not “E1″ or “F3″).
Figure Seven: A very short book
<book>
<chapter>
<figure>
<title>F1</title>
</figure>
</chapter>
<chapter>
<example>
<title>E1</title>
</example>
<section>
<figure>
<title>F2</title>
</figure>
</section>
</chapter>
<appendix>
<figure>
<title>F3</title>
</figure>
</appendix>
</book>
If a slash occurs at the beginning of an expression, it selects the root of the tree. Referring to Figure Seven, the expression “/book” would select the <book> element regardless of the context node.
If a double slash occurs at the beginning of an expression, it potentially selects every node in the tree. Use this expression carefully, as it can have a significant performance impact on the transformation process.
Transforming XML to XML
Another common task for XSLT is to transform one XML document into another. For example, suppose that you wanted to provide explicit markup for first and last names in the address book. This will make it easy to sort the address book entries. Figure Eight shows the desired result of the transformation. The relevant XSLT constructs are shown in Figure Nine. This is an XML to XML transformation, so the output method is xml.
Figure Eight: Desired XML output
<address>
<name><firstname>John</firstname>
<surname>Smith</surname></name>
<street>123 Any Street</street>
<city>Anytown</city>
<state>MA</state>
<zip>01004</zip>
</address>
Figure Nine: XML in, XML out
<xsl:output method=”xml” indent=”no”/>
<xsl:template match=”*”>
<xsl:copy>
<xsl:copy-of select=”@*”/>
<xsl:apply-templates/>
</xsl:copy>
</xsl:template>
<xsl:template match=”name”>
<xsl:copy>
<xsl:copy-of select=”@*”/>
<firstname><xsl:value-of select=
“substring-before(., ‘ ‘)”/></firstname>
<surname><xsl:value-of select=
“substring-after(., ‘ ‘)”/></surname>
</xsl:copy>
</xsl:template>
As is often the case in XML to XML transformations, most of the XML elements will be copied through without change. This is in contrast to our previous XML to HTML transformation, where we didn’t want any XML elements to pass through to the final HTML document (since they’d likely be illegal tags in HTML).
The first template performs the task of copying XML elements. The match pattern “*” matches any element. It’s then copied via the <xsl:copy> instruction. The <xsl:copy> instruction can copy any type of node. It automatically copies namespace bindings, but does not copy attributes or children of elements.
The <xsl:copy-of> instruction is similar to <xsl:copy>, but it allows you to select specific nodes. In this case we have a select pattern that copies attributes. Just as “*” matches all elements, “@*” matches all attributes. This allows us to copy all the attributes.
Finally, the <xsl:apply-templates> instruction makes sure the content of the element is copied and copies all its children.
Now, if this were the only template in the stylesheet, the result of the transformation would be a complete logical copy of the source document. However, there are some things, such as whitespace in attribute values and entity references, that cannot be preserved with XSLT. You cannot write an XSLT transformation that will always produce a byte-for-byte identical result document.
Our goal here, though, is to make changes to the structure of names, so next is a template for <name> elements. A template with an explicit name always has a higher priority than a template that matches “*“, so this template will be used for names.
This template uses <xsl:copy> and <xsl:copy-of> to make a copy of the <name> element and its attributes. But instead of applying templates to create the internal structure, <firstname> and <surname> elements are created.
The substring-before() and substring-after() functions, as their names suggest, return the text of a string before and after a separator character. So, given the name “John Smith” as the context node (the dot, “.“), “substring-before(.,’ ‘)” will return the string before the first space, “John“. The last name is similarly extracted with the substring-after() function. The result of this stylesheet will be a copy of the original address book, with updated markup for names.
Of course, if the address book contains people’s middle names, simply chopping the name in two at the first space isn’t going to give very good results. You’ll have to do some cleanup by hand. It’s difficult to automatically add more markup to a document, but it’s easy for the processor to ignore markup. As a general rule, always create data with as much markup as you think you’ll ever need, and then add a bit more. Data that isn’t marked up isn’t accessible.
Further Transformations
The ability to transform XML into almost any form makes XML a much better way to store data. With the capabilities of XPath and XSLT, any data stored as XML can be presented in almost any form desired. For instance, a single Web page in XML could be presented to the client in whatever form of markup it’s capable of handling. That means the same Web page needn’t be placed in separate files (an HTML version, a WML version, etc.).
To learn more about XSL (especially XSL-FO) read the W3C Recommendations or any of the numerous books available which cover everything from from the basics (such as named templates, numbering, and conditional processing) to considerably more advanced topics (such as extension functions and elements). However, with what we’ve covered here, you can begin writing simple transformations on your own. And in no time an all, you’ll be able to get your boss that presentation he needs for the board meeting.
Resources
XPath:
XSLT:
XSL:
Saxon:
Xalan:
xsltproc:
Downloads for This Article
Transforming XML:
Previous Articles in This Series
XML Basics:
Dominating With XML:: xmldom_ 01.html
XML Schema. | http://www.linux-mag.com/id/1077/ | CC-MAIN-2019-09 | refinedweb | 4,368 | 61.46 |
IntroductionEvents.Event handler is a method that has the same signature as the event and this method is executed when the event occurs.To define an event you need first to define a delegate that contains the methods that will be called when the event raised, and then you define the event based on that delegate.Example:public class MyClass{ public delegate void MyDelegate(string message); public event MyDelegate MyEvent;}Raising an events is a simple step. First you check the event agaist a null value to ensure that the caller has registered with the event, and then you fire the event by specifying the event by name as well as any required parameters as defined by the associated delegate.Exampleif (MyEvent != null) MyEvent(message);So far so good, in the previous section you saw how to define an event and the delegate associated with it and how to raise this event.Now you will see how the other parts of the application can respond to the event. To do this you just need to register the event handlers.when you want to register an event handler with an event you must follow this pattern:MyClass myClass1 = new MyClass();MyClass.MyDelegate del = new MyClass.MyDelegate(myClass1_MyEvent);myClass1.MyEvent += del;or you can do this in one line of code
myClass1.MyEvent += new MyClass.MyDelegate(myClass1_MyEvent);
//this is the event handler
//this method will be executed when the event raised.
static void myClass1_MyEvent(string message)
{
//do something to respond to the event.}Let's see a full example to demonstrate the concept:
namespace EventsInCSharp
public class MyClass
{
public delegate void MyDelegate(string message);
public event MyDelegate MyEvent;
//this method will be used to raise the event.
public void RaiseEvent(string message)
{
if (MyEvent != null)
MyEvent(message);
}
}
class Program
static void Main(string[] args)
MyClass myClass1 = new MyClass();
myClass1.MyEvent += new MyClass.MyDelegate(myClass1_MyEvent);
Console.WriteLine("Please enter a message\n");
string msg = Console.ReadLine();
//here is we raise the event.
myClass1.RaiseEvent(msg);
Console.Read();
//this method will be executed when the event raised.
static void myClass1_MyEvent(string message)
Console.WriteLine("Your Message is: {0}", message);
}we are doing great, but what if you want to define your event and it's associated delegate to mirrors Microsoft's recommended event pattern. To do so you must follow this patten:
public delegate void MyDelegate(object sender, MyEventArgs e);
public event MyDelegate MyEvent;As you can see the first parameter of the delegate is a System.Object, while the second parameter is a type deriving from System.EventArgs.The System.Object parameter represents a reference to the object that sent the event(such as MyClass), while the second parameter represents information regarding the event.If you define a simple event that is not sending any custom information, you can pass an instance of EventArgs directly.let's see an example: namespace MicrosoftEventPattern
public delegate void MyDelegate(object sender, MyEventArgs e);
public class MyEventArgs : EventArgs
public readonly string message;
public MyEventArgs(string message)
{
this.message = message;
}
public void RaiseEvent(string msg)
MyEvent(this, new MyClass.MyEventArgs(msg));
static void myClass1_MyEvent(object sender, MyClass.MyEventArgs e)
if (sender is MyClass)
MyClass myClass = (MyClass)sender;
Console.WriteLine("Your Message is: {0}", e.message);
}we are done now, in my next article i'll show you how to define your custom event to use it in a custom control in a windows application.
thanks~
Building a UNIX Time to Date Conversion Custom Control in C#
Anonymous methods in C#
Thanks so much!
your article its great!
I wait your article about
"custom control in a windows application."
How can I use Split event?
Hi!! How can I use split event?
This is Simply ExcellentThanksRaviKumar Bhuvanagiri | http://www.c-sharpcorner.com/uploadfile/Ashush/events-in-C-Sharp/ | CC-MAIN-2013-20 | refinedweb | 618 | 56.76 |
In C#, the specialization relationship is typically implemented using.
In C#, you create a derived class by adding a colon after the name of the derived class, followed by the name of the base class:
public class ListBox : Window
This code declares a new class, ListBox, that derives from Window. You can read the colon as "derives from."
The derived class inherits all the members of the base class, both member variables and methods. The derived class is free to implement its own version of a base class method. It does so by marking the new method with the keyword new. (The new keyword is also discussed in Section 5.3.3, later in this chapter.) This indicates that the derived class has intentionally hidden and replaced the base class method, as in Example 5-1.
using System; public class Window { // these members are private and thus invisible // to derived class methods; we'll examine this // later in the chapter private int top; private int left; // constructor takes two integers to // fix location on the console public Window(int top, int left) { this.top = top; this.left = left; } // simulates drawing the window public void DrawWindow( ) { Console.WriteLine("Drawing Window at {0}, {1}", top, left); } } // ListBox derives from Window public class ListBox : Window { private string mListBoxContents; // new member variable // constructor adds a parameter public ListBox( int top, int left, string theContents): base(top, left) // call base constructor { mListBoxContents = theContents; } // a new version (note keyword) because in the // derived method we change the behavior public new void DrawWindow( ) { base.DrawWindow( ); // invoke the base method Console.WriteLine ("Writing string to the listbox: {0}", mListBoxContents); } } public class Tester { public static void Main( ) { // create a base instance Window w = new Window(5,10); w.DrawWindow( ); // create a derived instance ListBox lb = new ListBox(20,30,"Hello world"); lb.DrawWindow( ); } } Output: Drawing Window at 5, 10 Drawing Window at 20, 30 Writing string to the listbox: Hello world
Example 5-1 starts with the declaration of the base class Window. This class implements a constructor and a simple DrawWindow method. There are two private member variables, top and left.
In Example 5-1, the new class ListBox derives from Window and has its own constructor, which takes three parameters. The ListBox constructor invokes the constructor of its parent by placing a colon (:) after the parameter list and then invoking the base class with the keyword base:
public ListBox( int theTop, int theLeft, string theContents): base(theTop, theLeft) // call base constructor
Because classes cannot inherit constructors, a derived class must implement its own constructor and can only make use of the constructor of its base class by calling it explicitly.
Also notice in Example 5-1 that ListBox implements a new version of DrawWindow( ):
public new void DrawWindow( )
The keyword new indicates that the programmer is intentionally creating a new version of this method in the derived class.
If the base class has an accessible default constructor, the derived constructor is not required to invoke the base constructor explicitly; instead, the default constructor is called implicitly. However, if the base class does not have a default constructor, every derived constructor must explicitly invoke one of the base class constructors using the base keyword.
In Example 5-1, the DrawWindow( ) method of ListBox hides and replaces the base class method. When you call DrawWindow( ) on an object of type ListBox, it is ListBox.DrawWindow( ) that will be invoked, not Window.DrawWindow( ). Note, however, that ListBox.DrawWindow( ) can invoke the DrawWindow( ) method of its base class with the code:
base.DrawWindow( ); // invoke the base method
(The keyword base identifies the base class for the current object.)
The visibility of a class and its members can be restricted through the use of access modifiers, such as public, private, protected, internal, and protected internal. (See Chapter 4 for a discussion of access modifiers.)
As you've seen, public allows a member to be accessed by the member methods of other classes, while private indicates that the member is visible only to member methods of its own class. The protected keyword extends visibility to methods of derived classes, while internal extends visibility to methods of any class in the same assembly.[1]
[1] An assembly (discussed in Chapter 1), is the unit of sharing and reuse in the Common Language Runtime (a logical DLL). Typically, an assembly is a collection of physical files, held in a single directory that includes all the resources (bitmaps, .gif files, etc.) required for an executable, along with the Intermediate Language (IL) and metadata for that program.
The internal protected keyword pair allows access to members of the same assembly (internal) or derived classes (protected). You can think of this designation as internal or protected.
Classes as well as their members can be designated with any of these accessibility levels. If a class member has a different access designation than the class, the more restricted access applies. Thus, if you define a class, myClass, as follows:
public class myClass { // ... protected int myValue; }
the accessibility for myValue is protected even though the class itself is public. A public class is one that is visible to any other class that wishes to interact with it. Occasionally, classes are created that exist only to help other classes in an assembly, and these classes might be marked internal rather than public. | http://etutorials.org/Programming/Programming+C.Sharp/Part+I+The+C+Language/Chapter+5.+Inheritance+and+Polymorphism/5.2+Inheritance/ | CC-MAIN-2018-17 | refinedweb | 890 | 52.49 |
Suppose we have a list of points and a number k. The points are in the form (x, y) representing Cartesian coordinates. We can group any two point p1 and p2 if the Euclidean distance between them is <= k, we have to find total number of disjoint groups.
So, if the input is like points = [[2, 2],[3, 3],[4, 4],[11, 11],[12, 12]], k = 2, then the output will be 2, as it can make two groups: ([2,2],[3,3],[4,4]) and ([11,11],[12,12])
To solve this, we will follow these steps −
Define a function dfs() . This will take i
if i is in seen, then
return
insert i into seen
for each nb in adj[i], do
dfs(nb)
From the main method, do the following−
adj := a map
n := size of points
for j in range 0 to n, do
for i in range 0 to j, do
p1 := points[i]
p2 := points[j]
if Euclidean distance between p1 and p2 < k, then
insert j at the end of adj[i]
insert i at the end of adj[j]
seen := a new set
ans := 0
for i in range 0 to n, do
if i not seen, then
ans := ans + 1
dfs(i)
return ans
Let us see the following implementation to get better understanding −
from collections import defaultdict class Solution: def solve(self, points, k): adj = defaultdict(list) n = len(points) for j in range(n): for i in range(j): x1, y1 = points[i] x2, y2 = points[j] if (x1 - x2) ** 2 + (y1 - y2) ** 2 <= k ** 2: adj[i].append(j) adj[j].append(i) seen = set() def dfs(i): if i in seen: return seen.add(i) for nb in adj[i]: dfs(nb) ans = 0 for i in range(n): if i not in seen: ans += 1 dfs(i) return ans ob = Solution() points = [ [2, 2], [3, 3], [4, 4], [11, 11], [12, 12] ] k = 2 print(ob.solve(points, k))
[[2, 2],[3, 3],[4, 4],[11, 11],[12, 12]],2
2 | https://www.tutorialspoint.com/program-to-group-a-set-of-points-into-k-different-groups-in-python | CC-MAIN-2021-43 | refinedweb | 345 | 59 |
This is the mail archive of the libstdc++@gcc.gnu.org mailing list for the libstdc++ project.
Chris Jefferson wrote: > Feel free to not put these things in, they are just the things that > occur to me, of things I have been involved in :) > > std::iter_swap (and therefore most of the mutating algorithms) now > makes an unqualified call to swap when the value_type of the two > iterators is the same. While not required by the standard, iter_swap > will still work on proxy iterators as long as their their > reference_type is not "value_type&". Agreed, but maybe something more concise is more suited. Will look into that. > Also, do you want to add something like: > > No guarantes about the implementation of the code in TR1 is provided. > In particular the final location of the headers and namespace in which > they will be in has not been finalised, and it is not promised that > libstdc++-v3 will remain link compatable when code using TR1 is used. > > The link-compatable bit in particular I think about.. in particular > tuple<> won't be link compatable if next version we give it a maximum > of 20 or 30 parameters, and if anything changes that might break > things to... Yes, this is absolutely right, but (see above) we should aim for something a little shorter, I think. Thanks! Paolo. | http://gcc.gnu.org/ml/libstdc++/2005-03/msg00431.html | crawl-001 | refinedweb | 222 | 68.6 |
First of all, thank you very much for the wonderful course, @jeremy!
I studied 12 lessons currently. It really made me understand deep learning at some level, while other videos and articles did not help.
I found some notebooks that were not covered in the course-v3. For example, Object Detection notebook pascal.ipynb.
I already found the topic, where @muellerzr says that Object Detection will be a separate course.
But I thought, I can learn it by myself (with a little help of the community maybe). I really want to make a project with object detection to detect cards of game “Set”. And also I have some more ideas for deep learning projects in my mind.
So, I tried understanding (not up to the end yet) and running pascal.ipynb.
It fails at first
lr_find():
in _unpad(self, bbox_tgt, clas_tgt) 21 print("clas_tgt: ", clas_tgt) 22 print("self.pad_idx: ", self.pad_idx) ---> 23 i = torch.min(torch.nonzero(clas_tgt-self.pad_idx)) 24 return tlbr2cthw(bbox_tgt[i:]), clas_tgt[i:]-1+self.pad_idx 25 RuntimeError: invalid argument 1: cannot perform reduction function min on tensor with no elements because the operation does not have an identity at /pytorch/aten/src/THC/generic/THCTensorMathReduce.cu:64
As you can see above, I created debug output for
clas_tgt and
pad_idx.
Found that it crashes when there are only zeros in
clas_tgt.
clas_tgt: tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], device='cuda:0') self.pad_idx: 0
And that is clear, why it fails:
torch.nonzero returns empty tensor,
torch.min can’t handle it.
I started thinking, how does it happen, that there is
clas_tgt with zeros only sometimes. I’m not yet completely understanding the notebook, but I guessed that there are some images without bboxes coming here. So I decided to check the databunch and found that there are some images with no bboxes.
Here is my dirty way to check that fact
data = get_data(1,128) i = 0 for smth in data.train_dl.dl: #print(smth[1][0].shape) num_objs = smth[1][0].shape[1] i += 1 print(i, num_objs) if num_objs == 0: print(smth[0].shape) print(smth[0].squeeze(0).shape) show_image(smth[0].squeeze(0)) assert(num_objs > 0)
I spent some more time and found that all original images have bboxes, but it may happen, that there are no bboxes on visible area of some image after this line:
src = src.transform(get_transforms(), size=size, tfm_y=True)
Even if I remove transforms:
src = src.transform(size=size, tfm_y=True)
For example, this image just becomes cropped to square and loses bbox:
I decided to filter images that lose bboxes after transform. I wrote this line of code after transform:
src = src.filter_by_func(lambda x, y: len(y[0]) == 0)
And suddenly found a bug in fastai v1.
get_data failed with this error on next line (creating databunch):
AttributeError: 'ObjectCategoryList' object has no attribute 'pad_idx'
Even
print(src) after filtering caused such error.
After a few hours of debugging, I found that this line of code in data_block.py loses
y.pad_idx.
Proof
Monkey patching
LabelList.filter_by_func:
def filter_by_func(self, func:Callable): filt = array([func(x,y) for x,y in zip(self.x.items, self.y.items)]) self.x = self.x[~filt] print('before: ', 'pad_idx' in vars(self.y)) self.y = self.y[~filt] print('after: ', 'pad_idx' in vars(self.y)) return self LabelList.filter_by_func = filter_by_func
Results:
before: True after: False before: True after: False
I do not know how to fix this correctly. So I created a dirty monkey-patch fix for temporary usage:
Temporary fix
Monkey patch, only for object detection dataset.
def filter_by_func(self, func:Callable): filt = array([func(x,y) for x,y in zip(self.x.items, self.y.items)]) self.x = self.x[~filt] pad_idx = self.y.pad_idx # save pad_idx self.y = self.y[~filt] self.y.pad_idx = pad_idx # set pad_idx return self LabelList.filter_by_func = filter_by_func
This helped me to run
get_data without crashes.
But the filtering didn’t help, because it did not affect sizes of train and validation datasets. It seems, that removing of bboxes, that became invisible after transforms (bboxes, which became out of image), are done while creating databunch.
Right now I’m too tired of debugging and decided to share my experience, tell about the bug in fastai v1 and ask for some help.
Thank you! | https://forums.fast.ai/t/having-problems-running-pascal-ipynb-notebook/55155/3 | CC-MAIN-2022-27 | refinedweb | 732 | 58.89 |
Ok I need to override __getattr__ in one of my product classes. I'm sure this is killing acquisition but not sure about the persistence stuff (I think this is working). Is there a way to make this work? Here is what I'm doing:
Advertising
def __getattr__(self, name): if name == 'myattr': return self.myattr() I assume that somewhere in the Acquisition code there is a __getattr__ but I can't find it. I tried calling Implicit.__getattr__ but its not there. If some one has an example that would be great. -EAD _______________________________________________ Zope-Dev maillist - [EMAIL PROTECTED] ** No cross posts or HTML encoding! ** (Related lists - ) | https://www.mail-archive.com/zope-dev@zope.org/msg10663.html | CC-MAIN-2017-30 | refinedweb | 108 | 68.57 |
Well, part of my point is that a vector-length-agnostic API that monomorphizes+dispatches for each supported packed SIMD ISA under the hood is viable today, and also frees the programmer of the burden (on "iterated instructions") of selecting a specific SIMD extension, or dispatching manually.
Indeed, that's one of the absolute top goals of both SVE and the V extension. The problem is...
LLVM's optimizer just isn't there yet. It's just beginning to grow the kind of polyhedral optimizations that let GCC do exactly this. In the meantime, the single best thing we can do, to ensure it does vectorize code, is to force our vectorizable code into easily-recognized idioms.
That, then, is exactly what my proposal is meant to do: Force "iterated instructions" into trivial, one-pass, easily-fused, contiguous loops. Under the hood, we can use LLVM intrinisics (in order to avoid performance cliffs in debug mode), or we can rely on the optimizer (and have simpler implementations), or other options.
I don't think we need to block SIMD intrinsic stabilization on supporting arbitrary-length vectors. In particular, if we're going with the Simd<[T; N]> abstraction in the future, I could see a very realizable common abstraction in the form of Vector<[T]>.
Simd<[T; N]>
Vector<[T]>
You're right. I've just used a mental shortcut and forgot about the signedness conversions. I still think they're surprising (From impls for i32x4→u32x4 and i32→f32 but no i32→u32 nor i32x4→f32x4) but they won't cause any ambiguity.
From
i32x4→u32x4
i32→f32
i32→u32
i32x4→f32x4
Neither is the hardware. Nor even a public spec.
If I understand you correctly, your proposal is that we design and stabilize a Rust language feature in order to work around a temporary deficiency in LLVM's optimizer. A deficiency that will probably be fixed before the relevant hardware is available.
I don't think we should do that.
This discussion of variable-width vectors and compilation strategies is seriously off topic. This topic is about a low-level, architecture-specific SIMD interface. Unless you think a high-level API can (a) always generate perfect vector code on existing architectures, that a human can't improve on, and (b) provide access to every single weird Intel/ARM/etc. vector instruction that might be necessary for optimal performance, any such API is essentially orthogonal to the set of low-level near-assembly primitives that are desired here. (Actually, not orthogonal, dependent: stabilizing low-level primitives allows experimentation with variant designs of high-level APIs in third-party crates before maybe moving to std someday, which is part of the motivation in the first place.) The only exception is that the set of fixed-width vector types and basic high-level operations on them being proposed could conflict with a future variable-width approach, but this seems minimal: the fixed-width types are necessary no matter what, to properly type the architecture-specific intrinsics, and adding a handful of impls on them really doesn't hurt.
I concur with @comex. I don't think we should try to design around RISC-V and ARM stuff that isn't shipping yet in mainstream hardware. Especially since RISC-V might well fail (I love the idea of it too, but it does no good to pretend there isn't significant risk).
The road to hell is paved with attempts to be future-compatible with standards that haven't shipped yet and whose usage is not well-understood. This is how JavaScript and Java got their obnoxious two-byte wide strings, for example; this new shiny Unicode thing that was 16-bit was right around the corner and they felt they had to embrace it…
Hmm, I was arguing for From impls between every pair of integer vector types, but no From impls between float/float or float/integer. So we'd get: i8x16<->u8x16, i8x16<->u16x8, i8x16<->i16x8, u8x16<->u16x8, u8x16<->i16x8, and so on.
i8x16<->u8x16
i8x16<->u16x8
i8x16<->i16x8
u8x16<->u16x8
u8x16<->i16x8
@burntsushi I was talking about the same set of impls, but I've made a typo, sorry for that. Fixed the previous post, so it's not self-contradicting now. I wanted to write that From impls for i32x4→u32x4 and i32→f32 but no i32→u32 nor i32x4→f32x4 would be surprising, but at least not ambiguous (and I think we do agree on that).
I've just noticed that I erroneously thought that there is an impl From<i32> for f32. It weakens my argument a little, but I still think that implementing From as transmutes is surprising (note that std doesn't do it for signedness conversion).
impl From<i32> for f32
std
+1. .foo(…) should mean "semantically as-if" into array => map foo => into simd. I'm glad to see the proposed simd types not implementing Add, even in step 3. Might be worth adding map directly on the simd types, in step 3, to give an ergonomic way to do things for which there aren't vector instructions when needed. (Seeing the closure should make it plenty obvious that this isn't guaranteed to be a single SIMD instruction.)
.foo(…)
into array => map foo => into simd
Add
map
I agree that wrapping all of the llvm intrinsics into pub fn in std would be horrible. Lots of work, unstable, etc.
pub fn
But I do believe that the intel intrinsics (step 2) should be a crate with code sortof like this:
#[link(name = "llvm", kind = "intrinsics")]
extern {
#[link_name="llvm.fmuladd.v4f32"]
fn llvm_fmuladd_v4f32(a: v4f32, b: v4f32, c: v4f32) -> v4f32;
}
pub fn _mm_fmadd_ps(a: v4f32, b: v4f32, c: v4f32) -> v4f32 { llvm_fmuladd_v4f32(a, b, c) }
Then different people could make different choices about how to pick the type signatures, for example. Most of step 3 (the parts not constrained by coherence) can be experimented on outside of std. And people don't need to use nightly just for the one weird, possibly-custom LLVM intrinsic that isn't wrapped yet.
The stability of #[link(name = "foo")] obviously depends on the semver of foo, so I don't find this unreasonable to call "stable", so long as cargo can understand the dependency. And having the [link] means they intrinsics aren't linked "sneakily".
#[link(name = "foo")]
[link]
How does cargo support people using different versions of rust? Can a crate say "I use the ? operator so need rust = ">1.13"? Can a crate say "I'm still depending on a broken inference rule, so you better not use me after 1.n"?
rust = ">1.13"
Maybe linking intrinsics can be stable in the same sense as normal linking: can break if the [dependency] doesn't prevent it. So essentially require the intel_intrinsics crate to have something like
[dependency]
intel_intrinsics
[compiler-dependencies]
llvm = "~3.8"
(And I assert that updating an intel_intrinsics crate for new versions of LLVM isn't harder than updating std for them. Might even be easier, if it means that a new LLVM version can go into rustc without needing to update potentially hundreds of pub methods in the library.)
@scottmcm
I don't see any future in which we provide a way for folks to link to LLVM intrinsics directly on stable Rust.
I don't understand why you aren't on board with defining and exposing vendor APIs. My view is that the prevailing opinion of everyone here is that we should, at a minimum, export vendor intrinsic APIs in std. Can you explain your disagreement in more depth?
I suggest we table the From impl discussion for now. I continue to think all integer vectors should be convertible to one another via From. Either way, we can hash that out in the RFC.
I agree with conclusion, but I disagree with reasoning. ARM SVE is low-level, architecture-specific interface. It is not about high-level API. It is near-assembly primitives.
I propose to ignore ARM SVE because ARM SVE hardware isn't shipping, not because it is high-level interface.
Let me start with this: I believe that Vendor SIMD APIs should absolutely be defined and exposed for use in stable rust. Sorry that I was unclear about that. I think stoklund's proposal is great. I expect yours will be as well.
I guess my mental block here is "what does std mean?" When I go look it up in the docs I find,
These modules are the bedrock upon which all of Rust is forged, and they have mighty names like std::slice and std::cmp
(I'm probably also biased based on "what would go in std in C++?", but I figure the similarity in names is not unintentional.)
I don't think 4 different names for "add four f32s to four corresponding f32s"—none of which is the usual and mighty add—is bedrock. I don't think that std::simd::intel_intrinsics::_mm512_mask_add_round_ps is bedrock. (Picked on only for having a long function name; I didn't even look at what it does.)
add
std::simd::intel_intrinsics::_mm512_mask_add_round_ps
Suppose Rust takes off so much that there are 5 competing implementations that decide to write an official specification for Rust. Would I include some sort of SIMD type in the spec? Absolutely. Would I include the intel or arm or mips intrinsics in the spec? No, I wouldn't.
If I'm a vendor and want to release a new, clearly-better vendor API, what do I do? Ideally the answer would be "release a crate that everyone would be able to use immediately", not "go through the Rust commit process so people can use it when they upgrade their compiler 12+ weeks after that". (And I acknowledge the realities that make this point irrelevant to a practical solution to "I want to use Intel's MMX+SSE+SSE2 intrinsics in rust 1.15", but do believe in it as a laudable goal. Hardware vendors being deeply entrenched with LLVM is symbiotic, but they shouldn't need to be entrenched in frontends for general-purpose languages.)
I think there are some choices here where neither is legitimately better than the other. For example, the "Should _mm_add_epi16 take an __m128i (to match C and allow for easier mixing with other lane widths) or an i16x8 (to match its semantics and allow for easier mixing with the rest of rust's type system)?" debate. Both are fine options, and while I'll argue for one, I won't say it's unambiguously better. Standardizing both would be a little weird, and having a crate for one by wrapping the standardized other invites "why did they standardize the wrong one" grumbles that wouldn't happen with both being crates where you pick the one that best matches your weighting of trade-offs. And cargo means that "there's a crate for that" is a great answer (unlike the equivalent in many other languages).
_mm_add_epi16
__m128i
i16x8
Maybe all I'm wishing is that "Rust, the currently-most-popular distribution of Rust the language" comes with some special sauce that's explicitly not part of "Rust the language" when that sauce needs to be tightly integrated for practical reasons.
Postscript, feel free to skip:
As I was writing this, I stumbled on C++ committee paper n3759: SIMD Vector Types. Its details aren't relevant to this thread, but conceptually it's exactly what I'd expect from a first step for standardized SIMD:
Surprisingly similar to stoklund's proposal, if you treat step 2 as a "vendor extension".
(If you do look at the details, it has an interesting approach to vector width: they're whatever width is good for the (compile-time) target, not something you pick. It's an interesting thought, especially after the ARM SVE discussion. Makes me glad that Rust is so good at DSTs )
SIMD intrinsics should definitely be part of any spec. Since they're tied to the architecture, not the compiler, there is no reason why every competing implementation shouldn't support the same intrinsics. Exposing LLVM internals just penalizes any such implementations.
(ARM SVE itself can have a low-level interface, and should once it ships, but the interface @eternaleye was proposing is generic and high-level.)
There should absolutely be a spec for the SIMD intrinsics. But as you say, they're tied to the architecture, not the compiler, so the spec should be written by the architecture vendor, not the language designers. If you want the spec for the Intel intrinsic instructions C API, you don't ask ISO/IEC JTC1/SC22/WG14, you download (And yes, we need to define an "intel" intrinsic instructions Rust API for now because Intel won't. But if Rust in the future ends up with as many compilers and users as C currently does, I bet they would then.)
I'm not convinced that allowing a user who knows they're targeting LLVM to write #[link("llvm.intrinsics")] "penalizes" an implementation any more than allowing a user who knows they're targeting Windows to write #[link("Advapi32")]. If I'm writing a compiler that targets Javascript, I'm probably not going to support either of those #[link]s. (That said, I don't really care whether people can link llvm intrinsics—that was never a high-level goal of mine—so am totally fine with it remaining a nightly-only thing or becoming impossible.)
#[link("llvm.intrinsics")]
#[link("Advapi32")]
#[link]
A small correction - N3759 is old/superseded and may not reflect the current position of committee's concurrency study group.The most recent version of the document is P0214R2 Data-Parallel Vector Types & Operations and it's mostly specification, but it refers to most recent versions of design papers as well - N4184 SIMD Types: The Vector Type & Operations, N4185 SIMD Types: The Mask Type & Write-Masking and SIMD Types: ABI Considerations.
Some more links.Official specifications for ARM, recently updated to ARMv8.1:IHI0053D ARM® C Language Extensions 2.1 - contains high level description, including data types, and detailed description for both SIMD and non-SIMD intrinsics.IHI0073B ARM® NEON™ Intrinsics Reference - list of intrinsics corresponding to Advanced SIMD (aka NEON) instructions specifically.ARM SVE is supposed to be a part of ARMv8.3 and the specification is not public yet, the only official doc seems to be DUI0965C ARM® Compiler Version 6.6 Scalable Vector Extension User Guide.
@scottmcm Whether the intrinsics wind up in std or in a different crate, their implementation is compiler-specific. A GCC-backed rust compiler would deal with (say) shuffle intrinsics differently than the LLVM-backed reference compiler, and how that one defines an intrinsic might also change when it grows a Cretonne backend. So even if we decide not to expose vendor intrinsics from std, they would still have to live in a crate that ships with the compiler, is tied to the compiler's internals, and follows the same general stability story. That is certainly a possibility you can argue for.
However:
core
rustc
All right folks, strap yourselves in, because I think it's going to get a little bumpy!
I think that, for now, we've mostly settled on the least offensive way to expose (in std) a set of low level vendor intrinsics and a very tiny cross platform API based on defining some platform independent SIMD types. The low level vendor intrinsics are not exposed as compiler intrinsics. Instead, we define normal Rust function stubs implemented by LLVM intrinsics and export those. There are undoubtedly details on which we'll disagree, but I think we can punt on those until the RFC. (I do intend to write a pre-RFC before that point though.)
So... time to mush on to the next problem: dealing with functions whose type signatures permit SIMD types to either be passed as parameters or returned. I'd like to describe this problem from first principles so that we can get other folks participating in this thread without having to read all of it.
A key part of the SIMD proposal thus far is the existence of a new attribute called #[target_feature]. This feature maps precisely to the __attribute__(((target("..."))) annotation found in both gcc and Clang. An initial form of #[target_feature] was recently merged into Rust proper. In short, #[target_feature] is an attribute that one can apply to a function that specifies target options specifically for that function that may be different than ones specified on the command line (e.g., with rustc -C target-feature=...). This means that one can, for example, call AVX2 intrinsics like _mm256_slli_epi64 in a function labeled with #[target_feature = "+avx2"] without having to use rustc -C target-feature=+avx2.
#[target_feature]
__attribute__(((target("...")))
rustc -C target-feature=...
_mm256_slli_epi64
#[target_feature = "+avx2"]
rustc -C target-feature=+avx2
There are two really really important problems that this solves:
CPUID
cfg!
SIGILL
avx2
If we decided not to go through with #[target_feature], we'd be missing out on (1) completely, and I don't think there's any way around it. We'd also have to completely rethink how we stabilize these intrinsics, since a cfg! oriented system in std doesn't quite work as explained in (2).
Let's make this concrete with an example. This is how one might define the _mm256_slli_epi64 AVX2 intrinsic:
#[inline(always)]
#[target_feature = "+avx2"]
fn _mm256_slli_epi64(a: i64x4, imm8: i32) -> i64x4 {
// I think this extern block may be offensive.
// Presuppose some other way to access the
// correct LLVM intrinsic from *inside std*. :-)
#[allow(improper_ctypes)]
extern {
#[link_name = "llvm.x86.avx2.pslli.q"]
fn pslliq(a: i64x4, imm8: i32) -> i64x4;
}
unsafe { pslliq(a, imm8) }
}
And an example use inside of a larger program:
#!) }
}
#[inline(always)]
#[target_feature = "+avx2"]
fn testfoo(x: i64, y: i64, shift: i32) -> i64 {
let a = i64x4(x, x, y, y);
_mm256_slli_epi64(a, shift).0
}
#[target_feature = "+avx2"]));
}
Compile/run with (remember this uses #[target_feature] which I don't think has hit nightly yet):
$ rustc -O test1.rs # no -C target-feature required
$ ./test1 15 5 4
240
The generated ASM is correct. Namely, I see a vpsllq instruction emitted and everything appears to get inlined. We've made it to the promised land! Errmm, not quite...
vpsllq
There are some really interesting failure modes here. First up, removing #[target_feature = "+avx2"] on testfoo is A-OK. This makes sense, I think, because the caller is asserting that they know the platform has AVX2 support. Indeed, compiling and running the program produces the expected output. However, keeping the #[target_feature = "+avx2"] removed from testfoo and also changing inline(always) to inline(never) produces an LLVM codegen error:
testfoo
inline(always)
inline(never)
#[inline(never)]
fn testfoo(x: i64, y: i64, shift: i32) -> i64 {
let a = i64x4(x, x, y, y);
_mm256_slli_epi64(a, shift).0
}
Compiling yields:
$ rustc -O test1.rs
LLVM ERROR: Do not know how to split the result of this operator!
AFAIK, this cannot be part of stable SIMD on Rust. We must prevent all LLVM codegen errors. How? What exactly is going on here?
OK, let's move on to another interesting problem. Instead of testfoo calling a SIMD intrinsic internally, it will try to return a SIMD vector. Here is the full program. Notice the missing inline annotation on testfoo:
inline
#!) }
}
#[target_feature = "+avx2"]
fn testfoo(x: i64, y: i64, shift: i32) -> i64x4 {
let a = i64x4(x, x, y, y);
_mm256_slli_epi64(a, shift)
}));
}
Compiling is successful, but running the program leads to interesting results:
$ rustc -O --emit asm test1.rs^C
$ ./test 15 5 4
i64x4(240, 240, 4, 0)
The correct output, AIUI, should be:
$ ./test 15 5 4
i64x4(240, 240, 80, 80)
Interestingly, if we apply an inline(always) annotation to testfoo, then we get an LLVM error again:
$ rustc -O test.rs
LLVM ERROR: Do not know how to split the result of this operator!
If I force the issue and instruct rustc to enable avx2 explicitly, then things are golden:
$ rustc -C target-feature=+avx2 -O test.rs
$ ./test 15 5 4
i64x4(240, 240, 80, 80)
The above works regardless of the inline annotations used on testfoo.
I am pretty flummoxed by the above. I don't actually know what's going on at the lowest level here, although I can make a high level guess that applying #[target_feature = "..."] to a function changes something about its ABI, and that this can interact weirdly with SIMD vectors. Since I don't quite understand the problem, it's even harder for me to understand the solution space.
#[target_feature = "..."]
@alexcrichton and I talked about the above briefly yesterday, and we brainstormed a few things. The solutions we were tossing out were things along the lines of "always passing SIMD vectors by-ref to LLVM" or "banning SIMD vectors from appearing in the type signature of a safe function" or something similar. These all have additional complications, such as what to do in the presence of generics or when defining type signatures for other ABIs like extern "C" ....
extern "C" ...
Overall, this seems pretty hairy. I briefly tried to play with something analogous in C and compared the behaviors of gcc and Clang. It seems like they suffer from similarish problems, although gcc appears to warn you. For example, consider this C program:
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>
__attribute__((target("avx2")))
__m256i test_avx2() {
__m256i a = _mm256_set_epi64x(1, 2, 3, 4);
__m256i b = _mm256_set_epi64x(5, 6, 7, 8);
return _mm256_add_epi64(a, b);
}
int main() {
__m256i result = test_avx2();
int64_t *x = (int64_t *) &result;
printf("%ld %ld %ld %ld\n", x[0], x[1], x[2], x[3]);
return 0;
}
Compiled/run with Clang:
$ clang test.c -O
$ ./a.out
12 10 3399988123389603631 3399988123389603631
And now with gcc:
$ gcc test.c -O
test.c: In function ‘main’:
test.c:13:13: warning: AVX vector return without AVX enabled changes the ABI [-Wpsabi]
__m256i result = test_avx2();
^~~~~~
$ ./a.out
140724298882158 140120498842949 1 4195757
$ ./a.out
1 4195757 0 0
$ ./a.out
140736787806926 139757628374341 1 4195757
Now, of course, this isn't to say that I expected this C program to work. Rather, I'm showing this as a comparison point to demonstrate different failure modes of other compilers. In Rust, we have to answer questions like whether an analogous Rust program should be rejected by the compiler and/or whether the behavior in the above C program exhibits memory unsafety.
Can anyone help me untangle this?
cc @alexcrichton, @nikomatsakis, @withoutboats, @aatch, @aturon, @eddyb (please CC other relevant folks that I've missed :-))
Wasn't the solution for the ABI problem to turn the necessary features on (i.e. resulting in usage of ymm registers) and let the code SIGILL on hypergeneric misuse instead of causing UB? | https://internals.rust-lang.org/t/getting-explicit-simd-on-stable-rust/4380?page=13 | CC-MAIN-2017-34 | refinedweb | 3,801 | 62.27 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.