in reply to Re^2: Powerset short-circuit optimization in thread Powerset short-circuit optimization
This is a friggin' awesome problem. Lots of fun working on it. :-) But I think a solution is at hand.
My previous solution assumed that there was some sort of external criterion that would determine whether to skip generation of the powerset. Extensive private conversation with Limbic~Region plus his re-statement of the problem made it clear that that's not the case. Existence is the condition AND we need to worry about multiple sets and god knows what else. This will take time to explain.
I will run through some theory and explanation, then post the modified code, then further explain that.
The issue that other people were having was placement of the characters in the set. So if you start off with set ABC, you'll know to ignore AB. However, you need to keep track of "AB" somewhere in memory and then do a string comparison or the like. Expensive.
My original approach simply used set enumeration to flag sets as being valid or dupes or whatnot. But that won't work here. With the example set ABC, you'd flag BC (idx 6 by how my function counts). BCD comes along, but its idx 6 is CD. So you'd improperly skip over CD. Further, its BC is idx 3, which you'd duplicate. Not good.
We're going to fix this by using placeholders in the set by means of dead bits. This is far cheaper than you would initially think.
For set A B C, you can enumerate the powerset in binary:
111 ABC
110 AB
101 A C
100 A
011 BC
010 B
001 C
000
For BCD, it would be thus:
111 BCD
110 BC
101 B C
100 B
011 CD
010 C
001 D
000
Instead, we want to offset the values in BCD and leave an empty slot for A. We'll end up with this:
0111 BCD
0110 BC
0101 B C
0100 B
0011 CD
0010 C
0001 D
0000
BC is 011 in our original set, and hey, in our newly modified set with the dead bit at position 0, BC is once again 011. So now we can safely exclude this set.
Note - this does not require the elements to be in sorted order, or even for you to know them in advance. We'll use an LZW style approach to keep assigning slots as we see them.
(for ABC)
Have we seen A before? No? A = slot 0
Have we seen B before? No? B = slot 1
Have we seen C before? No? C = slot 2
(for BCD)
Have we seen B before? YES - return 1
Have we seen C before? YES - return 2
Have we seen D before? No? D = slot 3
The order is irrelevant, but it does require that we build up an additional hash table mapping elements -> slots. Memory intensive, but relatively cheap - we only need to look into this index at the start of processing on a set.
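That slot-assignment pass is cheap to sketch. Here is the idea in Python rather than Perl (names mine):

```python
slots = {}

def slot(elem):
    # assign the next free slot the first time we see an element,
    # otherwise return the slot it already owns
    return slots.setdefault(elem, len(slots))

print([slot(e) for e in "ABC"])  # [0, 1, 2]
print([slot(e) for e in "BCD"])  # [1, 2, 3]
```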
But wait! You say, I glossed over a whole set of values up at the top! Let's return to them:
By using a dead bit for A, we actually have this:
1111 invalid
1110 invalid
1101 invalid
1100 invalid
1011 invalid
1010 invalid
1001 invalid
1000 invalid
0111 BCD
0110 BC
0101 B C
0100 B
0011 CD
0010 C
0001 D
0000
That's hugely inefficient. That's 8 checks we need to do just to get to a valid set. It spirals horribly out of control as you add more dead bits. Say, for example, you were using all letters of the alphabet. Your last set is just (z). By that point, z is in slot 25. So you'd have:
11111111111111111111111111 invalid
11111111111111111111111110 invalid
11111111111111111111111101 invalid
.
.
.
00000000000000000000000010 invalid
00000000000000000000000001 valid (z)
That's 67,108,864 iterations through invalid sets before you find your one valid one. Fortunately, an xor lets us solve this in constant time.
Build up a new value that is the |'ed (bitwise or) combination of all of your dead bits. In this case, we'd have 11111111111111111111111110. Now, as part of your index-decrementing loop, you & (bitwise and) your set candidate with the deadbit string. If you get back 0, then you're fine (there are no deadbits in your set index), so you continue with it. If you get back something other than zero, then you ^ (bitwise xor) your index with the deadbit string and then & it with the original (to ensure you don't accidentally turn on any bits that were previously off). This will remove all deadbits from your string and jump you to the next valid index past the deadbits.
Examples are in order:
From up above, set (z)
start at 11111111111111111111111111
deadbit string = 11111111111111111111111110
start & deadbit = 11111111111111111111111110
this is > 0, so xor.
(start ^ deadbit) & start = 00000000000000000000000001
Bang! one and, a conditional, and an xor jumped you past 67 million some-odd invalid sets.
Another one, from above, set (ABCD)
start at 1111
deadbit string = 1110
start & deadbit = 1110
this is > 0, so xor
(start ^ deadbit) & start = 0111 (starts at BCD)
Here's a more complex example. Assume our first set was ABC, and our next was ABD. (A = 0, B = 1, C = 2, D = 3). When we hit ABD, we flag C as our deadbit and our deadbit string would be 0010.
Let's try a complete runthrough:
1111
1111 & 0010 > 0 -> (1111 ^ 0010) & 1111 -> new index is 1101
1101 ABD (1101 & 0010 == 0)
1100 AB (1100 & 0010 == 0)
1011
1011 & 0010 > 0 -> (1011 ^ 0010) & 1011 -> new index is 1001
1001 A D (1001 & 0010 == 0)
1000 A (1000 & 0010 == 0)
0111
0111 & 0010 > 0 -> (0111 ^ 0010) & 0111 -> new index is 0101
0101 B D (0101 & 0010 == 0)
0100 B (0100 & 0010 == 0)
0011
0011 & 0010 > 0 -> (0011 ^ 0010) & 0011 -> new index is 0001
0001 D (0001 & 0010 == 0)
0000 () (0000 & 0010 == 0)
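The complete runthrough above reduces to a tiny loop. Here is a sketch in Python rather than the thread's Perl (function name mine) that reproduces the ABD walkthrough, with C as the dead bit (deadbits = 0010):

```python
def powerset_indices(n_bits, deadbits):
    # count down from the full set; whenever the candidate index contains
    # a dead bit, xor the dead bits away and mask with the original index
    # so we jump straight to the next valid index
    idx = 1 << n_bits
    while idx > 0:
        idx -= 1
        if idx & deadbits:
            idx = (idx ^ deadbits) & idx
        yield idx

# ABD with C dead: matches the runthrough above
print([format(i, "04b") for i in powerset_indices(4, 0b0010)])
# ['1101', '1100', '1001', '1000', '0101', '0100', '0001', '0000']
```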
The code must be modified to incorporate the deadbits, as well as given the ability to return and re-use our previously skipped indexes. Modifications are as follows. There's a fair bit of additional setup and logic required vs. the previous version, but it can spit out all the numbers as desired from node 580625 with only 39 calls to the iterator and by storing only 32 integers in memory. Most of the additional work and overhead are bitwise operations.
use strict;
use warnings; # returns 2 closures to generate certain powersets
# arbitrary benchmark device, used to see how many times the iterator was called.
my $calls = 0;
sub limbic_power_generator {
my $set = shift;
# we re-define skippers as an array and allow the user to pass it in, so we
# can keep track of previously skipped values. The value of the hash was
# that it prevents dupes. Note - it would be better to mod the old version
# to always use a descendingly sorted array, but that is left to the reader.
my $skippers = shift || {};
my $deadbits = shift || 0;
#we start with the original set and count down to the null set
my $set_idx = 2 ** @$set;
#these are the set indexes we should skip
my %skippers = ();
# our first closure generates subsets
my $generator = sub {
# arbitrary benchmark device, that way you can see how many times the iterator
# was called
$calls++;
$set_idx--;
# check to see if this set contains a deadbit, and if so hop over it.
if ($set_idx & $deadbits) {
# make sure that we don't accidentally jump up to a higher set index.
# this can happen if you have deadbits beyond the length of your set.
$set_idx = ($set_idx ^ $deadbits) & $set_idx;
}
# hop over any indexes we were told to skip on a previous pass
$set_idx-- while $set_idx >= 0 && $skippers->{$set_idx};
# bow out if we're out of sets
return if $set_idx < 0;
my @in_set = split //, unpack("b*", pack("V", $set_idx));
# now we return a list. The first element is an arrayref which is the actual
# subset we generated, the second is our set_idx.
return ([map { $set->[$_] } grep { $in_set[$_] } (0..$#$set)], $set_idx);
};
# our second closure allows you to add sets to skip
# it also returns the list of skipped values
my $skipper = sub {
if (@_) {
my $skip_key = shift;
$skippers->{$skip_key}++;
}
return $skippers;
};
# return both of our closures.
return ($generator, $skipper)
}
# we'll use the example sets from node 580625
my $limbic_sets = [
[qw(A B C)],
[qw(A B D)],
[qw(A B)],
[qw(B C)],
[qw(E)],
[qw(A B C E)],
[qw(A B C D E)],
];
# our index lookup hash. There are potential savings by pre-caching these values
# if all elements are known in advance.
my %idx = ();
my $next_open_idx = 0;
# our sets to skip
my %skippers = ();
foreach my $limbic_set (@$limbic_sets) {
print "checks set @$limbic_set\n";
# we need to keep track of which indexes are dead, so we copy the
# known indexes
my %dead_idx = %idx;
# here we'll keep track of the bits that are dead
my $deadbits = 0;
# we now need to iterate over our set. If we know the index of that element
# then great. That means we've seen it before, and it's currently live, so
# delete it from our list of dead bits.
#
# otherwise, assign it a new index.
foreach my $elem (@$limbic_set) {
if (defined $idx{$elem}) {
delete $dead_idx{$elem};
}
else {
$idx{$elem} = $next_open_idx++;
}
}
#here we're going to store the indexes which are dead
my %dead_lookup = ();
# iterate over our dead elements list, and toss it into the deadbits string
# and add its index to the lookup
foreach my $idx (values %dead_idx) {
$deadbits |= 2 ** $idx;
$dead_lookup{$idx}++;
}
# we need to pad out our set with dead bits. So if we call with (ABC), then later
# with (ABD), we need to turn that into (AB D)
my $padded_limbic_set = [];
my $padded_limbic_idx = 0;
foreach my $idx (0..$#$limbic_set) {
# if that index is dead, then toss in a placeholder and shift the array
# element forward. This is using parallel indexes, there may be a more
# efficient method.
if ($dead_lookup{$padded_limbic_idx}) {
$padded_limbic_set->[$padded_limbic_idx++] = undef;
redo;
}
$padded_limbic_set->[$padded_limbic_idx++] = $limbic_set->[$idx];
}
# get our iterators, using the padded set, skippers, and deadbits.
my ($limbic_iterator, $limbic_skipper) = limbic_power_generator($padded_limbic_set, \%skippers, $deadbits);
# as we see an element, we're going to add it to this list, so we skip it on the next pass.
my %future_skippers = ();
#and start cruising over our powersets.
while ( my ($set, $idx) = $limbic_iterator->() ) {
# fancy crap to get it to print out properly.
my $display = {map {$_ => 1} grep {defined} @$set};
my $format = "%2s" x scalar(@$padded_limbic_set) . " (%d)\n";
printf($format, (map {defined $_ && $display->{$_} ? $_ : ' '} @$padded_limbic_set), $idx);
# we don't skip anything in this pass, but we'll do it the next time around.
$future_skippers{$idx}++;
}
@skippers{keys %future_skippers} = values %future_skippers;
}
print "TOTAL CALLS $calls\n";
Update: Fixed a minor logic glitch that could cause dupes with specially crafted sets. This code should now be complete.
Update 2: Changed to pack as a little endian long (V) instead of just a long (L) to make it cross-platform compatible.
Update 3: Okay, I'm man enough to admit that all of my slick binary logic actually introduced a horrible inefficiency (looping to see if a set should be skipped). Fixed with a hash lookup instead. Duh.
Update 4: I think I've squeezed as much raw power out of this as I'm going to get. The newest version here requires the set elements to be known in advance and it doesn't print out the null set. Otherwise? It does work. Comparing against Limbic~Region's code below, mine is indeed still slower. Dang.
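The hash-lookup fix mentioned in Update 3 amounts to replacing a scan with a constant-time membership test. A Python sketch of the idea (names and the sample skipper values are mine):

```python
skippers = {13, 12}  # set indexes already emitted on an earlier pass

def next_index(idx, deadbits):
    # decrement, hop dead bits, and skip already-seen indexes,
    # each check being an O(1) hash lookup rather than a loop
    while idx > 0:
        idx -= 1
        if idx & deadbits:
            idx = (idx ^ deadbits) & idx
        if idx not in skippers:   # hash lookup, not a scan
            return idx
    return None

print(next_index(16, 0b0010))  # 9, because 13 and 12 were already emitted
```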
Current state of the art. Expects the data file to be passed on the command line:
#!/usr/bin/perl
use strict;
use warnings;
sub set_generator {
my ($set, $set_idx, $skippers, $deadbits) = @_;
# Start decrementing the set_idx. We always do this at least once, so
# we get to the next set. Our original number is 2 ** @set, so we start
# at 2 ** @set - 1 after the decrement
while ($set_idx--) {
# check to see if this set contains a deadbit, and if so hop over it.
# make sure that we don't accidentally jump up to a higher set index.
# this can happen if you have deadbits beyond the length of your set.
$set_idx = ($set_idx ^ $deadbits) & $set_idx if $set_idx & $deadbits;
# stop on the first index we weren't told to skip
last unless $skippers->{$set_idx};
}
# bow out if we're out of sets
return unless $set_idx > 0;
my @in_set = split //, unpack("b*", pack("L", $set_idx));
# now we return a list. The first element is an arrayref which is the actual
# subset we generated, the second is our set_idx.
return ([map { $set->[$_] } grep { $in_set[$_] } (0..$#$set)], $set_idx);
}
# our index lookup hash. Assume letters 'a'-'z'. The original version of
# dynamically assigning indexes had a few bugs. Whoops. So now you're required
# to know all elements in advance. Damn. I'll fix this later, I guess.
my @letters = ('a'..'z');
my %idx = map {$letters[$_] => $_} (0..25);
# our sets to skip
my %skippers = ();
while (my $limbic_string = <>) {
chomp $limbic_string;
my $limbic_set = [split //, $limbic_string];
# we need to keep track of which indexes are dead, so we copy the
# known indexes
my %dead_idx = %idx;
# here we'll keep track of the bits that are dead
my $deadbits = 0;
#remove the live bits from the dead set.
delete @dead_idx{@$limbic_set};
# iterate over our dead elements list, and toss it into the deadbits string
# and splice in an undef to our list.
foreach my $idx (sort {$a <=> $b} values %dead_idx) {
$deadbits |= 2 ** $idx;
splice @$limbic_set, $idx, 0, undef unless $idx >= @$limbic_set;
}
#as we see an element, we're going to add it to this list, so we skip it on the next pass.
my @future_skippers = ();
my $set_idx = 2 ** @$limbic_set;
#and start cruising over our powersets.
while (my ($set, $newidx) = set_generator($limbic_set, $set_idx, \%skippers, $deadbits)) {
print @$set,"\n";
#note our new set index
$set_idx = $newidx;
#we don't skip anything in this pass, but we'll do it the next time around.
push @future_skippers, $set_idx;
}
@skippers{@future_skippers} = (1) x @future_skippers;
}
The code to generate the 3,477-line data file and the recursive Java version can be found at How many words does it take?. The two recursive Perl versions are below:
I made minor modifications to your code to handle my dataset as well as produce comparable output:
Update: I had solicited feedback on my poor C in the CB and was told that, despite it being a translation of the Perl above, documentation would go a long way in getting the desired comments.
I’ve got just a few loose ends to tie up about our new security off behavior, and then we’ll move on to other topics.
System.Security.SecurityManager.SecurityEnabled
As part of the work to move to the new security off model, we've removed the ability to turn off security programmatically through the SecurityEnabled property of the SecurityManager object. The getter will still tell you the current state of the security system; however, the setter was turned into a no-op.
Because the property setter is now a no-op, the symptom a broken application will see is a SecurityException thrown in a location that was likely not built with handling that exception in mind. As a temporary fix, while running the application, you can run CasPol in the background to create the environment that the application expects to run in. Of course, as a long-term solution, the application should be modified to work with the security system on. We could have chosen to throw a NotImplementedException whenever the setter was accessed, but breaking in that way prevents the CasPol workaround, since the application would still throw an unhandled exception.
Security On For Win9x
Last time I showed how you could use Windows NT security to prevent security from ever being turned off. Of course using Windows NT’s security model requires, well, Windows NT. To achieve the same effect on Windows 98 or Windows ME we’ll have to take a different route.
The goal is still the same: prevent CasPol from acquiring the security off mutex while keeping that object in a state that the CLR won't recognize as valid for turning off security. Since we can't apply an ACL to the mutex, we'll need to come up with another way to prevent it from being acquired in the first place. One way falls out from the fact that in Windows, mutexes, events, semaphores, waitable timers, job objects, and file mapping objects all share the same namespace. This means that no two of those types of objects can have the same name. So, if we pick one of those available on Win9x ... say an event, we can prevent CasPol from creating a mutex with the same name. And since we won't be holding a mutex with the security off name, the CLR won't disable security.
#include <windows.h>
#include <tchar.h>
#include <conio.h>
#include <iostream>

int main()
{
    // create a security off event
    HANDLE hEvent = CreateEvent(NULL, FALSE, FALSE, _T("CLR_CASOFF_MUTEX"));
    if(hEvent == NULL)
    {
        std::cout << "Could not acquire event, security may already be off." << std::endl;
        return 0;
    }

    // wait to unlock
    std::cout << "Security is locked on. Press any key to unlock." << std::endl;
    _getch();

    CloseHandle(hEvent);
    return 0;
}
On the topic of the no-op SecurityManager.SecurityEnabled setter…
Probably not a big surprise, but I still don’t like the silent failure approach. Perhaps the following would be an acceptable compromise: throw an exception from the setter iff the target state is not the same as the current state. For example, when attempting to disable CAS via the setter, it would effectively be a no-op if CAS is already disabled, but it would throw if the mutex doesn’t already exist. This would allow early detection of the operation failure as well as allowing folks to avoid code changes by ensuring that the mutex is created before their applications attempt to disable CAS via SecurityManager.SecurityEnabled.
Not a surprise at all
Your solution is another alternative we can take a look at. Its main problem is that the exception can be thrown or not depending on system state at the time the property was set. And there’s an inherent race condition between when we check the state and decide to throw / not throw the exception. Debuggability becomes a problem, since a repro might be difficult to get.
The state we’re in now is probably not the state we’ll ship in; the problem is finding a good solution to a difficult to solve properly problem.
-Shawn
Good to know that the implementation isn’t yet frozen. While you guys are mulling things over, here are a couple of the scenarios that got me to point of preferring the throw over the silent failure despite the potential race condition…
The most common "acceptable" CAS-toggling scenario that is likely to be affected by this change is disabling of CAS by "high performance" applications. Since this should only be done on an appropriately isolated system, it is not unreasonable to assume that all code in most applications of this sort would be locally installed and run with full trust under default policy. Therefore, in most such situations, the silent failure to disable CAS would not cause security exceptions to start appearing. Instead, the main symptoms would be related to performance (unless the new "full trust scenario" performance optimizations provide equivalent performance gains to disabling CAS ;)), and these would likely manifest very differently between applications and systems. Attempting to track down the source of such problems would likely be incredibly time-consuming for all parties, including end-clients who have no idea that CAS was ever disabled in the first place. This scenario might also cause some bad press for v.2 performance if folks start complaining about such problems before discovering the changed behaviour (assuming they ever do).
Then there’s the other side of things: applications that disable CAS inappropriately. This scenario (which may be quite a bit more frequent than any of us might like to believe) has another set of interesting problems. The application that disables CAS is likely to be fully trusted since the required SecurityPermissionControlPolicy permission in only granted to fully trusted code under default policy. This means that the failure to disable CAS likely won’t affect the application that attempted to disable CAS. Instead, it will cause security and policy exceptions to start showing up in other applications that ran without such "problems" under v.1. Given that the original failure in the CAS-toggling application was silent, imagine the pain of trying to identify the problem from exceptions in other applications…
(BTW, there might also be potential race condition problems in v.1 if enabling CAS via the setter had actually worked as expected. 😉 As things stand, it looks like there might still be some potential race condition problems in the persistence code.)
Well, the thinking is that our perf work has paid off, and that performance is not a valid reason to disable security any more. I'll forward your post to the rest of the security team though, and we'll definitely use it as a data point when coming up with our final decision. Thanks again for all the feedback.
-Shawn
The following code example shows how to read and write data using memory as a backing store.
using System;
using System.IO;

class BinaryRW
{
    static void Main()
    {
        int i = 0;
        char[] invalidPathChars = Path.InvalidPathChars;
        MemoryStream memStream = new MemoryStream();
        BinaryWriter binWriter = new BinaryWriter(memStream);

        // Write to memory.
        binWriter.Write("Invalid file path characters are: ");
        for(i = 0; i < invalidPathChars.Length; i++)
        {
            binWriter.Write(invalidPathChars[i]);
        }

        // Create the reader using the same stream, and rewind it.
        BinaryReader binReader = new BinaryReader(memStream);
        memStream.Position = 0;

        // Read the string, then the characters, back from memory.
        Console.Write(binReader.ReadString());
        char[] memoryData = new char[memStream.Length - memStream.Position];
        for(i = 0; i < memoryData.Length; i++)
        {
            memoryData[i] = binReader.ReadChar();
        }
        Console.WriteLine(memoryData);
    }
}
Available since 4.5
.NET Framework: Available since 1.1
Portable Class Library: Supported in portable .NET platforms
Silverlight: Available since 2.0
Windows Phone Silverlight: Available since 7.0
Windows Phone: Available since 8.1
- 3 Feb 2009 7:44 AM
Thank you for the quick reply Sven.
- 3 Feb 2009 7:41 AM
Does anyone know if this is getting looked at or is it working as intended?
- 29 Jan 2009 7:23 AM
Removing all attributes by rewriting a ton of server code "is" a work around, but an expensive one. I am hoping that I don't have to go that way. I don't feel placing more parsing and slowing down...
- 28 Jan 2009 2:44 PM
The property editor is probably what is preventing the date field from accepting an empty value. I got around my problem by adding a "clear date" icon button.
- 28 Jan 2009 8:03 AM
/**
* @return a date field with no label
*/
public DateField createDateField() {
final DateField field = new DateField();
field.setHideLabel(true);
field.addStyleName("dateField");...
- 28 Jan 2009 7:45 AM
Does anyone know if it's possible to allow a blank entry in a date field? I have tried setting the setAllowBlank() to true, built a property editor, and a validator. Nothing seems to allow a blank...
- 28 Jan 2009 6:46 AM
type.addField("@id");
That is how you access an XML attribute. It works as it should while in hosted mode. The window dialog box displays the correct result for parsing the id attribute...
- 27 Jan 2009 3:04 PM
Works perfectly in hosted and fails as expected in a browser.
package com.mycompany.project.client;
import com.extjs.gxt.ui.client.data.BaseListLoader;
import...
- 27 Jan 2009 9:28 AM
Here is a sample of the xml I am using. Note that some of the names have been changed to protect the innocent. Or at the very least my job. . .
<?xml version="1.0"...
- 27 Jan 2009 7:52 AM
I've narrowed the bug down to parsing xml attributes. I'm not sure if this applies to parsing xml that is using a remote proxy as well as the memory proxy that I am using. Removing attributes from...
- 26 Jan 2009 1:30 PM
I got the following error in FireFox:
Failed to load the store.
com.google.gwt.core.client.JavaScriptException: (ReferenceError): attrValue is not defined
This seems to be a little...
- 26 Jan 2009 8:13 AM
Where does the error/stack trace come from?
The exception is thrown in client side code. It's a method that is called from a class that implements AsyncCallback. The particular method that loads...
- 23 Jan 2009 7:07 AM
Thanks for the quick reply.
The code is on the client. The exception was sent to the server through an RPC so I could get an idea of where it was having problems. The variable _xmlReader is an...
- 22 Jan 2009 4:56 PM
I am currently using the following code to load a store from an xml string. This method works perfectly in the hosted browser but fails to work when the site is hosted on a Tomcat server and accessed...
I can almost hear you say "Oh no, not another one". Thanks to the TreeConfiguration, configuration management implementations have finally joined Outlook Bar controls, RSS readers and few others on the 'Top Ten Most Reinvented Code of All Times' chart.
Hey, wait! Don't give up reading yet!
I really didn't want to reinvent the wheel, and I hope I didn't. It's just that I already had the code long before I even started thinking about writing an article. I did know about at least five other implementations presented here on CodeProject, yet I still decided to go about publishing my own. Here are the two main motives for that:
Among other things, TreeConfiguration code demonstrates how to:
Hopefully that should be enough reasons to justify not only the time spent writing an article on a worn out topic, but also the time spent reading it.
The implementation you are about to see places all bets on the simplicity of use and the ability to maintain data in a flexible structure. Because different people interpret terms 'simplicity' and 'flexibility' differently, it's better to let the code speak for itself.
Here is the essence of it:
// Set values, create tree structure on the fly if necessary
cfg["/database/login/username"] = "letmein"; // primitive type
cfg["/application/windows/main/rectangle"] =
new Rectangle(100, 200, 300, 400); // not-quite-primitive type
// Get values back
string username = (string) cfg["/database/login/username"];
Rectangle rect = (Rectangle) cfg["/application/windows/main/rectangle"];
It can't get much simpler than that, but you are probably old enough to know that there is no such thing as a free lunch—the simplicity comes with a small price tag on it.
The next section discusses some of the issues that have been taken into consideration in the design phase, hopefully explaining in more detail the reasoning that led to one such implementation. If you only want to take a look at the code, you can skip the next section and go directly to the section named Using the code.
Maintaining configuration data in a tree-like hierarchy has the advantage over the traditional Windows .INI file approach because it better models the actual structure of the data. TreeConfiguration retains the .INI file concept of key/value pairs, because that's what individual settings are. But instead of being maintained in some kind of a list, key/value pairs become leaves in a hierarchy of nodes where each node has a name, zero or more key/value pairs, and zero or more subnodes.
The following picture visualizes that structure:
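A text sketch of one such hierarchy (the node and key names are illustrative, taken from the examples above):

```
(root)
+-- database
|   +-- login
|       +-- username = "letmein"
+-- application
    +-- windows
        +-- main
            +-- rectangle = {100, 200, 300, 400}
```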
By itself, the above hierarchical structure doesn't bring any particular improvement over any other meaningful way of organizing data. It's just that, when complemented with simple and consistent syntax, it makes the code more readable and easier to use.
Each individual configuration setting (the term 'key' is equally used in this article) in the hierarchy is referenced using an absolute path to it. Path is a case-insensitive string that has the following form:
/<node>/<node>/.../<node>/<key>
The TreeConfiguration class implements an indexer that sets and gets a concrete setting identified by its path:
string value = (string) cfg["/node1/node2/node3/key"];
cfg["/node1/node2/node3/key"] = value;
The configuration[<path>] syntax is a shorthand for reading or writing the value of a single setting without having to access its node directly (in the previous example, the node "node3"). Accessing single settings this way is convenient; however, there will be times when individual nodes will have to be accessed directly—for example, to examine a node's subnodes.
Unfortunately, subnodes cannot be accessed using the same syntax,
// Access the node 'node2' directly
node2 = cfg["/node1/node2"]; // this won't work
because the indexer will look for the key "node2" under the node "node1", not for the node "/node1/node2". Obviously, the path cannot refer to both nodes and keys at the same time, so it always gets interpreted as a path to a key.
Yet, the goal is to have a consistent syntax for accessing any part of the configuration.
This is where parameterized named properties (a concept borrowed from Visual Basic .NET) come to the rescue. This feature isn't supported directly in the current C# version, but it can be simulated effectively. The TreeConfiguration class exposes a member called Nodes that itself implements an indexer. Unlike TreeConfiguration's indexer, Nodes' indexer interprets the path passed to it as a reference to a node. Being a nested type, Nodes has the access to TreeConfiguration's internal node hierarchy so its indexer can find the node identified by the path and return a collection of its subnodes:
ICollection subNodes =
cfg.Nodes["/node1/node2/node3"];
// access subnodes of the node 'node3'
The above technique gives a rather convenient access to subnodes of any node, but why stop there? A node's keys and values can be accessed in exactly the same way:
ICollection keys = cfg.Keys["/node1/node2/node3"];
// access all keys of the node 'node3'
ICollection values = cfg.Values["/node1/node2/node3"];
// access all values of the node 'node3'
The whole idea of using the path to select concrete nodes slightly resembles XPath expressions. However, in order to keep things simple, the path syntax is intentionally kept in its simplest form (no wildcards etc.). When used in this context, the path serves as a node selector and three different TreeConfiguration members conveniently expose three different parts of a node's content through the ICollection interface.
Concrete collection implementation (currently HybridDictionary) is intentionally hidden from the public interface in order to allow eventual replacement by another collection that performs better without breaking code that uses TreeConfiguration.
TreeConfiguration class itself does not impose any requirements regarding the underlying data storage. In fact, it doesn't even offer any mechanism for storing the data. Derived classes must implement virtual methods Load and Save and provide their own means of data retrieval and storing.
The source code package for this article comes with one concrete TreeConfiguration implementation, the XmlConfiguration class that uses the XML file for its underlying physical storage. XML was a natural choice because it conveniently matches the given hierarchical structure.
Here is how nodes, keys and values get translated in the XML file maintained by XmlConfiguration:
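In outline (the element and attribute names here are an assumption; the article's actual schema may differ):

```xml
<configuration>
  <node name="database">
    <node name="login">
      <key name="username" type="System.String">letmein</key>
    </node>
  </node>
  <node name="application">
    <node name="windows">
      <node name="main">
        <key name="rectangle" type="System.Drawing.Rectangle">100, 200, 300, 400</key>
      </node>
    </node>
  </node>
</configuration>
```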
Even though it wasn't strictly necessary, XmlConfiguration class internally uses the XML schema to validate the content of the underlying XML configuration file before doing anything with it. To avoid any problems locating it when it's needed, the schema is stored as an embedded resource in the same assembly that contains the TreeConfiguration implementation (as opposed to being loaded as an external file).
Individual configuration settings are concrete instances of various types. To persist them in an XML file, all those different objects must be serialized to strings in such a way that they can be identically reconstructed later. In addition, this problem must be handled in a generic way as object types are not known in advance.
XmlConfiguration uses a generic type conversion mechanism borrowed from the Visual Studio Property Browser. The Property Browser offers design-time editing of type properties in their textual form. Without knowing in advance what the edited type will be, the Property Browser somehow finds the way to convert its properties to strings, lets the user edit them, and converts them back to their native type. (Not all properties can be edited as strings, but more on that soon.)
Behind the scenes, properties pass through TypeConverter, the central point of the .NET Framework type conversion mechanism. The FCL provides several TypeConverter-derived classes that handle conversion of the most common .NET types. Deciding which type converter to use for a particular type is the task of another FCL class, TypeDescriptor, which picks the converter by looking for a TypeConverterAttribute on the type.
By delegating the actual conversion work to TypeConverter and TypeDescriptor, XmlConfiguration benefits from framework library code that already provides conversion support for a fair number of common FCL types. The mechanism has its limits, though. For example, collections cannot be serialized simply by letting TypeDescriptor choose the appropriate converter. It could be made possible with some additional coding, but the task can just as easily be accomplished with a few lines of code that walk through the collection and serialize the individual elements.
What about custom types?
As you would expect, the TypeConverter concept is extensible. In order to maintain your own type in the XmlConfiguration, you need to write a type converter for it. A proper type converter is what qualifies the type for usage with XmlConfiguration. Converters are easy to build: check the CustomTypeConverter.cs file for MyClass, an example of a custom type, and MyClassConverter, the corresponding type converter. (MyClass is used in the accompanying unit tests; check the method TestCustomTypeConverter in the file Test.cs for details.)
TreeConfiguration represents the base class that provides only in-memory data management. Derived classes, such as the XmlConfiguration class (also part of the package), implement virtual methods Load and Save providing concrete data storage implementation.
Here is what a typical usage scenario looks like:
// Create a new configuration instance,
// specifying physical storage location (an XML file)
XmlConfiguration cfg =
    new XmlConfiguration(Environment.GetFolderPath(
        Environment.SpecialFolder.LocalApplicationData) +
        Path.DirectorySeparatorChar + @"MyApplication\MyApplication.cfg",
        "My Configuration Title");

// Load any existing data from the underlying XML file
cfg.Load();

// Read configuration settings
string username = (string) cfg["/Database/Login/Username"];
// root key syntax; leading path separator is optional
int count = (int) cfg["UsageCount"];
// same as "/Confirmations/WarnOnExit"
bool warnOnExit = (bool) cfg["Confirmations/WarnOnExit"];
// alternative syntax (configurable)
Rectangle rect = (Rectangle) cfg["Windows.Main.Rectangle"];

// Store current settings back at some later moment
cfg["/database/login/username"] = "letmein"; // path is case-insensitive
cfg["/UsageCount"] = 123;
cfg["Confirmations/WarnOnExit"] = true;
cfg["/Windows/Main/Rectangle"] = new Rectangle(100, 200, 300, 400);

// Save configuration data back to the underlying XML file
cfg.Save();
A few notes:
The only requirement that XmlConfiguration imposes on values is that they must be convertible to string and back. This requirement actually comes from the fact that the underlying storage is an XML file; the TreeConfiguration itself does not impose any such requirements.
The list of types that can be managed using the XmlConfiguration includes all .NET primitive types (string, int, DateTime etc.) and FCL types that support generic type conversion by means of the TypeConverter class. Types like Point, Size and Rectangle (all from the System.Drawing namespace) are just some of the examples of FCL types that can be managed using the XmlConfiguration right out of the box. (You can find a detailed discussion about type conversion issues here.)
The general rule is: whatever type can be edited as a single property in the Visual Studio Property Browser should also be suitable for use with the XmlConfiguration. For any other type there is no magic: you have to help make the conversion happen by writing a type converter.
The generic type conversion implies that the managed types are simple enough so that their instances can be represented by a single string. This isn't really a big problem because the individual configuration settings are typically either primitive types or otherwise very simple types such as Rectangle, Size etc. (And even if a type instance cannot be represented with a single string, it can always be broken down to pieces that can.)
In any case, the XmlConfiguration doesn't limit support to only built-in FCL types. It can work with user-defined types as well, provided that you implement a TypeConverter-derived type converter for them. The source code package comes with a simple example that shows how to build one.
From time to time you'll want to check if a single key or node exists, remove it, or even erase the complete hierarchy. The following example shows these operations:
// Check if the key exists
bool valueExists = cfg["/Database/Login/Username"] != null;

// Remove the key (nodes 'Database' and 'Login' are not removed)
cfg["/Database/Login/Username"] = null;

// Check if the node exists
ConfigurationNode n = cfg.FindNode("/Database/Login");
bool nodeExists = n != null;

// Check if node contains any subnodes or key/value pairs
bool hasSubNodes = n.HasSubNodes;
bool hasKeys = n.HasKeys;
// true if node has neither subnodes nor key/value pairs
bool isEmpty = n.Empty;

// Remove node (node 'Database' stays)
bool removed = cfg.RemoveNode("/Database/Login");

// Remove everything
cfg.Clear();
Most configuration settings are inherently static: they get stored in a fixed, predefined location in the configuration hierarchy, and their data size is constant. Managing this kind of setting is easy because everything is known in advance: a setting can be referenced using a hard-coded path, and the code can make safe assumptions about the content structure.
On the other hand, some configuration settings, such as the list of most recently opened documents, have a dynamic structure: the size of the data is variable, and it may even happen that there is no data at all. In terms of configuration elements, a node that represents dynamic settings contains a variable number of subnodes and key/value pairs.
In order to read the content of a dynamic setting, the content has to be made discoverable. For that reason the TreeConfiguration class exposes the node content in the form of collections of subnodes, keys and values. These three collections are accessible through corresponding named properties Nodes, Keys and Values:
// Enumerate all subnodes of the node node3
ICollection subNodes = cfg.Nodes["/node1/node2/node3"];
foreach (ConfigurationNode n in subNodes)
{
// Do something with the subnode
}
// Enumerate all keys of the node node3
ICollection keys = cfg.Keys["/node1/node2/node3"];
foreach (string key in keys)
{
// Do something with the key
}
// Enumerate all values of the node node3
ICollection values = cfg.Values["/node1/node2/node3"];
foreach (object value in values)
{
// Do something with the value
}
(Make sure you check the file Test.cs for concrete examples.)
Type conversion quite often results in formatted strings: for example, numbers are presented with decimal and thousands separators, dates are presented with date and time separators etc. The resulting string format is culture-dependent.
Storing culture-sensitive formatted strings without the culture information used to produce them is often an expensive mistake. The initial version of the XmlConfiguration did not store any culture information, often making it impossible to correctly read a configuration file in an environment whose regional settings differ from those of the environment the file was created in.
To reliably convert formatted strings back to concrete types, TypeConverter needs the information about the culture used for the conversion. This is why the XmlConfiguration type now exposes a string property named Culture to identify the culture used in type conversions.
By default, if the Culture property is not set, the XmlConfiguration will use CultureInfo.CurrentCulture for all type conversions. When specifying the culture, use standard culture names, as in the following example:
XmlConfiguration cfg = new XmlConfiguration(...);
// default: uses whatever culture is currently set as CultureInfo.CurrentCulture
cfg.Culture = "fr-FR"; // use French - France culture info
Culture info can also be specified for each individual ConfigurationNode. Culture info specified for a concrete node applies to all of its subnodes and key/value pairs. In fact, setting the XmlConfiguration.Culture property just sets the Culture property of the root ConfigurationNode. If no subnodes override the root node setting, the culture info applies to the whole configuration. The following example shows how to set and override culture info:
XmlConfiguration cfg = new XmlConfiguration(...);
// default: CultureInfo.CurrentCulture
float number = 123.456F;
DateTime now = DateTime.Now;
// Set the French culture for the whole configuration
cfg.Culture = "fr-FR";
cfg["/french/number"] = number; // will be stored as "123,456"
cfg["/french/date"] = now; // will be stored as "25/10/2005 19:29"
// Override culture info for a child node
ConfigurationNode rootNode = cfg.FindNode("/");
ConfigurationPath path = new ConfigurationPath("southafrican");
// "/southafrican" node
ConfigurationNode subNode = rootNode.CreateSubNode(path);
subNode.Culture = "af-ZA";
subNode["number"] = number; // will be stored as "123.456"
subNode["date"] = now; // will be stored as "2005/10/25 07:29 PM"
(For a more detailed example, check the Test.cs file.)
The package contains two projects:
A home-made unit testing framework was added to the package only to avoid dependencies on a third-party framework that some readers might not have on their machines. Under normal circumstances, something like TestDriven.NET would be more appropriate.
The tests in the Test.cs file not only verify the correctness of the given functionality but also demonstrate all aspects of usage. This file is the first place to look for the answers to any questions you might have.
In order to test the serialization of certain FCL classes, the TreeConfiguration project contains a few otherwise unnecessary references to assemblies such as System.Drawing and System.Windows.Forms.
To build the documentation help file such as the one that is part of the package (TreeConfiguration.chm), you will need NDoc. The project source code files themselves contain no comments—only references to external files where the actual comments are. When the project gets built, all those references to individual documentation files in the docs subfolder get analyzed and the corresponding files get compiled into a single resulting XML documentation file. NDoc uses that file to build a help file in a number of different formats.
And that pretty much rounds off this idyllic story about the TreeConfiguration. I just hope that reality won't distort it too much, and that you'll find the code usable.
To continue the theme of landing external data in Office 2010, I've taken a demo excerpt from a session I presented at a conference last week to show how easy it is to land OData information in an add-in. In this sample I simply use the Northwind database, which is a publicly available OData producer. You can read more about OData on the OData.org site. By simply clicking the Northwind link you can see a collection of the database tables you can access. This sample uses the Customers table, retrieving an Atom document with all the customers in the database. You'll then load the customers into the add-in using the same document-level add-in pattern as used in the previous post.
Sample Office Add-in OData Project
- Start Visual Studio 2010 and select New Project…
- Add Rich Content Controls to the document
- Create a two column table on the ODataSample.docx document similar to the one below (formatting is optional). The left column is simply text so just type that information in.
- Place RichTextContentControls from the toolbox into the second column of the table
- In the Property Window, provide property values for the Name and Text for each content control. The Names to use are listed here. For the Text property use the appropriate value as shown above in each content control.
- wccCustomerID
- wccContactName
- wccAddress
- wccCity
- wccCountry
- wccPostalCode
- Add References
- In the Solution Explorer, right-click ODataSample and choose Add References…
- In the .NET tab, select System.Xaml, UIAutomationProvider and WindowsFormsIntegration, and click OK.
- Add a User Control to the project.
- In the Solution Explorer, right-click ODataSample and choose Add, New Item…
- Select User Control, name it CustomerUserControl and click Add.
- In the Properties Window set the Size to 150,750.
- (Note: this user control will host the WPF control.)
- Add a WPF User Control to the project.
- In the Solution Explorer, right-click ODataSample and choose Add, New Item…
- Select WPF from Installed Templates, and select User Control (WPF)
- Name it CustomerPicker.xaml and click Add.
- In the XAML window, completely replace the <Grid></Grid> elements with the following XML
- Add a Data Source for Northwind.svc
- From the menu choose, Data, Add New Data Source…
- Select Service and click Next.
- Type the Northwind service URL and click Go.
- Don’t change the name, click OK, and Finish.
- Add the Code behind for CustomerPicker.xaml.cs
- In the Solution Explorer, right-click CustomerPicker.xaml and choose View Code.
- Add the following using statements just after the ODataSample namespace
- Instantiate the Northwind entities object as a class-level variable
- Add the OnInitialized override method
- Add the button1_Click event
- When done, your code should look like this:
- Add the Code behind for ThisDocument.cs
- That’s it, press F5 and see your add-in load with customers from an OData source. Your final result should look something like this once you click on a customer in the listbox and then click the Insert Customer Information button.
So there you have it, another way to land data in the Office clients (think about pulling data into Excel this way). Accessing data like this is extremely powerful because you can essentially land data that is exposed as an OData end-point by simply using WCF Data Services.
Enjoy! | https://blogs.msdn.microsoft.com/donovanf/2010/08/09/accessing-an-odata-producer-using-an-office-2010-add-in/ | CC-MAIN-2016-30 | en | refinedweb |
notimeout, timeout, wtimeout - control blocking on input
#include <curses.h> int notimeout(WINDOW *win, bool bf); void timeout(int delay); void wtimeout(WINDOW *win, int delay);
The notimeout() function specifies whether Timeout Mode or No Timeout Mode is in effect for the screen associated with the specified window. If bf is TRUE, this screen is set to No Timeout Mode. If bf is FALSE, this screen is set to Timeout Mode. The initial state is FALSE.
The timeout() and wtimeout() functions set blocking or non-blocking read for the current or specified window based on the value of delay:
- delay < 0
- One or more blocking reads (indefinite waits for input) are used.
- delay = 0
- One or more non-blocking reads are used. Any Curses input function will fail if every character of the requested string is not immediately available.
- delay > 0
- Any Curses input function blocks for delay milliseconds and fails if there is still no input.
Upon successful completion, the notimeout() function returns OK. Otherwise, it returns ERR.
The timeout() and wtimeout() functions do not return a value.
No errors are defined.
Input Processing, getch(), halfdelay(), nodelay(), <curses.h>, XBD specification, Parameters that Can be Set.
State Management in ASP.NET: State management is the process by which you maintain state and page information over multiple requests for the same or different pages.

There are two types of state management:
Client Side State Management

ViewState: ASP.NET serializes page state into a hidden form field; if the serialized data exceeds the configured maximum field length (MaxPageStateFieldLength), ASP.NET splits it across multiple hidden fields. The following sample demonstrates how view state adds data as a hidden form field within a Web page's HTML:

<input type="hidden" name="__VIEWSTATE" id="__VIEWSTATE" value="..." />

View state can also be encrypted by setting viewStateEncryptionMode in the application configuration:

<configuration>
  <system.web>
    <pages viewStateEncryptionMode="Always"/>
  </system.web>
</configuration>

Reading and Writing Cookies: A Web application creates a cookie by sending it to the client as a header in an HTTP response. The Web browser then submits the same cookie to the server with every new request.
In classic ASP, the Session object was held in-process (as was everything) in the IIS process, so any IIS crash or application-pool reset caused all Session objects to be lost. This made Session objects unreliable for storing important data, especially on sites dealing with client login information or e-commerce. In ASP.NET, new features have been introduced to make Session objects more reliable and robust.
<configuration>
<system.web>
<sessionState mode="Off|Inproc|StateServer|SQLServer|Custom"/>
</system.web>
<sessionState mode="StateServer" stateConnectionString="tcpip=127.0.0.1:42424"/>
Note: You can run the StateServer on your own local web server or on a separate machine. If you run the state service on another machine, make sure that the port is open. You can check whether the port is open by running telnet, e.g. telnet 192.168.6.1 42424. If you get a reply from telnet, it means the port is open. Because your application's code runs in the ASP.NET worker process and the state service runs in a separate process, objects stored in the Session cannot be stored as references. Your objects must be marked with the [Serializable] attribute; only classes marked with [Serializable] can be serialized. Sample code:
[Serializable]
public class Person {
    public string firstName;
    public string lastName;

    public override string ToString() {
        return firstName + " " + lastName;
    }
}
After you have marked the class as serializable, you can store its instances in a Session variable, e.g.:

Person oPerson = new Person();
Session["MyData"] = oPerson;

For a world-class, highly available and scalable website, consider using a session model other than InProc. Even if you can guarantee via your load-balancing appliance that sessions will be sticky, you still have application recycling issues to contend with. The out-of-process State Service data is persisted across application pool recycles but not computer reboots. However, if your state service runs on a different machine entirely, it will survive web server recycles and reboots.
<sessionState mode="SQLServer"
    sqlConnectionString="data source=127.0.0.1;user id=sa;password=worldofasp;database=dbName" />
When the JSP Tag Extensions (also known as taglibs) first came out, the only option to pass dynamic values as tag attributes was using Request Time (RT) expressions. With the advent of JSTL 1.0, another option has arisen: the Expression Language (EL).
This article assumes the reader already understand how EL works -- if you don't, a good introduction can be found in Sue Spielman's article, "Practical JSTL, Part 1."
The following code shows an example of RT and EL usage. First, we use RT to get the name of a user that is stored in the application context, given its login, passed as a parameter:
<%
String userName = "";
Map tmpMap = (Map) application.getAttribute( "usersMap" );
if ( tmpMap != null ) {
    User tmpUser = (User) tmpMap.get( request.getParameter("login") );
    if ( tmpUser != null && tmpUser.getName() != null ) {
        userName = tmpUser.getName();
    } // inner if
} // outer if
%>
Welcome <%= userName %>!
Using EL and JSP 2.0, the same thing can be done much more concisely:
Welcome ${applicationScope.usersMap[param.login].name}!
As you can see in the example above, it is much easier for the page author to use EL rather than RT. But it is harder for taglib developers to implement custom tags that handle EL, as they have to explicitly write code in the tag handlers to evaluate the EL expressions at runtime.
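The evaluation work hidden behind that EL one-liner is essentially a chain of null-safe scoped lookups. The toy sketch below mimics what the EL engine does for `${applicationScope.usersMap[param.login].name}` — the class and method names are hypothetical, and the article's `User` bean is modeled as a plain `Map` so the sketch stays self-contained:

```java
import java.util.HashMap;
import java.util.Map;

public class ElLookupSketch {

    // Null-safe resolution of applicationScope.usersMap[param.login].name:
    // any missing link in the chain falls back to the empty string,
    // mirroring how EL silently yields nothing for null intermediates.
    static String resolveUserName(Map<String, Object> applicationScope,
                                  Map<String, String> param) {
        Object usersMap = applicationScope.get("usersMap");
        if (!(usersMap instanceof Map)) {
            return "";
        }
        Object user = ((Map<?, ?>) usersMap).get(param.get("login"));
        if (!(user instanceof Map)) {
            return "";
        }
        Object name = ((Map<?, ?>) user).get("name");
        return name == null ? "" : name.toString();
    }

    public static void main(String[] args) {
        Map<String, Object> app = new HashMap<>();
        Map<String, Object> users = new HashMap<>();
        Map<String, Object> joe = new HashMap<>();
        joe.put("name", "Joe");
        users.put("joe", joe);
        app.put("usersMap", users);

        Map<String, String> param = new HashMap<>();
        param.put("login", "joe");
        System.out.println(resolveUserName(app, param)); // prints "Joe"
    }
}
```

Compare the dozen lines of lookup code with the single EL expression: this is exactly the boilerplate the page author is spared.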
The JSP 2.0 specification partially solves this problem, as it natively supports EL expressions -- an EL expression is evaluated by the JSP engine, which in turn passes the result to the tag handlers. I say partially because unfortunately, many applications still rely on a web container that only supports the JSP 1.2 (or even 1.1) specification.
In this article, we demonstrate how code generation tools (in our case, XDoclet) can be used to solve this problem, allowing taglib developers to focus on writing code that implements the tag features, and not code that evaluates its attributes.
Defining a Case Study
In order to explain our solution, let's first create a simple custom tag that will be refactored throughout the article. The tag will be called hello, and its purpose is to display a greeting message to the user, whose name is passed via the optional attribute user. If the user attribute is not set, a generic message is displayed instead. The hello tag will be available in two taglibs: one that supports EL and another that only supports RT. Here is a JSP page with some examples of the tag's usage:
<%@ taglib ...
<article-rt:hello ...

<h1>EL examples</h1>
<article-el:hello/>
<article-el:hello ...
<article-el:hello ...
The taglibs' TLDs are basically the same (they are available here and here); the only differences are the taglib URIs and the tag handler class names. But the tag handlers are different, as listed below:
public class HelloWorldRTTag extends TagSupport {

    private String user;

    public void setUser(String string) {
        this.user = string;
    }

    public int doStartTag() throws JspException {
        String greetings = (user == null || user.equals("")) ?
                "Hello world!" : "Hello " + user + "!";
        try {
            super.pageContext.getOut().write(greetings);
        } catch (java.io.IOException e) {
            throw new JspException(e);
        }
        return Tag.SKIP_BODY;
    }
}

import org.apache.taglibs.standard.lang.support.ExpressionEvaluatorManager;

public class HelloWorldELTag extends TagSupport {

    private String userEL;

    public void setUser(String string) {
        this.userEL = string;
    }

    public int doStartTag() throws JspException {
        // Evaluate the EL expression at runtime before using the value
        String user = (String) ExpressionEvaluatorManager.evaluate(
                "user", this.userEL, String.class, this, super.pageContext);
        String greetings = (user == null || user.equals("")) ?
                "Hello world!" : "Hello " + user + "!";
        try {
            super.pageContext.getOut().write(greetings);
        } catch (java.io.IOException e) {
            throw new JspException(e);
        }
        return Tag.SKIP_BODY;
    }
}
Note that HelloWorldELTag uses ExpressionEvaluatorManager to evaluate the EL expressions at runtime. That class is available in standard.jar from the Jakarta Standard Taglib, which is JSTL's Reference Implementation.
The tag handlers are also shown in the class diagram in Figure 1:
Figure 1. Class diagram before any refactoring
First Refactoring: OO Reuse
If you closely inspect the method HelloWorldELTag.doStartTag() above, you realize it performs two operations:
- Gets the user name, evaluating the EL expression passed as the attribute.
- Checks if the attribute is empty and then prints the message.
You may also notice that HelloWorldRTTag.doStartTag() does almost the same, except that it does not need to evaluate the EL expression. Going one step further, we could say that both methods execute two operations:
- Resolve the attributes.
- Do the real job.
So, using plain OO techniques, we could create an abstract class that does the "real job" and defines an abstract method for resolving the attributes. Then we would extend that class with classes that support RT or EL, as diagrammed in Figure 2:
Figure 2. Class diagram of the pure OO refactoring approach
And here is the relevant code:
public abstract class HelloWorldTagSupport extends TagSupport {

    protected String user;

    public final int doStartTag() throws JspException {
        resolveAttributes();
        return doTheJob();
    }

    private int doTheJob() throws JspException {
        String greetings = (user == null || user.equals("")) ?
                "Hello world!" : "Hello " + user + "!";
        try {
            super.pageContext.getOut().write(greetings);
        } catch (java.io.IOException e) {
            throw new JspException(e);
        }
        return Tag.SKIP_BODY;
    }

    protected abstract void resolveAttributes() throws JspException;
}

public class HelloWorldRTTag extends HelloWorldTagSupport {

    public void setUser(String string) {
        super.user = string;
    }

    protected void resolveAttributes() {
        // do nothing
    }
}

public class HelloWorldELTag extends HelloWorldTagSupport {

    private String userEL;

    public void setUser(String string) {
        this.userEL = string;
    }

    protected void resolveAttributes() throws JspException {
        super.user = (String) ExpressionEvaluatorManager.evaluate(
                "user", this.userEL, String.class, this, super.pageContext);
    }
}
Note that in HelloWorldELTag, the method setUser() does not set the user, but rather the EL expression representing the user; the user field itself is set only in resolveAttributes().
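Stripped of the servlet API, this refactoring is the classic template-method pattern. The sketch below shows the same shape in a self-contained form — the class names are hypothetical stand-ins, and EL evaluation is reduced to a toy `${name}` map lookup rather than a real EL engine:

```java
import java.util.HashMap;
import java.util.Map;

// Template method: the final render() fixes the algorithm;
// subclasses only decide how attributes are resolved.
abstract class GreetingSupport {
    protected String user;

    public final String render() {
        resolveAttributes();
        return (user == null || user.isEmpty())
                ? "Hello world!" : "Hello " + user + "!";
    }

    protected abstract void resolveAttributes();
}

// RT flavor: the attribute arrives already evaluated.
class RtGreeting extends GreetingSupport {
    RtGreeting(String user) { this.user = user; }
    protected void resolveAttributes() { /* nothing to do */ }
}

// EL flavor: the attribute arrives as an expression, evaluated late.
class ElGreeting extends GreetingSupport {
    private final String userEL;
    private final Map<String, String> scope;

    ElGreeting(String userEL, Map<String, String> scope) {
        this.userEL = userEL;
        this.scope = scope;
    }

    // Toy evaluator: "${name}" is looked up in the scope map;
    // anything else is treated as a literal.
    protected void resolveAttributes() {
        if (userEL != null && userEL.startsWith("${") && userEL.endsWith("}")) {
            user = scope.get(userEL.substring(2, userEL.length() - 1));
        } else {
            user = userEL;
        }
    }
}

public class TemplateMethodSketch {
    public static void main(String[] args) {
        System.out.println(new RtGreeting("Joe").render());   // Hello Joe!
        Map<String, String> scope = new HashMap<>();
        scope.put("user", "Jane");
        System.out.println(new ElGreeting("${user}", scope).render()); // Hello Jane!
    }
}
```

The design payoff is that the greeting logic lives in exactly one place, while the two subclasses differ only in when and how the attribute value is produced.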
Second Refactoring: Code Generation
Our first refactoring offered an elegant solution to the original problem, but it created another one: it increased the number of artifacts to be created for each tag handler. With this new architecture, the poor taglib developer now has to create three classes for each tag handler, plus two TLDs for the overall taglib! That overhead sounds not only counterproductive, but also error-prone.
What could we do next to improve this situation? Well, if you take a look on the current EJB specification (2.1), you realize this overhead is similar to that faced by EJB developers. Consequently, we should try the same solution adopted by many of them: using a code-generation tool that does the repetitive work, letting the developer focus on the real job.
Figure 3 shows the architecture for this new solution, with code-generated classes displayed in green:
Figure 3. Class diagram of the code-generation refactoring approach
Note that in this architecture we introduced one more class, HelloWorldTag, which is now the class responsible for doing the real job (previously, that was done by HelloWorldTagSupport). With this separation of responsibilities, we can automatically generate HelloWorldTagSupport, HelloWorldRTTag, and HelloWorldELTag as well. Now all the happy taglib developer has to do is create HelloWorldTag, declaring its attributes and implementing its core logic (such as the methods doStartTag() or even doEndTag()), and the code generation tool will do the rest. In fact, HelloWorldTag behaves like any other tag handler, except for the fact that it does not need setters for its attributes (they will be automatically defined on its subclasses).
Here is the code that has changed (HelloWorldRTTag and HelloWorldELTag remained the same):
public abstract class HelloWorldTag extends TagSupport {

    protected String user;

    public int doStartTag() throws JspException {
        String greetings = (user == null || user.equals("")) ?
                "Hello world!" : "Hello " + user + "!";
        try {
            super.pageContext.getOut().write(greetings);
        } catch (java.io.IOException e) {
            throw new JspException(e);
        }
        return Tag.SKIP_BODY;
    }
}

// Class automatically generated -
// please do not modify
public abstract class HelloWorldTagSupport extends HelloWorldTag {

    public int doStartTag() throws JspException {
        resolveAttributes();
        return super.doStartTag();
    }

    protected abstract void resolveAttributes() throws JspException;

    public abstract void setUser(String user);
}
XDoclet to the Rescue
Now that we've presented the theoretical stuff, let's jump to the practical side of the solution. Although we could use any code generation tool to implement our solution, we will stick to XDoclet, which is a well-known tool nowadays.
XDoclet works in a simple, straightforward way: it scans the source code for special Javadoc tags and uses templates to generate the proper artifacts according to those tags. For instance, it generates all of the EJB interfaces and descriptors based on the XDoclet tags found in a single file, the EJB implementation.
Currently, XDoclet already supports taglib development: it generates a TLD according to XDoclet tags found on the tag handler. The XDoclet tags are:
@jsp:tag: Defines a tag handler (must be used before the class declaration).
@jsp:attribute: Defines a tag attribute (must be used before each setter).
As an example, let's add XDoclet tags to our original tag handler:
/**
 * @jsp:tag name="hello" body-content="empty"
 */
public class HelloWorldRTTag extends TagSupport {

    private String user;

    /**
     * @jsp:attribute name="user" required="false" rtexprvalue="true"
     */
    public void setUser(String string) {
        this.user = string;
    }

    public int doStartTag() throws JspException {
        String greetings = (user == null || user.equals("")) ?
                "Hello world!" : "Hello " + user + "!";
        try {
            super.pageContext.getOut().write(greetings);
        } catch (java.io.IOException e) {
            throw new JspException(e);
        }
        return Tag.SKIP_BODY;
    }
}
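Conceptually, the scanning step XDoclet performs on a class like the one above boils down to pulling `@jsp:*` tags out of Javadoc comments. The toy scanner below is a regex illustration of that idea only — it is not XDoclet's actual implementation, which uses a real Javadoc parser:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TagScanSketch {

    // Pulls "@jsp:<name> <attributes>" doc tags out of source text.
    // The attribute part stops at the end of the comment line or at '*'.
    static List<String> scanJspTags(String source) {
        List<String> found = new ArrayList<>();
        Matcher m = Pattern.compile("@jsp:(\\w+)([^\\n*]*)").matcher(source);
        while (m.find()) {
            found.add(m.group(1) + ":" + m.group(2).trim());
        }
        return found;
    }

    public static void main(String[] args) {
        String src = "/**\n * @jsp:tag name=\"hello\" body-content=\"empty\"\n */\n"
                + "public class HelloWorldRTTag {\n"
                + "    /** @jsp:attribute name=\"user\" required=\"false\" */\n"
                + "    public void setUser(String s) {}\n"
                + "}\n";
        for (String tag : scanJspTags(src)) {
            System.out.println(tag);
        }
        // prints:
        // tag:name="hello" body-content="empty"
        // attribute:name="user" required="false"
    }
}
```

A tool built on this principle can then feed the collected tag metadata into templates — which is exactly the division of labor between XDoclet's scanner and the template files discussed next.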
What we need to do now is customize XDoclet to generate the three classes each tag handler requires (besides the TLDs, which it already generates). Note that the XDoclet tags will be used before the attributes declaration of our tag handler, not before the setters.
Our first step is to create one XDoclet template for each of these classes, plus an Ant task that scans our source code and applies the templates to each tag handler found. I will call the new Ant task eltagdoclet; it can be defined in a buildfile as shown below:

<eltagdoclet excludedTags="@version,@author,@todo"
        destDir="generated" verbose="true">
    <fileset dir="src" includes="**/*Tag.java" />
    <template templateFile="templates/TagSupport.j"
        destinationfile="{0}Support.java"/>
    <template templateFile="templates/ELTag.j"
        destinationfile="{0}EL.java"/>
    <template templateFile="templates/RTTag.j"
        destinationfile="{0}RT.java"/>
</eltagdoclet>
Now we need to create the templates. Let's take a look at TagSupport.j first:

/**
 * Generated File - Do Not Edit!
 */
package <XDtPackage:packageOf><XDtClass:fullClassName/></XDtPackage:packageOf>;

import javax.servlet.jsp.JspException;

public abstract class <XDtClass:className/>Support extends <XDtClass:className/> {

    public int doStartTag() throws JspException {
        resolveAttributes();
        return super.doStartTag();
    }

    protected abstract void resolveAttributes() throws JspException;
}
This template is simple, as it only uses the name of the class being scanned. The other templates (RTTag.j and ELTag.j) are more complex, as they need to iterate through the attributes of the original tag handler in order to generate resolveAttributes() and the setters in the new class. For instance, the fragment below was extracted from ELTag.j:

protected void resolveAttributes() throws JspException {
<XDtField:forAllFields>
<XDtField:ifHasFieldTag>
    super.<XDtField:fieldName/> = this.<XDtField:fieldName/>EL == null
        ? null
        : (<XDtField:fieldType/>) ExpressionEvaluatorManager.evaluate(
            "<XDtField:fieldName/>", this.<XDtField:fieldName/>EL,
            <XDtField:fieldType/>.class, this, super.pageContext);
</XDtField:ifHasFieldTag>
</XDtField:forAllFields>
}
Note: When I started writing this article, XDoclet's current version (1.2.1) didn't offer a way to capitalize an attribute name, so I created a patch and attached it to an existing XDoclet enhancement request. The patch was accepted, but as of this writing a new version has not been released yet. So, in order to use the solution proposed here, you have to either compile XDoclet from CVS or use the .jar files provided in this article's source.
Now that XDoclet has generated our classes using the new templates, we need to generate the TLDs for each taglib (the regular taglib and the taglib that supports EL). We could create new templates for the job, but as I said earlier, XDoclet already supports TLD generation (through the webdoclet Ant task), so it's easier to just take advantage of this feature. The trick now is to include XDoclet tags in the templates and then run webdoclet on the generated classes, as shown in the buildfile fragment below:
<webdoclet excludedTags="@version,@author,@todo" verbose="true" destDir="web">
    <fileset dir="generated" includes="**/*TagRT.java"/>
    <jsptaglib Jspversion="1.2" taglibversion="1.0" shortname="article" destinationFile="article.tld" destDir="web/WEB-INF/tld"/>
</webdoclet>

<webdoclet excludedTags="@version,@author,@todo" verbose="true" destDir="web">
    <fileset dir="generated" includes="**/*TagEL.java"/>
    <jsptaglib Jspversion="1.2" taglibversion="1.0" shortname="article-el" destinationFile="article-el.tld" destDir="web/WEB-INF/tld"/>
</webdoclet>
Finally, in order to keep this article short, I have shown only code snippets. The complete source code is available for download as source.zip.
Conclusion
This article demonstrates a simple, yet powerful solution for the problem of supporting EL in your custom taglibs. Although the source code included with this article is sufficient for most needs, there is still room for improvement, such as:
- Evaluating EL inside of a tag body.
- Generating TLDs and .jars specific for each JSP version.
- Implementing eltagdoclet as a Java class.
In particular, one project that could take advantage of this approach is Jakarta Taglibs. Although this project hosts the JSTL Reference Implementation (as the Jakarta Standard Taglib), virtually all of the other taglibs developed there lack EL support. The reason for that deficiency is that the project is short of developers nowadays, so the few active committers (including yours truly) do not have spare time to manually add EL support to each tag handler (after all, there are dozens of tags distributed among all of the taglibs of the project). In other words, we would like to eat our own dog food, but we need to do it in an efficient way. Hopefully, this article is a first step in that direction. Once we make any progress, I will post a comment here or in my weblog.
Details
- Type:
Bug
- Status: Closed
- Priority:
Major
- Resolution: Fixed
- Affects Version/s: 10.1.2.1
- Component/s: Network Client
- Environment: Debian unstable, Linux 2.6.14.2, libc 2.3.5-6
- Urgency: Normal
- Issue & fix info: High Value Fix, Repro attached
- Bug behavior facts: Embedded/Client difference
Description
Trying to create a database with the following URL (note the Chinese character in the database name):
jdbc:derby://localhost:1527/\u4e10;create=true
throws the following exception:
----%<----
Exception in thread "main" org.apache.derby.client.am.SqlException: Unicode string can't convert to Ebcdic string
at org.apache.derby.client.net.EbcdicCcsidManager.convertFromUCS2(Unknown Source)
at org.apache.derby.client.net.Request.writeScalarPaddedString(Unknown Source)
at org.apache.derby.client.net.NetConnectionRequest.buildRDBNAM(Unknown Source)
at org.apache.derby.client.net.NetConnectionRequest.buildACCSEC(Unknown Source)
at org.apache.derby.client.net.NetConnectionRequest.writeAccessSecurityONLconnect(Unknown Source)
at org.apache.derby.client.net.NetConnection.flowConnect(Unknown Source)
at org.apache.derby.client.net.NetConnection.<init>(Unknown Source)
at org.apache.derby.jdbc.ClientDriver.connect(Unknown Source)
at java.sql.DriverManager.getConnection(DriverManager.java:525)
at java.sql.DriverManager.getConnection(DriverManager.java:193)
at jdbctest.Main.main(Main.java:33)
----%<----
It's possible, however, to create databases using the embedded driver, using a URL like:
jdbc:derby:\u4e10;create=true
Tested with both 10.1.1.0 and 10.1.2.1 with the same result.
Complete code to reproduce the bug:
----%<----
public static void main(String[] args) throws Exception {
    // reconstructed reproduction: same URL as in the description above
    DriverManager.getConnection(
            "jdbc:derby://localhost:1527/\u4e10;create=true");
}
----%<----
Issue Links
- blocks
DERBY-4827 Modify the documentation for the 10.7 release regarding the UTF-8 CCSID manager
- Closed
- is depended upon by
DERBY-4805 Increase the length of the RDBNAM field in the DRDA implementation
- Resolved
- is related to
DERBY-4584 Unable to connect to network server if client thread name has Japanese characters
- Closed
- relates to
DERBY-2251 When I Create a table with Chinese table name , ij report IO exception.
- Closed
DERBY-4799 IllegalArgumentException when generating error message on server
- Closed
Activity
This is not related to Derby-708. This works in the embedded driver, but the path is encoded in EBCDIC (default DRDA encoding) between network client and server, and thus a character set equivalent to ISO-8859-1 is supported. This does not include chinese characters.
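Bernt's point can be checked directly with the JDK charset API: the character from this report fits UTF-8 but not the Latin-1 repertoire that the default DRDA EBCDIC code page can carry. This is a minimal sketch, not Derby code:

```java
import java.nio.charset.StandardCharsets;

public class EncodingCheck {
    // True if the name fits the ISO-8859-1 repertoire, which is roughly
    // what the legacy EBCDIC handshake can represent (per the comment above).
    static boolean fitsLatin1(String name) {
        return StandardCharsets.ISO_8859_1.newEncoder().canEncode(name);
    }

    static boolean fitsUtf8(String name) {
        return StandardCharsets.UTF_8.newEncoder().canEncode(name);
    }

    public static void main(String[] args) {
        String dbName = "\u4e10"; // the Chinese character from this report
        System.out.println("Latin-1: " + fitsLatin1(dbName)); // false
        System.out.println("UTF-8:   " + fitsUtf8(dbName));   // true
    }
}
```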
I ran into another occurrence of this while looking at the tests databasePermissions.java and databasePermissions_net.java. The 'embedded' version is made to run some subcases using Greek characters as username and password; the networkserver/derbynetclient version isn't.
When you attempt to modify this so network server uses the user name and password strings with non-ascii (Greek, in this case) characters you bump into the same error (22005.S.3 / CANT_CONVERT_UNICODE_TO_EBCDIC) in org.apache.derby.client.net.EbcdicCcsidManager.convertFromUCS2.
Bernt said in this issue:
"... the path is encoded in EBCDIC (default DRDA encoding) between network client and server, and thus a character set equivalent to ISO-8859-1 is supported. This does not include chinese characters."
Are we locked into EBCDIC for this initial negotiation? I believe for data transfer etc, we use UTF-8. Could we change future versions to use UTF-8 for negotiating the databasename, user and password, or would that be in violation of the standard?
Thanks
Kathey
I did a quick prototype of Network server/client using a UTF8CcsidManager and was able to connect to the chinese database, but I don't see how we could implement this change and maintain compatibility with earlier versions. All of the exchange of server attributes is done in the CCSID manager encoding so we couldn't negotiate which CcsidManager to use. Are we just stuck until version 11 or does someone have an idea how to manage this?
I was wondering whether, until we have a long term solution in version 11, we should provide support for this in earlier versions by having a property which can be used to indicate UTF8 rather than EBCDIC? Just a suggestion.
I know in general the community has avoided such properties, but perhaps it is justified in this case since there seems to be no other alternative. For client we would need to use a system property, since there is no derby.properties file for the client side. It seems a little messy to me, but the upside is that it would be a change that we could backport, so users could benefit immediately.
I noticed in the drda spec:
4.3.1.13 CCSID Manager
The CCSIDMGR allows the specification of a single-byte character set CCSID to be associated
with character typed parameters on DDM command and DDM reply messages. The CCSID
manager level of the application requester is sent on the EXCSAT command and specifies the
CCSID that the application requester will use when sending character command parameters.
So on the one hand it looks like there is the facility to specify a CCSID to use. I think it would be 1208 for UTF-8, but it explicitly says it should be a single-byte character set, so perhaps using UTF-8 is not in compliance with DRDA. But later in an example it mentions a server with CCSID 1208.
4.3.5.2 Examples
Below is a simple example of intermediate server processing. The example assumes that each
server has a different CCSID, indicated in the diagram generically as ebcdic, unicode, or ascii. In
the figure, the upstream requester has an EBCDIC CCSID (such as CCSID 37), the intermediate
server has a Unicode CCSID (such as CCSID 1208)...
So I am not totally sure whether it is legal or not.
Here's a possible workaround for this problem that bypasses DRDA compliance issues. It's not elegant but makes it possible to address this issue without changing the DRDA protocol. The idea is supporting database path aliases as a property. This may have other benefits / uses as well. Your thoughts please.
Idea:
I know that some databases [maybe even DB2? DB2 uses DRDA and does not encounter this problem] store location (host / port / path / name [or alias]) information in a file. Could we implement something similar (database names/aliases) using derby properties that can be read by Network server from the derby.properties file to resolve database names on the connection URL? For the server-side you would only, I think, need to specify the absolute or relative path to the database.
This seems innocuous enough a feature that we could backport to the older codelines to resolve this issue? It would only impact existing implementations that chose to set the property.
It struck me that something like what is done for the USER property might work well:
derby.dbalias.myDbAlias=<Yada-Path>/realDbName
And the connection URL would list only 'myDbAlias'?? I guess this would have to override derby.system.home and be overridden if the connection URL looked like a PATH? There are probably other issues I have not thought of..
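The lookup being proposed can be sketched in a few lines; the derby.dbalias property name and this behavior are the comment's proposal, not an existing Derby feature:

```java
import java.util.Properties;

public class DbAliasSketch {
    // Resolve a connection-URL database name through a hypothetical
    // derby.dbalias.<name> property; fall back to the name unchanged.
    static String resolve(String name, Properties props) {
        return props.getProperty("derby.dbalias." + name, name);
    }

    public static void main(String[] args) {
        Properties p = new Properties();
        p.setProperty("derby.dbalias.myDbAlias", "/data/realDbName");
        System.out.println(resolve("myDbAlias", p)); // /data/realDbName
        System.out.println(resolve("otherDb", p));   // otherDb
    }
}
```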
Just one quick comment here. I once started looking at this issue and I got some way towards a solution. My impression from looking at the DRDA spec is that it may not be THEORETICALLY possible (because of limitations such as those mentioned by Kathey), but in PRACTICE it should be possible to extend Derby's driver to use a different encoding and fall back to the old behavior when connected to an old server. It should be simple to make the server compatible with the new Derby client, old Derby client, DB2 client or the ODBC client.
I think I still have the work that I did on this lying around. I can try to make a patch of it, but I can't guarantee that the patch can be made against the latest trunk. That patch would by no means be a polished solution, but could perhaps be a starting point...
Here is a patch containing the work I had in my sandbox. It is made against rev 660997. Updating to the current trunk produced one conflict in DRDAConnThread.java that didn't look too bad, but I'm not sure how to resolve it....
Hi Kathey,
my work on this issue should be in the patch. When working on it, I didn't really aim for a minimal solution, so it may be possible to achieve this with less work than what my patch suggests.
Based on what I can remember it seems like you are on the right track. Once the negotiating is working you just need to go through the code to find all the places where it is assumed that a given string length will fit in the same number of bytes. That isn't too bad as I recall, but there are some sticky points where a string, encoded in the negotiated charset, should be placed in DRDA fields which are specified with a fixed BYTE size, and should be padded to the correct size with space chars. This is simple as long as you can rely on the space character fitting in one byte, but when supporting encodings such as UCS-2 where space is two bytes, you can no longer use Arrays.fill() directly...
I also think there is a DRDA field where you are supposed to fill in the client's ip-address (or part of it) as a string in the negotiated charset, and that the number of bytes you can use is not enough to encode in UCS-2. But I think I concluded that this field is not used for anything important, so deviating from the spec here should not be a problem...
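The padding pitfall Olav describes can be sketched as follows: once the negotiated charset may encode a space in more than one byte, a one-byte Arrays.fill() no longer works, so the space itself has to be encoded. This is an illustration under the assumption that the encoded string already fits the field:

```java
import java.nio.charset.Charset;

public class PadDemo {
    // Pad the encoded form of s to exactly len bytes using the charset's
    // own encoding of ' '. Assumes the encoded string fits within len.
    static byte[] padToLength(String s, int len, Charset cs) {
        byte[] data = s.getBytes(cs);
        byte[] space = " ".getBytes(cs);
        byte[] out = new byte[len];
        System.arraycopy(data, 0, out, 0, data.length);
        for (int i = data.length; i + space.length <= len; i += space.length) {
            System.arraycopy(space, 0, out, i, space.length);
        }
        return out;
    }

    public static void main(String[] args) {
        // In UTF-16BE a space is two bytes (0x00 0x20), so a one-byte fill
        // would corrupt the field; this keeps the padding well-formed.
        byte[] padded = padToLength("ab", 8, Charset.forName("UTF-16BE"));
        System.out.println(padded.length); // 8
    }
}
```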
I have been talking with the local DRDA experts and found out that they are working on a new DRDA ACR for a UNICODEMGR which I think can help us with this issue. The short summary is this:
EXCSAT and ACCSEC are always sent EBCDIC. As part of the MGRLVL exchange, client sends UNICODEMGR 1208 which means that it is requesting all additional DDM parameters will be exchanged in code page 1208. The server responds with UNICODEMGR 1208 if it can accommodate the request. Otherwise it responds with UNICODEMGR 0 and all DDM parameters will continue to be exchanged in EBCDIC.
One problem with this approach is that ACCSEC currently has an RDBNAM parameter which we treat as required (the spec lists it as optional) and that has to be sent EBCDIC. So, my proposal is that we use the UNICODEMGR and we send RDBNAM on ACCSEC only if the EBCDIC conversion can be done. If the conversion can't be done, we send no RDBNAM on ACCSEC. Then we change the server to use the RDBNAM sent with the unicode SECCHK instead of the one sent on ACCSEC. (Currently we just verify that the SECCHK RDBNAM is the same as the one that was sent on ACCSEC.)
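The client-side decision described above (send RDBNAM on ACCSEC only when the EBCDIC conversion can succeed) can be sketched like this; ISO-8859-1 stands in for the single-byte EBCDIC repertoire, and this is an illustration rather than the actual client code:

```java
import java.nio.charset.StandardCharsets;

public class AccsecSketch {
    // Per the proposal above: include RDBNAM on ACCSEC only if the database
    // name can be converted to the single-byte EBCDIC code page (repertoire
    // approximated here by ISO-8859-1); otherwise defer the name to the
    // UTF-8 encoded SECCHK.
    static boolean sendRdbnamOnAccsec(String dbName) {
        return StandardCharsets.ISO_8859_1.newEncoder().canEncode(dbName);
    }

    public static void main(String[] args) {
        System.out.println(sendRdbnamOnAccsec("mydb"));   // true
        System.out.println(sendRdbnamOnAccsec("\u4e10")); // false
    }
}
```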
For an old client working with a new server, there will be no regression and the error message on sending a database name with international characters will be the same as currently listed in
DERBY-728.
For a new client working with an old server, this will mean that all the cases that currently pass will still pass, but if a nonconvertible database name is sent (e.g. one with Chinese characters), the server will send back a SYNTAXRM and the server console will show:
org.apache.derby.impl.drda.DRDAProtocolException: Execution failed because of a Distributed Protocol Error: DRDA_Proto_SYNTAXRM; CODPNT arg = 2110; Error Code Value = e. Plaintext connection attempt from an SSL enabled client?
    at org.apache.derby.impl.drda.DRDAConnThread.throwSyntaxrm(DRDAConnThread.java:513)
    at org.apache.derby.impl.drda.DRDAConnThread.missingCodePoint(DRDAConnThread.java:543)
    at org.apache.derby.impl.drda.DRDAConnThread.parseACCSEC(DRDAConnThread.java:1948)
    at org.apache.derby.impl.drda.DRDAConnThread.processCommands(DRDAConnThread.java:943)
    at org.apache.derby.impl.drda.DRDAConnThread.run(DRDAConnThread.java:290)
The client could intercept the SYNTAXRM and knowing it was unable to convert the RDBNAM to EBCDIC could throw the same message it does now. The only regression would be that users attempting to send an invalid database name would now see the server side protocol error occur. I think it is an acceptable regression, since it won't cause any working cases to fail even with mixed revision server/client and it will enable us to move forward and have internationalized database name, user and password.
I prototyped the change and it all seemed to work ok. I'll attach the prototype patch.
I would like to implement as much as possible of this for 10.5, but since approval of the ACR by opengroup won't happen by the time we release 10.5, I propose to make the implementation dependent on a client system property derby.drda.unicodemgr=true. This would be false by default but could be switch to true in a maintenance release once opengroup approval occurs. Currently the hope is to have the ACR available publicly by the end of January. Then I would need Rick's help to try to push it through opengroup since he is our opengroup rep. I don't know how long that takes.
Attaching prototype of changes. This patch is not for commit. The prototype implements the UNICODEMGR manager level to negotiate DDM parameter encoding. There are some places where performance degradation could occur which need to be addressed, and some other issues in comments in the code. The prototype implementation adds a UTF8CcsidManager to client and server and switches the ccsidmgr in the DDMReader and DDMWriter based on the negotiated encoding. Probably that is not the clearest implementation since the new thing is called a UNICODEMGR not a CCSIDMANAGER, but this was the quickest way to implement the prototype. I haven't thought of a better way to do it yet but am open to suggestions.
The actual implementation will be in subtasks of this issue.
- Remove required RDBNAM from ACCSEC
- Client should only send RDBNAM on ACCSEC if EBCDIC conversion is possible.
- Accommodate length delimited DRDA strings where string length does not equal byte length.
- Implement UNICODEMGR support. Perhaps this can be broken up into subtasks but hopefully won't be too big ones the other three are done.
In the reproduction for this issue there is a Chinese character \u4e10. I'd like to know if anyone knows the meaning of this character before I put it in a bunch of tests.
Thanks
Kathey
According to an online dictionary it means "[Formal] a beggar", but I don't speak Chinese, so I really can't say.
I also found some web sites that indicate something like that, I think it's safe enough to use in tests.
Attached is ACR7007: a proposed change to the DRDA spec for UNICODEMGR, which will allow us to implement a fix for this issue. The ACR was developed within IBM with plans to present it to opengroup for approval. The authors said I could post it here so Derby could benefit and provide comments. Ultimately IBM will submit this to opengroup for incorporation in the spec.
Please take a look and post any comments to this issue. Rick may be especially interested as our representative at opengroup.
At the risk of pointing out the obvious, I tested this on the 10.5 RC1 using ij and the issue does not arise.
Interestingly enough, the folder that is created for the database seems to have the Unicode representation of the character and is just called: u4e10
I'm just wondering whether it isn't somehow assuming the u4e10 as a string literal rather than a Unicode character.
Tiago said:
> I'm just wondering whether it isn't somehow assuming the u4e10 as a string literal rather than a Unicode character.
Yes, ij doesn't support escaped unicode characters. You have to use a java program, e.g.
Connection conn = DriverManager.getConnection("jdbc:derby://localhost:1527/\u4e10;create=true");
Hi Kathey,
Thanks for attaching the proposed change to Volume 1 of the DRDA spec. It got me thinking about SQL identifiers. It seems from the changes to chapters 6 and 7 on page 5 that DRDA still thinks that sql identifiers are limited to 255 bytes. I don't know where this limitation actually surfaces. It doesn't surface in the simple test I have attached (BigTableName) which pokes and peeks a table whose schema name and table name are each 128 unicode characters long, represented in utf-8 as 384 bytes apiece.
But it may prevent us from extending our DRDA or SQL support in the future. The maximum length of a SQL identifier is 128 unicode characters, according to part 2 of the 2008 SQL Standard, section 5.2 (<token> and <separator>), syntax rule (13). Derby supports this maximum length. At two bytes per Java character, this works out to 256 bytes, not 255. Since each unicode character can potentially expand to 4 bytes in UTF-8 encoding, the maximum length of a UTF-8 encoded identifier is 512 bytes. I believe that DRDA's sql identifiers should be at least 256 bytes long and probably 512 bytes.
Thanks.
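Rick's arithmetic is easy to verify: a 128-character identifier made of 3-byte UTF-8 characters such as \u4e10 encodes to 384 bytes, already past DRDA's 255-byte assumption. A quick check, not tied to Derby code:

```java
import java.nio.charset.StandardCharsets;

public class IdentifierLength {
    static int utf8Length(String s) {
        return s.getBytes(StandardCharsets.UTF_8).length;
    }

    public static void main(String[] args) {
        // Maximum-length SQL identifier (128 chars), each a 3-byte character
        StringBuilder name = new StringBuilder();
        for (int i = 0; i < 128; i++) {
            name.append('\u4e10');
        }
        System.out.println(utf8Length(name.toString())); // 384
    }
}
```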
Thank you Rick for looking at this.
I too am concerned about the DRDA 255 byte character string limit. I had thought of it especially in terms of this feature as users may have much longer paths for database name. I think this general limit extension needs to be addressed as a different ACR. I think we should open a separate issue for it and pursue it as a separate ACR with opengroup.
Unassigning this issue. I have not had time to focus on it and don't want to prevent someone else from picking it up.
I think the spec proposal is complete and the prototype is based on that and seems to work ok. The hardest part seems to be
DERBY-4009 to check the byte length limitations because we need to perform the check before we send and currently we do the byte conversion at a pretty low level during the send. The prototype doesn't have the checks.
Another issue regarding this change may whether to implement it before it officially gets into the spec. I was thinking maybe the initial implementation could be based on a property which would be made the default when the ACR is accepted,. On the other hand there are no user interfaces affected, so maybe it would be ok to go ahead and implement it. We have I think implemented some protocol extensions for setQueryTimeout and session caching that are Derby specific, but I may be wrong on that. This may also be a non-issue at the current rate of progress.
Anyway, please feel free to pick up on this issue. I will provide any assistance I can.
I marked this issue with labels mentor and gsoc. I would be willing to mentor a returning student in this project, but think it is probably not a good starting project for someone just joining derby.
I was recently discussing UNICODEMGR implementation with some engineers working on other database products. The topic of the encoding for embedded character data in PRDDTA and CRRTKN came up. The fields themselves are architected in DRDA as BYTSTRDR - Byte String, but do contain character data. The question was whether the character data should be UNICODE if UNICODEMGR was being used or should remain EBCDIC. I was asked how it will be in Derby's implemenation. I think it should be UNICODE. It just makes more sense (to me) and is how the current code will work, but should be limited to single byte characters. Let me know if you have a different opinion or better ideas.
I am probably garbling this, since I don't have the DRDA context in my head, but how would this help create database names with Chinese characters? AFAIK they are encoded in Unicode with 3 bytes (UTF-8), not single bytes..
@ Kathey:
I do agree with you, if the UNICODE format is chosen, it should be used to all subsequent exchanges of data. If DRDA specifies this field as byte string, then I suppose the encoding is left to the discretion of the implementation.
@ Dag:
In actual fact it wouldn't help the database names but I think for matters of coherence, if the UNICODE format is agreed upon between the parties, then it should be used for all cases.
Where does the single-byte character restriction come from? If the standard just says these objects contain byte strings, I'd assume we are allowed to encode multi-byte characters in them too.
I've had a talk with Kathey about how to approach this issue and here's a summary of what was discussed.
Essentially Kathey explained to me what she implemented in the derby-728_proto_diff.txt prototype. Based on this, we decided that I will first be addressing
DERBY-4009, which is a pre-requisite for DERBY-728. After that I will be implementing a UTF8CcsidManager on the server side and respective tests. Finally I will be adding the code for the manager level negotiation (UNICODEMGR introduces a new manager level) and for the switching between the two CcsidManagers.
This is my first patch for this issue.
Kathey, we talked about putting in place that setDatabaseName() method as one functional patch. However, that has already been checked-in.
As such, what this patch does is set the dbname and shortDbName fields to private. Being protected meant that there were classes that could bypass the setDatabaseName() and set the name directly to the attribute.
This is usually a bad idea so I encapsulated the fields with getters and setters to enforce the setDatabaseName() method.
I will be running regressions today.
I need to brainstorm a bit here regarding the Utf8CcsidManager class that I have.
There's one thing that I didn't implement because of a detail regarding the following method:
public String convertToUCS2(byte[] sourceBytes, int offset, int numToConvert) { }
So far we had this method on the EbcdicCcsidManager and then it's fine because EBCDIC only uses one byte per character at all times. So the offset parameter always works in a consistent way, that is to say that if offset is 5, we are not only getting 5 sourceBytes but also getting exactly 5 characters.
However, when we come to an Utf8CcsidManager, this offset might land straight in the middle of a character; then if we cut a character in half byte-wise, we will end with a totally different character.
Is it acceptable to consider offset as a number of characters rather than a number of bytes? It works both ways in EBCDIC but for UTF8 it would mean converting the sourceBytes to a String, offsetting it character-wise, and then convert the «offsetted» String to UCS2.
Any other ideas anyone might have?
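The hazard Tiago describes is easy to demonstrate: decoding UTF-8 bytes from an offset that lands inside a multi-byte character yields replacement characters instead of the original text. A minimal illustration:

```java
import java.nio.charset.StandardCharsets;

public class OffsetDemo {
    // Decode the tail of a UTF-8 byte buffer starting at a byte offset
    static String decodeFrom(byte[] bytes, int offset) {
        return new String(bytes, offset, bytes.length - offset, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] bytes = "\u4e10X".getBytes(StandardCharsets.UTF_8); // 3 bytes + 1
        System.out.println(decodeFrom(bytes, 0)); // intact
        // Offset 1 splits the 3-byte character; the orphaned continuation
        // bytes decode to U+FFFD replacement characters.
        System.out.println(decodeFrom(bytes, 1).contains("\uFFFD")); // true
    }
}
```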
I think from a practical perspective, at least for the server, the length passed is always going to be on a character border.
I only looked at the server code, but see the only place where it is ultimately called is from DDMReader.readString();
protected String readString() throws DRDAProtocolException {
    return readString((int) ddmScalarLen);
}
ddmScalarLen is what was sent from the client as the actual length of the ddm object, so the length should be good.
I think it would be good enough to document this assumption in the javadoc of the method.
We could not convert the sourceBytes because the buffer that is being passed is not all data; the rest of it (after offset + numToConvert bytes) is just the rest of the empty buffer.
Here is the list of changes checked in by patch #2:
- Utf8CcsidManager.java is rolled in for the server - This CCSID manager is not yet really used in practice. Instances are created but no real live code refers to it yet. I'm still unsure about this implementation and this code might change, but I needed to have a draft in place to proceed further.
- Both DDMWriter and DDMReader now have three CCSID manager attributes. These two classes have one instance of each of the available managers (UTF-8 and EBCDIC) and a reference to the current enabled one (already part of the existing implementation).
These two classes also no longer receive a CCSID manager in their constructor. Instead, they default to EBCDIC and whenever required we can setUtf8Ccsid() or revert to EBCDIC by doing setEbcdicCcsid().
- The DRDAConnThread.java class has also lost its own ccsidManager and will be using the one from its instance of DDMWriter. This class now initializes its DRDAStrings within the initialize() method, as it is only then that we have a DDMWriter available.
- Finally, the tests ProtocolTest and TestProto have also been changed to accommodate the constructor changes in DDMWriter and DDMReader.
–
I will be running regressions tonight. Provided that all goes well and there are no objections to this patch, it is ready for commit.
I forgot to mention but this patch aims at laying foundation for the dual CCSID manager possibility but the goal is to not break any of the current code. As I mentioned, the UTF-8 CCSID manager isn't yet being used by live code despite being in place and all going well, the EBCDIC will be the default even after the patch, without anything breaking.
The regressions ran with no failures or errors.
The regressions ran with no failures using both p2 patches.
Thanks Tiago for the patch. I think the naming of the methods is a bit confusing in the CcsidManager classes. For example, the method public String convertToUCS2(byte[] sourceBytes) should probably just be convertToJavaString, as best I can tell that is what it is trying to do. So instead of:
String sourceString = new String(sourceBytes,"UTF-8");
return new String(sourceString.getBytes("UTF-16"),"UTF-16");
I think you could just return sourceString, unless I am missing something entirely.
I don't see that the CcsidManager numToCharRepresentation is being used anywhere. Could that just be removed?
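Kathey's round-trip observation checks out: encoding a Java String to UTF-16 and decoding it back yields an equal String, so the intermediate conversion adds nothing. A quick confirmation:

```java
import java.nio.charset.StandardCharsets;

public class RoundTrip {
    // Same effect as new String(sourceString.getBytes("UTF-16"), "UTF-16")
    // from the patch quoted above
    static String roundTrip(String s) {
        return new String(s.getBytes(StandardCharsets.UTF_16), StandardCharsets.UTF_16);
    }

    public static void main(String[] args) {
        String source = "\u4e10 test";
        System.out.println(roundTrip(source).equals(source)); // true
    }
}
```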
Here are two refreshed patches after discussing the above issue with Kathey on IRC. We agreed that the convertToUCS2 method should actually be called convertToJavaString as we have no real requirement to have UCS2 Strings. As long as they are Java Strings, it should be fine as they won't be sent over the network (not in UCS2 anyway).
With this change, the conversion in the Utf8CcsidManager also changes slightly in this patch.
I've also removed a leftover comment that shouldn't have stayed there.
It needs to be said that the naming change should also happen in the client's CcsidManager classes, but I will keep this task for when I do the client changes.
I'll be running regressions again tonight and post the results in the morning.
The regressions failed for the last patch. Do NOT commit. The failure was as follows:
There was 1 failure:
1) testBootLock(org.apache.derbyTesting.functionTests.tests.store.BootLockTest)
junit.framework.AssertionFailedError: Minion did not start or boot db in 60 seconds.
----Minion's stderr:
Exception in thread "main" java.lang.NumberFormatException: For input string: ""
    at java.lang.NumberFormatException.forInputString(NumberFormatException.java:48)
    at java.lang.Integer.parseInt(Integer.java:470)
    at java.lang.Integer.valueOf(Integer.java:528)
    at java.lang.Integer.decode(Integer.java:958)
    at org.apache.derbyTesting.functionTests.tests.store.BootLockMinion.main(BootLockMinion.java:42)
----Minion's stderr ended
    at org.apache.derbyTesting.functionTests.tests.store.BootLockTest.waitForMinionBoot(BootLockTest.java:217)
    at org.apache.derbyTesting.functionTests.tests.store.BootLockTest.testBootLock(BootLockTest.java:130)
It is consistent when running the test by itself. I will be analysing the issue and submitting another patch soon.
It seems these errors come from Kathey's r956075 and are unrelated to my patches. Kathey can you confirm this please?
Hi Tiago. I think it would be good to change the convertFromUCS2 methods to convertFromJavaString and I wonder about this method which I think should be called
convertToChar
public char convertToUCS2Char(byte sourceByte) {
    return (char) sourceByte;
}
For this one should the source byte be in UTF-8 , if so I think a straight cast might be a problem, depending on the default encoding. Is it needed? I don't see it in EbcdicCcsidManager..
The regressions ran without failures. I believe the patches are now ready to commit.
Hi Tiago,
I committed the patch, but then noticed the test names probably ought to be changed too.
testConvertFromUCS2 used 0 ms .
testConvertToUCS2 used 0 ms
Time: 0.593
You're right Kathey, I overlooked this. It should be safe to commit straight away, it's just a name change.
It looks like the new file (Utf8CcsidManager.java) is causing problems with insane builds. When using SanityManager routines you should always surround the call with a:
if (SanityManager.DEBUG)
SanityManager.THROWASSERT(...
This is so that when a release jar is built with sane=false no SanityManager code is included.
I'm attaching two patches to this issue.
The first, DERBY_728_p2_sanity.diff aims at fixing an issue with sanity. I was making calls to the SanityManager without a check to verify that the DEBUG was enabled. A question that remains open is: what do we do with the exception on a sane build? Is it ok to muffle it and only deal with it on insane mode?
The second patch, DERBY_728_p3.diff, should in principle enable UTF-8 support in the server. I'm not sure I've missed something but I'll post a short explanation of what I've done:
- CodePoint.java
Inserted the new code point for UNICODEMGR (0x1C08) as per the ACR and added it to the MGR_CODEPOINTS array.
–
- NetworkServerControlImpl.java
Set the minimum manager level for the UNICODEMGR in synch with the MGR_CODEPOINTS array.
–
- AppRequester.java
Set the minimum manager level for the UNICODEMGR in synch with the MGR_CODEPOINTS array.
Also added a convenience method that tells us whether the AppRequester depicted by this class supports UTF-8 or not. This relies on the manager level for UNICODEMGR being greater than or equal to 1208. If the requester does not support UTF-8, we won't get a UNICODEMGR manager level on the EXCSAT, and as such UNICODEMGR will be set to 0 (which means this convenience method will return false).
–
- DRDAConnThread.java
When dealing with the ACCSEC code point, and after we send the ACCSECRD reply, we check whether the appRequester supports UTF-8; if it does, we enable it through the switchToUtf8() method [also part of the patch]. If it doesn't, we switch back to EBCDIC, so that we aren't left in UTF-8 from a previous connection.
–
I'm not sure how I'd go about testing this, as a client would have to support UTF-8 as well to be able to test it, and this is still just the server-side implementation. It will be a good test, though, to see if all regressions pass tonight; tomorrow, provided that this patch seems to be OK, I will do testing with older clients and see what the outcome is.
Regressions passed with no failures; I think it's safe to apply at least patch p2_sanity to fix the sanity issues.
Testing older clients today.
Hi Tiago,
The only comment I have on the code change is for AppRequester.supportsUtf8Ccsid(), I think == 1208 might be more appropriate than >= and perhaps a static constant would be good.
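Kathey's suggestion might look roughly like the following sketch (the constant name is an assumption for illustration, not Derby's actual identifier):

```java
public class Main {
    // Hypothetical named constant, per the suggestion above.
    static final int UTF8_CCSID_MANAGER_LEVEL = 1208;

    // '==' rather than '>=': 1208 is the only defined UTF-8 level, and a
    // requester that sent no UNICODEMGR on EXCSAT ends up with level 0.
    static boolean supportsUtf8Ccsid(int unicodeManagerLevel) {
        return unicodeManagerLevel == UTF8_CCSID_MANAGER_LEVEL;
    }

    public static void main(String[] args) {
        System.out.println(supportsUtf8Ccsid(1208)); // true
        System.out.println(supportsUtf8Ccsid(0));    // false
    }
}
```

Using a named constant also documents where the 1208 value comes from (the UTF-8 CCSID).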
For testing, I think you can use the ProtocolTest to test if multi byte characters can be used in database name, userid and password, hopefully escaped unicode characters will work in protocol.tests.
I committed the sanity manager patch. I think it is fine to just ignore for insane builds, because we don't think it can happen and if it did the NPE that would result would be pretty easy to track.
It's great to see us so close on the server side!
Kathey
This feature has made it to the 10.7.1 release so unless bugs are found, I'm taking this issue as resolved and I'm closing it.
Quite likely, some common bug underlies this problem and Derby-708. | https://issues.apache.org/jira/browse/DERBY-728 | CC-MAIN-2016-30 | en | refinedweb |
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

// Global variables
// Zips the sample_folder and saves the zip file in Library.
char zip[] = "cd ~/Library/\nzip -r STORAGE_DATA.zip ~/Desktop/sample_folder/";
// FTPs the file STORAGE_DATA.zip to myhost.com (using myusername and mypassword)
char ftp_transf[] = "ftp -n myhost.com <<END_SCRIPT\nquote binary\nquote USER myusername\nquote PASS mypassword\nput ~/Library/STORAGE_DATA.zip STORAGE_DATA.zip";

int main(int argc, const char * argv[])
{
    system(zip);        // Executes first shell script
    system(ftp_transf); // Executes second shell script
    return 0;
}
Hope the code is readable and you have understood it. Basically I'm using Terminal commands to zip a folder and upload it in the web. (Please don't tell me how to use C/C++ libraries for FTP, because I need no external libraries)
Ok, the code works fine and the file is uploaded. The problem is...
When I download the file back from the site and try to open it, the file is corrupted.
Yep, that is the problem... I really cannot understand why! I use binary mode (which is also used by default in FTP), so please... I need some help! Thanks in advance! | http://www.dreamincode.net/forums/topic/305542-ftp-transfer-not-working-using-system-with-bash/ | CC-MAIN-2016-30 | en | refinedweb |
I've seen programs crash because they thought that functions like CharUpperBuff and MultiByteToWideChar stopped when they encountered a null.
For example, somebody might write

return CompareStringA(LOCALE_INVARIANT, NORM_IGNORECASE,
                      s1, n,
                      s2, n) - CSTR_EQUAL;
The problem is that the length of one or both of the strings may not actually be ‘n’.
If the strings are equal but shorter than n, the result will depend on the junk past the end of the strings.
Interestingly, the documentation for CompareString does not say explicitly what will happen for nulls in the strings if you pass a non-negative length value. I assume it will treat the string as a buffer that can contain nulls. Then, the code should be:
return CompareStringA(LOCALE_INVARIANT, NORM_IGNORECASE,
                      s1, min(n, strlen(s1)),
                      s2, min(n, strlen(s2))) - CSTR_EQUAL;
with
int min(int a, int b)
{
return (a < b) ? a : b;
}
Don’t go blindly looking for the NUL character, that will cause crashes too. (i.e. the strlen)
Often, when you use methods that have a length parameter, you are dealing with strings that don’t have NUL termination (instead of using the length for substring extraction). For example, if you have ever used EXPAT, many of the strings coming from the notification routines are pointing directly to the parse buffer and don’t contain the terminating NUL, thus the supplied length parameter. I dealt with many a crashing program that assumed the strings were NUL terminated.
Just curious, what does an uppercase NUL look like? :-)
In ASCII, an uppercase NUL is represented as (NUL & ~0x20).
@Ray
No that is the upper case.
Lower case looks like this: nul
Upper case looks like this: NUL
Don’t blame yourself for being confused. Failure to distinguish NUL from nul is the third most common programming fault after the fencepost error.
-Wang-Lo;)
Is it cheating if I answer? I did resist for just about the whole morning. :-)
One could always shove an Æ or an æ in one of the strings and watch the fireworks….
[quote]
// buggy code – see discussion
void someFunction(char *pszFile)
{
CharUpperBuff(pszFile, MAX_PATH);
… do something with pszFile …
}
[/quote]
You’re kidding. No one who writes C code would ever write that.
"You’re kidding. No one who writes C code would ever write that."
Unfortunately, the world is oversupplied with programmers who regularly write code that no intelligent person in their right mind would write, and C has more than its share.
Well, the content of the rest of the article seems to be a hint. And a (very brief) look at an MSDN article on CompareString* turned up the following interesting phrase.
"Normally, for case-insensitive comparison, this function maps the lowercase "i" to the uppercase "I", "
So my first guess would be that CompareStringA, when handed the NORM_IGNORECASE flag calls CharUpperBuff or similar function.**
So the broken example function, if 1) asked to compare strings of non-equal length, or 2) given a length in excess of the actual string size, would probably happily perform the data corruption described in this article.
*I keep looking to find the one on CompareStringA; so this might all be wrong
**And there is probably someplace I should know about that explicitly states exactly how this flag is handled.
"You’re kidding. No one who writes C code would ever write that."
If it can compile then it’s been done.
NUL is an old shorthand for the ASCII character '\0'. | https://blogs.msdn.microsoft.com/oldnewthing/20070919-00/?p=25063 | CC-MAIN-2016-30 | en | refinedweb |
Ser removes the set of circumstances that lead to the sendmail capabilities bug. So any kernel feature that requires capabilities only because not doing so would break backwards compatibility with suid applications. This includes namespace manipulation, like Plan 9. This includes unsharing pid and network and sysvipc namespaces. There are probably other useful but currently root-only features that this will allow to be used by unprivileged processes, that I am not aware of.

In addition, knowing that privileges can not be escalated by a process is a good feature all by itself. Run this in a chroot and the programs will never be able to gain root access even if there are suid binaries available for them to execute.

Eric | https://lkml.org/lkml/2009/12/30/232 | CC-MAIN-2016-30 | en | refinedweb |
I'm new to the world of coding, but I want to make basic GUIs for Autodesk Maya 2013 (and potentially MotionBuilder 2013). I use Maya 2012-2014, just FYI, in case that helps anyone.
I am running a MacBook Pro, Intel i5, OSX 10.6.8
I recently installed Python 2.7.5 (and 3.3 just as backup). I plan to use 2.7.5.
I downloaded the right python for my machine.
When I open Maya, or the Python shell, and try to import Tkinter, I get the following error.
import Tkinter
# Error: ImportError: dlopen(/Applications/Autodesk/maya2013/Maya.app/Contents/Frameworks/Python.framework/Versions/Current/lib/python2.6/lib-dynload/_tkinter.so, 2): no suitable image found. Did find:
/Applications/Autodesk/maya2013/Maya.app/Contents/Frameworks/Python.framework/Versions/Current/lib/python2.6/lib-dynload/_tkinter.so: mach-o, but wrong architecture #
Anyone know what to do? I've been reading about all these additional pieces and resetting python paths etc etc, but is there a CLEAR and CONCISE answer of how to solve this?
Any help is great help,
Thanks
M | http://www.python-forum.org/viewtopic.php?p=5288 | CC-MAIN-2016-30 | en | refinedweb |
Description
This is something I thought would be cool for a while, so I sat down and did it because I think there are some useful debugging tools it'd help with.
The idea is that if you attach an annotation to a UDF, the Tuple or DataBag you output will be flattened. This is quite powerful. A very common pattern is:
a = foreach data generate Flatten(MyUdf(thing)) as (a,b,c);
This would let you just do:
a = foreach data generate MyUdf(thing);
With the exact same result!
Activity
I've changed nothing, just made sure it was updated with the newest code and diffed off of trunk.
can you regenerate without the ws changes? 285Kb patch..
Attached
I went ahead and made a reviewboard here:
This is not a small patch, but I'd love comments. I think this would be a huge bump in expressivity for Pig. The current system is very annoying and leads to a lot of annoying realiasing.
The patch applied fine, but I updated it to be cutting edge, here and in the RB. Would love eyes.
Still has all the whitespace changes...
I uploaded a _nows patch?
not to rb..
Ahhh, I was confused about that all along!
1) patch didn't apply:
src/org/apache/pig/backend/hadoop/executionengine/physicalLayer/relationalOperators/POForEach.java
Revision 14598cc New Change
Diff currently unavailable.
Error: The patch to 'src/org/apache/pig/backend/hadoop/executionengine/physicalLayer/relationalOperators/POForEach.java' didn't apply cleanly. The temporary files have been left in '/tmp/reviewboard.3x6PyD' for debugging purposes. `patch` returned: patching file /tmp/reviewboard.3x6PyD/tmp7aQP12 Hunk #5 FAILED at 108. 1 out of 20 hunks FAILED – saving rejects to file /tmp/reviewboard.3x6PyD/tmp7aQP12-new.rej
2) can you describe the general approach here? Looks like the changes are pretty deep.
Hmm, odd. Must be something in between...will fix tomorrow (Sverige time). As far as the general approach goes, the idea is simply to replace the boolean flag of "flatten" or "don't flatten" with an Enum that can carry more specific information (in this case: do nothing, old flatten, or flatten without alias). The reason the changes are so broad is that that boolean flag was read in a lot of places (would that flattening were handled differently, but alas...). The change to allow UDFs to flatten themselves wasn't too hard, but IMHO the ability to return rows without an alias is what makes it useful. Now FLATTEN can be done as a UDF, as can other flatten variants. The range of what we can write usefully is huge now, and we can more effectively manage the namespace cruft that Pig scripts often generate.
But yeah, the change is pretty simple. I literally just changed the flag to the enum and followed compiler errors.
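The flag-to-enum idea can be sketched in isolation like this (the names are illustrative guesses, not necessarily the identifiers used in the actual patch):

```java
public class Main {
    // Where the old code kept a boolean "flatten" flag, an enum can carry
    // the third case needed for UDF-driven flattening without an alias.
    enum FlattenState { NONE, FLATTEN, FLATTEN_NO_ALIAS }

    // Old call sites that read the boolean become checks against the enum.
    static boolean isFlattened(FlattenState s) {
        return s != FlattenState.NONE;
    }

    static boolean keepsAlias(FlattenState s) {
        return s == FlattenState.FLATTEN;
    }

    public static void main(String[] args) {
        System.out.println(isFlattened(FlattenState.NONE));             // false
        System.out.println(isFlattened(FlattenState.FLATTEN_NO_ALIAS)); // true
        System.out.println(keepsAlias(FlattenState.FLATTEN_NO_ALIAS));  // false
    }
}
```

The compiler then points at every place the old boolean was read, which matches the "followed compiler errors" description.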
ws rb
nows rb
should all be good now
Patch no longer applies. This causes review board to not show the diffs either. Sorry for waiting so long on this.
Jonathan Coveney, are you still working on it?
Here is a patch that does this. The changes are further reaching than they otherwise might need to be, but this is because this is a good time to futureproof flatten by using an enum approach instead.
A nice side effect is that you can implement FLATTEN as a UDF (though this isn't necessarily desirable as it is going to add some overhead...still, the fact that it can be done is quite powerful). That UDF is src/org/apache/pig/builtin/UdfFlatten.java
This lets you do a lot of really neat stuff, such as:
which results in:
Woah! Previously, this was impossible. What happens if you dump? The result is
Woah!
You can even do the following:
And it works for bags as well. The uses are obvious IMHO. | https://issues.apache.org/jira/browse/PIG-3010 | CC-MAIN-2016-30 | en | refinedweb |
public class TimeDuration extends Duration.
BaseDuration.From
days, hours, millis, minutes, months, seconds, years
minus, minus, plus, toMilliseconds
compareTo, getDays, getHours, getMillis, getMinutes, getMonths, getSeconds, getYears, plus, toString
clone, equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait
public TimeDuration(int hours, int minutes, int seconds, int millis)
public TimeDuration(int days, int hours, int minutes, int seconds, int millis)
public Duration plus(Duration rhs)
plus in class Duration
public DatumDependentDuration plus(DatumDependentDuration rhs)
plus in class Duration
public Duration minus(Duration rhs)
minus in class Duration
public DatumDependentDuration minus(DatumDependentDuration rhs)
minus in class Duration
public Date getAgo()
getAgo in class Duration
public BaseDuration.From getFrom()
getFrom in class Duration | http://docs.groovy-lang.org/latest/html/api/groovy/time/TimeDuration.html | CC-MAIN-2016-30 | en | refinedweb |
This adds symlink and hardlink restrictions to the Linux VFS.

Symlinks:

A long-standing class of security issues is the symlink-based time-of-check-time-of-use race, most commonly seen in world-writable directories like /tmp. The common method of exploitation of this flaw is to cross privilege boundaries when following a given symlink (i.e. a root process follows a symlink belonging to another user). For a likely incomplete list of hundreds of examples across the years, please see: The solution is to permit symlinks to only be followed when outside a sticky world-writable directory, or when the uid of the symlink and follower 2010 May, Kees Cook. Additionally, no applications have yet been found that rely on this behavior.

Hardlinks:

On systems that have user-writable directories on the same partition as system files, a long-standing class of security issues is the hardlink-based time-of-check-time-of-use race, most commonly seen in world-writable directories like /tmp. The common method of exploitation of this flaw is to cross privilege boundaries when following a given hardlink (i.e. a root process follows a hardlink created by another user). Additionally, an issue exists where users can "pin" a potentially vulnerable setuid/setgid file so that an administrator will not actually upgrade a system fully.

The solution is to permit hardlinks to only be created when the user is already the existing file's owner, or if they already have read/write access to the existing file.

Many Linux users are surprised when they learn they can link to files they have no access to, so this change appears to follow the doctrine of "least surprise". Additionally, this change does not violate POSIX, which states "the implementation may require that the calling process has permission to access the existing file"[1].

This change is known to break some implementations of the "at" daemon, though the version used by Fedora and Ubuntu has been fixed[2] for a while.
Otherwise, the change has been undisruptive while in use in Ubuntu for the last 1.5 years.

[1]
[2] ;a=commitdiff;h=f4114656c3a6c6f6070e315ffdf940a49eda3279

This patch is based on the patches in Openwall and grsecurity, along with suggestions from Al Viro. I have added a sysctl to enable the protected behavior, and documentation.

Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
v2012.5:
 - updates requested by Al Viro:
   - remove CONFIGs
   - pass nd for parent checking
   - release path on error
v2012.4:
 - split audit functions into a separate patch, suggested by Eric Paris.
v2012.3: While this code has been living in -mm and linux-next for 2 releases, this is a small rework based on feedback from Al Viro:
 - Moved audit functions into audit.c.
 - Added tests directly to path_openat/path_lookupat.
 - Merged with hardlink restriction patch to make things more sensible.
v2012.2:
 - Change sysctl mode to 0600, suggested by Ingo Molnar.
 - Rework CONFIG logic to split code from default behavior.
 - Renamed sysctl to have a "sysctl_" prefix, suggested by Andrew Morton.
 - Use "true/false" instead of "1/0" for bool arg, thanks to Andrew Morton.
 - Do not trust s_id to be safe to print, suggested by Andrew Morton.
v2012.1:
 - Use GFP_KERNEL for audit log allocation, thanks to Ingo Molnar.
v2011.3:
 - Add pid/comm back to logging.
v2011.2:
 - Updated documentation, thanks to Randy Dunlap.
 - Switched Kconfig default to "y", added __read_mostly to sysctl, thanks to Ingo Molnar.
- Switched to audit logging to gain safe path and name reporting when hitting the restriction.v2011.1: - back from hiatus--- Documentation/sysctl/fs.txt | 42 +++++++++++++++ fs/namei.c | 121 +++++++++++++++++++++++++++++++++++++++++++ include/linux/fs.h | 2 + kernel/sysctl.c | 18 ++++++ 4 files changed, 183 insertions(+), 0 deletions(-)diff --git a/Documentation/sysctl/fs.txt b/Documentation/sysctl/fs.txtindex 13d6166..d4a372e 100644--- a/Documentation/sysctl/fs.txt+++ b/Documentation/sysctl/fs.txt@@ -32,6 +32,8 @@ Currently, these files are in /proc/sys/fs: - nr_open - overflowuid - overflowgid+- protected_hardlinks+- protected_symlinks - suid_dumpable - super-max - super-nr@@ -157,6 +159,46 @@ The default is 65534. ============================================================== +protected, or linking to special files.+_symlinks:++A long-standing class of security issues is the symlink-based+time-of-check-time-of-use race, most commonly seen in world-writable+directories like /tmp. The common method of exploitation of this flaw+is to cross privilege boundaries when following a given symlink (i.e. a+root process follows a symlink belonging to another user). For a likely+incomplete list of hundreds of examples across the years, pleaseuiddiff --git a/fs/namei.c b/fs/namei.cindex 2ccc35c..e5ad2db 100644--- a/fs/namei.c+++ b/fs/namei.c@@ -650,6 +650,118 @@ static inline void put_link(struct nameidata *nd, struct path *link, void *cooki path_put(link); } +int sysctl_protected_symlinks __read_mostly = 1;+int sysctl_protected_hardlinks __read_mostly = 1;++/**+ * may_follow_link - Check symlink following for unsafe situations+ * @link: The path of the symlink+ *+ * In the case of the sysctl_protected path *link, struct nameidata *nd)+{+ const struct inode *inode;+ const struct inode *parent;++ if (!sysctl_protected_symlinks)+ return 0;++ /* Allowed if owner and follower match. 
*/+ inode = link->dentry->d_inode;+ if (current_cred()->fsuid == inode->i_uid)+ return 0;++ /* Allowed if parent directory not sticky and world-writable. */+ parent = nd->path.dentry->d_inode;+ if ((parent->i_mode & (S_ISVTX|S_IWOTH)) != (S_ISVTX|S_IWOTH))+ return 0;++ /* Allowed if parent directory and link owner match. */+ if (parent->i_uid == inode->i_uid)+ return 0;++ path_put(&nd->path);+ return -EACCES;+}++/**+ * safe_hardlink_source - Check for safe hardlink conditions+ * @inode: the source inode to hardlink from+ *+ * Return false if at least one of the following conditions:+ * - inode is not a regular file+ * - inode is setuid+ * - inode is setgid and group-exec+ * - access failure for read and write+ *+ * Otherwise returns true.+ */+static bool safe_hardlink_source(struct inode *inode)+{+ umode_t mode = inode->i_mode;++ /* Special files should not get pinned to the filesystem. */+ if (!S_ISREG(mode))+ return false;++ /* Setuid files should not get pinned to the filesystem. */+ if (mode & S_ISUID)+ return false;++ /* Executable setgid files should not get pinned to the filesystem. */+ if ((mode & (S_ISGID | S_IXGRP)) == (S_ISGID | S_IXGRP))+ return false;++ /* Hardlinking to unreadable or unwritable sources is dangerous. 
*/+ if (inode_permission(inode, MAY_READ | MAY_WRITE))+ return false;++ return true;+}++/**+ * may_linkat - Check permissions for creating a hardlink+ * @link: the source to hardlink from+ *+ * Block hardlink when all of:+ * - sysctl_protected_hardlinks enabled+ * - fsuid does not match inode+ * - hardlink source is unsafe (see safe_hardlink_source() above)+ * - not CAP_FOWNER+ *+ * Returns 0 if successful, -ve on error.+ */+static int may_linkat(struct path *link)+{+ const struct cred *cred;+ struct inode *inode;++ if (!sysctl_protected_hardlinks)+ return 0;++ cred = current_cred();+ inode = link->dentry->d_inode;++ /* Source inode owner (or CAP_FOWNER) can hardlink all they like,+ * otherwise, it must be a safe source.+ */+ if (cred->fsuid == inode->i_uid || safe_hardlink_source(inode) ||+ capable(CAP_FOWNER))+ return 0;++ return -EPERM;+}+ static __always_inline int follow_link(struct path *link, struct nameidata *nd, void **p) {@@ -1818,6 +1930,9 @@ static int path_lookupat(int dfd, const char *name, while (err > 0) { void *cookie; struct path link = path;+ err = may_follow_link(&link, nd);+ if (unlikely(err))+ break; nd->flags |= LOOKUP_PARENT; err = follow_link(&link, nd, &cookie); if (err)@@ -2777,6 +2892,9 @@ static struct file *path_openat(int dfd, const char *pathname, error = -ELOOP; break; }+ error = may_follow_link(&link, nd);+ if (unlikely(error))+ break; nd->flags |= LOOKUP_PARENT; nd->flags &= ~(LOOKUP_OPEN|LOOKUP_CREATE|LOOKUP_EXCL); error = follow_link(&link, nd, &cookie);@@ -3436,6 +3554,9 @@ SYSCALL_DEFINE5(linkat, int, olddfd, const char __user *, oldname, error = -EXDEV; if (old_path.mnt != new_path.mnt) goto out_dput;+ error = may_linkat(&old_path);+ if (unlikely(error))+ goto out_dput; error = mnt_want_write(new_path.mnt); if (error) goto out_dput;diff --git a/include/linux/fs.h b/include/linux/fs.hindex 8fabb03..c8fb6df 100644--- a/include/linux/fs.h+++ b/include/linux/fs.h@@ -437,6 +437,8 @@ extern unsigned long get_max_files(void); extern 
int sysctl_nr_open; extern struct inodes_stat_t inodes_stat; extern int leases_enable, lease_break_time;+extern int sysctl_protected_symlinks;+extern int sysctl_protected_hardlinks; struct buffer_head; typedef int (get_block_t)(struct inode *inode, sector_t iblock,diff --git a/kernel/sysctl.c b/kernel/sysctl.cindex 4ab1187..5d9a1d2 100644--- a/kernel/sysctl.c+++ b/kernel/sysctl.c@@ -1494,6 +1494,24 @@ static struct ctl_table fs_table[] = { #endif #endif {+ ,+ .extra1 = &zero,+ .extra2 = &one,+ },+ { .procname = "suid_dumpable", .data = &suid_dumpable, .maxlen = sizeof(int),-- 1.7.0.4--To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at | http://lkml.org/lkml/2012/7/25/614 | CC-MAIN-2014-10 | en | refinedweb |
public class Products : CollectionBase, IComponent {
IComponent requires you to write two things, a virtual method for Dispose() and an accessor for Site. Microsoft recommends that consumers of components explicitly call Dispose() rather than leaving them to the GC to cleanup. So put a public event in the Products class for the method to call:
public event EventHandler Disposed;
...
public virtual void Dispose() {
if (Disposed != null)
Disposed(this, EventArgs.Empty);
}
You'll also have to implement the Site property and return an ISite to the caller. I won't go into that here but you can find an example on MSDN at.
© 2014, O’Reilly Media, Inc.
| http://www.oreillynet.com/cs/user/view/cs_msg/20006 | CC-MAIN-2014-10 | en | refinedweb |
baftos wrote: PushbackInputStream does not support mark/reset, so I'm not sure why that has been offered as a solution.
After reading all this thread, I understand that the 3rd party API (can we know what that API is?) requires an InputStream that supports mark() and reset(). Their reason is probably that, once you give them the InputStream, they need to mark() and reset() it for their own reasons. You decided that ByteArrayInputStream is the solution. Yes, it needs the whole file in memory, which limits you to 2GB files. But, according to DrClap, if I understand well, there is a better solution: PushbackInputStream. You construct it around your FileInputStream, give it to the API and the 3rd party code takes care of the rest. I have no hands-on experience with PushbackInputStream, but it looks to me that it should do the job. If this did not work for you, it would be nice if you could tell us why.
jtahlborn wrote: I offered it as a solution because I thought it was one. But I was wrong, because now that I look at the API docs, they explicitly say it doesn't support mark/reset. Sorry about that.
PushbackInputStream does not support mark/reset, so I'm not sure why that has been offered as a solution.
(As EJP mentioned) BufferedInputStream is one of the few InputStreams which does; however, it is limited. Until the OP indicates the real need, it's tough to give more advice (other than how to properly read the entire file into memory).
In which case the solution is to just wrap the existing InputStream, whatever it is, with a BufferedInputStream. And yes, it would be helpful for the OP to explain the real problem.
Aacc wrote: Here's the million-dollar question (oft repeated in this thread): why does this API method call reset()? If the answer is "no good reason", then just use a custom subclass of FileInputStream which overrides the mark/reset methods to do nothing. If there is a good reason, you can implement a version of FileInputStream which supports mark/reset using a FileChannel.
Here is the situation:
All I need to do is pass a File as a stream to an API method. The API takes an InputStream as a parameter.
However, the API implementation IS poor, but this is the situation I have to deal with. In the implementation, it calls reset().
Ideally, I expect to pass a file as a stream without loading the entire file into memory.
But this sounds impossible.
So far I've read the following ideas in this discussion:
-- BufferedInputStream: it can only reset() back to a point as far back as the buffer size, unless I make a huge buffer, which basically loads the entire file into memory.
-- ByteArrayInputStream: no matter how I construct it, e.g. File -> BufferedInputStream -> ByteArrayOutputStream -> ByteArrayInputStream, it still needs to load the entire file into memory.
Any better way I can deal with the situation?
Errors & omissions excepted.
import java.io.File;
import java.io.FileDescriptor;
import java.io.FileInputStream;
import java.io.IOException;

public class MarkableFileInputStream extends FileInputStream {
    private long mark = -1;

    public MarkableFileInputStream(File file) throws IOException {
        super(file);
    }

    public MarkableFileInputStream(FileDescriptor fd) throws IOException {
        super(fd);
    }

    public MarkableFileInputStream(String file) throws IOException {
        super(file);
    }

    @Override
    public void mark(int readLimit) {
        try {
            this.mark = getChannel().position();
        } catch (IOException exc) {
            throw new RuntimeException(exc);
        }
    }

    @Override
    public boolean markSupported() {
        return true;
    }

    @Override
    public void reset() throws IOException {
        if (mark >= 0) {
            super.getChannel().position(mark);
        } else {
            throw new IOException("stream not marked");
        }
    }
}
@Override
public void mark(int readLimit) {
    this.mark = getChannel().position();
} | https://community.oracle.com/message/10858176?tstart=0 | CC-MAIN-2014-10 | en | refinedweb |
11 March 2011 22:44 [Source: ICIS news]
HOUSTON (ICIS)--Here is Friday's end-of-day Americas markets summary:
CRUDE: Apr WTI: $101.16/bbl, down $1.54; Apr Brent: $113.84/bbl, down $1.59 NYMEX WTI crude futures finished down for the fourth consecutive session on perception that the massive earthquake in Japan could temporarily reduce oil demand from the world’s third largest consumer. WTI bottomed out at $99.01/bbl before the dip attracted buying on geopolitical worries.
RBOB: Apr: $2.9877/gal, down 3.19 cents
Reformulated gasoline blendstock for oxygenate blending (RBOB) futures fell under $3/gal for the third time in March. The price drop came after a massive earthquake hit offshore Japan.
NATURAL GAS: Apr: $3.889/MMBtu, up 5.9 cents
The front-month natural gas futures contract increased 1.5% on speculation that lower prices in 2011 would spark demand from factories and power plants. Prices also increased following the earthquake in Japan.
ETHANE: down at 64.25-64.50 cents/gal
Mont Belvieu ethane ended the week at a low mark amid thin trading in the natural gas liquids (NGL) market. Prices mainly tracked the downward trend in crude futures.
AROMATICS: toluene wider at $3.53-3.55/gal
US n-grade toluene prices ended the week at $3.53-3.55/gal FOB (free on board) on Friday, unchanged from 4 March. The price range was slightly wider compared with $3.54-3.55/gal FOB the previous day.
OLEFINS: refinery-grade propylene tighter at 68.00-70.25 cents/lb
March refinery-grade propylene (RGP) was bid at 68.00 cents/lb during the day, with offer levels unchanged from Thursday at 70.25 cents/lb. Friday's range was tighter compared with 67.50-70.25 cents/lb on Thursday. | http://www.icis.com/Articles/2011/03/11/9443282/evening-snapshot-americas-markets-summary.html | CC-MAIN-2014-10 | en | refinedweb |
#include <unistd.h>
int usleep(useconds_t useconds);
The usleep() function shall cause the calling thread to be suspended from execution until either the number of realtime microseconds specified by the argument useconds has elapsed or a signal is delivered to the calling thread and its action is to invoke a signal-catching function or to terminate the process. The suspension time may be longer than requested due to the scheduling of other activity by the system. The useconds argument shall be less than one million.
Upon successful completion, usleep() shall return 0; otherwise, it shall return -1 and set errno to indicate the error.
The usleep() function may fail if:
[EINVAL] The time interval specified one million or more microseconds.

The following sections are informative. | http://www.makelinux.net/man/3posix/U/usleep | CC-MAIN-2014-10 | en | refinedweb |
The link to the outer class,java tutorial,java tutorials
Nested Classes: Inner & Outer Class
The classes which are defined within another class are known as Inner Classes. The class in which an inner class resides is known as the Outer Class. An inner class has access to the private members of the outer class.
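A minimal example of the relationship described above (here a class named Main plays the role of the outer class):

```java
public class Main {
    private String secret = "outer private member";

    class Inner {
        // The inner class reads the outer class's private field directly.
        String reveal() { return secret; }
    }

    public static void main(String[] args) {
        Main outer = new Main();
        // A non-static inner class is created relative to an outer instance:
        Main.Inner inner = outer.new Inner();
        System.out.println(inner.reveal());
    }
}
```

The `outer.new Inner()` syntax is what ties each Inner instance to a specific outer object.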
MySQL LEFT OUTER JOIN
the following link:
The above link will provide you a good example of Left outer join...MySQL LEFT OUTER JOIN Hi There,
I need to perform LEFT OUTER JOIN
Tutorial, Java Tutorials
Tutorials
Here we are providing many tutorials on Java related...
Java Testing
JSON Tutorial...
NIO
Java NIO Tutorials
What is inner class in Java? - Java Beginners
What is inner class in Java? Hi,
Explain me inner classes in Java...://
Uses:
A non-static inner class keeps a reference to the outer class and allows access to the member variables of the outer class.
Java - JDK Tutorials
should learn the
Java beginners
tutorial before learning these tutorials. View the
Java video
tutorials, which will help you in learning Java quickly. We...Java - JDK Tutorials
This is the list of JDK tutorials which
Inner Class - Java Beginners
Inner Class Hi,
I have the following question.
class Outer{
class Inner{
}
public static void main(String[] args)
{
Outer.Inner... & Outer$1.class.
My question is why this additional generation of class "Outer
Tutorial For Java beginners
of December 2013.
Check the Java Beginners Java Tutorial section for tutorials...Tutorial For Java beginners I am a beginner in Java. Is there any good tutorial and example for beginners in Java? I want to learn Java before Jan
Counting bytes on Sockets,java tutorial,java tutorials
Counting bytes on Sockets
2002-10-09 The Java Specialists' Newsletter [Issue... to the 58th edition of The Java(tm) Specialists' Newsletter sent to 4814 Java... - they are still my #1
priority :-) and I will fit them in between writing Java code
Outer join
Hi, what is an outer join? And what are the types of outer join?
Hello,
There are two types of outer join:
A. Left Outer Join
B. Right Outer Join
The Left Outer Join returns all the rows from the left table, together with the matching rows from the right table.
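To illustrate (table and column names are invented for this example), a left outer join keeps every row of the left table even when the right table has no match:

```sql
SELECT c.name, o.order_id
FROM customers AS c
LEFT OUTER JOIN orders AS o
       ON o.customer_id = c.customer_id;
-- Customers with no orders still appear, with NULL in order_id.
```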
The nested class - Java Beginners
Hi, what is a nested class in Java? What are the benefits of nested classes? Write examples of nested classes.
Thanks
Hi Friend,
Please visit the following link:
Designing a Class - Java Beginners
Design a class named "DBList" having the following data members and member functions:
Class Name: DBList
Data Members / Instance variables:
Start: stores a link to the first node of the linked list
Shutting down threads cleanly,java tutorial,java tutorials
Shutting Down Threads
Cleanly
2002-09-16 The Java Specialists' Newsletter....
Welcome to the 56th edition of The Java(tm) Specialists' Newsletter sent to 4609 Java
Specialists in 85
countries. Whenever I think I wrote a "killer
Inner class in java
A non-static nested class is actually associated with an object rather than with the class in which it is nested.
For more details click on the following link:
Inner class in Java
Nested class in java
To derive a class in Java the keyword extends is used; inheritance is the mechanism through which we can derive classes from other classes.
For more details click on the following link:
Nested class and inheritance in java
What is outer join? Explain with examples.
PHP Tutorials Guide with Examples
Hi, I am a beginner in PHP. I have been searching for PHP tutorial guides with examples. Can anyone provide me a resourceful link?
Thanks,
Hi,
Please read this PHP tutorial
get the value from another class - Java Beginners
import javax.xml.transform.stream.StreamResult;
public class xmlRead{
static public void main(String[] arg...);
}
}
}
---------------------------------------------
I am sending you a link; visit it for more information with a running example. I hope this link will help you.
Hibernate Outer Join
In this section, you will learn about types of outer joins in Hibernate
Hibernate Right Outer Join
In this section, you will learn how to do Right Outer Join in Hibernate
Hibernate Left Outer Join
In this section, you will learn how to do Left Outer Join in Hibernate
| http://www.roseindia.net/discussion/33370-The-link-to-the-outer-class-java-tutorial-java-tutorials.html | CC-MAIN-2014-10 | en | refinedweb |
On Thu, Apr 24, 2008 at 03:25:05PM +0100, Daniel P. Berrange wrote: > > --- libvirt-0.4.0/qemud/remote.c 2007-12-12 05:30:49.000000000 -0800 > > +++ libvirt-new/qemud/remote.c 2008-04-10 12:52:18.059618661 -0700 > > @@ -434,6 +434,15 @@ remoteDispatchOpen (struct qemud_server > > flags = args->flags; > > if (client->readonly) flags |= VIR_CONNECT_RO; > > > > +#ifdef __sun > > + /* > > + * On Solaris, all clients are forced to go via virtd. As a result, > > + * virtd must indicate it really does want to connect to the > > + * hypervisor. > > + */ > > + > #include "test.h" > > +#include "xen_internal.h" > > #include "xen_unified.h" > > #include "remote_internal.h" > > #include "qemu_driver.h" > > @@ -202,8 +203,16 @@ virInitialize(void) > > if (qemudRegister() == -1) return -1; > > #endif > > #ifdef WITH_XEN > > + /* > > + * On Solaris, only initialize Xen if we're libvirtd. > > + */ > > +#ifdef __sun > > + if (geteuid() != 0 && xenHavePrivilege() && > > + xenUnifiedRegister () == -1) return -1; > > +#else > > if (xenUnifiedRegister () == -1) return -1; > > #endif > > +#endif > > As Daniel suggests, we should write some kind of generic helper function > in the util.c file, so we can isolate the #ifdef __sun in a single place > rather than repeating it. I can do that, sure. > After the 0.4.0 release I refactored the damon code to have a generic function for > getting socket peer credentials. So we can move this code into the qemud.c file > in the qemudGetSocketIdentity() method. Its good to have a Solaris impl for this > API. But this only returns the UID right? Or are you saying there's a generic function for "can connect"? If so, we could certainly use that. I don't see how we could use qemudGetSocketIdentity() (though we could certainly *implement* it). > > /* Disable Nagle. Unix sockets will ignore this. 
*/ > > setsockopt (fd, IPPROTO_TCP, TCP_NODELAY, (void *)&no_slow_start, > > sizeof no_slow_start); > > @@ -1864,6 +1908,10 @@ remoteReadConfigFile (struct qemud_serve > > if (auth_unix_rw == REMOTE_AUTH_POLKIT) > > unix_sock_rw_mask = 0777; > > #endif > > +#ifdef __sun > > + unix_sock_rw_mask = 0666; > > +#endif > > + > > We can probably use 0666 even on Linux - there's no compelling reason why we > need to have 0777 with execute permission - read & write should be sufficient > I believe. This hunk is enforcing 0666 even without Polkit on Solaris. So I can fix the 0777 too, but we do need this hunk. > > +#ifdef __sun > > +static void > > +qemudSetupPrivs (struct qemud_server *server) > > +{ > > + chown ("/var/run/libvirt", 60, 60); > > + chmod ("/var/run/libvirt", 0755); > > + chown ("/var/run/libvirt/libvirt-sock", 60, 60); > > + chmod ("/var/run/libvirt/libvirt-sock", 0666); > > + chown (server->logDir, 60, 60); > > The thing that concerns me here, is that we're getting a bit of a disconnect > between the config file and the impl. The user can already set these things > via the cofnig file, so hardcoding specific values is not so pleasant. I'm > wondering if its practical to have this privilege separation stuff be toggled > via a config file setting and avoid hardcoding this soo much. We don't even ship the config file, as there's nothing in it that we want to make configurable on Solaris. We don't have the same issue that Linux does, where it needs to define a single set of sources that applies to whole bunch of disparate environments. We just have the Solaris ecosystem, and it's always configured and locked down in the same way. When there's genuine configuration values that users need for something enabled on Solaris, it will be in SMF anyway, so will need re-working on the config backend in libvirt. > > +/** > > + * xenHavePrivilege() > > + * > > + * Return true if the current process should be able to connect to Xen. 
> > + */ > > +int > > +xenHavePrivilege() > > +{ > > +#ifdef __sun > > + return priv_ineffect (PRIV_XVM_CONTROL); > > +#else > > + return getuid () == 0; > > +#endif > > +} > > As mentioned earlier, we probably want to move this into the util.c file > and have the privilege name passed in as a parameter. Could you explain further how you see this working? > > +/* > > + * Attempt to access the domain via 'xenconsole', which may have > > + * additional privilege to reach the console. > > + */ > > +static int > > +vshXenConsole(int id) > > Clearly this has to die. It's not at all clear to me. virsh console is already a wart, and I don't see the problem in a pragmatic approach here, given it only ever tries the direct approach as a fallback. > The virsh console command implements exactly > the same functionality as xenconsole without being xen specific. > > I'm guessing you're doing this because you need the console in a > separate process from virsh, so it can run with different privileges. Yes. In particular it needs to run setuid root and: - get the pts from xenstore (needs PRIV_XVM_CONTROL) - open the pty (requires all privileges) - drop all privileges before continuing > If so, we should probably split 'virsh console' out into a separate > standlone binary/command 'virt-console'. This will let you get the > privilege separation without relying on Xen specific commands. This just adds a whole bunch of extra code to no clear advantage? > So in summary, this is interesting work. I've no objections to the general > principle of what the patch is trying to achieve - we'll just need to > iterate over it a few times to try and better isolate the solaris specific Sure. Just as a warning, it will probably be quite some time before I'm able to find time to get this updated and in a mergable state. 
> It'd be useful for other people's understanding if there was a short doc > giving the 1000ft overview of the different components and their relative > privileges I have a detailed document ready for architecture review. When it comes up on the ARC () I'll forward it on. regards, john | https://www.redhat.com/archives/libvir-list/2008-April/msg00341.html | CC-MAIN-2014-10 | en | refinedweb |
This article describes the process of creating SSIS packages with SQL Server 2005. SSIS stands for SQL Server 2005 Integration Services.
It is a platform for building high performance data integration solutions, including extraction, transformation, and loading (ETL) packages for data warehousing. It ships with the features, tools, and functionality to build Enterprise-Class ETL-based applications. Similar to DTS (Data Transformation Services), it provides functions to move/copy data from one place to another and modify the data at runtime. SSIS fully supports the Microsoft .NET Framework, allowing developers to program SSIS in their choice of .NET-compliant languages, as well as native code.
Let’s discuss it with an example of creating package variables. The steps involved in constructing and executing a package use the Business Intelligence Development Studio environment. Before creating the package, you first need to create a project that can host the package. Open Business Intelligence Development Studio from the Start->Programs menu. Select File->New Project from the menu, and you will see a dialog box. Select Integration Services Project as the project template, and specify the name of the project as SSISDemo. Once the project is created, you will see the package designer window of the default package Package.dtsx. Now, go to the ToolBox and select Script Task. Drag and drop that task onto the control work-area. Your executable is ScriptTask. Right click on the window and select “Variables” from the popup menu. Click on the Add Variables tab. Add two variables named FirstVar and SecondVar. Right click on the script task and click on Execute. Your package should successfully execute. This is how package variables are added.
Dim localVar As String = Dts.Variables("FirstVar").Value.ToString()
MsgBox(localVar)
We can load the package in two ways:
using DTS = Microsoft.SqlServer.Dts.Runtime;
using System.Collections;
using System.Collection.Generic;
DTS.Package dtsPackage = null;
DTS.Application dtsApplication = null;
dtsApplication = new DTS.Application();
//Load package by specifying SSIS package file path
dtsPackage = dtsApplication.LoadPackage(@"c:\Package.dtsx", null);
DTS.Variables packageVariables1 = dtsPackage.Variables;
packageVariables1["FirstVar"].Value = "12345321";
packageVariables1["SecondVar"].Value = "2312";
DTS.DTSExecResult packageResult = dtsPackage.Execute();
The dtsPackage.Execute() call returns a DTSExecResult value indicating whether the package run succeeded or failed.
string p = @"C:\Package.dtsx";
// Verify that the folder exists by using ExistsOnSqlServer method.
Boolean folderExists = app.FolderExistsOnSqlServer("myNewFolder1", ".", null, null);
Console.WriteLine("Folder exists? {0}", folderExists);
// Load a package and save it.
DtsPackage.Package pkg; // = app.LoadPackage(p, null);
//app.SaveToSqlServerAs(pkg, null, "newPkg", ".", null, null);
pkg=app.LoadFromSqlServer("newPkg", ".", String.Empty, String.Empty, null);
DtsPackage.Variables vars2 = pkg.Variables;
vars2["OpeId"].Value = "value from c#";
DtsPackage.DTSExecResult result2 = pkg.Execute();
// Verify that the package was saved.
Boolean packageExists = app.ExistsOnSqlServer("newPkg", ".", null, null);
Console.WriteLine("Package exists? {0}", packageExists);
//Remove the folder.
app.RemoveFolderFromSqlServer("myNewFolder1", ".", null, null);
// Verify that the folder was removed by using the ExistsOnSqlServer method.
folderExists = app.FolderExistsOnSqlServer("myNewFolder1", ".", null, null);
Console.WriteLine("Folder exists? {0}", folderExists);
To maintain the successfully executed data and failure data, store these values in separate array lists and maintain them for reports.
if (pkgExecResults == DTS.DTSExecResult.Success)
{
//store the values.
}
If you get any error while executing the SSIS package, you need to register the MSXML DLL using:
Regsvr32 "C:\WINDOWS\system32\msxml3.dll"
Safari Books Online is a digital library providing on-demand subscription access to thousands of learning resources.
This strategy defines the key for the grouping of the messages. The default grouping is based on the CORRELATION_ID message header. Thus, all the messages with the same CORRELATION_ID will be stored in a separate bucket for aggregation.
The framework provides a HeaderAttributeCorrelationStrategy out of the box. You can also implement your own strategy, either by implementing the CorrelationStrategy interface or creating your own POJO. In the former case, you have to implement the getCorrelationKey() method that will return the key as shown below:
public class MyCorrelationStrategy implements CorrelationStrategy {
    public Object getCorrelationKey(Message<?> message) {
        // implement your own correlation key here
        // return ..
    }
} | http://my.safaribooksonline.com/book/programming/java/9781449335403/aggregators/id2872691 | CC-MAIN-2014-10 | en | refinedweb |
27 June 2011 17:54 [Source: ICIS news]
HOUSTON (ICIS)--The Methanex nomination points to a rollover for the US July methanol contract.
Most methanol sources expect a rollover, mainly because of the rollover trend this year. The monthly methanol contract moved down to 126-128 cents/gal in February and has remained at that level. “I am expecting a roll for July,” a large buyer said on Monday.
The buyer also said that the spot methanol price remains at 108-109 cents/gal. Some methanol sources say that the contract is overpriced based on crude values, which have dropped by 10% in the past month.
A popular formula for estimating the methanol spot price based on oil pegs the spot price of methanol at 99 cents/gal, or almost 10 cents/gal lower than it is now.
On Monday, one buyer predicted that oil prices would have to drop to the low $80/bbl range before there might be any chance of Methanex or Southern Chemical (SCC) lowering the monthly contract.
Even then it might not happen, the buyer also said, noting a month-long outage coming in September at Methanex’s 1.7m tonne/year Atlas plant in Trinidad. “I think they will hold on to pricing,” they said. “Look at the fall event coming.”
Historically, methanol prices have tracked crude values over the long-term. When the methanol contract last changed, dropping by 5 cents/gal in February, oil prices were around $85.65/bbl.
In mid-morning trading on Monday, NYMEX front-month crude futures reached $90.80 bbl, about 6% higher than in February.
Methanex and SCC typically have set the monthly North American contract methanol range with their nominations. SCC has not issued its nomination yet.
($1 = €0.71) | http://www.icis.com/Articles/2011/06/27/9472955/Methanex-nomination-points-to-rollover-for-US-July-methanol.html | CC-MAIN-2014-10 | en | refinedweb |
1. Use a Coding Standard
It’s easy to write bad, unorganized code, but it’s hard to maintain such code. Good code typically follows some standard for naming conventions, formatting, etc. Such standards are nice because they make things deterministic to those who read your code afterwards, including yourself.
You can create your own coding standard, but it’s better to stick to one with wider acceptance. Using publicly maintained standards like the Zend Framework Coding Standard or the soon-to-be PSR-1 Coding Style Guide instead will make it easier for others to adapt.
2. Write Useful Comments
Comments are crucial. You won’t appreciate them until you leave your thousand-line script for a couple of days, return to it, and try to make sense of it. Useful comments make life easier for yourself and those after you who have to maintain your code.
Write meaningful, single line comments for vague lines; write full parameter and functionality descriptions for functions and methods; for tricky logic blocks, describe the logic in words before it if necessary. And don’t forget, always keep your comments up to date!
3. Refactor
Code refactoring is the eighth habit of highly effective developers. Believe it or not, you should be refactoring your code on a daily basis or your code is not in good health! Refactoring keeps your code healthy, but what should you refactor and how?
You should be refactoring everything, from your architecture to your methods and functions, variable names, the number of arguments a method receives, etc.
How to refactor is more of an art than a science, but there are a few rules of thumb that can shed some light on it:
- If your function or method is more than 20-25 lines, it’s more likely that you are including too much logic inside it, and you can probably split it into two or more smaller functions/methods.
- If your method/function name is more than 20 characters, you should either rethink the name, or rethink the whole function/method by reviewing the first rule.
- If you have a lot of nested loops then you may be doing some resource-intensive processing without realizing it. In general, you should rethink the logic if you are nesting more than 2 loops. Three nested loops is just horrible!
- Consider if there are any applicable design patterns your code can follow. You shouldn’t use patterns just for the sake of using patterns, but patterns offer tried-and-true ready-thought solutions that could be applicable.
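As a small illustration of the first rule of thumb (the function names and the e-mail example are invented), a function that mixes normalization, validation, and persistence can be split into smaller single-purpose functions:

```php
<?php
// Each concern lives in its own small, testable function.
function normalizeEmail(string $email): string
{
    return strtolower(trim($email));
}

function isValidEmail(string $email): bool
{
    return filter_var($email, FILTER_VALIDATE_EMAIL) !== false;
}

function registerUser(string $email): bool
{
    $email = normalizeEmail($email);
    if (!isValidEmail($email)) {
        return false;
    }
    // persistence (e.g. saveUser($email)) would go here
    return true;
}

var_dump(registerUser('  Alice@Example.com  ')); // bool(true)
```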
4. Avoid Global Code
Global variables and loops are a mess and can prove problematic when your application grows to millions of lines of code (which most do!). They may influence code elsewhere that is difficult to discern, or cause noisy naming clashes. Think twice before you pollute the global namespace with variables, functions, loops, etc.
In an ideal case, you should have no blocks defined globally. That is, all switch statements, try-catch, foreach, while-loops, etc. should be written inside a method or a function. Methods should be written inside class definitions, and class and function definitions should be within namespaces.
5. Use Meaningful Names
Never use names like $k, $m, and $test for your variables. How do you expect to read such code in the future? Good code should be meaningful in terms of variable names, function/method names, and class names. Some good examples of meaningful names are: $request, $dbResult, and $tempFile (depending on your coding style guidelines these may use underscores, camelCase, or PascalCase).
6. Use Meaningful Structures
Structuring your application is very important; don’t use complicated structures, always stick to simplicity. When naming directories and files, use a naming convention you agree upon with your team, or use one associated with your coding standard. Always split the four parts of any typical PHP application apart from each other – CSS, HTML Templates/Layouts, JavaScript, PHP Code – and for each try to split libraries from business logic. It’s also a good idea to keep your directory hierarchy as shallow as possible so it’s easier to navigate and find the code you’re looking for.
7. Use Version Control Software
In the old days, good development teams relied on CVS and diff patches for version control. Nowadays, though, we have a variety of solutions available. Managing changes and revisions should be easy but effective, so pick whatever version control software that will work best with the workflow of your development team. I prefer using a distributed version control tool like Git or Mercurial; both are free software/open source and very powerful.
If you don’t know what version control software is, I’d recommend reading Sean Hudgston’s series Introduction to Git.
8. Automate Your Build
I recommend using Phing, a well-supported build tool for PHP written to mimic Ant; if you aren’t familiar with it, check out Shammer C’s article Using Phing, the PHP Build Tool and Vito Tardia’s article Deploy and Release Your Application with Phing.
9. Use Code Documenters
For large applications spanning several classes and namespaces, you should have automatically generated API documentation. This is very useful and keeps the development team aware of “what’s what.” And if you work on several projects at the same time, you will find such documentation a blessing since you may forget about structures switching back and forth between projects. One such documenter you might consider using is DocBlox.
10. Use a Testing Framework
There are a plenty of tools that I really appreciate, but by far the ones I appreciate the most are the frameworks that help automate the testing process. Testing (particularly systematic testing) is crucial to every piece of your million dollar application. Good testing tools are PHPUnit and SimpleTest for unit testing your PHP Classes. For GUI testing, I recommend SeleniumHQ tools.
Summary
In this article you saw an overview of some of the best practices for writing better code, including using a coding standard to unify code formatting across the whole team, the importance of refactoring and how to embrace it, and using professional tools like testing frameworks, code documenters, and version control to help manage your codebase. If you’re not following these tips already, it’s worth the effort to adopt them and get your team on track.
Image via DJTaylor/Shutterstock.
| http://www.sitepoint.com/10-tips-for-better-coding/ | CC-MAIN-2014-10 | en | refinedweb |
Rails: Resource_controller Plugin Puts Controllers on a Diet
class StandardsController < ApplicationController
# GET /standards
# GET /standards.xml
def index
@standards = Standard.find(:all)
respond_to do |format|
format.html # index.html.erb
format.xml { render :xml => @standards }
end
end
# GET /standards/1
# GET /standards/1.xml
def show
@standard = Standard.find(params[:id])
respond_to do |format|
format.html # show.html.erb
format.xml { render :xml => @standard }
end
end
# GET /standards/new
# GET /standards/new.xml
def new
@standard = Standard.new
respond_to do |format|
format.html # new.html.erb
format.xml { render :xml => @standard }
end
end
# GET /standards/1/edit
def edit
@standard = Standard.find(params[:id])
end
# POST /standards
# POST /standards.xml
def create
@standard = Standard.new(params[:standard])
respond_to do |format|
if @standard.save
flash[:notice] = 'Standard was successfully created.'
format.html { redirect_to(@standard) }
format.xml { render :xml => @standard, :status => :created, :location => @standard }
else
format.html { render :action => "new" }
format.xml { render :xml => @standard.errors, :status => :unprocessable_entity }
end
end
end
# PUT /standards/1
# PUT /standards/1.xml
def update
@standard = Standard.find(params[:id])
respond_to do |format|
if @standard.update_attributes(params[:standard])
flash[:notice] = 'Standard was successfully updated.'
format.html { redirect_to(@standard) }
format.xml { head :ok }
else
format.html { render :action => "edit" }
format.xml { render :xml => @standard.errors, :status => :unprocessable_entity }
end
end
end
# DELETE /standards/1
# DELETE /standards/1.xml
def destroy
@standard = Standard.find(params[:id])
@standard.destroy
respond_to do |format|
format.html { redirect_to(standards_url) }
format.xml { head :ok }
end
end
end
Other than the specific names, all generated controllers look very much like this.
Using generated controllers is quite easy, and in many cases, little or no change needs to be made to the generated code, particularly if you take the "skinny controllers" mantra to heart.
On the other hand, another Ruby/Rails mantra is "don't repeat yourself", and having all that almost duplicate code, even if you didn't write it yourself, is a violation of the DRY principle.
Enter: resource_controller. James Golick offered up a new plugin for rails called resource_controller which allows the same controller shown above to be written as:
class StandardsController < ApplicationController
resource_controller
end
Well, there is a little bit of a white lie here, this won't give you the standard xml response capability, but that's easy to get back with a few lines:
class StandardsController < ApplicationController
resource_controller
index.wants.xml { render :xml => @standards }
[new, show].each do |action|
action.wants.xml { render :xml => @standard }
end
create.wants.xml { render :xml => @standard, :status => :created, :location => @standard }
[update, destroy].each do |action|
action.wants.xml { head :ok }
end
[create_fails, update_fails].each do |action|
action.wants.xml { render :xml => @standard.errors, :status => :unprocessable_entity }
end
end
This plugin makes writing controllers look more like writing models, with declarative class methods like resource_controller, and "callbacks" like action.wants. The plugin automatically gives the controller the right instance variable for each action, in this case either @standards for the index action or @standard for the others.
There are some common patterns in Rails which force changing controller code. One of these is nesting resources. It's easy to set up the routes in the config/routes.rb file
map.resources :organization, :has_many => :standards
But once you've done this, you need to change the controller to fetch and use the parent resource and use it appropriately for each action. The resource_controller plugin simplifies this. After the above routing change all that's needed is to add a declarative call to our controller
class StandardsController < ApplicationController
resource_controller
belongs_to :organization
end
The belongs_to declaration enables the controller to be used with the nested resource. Now when a controller action is reached through a nested resource URL, like /organization/1234/standards, the controller will automatically create an instance variable named @organization and set it appropriately, and will use the standards association of the parent object to find, and build instances of the model Standard.
Note that the same controller also works if the URL is not nested, so we can have another mapping in routes.rb which allows access to standards outside of an organization:
map.resources :standard
map.resources :organization, :has_many => :standards
And the resource controller will automatically work in either context.
The plugin also handles namespaced controllers, polymorphic nested resources (similar and related to polymorphic associations in ActiveRecord) and other magic. You also get URL and path helper functions which work in the context of the URL in the request.Resource_controller looks like a useful plugin, and it will no doubt get even better as it matures. The details are on James Golicks blog. There's also a rapid-fire screencast by Fabio Akita, which really shows what the plugin can do in action.
YAGN XML
by
James Golick
I wanted to mention that the reason I don't include XML in r_c, by default, is that I think it's a case of YAGNI. I would venture that there are a lot of rails apps in production with the XML format enabled, but completely untested, and missing some business logic, or exposing fields they aren't meant to be exposing, or worse...
That said, in a future version of r_c, there'll likely be something to turn your example of how to turn XML back on into a one-liner.
make_resourceful
by
Geoffrey Grosenbach
Models already have many acts_as_plugins, but there's room for simplification through behaviors in controllers, too.
Re: make_resourceful
by
J Aaron Farr
How to cleanly map nested resources like that please?
by
Raphaël Valyi
I had lot's of trouble mapping nested resources this way:
/:country/top_level_resource_controller/:year/:month/:day/:text_id/second_level_nested_resource_controller/:id
Any idea or pointer on how to automate such an URL pattern, especially for URL generation? Would those frameworks help?
In route.rb, instead of resources, I had to use:
map.connect ':country/top_level_resource_controller/:year/:month/:day/:text_id', :controller => 'top_level_resource_controller', :action => 'show'
for every action requiring an item id.
And then it was easier for nested resources:
map.resources :second_level_nested_resource, :path_prefix => '/:locale/top_level_resource_controller/:year/:month/:day/:text_id'
But the URL generation to link between actions was totally hard coded in helpers. How to make it better?
Why did I choose those URL patterns?
1) I need the country in the URL (to use the same app for several countries), but I don't want to make it a resources in order to keep the URL simple and pertinent.
2) I need to split the id of my top_level_resource in :year, :month, :day and :text_id because I use static page cache, I have many of those resources, and I need a fast access. So I needed to split the cache in hierarchical folders, not everything in a single folder. Moreover, I'm using the :text_id rather than the natural :id key to optimize SEO.
3) finally I have normal nested resources inside that stuff, just like second_level_nested_resource
Any pointer to make URL generation clean please? Thanks in advance,
Raphaël Valyi.
RESTful controllers diets in general
by
Juan Maria Martinez Arce
From experience I extract this idea: any of these plugins for RESTful controller code reduction will be useful if it helps you avoid enclosing your project within walls of extra, unwanted complexity.
Kudos to James
by
Fabio Akita
Re: Kudos to James
by
Fabio Akita
Re: Kudos to James
by
Ben Scofield
I talked a bit about this a few months ago, if anyone would like to see some more conversation on the topic.
An alternative plugin
by
Philippe Lachaise
Here's a plugin of mine that addresses this same need:
miceplugins.lachaise.org/mice_restful/trunk (SVN: "script/plugin install" should work)
Doc is the code (sorry about that) but some in-depth explanations are to be found here (in French, sorry again, but it contains code examples):
blog.lachaise.org/?p=3
At its simplest, this single statement,
restful_methods
will generate for you all the basic code (format both html and xml) but quite a few more complex real-life cases are handled.
In particular two-levels resource nesting is supported.
e.g
restful_methods :only => [:current_parent, :index, :show],
:parent_model => :item,
:include => { :catalog_item => { :catalog => :shop } },
:batch => :batch_handler,
:batch_redirect => [:organization_item_path, { :organization_id => :@current_organization, :list_id => :'@current_item.list', :id => :@current_item } ],
:before_index => :before_index
Anyway, it it happens to be as useful for someone as it has been for my (rather big and com plex) application I would gladly share information with anyone interested and may consider turn it this code into a properly packaged plugin. | http://www.infoq.com/news/2008/01/rails-resource-controller | CC-MAIN-2014-10 | en | refinedweb |
These Ansible interview questions will help you prepare for a DevOps Engineer job. Every question is supplemented by an answer so that you can prepare for a job interview in a short time. We have gathered the list after attending technical interviews at top-notch organizations like Amazon, Netflix, and Airbnb.
Ansible Interview Questions
Certain questions and concepts come up frequently in regular work with Ansible, but they are most valuable when an interviewer is probing your broader DevOps experience.
Here is the list of popular Ansible interview questions that are asked frequently in interviews:
1. What Is Ansible?
Answer: Ansible is a software tool to deploy applications over SSH without any downtime. It is also used to manage and configure software applications. Ansible is written in Python.
2. What Are The Advantages Of Ansible?
Answer:
- Agent-less
- Very low overhead
- Good performance
3. How Ansible Works?
Answer: There are many similar automation tools available, like Puppet, Capistrano, Chef, Salt, SpaceWalk etc., but Ansible divides machines into two types of server: controlling machines and nodes.
Ansible is installed on the controlling machine, and nodes are managed by that controlling machine over SSH; the location of the nodes is specified through the controlling machine's inventory. Playbooks are a bunch of commands which can perform multiple tasks, and each playbook is in YAML file format.
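A minimal playbook sketch illustrating the YAML format described above (hypothetical host group and task names; apt and service are real Ansible modules):

```yaml
# site.yml — a playbook is a list of plays; each play maps hosts to tasks.
- hosts: webservers
  become: yes
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
    - name: Start nginx
      service:
        name: nginx
        state: started
```

Running it with `ansible-playbook site.yml` would apply both tasks to every host in the webservers inventory group.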
5. Is There A Web Interface / Rest API /Etc?
Answer: Yes, Ansible, Inc makes a great product that makes Ansible even more powerful and easy to use. See Ansible Tower.
6. How Do I Submit A Change To The Documentation?
Answer: Documentation for Ansible is kept in the main project git repository, and complete instructions for contributing can be found in the docs.
7. When Should I Use {{ }}? Also, How To Interpolate Variables Or Dynamic Variable Names?
Answer: A steadfast rule is "always use {{ }}". Another rule of thumb is that moustaches don't stack. We often see this:
{{ somevar_{{other_var}} }}
The above DOES NOT WORK, if you need to use a dynamic variable use the hostvars or vars dictionary as appropriate:
{{ hostvars[inventory_hostname]['somevar_' + other_var] }}
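In plain Python terms, the working form is just a dictionary lookup with string concatenation, which is why it works where stacked moustaches cannot (illustrative data, not Ansible internals):

```python
# hostvars maps each host to its variable namespace (a plain dict here).
hostvars = {
    "web1": {"somevar_prod": "db-prod.example", "somevar_dev": "db-dev.example"},
}

inventory_hostname = "web1"
other_var = "prod"

# Equivalent of {{ hostvars[inventory_hostname]['somevar_' + other_var] }}
value = hostvars[inventory_hostname]["somevar_" + other_var]
print(value)  # db-prod.example
```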
- sudo apt-get update
- sudo apt-get install software-properties-common. Afterward, we can install the software:
- sudo apt-get update
- sudo apt-get install ansible
- We now have all of the software required to administer our servers through Ansible.
10. Desired To Gain Proficiency On Ansible?
Answer: Explore the blog post on Ansible training to become a pro in Ansible.
13. How Do I See All The Inventory Vars Defined For My Host?
Answer: You can see the resulting vars you define in inventory running the following command:
ansible -m debug -a "var=hostvars['hostname']" localhost
{{ hostvars[inventory_hostname]['ansible_' + which_interface]['ipv4']['address'] }}
The trick about going through host vars is necessary because it’s a dictionary of the entire namespace of variables. ‘inventory_hostname’ is a magic variable that indicates the current host you are looping over in the host loop. | https://svrtechnologies.com/top-15-devops-ansible-interview-questions-and-answers-pdf/ | CC-MAIN-2020-29 | en | refinedweb |
API to highlight search results in editor? That will not help you but this line prints the search text
print(v.searchController().searchText())
Have you tried using on_main_thread?
Thanks for the on_main_thread idea - I gave it a go, but the search term didn't change and it just cycled through the old search results.
I'll keep playing with it.
@emkay_online and @JonB Perhaps is it the same limitation as in my topic
@cvp that issue is due to the sandbox security limitations of the uidocumentpicker -- in this case the search bar is hosted by pythonista.
import editor, objc_util

@objc_util.on_main_thread
def setSearchTerm(txt):
    t = editor._get_editor_tab()
    v = t.editorView()
    v.searchController().searchView().searchBar().text = txt
    v.searchController().searchBar_textDidChange_(v.searchController().searchView().searchBar(), txt)

def highlighNext():
    t = editor._get_editor_tab()
    v = t.editorView()
    v.highlightNextSearchResult()

setSearchTerm('tab')
Setting the searchbar text seems necessary.
@emkay_online @jonb just found this
import editor
from objc_util import *

t = editor._get_editor_tab()
v = t.editorView()
v.searchController().searchView().subviews()[0].setText_('test')
My example above used
v.searchController().searchView().searchBar().text=txt
Which is probably more robust than using subviews. Sorry I wasn't clear -- the above example is fully functional. (You need to also call the delegate method for the little arrows to show up)
@JonB That is fantastic. It took a second to realise I needed to open the search bar first, but now it works perfectly.
Thank you
ps. Sorry to only just reply, I wasn't getting alerts on your posts | https://forum.omz-software.com/topic/5236/api-to-highlight-search-results-in-editor/12 | CC-MAIN-2020-29 | en | refinedweb |
Andrew Haley schrieb:
> On 06/18/2010 10:11 AM, Georg Lay wrote:
>> Andrew Haley schrieb:
>>> On 06/18/2010 08:56 AM, Georg Lay wrote:
>>>> Hi, I have a question on gcc's signed overflow optimisation in the
>>>> following C function:
>>>>
>>>> int abssat2 (int x)
>>>> {
>>>>     unsigned int y = x;
>>>>
>>>>     if (x < 0)
>>>>         y = -y;
>>>>
>>>>     if (y >= 0x80000000)
>>>>         y--;
>>>>
>>>>     return y;
>>>> }
>>>>
>>>> gcc optimises the second comparison and throws it away, and that's the
>>>> part I do not understand because all computations are performed on
>>>> unsigned int which has no undefined behaviour on overflow.
>>>>
>>>> For the unary - the standard says in 6.5.3.3.3:
>>>>     The result of the unary - operator is the negative of its (promoted)
>>>>     operand. The integer promotions are performed on the operand, and
>>>>     the result has the promoted type.
>>>>
>>>> And the promotion rules in 6.3.1.1.2:
>>>>     If an int can represent all values of the original type, the value
>>>>     is converted to an int; otherwise, it is converted to an unsigned int.
>>>>     These are called the integer promotions. All other types are unchanged
>>>>     by the integer promotions.
>>>>
>>>> As an int cannot represent all values that can be represented by an
>>>> unsigned int, there is no signed int in the line y = -y.
>>>>
>>>> Could anyone explain this? I see this on gcc 4.4.3.
>>>
>>> Works for me on gcc 4.4.3:
>>>
>>>     movl    %edi, %eax
>>>     sarl    $31, %eax
>>>     xorl    %eax, %edi
>>>     subl    %eax, %edi
>>>     movl    %edi, %eax
>>>     shrl    $31, %eax
>>>     subl    %eax, %edi
>>>     movl    %edi, %eax
>>
>> Ok. Is your code as it should be on any machine or is it just a "missed
>> optimisation"?
>
> Your problem sounds to me like incorrect code, not missed optimization.
>
>> I observe it on a non-standard embedded target that has an abs
>> instruction, i.e. there is an abssi2 insn that leads to abs:SI rtx.
>> Maybe there is some standard target that also has native support of abs
>> to see what happens there?
>
> Perhaps.
I think the problem may be that abs is being generated incorrectly. The abs is
generated in RTL pass "ce1" by ifcvt.c:noce_try_abs() and gets optimised away by
combine.c:combine_simplify_rtx(), which calls
simplify_rtx.c:simplify_relational_operation (code=LT, mode=SImode,
cmp_mode=SImode, op0=(abs:SI (reg:SI)), op1=(const_int 0)).

I tried to reproduce on ARM but there is no abs generated (maybe because its
abssi2 insn does some clobbers) and thus the problem doesn't show up there.

> What does -fdump-tree-optimized look like?

It looks almost like yours:

;; Function abssat2 (abssat2)

Analyzing Edge Insertions.
abssat2 (int x)
{
  unsigned int y;

<bb 2>:
  y = (unsigned int) x;
  if (x < 0)
    goto <bb 3>;
  else
    goto <bb 4>;

<bb 3>:
  y = -y;

<bb 4>:
  if ((int) y < 0)
    goto <bb 5>;
  else
    goto <bb 6>;

<bb 5>:
  y = y + 0x0ffffffff;

<bb 6>:
  return (int) y;

}

>> Or is my confusion based on some misunderstandings of the language
>> standard?
>
> Your code is correct as far as I can see.
>
> I don't get any warnings with -Wstrict-overflow.
>
> Andrew.
>
> ;; Function abssat2 (abssat2)
>
> Analyzing Edge Insertions.
> abssat2 (int x)
> {
>   int prephitmp.15;
>   unsigned int y;
>
> <bb 2>:
>   y = (unsigned int) x;
>   if (x < 0)
>     goto <bb 3>;
>   else
>     goto <bb 4>;
>
> <bb 3>:
>   y = -y;
>
> <bb 4>:
>   prephitmp.15 = (int) y;
>   if (prephitmp.15 < 0)
>     goto <bb 5>;
>   else
>     goto <bb 6>;
>
> <bb 5>:
>   prephitmp.15 = (int) (y + 4294967295);
>
> <bb 6>:
>   return prephitmp.15;
>
> }
A waypoint with an execution status.
Definition at line 118 of file TWaypoint.h.
#include <mrpt/nav/reactive/TWaypoint.h>
Definition at line 93 of file TWaypoint.cpp.
Gets navigation params as a human-readable format.
Definition at line 99 of file TWaypoint.cpp.
References mrpt::format(), mrpt::nav::TWaypoint::getAsText(), and reached.
Referenced by mrpt::nav::CWaypointsNavigator::checkHasReachedTarget().
Check whether all the minimum mandatory fields have been filled by the user.
Definition at line 44 of file TWaypoint.cpp.
References mrpt::nav::TWaypoint::allowed_distance, mrpt::nav::TWaypoint::INVALID_NUM, mrpt::nav::TWaypoint::target, mrpt::math::TPoint2D_data< T >::x, and mrpt::math::TPoint2D_data< T >::y.
Definition at line 94 of file TWaypoint.cpp.
[Default=true] Whether the navigator is allowed to proceed to a more advanced waypoint in the sequence if it determines that it is easier to skip this one (e.g.
it seems blocked by dynamic obstacles). This value is ignored for the last waypoint in a sequence, since it is always considered to be the ultimate goal and hence not subject to be skipped.
Definition at line 60 of file TWaypoint.h.
Referenced by mrpt::nav::TWaypoint::getAsText().
[Must be set by the user] How close should the robot get to this waypoint for it to be considered reached.
Definition at line 42 of file TWaypoint.h.
Referenced by mrpt::nav::TWaypoint::getAsText(), mrpt::nav::TWaypoint::isValid(), and mrpt::nav::CWaypointsNavigator::waypoints_navigationStep().
(Initialized to 0 automatically) How many times this waypoint has been seen as "reachable" before it becomes the current active waypoint.
Definition at line 132 of file TWaypoint.h.
The default value of fields (used to detect non-set values)
Definition at line 75 of file TWaypoint.h.
Referenced by mrpt::nav::TWaypointSequence::getAsOpenglVisualization(), mrpt::nav::TWaypointStatusSequence::getAsOpenglVisualization(), mrpt::nav::TWaypoint::getAsText(), mrpt::nav::TWaypoint::isValid(), mrpt::nav::TWaypointSequence::load(), and mrpt::nav::CWaypointsNavigator::waypoints_navigationStep().
Whether this waypoint has been reached already (to within the allowed distance as per user specifications) or skipped.
Definition at line 122 of file TWaypoint.h.
Referenced by mrpt::nav::CWaypointsNavigator::checkHasReachedTarget(), and getAsText().
If reached==true, this boolean tells whether the waypoint was physically reached (false) or marked as reached because it was skipped (true).
Definition at line 126 of file TWaypoint.h.
(Default=1.0) Desired robot speed at the target, as a ratio of the full robot speed.
That is: speed_ratio=1 means that the user wants the robot to navigate to the target and smoothly continue to the next one when reached. speed_ratio=0 on the other hand means that the robot should approach this waypoint slowing down and end up totally stopped.
Definition at line 50 of file TWaypoint.h.
Referenced by mrpt::nav::TWaypoint::getAsText(), and mrpt::nav::CWaypointsNavigator::waypoints_navigationStep().
[Must be set by the user] Coordinates of desired target location (world/global coordinates).
Definition at line 28 of file TWaypoint.h.
Referenced by mrpt::nav::TWaypoint::getAsText(), mrpt::nav::TWaypoint::isValid(), and mrpt::nav::CWaypointsNavigator::waypoints_navigationStep().
(Default="map") Frame ID in which target is given.
Optional, use only for submapping applications.
Definition at line 38 of file TWaypoint.h.
Referenced by mrpt::nav::CWaypointsNavigator::waypoints_navigationStep().
[Default=any heading] Optionally, set to the desired orientation [radians] of the robot at this waypoint.
Some navigator implementations may ignore this preferred heading anyway, read the docs of each implementation to find it out.
Definition at line 34 of file TWaypoint.h.
Referenced by mrpt::nav::TWaypoint::getAsText(), and mrpt::nav::CWaypointsNavigator::waypoints_navigationStep().
Timestamp of when this waypoint was reached.
(Default=INVALID_TIMESTAMP means not reached so far)
Definition at line 129 of file TWaypoint.h. | https://docs.mrpt.org/reference/devel/structmrpt_1_1nav_1_1_t_waypoint_status.html | CC-MAIN-2020-29 | en | refinedweb |
by Preethi Kasireddy
A Beginner-Friendly Introduction to Containers, VMs and Docker
If you’re a programmer or techie, chances are you’ve at least heard of Docker: a helpful tool for packing, shipping, and running applications within “containers.” It’d be hard not to, with all the attention it’s getting these days — from developers and system admins alike. Even the big dogs like Google, VMware and Amazon are building services to support it.
Regardless of whether or not you have an immediate use-case in mind for Docker, I still think it’s important to understand some of the fundamental concepts around what a “container” is and how it compares to a Virtual Machine (VM). While the Internet is full of excellent usage guides for Docker, I couldn’t find many beginner-friendly conceptual guides, particularly on what a container is made up of. So, hopefully, this post will solve that problem :)
Let’s start by understanding what VMs and containers even are.
What are “containers” and “VMs”?
Containers and VMs are similar in their goals: to isolate an application and its dependencies into a self-contained unit that can run anywhere.
Moreover, containers and VMs remove the need for physical hardware, allowing for more efficient use of computing resources, both in terms of energy consumption and cost effectiveness.
The main difference between containers and VMs is in their architectural approach. Let’s take a closer look.
Virtual Machines
A VM is essentially an emulation of a real computer that executes programs like a real computer. VMs run on top of a physical machine using a “hypervisor”. A hypervisor, in turn, runs on either a host machine or on “bare-metal”.
Let’s unpack the jargon:
A hypervisor is a piece of software, firmware, or hardware that VMs run on top of. The hypervisors themselves run on physical computers, referred to as the “host machine”. The host machine provides the VMs with resources, including RAM and CPU. These resources are divided between VMs and can be distributed as you see fit. So if one VM is running a more resource heavy application, you might allocate more resources to that one than the other VMs running on the same host machine.
The VM that is running on the host machine (again, using a hypervisor) is also often called a “guest machine.” This guest machine contains both the application and whatever it needs to run that application (e.g. system binaries and libraries). It also carries an entire virtualized hardware stack of its own, including virtualized network adapters, storage, and CPU — which means it also has its own full-fledged guest operating system. From the inside, the guest machine behaves as its own unit with its own dedicated resources. From the outside, we know that it’s a VM — sharing resources provided by the host machine.
As mentioned above, a guest machine can run on either a hosted hypervisor or a bare-metal hypervisor. There are some important differences between them.
First off, a hosted virtualization hypervisor runs on the operating system of the host machine. For example, a computer running OSX can have a VM (e.g. VirtualBox or VMware Workstation 8) installed on top of that OS. The VM doesn’t have direct access to hardware, so it has to go through the host operating system (in our case, the Mac’s OSX).
The benefit of a hosted hypervisor is that the underlying hardware is less important. The host’s operating system is responsible for the hardware drivers instead of the hypervisor itself, and is therefore considered to have more “hardware compatibility.” On the other hand, this additional layer in between the hardware and the hypervisor creates more resource overhead, which lowers the performance of the VM.
A bare metal hypervisor environment tackles the performance issue by installing on and running from the host machine’s hardware. Because it interfaces directly with the underlying hardware, it doesn’t need a host operating system to run on. In this case, the first thing installed on a host machine’s server as the operating system will be the hypervisor. Unlike the hosted hypervisor, a bare-metal hypervisor has its own device drivers and interacts with each component directly for any I/O, processing, or OS-specific tasks. This results in better performance, scalability, and stability. The tradeoff here is that hardware compatibility is limited because the hypervisor can only have so many device drivers built into it.
After all this talk about hypervisors, you might be wondering why we need this additional “hypervisor” layer in between the VM and the host machine at all.
Well, since the VM has a virtual operating system of its own, the hypervisor plays an essential role in providing the VMs with a platform to manage and execute this guest operating system. It allows for host computers to share their resources amongst the virtual machines that are running as guests on top of them.
As you can see in the diagram, VMs package up the virtual hardware, a kernel (i.e. OS) and user space for each new VM.
Container
Unlike a VM which provides hardware virtualization, a container provides operating-system-level virtualization by abstracting the “user space”. You’ll see what I mean as we unpack the term container.
For all intents and purposes, containers look like a VM. For example, they have private space for processing, can execute commands as root, have a private network interface and IP address, allow custom routes and iptables rules, can mount file systems, etc.
The one big difference between containers and VMs is that containers *share* the host system’s kernel with other containers.
This diagram shows you that containers package up just the user space, and not the kernel or virtual hardware like a VM does. Each container gets its own isolated user space to allow multiple containers to run on a single host machine. We can see that all the operating system level architecture is being shared across containers. The only parts that are created from scratch are the bins and libs. This is what makes containers so lightweight.
Where does Docker come in?
Docker is an open-source project based on Linux containers. It uses Linux Kernel features like namespaces and control groups to create containers on top of an operating system.
Containers are far from new; Google has been using their own container technology for years. Other container technologies include Solaris Zones, BSD jails, and LXC, which have been around for many years.
Last but not least, who doesn’t love the Docker whale? ;)
Fundamental Docker Concepts
Now that we’ve got the big picture in place, let’s go through the fundamental parts of Docker piece by piece:
Docker Engine
Docker engine is the layer on which Docker runs. It’s a lightweight runtime and tooling that manages containers, images, builds, and more. It runs natively on Linux systems and is made up of:
1. A Docker Daemon that runs in the host computer.
2. A Docker Client that then communicates with the Docker Daemon to execute commands.
3. A REST API for interacting with the Docker Daemon remotely.
Docker Client
The Docker Client is what you, as the end-user of Docker, communicate with. Think of it as the UI for Docker. For example, when you do…
you are communicating with the Docker Client, which then communicates your instructions to the Docker Daemon.
Docker Daemon
The Docker daemon is what actually executes commands sent to the Docker Client — like building, running, and distributing your containers. The Docker Daemon runs on the host machine, but as a user, you never communicate directly with the Daemon. The Docker Client can run on the host machine as well, but it’s not required to. It can run on a different machine and communicate with the Docker Daemon that’s running on the host machine.
Dockerfile
A Dockerfile is where you write the instructions to build a Docker image. These instructions can be:
- RUN apt-get -y install some-package: to install a software package
- EXPOSE 8000: to expose a port
- ENV ANT_HOME /usr/local/apache-ant to pass an environment variable
and so forth. Once you’ve got your Dockerfile set up, you can use the docker build command to build an image from it. Here’s an example of a Dockerfile:
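A minimal illustrative sketch of such a file (hypothetical application and package names, not the article's original example):

```dockerfile
# Start from a small base image
FROM ubuntu:16.04

# Install the app's runtime
RUN apt-get update && apt-get -y install python3

# Copy the application code into the image
COPY app.py /app/app.py

# Expose the port the app listens on
EXPOSE 8000

# Environment variable available to the app
ENV APP_ENV production

# Command to run when a container starts from this image
CMD ["python3", "/app/app.py"]
```

Each of these instructions becomes one layer of the resulting image, which is the subject of the next section.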
Docker Image
Images are read-only templates that you build from a set of instructions written in your Dockerfile. Images define both what you want your packaged application and its dependencies to look like *and* what processes to run when it’s launched.
The Docker image is built using a Dockerfile. Each instruction in the Dockerfile adds a new “layer” to the image, with layers representing a portion of the images file system that either adds to or replaces the layer below it. Layers are key to Docker’s lightweight yet powerful structure. Docker uses a Union File System to achieve this:
Union File Systems
Docker uses Union File Systems to build up an image. You can think of a Union File System as a stackable file system, meaning files and directories of separate file systems (known as branches) can be transparently overlaid to form a single file system.
The contents of directories which have the same path within the overlaid branches are seen as a single merged directory, which avoids the need to create separate copies of each layer. Instead, they can all be given pointers to the same resource; when certain layers need to be modified, it’ll create a copy and modify a local copy, leaving the original unchanged. That’s how file systems can *appear* writable without actually allowing writes. (In other words, a “copy-on-write” system.)
Layered systems offer two main benefits:
1. Duplication-free: layers help avoid duplicating a complete set of files every time you use an image to create and run a new container, making instantiation of docker containers very fast and cheap.
2. Layer segregation: Making a change is much faster — when you change an image, Docker only propagates the updates to the layer that was changed.
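The copy-on-write behavior described above can be sketched with a toy layered store (illustrative only; real union filesystems do this at the kernel level):

```python
from collections import ChainMap

# Read-only image layers: later maps are lower layers.
base_layer = {"/bin/sh": "shell v1", "/etc/config": "defaults"}
app_layer = {"/app/main.py": "print('hi')"}

# Each container gets its own empty writable layer on top of shared layers.
writable = {}
container_fs = ChainMap(writable, app_layer, base_layer)

print(container_fs["/etc/config"])  # "defaults": read falls through to base

# A "write" copies up: only the container's top layer changes.
container_fs["/etc/config"] = "tuned"
print(container_fs["/etc/config"])  # "tuned"
print(base_layer["/etc/config"])    # "defaults": shared image layer untouched
```

Two containers built from the same layers would share base_layer and app_layer, each carrying only its own small writable dict, which is the duplication-free property in miniature.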
Volumes
Volumes are the “data” part of a container, initialized when a container is created. Volumes allow you to persist and share a container’s data. Data volumes are separate from the default Union File System and exist as normal directories and files on the host filesystem. So, even if you destroy, update, or rebuild your container, the data volumes will remain untouched. When you want to update a volume, you make changes to it directly. (As an added bonus, data volumes can be shared and reused among multiple containers, which is pretty neat.)
Docker Containers
A Docker container, as discussed above, wraps an application's software into an invisible box with everything the application needs to run. That includes the operating system, application code, runtime, system tools, system libraries, etc. Docker containers are built off Docker images. Since images are read-only, Docker adds a read-write file system over the read-only file system of the image to create a container.
Moreover, when creating the container, Docker creates a network interface so that the container can talk to the local host, attaches an available IP address to the container, and executes the process that you specified to run your application when defining the image.
Once you’ve successfully created a container, you can then run it in any environment without having to make changes.
Double-clicking on “containers”
Phew! That’s a lot of moving parts. One thing that always got me curious was how a container is actually implemented, especially since there isn’t any abstract infrastructure boundary around a container. After lots of reading, it all makes sense so here’s my attempt at explaining it to you! :)
The term “container” is really just an abstract concept to describe how a few different features work together to visualize a “container”. Let’s run through them real quick:
1) Namespaces
Namespaces provide containers with their own view of the underlying Linux system, limiting what the container can see and access. When you run a container, Docker creates namespaces that the specific container will use.
There are several different types of namespaces in a kernel that Docker makes use of, for example:
a. NET: Provides a container with its own view of the network stack of the system (e.g. its own network devices, IP addresses, IP routing tables, /proc/net directory, port numbers, etc.).
b. PID: PID stands for Process ID. If you've ever run ps aux in the command line to check what processes are running on your system, you'll have seen a column named "PID". The PID namespace gives containers their own scoped view of processes they can view and interact with, including an independent init (PID 1), which is the "ancestor of all processes".
c. MNT: Gives a container its own view of the “mounts” on the system. So, processes in different mount namespaces have different views of the filesystem hierarchy.
d. UTS: UTS stands for UNIX Timesharing System. It allows a process to identify system identifiers (i.e. hostname, domainname, etc.). UTS allows containers to have their own hostname and NIS domain name that is independent of other containers and the host system.
e. IPC: IPC stands for InterProcess Communication. IPC namespace is responsible for isolating IPC resources between processes running inside each container.
f. USER: This namespace is used to isolate users within each container. It functions by allowing containers to have a different view of the uid (user ID) and gid (group ID) ranges, as compared with the host system. As a result, a process’s uid and gid can be different inside and outside a user namespace, which also allows a process to have an unprivileged user outside a container without sacrificing root privilege inside a container.
Docker uses these namespaces together in order to isolate and begin the creation of a container. The next feature is called control groups.
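On a Linux host you can see these namespaces directly; each process exposes its namespace membership under /proc (the inode numbers in the output will differ per system):

```shell
# Each entry is a symlink identifying one namespace of the current process.
ls /proc/self/ns
# e.g.: cgroup ipc mnt net pid user uts ...

# Two processes sharing a namespace show the same identifier here.
readlink /proc/self/ns/pid
# e.g.: pid:[4026531836]
```

When Docker starts a container, the container's processes get fresh entries here that differ from the host's, which is exactly the isolation described above.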
2) Control groups
Control groups (also called cgroups) is a Linux kernel feature that isolates, prioritizes, and accounts for the resource usage (CPU, memory, disk I/O, network, etc.) of a set of processes. In this sense, a cgroup ensures that Docker containers only use the resources they need — and, if needed, set up limits to what resources a container *can* use. Cgroups also ensure that a single container doesn’t exhaust one of those resources and bring the entire system down.
Lastly, union file systems is another feature Docker uses:
3) Isolated Union file system:
Described above in the Docker Images section :)
This is really all there is to a Docker container (of course, the devil is in the implementation details — like how to manage the interactions between the various components).
The Future of Docker: Docker and VMs Will Co-exist
While Docker is certainly gaining a lot of steam, I don’t believe it will become a real threat to VMs. Containers will continue to gain ground, but there are many use cases where VMs are still better suited.
For instance, if you need to run multiple applications on multiple servers, it probably makes sense to use VMs. On the other hand, if you need to run many *copies* of a single application, Docker offers some compelling advantages.
Moreover, while containers allow you to break your application into more functional discrete parts to create a separation of concerns, it also means there’s a growing number of parts to manage, which can get unwieldy.
Security has also been an area of concern with Docker containers — since containers share the same kernel, the barrier between containers is thinner. While a full VM can only issue hypercalls to the host hypervisor, a Docker container can make syscalls to the host kernel, which creates a larger surface area for attack. When security is particularly important, developers are likely to pick VMs, which are isolated by abstracted hardware — making it much more difficult to interfere with each other.
Of course, issues like security and management are certain to evolve as containers get more exposure in production and further scrutiny from users. For now, the debate about containers vs. VMs is really best off to dev ops folks who live and breathe them everyday!
Conclusion
I hope you’re now equipped with the knowledge you need to learn more about Docker and maybe even use it in a project one day.
As always, drop me a line in the comments if I’ve made any mistakes or can be helpful in anyway! :) | https://www.freecodecamp.org/news/a-beginner-friendly-introduction-to-containers-vms-and-docker-79a9e3e119b/ | CC-MAIN-2020-29 | en | refinedweb |
hello,
I need to create a Scanner in my code, and I've googled it and it seems like I can open the Scanner utility just fine. But I found out later that you need to close the Scanner, and I can't seem to figure that out. I think the basic setup is --
import java.util.Scanner;
body of code
Scanner keyboard = new Scanner(System.in);
int productnumber = keyboard.nextInt();
then more code
scanner.close();
The thing is, scanner.close(); never works in Eclipse. It always complains with the error "scanner cannot be resolved", like it thinks scanner is a variable.
I think because I'm not closing the scanner the code doesn't run correctly.
Anybody have any ideas? | https://www.javaprogrammingforums.com/java-theory-questions/41345-scanner.html | CC-MAIN-2020-29 | en | refinedweb |
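The error suggests the close call uses a name that was never declared: the Scanner above is stored in a variable called keyboard, so it has to be closed as keyboard.close(). A self-contained sketch of the fix (hypothetical class name; it reads from a String instead of System.in so it runs without typing, but System.in works the same way):

```java
import java.util.Scanner;

public class ScannerDemo {
    public static void main(String[] args) {
        // Use the same variable name throughout; new Scanner(System.in)
        // would behave identically for keyboard input.
        Scanner keyboard = new Scanner("42\n");
        int productnumber = keyboard.nextInt();
        System.out.println(productnumber);
        keyboard.close();  // close via the variable you declared, not "scanner"
    }
}
```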
I'm having trouble overriding a ModelForm save method.
Exception Type: TypeError Exception Value: save() got an unexpected keyword argument 'commit'
My intentions are to have a form submit many values for 3 fields, to then create an object for each combination of those fields, and to save each of those objects. Helpful nudge in the right direction would be ace.
models.py
class CallResultType(models.Model):
    id = models.AutoField(db_column='icontact_result_code_type_id', primary_key=True)
    callResult = models.ForeignKey('CallResult', db_column='icontact_result_code_id')
    campaign = models.ForeignKey('Campaign', db_column='icampaign_id')
    callType = models.ForeignKey('CallType', db_column='icall_type_id')
    agent = models.BooleanField(db_column='bagent', default=True)
    teamLeader = models.BooleanField(db_column='bTeamLeader', default=True)
    active = models.BooleanField(db_column='bactive', default=True)
forms.py
from django.forms import ModelForm, ModelMultipleChoiceField
from callresults.models import *

class CallResultTypeForm(ModelForm):
    callResult = ModelMultipleChoiceField(queryset=CallResult.objects.all())
    campaign = ModelMultipleChoiceField(queryset=Campaign.objects.all())
    callType = ModelMultipleChoiceField(queryset=CallType.objects.all())

    def save(self, force_insert=False, force_update=False):
        for cr in self.callResult:
            for c in self.campain:
                for ct in self.callType:
                    m = CallResultType(self)  # this line is probably wrong
                    m.callResult = cr
                    m.campaign = c
                    m.calltype = ct
                    m.save()

    class Meta:
        model = CallResultType
admin.py
class CallResultTypeAdmin(admin.ModelAdmin):
    form = CallResultTypeForm
In your save you have to have the argument commit. If anything overrides your form, or wants to modify what it's saving, it will do save(commit=False), modify the output, and then save it itself.
Also, your ModelForm should return the model it's saving. Usually a ModelForm's save will look something like:

def save(self, commit=True):
    m = super(CallResultTypeForm, self).save(commit=False)
    # do custom stuff
    if commit:
        m.save()
    return m
Read up on the save method.
Finally, a lot of this ModelForm won't work just because of the way you are accessing things. Instead of self.callResult, you need to use self.fields['callResult'].
UPDATE: In response to your answer:
Aside: Why not just use ManyToManyFields in the Model so you don't have to do this? Seems like you're storing redundant data and making more work for yourself (and me :P).
from django.db.models import AutoField

def copy_model_instance(obj):
    """
    Create a copy of a model instance. M2M relationships are currently not
    handled, i.e. they are not copied. (Fortunately, you don't have any in
    this case.) See also Django #4027. From
    """
    initial = dict([(f.name, getattr(obj, f.name))
                    for f in obj._meta.fields
                    if not isinstance(f, AutoField) and
                       not f in obj._meta.parents.values()])
    return obj.__class__(**initial)

class CallResultTypeForm(ModelForm):
    callResult = ModelMultipleChoiceField(queryset=CallResult.objects.all())
    campaign = ModelMultipleChoiceField(queryset=Campaign.objects.all())
    callType = ModelMultipleChoiceField(queryset=CallType.objects.all())

    def save(self, commit=True, *args, **kwargs):
        m = super(CallResultTypeForm, self).save(commit=False, *args, **kwargs)
        results = []
        for cr in self.callResult:
            for c in self.campain:
                for ct in self.callType:
                    m_new = copy_model_instance(m)
                    m_new.callResult = cr
                    m_new.campaign = c
                    m_new.calltype = ct
                    if commit:
                        m_new.save()
                    results.append(m_new)
        return results
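The field-copying trick above can be seen in miniature without Django. This is an illustrative stand-in only: the real copy_model_instance walks obj._meta.fields and skips AutoField, whereas here a hypothetical Record class lists its own fields:

```python
# Minimal stand-in for copy_model_instance: rebuild an object from its
# field values, skipping the auto-assigned primary key so the copy is
# a new, "unsaved" instance.
class Record:
    field_names = ("id", "name", "campaign")

    def __init__(self, id=None, name=None, campaign=None):
        self.id, self.name, self.campaign = id, name, campaign


def copy_instance(obj):
    initial = {f: getattr(obj, f) for f in obj.field_names if f != "id"}
    return obj.__class__(**initial)


src = Record(id=7, name="call", campaign="A")
dup = copy_instance(src)
assert dup.id is None and dup.name == "call"
```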
This allows for inheritance of CallResultTypeForm, just in case that's ever necessary. | https://pythonpedia.com/en/knowledge-base/817284/overriding-the-save-method-in-django-modelform | CC-MAIN-2020-29 | en | refinedweb |
Categorized List
When data is displayed as a tree, you may need to show the objects associated with the currently selected node in the same List View. XAF enables you to do this by implementing the ICategorizedItem interface in the associated objects, and using the CategorizedListEditor provided by the TreeList Editors module. This topic demonstrates how to perform this task using the business classes defined in the Display a Tree List using the ITreeNode Interface topic.
TIP
A complete sample project is available in the DevExpress Code Examples database at.
In the Display a Tree List using the ITreeNode Interface topic, the ProjectGroup, Project and ProjectArea classes are implemented by inheriting from an abstract Category class, which implements the ITreeNode interface. Now, we will implement the Issue class that will be related to the Category class by the Many-to-One relationship. In addition, to display the Issue List View via the CategorizedListEditor, the Issue class will implement the ICategorizedItem interface. For details on this interface and the CategorizedListEditor, refer to the TreeList Editors Module Overview topic.
Implement the Issue class as shown in the code below:
using DevExpress.Persistent.Base.General;
//...
[DefaultClassOptions]
public class Issue : BaseObject, ICategorizedItem {
    private Category category;
    private string subject;
    private string description;
    public Issue(Session session) : base(session) {}
    public Issue(Session session, string subject) : base(session) {
        this.subject = subject;
    }
    [Association("Category-Issues")]
    public Category Category {
        get { return category; }
        set { SetPropertyValue(nameof(Category), ref category, value); }
    }
    public string Subject {
        get { return subject; }
        set { SetPropertyValue(nameof(Subject), ref subject, value); }
    }
    public string Description {
        get { return description; }
        set { SetPropertyValue(nameof(Description), ref description, value); }
    }
    ITreeNode ICategorizedItem.Category {
        get { return Category; }
        set { Category = (Category)value; }
    }
}
NOTE
The public property that is returned by the private Category property, which implicitly implements the ICategorizedItem interface, must be called "Category". This is currently required by the internal infrastructure.
Modify the Category class to add an association with Issue objects:
[NavigationItem]
public abstract class Category : BaseObject, ITreeNode {
    [Association("Category-Issues")]
    public XPCollection<Issue> Issues {
        get { return GetCollection<Issue>(nameof(Issues)); }
    }
    private XPCollection<Issue> allIssues;
    public XPCollection<Issue> AllIssues {
        get {
            if (allIssues == null) {
                allIssues = new XPCollection<Issue>(Session, false);
                CollectIssuesRecursive(this, allIssues);
                allIssues.BindingBehavior = CollectionBindingBehavior.AllowNone;
            }
            return allIssues;
        }
    }
    private void CollectIssuesRecursive(Category issueCategory, XPCollection<Issue> target) {
        target.AddRange(issueCategory.Issues);
        foreach (Category childCategory in issueCategory.Children) {
            CollectIssuesRecursive(childCategory, target);
        }
    }
    //...
}
Check that the TreeList Editors module is added to the Windows Forms application project, and run the application. Invoke the Issue List View. Select a tree node in the tree list to the left and execute the New Action, to create an Issue for the corresponding Category object. To the right of the tree list, a list of Issue objects associated with the currently selected tree node is displayed.
| https://docs.devexpress.com/eXpressAppFramework/112838/concepts/extra-modules/tree-list-editors/categorized-list | CC-MAIN-2020-29 | en | refinedweb |
Welcome! Got a question? Do you have -Ypartial-unification turned on? Other FAQs:
Managed to shave off outer Evals, doesn't look like I lose non-strictness here
def loeb[F[_]: Functor, A](x: F[Eval[F[Eval[A]]] => Eval[A]]): F[Eval[A]] = {
  x.fmap(a => a(Later(loeb(x))))
}
probably can't get simpler than this? since this is self-referential, i can't avoid both accepting and returning evals like this, which is as close (or as far) from haskell as i can get
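For intuition only, here is a rough Python rendering of the loeb idea (this is not the cats encoding; memoized thunks stand in for Eval/Later, and an index accessor stands in for the functor):

```python
def loeb(fs):
    # fs[i] computes cell i from an accessor into the finished structure;
    # memoization supplies the laziness that Eval/Later provide above.
    cache = {}

    def cell(i):
        if i not in cache:
            cache[i] = fs[i](cell)
        return cache[i]

    return [cell(i) for i in range(len(fs))]


# A tiny "spreadsheet": cell 2 depends on cells 0 and 1.
sheet = [lambda get: 1, lambda get: 2, lambda get: get(0) + get(1)]
print(loeb(sheet))  # [1, 2, 3]
```

Each cell sees the whole (lazily computed) result, which is the self-referential shape the Scala version expresses with Eval.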
Monoid
def foldLeft[F[_], A, B](values: F[A], seed: B)(fold: (B, A) => B)(implicit monad: Monad[F], monoid: Monoid[F[B]]): F[B] = {
  var accumulation = seed
  val work = monad.flatMap(values) { value =>
    accumulation = fold(accumulation, value)
    monoid.empty
  }
  monoid.combine(work, monad.pure(accumulation))
}
List and fs2.Stream, hence needing the result to be wrapped in an F. I see that Stream does not implement Foldable for this reason (as the result would need to be a single-element stream, not a B). However, since List and fs2.Stream both implement Monad and Monoid, the above function works. It probably breaks all kinds of laws, but just wondering if there's something equivalent already?
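The same shape can be sketched in Python, where a plain list plays both roles (a comprehension as flatMap, [] and + as the monoid). Illustrative only, and no claim of lawfulness either:

```python
def fold_left_via_flatmap(values, seed, fold):
    acc = [seed]  # boxed so the closure can mutate it

    def step(v):
        acc[0] = fold(acc[0], v)
        return []  # monoid.empty: the "work" contributes no elements

    work = [x for v in values for x in step(v)]  # flatMap forces the fold
    return work + [acc[0]]                       # combine(work, pure(accumulation))


print(fold_left_via_flatmap([1, 2, 3], 0, lambda b, a: b + a))  # [6]
```

The flatMap exists purely for its side effect on the accumulator, and the monoid's empty/combine smuggle the final value back into the container, which is why it feels law-breaking.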
def test(implicit U: UserRepoSC[SampleApp], I: ImageRepoSC[SampleApp]): Free[SampleApp, String] = {
  import U._
  import I._
  for {
    u <- findUserI("001")
    _ <- EitherT.leftT[Future, String]("err") // Free.liftT would work?
    _ <- getImageI("002")
  } yield {
    print(u)
    u.get.name
  }
}
test.foldMap(sampleApp)
Free.inject but just check it (disclaimer: I have little experience with free)
I already do this, these are smart constructors:
implicit def UserRepoSC[F[_]](implicit I: InjectK[UserRepoAlg, F]): UserRepoSC[F] = new UserRepoSC[F]

class ImageRepoSC[F[_]](implicit I: InjectK[ImageRepoAlg, F]) {
  def getImageI(id: String) = Free.inject[ImageRepoAlg, F](GetImage(id))
}

// We need this implicit to convert to the proper instance when required
implicit def ImageRepoSC[F[_]](implicit I: InjectK[ImageRepoAlg, F]): ImageRepoSC[F] = new ImageRepoSC[F]
}
But my understanding is still limited.
I have a bit silly question and I'm not sure I'll explain it well. So I had the discussion with my F# friend the other day, and while he likes FP, he doesn't do pure FP. So I tried to explain to him referential transparency and the IO monad. He said that in F# you can easily convert a code block to a function/lazy-value by adding parentheses to the variable. So this:

let a =
    let read = readline()
    logReading(read)
    printTimeOfDay()
    read

Becomes:

let a() =
    let read = readline()
    logReading(read)
    printTimeOfDay()
    read

I guess that now you have RT because everywhere where you use a(), you can "swap" it with its chunk of code block - both ways the whole code block will be executed every time (not just returning read).
So since you can do the same thing in Scala by making every function accept call-by-name arguments, can we call that kind of programming "pure FP" if we are disciplined and keep away from mutating class' fields? Instead of flatMaping, we would have compose and/or our code would start to look like f(g(h(j(x)))), but I guess it's valid.
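The thunking move looks the same in Python (illustrative sketch only): once the block hides behind a zero-argument function, the name can be swapped for its body because every use re-runs the effects.

```python
effects = []

def readline():
    effects.append("read")
    return "input"

def log_reading(r):
    effects.append("log " + r)

def a():  # thunked block: calling a() re-executes everything
    read = readline()
    log_reading(read)
    return read

a()
a()
print(effects)  # ['read', 'log input', 'read', 'log input']
```

Without the thunk, the effects run once at definition time and substituting the name for the block changes behavior; with it, substitution is safe, which is the RT claim above.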
Anyway, I started to ask myself the benefits of the IO data structure. I can name a few: you can have a whole range of "helper" methods on the IO object (map, flatMap, traverse, ...) which you simply don't have on Function; you structure your code more-or-less sequentially due to its Monad nature (although you have sequential code in the f(g(x)) case too). When it comes to mutation, you still have to be disciplined, nothing prevents you from cheating and mutating some field in your IO.flatMap.
I guess my question is: what are (all) the benefits of IO, especially compared to Function, since both can be looked at, in a way, as descriptions of computation?
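One concrete difference can be shown with a toy IO (nothing like a real cats-effect implementation, which also handles stack safety, error channels and async): the value is a first-class description that performs nothing until run, and the combinators compose descriptions rather than results.

```python
class IO:
    def __init__(self, thunk):
        self._thunk = thunk  # () -> A; constructing an IO runs nothing

    def map(self, f):
        return IO(lambda: f(self._thunk()))

    def flat_map(self, f):
        return IO(lambda: f(self._thunk()).run())

    def run(self):
        return self._thunk()


effects = []

def read():
    effects.append("read")
    return "41"

program = IO(read).map(int).flat_map(lambda n: IO(lambda: n + 1))

assert effects == []  # only a description was built
print(program.run())  # 42; the "read" effect happens exactly here
```

A bare function gives you the deferral, but not the vocabulary: map, flat_map (and, in real libraries, traverse, error handling, resource safety) live on the wrapper, which is a large part of the practical benefit being asked about.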
F#
but I think it's misleading to frame IO in terms of evaluation semantics
Anyone can expand on this?
Function1 implementation like that one
() => A, which is synchronous
(A => Unit) => Unit
(Either[Throwable, A] => Unit) => Unit)
IO is capable of embedding both things
() => A, in its => A form, it's the argument to Sync[F].delay
(Either[Throwable, A] => Unit) => Unit) is the argument to Async[F].async | https://gitter.im/typelevel/cats?at=5c7d766f35c01307537529b2 | CC-MAIN-2020-29 | en | refinedweb |
This post was written by Sergiy Oryekhov and Andrew Pardoe
With several new rules added to the Core Guidelines Checker in Visual Studio 2017 15.3, the amount of warnings produced for pre-existing code may greatly increase. The C++ Core Guidelines include a lot of recommendations that cover all kinds of situations in C++ code. We know that not everyone can do large rewrites of legacy codebases. The techniques in this blog post will help you use the C++ Core Guidelines to start an incremental journey towards a cleaner codebase by selectively enabling warnings or enabling warnings on a selected code region.
In some cases, these techniques will help you deal with problems in the code analysis. All code analysis is heuristic in nature and can produce warnings where your code is actually correct—we call these “false positives”. The methods listed below will also help you to suppress individual cases of false positives that may be raised on your code.
Using rule sets to filter warnings
Visual Studio provides a few predefined rule sets to pick a more appropriate level of quality checks when Code Analysis runs on a project. In this release, we added rule sets that focus specifically on different groups of C++ Core Guidelines warnings. By selecting a specific group, you can slice results and work through them more efficiently.
To see information about new rule sets: open the Project Properties dialog, select “Code Analysis\General”, open the dropdown in the “Rule Sets” combo-box, pick “Choose multiple rule sets”:
We recommend starting with the rule set “C++ Core Check Rules”. This rule set includes and enables all other C++ Core Check categories.
The “Native Minimum” and “Native Recommended” rule sets include C++ Core Check rules in addition to other checks performance by the C++ Code Analysis tools.
Keep in mind, you must enable the C++ Core Guidelines Checker extension to see warnings from these rule sets. Once it is enabled, you can choose which warnings to show, and which to hide.
Using macros to filter warnings
The C++ Core Guidelines Checker comes with a header file that defines handy macros to ease warning suppressions in code:
ALL_CPPCORECHECK_WARNINGS
CPPCORECHECK_TYPE_WARNINGS
CPPCORECHECK_RAW_POINTER_WARNINGS
CPPCORECHECK_CONST_WARNINGS
CPPCORECHECK_OWNER_POINTER_WARNINGS
CPPCORECHECK_UNIQUE_POINTER_WARNINGS
CPPCORECHECK_BOUNDS_WARNINGS
These macros correspond to the rule sets and expand into space-separated lists of warning numbers.
How is this useful? By using the appropriate pragma constructs you can configure the effective set of rules that are interesting for your project (or maybe a portion of your code). For example, here we want to see only warnings about missing constant modifiers:
#include <CppCoreCheck/Warnings.h>

#pragma warning(disable: ALL_CPPCORECHECK_WARNINGS)
#pragma warning(default: CPPCORECHECK_CONST_WARNINGS)
Using attributes to filter warnings
The Microsoft Visual C++ compiler has limited support for the GSL suppress attribute. This attribute can be used to suppress warnings on expressions and block statements inside of a function. You can use either the specific warning number (e.g., 26400) or the rule ID from the C++ Core Guidelines (e.g., r.11). You can also suppress the entire rule group as shown below.
// Suppress only warnings from the 'r.11' rule in expression.
[[gsl::suppress(r.11)]] new int;

// Suppress all warnings from the 'r' rule group (resource management) in block.
[[gsl::suppress(r)]] {
    new int;
}

// Suppress only one specific warning number.
// For declarations, you may need to use the surrounding block.
// Macros are not expanded inside of attributes.
// Use plain numbers instead of macros from Warnings.h.
[[gsl::suppress(26400)]] {
    int *p = new int;
}
Using command line options to subset warnings
You can also use command line options to suppress warnings per file or per project. For example, you can pick one file from your project and disable the warning 26400 in its properties page:
You can even temporarily disable code analysis for a file by specifying “/analyze-“. This will produce warning D9025 “overriding ‘/analyze’ with ‘/analyze-‘”, which will remind you to re-enable code analysis later.
In closing
These C++ Core Guidelines Checker rulesets, gsl::suppress attributes, and macros are new in Visual Studio 2017. Try them out and give us feedback on what you like and what you'd like to see improved.
If you have any feedback or suggestions for us about the C++ Core Guidelines Checker or any part of Visual C++, please let us know.
Is there a way to suppress warnings just for third party libraries?
I’m trying to use C++ Core Guideline Checker with Google Test, but there are a great number of warnings occurring in a great number of locations in the code from Google Test (such as TEST macro), and configuring suppression to each of such warning will cause maintenance hell without a smart solution.
There is a question in Stack Overflow () but no answer is provided yet.
(And this question receives relatively low attention, sadly.)
There’s an undocumented environment variable, CAExcludePath, that filters warnings from files in that path. I usually run with %CAExcludePath% set to %Include%.
You can also use it from MSBuild, see here for an example (with mixed success): Suppress warnings for external headers in VS2017 Code Analysis
We are working on something similar to GCC’s system headers that should be a more comprehensive solution to this problem. | https://blogs.msdn.microsoft.com/vcblog/2017/08/14/managing-warnings-in-the-c-core-guidelines-checker/ | CC-MAIN-2017-51 | en | refinedweb |
Hi
It appears that will_paginate and .limit(x) will not work together. So this didn't work:
(I am trying to get 150 most recent records and paginate them)
@articles = Article.where.not(site_id: 'HIDE')
.limit(150)
.order('articles.created_at DESC')
.paginate(:page => params[:page], :per_page => 30)
But this does work:
(using the total_entries method of will_paginate)
@articles = Article.where.not(site_id: 'HIDE')
.order('articles.created_at DESC')
.paginate(:page => params[:page], :per_page => 30,total_entries: 150)
BUT @articles still pulls all 4500 records, which isn't very efficient, and also @articles.count returns 4500 rather than 150.
Is there a way I can just get the 150 records from the DB and paginate them?
Thanks to craftycanuck in the Slack channel for helping me get this far,
Simon
Hey Simon,
One super important thing here is that pagination uses both limit and order to make pages. When you add limit in on top of that, you're going to likely break pagination because of that.
Are you trying to limit the number of pages displayed?
Thanks Chris
I want the 150 most recent Articles paginated in 5 pages of 30.
I have managed a workaround but would love to understand how I could accomplish the original problem.
You're definitely doing something that is outside the scope of most pagination gems. What I would probably do is customize the view template to only show the last 5 pages links. This would visually make it so you could only see the first X items and then you can have your controller also verify the page # is between 1 and 5 so the users can't go outside those boundaries.
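The boundary check described here is just arithmetic; a language-neutral sketch (Python, with made-up helper names, using the numbers from this thread) of clamping the page and computing the slice against a 150-record cap:

```python
def capped_page(total, cap, page, per_page):
    # Only the newest `cap` records are ever shown.
    visible = min(total, cap)
    pages = max(1, -(-visible // per_page))  # ceil division
    page = min(max(page, 1), pages)          # clamp the requested page
    offset = (page - 1) * per_page
    limit = min(per_page, visible - offset)
    return offset, limit, pages

print(capped_page(total=4500, cap=150, page=5, per_page=30))   # (120, 30, 5)
print(capped_page(total=4500, cap=150, page=99, per_page=30))  # clamped: (120, 30, 5)
```

The controller would run this before querying, so out-of-range page parameters can never reach past the cap.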
What does your solution look like right now?
Hi Chris
Thanks for your continued input.
My problem was that I wanted to display "Here are the last 150 articles" but the view has other options when it might show less or more articles so 150 was populated by @articles.count. The problem is when you use @articles.count on a paginated active record set it returns the whole table count rather than the number you limited the pagination set to.
My work around was to create a method called getarticlecount.
Which does this:
def getarticlecount(view)
  if view == "150"
    return "150"
  else
    return @articles.count
  end
end
Made accessible to the view by adding this in the controller:
helper_method :getarticlecount
Rendered in the view by this:
<%= pluralize(getarticlecount(params[:view]), "result") %>
Not sure if this breaks any rails or ruby best practises but it does the job! Open to refactoring.
Simon
That works. The other thing you can do is look at @articles.total_entries, I believe, with will_paginate, which will give you the count of all the results, not just the current page. It's either that or total_count with Kaminari.
Your solution is simple enough though so I would stick with that. 👍
Thanks Chris
I did have a play with total_count but couldn't quite get it right.
Much appreciated,
Login or create an account to join the conversation. | https://gorails.com/forum/using-will_paginate-and-limit-x-together | CC-MAIN-2017-51 | en | refinedweb |
On Monday 28 April 2008, Andrew Morton wrote:
> On Mon, 28 Apr 2008 12:39:51 -0700 David Brownell <david-b@pacbell.net>)
>
> hm, does ths sysfs one-value-per-file rule apply to writes?

That *is* one value: a single command, to execute atomically! :)
ISTR seeing that done elsewhere, and have seen various proposals
that rely on that working. In any case, the one-per-file rationale
was to make things easier for userspace.

> > The D-space footprint is negligible, except for the sysfs resources
> > associated with each exported GPIO. The additional I-space footprint
> > is about half of the current size of gpiolib. No /dev node creation
> > involved, and no "udev" support is needed.
> >
> > ...
>
> Documentation for the interface?

Next version. If the "is this interface OK" part of the RFC flies,
then additional effort on docs would not be wasted.

> > +#include <linux/device.h>
> > +#include <linux/err.h>
>
> Putting includes inside ifdefs tends to increase the risk of compilation
> errors.

Good point, I'll move such stuff out of #ifdefs. I almost
did that this time, but this way made for a simpler patch. ;)

> > +static ssize_t gpio_direction_show(struct device *dev,
> > +		struct device_attribute *attr, char *buf)
> > +{
> > +	const struct gpio_desc *desc = dev_get_drvdata(dev);
> > +
> > +	/* handle GPIOs being removed from underneath us... */
> > +	if (!test_bit(FLAG_EXPORT, &desc->flags))
> > +		return -EIO;
> > +
> > +	return sprintf(buf, "%s\n",
> > +		test_bit(FLAG_IS_OUT, &desc->flags) ? "out" : "in");
> > +}
>
> What prevents FLAG_EXPORT from getting cleared just after we tested it?
>
> iow: this looks racy.

Yeah, the issue there is that the attribute files may
still be open after the GPIO has been unexported. For
this specific method the existing spinlock can solve
that problem ... but not in general.

Seems like a general fix for that should involve a mutex
covering all those sysfs actions: read, write, export,
and unexport. The unexport logic would need reworking,
but the rest could work pretty much as they do now (but
while holding that lock, which would protect that flag).
But that wouldn't change the user experience; the sysfs
attributes would still look and act the same.

> > +	if (buf[len - 1] == '\n')
> > +		len--;
> > +
> > +	if (len == 4 && strncmp(buf, "high", 4) == 0)
> > +		status = gpio_direction_output(gpio, 1);
> > +
> > +	else if (len == 3 && (strncmp(buf, "out", 3) == 0
> > +			|| strncmp(buf, "low", 3) == 0))
> > +		status = gpio_direction_output(gpio, 0);
> > +
> > +	else if (len == 2 && strncmp(buf, "in", 2) == 0)
> > +		status = gpio_direction_input(gpio);
>
> urgh.
>
> If we had a strcmp() variant which treats a \n in the first arg as a \0
> the above would become
>
> 	if (sysfs_streq(buf, "high"))
> 		status = gpio_direction_output(gpio, 1);
> 	else if (sysfs_streq(buf, "out") || sysfs_streq(buf, "low"))
> 		status = gpio_direction_output(gpio, 0);
> 	else if (sysfs_streq(buf, "in"))
> 		status = gpio_direction_input(gpio);

That would indeed be better. Maybe I should whip up a sysfs
patch adding that, and have this depend on that patch. (I've
CC'd Greg in case he has comments on that...)

Alternatively: strict_streq(), analogy to strict_strto*()?

> > +void gpio_unexport(unsigned gpio)
> > +{
> > +	unsigned long flags;
> > +	struct gpio_desc *desc;
> > +	int status = -EINVAL;
> > +
> > +	if (!gpio_is_valid(gpio))
> > +		return;
>
> Is that a programming error?

Yes. But as with quite a lot of "rmmod" type paths, there's
no way to report it to the caller. I'll add a pr_debug().

> > +/*
> > + * /sys/class/gpio/control ... write-only
> > + *	export N
> > + *	unexport N
> > + */
> > +static ssize_t control_store(struct class *class, const char *buf, size_t len)
> > +{
> > +	char *scratch = (char *)buf;
> > +	char *cmd, *tmp;
> > +	int status;
> > +	unsigned long gpio;
> > +
> > +	/* export/unexport */
> > +	cmd = strsep(&scratch, " \t\n");
>
> urgh. We cast away the const and then smash up the caller's const string
> with strsep.

Yeah, kind of ugly. Not particularly happy with that, but
I wasn't keen on allocating a temporary copy of that string
either...

> > +	if (!cmd)
> > +		goto fail;
> > +
> > +	/* N */
> > +	tmp = strsep(&scratch, " \t\n");
> > +	if (!tmp)
> > +		goto fail;
> > +	status = strict_strtoul(tmp, 0, &gpio);
> > +	if (status < 0)
> > +		goto done;
> > +
> > +	/* reject commands with garbage at end */
>
> strict_strtoul() already does that?

So it does. That was leftover from a version with
a more complex control interface. Easy to remove
the lines'o'code following *that* comment!

> > +	...
>
> The string handling in here seems way over-engineered. All we're doing is
> parting "export 42" or "unexport 42". Surely it can be done better than
> this.
>
> The more sysfs-friendly way would be to create separate sysfs files for the
> export and unexport operations, I expect.

Or just use negative numbers to mean "unexport";
ugly and hackish, but functional.

I don't want to see the components of one command
split into separate files, where they could be
clobbered by a second userspace activity ...

> > ...
> >
> > -#else
> > +#ifdef CONFIG_GPIO_SYSFS
> > +#define HAVE_GPIO_SYSFS
>
> Can we find a way to make HAVE_GPIO_SYSFS go away? Just use
> CONFIG_GPIO_SYSFS?

Yes, see below ... there's a minor penalty to be paid.

> > @@ -135,4 +150,15 @@ static inline void gpio_set_value_cansle
> >
> >  #endif
> >
> > +#ifndef HAVE_GPIO_SYSFS
> > +static inline int gpio_export(unsigned gpio)
> > +{
> > +	return -ENOSYS;
> > +}
> > +
> > +static inline void gpio_unexport(unsigned gpio)
> > +{
> > +}
> > +#endif /* HAVE_GPIO_SYSFS */
> > +
> >  #endif /* _ASM_GENERIC_GPIO_H */
>
> Then this can just be moved into a #else.

I'll just make it #ifndef CONFIG_GPIO_SYSFS ... that will make the
interface impossible to provide without gpiolib, but I'm not much
concerned about that.

Note that putting it *here* covers the case of platforms that provide
standard GPIO interfaces but don't use the newish gpiolib code to do
that ... which is why putting it in an #else (above) didn't suffice. | http://lkml.org/lkml/2008/4/28/701 | CC-MAIN-2017-51 | en | refinedweb |
We have a JSON object but no JSON.parse
Created attachment 31617 [details]
Initial JSON.parse support
Comment on attachment 31617 [details]
Initial JSON.parse support
> - LiteralParser preparser(callFrame, programSource);
> + LiteralParser preparser(callFrame, programSource, false);
I am not a fan of booleans for this sort of thing. A named enum would work better here.
> + LiteralParser preparser(exec, source, true);
I think preparser is a strange name here. How about just "parser".
> + if (JSValue parsedObject = preparser.tryJSONParse())
> + return parsedObject;
> +
> + return throwError(exec, SyntaxError, "Unable to parse JSON string");
I would have written this the other way around, with the early return for the error case. You couldn't scope the return value, but otherwise it would be better.
> + token.end = m_ptr += 4;
Merging these two statements into one line seems too clever and unnecessarily subtle and hence hard to read. Just m_ptr += 4 on one line and token.end = m_ptr on another should be fine.
> + case 'f': {
No need for this extra set of braces on all these cases.
> +static bool isSafeStringCharacter(UChar c)
> +{
> + return (c >= ' ' && c <= 0xff && c != '\\' && c != '"') || c == '\t';
> +}
Could I suggest that you use inline here?
Also, I think the ? : operator would be better. If it's >= ' ' you can do one set of checks, and if it's < ' ' you can do the '\t' check.
return c >= ' ' ? c <= 0xFF && c != '\\' && c != '"' : c == '\t';
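The suggested predicate is easy to sanity-check outside C++. A hypothetical Python transcription (one-character strings compare by code point, approximating the UChar comparison; illustrative only):

```python
# Python rendering of the reviewer's suggested ternary form:
# printable Latin-1 characters except backslash and quote, plus tab.
def is_safe_string_character(c):
    if c >= " ":
        return c <= "\xff" and c not in "\\\""
    return c == "\t"

print([is_safe_string_character(c) for c in ["a", "\t", '"', "\\", "\n"]])
# [True, True, False, False, False]
```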
> + if ((m_end - m_ptr) < 5) // uNNNN == 5 characters
Extra parentheses here seem unneeded.
> + token.stringToken.append(JSC::Lexer::convertUnicode(m_ptr[1], m_ptr[2], m_ptr[3], m_ptr[4]));
Does this do the right thing for illegal UTF-16?
> + case TokNull: {
> + case TokTrue: {
> + case TokFalse: {
These extra braces here are unneeded.
> + JSValue tryJSONParse()
I think it's strange that this parser has both a strict boolean in the constructor *and* a separate entry point for the JSON parse. Is that really necessary?
> + {
> + m_lexer.next();
> + JSValue result = parse(StartParseExpression);
> + if (m_lexer.currentToken().type != TokEnd)
> + return JSValue();
> + return result;
> + }
Does this really need to be inline? Can't some of the code just be defined normally without putting it all in the header?
Patch has test, but is missing the test results.
r=me as long as you deal with the test result issue, but please consider some of the other comments too.
Comment on attachment 31617 [details]
Initial JSON.parse support
removing review flag as there appears to be an error in the parser now
Created attachment 31622 [details]
JSON.parse, this time correct
This is identical to the previous patch except for a few minor changes. I re-added the string token reset i had moronically removed prior to submission. And added tests that catch the behaviour it introduced.
Restructured JSONProtoFuncParse to exit early on failure
Created attachment 31623 [details]
Address rest of darin's comments
Comment on attachment 31623 [details]
Address rest of darin's comments
You still have instances of
token.end = m_ptr += 5;
but you say you have fixed them locally. Now that you have removed the braces, you should probably decide on a consistent style for whether or not there are blank lines after the case statements inside of a switch. Other than that, r=me.
Committing to ...
M JavaScriptCore/ChangeLog
M JavaScriptCore/interpreter/Interpreter.cpp
M JavaScriptCore/runtime/JSGlobalObjectFunctions.cpp
M JavaScriptCore/runtime/JSONObject.cpp
M JavaScriptCore/runtime/LiteralParser.cpp
M JavaScriptCore/runtime/LiteralParser.h
M LayoutTests/ChangeLog
A LayoutTests/fast/js/JSON-parse-expected.txt
A LayoutTests/fast/js/JSON-parse.html
A LayoutTests/fast/js/resources/JSON-parse.js
Committed r44923
Build fix
Committing to ...
M JavaScriptCore/ChangeLog
M JavaScriptCore/runtime/LiteralParser.cpp
Committed r44924 | https://bugs.webkit.org/show_bug.cgi?id=26587 | CC-MAIN-2017-51 | en | refinedweb |
Flex and Soap Mismatch - joshua_shizny, Oct 27, 2009 3:20 PM
I have a problem with soap and flex 3. I have created a webservice through the import webservice menu in Flex Builder. If I use the service as is I get a security error because the crossdomain policy on the remote server doesn't comply. So, instead I am using a php proxy to relay the webservice through my server and out to the webservice back to the server back to Flex. When I try to do this I get a SOAP mismatch error coming from the below code.
else if (envNS.uri != SOAPConstants.SOAP_ENVELOPE_URI)
{
throw new Error("SOAP Response Version Mismatch");
}
I went back in and checked the value of envNS.uri and SOAPConstants.SOAP_ENVELOPE_URI in both the previously described situations (php proxy and straight security riddled call). In the security riddled call the two variables match. In the proxy call I get back differing values of envNS.uri and SOAPConstants.SOAP_ENVELOPE_URI.
Can somebody tell me why the variables are not matching when put through the php proxy. The php is simple, just curl, so I've pasted it below.
///////START PHP SNIPPET
$url = $_GET['url'];
$headers = $_GET['headers'];
$mimeType = $_GET['mimeType'];
//Start the Curl session
$session = curl_init();
// Don't return HTTP headers. Do return the contents of the call
curl_setopt($session, CURLOPT_URL, $url);
curl_setopt($session, CURLOPT_HEADER, ($headers == "true") ? true : false);
curl_setopt($session, CURLOPT_USERAGENT, $_SERVER['HTTP_USER_AGENT']);
curl_setopt($session, CURLOPT_FOLLOWLOCATION, 1);
curl_setopt($session, CURLOPT_RETURNTRANSFER, 1);
// Make the call
$response = curl_exec($session);
if ($mimeType != "")
{
// The web service returns XML. Set the Content-Type appropriately
header("Content-Type: ".$mimeType);
}
echo $response;
curl_close($session);
//END PHP SNIPPET
Any help would be great. Thanks,
Josh
1. Re: Flex and Soap Mismatch - joshua_shizny, Oct 27, 2009 5:23 PM (in response to joshua_shizny)
Some more info...
I'm running everything over https
When I create the wsdl code from flex builder wizard, I have to select an alternative port to connect with the soap1.1 version (on the same screen where you specify the services you want to connect to). Is it possible that when I run the php proxy and curl that I somehow lose the correct port to connect to 1.1 and get 1.2 response back. If so, anybody know how I could correct that?
2. Re: Flex and Soap Mismatch - joshua_shizny, Oct 27, 2009 6:41 PM (in response to joshua_shizny)
I just don't get the overall picture here with this wsdl stuff with flex, so I've got some more questions and more info. Firstly, I don't have a services-config.xml file... Do I need one? If so, how and where do I create it? I've seen a bunch of partial info on the subject but nothing really thorough. All my stuff is going from https to https, the site is hosted on https and the service is located on https. Does that matter? I have control over my server and can put cross-domain files on it. I have a proxy written with curl trying to relay the call to the service. Do I need to do anything special with that because I am going across https? I guess the port deal selector on the flex wsdl creation wizard is really a namespace and not a more traditional communication port number.. Is that correct?
3. Re: Flex and Soap Mismatch - joshua_shizny, Oct 27, 2009 6:42 PM (in response to joshua_shizny)
Also, here is my crossdomain file on the root of my server
<?xml version="1.0"?>
<!DOCTYPE cross-domain-policy SYSTEM "">
<cross-domain-policy>
<allow-http-request-headers-from
<allow-access-from
</cross-domain-policy>
Does this look good considering https?
4. Re: Flex and Soap Mismatch - joshua_shizny, Oct 27, 2009 6:47 PM (in response to joshua_shizny)
This is what I get if I don't pass a rootUrl in to the AbstractWebService constructor (that's how I use the php proxy)....
FaultEvent fault=[RPC Fault faultString="Security error accessing url" faultCode="Channel.Security.Error" faultDetail="Destination: DefaultHTTPS"] messageId=null type="fault" bubbles=true cancelable=true eventPhase=2]
Which, by the way when I'm working in debug mode on my computer works just fine until I upload to server.
5. Re: Flex and Soap Mismatch - joshua_shizny, Oct 27, 2009 7:23 PM (in response to joshua_shizny)
When instantiating webservice (implements webservice) with this line (no proxy/destination url, but rootUrl)
this.myService = new ModpayWeb(null, " %2Fmodpay.asmx%3Fwsdl");
I get a SOAP envelope of
which does not match my static const
public static const SOAP_ENVELOPE_URI:String = "";
and gives me a soap mismatch error
but when I go directly to the source and don't use the proxy I get back an envelope of
which does match my predefined const
but I get a security error channel .... DefaultHttps
??
6. Re: Flex and Soap Mismatch - joshua_shizny, Oct 28, 2009 7:37 AM (in response to joshua_shizny)
Anyone, any ideas? I'm really stuck here. I had somebody in the flex user group tell me that I should just run my wsdl through php and make a weborb / amfphp connection from flex to server but that is going to be a ton of work and it seems that using flex's built in wsdl stuff would be the way to go.
7. Re: Flex and Soap Mismatch - joshua_shizny, Oct 30, 2009 3:02 PM (in response to joshua_shizny)
Well,
After fighting for days trying to get Flex to call a PHP proxy to relay wsdl info because of security issues (no cross-domain file on the remote server), the company was nice enough to add me to their crossdomain file. So I thought: good. But I was wrong. I still get a security violation...
FaultEvent fault=[RPC Fault faultString="Security error accessing url" faultCode="Channel.Security.Error" faultDetail="Destination: DefaultHTTPS"] messageId=null type="fault" bubbles=true cancelable=true eventPhase=2
What is up with that? I thought that the cross-domain file would handle it. Here is the cross-domain file on the root of the wsdl service:
<?xml version="1.0" encoding="UTF-8"?>
<!-- the purpose of this document is to allow FLASH web applications to access the APIs. -->
<!DOCTYPE cross-domain-policy SYSTEM "">
<cross-domain-policy>
<allow-access-from
</cross-domain-policy>
I am at and am accessing a wsdl at https:// as well.
anybody know why this is still happening with crossdomain support?
Thanks,
8. Re: Flex and Soap Mismatch - joshua_shizny, Oct 31, 2009 10:58 AM (in response to joshua_shizny)
Well now it doesn't matter. I got it running through a php proxy. Here is what I ended up with. Hope this saves somebody else 3 days.
<?php
$soapRequest = file_get_contents("php://input");
$soapAction = $_SERVER['HTTP_SOAPACTION'];
$url = '';
$header[] = "POST /ws/theservice.asmx HTTP/1.1";
$header[] = "Host:";
$header[] = "Content-Type: text/xml; charset=utf-8";
$header[] = "Content-length: ".strlen($soapRequest);
$header[] = "SOAPAction: ".$soapAction;
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL,$url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_TIMEOUT, 10);
curl_setopt($ch, CURLOPT_CUSTOMREQUEST,'POST');
curl_setopt($ch, CURLOPT_HTTPHEADER, $header);
curl_setopt($ch, CURLOPT_POST, TRUE);
curl_setopt($ch, CURLOPT_POSTFIELDS, $soapRequest);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 0);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 0);
$result = curl_exec($ch);
echo $result;
curl_close($ch);
?>
9. Re: Flex and Soap Mismatch - joshua_shizny, Oct 31, 2009 11:02 AM (in response to joshua_shizny)
Answered
10. Re: Flex and Soap Mismatch - Krish.praveen, Mar 23, 2010 4:11 AM (in response to joshua_shizny)
Hi,
I am also facing an issue with web services.
Error: SOAP Response Version Mismatch
at mx.rpc.soap::SOAPDecoder/decodeEnvelope()[C:\autobuild\3.3.0\frameworks\projects\rpc\src\ mx\rpc\soap\SOAPDecoder.as:266]
at mx.rpc.soap::SOAPDecoder/decodeResponse()[C:\autobuild\3.3.0\frameworks\projects\rpc\src\ mx\rpc\soap\SOAPDecoder.as:236]
Previously, my web services were deployed on a 32-bit server; now they have moved to a 64-bit server.
As you found the answer for this, can you help me fix the same issue?
Any further details you are looking for, let me know.
Thanks,
Krishna | https://forums.adobe.com/thread/513832
In TCP protocol services, clients send text or binary messages to the service and receive responses from it. Apache JMeter™'s TCP Sampler and TCP Sampler Config elements let you load test these types of services.
This blog post explains how to test a simple TCP server that works in the echo mode (the server responds with the same data that has been sent to it). The JAR file and java code of this TCP server can be downloaded from here and launched on your local network. In our JMeter script, we will send data to the server via TCP sampler, get responses from the server and analyze them. Then, we will add a TCP Sampler Config.
Let’s get started.
The TCP Sampler
Add a thread group to a new JMeter test plan: Right click > Add > Threads > Thread Group.
Add the TCP sampler as a child element to the thread group: Right click > Sampler > TCP Sampler.
There are three implementations of the TCP sampler in JMeter: one for exchanging text data and two others for exchanging binary data with the tested service.
Each of the implementations is provided by a certain class, which is a "template" or a group of attributes of an object. Enter the name of the class into the TCPClient class name field of the TCP sampler.
The class names, as they are given in the JMeter documentation, are:
TCPClientImpl. This is a basic class for implementation of the text messages exchange. Text messages are provided as a constant or as variable strings of different charsets in the Text to send field of the TCP sampler.
BinaryTCPClientImpl. This is a class for the implementation of the exchange of binary messages. Binary messages in the form of hex encoded constant or variable values are provided in the 'Text to send' field of the TCP sampler.
LengthPrefixedBinaryTCPClientImpl. This class is similar to the previous one, but byte data to send is prefixed with the binary length byte.
The Basic Class Implementation
Let’s use the basic implementation of the TCP server in our script.
The TCPClient class name field has the name of the class. In this case, it is TCPClientImpl. As this is the default class name, it may be left out.
The server name or IP is where the simple TCP server is launched. The Port field is the port the TCP server listens to.
The Text to send field contains the text that is sent to the server. To demonstrate that the Text to send field is able to process JMeter variables and functions, I've changed the loop count parameter in the thread group to 2 and added the ${__iterationNum()} function to the TCP sampler text. This function prints out the number of the current iteration of the currently executed thread group.
Add a View Results Tree listener to the thread group to monitor responses from the TCP server: Right click > Add > Listener > View Results Tree.
Save the script, launch a simple TCP server application with the same port parameter that is configured in the script, and launch the script.
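If the server JAR isn't at hand, a small stand-in can be sketched in Python. The "Packet N:" reply format below is assumed from this article's description of the server, so the real JAR's output may differ slightly:

```python
import socket
import threading

def start_echo_server(host="127.0.0.1", port=0):
    """Tiny stand-in for the echo server used in this article: it sends
    each request back prefixed with an information line containing the
    packet number (format assumed; the real JAR may differ)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)

    def serve():
        conn, _ = srv.accept()
        packet = 0
        with conn:
            while True:
                data = conn.recv(4096)
                if not data:
                    break
                packet += 1
                # Echo the request back, prefixed with the packet number.
                conn.sendall(b"Packet %d: \n" % packet + data)

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()[1]  # the port actually bound
```

Point the TCP sampler's Server Name or IP and Port fields at this process to experiment without the JAR.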
Open the View Results Tree Listener. As was mentioned above, the tested TCP server works in the echo mode but adds an information string with the packet number processed to each request. In the screenshot below, you can see the response of the TCP sampler:
These are all required configuration settings that have to be applied to the TCP sampler. But in some cases, when we have to take into account some specific parameters of the network or tested server, we need to apply additional settings:
- The Connect timeout parameter determines the maximum time in milliseconds the particular TCP sampler waits for the establishment of the connection from the server.
- The Response timeout parameter determines the maximum time the particular TCP sampler waits for the response from the server. If you need to consider the network propagation time or the server response time, you may tune the sampler with these parameters in order to avoid receiving too many timed out samplers.
- The Re-use connection parameter instructs the script to reuse the connection established by the previous sample for sending data, instead of opening a new one each time.
- The Close connection parameter instructs the script to close connection and open a new one each time the new data is sent.
- If the Set NoDelay parameter is set, small messages, even those the size of only 1 byte, are sent in a separate packet. Otherwise, if this parameter is cleared, a number of small messages is combined in one packet before sending them.
- The SO_LINGER option can be enabled on the socket to cause the script to block the close-connection call for a number of seconds, specified in this field, until all final data is delivered to the destination station.
The Binary Class Implementation
If we need to exchange binary data with the TCP server, we need to use the BinaryTCPClientImpl class and specify this class name in the TCPClient class name field of the TCP sampler. The binary data is entered into the Text to send field in the hex encoded format, as shown in the screenshot below.
Let’s add an additional TCP sampler that sends binary data to the TCP server. The sampler is shown in the screenshot below:
In the Text to send field, let's send a sequence of two-digit hex numbers that represent a sequence of bytes, and see how the TCP sampler response is represented. The sequence of bytes should end with a value that is less than 20. In this example, the binary data in the hex-encoded format that is sent to the TCP server is 63646f0a.
The View Result Tree listener displays the received data from the TCP sampler in the hex-encoded format without converting it to chars:
The hex encoded data in the response is 5061636b657420333a200a63646f0a. The data that was sent to the TCP server is at the end (underlined in the screenshot). Before it appears the hex encoded sequence that represents the packet number information; each pair of hex digits is the ASCII code of one character.
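You can verify this decoding outside of JMeter, for example in Python:

```python
# Decode the hex-encoded response captured above.
resp = bytes.fromhex("5061636b657420333a200a63646f0a")
sent = bytes.fromhex("63646f0a")

assert resp.endswith(sent)          # the echoed payload sits at the end
info = resp[:-len(sent)].decode("ascii")
print(repr(info))                   # 'Packet 3: \n' - the packet-number line
```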
The Length Prefixed Binary Class Implementation
The length prefixed binary TCP client implementation is similar to the binary implementation, with the difference that the number of bytes sent is inserted before each message. JMeter inserts this number automatically before each message that it sends to the TCP server. The TCP server, in turn, calculates the number of bytes in the response it sends to JMeter and inserts this number before the message. This fact should be taken into account.
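The framing idea can be sketched as follows. The 4-byte big-endian prefix below is only an illustrative assumption; JMeter's actual prefix width is configurable and its default may differ:

```python
import struct

def frame(payload: bytes) -> bytes:
    """Prefix the payload with its length (4-byte big-endian chosen
    here for illustration; JMeter's prefix width is configurable)."""
    return struct.pack(">I", len(payload)) + payload

def unframe(data: bytes) -> bytes:
    """Strip the length prefix and return exactly that many bytes."""
    (n,) = struct.unpack(">I", data[:4])
    return data[4:4 + n]

msg = b"cdo\n"
framed = frame(msg)
print(framed[:4])        # b'\x00\x00\x00\x04' - the length prefix
```

The receiver must account for the prefix on both directions, which is the "fact to be taken into account" mentioned above.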
The TCP Sampler Config
Like the HTTP request defaults config element that defines common HTTP parameters for all HTTP samplers in a script, the TCP Sampler Config element defines common parameters for all TCP samplers that are used in a script.
Add the TCP Sampler Config to the script before the thread group, place common parameters, such as the server name and port number, in it, and remove these parameters from the TCP samplers created before.
The TCP sampler can be used together with all preprocessors, postprocessors and assertion elements to make scripts configurable and provide the response analysis.
Modify the script so that it reads text data from files, adds data to the TCP sampler, extracts some data from the response, and asserts against some values. To do that, add the BeanShell preprocessor that will read data from files and feed them to the TCP sampler.
The added preprocessor is shown on the screenshot below:
The preprocessor code:
import java.io.*;
import org.apache.jmeter.protocol.tcp.sampler.*;

String textToSend = "";
String fline = "";
FileReader fR = new FileReader("${fname}");
BufferedReader bR = new BufferedReader(fR);
while ((fline = bR.readLine()) != null) {
    textToSend = textToSend + fline;
}
bR.close();
sampler.setRequestData(textToSend);
To output the TCP Response packet number to the system log and to assert against the text Packet in each response, add a BeanShell PostProcessor, Regular Expression Extractor elements, and an assertion.
The regular expression extractor extracts the packet number, using the pattern Packet (.+): and places it in the variable packNum.
The BeanShell postprocessor prints this variable out to the system log, using the function print(${packNum}).
The assertion checks that each response contains the packet number string. The pattern that is used is Packet \d\d.
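The extractor and assertion logic is easy to reproduce outside JMeter; the sample response string below is made up for illustration:

```python
import re

response = "Packet 13: \nsome echoed text\n"   # hypothetical sample response

# Same pattern as the Regular Expression Extractor: Packet (.+):
pack_num = re.search(r"Packet (.+):", response).group(1)
print(pack_num)                                # 13

# Same check as the response assertion, pattern: Packet \d\d
assert re.search(r"Packet \d\d", response) is not None
```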
As you can see, you can work with the TCP sampler by using JMeter-provided implementations. If you need to use more specific processing algorithms for communications over TCP, you may implement them in your own class. All you need is to extend an existing class, for example the BinaryTCPClientImpl class, implement what you need, and then insert the name of the created class into the TCP client class name field of the TCP sampler.
https://dzone.com/articles/how-to-load-test-tcp-protocol-services-with-jmeter
unishark 0.3.2
A test framework extending unittest, providing flexible test suites config, concurrent execution, Html/XUnit reports, and data driven utility.
INTRODUCTION
unishark extends unittest (to be accurate, unittest2) in the following ways:
- Customizing test suites with dictionary config (or yaml/json like config).
- Running the tests concurrently at different levels.
- Generating polished test reports in HTML/XUnit formats.
- Offering data-driven decorator to accelerate tests writing.
For existing unittests, the first three features could be obtained immediately with a single config, without changing any test code.
The Test Config
Here is an example config in YAML format (you could also write it directly in a dict()):
suites:
  my_suite_name_1:
    package: my.package.name
    groups:
      my_group_1:
        granularity: module
        modules: [test_module1, test_module2]
        except_classes: [test_module2.MyTestClass3]
        except_methods: [test_module1.MyTestClass1.test_1]
      my_group_2:
        granularity: class
        disable: False
        classes: [test_module3.MyTestClass5]
        except_methods: [test_module3.MyTestClass5.test_11]
    concurrency:
      level: module
      max_workers: 2
  my_suite_name_2:
    package: my.package.name
    groups:
      my_group_1:
        granularity: method
        methods: [test_module3.MyTestClass6.test_13, test_module3.MyTestClass7.test_15]
    concurrency:
      level: class
      max_workers: 2
  my_suite_name_3:
    package: another.package.name
    groups:
      group_1:
        granularity: package
        pattern: '(\w+\.){2}test\w*'
        except_modules: [module1, module2]
        except_classes: [module3.Class1, module3.Class3]
        except_methods: [module3.Class2.test_1, module4.Class2.test_5]
    concurrency:
      level: method
      max_workers: 20
reporters:
  html:
    class: unishark.HtmlReporter
    kwargs:
      dest: logs
      overview_title: 'Example Report'
      overview_description: 'This is an example report'
  xunit:
    class: unishark.XUnitReporter
    kwargs:
      summary_title: 'Example Report'
test:
  suites: [my_suite_name_1, my_suite_name_2, my_suite_name_3]
  concurrency:
    type: processes
    max_workers: 3
  reporters: [html, xunit]
  name_pattern: '^test\w*'
It configures 3 test suites with some of the test cases excluded, and running the defined set of tests concurrently, and generating both HTML and XUnit (default JUnit) format reports at the end of tests.
NOTE: In 0.2.x versions, ‘max_workers’ was set directly under ‘test’, and ‘max_workers’ and ‘concurrency_level’ were set directly under ‘{suite name}’.
To run it, simply add the following code:
import unishark
import yaml

if __name__ == '__main__':
    with open('your_yaml_config_file', 'r') as f:
        dict_conf = yaml.load(f.read())  # use a 3rd party yaml parser, e.g., PyYAML
    program = unishark.DefaultTestProgram(dict_conf)
    unishark.main(program)
A HTML report example can be found Here.
Data Driven
Here are some effects of using @unishark.data_driven.
‘Json’ style data-driven:
@unishark.data_driven(*[{'userid': 1, 'passwd': 'abc'}, {'userid': 2, 'passwd': 'def'}])
def test_data_driven(self, **param):
    print('userid: %d, passwd: %s' % (param['userid'], param['passwd']))
Results:
userid: 1, passwd: abc
userid: 2, passwd: def
‘Args’ style data-driven:
@unishark.data_driven(userid=[1, 2, 3, 4], passwd=['a', 'b', 'c', 'd'])
def test_data_driven(self, **param):
    print('userid: %d, passwd: %s' % (param['userid'], param['passwd']))
Results:
userid: 1, passwd: a
userid: 2, passwd: b
userid: 3, passwd: c
userid: 4, passwd: d
Cross-multiply data-driven:
@unishark.data_driven(left=list(range(10)))
@unishark.data_driven(right=list(range(10)))
def test_data_driven(self, **param):
    l = param['left']
    r = param['right']
    print('%d x %d = %d' % (l, r, l * r))
Results:
0 x 1 = 0
0 x 2 = 0
...
1 x 0 = 0
1 x 1 = 1
1 x 2 = 2
...
...
9 x 8 = 72
9 x 9 = 81
You can get the permutations (with repetition) of the parameters values by doing:
@unishark.data_driven(...)
@unishark.data_driven(...)
@unishark.data_driven(...)
...
Multi-threads data-driven in ‘json style’:
@unishark.multi_threading_data_driven(2, *[{'userid': 1, 'passwd': 'abc'}, {'userid': 2, 'passwd': 'def'}])
def test_data_driven(self, **param):
    print('userid: %d, passwd: %s' % (param['userid'], param['passwd']))
Results: same results as using unishark.data_driven, but up to 2 threads are spawned, each running the test with a set of inputs (userid, passwd).
Multi-threads data-driven in ‘args style’:
@unishark.multi_threading_data_driven(5, time=[1, 1, 1, 1, 1, 1, 1, 1, 1, 1])
def test_data_driven(self, **param):
    sleep(param['time'])
Results: 5 threads are spawned to run the test with 10 sets of inputs concurrently (only sleep 1 sec in each thread). It takes about 2 sec in total (10 sec if using unishark.data_driven) to run.
For more information please visit the Project_Home and read README.md.
CHANGELOG
0.3.2 (2015-11-24)
- added multiprocessing suites (which can bypass CPython’s GIL and utilize multi-cores).
- modified result, runner and reporter classes to be picklable for multiprocessing.
- supported running with Jython.
0.3.1 (2015-11-12)
- fixed the issue of still running test methods even when setUpClass/setUpModule raises exception in concurrency mode.
- fixed error descriptions of class or module level fixtures when they raise exceptions.
0.3.0 (2015-11-06)
- rewrote concurrent execution model. Now test fixtures setUpModule/tearDownModule setUpClass/tearDownClass will be executed once and only once no matter what concurrency level(module/class/method) of a suite is. Fixed the problem that module fixtures were executed multiple times when concurrency level was ‘class’ or ‘method’, and class fixtures were executed multiple times when concurrency level was ‘method’.
- changed the format of the concurrency-related settings in the dict config. Now ‘max_workers’ and ‘level’ are keys in the ‘concurrency’ sub-dict.
- moved BufferedTestResult class from the runner module to the new result module which makes more sense.
0.2.3 (2015-10-01)
- enabled ‘module’ and ‘method’ level concurrent execution in a suite.
0.2.2 (2015-08-12)
- support loading tests from a package with pattern matching, and excluding modules/classes/methods from the loaded tests.
- add load_tests_from_package and load_tests_from_modules api.
- rename load_test_from_dict to load_tests_from_dict.
- fix that verbose stdout mode does not print test method doc string.
- fix that tests loaded with method granularity are not filtered by method name pattern.
- less strict dependency versions.
0.2.1 (2015-05-11)
- support data-driven with multi-threads.
0.2.0 (2015-04-04)
- support running tests in parallel.
- support configuring test suites, test reporters and concurrent tests in a single dict/yaml config.
- improve HtmlReporter and XUnitReporter classes to be thread-safe.
- allow user to generate reports with their own report templates.
- allow user to filter loaded test cases by setting method name prefix in the test config.
- bugs fix.
- improve documentation.
0.1.2 (2015-03-25)
- hotfix for setup.py (broken auto-downloading dependencies)
- bugs fix.
0.1.1 (2015-03-24)
- support loading customized test suites.
- support a thread-safe string io buffer to buffer logging stream during the test running.
- support writing logs, exceptions into generated HTML/XUnit reports.
- offer a data-driven decorator.
- initial setup (documentation, setup.py, travis CI, coveralls, etc.).
- Author: Ying Ni
- Keywords: test framework,unittest extension,concurrent,data driven
- License: Apache License, Version 2.0
- Categories
- Development Status :: 5 - Production/Stable
- Intended Audience :: Developers
- License :: OSI Approved :: Apache Software License
- Operating System :: MacOS :: MacOS X
- Operating System :: POSIX :: Linux
- Programming Language :: Python :: 2.7
- Programming Language :: Python :: 3.3
- Programming Language :: Python :: 3.4
- Programming Language :: Python :: 3.5
- Programming Language :: Python :: Implementation :: Jython
- Topic :: Software Development :: Testing
- Package Index Owner: yni
- DOAP record: unishark-0.3.2.xml | https://pypi.python.org/pypi/unishark/
MACS, USP.
Hugh Anderson February 10, 2000
Preface
In the study of any structured discipline, it is necessary to:

• appreciate the background to the discipline,
• know the terminology, and practice
• understand elements of the framework (hence the word structured).

Data communication is no different, so we begin by looking to the past, and studying the 'history' of data communication. Data communication is just one kind of communication, so we continue with general communication theory (Fourier, Nyquist and Shannon). We also look at the equipment in current use before considering elements of data communications within a framework, known as the OSI reference model.
Contents
1 Background 1
  1.1 Prehistory 1
  1.2 Recent history 4
  1.3 Communication theory 7
    1.3.1 Analog & digital 7
    1.3.2 Fourier analysis 8
    1.3.3 Shannon and Nyquist 10
    1.3.4 Baseband and modulated signals 11
  1.4 Media 11
  1.5 Computer hardware 14
    1.5.1 Backplane & IO busses 14
    1.5.2 Parallel port 15
    1.5.3 Serial port 16
    1.5.4 Keyboard 17
    1.5.5 SCSI port 17
    1.5.6 Macintosh LLAP 18
    1.5.7 Monitor cable 18
    1.5.8 I2C 19
  1.6 Standards organizations 20
2 OSIRM 22
  2.1 The layers 23
  2.2 Example 24
  2.3 Sample protocols 26
3 Layer 1 - Physical 27
  3.1 Sample 'layer 1' standards 27
    3.1.1 Token ring cabling 28
    3.1.2 Localtalk cabling 28
    3.1.3 UTP cabling 29
    3.1.4 Fibre optic cabling 29
    3.1.5 Thin and thick ethernet cabling 30
  3.2 Addressing 31
  3.3 Spectrum 31
  3.4 Signals and cable characteristics 32
  3.5 Noise 35
  3.6 Electrical safety 36
  3.7 Synchronization 37
  3.8 Digital encoding 38
  3.9 Modems 38
  3.10 Diagnostic tools 40
4 Layer 2 - Datalink 42
  4.1 Sample standards 44
    4.1.1 HDLC 44
    4.1.2 Ethernet 44
    4.1.3 PPP 44
    4.1.4 LLAP 45
  4.2 Addressing 45
  4.3 Modes 46
  4.4 Framing 46
    4.4.1 Bit stuffing 47
    4.4.2 Byte stuffing 47
  4.5 Error detection 47
  4.6 Error correction 48
    4.6.1 Hamming 48
    4.6.2 Feed forward error correction 49
  4.7 Datalink protocols 49
  4.8 Sliding windows 51
  4.9 MAC sublayer 52
    4.9.1 CSMA/CD 53
  4.10 Diagnostic tools 54
5 Layer 3 - Network 56
  5.1 Sample standards 57
  5.2 Addressing 59
    5.2.1 IP Addressing 59
    5.2.2 IP network masks 60
    5.2.3 IPX addressing 60
    5.2.4 Appletalk Addressing 61
  5.3 IP packet structure 61
  5.4 Allocation of IP Addresses 63
  5.5 Translating addresses 63
  5.6 Routing 64
    5.6.1 Routing Protocols 64
  5.7 Configuration 67
    5.7.1 Addressing 67
    5.7.2 Routing 68
  5.8 Diagnostic tools 69
6 Layers 4,5 - Transport, Session 70
  6.1 Sample transport standards 71
  6.2 Session standards and APIs 72
  6.3 Addressing 73
  6.4 Transport layer 74
    6.4.1 TCP 75
    6.4.2 UDP 76
  6.5 Session layer 76
    6.5.1 Sockets 76
    6.5.2 RPC 78
  6.6 Configuration & implementation 80
    6.6.1 UNIX 80
    6.6.2 DOS and Windows redirector 80
    6.6.3 Win95/NT 82
  6.7 Diagnostic tools 83
7 Higher layers 84
  7.1 Sample standards 84
  7.2 Addressing 85
    7.2.1 NCP 85
    7.2.2 IP and the DNS 85
    7.2.3 MacIntosh/Win 95/NT 86
  7.3 Encryption 87
    7.3.1 Shared Keys 87
    7.3.2 Ciphertext 87
    7.3.3 Product Ciphers 88
    7.3.4 DES - Data Encryption Standard 88
    7.3.5 Public key systems 89
  7.4 SNMP & ASN.1 91
  7.5 Diagnostic tools 92
8 Application areas 94
  8.1 Netware 94
    8.1.1 File serving 94
    8.1.2 SMB and NT domains 99
    8.1.3 NFS 101
  8.2 Printing - LPR and LPD 102
  8.3 Web services 102
    8.3.1 Java 102
  8.4 X 103
  8.5 Thin clients 105
    8.5.1 WinFrame & WinCenter 105
    8.5.2 VNC 105
  8.6 Electronic mail 106
9 Other topics 107
  9.1 ATM 107
  9.2 CORBA 107
  9.3 DCOM/OLE 109
  9.4 NOS 109
    9.4.1 DCE 110
A ASCII table 115
B Java code 117
C Sockets code 121
Chapter 1 Background
1.1 Prehistory
Data communication techniques predate computers by at least a hundred years - for example the MORSE code for communication over telegraph wires, shown in Table 1.1. Long before we had radio, telegraph wires carried messages from one end of a country to another. At each end of the wire, the telegraph operators used MORSE to communicate.

MORSE of course is just an encoding technique. The basic elements of the code are two signals - one with a short duration, and one long. The signals were transmitted by just switching an electric current on and off (with a switch), and were received by checking the deflection of a magnet. Each letter is made from some sequence of short and long signals.

If you examine the codes, you find that the most common letters are the ones with the shortest codes. The most common letters in western languages are (in order): E T A O I N S H R D L U ... and sure enough - the E is the shortest code (a single .). T is next with -, followed by A (.-) and so on. Obviously when the MORSE code was developed, someone was concerned with efficiency.

Ham radio enthusiasts (Hams) use MORSE, but with higher level controls (protocols). They use short codes to encode commonly used requests or statements. They are known as the 'Q' codes

A .-    B -...  C -.-.  D -..
E .     F ..-.  G --.   H ....
I ..    J .---  K -.-   L .-..
M --    N -.    O ---   P .--.
Q --.-  R .-.   S ...   T -
U ..-   V ...-  W .--   X -..-
Y -.--  Z --..

Table 1.1: Morse Code.
Morse  Voice          Meaning
K      Go ahead.      Anyone go ahead
AR     Over.          Use at end of transmission before contact made
AS     Stand by.      Temporary interruption
R      Roger!         Transmission received OK
SK     Clear.         End of contact
CL     Closing down.  Shutting down transmitter

        Caller                          Callee
Morse   CQ CQ DE ZL2ASB K
Voice   CQ this is ZL2ASB, go ahead.
Morse                                   ZL2ASB DE ZL2QW AR
Voice                                   ZL2ASB from ZL2QW, over

Table 1.2: Ham calls.
and some are amusing. In Table 1.2 are some of the common codes used by Hams. When a Ham wishes to make contact with anyone else, the call is as shown above.

• CQ is a request for contact
• ZL2ASB is the call sign
• K means anyone go ahead
• AR means you want only one reply

In data communication terminology, Ham MORSE and voice transmission is asynchronous and half duplex (see section 3). The list of special protocol messages given above is by no means complete. It is also not particularly well structured or documented, and it is possible for Hams to talk over the top of each other without much difficulty. This of course does not normally matter.

However, simple protocols like this can cause problems. In England in 1861, a poorly constructed protocol failed after 21 years of operation. In the ensuing mess, over 20 people died and a large number of people were hospitalized. The protocol's function was to ensure that only one train could be in the Clayton Tunnel (see Figure 1.1) at a time. At each end of the tunnel was a signal box, with a highly trained signaller, and some hi-tech (for the 19th century) signalling equipment involving a three-way switch and an indicator. The signaller could indicate any of three situations to the other signaller: (i) nothing at all, (ii) Train in Tunnel, and (iii) Tunnel Clear. The signallers had a red flag which they would place by the track to signal to the train drivers to stop, and everyone followed the following protocol:
(Diagram: a telegraph signaller at each end of the Clayton Tunnel - August 1861.)
Figure 1.1: Recipe for disaster - the Clayton Tunnel protocol failure.

Signaller:
• See train (entering or leaving tunnel)
• Signal 'Train in Tunnel' or 'Tunnel Clear'
• Set or clear red flag

Train Driver:
• See red flag
• Don't enter tunnel

Seems OK doesn't it?
This is what happened:
• The first train entered the tunnel, and the signaller sent a 'Train in Tunnel' indication to the other signaller. He (it was a he) went out to set the red flag ... just as a second train whizzed past. The signaller thought about this for a while and then sent a second 'Train in Tunnel' indication.

• At the other end of the tunnel, the signaller received two messages, and then a single train came out of the tunnel. He signalled 'Tunnel Clear', waited another few minutes, and then sent a second 'Tunnel Clear' message. Where was the second train?

• Well - the train driver had seen the flag, and decided to be cautious, so he stopped the train and then started backing out of the tunnel. Meanwhile, the first signaller thought that all was well and waved the next train on through.

A flaw in a protocol had led to two trains colliding - inside the tunnel - one going forward and one in reverse.
1.2 Recent history
Since the early days of modern computing there has been a steadily increasing need for more sophisticated data transfer services.

In the 1950s, the early computers were simple machines with 'single-task' operating systems. They typically had a single console, and data was entered either on cards or on paper tape.

In the 1960s, demand for large scale data entry grew, and the 'on-line' batch system was developed. The operating systems were better, and allowed many terminals to be connected to the same computer. These terminals typically would allow deferred data entry - that is, the files would not be updated until a 'batch' of data was ready. The data entry was often done during the day, the processing of the data at night.

In the 1970s, on-line integrated systems were developed. These allowed terminals to access the files, as well as update them. Database technology allowed immediate display of the effect of completed transactions. Integrated systems generated multiple transactions from single entries.

Since the 1980s we have moved to distributed databases and processing. In the 80s and 90s (see Figure 1.2), we see large machines - even the workstations have significant computing power. There are many more options for interconnecting the machines.
(Figure panels: 50s - a simple computer; 60s & 70s - a computer with terminals; 80s & 90s - multiple distributed systems.)
Figure 1.2: Computer development.

At this stage we should differentiate between distributed systems and computer networks. A distributed system is a computer network in which the processing may be distributed amongst the computers. The user may not be aware on which computer the software is actually running. A computer network by contrast has a collection of communicating processors, but each user's processing is done on a single computer. Modern networks are often mixed.

The principal reasons for computer networks then are resource sharing, saving money and reliability. The principal applications are:

• remote programs (rather than multiple copies)
• remote databases (as in the banking system, airline reservations)
• value added communications (service)

We also have widely varying speed requirements in computer networks, and both tight and loose couplings between machines, and hence there is no one solution to the computer networking problem. Computer networks range from small systems interconnecting chips on a single PCB¹, up to world wide computer networks. There are many possible ways of connecting the machines together (topologies), the main ways being the star, ring and bus systems. In general the larger
Figure 1.3: Network topologies (star, ring and bus).
networks are irregular and may contain sections with each of the above topologies. The smaller networks are generally regular. The largest computer network in the world is the Internet², which interconnects over 1,000,000 computers worldwide. The stability, management and topology of this network varies. In some areas, it is left up to University departments - in others there are strong commercial interests.
¹ A high bit rate serial link between the processors (such as that found on a transputer).
² Note that you should always capitalize the word Internet if you are referring to this 'internet'. If you are referring to any 'interconnected network', you can use internet.
Figure 1.4: Digital and analog Signals.
1.3 Communication theory
When studying the transfer of data generally, there are some underlying physical laws, representations and limits to consider. Is the data analog or digital? What limits are placed on it? How is it to be transmitted?
1.3.1 Analog & digital
An analog signal is a continuous valued signal. A digital signal is considered to only exist at discrete levels. Time domain diagrams are commonly used when considering signals. If you use an oscilloscope, the display normally shows something like that shown in figure 1.4. The plot is amplitude versus time.

With any analog signal, the repetition rate (if it repeats) is called the frequency, and is measured in Hertz (pronounced 'hurts', and written Hz). The peak to peak signal level is called the amplitude. The simplest analog signal is called the sine wave. If we mix these simple waveforms together, we may create any desired waveform.

In figure 1.5, we see the sum of two sine waves - one at a frequency of 1,000Hz, and the other at three times the frequency (3,000Hz). The amplitudes of the two signals are 1 and 1/3 respectively, and the sum of the two waveforms, shown below, approximates a square wave.

Figure 1.5: Sum of sine waveforms.

If we were to continue summing these waves, in the same progression, the resultant waveform would be a square wave:

Σ (n = 1 to ∞, odd n) (1/n) sin(2πnft) ⇒ a square wave of frequency f
We may also represent these signals by frequency domain diagrams, which plot the amplitude against frequency. This alternative representation is also shown in figure 1.5.
1.3.2 Fourier analysis
One way of representing any simple periodic function is in this way - as a sum of simple sine (and cosine) waveforms. This representation method is known as 'Fourier Analysis' after Jean-Baptiste Fourier, who first showed the technique. We start with the equation for constructing an arbitrary waveform g(t):

g(t) = a₀ + Σ (n = 1 to ∞) aₙ cos(2πnft) + Σ (n = 1 to ∞) bₙ sin(2πnft)

f is the fundamental frequency of the waveform, and aₙ and bₙ are the amplitudes of the sine and cosine components at each of the harmonics of the fundamental. Since aₙ and bₙ are the only unknowns here, it is easy to see that if we know the fundamental frequency, and the amplitudes aₙ and bₙ, we may reconstruct the original signal g(t). For any g(t) we may calculate a₀, aₙ and bₙ by first noting that the integral over the interval [0, T] will be zero for the summed terms:

a₀ = (1/T) ∫₀ᵀ g(t) dt, where T = 1/f
and by multiplying both sides of the equation by sin(2πkft), and noting that:

∫₀ᵀ sin(2πkft) sin(2πnft) dt = T/2 for k = n, and 0 for k ≠ n

and

∫₀ᵀ sin(2πkft) cos(2πnft) dt = 0

we can then integrate to get:

bₖ = (2/T) ∫₀ᵀ g(t) sin(2πkft) dt

Similarly, by multiplying by cos(2πkft), we get

aₖ = (2/T) ∫₀ᵀ g(t) cos(2πkft) dt

A bipolar square wave gives aₖ = 0, and

b₁ = 1, b₂ = 0, b₃ = 1/3, b₄ = 0, b₅ = 1/5, b₆ = 0, ...

We re-create our waveform by summing the terms below:

(4/π) (sin(2πft) + (1/3) sin(6πft) + (1/5) sin(10πft) + (1/7) sin(14πft) + ...)
In figure 1.6, we see four plots, showing the resultant waveforms if we sum:

• the first term
• the first two terms
• the first three terms (and so on...)

As we add more terms, the plot more closely approximates a square wave. Note that there is a direct relationship between the bandwidth of a channel passing this signal, and how accurate it is.

• If the original (square) signal had a frequency of 1,000Hz, and we were attempting to transmit it over a channel which only passed frequencies from 0 to 1,000Hz, we would get a sine wave.
• If the channel passed frequencies from 0 to 3,000Hz, we would get a waveform something like the third one down in figure 1.6.

Another way of stating this is to point out that the higher frequency components are important - they are needed to re-create the original signal faithfully. If we had two 1,000Hz signals, one a triangle, one a square wave - if they were both passed through the 1,000Hz bandwidth limited channel above, they would look identical (a sine wave).
Figure 1.6: Successive approximations to a square wave.
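The successive approximations plotted in figure 1.6 can be reproduced numerically. The following sketch (the function name is invented for illustration) sums the first few odd harmonics with the amplitudes 4/(πn) derived above:

```python
import math

def square_wave_partial_sum(t, f, terms):
    """Sum the first `terms` odd harmonics of the square wave series:
    (4/pi) * (sin(2*pi*f*t) + sin(6*pi*f*t)/3 + sin(10*pi*f*t)/5 + ...)."""
    total = 0.0
    for k in range(terms):
        n = 2 * k + 1                                   # odd harmonics only: 1, 3, 5, ...
        total += math.sin(2 * math.pi * n * f * t) / n
    return (4 / math.pi) * total

# At the peak of the cycle (t = T/4), the partial sums converge towards 1.
f = 1000.0
for terms in (1, 2, 10, 100):
    print(terms, round(square_wave_partial_sum(0.25 / f, f, terms), 3))
```

With one term the peak overshoots to 4/π ≈ 1.27 (a pure sine wave); with many terms the value settles towards 1, just as the plots suggest.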
1.3.3 Shannon and Nyquist
Other important relationships found in data communications relate the bandwidth, data transmission rate and noise. Nyquist shows us that the maximum data rate over a limited bandwidth (H) channel with V discrete levels is:

Maximum data rate = 2H log₂ V bits/sec

For example, two-level data cannot be transmitted over the telephone network faster than 6,000 bps, because the bandwidth of the telephone channel is only about 3,000Hz. Shannon extended this result for noisy (thermal noise) channels:

Maximum data rate = H log₂ (1 + S/N) bits/sec

A worked example, with a telephone bandwidth of 3,000Hz, and using 256 levels:

D = 2 × 3000 × (log₂ 256) bps = 6000 × 8 bps = 48000 bps

But, if the S/N was 30dB (about 1024:1):

D = 3000 × (log₂ 1025) bps ≈ 3000 × 10 bps = 30000 bps

This is a typical maximum bit rate achievable over the telephone network.
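The two limits and the worked example above can be checked in a few lines of code (the function names are invented for illustration; the S/N argument is a power ratio, not dB):

```python
import math

def nyquist_limit(bandwidth_hz, levels):
    """Nyquist: maximum data rate of a noiseless channel, in bits/sec."""
    return 2 * bandwidth_hz * math.log2(levels)

def shannon_limit(bandwidth_hz, snr_ratio):
    """Shannon: capacity of a noisy channel, in bits/sec."""
    return bandwidth_hz * math.log2(1 + snr_ratio)

print(nyquist_limit(3000, 2))      # two-level signalling: 6000 bps
print(nyquist_limit(3000, 256))    # 256 levels: 48000 bps
print(shannon_limit(3000, 1024))   # 30dB S/N: about 30000 bps
```

Note how the noise limit (30,000 bps) undercuts the 256-level noiseless figure (48,000 bps): in practice Shannon's bound is the one that bites on the telephone network.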
1.3.4 Baseband and modulated signals
A baseband signal is one in which the data component is directly converted to a signal and transmitted. When the signal is imposed on another signal, the process is called modulation. We may modulate for several reasons:

• The media may not support the baseband signal
• We may wish to use a single transmission medium to transport many signals

We use a range of modulation methods, often in combination:

• Frequency modulation - frequency shift keying (FSK)
• Amplitude modulation
• Phase modulation - phase shift keying (PSK)
• Combinations of the above (QAM)

In the section on modems (section 3), we will discuss these methods in more detail.
1.4 Media
The transmission medium is just the medium by which the data is transferred. The type of medium can have a profound effect. In general there are two types:

• Those media that support point to point transmission
• Multiple access systems

One of the principal concerns is 'how to make best use of a shared transmission medium'. Even in a point to point transmission, each end of the transmission may attempt to use the medium and block out the other end's use of the channel. There are several well known techniques:
• TDM - Time Division Multiplexing
• FDM - Frequency Division Multiplexing
• CSMA - Carrier Sense Multiple Access
• CSMA/CD - CSMA with Collision Detection

Here are some sample media for comparison:

MAGNETIC TAPE. (Point to point)
If you wanted to transmit 180MB (megabytes) across town to another computer site, and you only had a 9,600 bits/sec line to the other site, you could only transmit about 1,000 characters a second, and the whole transfer would take 180,000 seconds (3,000 minutes, or 50 hours). The whole process could be much more efficiently performed by copying the data to a single nine track magnetic tape, sticking it in a courier pack and sending it across town. There is always more than one way to skin a cat.

TELEPHONE NETWORK. (Point to point)

The telephone network provides a low bandwidth (300Hz to 3kHz) point to point service, through the use of twisted pair cables. There are some problems with the telephone network:

Echo suppressors - Put on long lines to stop line impedance mismatch echoes from interfering with conversations. These echo suppressors enforce one-way voice communication (half duplex), and inhibit full duplex communication. The echo suppressors may be cut out by a pure 2,100Hz tone.

Switching noise - The telephone network is inherently noisy with lots of impulse noise (switch/relay pulses and so on). A 10ms pulse will chop 12 bits out of a 1,200 bits/sec transmission.

Limited bandwidth - An ordinary telephone line has a cutoff frequency of 3,000Hz, enforced by circuitry in the exchanges. If you attempt to transmit 9,600 bps over these lines, the waveform is dramatically changed.

COAXIAL CABLE. (Point to point or Multiple access)

Two kinds of coax are in common use - 50Ω³ for digital signalling and 75Ω for analog (TV) transmission. The digital coax has high bandwidth and good noise immunity. Bandwidths of 10Mbps are possible at distances over 1km (limited of course by the Shannon limit given in Section 1.3.3).
³ We use the Greek letter Ω (ohm) as the unit of resistance.
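The magnetic tape arithmetic above is easy to verify; `transfer_hours` is a helper invented for this sketch:

```python
def transfer_hours(size_bytes, chars_per_sec):
    """Hours needed to push size_bytes through a line carrying
    chars_per_sec characters (bytes) per second."""
    return size_bytes / chars_per_sec / 3600

# 180MB at roughly 1,000 characters/sec over a 9,600 bits/sec line:
print(transfer_hours(180e6, 1000))   # 50.0 hours
```

The tape in a courier pack, by contrast, delivers the same 180MB in an hour or so - an effective 50,000 bytes/sec.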
FIBRE OPTICS. (Generally point to point)
Fibre optic technology is rapidly replacing other techniques for long distance telephone lines. It is also used for large traffic networks, as current technology can transmit data at about 1,000Mbps over 1km. The FDDI⁴ standard specifies 100Mbps over 200km. The Shannon limit is much higher than this again, so there is room for technological advancement.

RADIO. (Multiple Access)

Radio is often the transmission medium for digital data transmission. For example, when NASA was looking for 'sandbags' to go in their test rocket launching systems, enterprising Ham radio enthusiasts asked if they could provide the weight, and designed small satellites to retransmit their signals. There are now 12 Ham satellites in orbit, mostly retransmitting digital data between Hams all over the world. The Hams use a data transmission protocol called AX.25, which is similar to X.25 - a widely used standard. The difference is mostly in the address fields, which allow Ham call signs to be inserted. Any Ham in the footprint of the satellite may transmit to it and receive from it, and so Hams use various protocols to minimize collisions.
⁴ Fibre Optic Distributed Data Interface.
Figure 1.7: Model of computing.
1.5 Computer hardware
One of the interesting features of the study of computer communications is that it is relevant in virtually all levels of computer use - from how computer chips on a circuit board interact to how databases share information. If you look at the back of a PC or a MAC (or a Sun or an SGI!), you find various connectors and cables. Some just supply power to the unit, but most of the others involve transfer of data. In this section we take an initial look at some of the communication schemes found in a computer. The circled elements in figure 1.7 indicate (possibly different) data transfer standards.

• Parallel port
• Serial port
• Keyboard port
• Modem
• Backplane
• SCSI port
• MAC bus
• The monitor cable
1.5.1 Backplane & IO busses
The backplane of the computer (which is sometimes the same as the I/O bus) is the physical path that interconnects the processor CPU chips and the memory or I/O devices. It normally has a large number of address, control and data lines. The voltages found on the backplane are just the normal chip operating voltages (3.6V to 5V), the speed is high, the distances involved are small, and the data is digital.

Since the backplane is a bus, we have to be careful that only one chip at a time tries to use it. If chips conflict when trying to transfer data it is called contention. Some of the control wires in the bus are used to help resolve contention. We also need handshaking signals to regulate the flow of data. The receiving chip/board must be able to slow down the flow of data from a faster chip/board.

IBM PCs may have several bus systems, for memory (the memory bus), video (such as VESA local bus) or I/O cards (the backplane - ISA, EISA, PCI). PCI is a 64 bit interface in a 32 bit package. The PCI bus runs at 33MHz and can transfer 32 bits of data (four bytes) every clock tick. It is used in Power Macintosh systems and PowerPC machines, as well as PCs. With backplane systems like this, we measure the speed in bytes/sec.

Example: The PCI bus has a 32 bit (4 byte) data transfer size, and a 30ns cycle time, and so the transfer speed is 133MB/sec. Note: if our backplane was 64 bits wide, with the same cycle time, it would have a transfer speed of 266MB/sec. By contrast, the SGI O2 workstation has a unified memory/IO bus with a transfer speed of 2.1GB/sec.
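The backplane example amounts to bytes-per-cycle divided by cycle time. A small sketch (the function name is invented for illustration):

```python
def bus_mb_per_sec(width_bits, cycle_ns):
    """Peak bus transfer rate in (decimal) MB/sec: one word per cycle."""
    bytes_per_cycle = width_bits / 8
    return bytes_per_cycle / (cycle_ns * 1e-9) / 1e6

print(bus_mb_per_sec(32, 30))   # PCI: about 133 MB/sec
print(bus_mb_per_sec(64, 30))   # 64-bit bus, same clock: about 266 MB/sec
```

Doubling the width at the same cycle time doubles the peak rate, which is exactly the 133MB/sec versus 266MB/sec comparison made above.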
1.5.2 Parallel port
The parallel port is often labelled as a printer port on a PC. These are typically 8-bit ports, with handshaking to support a uni-directional point to point link. With this data communications link, it is easy to transfer 8-bit data at a relatively slow speed (relative to the backplane). The typical maximum transfer speed is 1Mbps, over 1 to 10 metres.

It is common to represent character data using the ASCII character set, which is a 7-bit character encoding method, with an extra bit often added as an error/parity check. Note: there are other encoding schemes such as EBCDIC (used on IBM mainframes), and 16 bit schemes to support non-English fonts (Kanji and so on).

In figure 1.8, we see the handshaking for data transfer using the common Centronics parallel interface.
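The parity check mentioned above can be sketched in code: an even-parity bit is attached as the eighth bit of the 7-bit ASCII code, so that every transmitted byte carries an even number of 1 bits. This is an illustrative sketch; `add_even_parity` is an invented name:

```python
def add_even_parity(ch):
    """Attach an even-parity bit as bit 7 of a 7-bit ASCII code."""
    code = ord(ch) & 0x7F
    ones = bin(code).count('1')
    if ones % 2 == 1:          # odd number of 1 bits: set the parity bit
        code |= 0x80
    return code

# 'A' = 0x41 = 0100 0001 has two 1 bits, so the parity bit stays clear.
print(hex(add_even_parity('A')))   # 0x41
# 'C' = 0x43 = 0100 0011 has three 1 bits, so the parity bit is set.
print(hex(add_even_parity('C')))   # 0xc3
```

The receiver repeats the count: any byte arriving with an odd number of 1 bits must contain a (single-bit) error.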
Figure 1.8: Centronics handshake signals.

1. Write the data to the data register
2. Read status register to see if printer is busy
3. Write to control register to assert the strobe line
4. De-assert the strobe line

The current parallel standard is IEEE 1284-1994, which provides for a high speed bi-directional transfer of data (50 or so times faster than the old Centronics standard), and can still inter-operate with the older hardware in a special compatibility mode.
1.5.3 Serial port
The PC serial port provides two 1-bit point to point data links, with a large number of handshaking wires (as many as are needed for parallel ports). Since the channel width is only 1 bit, the data transfer speed is typically slow (1,200 to 38,400 bits/sec).

Forty years ago, the maximum speed of an electronic printer (teletype) was about 10 (7-bit) characters per second. Data sent to the printer at 75 bits/sec would keep it running continuously. As printers increased in speed, the speed doubled as needed - 75, 150, 300, 600, 1200, 2400, 4800, 9600, 19200, 38400 and so on. These rates are the common rates supported by serial ports.

When we send data using serial ports, the sender and receiver must be synchronized. This may be done in one of two ways:
Figure 1.9: RS232-C serial interface.
• Use a synchronizing signal or clock (called synchronous transmission)
• Agree on a transfer speed, and wait for a beginning signal (called asynchronous transmission)

Most PC serial ports support the RS232-C signalling conventions, but MAC and UNIX workstations often use the RS422 standard, which can look like RS232 in certain circumstances. Figure 1.9 shows the main signals found in the RS232 interface.
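Asynchronous transmission can be illustrated by framing each character between a start bit and a stop bit: after agreeing on a speed, the receiver uses the start bit's edge to find each character boundary. A sketch, assuming the common 7-data-bit format described earlier (`frame_byte` is an invented helper):

```python
def frame_byte(code):
    """Frame a 7-bit code for asynchronous transmission: one start bit (0),
    seven data bits (least significant bit first), one stop bit (1)."""
    bits = [0]                                   # start bit: line drops from idle
    bits += [(code >> i) & 1 for i in range(7)]  # data bits, LSB first
    bits.append(1)                               # stop bit: line returns to idle
    return bits

print(frame_byte(ord('A')))   # [0, 1, 0, 0, 0, 0, 0, 1, 1]
```

Nine bit times per 7-bit character is why a 75 bits/sec line feeds a 10 character/sec teletype comfortably.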
1.5.4 Keyboard
The PC keyboard is connected to the PC with a 5-wire cable. Two of the wires are power and ground of course, and the other signals on the cable are at TTL (5V) levels. These signals provide a bi-directional synchronous serial link between the computer in the keyboard and the PC. This communication link has a purpose-specific protocol.
1.5.5 SCSI port
Many modern computers use the SCSI (Scuzzy) interface for interconnecting devices, particularly disks and measuring instruments. The original SCSI interface provided for an 8-bit wide transfer of data over a 10m cable at 5MB/sec, but modern SCSI-2 and SCSI-3 interfaces provide faster transfers. On an SGI O2, the disks use a 40MB/sec SCSI interface. Transfers occur 32 bits at a time.

The SCSI transfers of data are similar to those found in the parallel interface met before. However the interconnection method is different. SCSI is a multi-master bussed scheme, with addressed nodes and defined protocols for handling contention and transfers.
1.5.6 Macintosh LLAP
Macintosh computers come standard with a network system called LLAP⁵. The LocalTalk cabling scheme can vary, and even the connectors on the back of a MAC have changed over the years⁶. A transformer-decoupled multidrop scheme is common, with a data transfer rate of 200kb/sec. The signals are RS422-like. LocalTalk LLAP messages all belong to well-defined protocols specified by Apple Computer Inc, and placed in the public domain.
1.5.7 Monitor cable
The monitor cable is not really a data-communications cable in the same terms as these other computer interfaces, because there is no 'communication' involved (only data transfer). However there are still some points of interest to be found - firstly the amount of data transferred, and secondly the formats or encodings used.

Data transfer rate: With a typical computer screen, we may have 1280*768 pixels. Each pixel may be represented by a 24 bit (3 byte) number to identify its colour (256 levels for each of red, green and blue). Therefore we require 2,949,120 bytes to display one frame of our display. If our screen is refreshed 70 times per second (half a frame each time for an interlaced display), we require 98.4MB/sec (787.2Mb/sec) to be transferred from the video display card to the display hardware. This is a continual demand, but we don't mind errors - they just give temporary aberrations to the display.

Encoding: There are several standards, but commonly separate cables are used for each of the red, green and blue components. Each of these cables provides an analog signal representing the intensity of the colour at any instant in time.
⁵ LocalTalk Link Access Protocol.
⁶ Originally a DB9 connector - but now a mini-DIN8 connector is used.
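The monitor-cable arithmetic from section 1.5.7 can be checked in a few lines (the function name is invented for illustration; "MB" here means 2^20 bytes, matching the figures quoted in the text):

```python
def display_bandwidth(width, height, bytes_per_pixel, refresh_hz, interlaced=True):
    """Bytes/sec from the video card to the display; an interlaced display
    sends half a frame on each refresh."""
    frame_bytes = width * height * bytes_per_pixel
    per_refresh = frame_bytes / 2 if interlaced else frame_bytes
    return per_refresh * refresh_hz

rate = display_bandwidth(1280, 768, 3, 70)
print(round(rate / 2**20, 1), "MB/sec")   # about 98.4 MB/sec
```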
Another cable may contain (digital) synchronizing data.

Figure 1.10: I²C synchronization.
1.5.8 I²C
I²C is a standard for interconnecting integrated circuits on a printed circuit board. It has a synchronous multi-master bus scheme, with a well developed addressing scheme. Only 3 wires are needed to interconnect the ICs, simplifying circuit construction and chip count. To ensure synchronization, I²C indicates the beginning and end of data with special signals, as shown in figure 1.10.
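The start and stop conditions of figure 1.10 amount to the data line changing while the clock is high, whereas ordinary data changes happen while the clock is low. An illustrative sketch (the `classify` helper is invented for the example):

```python
# I2C special signals: data falling while the clock is high marks a START
# condition; data rising while the clock is high marks a STOP condition.
def classify(clock, data_prev, data_now):
    if clock == 1 and data_prev == 1 and data_now == 0:
        return "START"
    if clock == 1 and data_prev == 0 and data_now == 1:
        return "STOP"
    return "DATA"

print(classify(1, 1, 0))   # START
print(classify(1, 0, 1))   # STOP
print(classify(0, 1, 0))   # DATA - the clock is low, so this is a normal bit
```

Because these transitions can never occur during normal data bits, every receiver on the bus can re-synchronize simply by watching for them.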
1.6 Standards organizations
The major standards organisations display an amazing conformity in their standards, mostly because they tend to be rubber stamp organisations. However manufacturers do modify their equipment to conform to these standards. The nice thing about standards is that you have so many of them. Furthermore, if you do not like any of them, you can just wait for next year’s model.
• The National Bureau of Standards is an American Federal regulatory organization which produces the Federal Information Processing Standards (FIPS). Their standards must be followed by all manufacturers producing equipment for the US government and its agencies (except defense).
• The Electronic Industries Association is an organization made up of manufacturing interests in the US. They are responsible for RS232 and similar standards.
• The Institute of Electrical and Electronics Engineers (IEEE) is a professional organization of engineers. They prepare standards in a wide range of specialities.
• The American National Standards Institute (ANSI) is an umbrella organization for many US standards organizations. Accredited member organizations submit their standards for ANSI acceptance.
• The International Organization for Standardization (ISO) generates standards covering a wide range of computer related topics. The U.S. representative is ANSI.
• The Consultative Committee on International Telephone and Telegraph (CCITT). Its standards are law in most member countries with nationalised telecommunications (much of Europe). The US representative is the Department of State. The standards mostly relate to telephone and telecommunications.

Many of the standards adopted were first fixed on by committee workers in national standards organisations. The workers are often provided by (competing) companies, who sometimes don't agree on a 'standard'. The traditional ISO technique to deal with this situation is to create multiple (incompatible) standards. A clever extension to this is to make these standards different from the original company standards. For example: three manufacturers (Xerox, GM & IBM) submitted three different LAN standards to IEEE, who produced three different standards (IEEE 802.3, 802.4 and 802.5). Each of these standards has slight format differences from the others, for no valid reason except perhaps that no-one wanted to offend anyone else. These standards were adopted by ISO (ISO 8802.3, 8802.4, 8802.5) and have three incompatible formats. When ISO were pressured to construct a standard to 'bridge' these incompatible formats, they did slightly better, and came up with two incompatible bridges!
Another radical ISO technique is to leave out anything that is controversial. ISO spent much time discussing data security and encryption, and finally decided that it should be left out, not because it was unimportant, but because it caused too many arguments.
Chapter 2 OSIRM
It is against this background that we present the ISO OSI (Open Systems Interconnection) model. This model is dated, but provides a framework to hang your understanding of computer networking on. It is modelled on IBM's SNA (System Network Architecture), but with differences. What are the advantages of this?

• We can change layers as needed, as long as the interface remains constant. The service and the protocol are decoupled, and hence other level protocols may be changed without affecting the current level.
• The network software is easier to write, as it has been modularised.
• Client software need only know the interface.

Each layer is defined in terms of a protocol and an interface. The protocol specifies how communication is performed with the layer at the same level in the other computer. This does not, for instance, mean that the network layer directly communicates with the network layer in the other computer - each layer uses the services provided by the layer below it, and provides services to the layer above it. The definition of services between adjacent layers is called an interface.

Remember:

• a PROTOCOL specifies the interaction between peer layers,
• an INTERFACE specifies the interactions between adjacent layers of the model.
(Figure: two identical seven-layer stacks - Application, Presentation, Session, Transport, Network, Data Link, Physical - one per machine, joined by the data network / transmission medium.)

Figure 2.1: Layering in the ISO OSI reference model.
2.1 The layers
PHYSICAL LAYER

The physical layer is concerned with transmission of data bits over a communication line - that is, the transfer of a '1' level to the other machine. In this layer we worry about connectors, cables, voltage levels etc.

DATALINK LAYER

The datalink layer is concerned with the (hopefully) error-free transmission of data between machines. This layer normally constructs frames, checks for errors and retransmits on error.

NETWORK LAYER

The network layer handles network routing and addressing. It is sometimes called the packet switching network function. Line connection and termination requests are performed at this level. X.25 is the internationally accepted ISO network layer standard. IP is the Internet network layer standard.

TRANSPORT LAYER

The transport layer provides an interface between the 'network communication' layers (1..3) and the higher service layers. This layer ensures a network independent interface for the session
layer. Since networks can be of varying types, ISO defines several classes of transport protocol (ISO 8072/3). The Internet protocol suite has two common transport protocols:
• TCP - Transmission Control Protocol
• UDP - User Datagram Protocol
The transport layer isolates the upper layers from the technology, design and imperfections of the subnet.
SESSION LAYER The session layer is closely tied to the TRANSPORT layer (and often the software is merged together). The session layer is concerned with handling a session between two end users. It will be able to cleanly begin and terminate a session, and provide clean 'break' facilities.
PRESENTATION LAYER This layer is concerned with the syntax of the data transferred. For example, it may be desired to convert binary data to some hex format with addresses. The conversion to and from the format is handled at the presentation layer. Data may also be encoded for security reasons here.
APPLICATION LAYER The application layer provides the user interface to the network wide services provided, for example file transfer and electronic mail. Normally this layer provides the operating system interface used by user applications.
2.2 Example
Let us say that the President of one nation decides to declare war on another nation. The President calls in the Secretary of State who handles all such matters. The Secretary decides that the appropriate way to handle this is to announce the fact at the United Nations, and so calls in the country’s U.N. representative. The U.N. representative rushes off to the U.N. and after gaining the floor announces ’WAR’!!!
Figure 2.2: War is declared! (Nation A and Nation B each have a stack of President, Secretary and Representative, meeting at the United Nations.)
The U.N. representative of the opposing nation stalks out, and calls her Secretary of State, who in turn warns the other President. In this example, the WAR 'challenge' is from President to President, although the communication is down through the protocol stack. Note also that the Secretary may have decided that an appropriate way to issue the challenge would have been to drop bombs on the capital city of the opposing nation, bypassing the slow technique given above. There may also have been more levels than given above - for example, the challenge may need to be translated, or the 'representative' may have decided that the best way to communicate the challenge was from secretarial staff to secretarial staff.
Why have we chosen a military example above? Most of our knowledge about computer networking comes from ARPA (Advanced Research Projects Agency), a branch of the U.S. military, which was set up in the early 1960s and did research into computer networking. The ARPA internet has been running since the late 1960s. The ARPA internet has no session or presentation layers, as no one has found a use for them in the first 25 years of operation. One ARPANET transport protocol is called TCP (for Transmission Control Protocol). The ARPANET network protocol is called IP (for Internet Protocol). TCP/IP is in widespread use throughout the computer world.
Figure 2.3: ISO and CCITT protocols. (Protocols arranged by layer, 7 down to 1. ISO column: File Transfer 8571, Message Handling, Job Transfer 8831, Teletex, Virtual Terminal 9040, Videotex and Facsimile at layer 7; 8822/3/4/5 and 8326/7 at the presentation and session layers; 8072/3 at the transport layer; 8473/8348 at the network layer; 8802.2 over the 8802.3 Ethernet, 8802.4 Token Bus and 8802.5 Token Ring standards at the bottom. CCITT column: X.400, X.408 and the T60/T100 terminal standards at the top; X.215, X.214, X.25, X.212 and X.21 over the PSDN; T50/51, T62, T70, T30, T71 and V.24 over the PSTN; I450, I440 and I430 for ISDN.)
2.3 Sample protocols
In figure 2.3, we see an assortment of protocols: on the left some ISO standard protocols, on the right some CCITT ones. The protocols are arranged by the layer in which they appear. At layer 3, the network layer, we see the CCITT standard X.25. At the physical layer we see standards such as V.24 (found in modems). Notice that the CCITT naming standard allows you to guess at a protocol's use:
• the X.xx standards are used in the PSDN
• the V.xx and T.xx standards are used in the PSTN
• the I.xxx standards are for ISDN
Chapter 3 Layer 1 - Physical
Definition: The physical layer is concerned with the transmission of bits through a medium.
In the following chapters, we describe important features of each of the layers of the OSI reference model. Each chapter follows a common format:
1. Introduce the layer
2. Give some sample protocols
3. Outline the addressing schemes
4. Sections on the characteristics and modes of operation
5. Diagnostic tools for this layer
3.1 Sample ’layer 1’ standards
In table 3.1, we summarise various signalling standards related to computer communications. They range in speed from 38.4Kb/s to 125Mb/s (a range of about 2,500:1). The maximum design distances range from 10m to 200km (a range of about 20,000:1). All are in common use at present, and are considered adequate in their application area:
• MAC, Token ring, Ethernet & Spread-Spectrum - used to network computers on LANs (Local Area Networks).
CHAPTER 3. LAYER 1 - PHYSICAL
Standard      Method    Speed     S/P  Dist           Type            Cable          Conn.             Isol.
RS232         +/-12v    38.4Kb/s  S    100m           P to P          Various        DB25, DB9         N
MAC           +/-1v     220Kb/s   S    100m-1.5km     P to P, M/drop  Various        Various           N, Y/N
Centronics    0,+5v     800Kb/s   P    10m            P to P          Various        Special           N
SCSI          0,+5v     40Mb/s    P    10m            M/drop          Ribbon         Special           N
Ethernet      0,-1.2v   10Mb/s    S    90m/185m/500m  P to P, M/drop  UTP/RG58/RG11  RJ45/BNC/Vampire  Y/N
Fibre (FDDI)  Light     125Mb/s   S    200km          ring            Fibre          Various           Y
Token Ring    0,+5V     16Mb/s    S    1.5km          ring            Various        Various           N
S. Spect.     Radio     2Mb/s     S    2km            M/drop          -              None              Y
Table 3.1: Sample physical layer standards.
• RS232 - for linking two machines, connecting to a modem, or connecting to a printer.
• SCSI - commonly used for connecting to disks.
• FDDI - for linking campus-wide networks (MAN - Metropolitan Area Networks).
• Centronics - for connecting directly to printers from a computer.
3.1.1 Token ring cabling
With token ring cabling systems, there are quite a few different cabling types in use. Most of them use a layout that looks like a star: the actual wiring is a ring laid out as a star. At the centre of the star is a MAU (Media Access Unit). MAUs come in all sorts of flavours - from systems with no active components (just make-before-break plugs) through to computer driven, monitored hubs.
3.1.2 Localtalk cabling
All Macs come with networking hardware and software built in. Older style Macintosh computers have a 200kb/s system with a datalink layer like HDLC (High-level Data Link Control). Again there are multiple cabling schemes. There are also two different plugs (DB9 and MINI-DIN8). The signals that come out are RS422-like, and are not isolated. For this reason, most Mac cabling schemes use a drop box to connect to a network. In the drop box is an isolation transformer. The system will support 100 machines over 1500 m.
Figure 3.1: Total internal reflection in a fibre. (Light bounces between two materials with different refractive indices.)
3.1.3 UTP cabling
UTP (Unshielded Twisted Pair) is used for many systems (Token ring, Mac etc.), but principally for 10M and 100M ethernet. The original idea was to use existing telephone wiring over distances of 30 - 90 meters. However, now people are replacing all of their building wiring with special UTP cable (known as CAT5) to allow their network to work at 100Mbps at some time in the future (if they buy 100Mbps hardware). Using these building wire systems is called future proofing the network. The CAT5 cable will work at 100 Mbps over 90 meters. The general architecture at the physical layer is a STAR. For ethernet (CSMA/CD - Carrier Sense, Multiple Access, with Collision Detection) networking over UTP, the wires are patched to either a hub or an etherswitch.
3.1.4 Fibre optic cabling
Fibre technology involves transmitting modulated light down a small plastic or glass fibre. The fibre is made from two different materials, and if the incident light strikes the material interface at a shallow enough angle, it is totally reflected. If the angle is increased, not much is reflected. Fibre cannot be sharply bent for this reason, and also to prevent damage to the fibre. Fibre cannot (easily) be tapped, so it is only suitable for point to point cabling. Any intrusion into the fibre generally stops it working. FDDI (the Fibre Distributed Data Interface) is a 100 Mbps technology, a little like token ring. It has a 200km maximum length. The cabling may not look like a ring (multiple fibres may run in one cable).
Figure 3.2: Counter rotating rings in FDDI. (Normal operation, and healing after a single break.)
Note: FDDI uses dual counter rotating rings. When a cable or its interface is damaged, the interfaces on either side detect this and heal the ring. It is possible to end up with 2 or more rings. Fibre is also used for interconnecting ethernets. A standard to do this is FOIRL (Fibre Optic Inter-Repeater Link).
3.1.5 Thin and thick ethernet cabling
One common standard for reticulating ethernet is called thin ethernet. You may also see the term 10base2. (The 10 implies 10Mbps, base implies baseband, and 2 implies that you can have about 200m in a single segment - actually 185m). The cable connectors are BNC, with T connectors to connect to the computer, and link together cable segments. The cable is about 3mm in diameter, has a characteristic impedance of 50Ω (see section 3.4), uses a multi-drop (or bussed) scheme, and should be labelled RG58. A better and more expensive standard is thick ethernet. The cable is RG11, and typically costs five times the amount of thin ethernet. Thick ethernet is also called 10base5 (the 5 implies that you can have about 500m in a single segment).
Type     Cable  Length  Machines  Topology  Cost
10base2  RG58   185m    30        Bus       low
10base5  RG11   500m    100       Bus       high
UTP      CAT5   90m     2         P/P       low
Table 3.2: Comparison of ethernet cabling strategies.
The cable runs for thick ethernet are unbroken (i.e. we do not cut the cable). Instead, a 'vampire tap' clamps around the cable and a needle connects with the centre conductor. There is often some signal conditioning hardware in the vampire tap box, and a drop cable to a 15 pin AUI (Attachment Unit Interface) connector on the computer.
3.2 Addressing
At the physical layer, we use the following manual methods: • Tags and markers for connectors. • Tags and markers for cables. • Network diagrams. There is no one standard, but cable suppliers supply cable and connector labelling equipment. It is not possible to overemphasize the importance of correct labelling for cabling. Most network problems have simple solutions along the lines of: • “Put the plug back into the computer”, and: • “Why did you cut the cable?”, along with its companion: • “Where do you think the cable is broken?” Clear network diagrams, and marked cables can minimize these sorts of problems.
3.3 Spectrum
We use only a part of the electromagnetic spectrum. In figure 3.3 we see the most common application areas of modulated signalling.
Figure 3.3: Electromagnetic spectrum. (A logarithmic frequency axis from 10^2 to 10^16 Hz, showing the bands used by twisted pair, telephone, coaxial cable, AM radio, TV, FM radio, satellite, microwave and optical fibre.)
Note: Figure 3.3 has a logarithmic scale, each division encompassing ten times the range of the previous one. So even though the microwave part of the spectrum encompasses two divisions (a range of about 100:1), the optical fibre part is much larger.
• The bandwidth of optical fibre is about 900,000,000 MHz
• The bandwidth of microwave is only about 100,000 MHz
• The bandwidth of coaxial cable is only about 1,000 MHz
It is clear from Shannon and Nyquist (ref section 1.3.3) that the bit rate on an optical fibre could be much higher than on any of the other media given here, if the noise levels are comparable. In fact the noise levels are typically much lower than those found in other media.
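The Shannon limit referred to above can be tried numerically. The sketch below is illustrative only - the 3,000 Hz bandwidth and 30 dB signal-to-noise ratio are typical telephone-channel values, not figures from this chapter:

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon channel capacity in bits/s: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 3,000 Hz channel with a 30 dB signal-to-noise ratio:
snr = 10 ** (30 / 10)                         # 30 dB is a power ratio of 1000
print(round(shannon_capacity(3_000, snr)))    # about 29,902 bits/s
```

Doubling the bandwidth doubles the capacity, but doubling the signal-to-noise ratio only adds one more bit per signalling interval - which is why low-noise media such as fibre matter so much.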
3.4 Signals and cable characteristics
If we look at a real signal, it does not always look like the original transmitted signal. Two factors cause the sort of degradation shown in figure 3.4:
1. The frequency and gain characteristics of the cable/media
2. The group delay of the cable/media on the signal
Group delay is a term used to indicate the effect caused by the variation in velocity of the different frequency components of the signal.
Figure 3.4: 'Real' signals on a cable. (The ideal waveform against the real received waveform, at a low speed and at a high speed.)
We normally choose a high quality cable to get the best frequency characteristics, and set a maximum length (for example, with ethernet, 185m over RG58). The signals also reduce in level with distance: over the 185m of an RG58 ethernet cable, the voltage level of the signal can drop to 2/3 of its normal level. The speed of transmission in a cable is about 2/3 C (200,000,000 m/s).
However there is another cause of signal degradation, which is often more important. When our media is incorrectly terminated, we may get reflections of our signal. These reflections may exaggerate or hide our signal. To understand why this happens, we need to look a bit more closely at the characteristics of our media. We will do this by looking at a signal cable. First, two definitions:
RESISTANCE - the quality of a circuit which impedes the flow of direct current
IMPEDANCE - the quality of a circuit which impedes the flow of alternating current
• A capacitor will not pass a direct current. Hence it has an infinite resistance.
• However it will pass an alternating current. A capacitor is said to have an impedance Z.
Cable model: We can model a cable as an electrical circuit: a series of short segments, each containing a series inductor and a capacitance. If the capacitance of our perfect cable over a length dx is C dx, and the inductance is L dx, the characteristic impedance is:

    Zc = sqrt(L/C) Ω
Note that this has the dimensions of pure resistance (you may wish to check this with dimensional analysis), and so if we could construct an infinitely long cable, its impedance Z (which varies with frequency) would tend toward a fixed resistance. Another way of saying this: if you wanted an infinitely long cable, a resistor would model it. This resistive value is called the characteristic impedance of the cable, and is dependent on the cable size, shape and material:

Line                 Zc
Twisted Pair         120Ω
Coaxial cable        50Ω → 120Ω
Wire over ground     120Ω
Microstrip (in ICs)  120Ω
Parallel Wires       300Ω

If we put a step waveform, from 0 to V volts at time t0, at one end of an infinitely long cable of characteristic impedance Zc, the incident fronts of voltage (Vi) and current (Ii = Vi/Zc) will move along the cable together at a speed of about 2/3 C. If instead of this infinitely long cable we had a finite cable with a termination resistor (Rc = Zc), the terminated cable appears the same as an infinitely long cable attached to the end of our cable. When the voltage and current fronts reach the termination, the voltage across the line at any point is Vi, and the current in the terminator is Ii = Vi/Rc. We have a steady, stable state.
If however the termination resistor had a value Re ≠ Zc, then when the fronts reach the terminator, the current would have to be both Vi/Zc and Vi/Re. This is impossible since Re ≠ Zc. We resolve this by noting that when the front reaches the terminator, a reflected voltage (Vr) and current (Ir) front begins travelling back down the line. The amplitude and polarity of the reflected front conserves the energy in the original step, and follows the relation (Vi + Vr)/(Ii + Ir) = Re.
What this means is that we get a reflected signal when an incident pulse reaches a discontinuity in the impedance of a cable. We normally place a resistor with a value Zc at the end of the cable to inhibit these reflections.
Discontinuities in the impedance of the line can be on either side of the correct impedance, and our reflections can be as large as the incident signal.
• If we remove the termination resistor, we get positive reflections.
• If we short out the remote end of the cable, we get negative reflections.
We can see from this that the correct termination of any media is vital to correct operation, and that connectors must be chosen to minimise reflections.
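The size and sign of a reflection can be captured in one number, the voltage reflection coefficient Vr/Vi = (Re - Zc)/(Re + Zc), which follows from the relations above. A small sketch (the function name is ours, and the 50Ω value is the RG58 figure from the text):

```python
def reflection_coefficient(r_term, z_c):
    """Voltage reflection coefficient at a termination: Vr/Vi = (R - Zc)/(R + Zc)."""
    if r_term == float("inf"):        # open circuit: terminator removed
        return 1.0
    return (r_term - z_c) / (r_term + z_c)

z_c = 50.0  # RG58 coaxial cable
print(reflection_coefficient(50.0, z_c))          # matched: 0.0, no reflection
print(reflection_coefficient(float("inf"), z_c))  # open: +1.0, positive reflection
print(reflection_coefficient(0.0, z_c))           # short: -1.0, negative reflection
```

The three printed cases correspond exactly to the bullet points above: a matched terminator absorbs the pulse, an open cable reflects it with the same polarity, and a shorted cable reflects it inverted.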
Figure 3.5: Reflections. (The original pulse at the signal end, and the reflected pulse returning from the other end, plotted against time.)
Summary: Every terminator, T junction, cable bend (and so on) introduces a deviation in the Z of the cable. These variations in Z cause positive or negative reflections, which may accumulate and stop reliable operation of the cable. With 10 Mbps ethernet, we have 0.1 us pulses (one pulse in every 20 m of cable), and reflections can change the levels of other pulses on the cable. RG11 is a much higher quality cable (and hence lower noise). In addition, it uses a method for tapping the cable which causes very little Z variation. In either case, we must correctly terminate all networking cables with a resistor with a resistance R = Zc - the characteristic impedance of the cable.
3.5 Noise
It is normal to have up to 1 volt of random noise at some locations, rising to 100s of volts in a fault. During a storm, it is common to have 1,000s of volts between buildings during lightning strikes. We have to be careful with earthing to reduce noise. If you earth a cable at two different points, differences in the two earths may be significant. In figure 3.6, we see two noise generators. With a single noise generator, even if the level of noise is much greater than the signal, the differential amplifier cancels out the noise, because it appears in both legs. However with two noise generators, the differential amplifier will amplify the difference between the two noise signals. For this reason, earthing an ethernet cable in more than one place results in an increase rather than a decrease in noise.
Figure 3.6: Noise generation. (A signal generator drives a 50Ω line into a differential amplifier; noise generators sit in the earth legs, with a 1M resistor between signal ground and earth.)
Summary:
• Ethernet signalling is differential in order to cancel out noise common to both conductors.
• Applying an earth to both ends of a cable is like adding noise to one leg.
• Earth points are not perfect.
• Earth a differential cable in (at most) one place.
• Check the whole cable.
3.6 Electrical safety
When we interconnect machines, we must check that we are not being electrically unsafe. If we take ethernet as an example, all signals to and from an ethernet card are passed through electrical isolation circuitry for the following two reasons: 1. To stop charge buildup on the cable 2. For electrical safety
Each ethernet card has a 1 MΩ resistor between the signal ground and local ground. This does not provide electrical safety (a 1 MΩ resistor cannot sink much current). The function of the resistor is to stop the line building up a charge which might damage electronic circuitry, and perhaps give you a little shock. However, some cabling systems (particularly thin ethernet) can be hazardous if misused. The ethernet cable has metallic connectors, which connect to the shield of the cable. If the cable is used to connect a machine in one building to machines in another building, and the cable is earthed in the other building (accidentally or otherwise), the cable may become an earth return for an electrical fault current. For example, if there was an earthing fault in the other building, and 300V was placed on the cable shield, there are several scenarios:
• You have also earthed the cable in your building - the cable heats up, melts...
• You touch the cable while reaching round the back of the machine - you heat up, melt...
Summary:
• Ethernet should not be used to interconnect buildings without safeguards.
• Ensure systems are earthed correctly.
3.7 Synchronization
We may send our data synchronised (meaning clocked) or asynchronously (without a clock):
(An asynchronous character on the line: a start bit, data bits b0 to b7 each lasting one bit time T, then a stop bit. The receiver takes its first sample in the middle of b0.)
Here we have asynchronous transmission. The reception algorithm is: • Receiver listens for start bit transition
• waits 3T/2 to get b0
• then T to get b1
• then T to get b2
... and so on ...
The implication of this is that both ends must agree on a rate of transmission.
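The reception algorithm above can be sketched in a few lines. This is an illustrative model only (the function name and the 4-samples-per-bit figure are ours): the line is a list of 0/1 samples taken at several samples per bit time, and the receiver waits 3T/2 after the start-bit edge, then samples every T:

```python
def receive_async(samples, samples_per_bit):
    """Recover one character: find the start-bit edge, wait 3T/2 to the middle
    of b0, then sample every bit time T (8 data bits, least significant first)."""
    start = samples.index(0)                 # line idles at 1; start bit is a 0
    bits = []
    for n in range(8):
        t = start + samples_per_bit * 3 // 2 + n * samples_per_bit
        bits.append(samples[t])
    return sum(b << n for n, b in enumerate(bits))

# 'A' (65, bits 10000010 LSB first) at 4 samples per bit:
# idle, start bit, b0=1, b1..b5=0, b6=1, b7=0, stop bit
line = [1]*4 + [0]*4 + [1]*4 + [0]*20 + [1]*4 + [0]*4 + [1]*4
print(receive_async(line, 4))                # 65
```

Note that the 3T/2 offset lands each sample in the middle of its bit cell, so a modest disagreement in clock rates between sender and receiver can be tolerated over one character.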
Common asynchronous systems are found in the RS232 connection on the back of a PC. It is normally settable to data rates such as 300, 1200, 2400, 4800, 9600, 19200, 38400 bps. Note: With RS232, we send a 0 as +12V and a 1 as -12V. This is called baseband, in that we are not transmitting our bits by modulating another signal (as is done with a modem).
3.8 Digital encoding
(Waveforms for Bipolar and Manchester encoding: for each, the time axis, the bits, the encoded signal, the recovered clock, and the received data.)
In Bipolar encoding, a ’1’ is transmitted with a positive pulse, a ’0’ with a negative pulse. Since each bit contains an initial transition away from zero volts, a simple circuit can extract this clock signal. This is sometimes called ’return to zero’ encoding. In Manchester (phase) encoding, there is a transition in the center of each bit cell. A binary ’0’ causes a high to low transition, a binary ’1’ is a low to high transition. The clock retrieval circuitry is slightly more complex than before.
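Manchester encoding, as described above, is easy to state in code. A minimal sketch (the function name and the 0/1 representation of low/high half-cells are ours):

```python
def manchester_encode(bits):
    """Each bit becomes two half-cells: '0' is high then low, '1' is low then
    high, so every bit cell has a centre transition the receiver can clock from."""
    out = []
    for b in bits:
        out += [0, 1] if b else [1, 0]
    return out

print(manchester_encode([1, 0, 1]))   # [0, 1, 1, 0, 0, 1]
```

The guaranteed mid-cell transition is the design point: the clock is carried in the data itself, at the cost of doubling the signalling rate on the wire.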
3.9 Modems
When we transmit data over a media which does not support one of these simple encoding schemes, we may have to modulate a carrier signal, which can be carried over the media we are using. The telephone network supports only a limited range of frequencies. We use a range of modulation methods, often in combination: • AM Amplitude modulation
• FM/FSK Frequency modulation - frequency shift keying • PM/PSK Phase modulation - phase shift keying
With a bandwidth of only 3,000Hz, Nyquist shows us that there is no point in signalling more than 6,000 changes per second, and if we only sent 1 bit per change in signal, we could only send 6,000 bits/sec. Common modulation methods focus on sending multiple bits per change to increase data rates (up to the maximum determined by noise - Shannon). The most common method is phase modulation, shown below:
(Phase modulation: the carrier phase is shifted at each signal change; phases of 0, 90, 180 and 270 degrees encode the bit pairs 00, 01, 10 and 11.)
We can also send different amplitudes at the different phases. The following phase plots indicate useful phase/amplitude values:
(Two phase/amplitude constellation plots, showing the useful combinations of phase and amplitude as points.)
These schemes use multiple amplitudes and phases. They are called QAM (Quadrature Amplitude Modulation). The one on the left
has 2 amplitudes and 4 phases, giving a total of 3 bits per change. In the other example, we are sending 4 bits/change (4 bits/baud). Common modem standards are:
• V.32 bis (14.4k) - 6 bits/baud => 64 points in the constellation.
• V.34 (28.8k) - 7 bits/baud => 128 points in the constellation.
Since modems are now quite complicated (they have computers, and can make all sorts of decisions to improve the transmission of data), we often need to communicate with the on-board modem computer. One common standard is that promulgated by the Hayes company. Many modems will respond to Hayes commands:

Command   Meaning
+++       Enter command mode
at&f      Reset to original settings
atdt2866  Dial number 2866
ath0      Hang up phone
at...     (and so on...)
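The arithmetic connecting constellation size, bits per baud and data rate can be checked directly. A small sketch (the function names are ours; the 2,400 baud figure is a common V.32bis signalling rate, used here as an assumed illustration):

```python
import math

def bits_per_baud(points):
    """A constellation of 2**n points carries n bits per signal change (baud)."""
    return int(math.log2(points))

def data_rate(baud, points):
    """bits/s = signal changes per second * bits carried per change."""
    return baud * bits_per_baud(points)

print(bits_per_baud(8))        # 3: the 2 amplitudes x 4 phases example
print(bits_per_baud(64))       # 6: the V.32bis constellation
print(data_rate(2_400, 64))    # 14400 bits/s
```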
Most modems do some or all of the following to reduce errors and improve speed:
• Add a parity bit to each 8 bits.
• Carefully choose where to place bit patterns in the constellation to reduce errors.
• Use (software) compression of the data (MNP5).
3.10 Diagnostic tools
Many network faults are found at the physical layer, but there is no one tool to test for every possible fault. We can select from the following:
1. Eye/Brain - Most physical layer faults are visible, or you can deduce what must be causing the fault.
2. Multimeter - A multimeter may be used for a cursory check, but it can pass a cable that should fail.
3. TDR (Time Domain Reflectometer) - The TDR emits a short pulse onto the line to be tested, and then listens for an echo.
4. Replacement - If you think a segment is faulty, replace it with a known good one. Not another one. A good one.
The TDR can check cables in ways that other tools cannot. If you have what you assume to be a good cable, a pulse put onto that cable should not echo. We have the following possibilities:
• No echo -> cable is o.k. and terminated correctly.
• Same polarity echo -> cable has a high impedance mismatch (open circuit - or just bad).
• Inverted echo -> cable has a low impedance mismatch (closed circuit - or just bad).
Note: Some TDRs will show you the reflected wave form, some just state good or bad. The TDR can also measure the time between the pulse and the echo. We remove the terminator on the far end of the cable, and measure the time between the transmitted pulse and the reflected pulse. Then:
• Distance to fault = v * t / 2, where v is the velocity of the impulse.
Note: To get 1 meter resolution on a TDR, the pulse must be very short (typically 5nS). In order for the 5nS pulse to be safe to electronic circuitry, the amplitude should be less than 10 V. A 10V 5nS pulse does not travel very far along a cable before the group delay effect degrades it (200 m Max). To use TDRs on longer cables, you use longer pulses with a reduction in resolution.
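The distance formula is worth a worked example. Assuming the 2/3 C propagation velocity given earlier in the chapter (the function name and the 1 microsecond delay are illustrative):

```python
def fault_distance(echo_delay_s, velocity=2e8):
    """Distance = v * t / 2: the pulse travels to the fault and back,
    at roughly two thirds the speed of light (about 2e8 m/s in cable)."""
    return velocity * echo_delay_s / 2

print(fault_distance(1e-6))    # 100.0 m for a 1 microsecond echo
```

The division by 2 is the easy thing to forget: the measured time covers the round trip, not the one-way distance to the fault.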
Chapter 4 Layer 2 - Datalink
Definition: The datalink layer is concerned with the error-free transmission of data between machines on the same network. This layer normally constructs frames, checks for errors and retransmits on error. The datalink layer can be examined in terms of its interfaces - the service it provides to the layers either side of it:
The service provided to the N/W Layer is the transfer of data to the N/W Layer on (possibly) another machine.
The physical layer service used is the transfer of bits.
The service provided to the network layer may be:
• connectionless or connection oriented
• acknowledged or not acknowledged
CHAPTER 4. LAYER 2 - DATALINK
Note: A hub, transceiver or extender operates at the physical level. Bridges and etherswitches examine the datalink frame, and so we say they operate at the datalink level. Routers operate at the network layer.
In order to provide the service to the network layer, the datalink layer may:
• frame the data
• deal with errors
• control the flow of data
(Encapsulation: the datalink layer prepends its PDU to the data, and the physical layer carries the resulting frame as bits.)
The datalink layer on one machine communicates with a matching datalink layer on another machine, and must attach extra information to the transmitted data. In the figure, we see extra information being attached to the data by the datalink layer software. This extra data is called a PDU (Protocol Data Unit). The PDU might contain:
• machine addressing information.
• information to assist in error recovery.
• information to assist in flow control.
This technique of adding information to each outgoing message is repeated at each layer, so as the message moves down through the layers, it gets larger. Each layer prepends a PDU for the same layer on the other machine.
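This repeated prepending and stripping of PDUs can be sketched very simply. The header strings and the "|" separator below are our own illustrative stand-ins for real binary PDUs:

```python
def send(data, layers):
    """Each layer prepends its PDU header as the message moves down the stack."""
    for layer in layers:              # e.g. network header first, then datalink
        data = layer + "|" + data
    return data

def receive(frame, n_layers):
    """Each receiving layer strips the PDU added by its peer on the other machine."""
    for _ in range(n_layers):
        _, frame = frame.split("|", 1)
    return frame

wire = send("HELLO", ["NET-HDR", "DL-HDR"])
print(wire)                # DL-HDR|NET-HDR|HELLO
print(receive(wire, 2))    # HELLO
```

Note the symmetry: headers are stripped in the reverse of the order in which they were added, which is exactly why each layer only ever sees the PDU written by its peer.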
4.1 Sample standards
Name      Type    Frame size  S/Win.  Numbers      Error        Frame     Addressing
HDLC      P-P     arbitrary   Y       Y (1-8/127)  CRC-CCITT    Flag      1 byte
Ethernet  M/drop  1500        N       N            32 bit hash  Preamble  6 bytes
PPP       P-P     arbitrary   N       N            CRC-16/32    Flag      1 byte
LLAP      M/drop  600         N       N            CRC-CCITT    Flag      1 byte
In the table, we summarize some datalink layer standards, identifying areas in which they differ:
• Point to point or multidrop?
• Size of transmitted frame?
• Sliding windows?
• Are the frames numbered?
• Error detection scheme used?
• How are frames delineated?
• What size addressing?
4.1.1 HDLC
HDLC is derived from the original IBM standard SDLC (Synchronous Data Link Control), and is commonly found in use at sites with mainframes. A derived protocol, LAPB (Link Access Procedure B), is an ISO layer 2 standard.
4.1.2 Ethernet
Ethernet is the term for the protocol described by ISO standard 8802.3. It is in common use for networking computers, principally because of its speed and low cost. In section 4.9, we examine the protocol used to resolve contention on an ethernet bus. Ethernet systems use Manchester encoded (see section 3.8), baseband signals.
4.1.3 PPP
PPP (the Point to Point Protocol) is a protocol specially designed for interconnecting two computers, particularly when one dials up the other. It is protocol independent - that is, we may use PPP to transport IP, or IPX, or any network layer protocol. PPP includes a protocol to assist in setting up higher layer communication.
4.1.4 LLAP
LLAP is the Localtalk Link Access Protocol found on every Macintosh. It has some interesting qualities. For example:
• Node IDs for each machine are dynamically assigned.
• CSMA/CA (Carrier Sense, Multiple Access, with Collision Avoidance) is used to reduce collisions.
4.2 Addressing
There are three main types of addresses needed at any level of the reference model:
1. Machine (or source, or destination) addresses.
2. A broadcast address (for messages to all machines).
3. Multicast addresses (for messages to groups of machines).
The first two are found on all systems, but the third may not be.
Ethernet: Each ethernet card is preprogrammed with a specific ethernet address. The addresses are six bytes, and are normally written as a series of six bytes separated by colons or full stops:
00:00:0c:00:42:b1
Each manufacturer of ethernet cards has a license to produce cards starting with a particular prefix, so you can tell who manufactured an ethernet card remotely, if you can find out its ethernet address. The ethernet broadcast address is ff:ff:ff:ff:ff:ff. Ethernet has no defined multicast addresses, but it is possible to use any unused broadcast address. Win95 machines use the ethernet address 03:00:00:00:00:01 to communicate with each other.
Macintosh LLAP: The LLAP uses single byte addresses. Addresses in the range 1 to 127 are assigned to client machines, those in the range 128 to 254 are for server machines. The address 0 is unused, and the address 255 is a broadcast address.
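The ethernet addressing conventions described above are easy to handle in code. A sketch (the function names are ours; the sample address and the manufacturer-prefix rule are from the text):

```python
def parse_ether(addr):
    """Split a printed ethernet address (colons or full stops) into six bytes."""
    return bytes(int(b, 16) for b in addr.replace(".", ":").split(":"))

def is_broadcast(addr):
    """The broadcast address is all ones: ff:ff:ff:ff:ff:ff."""
    return parse_ether(addr) == b"\xff" * 6

def vendor_prefix(addr):
    """The first three bytes are the manufacturer's licensed prefix."""
    return parse_ether(addr)[:3].hex(":")

print(vendor_prefix("00:00:0c:00:42:b1"))   # 00:00:0c
print(is_broadcast("ff:ff:ff:ff:ff:ff"))    # True
```

(The `bytes.hex` separator argument requires Python 3.8 or later.)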
4.3 Modes
When we communicate between two machines, we classify the communication method into the following broad areas:
• Half duplex - each system transmits alternately (polite conversation)
• Full duplex - systems transmit at the same time (New Zealand conversation)
• Simplex - one way transmission (Politician's conversation)
We also differentiate between:
• Connection oriented, and
• Connectionless protocols.

Name                 Method          Advantages     Disadvantages
Connection oriented  set up link,    secure,        slow
                     transfer data,  ordered
                     tear down link
Connectionless       transfer data   no overheads,  loss of data
                                     fast

CCITT uses connection oriented protocols just about everywhere, resulting in significant overheads.
4.4 Framing
How can we frame data? We have three possible methods:
1. Put a count in the data. This scheme is very prone to error - if you miss the 'count byte', you may misinterpret the received data. It is seldom used.
2. Use physical layer violations at the beginning and end of the data. We have already seen how I2C uses special start and stop conditions by violating the normal operational rules for I2C data (see section 1.5.8).
3. Use bit or byte stuffing to differentiate between control and data signals. (The most common method.)
4.4.1 Bit stuffing
With bit or byte stuffing, we specify an illegal bit or byte pattern in our data. If this pattern is received, it is outside of the frame. In bit stuffing, we use six 1s in a row as the frame marker.
Q: What if the data includes five (or more) 1s in a row?
A: We stuff in an extra bit after every fifth 1. At the receiving end, if we receive five 1s followed by a 0, we remove the 0. This is done by hardware, and is a standard framing method.
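The stuffing and unstuffing rules above are mechanical enough to sketch directly (in practice this is done in hardware; the function names and list-of-bits representation are ours):

```python
def bit_stuff(bits):
    """After every run of five 1s, insert a 0, so the data stream can
    never contain the six-1s frame marker."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            out.append(0)        # the stuffed bit
            run = 0
    return out

def bit_unstuff(bits):
    """Remove the 0 that follows any run of five 1s."""
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            i += 1               # skip the stuffed 0
            run = 0
        i += 1
    return out

data = [1, 1, 1, 1, 1, 1, 0, 1]
print(bit_stuff(data))                        # [1, 1, 1, 1, 1, 0, 1, 0, 1]
print(bit_unstuff(bit_stuff(data)) == data)   # True
```

Note that the stuffed 0 is inserted after every fifth 1 regardless of what the next data bit is, which is what makes the receiver's rule (drop the 0 after five 1s) unambiguous.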
4.4.2 Byte stuffing
On some equipment (particularly Burroughs/Unisys), byte stuffing is used. A special character STX (in the ASCII table) identifies the start of text, ETX the end. If you want to transmit an STX or ETX you precede it with DLE. The receiver looks for DLEs and removes them, accepting the next character as a data value (whatever it is).
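The DLE scheme can be sketched as follows. Note one assumption: the text only mentions escaping STX and ETX, but a DLE in the data must also be escaped (DLE DLE), or it would make the receiver treat the following byte as literal:

```python
STX, ETX, DLE = 0x02, 0x03, 0x10   # ASCII control codes

def frame(payload):
    """Wrap payload in STX...ETX, escaping control bytes with DLE."""
    out = [STX]
    for b in payload:
        if b in (STX, ETX, DLE):   # DLE itself must also be escaped
            out.append(DLE)
        out.append(b)
    out.append(ETX)
    return bytes(out)

def unframe(data):
    """Recover the payload; DLE marks the next byte as literal data."""
    assert data[0] == STX and data[-1] == ETX
    out, it = [], iter(data[1:-1])
    for b in it:
        if b == DLE:
            b = next(it)
        out.append(b)
    return bytes(out)
```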
4.5 Error detection
It is possible to use ad-hoc methods to generate checksums over data, but it is probably best to use standard systems with guaranteed and well understood properties, such as the CRC6. The CRC is commonly used to detect errors. The CRC systems treat the stream of transmitted bits as a representation of a polynomial with coefficients of 1:

10110 = x^4 + x^2 + x^1 = F(x)

Checksum bits are added to ensure that the final composite stream of bits is divisible by some other polynomial g(x). We can transform any stream F(x) into a stream T(x) which is divisible by g(x). If there are errors in T(x), they take the form of a difference bit string E(x), and the final received bits are T(x) + E(x). When the receiver gets a correct stream, it divides it by g(x) and gets no remainder. The question is: how likely is it that T(x) + E(x) will also divide with no remainder?
Single bits? - No. A single bit error means that E(x) will have only one term (x^1285, say). If the generator polynomial has the form x^n + ... + 1, it will never divide evenly.
Multiple bits? - Various generator polynomials are used, with different properties. One factor of the polynomial must be x^1 + 1, because this ensures that all odd numbers of bit errors (1,3,5,7...) are detected.
6. Cyclic Redundancy Code.
Some common generators:
• CRC-12 - x^12 + x^11 + x^3 + x^2 + x^1 + 1
• CRC-16 - x^16 + x^15 + x^2 + 1
• CRC-32 - x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + x^11 + x^10 + x^8 + x^7 + x^5 + x^4 + x^2 + 1
• CRC-CCITT - x^16 + x^12 + x^5 + 1

This seems a complicated way of doing something, but polynomial long division is easy when all the coefficients are 1. Assume we have a generator g(x) of x^5 + x^2 + 1 (100101) and the stream F(x): 101101011. Our final bit stream will be 101101011xxxxx, where the five unknown bits are the checksum. We divide F(x), shifted left five places, by g(x), and the remainder is appended to F(x) to give us T(x):

        101001000
       ______________
100101 ) 10110101100000
         100101
         ------
           100001
           100101
           ------
              100100
              100101
              ------
                  01000    (remainder)

We append our remainder to the original string, giving T(x) = 10110101101000. When this stream is received, it is divided again, and will have no remainder if the stream was received without errors.
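The same division can be checked mechanically. This sketch (the function name is our own) does the mod-2 long division on lists of bits:

```python
def crc_remainder(bits, gen):
    """Remainder of bits * x^(len(gen)-1) divided by gen,
    with all coefficients taken mod 2 (XOR arithmetic)."""
    work = bits + [0] * (len(gen) - 1)      # shift left to make room for the checksum
    for i in range(len(bits)):
        if work[i] == 1:                    # the divisor 'goes into' this position
            for j, g in enumerate(gen):
                work[i + j] ^= g
    return work[-(len(gen) - 1):]

F = [1, 0, 1, 1, 0, 1, 0, 1, 1]     # F(x) = 101101011
g = [1, 0, 0, 1, 0, 1]              # g(x) = x^5 + x^2 + 1
rem = crc_remainder(F, g)           # the five checksum bits
T = F + rem                         # the transmitted stream T(x)
```

Running this reproduces the worked example: the remainder is 01000 and T(x) = 10110101101000.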
4.6 Error correction
There are various methods used to correct errors. An obvious and simple one is to just detect the error and then do nothing, assuming that higher layers will correct the error.
4.6.1 Hamming
The Hamming distance is a measure of how far apart two bit strings are. If we compare the two bit strings bit by bit, the Hamming distance is just the number of positions at which the bits differ.
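As a quick illustration (the function name is ours):

```python
def hamming_distance(x, y):
    """Number of bit positions at which two equal-length strings differ."""
    assert len(x) == len(y)
    return sum(a != b for a, b in zip(x, y))
```

For example, hamming_distance('10110', '10011') is 2: the strings differ in the third and fifth positions.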
If we had two bit strings X and Y representing two characters, and the Hamming distance between the two codes was d, we could turn X into Y with d single bit errors. If we had an encoding scheme (for say ASCII characters) and the minimum Hamming distance between any two codes was d + 1, we could detect d single bit errors7. We can correct up to d single bit errors in an encoding scheme if the minimum Hamming distance is 2d + 1. If we now encode m bits using r extra hamming bits to make a total of n = m + r bits, we can count how many correct and incorrect encodings we should have. With m bits we have 2^m unique messages, each with n illegal encodings, and:

(n + 1)2^m     ≤ 2^n
(m + r + 1)2^m ≤ 2^n
m + r + 1      ≤ 2^(n−m)
m + r + 1      ≤ 2^r
We solve this equation, and then choose R, the next integer larger than r.
Example: If we wanted to encode 8 bit values (m = 8) and be able to recognise single bit errors:

8 + r + 1 ≤ 2^r
9 ≤ 2^r − r
r ≈ 3.5
R = 4
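The same calculation can be sketched by finding R directly, testing successive integers instead of solving for r (function name ours):

```python
def check_bits(m):
    """Smallest integer r satisfying the Hamming bound m + r + 1 <= 2**r."""
    r = 1
    while m + r + 1 > 2 ** r:
        r += 1
    return r
```

For m = 8 this returns 4, matching the worked example above.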
4.6.2 Feed forward error correction
In the previous example, each transmitted encoding depends only on the data you wish to transmit. Convolutional codes allow the output bit sequence to depend on previous sequences of bits. The resultant bit sequence can be examined for the most likely output sequence, given an arbitrary number of errors. This encoding technique is computationally inexpensive, and is commonly used in modems.
4.7 Datalink protocols
These protocols can be simple - for example, if we do not care whether the other machine receives the frame, we can just send it. This is what is done in ethernet:

Transmit(line,data);
7. Because the code d bits away from a correct code is not in the encoding.
A slightly more sophisticated protocol might be one where the transmitter wants a positive acknowledgment that the frame has been received.

Transmit(line,data);
while not Receive(line)=ACK do {nothing};

The previous algorithm fails completely if no ACK is received. The general solution for this sort of problem is to introduce timeouts into our protocols to handle corruption or loss of messages.

repeat
  Transmit(line,data);
  SetSignal(timeout);
  while (Receive(line)<>ACK) AND not timedout do {nothing};
  if LastReceivedData=ACK then ReceivedOK := TRUE
until ReceivedOK;

A closer examination of this code indicates that it will also fail quickly under some circumstances8, and it is clear that we have to code this carefully9. The three army problem also demonstrates that there are some simple situations with no deterministic solution. This particular problem is encountered when one or other computer using a connection oriented protocol attempts to shutdown or disconnect. We use protocols that give a good likelihood of success, or synchronize our machines in these circumstances. If the transit time for a message is long, simple mesg-ack protocols can be unusable. If our RTT10 is large, our data throughput can be reduced dramatically.
8. You might want to examine the situation when an ACK for a previous message arrives late.
9. In class we will examine six protocols in more detail - graded from simple ones like the first given above, right through to safe ones.
10. Round Trip Time - If our messages were sent via satellite, we may have a large RTT.
[Diagram: a transmit window over frames 0-8 with a timeout interval per frame, and a receive side where out-of-order frames are buffered by the datalink layer]
Figure 4.1: Sliding window protocols.

Example: With 1,000 byte messages sent via satellite (RTT=0.2sec) at 1 Megabyte/sec, our speed is 1,000/0.2 = 5,000 bytes/second. This is obviously very slow and wasteful - it should be 1,000/0.002 = 500,000 bytes/sec. To solve this problem, we use sliding window protocols.
4.8 Sliding windows
When the ends of a transmission are remote from each other, the time for an acknowledgment that a frame has been received can become significant. In satellite transmissions for example, the ’bounce’ time can be quite long. As well, some networks may store and forward data. For this reason, ’sliding window’ protocols are used. These protocols attempt to handle any sequence of garbled, lost and out-of-sequence frames. The method involves each frame containing a sequence number, and both the sending and receiving devices keep track of these numbers. The transmitter keeps track of all the unacknowledged frames in its sending window for retransmission if needed. Both the sender and the receiver require buffers to store the (unacknowledged) frames. When the window size is 1, the method reverts to a simple ’msg-ack’ scheme. There are two flavours of sliding window protocols:
Go-back-N: The receiver stops acknowledging messages if one is lost. When the timer for frame #3 times out, since an ACK has not been received for it, the transmitter has to resend from frame #3. We have to buffer old transmitted messages; the size of this buffer is the transmitter window size.

Selective Repeat: The receiver acknowledges all frames it receives, and NAKs the ones it did not get. It can NAK just by not bothering to acknowledge the message. Note: We still have transmit window buffers and timers for each buffer. We also now have receive window buffers. Note: Selective repeat is used in TCP.

Piggybacks: Often messages are going both ways at the same time. Messages can take along an acknowledgment for some previously received message. HDLC (high level data link control) uses these piggybacked ACKs. There are fields in the frame for both the message number and the last received message number.
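A toy go-back-N sender makes the retransmission behaviour concrete. This simulation is our own sketch, not from the text: frames listed in drop_first_tx_of are lost on their first transmission only, ACKs are cumulative, and the returned log lists every frame that went on the wire:

```python
def go_back_n(n_frames, window, drop_first_tx_of):
    """Simulate a go-back-N sender over a lossy link."""
    log, base, next_seq = [], 0, 0
    dropped = set(drop_first_tx_of)
    while base < n_frames:
        # fill the transmit window
        while next_seq < min(base + window, n_frames):
            log.append(next_seq)
            next_seq += 1
        # receiver delivers in order; cumulative ACK advances the window base
        while base < next_seq and base not in dropped:
            base += 1
        if base < next_seq:            # timeout: resend everything from base
            dropped.discard(base)      # the retransmission gets through
            next_seq = base
    return log
```

With five frames, a window of three and frame 2 lost, the wire carries 0 1 2, then 2 3 4 again from the point of loss: go_back_n(5, 3, [2]) returns [0, 1, 2, 2, 3, 4].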
4.9 MAC sublayer
We can either have point to point connections between machines, or a shared (bus) connection. On a bus system we have two main problems:
1. how to get (controlling) access to the media, and
2. what to do if there is a collision11.
The MAC (Media Access Control) sublayer of the datalink layer handles this. ALOHA protocols12 are commonly used.
• Simple ALOHA - Listen and then transmit if free
• Slotted ALOHA - Wait for slot, Listen and then transmit if free
[Graph: utilization versus attempts per frame time - simple ALOHA peaks at 0.18 near 0.5, slotted ALOHA at 0.36 near 1.0]
Figure 4.2: Utilization of links using ALOHA protocols.

In figure 4.2, we see the relative efficiency of links using both simple and slotted aloha. The utilization does not climb over 1/e, but this does not mean that we can only have this limited utilization of the media. These values represent the situation when all nodes on a shared bus system are all attempting to transmit with equal probability. In an ethernet system13, machines on a smaller network can utilize close to 100% of the bus bandwidth.
4.9.1 CSMA/CD
Ethernet uses a system commonly called CSMA/CD. If a station wishes to send, it first checks the shared line for a carrier. Ethernet (manchester encoded) signals are added to a -1V DC signal. If the signal on the line is about zero volts, then the station can assume that the shared line is free, and can transmit:
while CarrierSense(line) do;   { Wait till line is free }
Transmit(line,data);           { Transmit the data }

11. There are also various collision free protocols. These protocols generally use a 'contention' phase of operation in which one transmitting device acquires the channel. An example of this is the I2C interface, which ensures that the device with the highest number succeeds. This contention phase is of course an overhead, but device addresses often have to be transmitted anyway.
12. The ALOHA protocols were originally developed at the University of Hawaii (hence the name) to allow the distant parts of the campus network to inter-operate. The links were radio transceivers, all operating at a single frequency.
13. Ethernet uses slotted aloha.
Note: In a system with two machines separated by 500m of RG11 cable, there may be a time difference of 2.5 µs. The previous simple algorithm may fail to ensure that only one machine uses the shared (ethernet) bus at a time. Ethernet catches failures afterwards, using collision detection and binary exponential backoff (BEB). Each station listens to what it transmits. If the message becomes garbled, it assumes that a collision has occurred, and then sends a burst of noise to ensure every station knows a collision has occurred. The station then waits for a time before retrying:
backoff := 1;
repeat
  while CarrierSense(line) do;        { Wait till line is free }
  if Transmit(line,data)=CollisionDetected then
    begin
      Transmit(line,noise);
      delay(random(backoff));
      backoff := 2*backoff;
      transmitOK := FALSE
    end
  else
    transmitOK := TRUE
until transmitOK OR (backoff>65536);
if backoff>65536 then
  { the transmission failed... }
else
  { it was OK! }
Note: There is a (small) possibility that a transmission will fail because 2 stations select the same random times to wait - 16 times in a row. This event is unlikely to occur on the USP network between now and the end of the universe.
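The backoff loop translates readily into other languages. This is a hedged sketch of the same idea: the callback-style transmit function is our own device, and the 51.2 µs slot time (the classic 10 Mb/s ethernet slot) is our assumption, not from the text:

```python
import random

def send_with_backoff(transmit, slot_time=51.2e-6, max_backoff=65536):
    """Binary exponential backoff, following the pseudocode above.
    'transmit' returns True on success, False on collision.
    Returns the list of backoff delays (in seconds) that were chosen."""
    delays, backoff = [], 1
    while backoff <= max_backoff:
        if transmit():
            return delays
        # delay(random(backoff)): pick a random number of slots to wait
        delays.append(random.randrange(backoff) * slot_time)
        backoff *= 2
    raise RuntimeError("transmission failed after repeated collisions")
```

After the first collision the station always waits 0 slots (randrange(1) is 0); the waiting range then doubles on each subsequent collision, up to the 65536 limit in the pseudocode.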
4.10 Diagnostic tools
These tools are often platform dependent - for example on a P.C., if we were using packet drivers, there are a set of small diagnostic tools (pktsend, pktchk) which test the datalink software independently of other networking software installed on the computer. Most networking cards come with card diagnostic software, but these will not test your datalink layers. You may also find useful: • Arp - indicates datalink and (IP) network addresses observed on a network.
• Ping - a network-layer aware echo request program (IP). • Ipxping - a network-layer aware echo request program (for IPX).
• Packetview, gobbler, packetman, packetmon - tools to capture and display frames on your network. Note: Tools that capture frames must put the ethernet card in a promiscuous mode. In this mode they will capture all frames - not just broadcasts and those addressed to your machine. This can cause lots of interrupts to your processor, and it is common for these tools to lock up the computers on which they run. In addition, some ethernet cards discard bad frames without telling you (i.e. no interrupts). This can lead you to think there are no bad frames on your ethernet. With packetman, we can capture packets between any two machines, as long as one of them is on our network. The program shows: Top Window - One line summary of all the packets. List of all the frames captured. Center Window - Layer by layer breakdown of a selected frame (D/L, N/W, Transport, Application) Lower Window - Hex byte - by-byte display of the selected frame.
Chapter 5 Layer 3 - Network
Definition: The network layer handles routing and addressing for messages between machines that may not reside on the same local network. In an extended network, we are concerned with limiting the range of messages intended only for the local network, and routing messages that are to be delivered to a remote network. Network addresses are specified both for the source and the destination of the data. The OSI model allows for any addressing scheme, and specifies codes for all the common addressing formats1 . The format specification part has plenty of room for expansion for future addressing schemes. The addresses may be of variable length. To help identify the source and destination of messages, it is common to partition machine addresses into: • a host part, and • a network part2 . This simplifies the task of our software responsible for delivering messages. The messages contain destination addresses (and normally source addresses), and it is easy for the software to determine if the message should be delivered to a machine on the local network, or sent to another network. The difficult question here is: Which other network? There is no simple answer, and the routing software on an extended network may be complex.
1. Formats for telephone numbers, ISDN numbers, telex numbers, and so on.
2. Sometimes we reuse datalink addresses, but sometimes we use a whole new addressing scheme for these network addresses.
Service provided to the transport layer:
The network layer hides the network type and topology from the transport layer. It also provides a consistent addressing scheme. The OSI framework allows for two types of service: • Connection oriented, and • Connectionless (datagrams). The OSI network model specifies the following service areas: Connection and disconnection: Connection primitives provide for setting up a connection through the network. The disconnection primitives provide for the orderly termination of the connection. Data transfer/expedited data: Data transfer primitives provide for ordered transfer of data through a previously set up connection. The expedited data transfer primitives allow for a data packet to be sent ahead, bypassing the normal ordering and queuing schemes. Data transfer/unitdata: Unitdata is a connectionless data transfer - a datagram. Reset: Reset primitives provide for recovery after detection of some failure in the system. All data is lost. Report/status: Status services provide information on the status of the network. Network interconnection: The interconnection of networks (especially dissimilar ones) is a complex issue. Some networks cannot be connected together without losing some property of one of the networks. For example: the Token Bus network has a 'fast acknowledgment' feature. If a bridge acknowledges a received frame, and then that frame cannot be routed to the destination, the bridge has 'lied'. If on the other hand the bridge does not acknowledge the frame, the sending unit will deduce that the destination is not available, which may also not be true.
5.1 Sample standards
In table 5.1, we summarize some network layer standards, identifying areas in which they differ:
Name  Addressing  OOB  Windows  Window size  Packet size  Numbering  Fragment
IP    4           Y    N        -            65,536       N          Y
IPv6  16          Y    N        -            65,536       N          Y
IPX   10          N    N        -            512          N          N
X.25  variable    Y    Y        8 or 128     128          Y          Y
Table 5.1: Sample network layer standards.

• Bytes for address?
• Out of band data?
• Sliding windows?
• Size of the window?
• Size of packets?
• Numbering scheme?
• Allow fragmentation?

IP and IPv6
IP3 is perhaps the most widely distributed network layer protocol. It belongs to a set of protocols, called the IP suite, developed over the last 25 years. Initially the IP suite was developed for the US military as a research project into fault4 tolerant networks. The protocols cover from network to application layers, and are continually being developed. IPv6 is a development of IP, giving a larger address space, and support for alternative carriers. IP is documented in a set of documents called the RFCs5. There is an RFC for every protocol in the IP suite. An RFC is initiated by anyone who wishes to specify a new protocol, and RFCs are commented on/vetted/improved by the internet community before final distribution.

IPX
As a contrast, IPX6 is a network layer protocol for use with Netware file servers. It is distributed by Novell Inc, and is a proprietary protocol. It is seldom used for anything except interaction with Netware file servers.

X.25
Telecom provide 'connection oriented' service with X.25. Many users put their own protocols on top of it. This may lead to inefficiencies - connection oriented services on top of connection oriented services.

3. Internet Protocol.
4. The fault the US military was concerned with could tactfully be called a 'nuclear' issue...
5. Request for Comments.
6. Internetwork Packet eXchange.
X.25 defines an interface between DTE and DCE devices on public data networks. It is a CCITT recommendation, and has been adopted by every country providing PSDN services. X.25 in no way defines the internal methods by which the PSDN routes and switches the service. The facilities provided by X.25 include: • Connection oriented data transfer • Connectionless (datagram) data transfer • Sliding windows up to 128 • Selection of different carriers • Various charging options
5.2 Addressing
5.2.1 IP Addressing
The IP network layer addressing scheme uses four bytes7, and is often written as dotted decimal, or hex numbers:

Decimal: 156.59.209.1
Hex:     9C.3B.D1.01

This address defines an interface, not a host - a host may have one, two or more interfaces, each with different IP network layer addresses8.
• Each interface has at least one unique address.
• Machines on the same network have similar addresses.
• For every interface on an IP network, you have not only the IP address but also a mask, which defines the network part of the address.
7. Four bytes will allow over 4,000,000,000 different machine addresses, and this was considered adequate 25 years ago. However a wasteful allocation scheme has resulted in much of this address space being used up.
8. The IP network layer address is often called the 'IP address'.
5.2.2 IP network masks
An IP network mask looks like an address, but the meaning of the bits is different. In a network mask, the bits identify the network and host parts of the address:
• If the bit is a 1, it is part of the network address.
• If the bit is a 0, it is part of the host address.
• The host addresses are normally consecutive, but need not be9.
For example:

Interface address: 156.59.209.1  -> 10011100 00111011 11010001 00000001
Network mask:      255.255.252.0 -> 11111111 11111111 11111100 00000000
                                                       (AND)
Network part:      156.59.208.0  -> 10011100 00111011 11010000 00000000
Host part (last 10 bits):                                   01 00000001
The host is 156.59.209.1 on network 156.59.208.0, with a network mask of 255.255.252.0. Machines can determine if they are on the same network by ANDing their addresses with the network mask. If the resultant network addresses are the same, the two hosts are on the same network. Choosing an incorrect mask can result in confusion. Two special addresses are reserved on any IP network:
1. The one with the host part all 0s (the network address).
2. The one with the host part all 1s (the broadcast address).
You cannot allocate these addresses to machines; the all-1s address is used for broadcasting to all machines on the same network.
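The AND computation above can be checked with Python's standard ipaddress module (the helper name same_network is our own):

```python
import ipaddress

addr = ipaddress.ip_address('156.59.209.1')
mask = ipaddress.ip_address('255.255.252.0')

# network part: address AND mask; host part: address AND NOT mask
network = ipaddress.ip_address(int(addr) & int(mask))
host = ipaddress.ip_address(int(addr) & ~int(mask) & 0xFFFFFFFF)

def same_network(a, b, m):
    """True if interfaces a and b share a network under mask m."""
    m = int(ipaddress.ip_address(m))
    return (int(ipaddress.ip_address(a)) & m) == (int(ipaddress.ip_address(b)) & m)
```

Here network comes out as 156.59.208.0 and host as 0.0.1.1, matching the binary working above.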
5.2.3 IPX addressing
The IPX network layer protocol has: • a network part (four bytes), and
9. It is allowable to choose a mask with the host bits scattered throughout the mask bits. This would result in host addresses scattered over a range, not consecutive.
• a host part (six bytes) This gives a total of ten bytes for the IPX address. It is written as follows: 93:3B:01:00:00:08:C0:31:55:24
From this address it is easy to determine the datalink address of the machine (00:08:C0:31:55:24) and the network on which it is found (93:3B:01:00).
5.2.4 Appletalk Addressing
We have already seen the 1 byte addressing scheme used in the localtalk datalink layer. The Apple network address is a 3 byte one, a 2 byte network part and 1 byte for the host. Once again, this addressing is hidden from the user, and dynamically set up10 . Note: the Mac architecture imposes a low limit on the number of interconnected Macs on a network.
5.3 IP packet structure
Figure 5.1 shows the structure of an IP packet. The 'ihl' field gives the header length in 32 bit chunks. Notice that our IP addresses fit into the 32 bit source and destination address fields. The 'TTL' field gives the time to live for the packet. Each time the packet passes through a router, it is decremented by 1. If TTL reaches zero, the router discards the packet and sends an ICMP 'time exceeded' report back to the source address. This has two uses:
1. A packet cannot loop forever.
2. You can test reachability by artificially setting TTL (see the item on traceroute in section 5.8).
10. You don't have to do anything. Macs configure themselves.
[Figure: IP packet layout in 32-bit rows - ver, ihl, type, total length | id, fragment offset | TTL, protocol, header checksum | source address | destination address | (options) | body]
Figure 5.1: IP packet structure.
5.4 Allocation of IP Addresses
Worldwide, there are five classes of IP address ranges:

Class  Who for             Prefix     Size   Number
A      Huge organization   0xxxxx...  2^24   2^7
B      Large organization  10xxxx...  2^16   2^14
C      Small organization  110xxx...  2^8    2^21
D      Multicast           1110xx...  1      2^28
E      Reserved            11110x...  1      2^27
Note: there is no way to ask for a single organizational IP address, and this accounts for the unfortunate situation we are getting to - nearly all the IP addresses have been allocated! There are some solutions to the IP address space problem:
1. Private IP addresses - accessible via a gateway.
2. IPv6 - has 16 byte addresses.
3. Reuse of addresses in remote regions, with router support.
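Classifying an address from the leading bits of its first byte can be sketched as follows (function name ours; addresses with prefix 11111 also fall into class E here):

```python
def ip_class(first_byte):
    """Classify an IPv4 address by the leading bits of its first byte."""
    if first_byte & 0x80 == 0x00:
        return 'A'   # 0xxxxxxx
    elif first_byte & 0xC0 == 0x80:
        return 'B'   # 10xxxxxx
    elif first_byte & 0xE0 == 0xC0:
        return 'C'   # 110xxxxx
    elif first_byte & 0xF0 == 0xE0:
        return 'D'   # 1110xxxx
    else:
        return 'E'   # 1111xxxx
```

For example, the 156.59.x.x addresses used earlier (156 = 10011100) are class B.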
5.5 Translating addresses
Computers on a network often need to translate to and from network and datalink addresses. An example: If a machine knows the network address of a machine it wishes to transmit to, it can find out if the other machine belongs to the same network. If it does, the machine needs to find out the datalink address of the other machine, so that it can directly send the message. The protocol to do this is called ARP11 .
11. Address Resolution Protocol.
ARP
An ARP request for the specified network address is sent to the datalink broadcast address. Machines that know the translation between the network and datalink address respond with an ARP response, containing the datalink address for the specified network interface. Machines normally maintain ARP tables containing results of recent ARP requests. You may query these tables using the arp command:

opo 30% arp -a
manu.usp.ac.fj (144.120.8.10)  at 0:0:f8:5:6a:a1
kula.usp.ac.fj (144.120.8.11)  at aa:0:4:0:b:4
teri.usp.ac.fj (144.120.8.1)   at 0:0:f8:31:1c:da
?              (144.120.8.125) at aa:0:4:0:32:5
?              (144.120.8.251) at aa:0:4:0:7f:6
opo 31%
5.6 Routing
The particular routing scheme used by the network layer is normally hidden from the network layer user. However there are two main schemes used internally:
• Fixed routing computed at connect time
• Individual packet routing done dynamically
If you have a group of networks connected by routers, we have to distribute routing information. RIP12 is one such router protocol. Routers listen for, and broadcast, RIP packets out all interfaces. In this way, the routers can learn about adjacent networks. You can query the state of routing tables on most routers. The structure of IP subnetting minimizes the number of routes that a router has to keep track of. It is common for smaller machines to be given a default route (or gateway) rather than letting them sort it out using RIP. There are other protocols for routing, such as IGRP - the Interior Gateway Routing Protocol.
5.6.1 Routing Protocols
There are many routing algorithms, and we will just look at a few.
12. Routing Information Protocol.
Static or fixed routing
We might use a technique such as the shortest path - here a path could be how long or how many hops or some other metric.
[Network diagram: machine #1 and machine #2 joined through routers A-F, with links AB, BC, AD, BE, DE, EF and FC]
If hops were used for our metric, ABC is the shortest path from machine #1 to machine #2. However, if delay was used, and AB=1, AD=3, BE=2, DE=1, EF=1, BC=6 and FC=1, then the metric attached to ABC is 7 and ABEFC is only 5. In the above simple example, we could use fixed tables, and preload each router from these tables. There are various algorithms involving walks through the network which calculate the shortest path through a network, but you should note that static routing will not respond to changes in the network. Dynamic routing One common method used to dynamically find effective routes is the ’distance vector’ scheme, used by RIP. In distance vector routing, each router maintains a table of distances to all other routers indexed by router number. Periodically, attached routers exchange information about their tables. In the following diagram, we have an internetwork with seven routers. The figures represent the metrics attached to each route, and note that the route BD is about to change from a metric of 3 (meaning not so good) to 1 (meaning good).
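The delay-metric comparison above can be reproduced with Dijkstra's algorithm (one of the 'various algorithms involving walks through the network'; this particular sketch is ours) on the example link metrics:

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra's algorithm; graph maps node -> {neighbour: metric}."""
    dist, prev = {src: 0}, {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float('inf')):
            continue                    # stale queue entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst             # walk the predecessor chain back to src
    while node != src:
        node = prev[node]
        path.append(node)
    return dist[dst], path[::-1]

# the delay metrics from the example
edges = {('A','B'): 1, ('A','D'): 3, ('B','E'): 2, ('D','E'): 1,
         ('E','F'): 1, ('B','C'): 6, ('F','C'): 1}
graph = {}
for (u, v), w in edges.items():
    graph.setdefault(u, {})[v] = w
    graph.setdefault(v, {})[u] = w

shortest_path(graph, 'A', 'C')   # → (5, ['A', 'B', 'E', 'F', 'C'])
```

The result confirms the text: with these delays, ABEFC (total 5) beats the two-hop path ABC (total 7).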
[Internetwork diagram: seven routers A-G with a metric on each link; the metric of link BD is about to change from 3 to 1]
Before the change, router D has the following table, giving its view of the best routes:

Destination:  A  B  C  D  E  F  G
Delay:        5  3  3  0  4  1  2
Route:        A  B  F  -  F  F  F

Router B then sends to D its version of the router table:

Destination:  A  B  C  D  E  F  G
Delay:        3  0  2  1  5  2  3
Route:        A  -  C  D  A  D  C

Router D then rewrites its table to reflect the new information:

Destination:  A  B  C  D  E  F  G
Delay:        4  1  3  0  4  1  2
Route:        B  B  F  -  F  F  F
Router D can determine from router B’s information that there is a better route to router A than the direct one. Note: This algorithm responds to good news quickly and bad news slowly13 .
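Router D's table rewrite is one step of the distance-vector update rule: for each destination, adopt the route via B whenever the BD link metric plus B's advertised delay beats the current entry. A sketch of that step using the tables above (the function is ours; ties, as for destination C, keep the existing route):

```python
def dv_update(my_table, neighbour, link_cost, neighbour_table):
    """One distance-vector step: my_table maps dest -> (delay, next hop);
    neighbour_table carries the neighbour's advertised delays."""
    new = dict(my_table)
    for dest, delay in neighbour_table.items():
        via = link_cost + delay
        if via < new.get(dest, (float('inf'),))[0]:   # strictly better only
            new[dest] = (via, neighbour)
    return new

# Router D's table before the update
D = {'A': (5, 'A'), 'B': (3, 'B'), 'C': (3, 'F'), 'D': (0, None),
     'E': (4, 'F'), 'F': (1, 'F'), 'G': (2, 'F')}
# Router B's advertised delays; the BD link metric is now 1
B = {'A': 3, 'B': 0, 'C': 2, 'D': 1, 'E': 5, 'F': 2, 'G': 3}

D_new = dv_update(D, 'B', 1, B)
```

D_new reproduces the rewritten table in the text: A at delay 4 via B, B at delay 1 via B, and the remaining routes unchanged.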
5.7 Configuration
5.7.1 Addressing
We have already seen how datalink addresses are configured or set:
• Ethernet addresses are set by having a PROM on the ethernet card.
• Token ring systems generally have an 8 bit switch for setting the token ring address.
• LocalTalk uses a dynamic scheme, to make the Mac networks self configuring. When a Mac starts up, it chooses a datalink address, and then broadcasts to the chosen address. If a machine replies, the Mac chooses a different address, and tries again.
There are several strategies for setting network layer addresses:
• Manually
• Determine it from the datalink address (as used by IPX)
• Dynamically retrieve one from a server
Dynamic methods are considered best here, as we can centrally administer the addresses used, and then publish them to the machines, without having to go to each machine and 'set' it. Machines wishing to find out their IP information broadcast a query. A server responds with the requested information. Here are common protocols:
RARP - Reverse Address Resolution Protocol: The Amoeba processors in the Mixed lab use RARP to configure and boot themselves. RARP translates a datalink address to a network address.
BOOTP - BOOT Protocol: This protocol is intended to be used for diskless booting of machines. It can also be used for IP configuration information.
DHCP - Dynamic Host Configuration Protocol: A newer version of BOOTP, with one major improvement - it allows for leasing of IP addresses.
13. The problem here is commonly known as the count-to-infinity problem. The routers slowly add to their metrics for the bad path - each router thinking it has a slightly better path through another misinformed router.
5.7.2 Routing
The configuration of routing information is normally best left to the protocols that are supposed to do it. However, sometimes you have to configure routing information.
IP machines: In Win95, you will have to set the default route in the IP properties window. UNIX and NT machines are generally self-configuring, but you may use the route command to add static routes. For example, to add a route to network 212.232.32.0 via gateway 128.32.0.130, use:
route add net 212.232.32.0 128.32.0.130
You may also use the netstat command to display RIP information for your local router:
netstat -r
IPX machines: Netware servers with multiple NICs can perform IPX (and IP) routing between the networks. In the Netware configuration script executed at boot time, you bind protocols to each of the NICs, and if IPX is bound to both NICs, routing software in the server will route between them. Note: In some of the Netware documentation, they incorrectly call this bridging!
Macs: Mac routers normally have both localtalk and ethertalk ports. Any configuration of the boxes is generally done over the network using Mac software. Macs have their own terminology - they use the term zone instead of network. When Macs are connected to a network with routers in it, the Mac chooser has an area for selecting which zone you are interested in. The area has names (not numbers) in it. These names are set by the router. You can either let the router choose names and numbers, or you can specify them.
5.8 Diagnostic tools
Network layer diagnostic tools are generally based on the original UNIX IP based systems, and examination of the relevant UNIX commands will help in understanding14 . You may find useful: arp: Displays and controls ARP tables. ping: Sends ICMP ECHO_REQUEST packets to network hosts. This command can be used to check for basic network connectivity:
manu> ping turing
PING turing.usp.ac.fj (144.120.8.250): 56 data bytes
64 bytes from 144.120.8.250: icmp_seq=0 ttl=64 time=1 ms
64 bytes from 144.120.8.250: icmp_seq=1 ttl=64 time=0 ms
----turing.usp.ac.fj PING Statistics----
2 packets transmitted, 2 packets received, 0% packet loss
round-trip (ms) min/avg/max = 0/0/1 ms
manu>
netstat: Displays routing and network status and statistics:
opo 36% netstat -r
Destination  Gateway         Netmask     Flags  Refs  Use     Interface
default      reba.usp.ac.fj              UGS    1     11012   ec0
144.120.8    opo.usp.ac.fj   0xfffffc00  U      10    423827  ec0
opo 37%
traceroute: Displays the route that packets take to the network host. This is done by setting the TTL value in an IP packet to 1. The first router discards it and returns an ICMP 'time exceeded' report, giving you the transit time to the first router. The TTL is then set to 2, giving you the transit time to the second router, and so on:
    manu> traceroute kai.ee.cit.ac.nz
    traceroute to kai.ee.cit.ac.nz (156.59.209.1), 30 hops max, 40 byte packets
     1  reba (144.120.8.16)  2 ms  2 ms  3 ms
     2  202.62.125.133 (202.62.125.133)  572 ms  579 ms  222 ms
     3  202.62.120.1 (202.62.120.1)  400 ms  393 ms  399 ms
     4  202.84.251.5 (202.84.251.5)  412 ms  595 ms  418 ms
     5  s4-3a.tmh08.hkt.net (205.252.128.157)  825 ms  806 ms  773 ms
     6  s4-3a.tmh08.hkt.net (205.252.128.157)  1141 ms  979 ms  786 ms
    ...

packetman: Other useful tools are packet capturing and analysis tools such as packetman.
[14] Use the man pages for each command.
Chapter 6 Layers 4,5 - Transport, Session
Definitions: The transport layer ensures a network-independent interface for the session layer. We can specify how secure we want our transmission to be in this layer. The transport layer isolates the upper layers from the technology, design and imperfections of the network. The session layer is closely tied to the transport layer (and often the software is merged together). The session layer is concerned with handling a session between two end processes. It will be able to begin and terminate a session, and provide clean 'break' facilities.

At these higher layers of the reference model, we are dealing with software, and we may be involved with:

• Writing software to interact with the protocol stack, and
• Configuring the protocol stacks.

When we write software for these layers, we use standard APIs [1].

OSIRM: In the transport and session layers, the reference model provides for:
[1] An API is the Application Programmer's Interface.
CHAPTER 6. LAYERS 4,5 - TRANSPORT, SESSION
    Name     Addressing         Communication        Windows  Window size
    TCP      IP address, port   session              Y        variable
    UDP      IP address, port   datagram             N        -
    NETBEUI  nodename           datagram & session   N        -
    SPX      IPX address, port  session              Y        8 or 128
Table 6.1: Sample transport layer standards.

• Setting up sessions with another entity on another machine,
• Synchronizing sessions at agreed points, and
• Handling interrupts and exceptions to the normal flow of information.

IP: The ARM [2] has only four distinct layers [3]:

• Network Interface - This layer encapsulates all the hardware dependencies.
• Internet - This is similar to the OSIRM network layer, and has only one implementation: IP.
• Host-Host - This is similar to the OSIRM transport layer, and has various implementations.
• Process - This is similar to the OSIRM application layer, and has protocols for just about anything.
6.1 Sample transport standards
In table 6.1, we summarize some transport layer standards, identifying areas in which they differ:

• Addressing?
• Type of communication?
• Sliding windows?
• Size of the window?

[2] The Arpanet Reference Model.
[3] The terms used are the ones from the ARM. ARM and OSIRM are distinct network architectures, though there is a rough correlation between some of the OSIRM layers and ARM ones. The ARM architecture is not a cut-down version of the OSIRM. It has a clear layered structure and has been useful for many years. The Internet is built on IP/ARM.
IP: The Internet protocol suite has two common transport protocols:
• TCP - Transmission Control Protocol. The extra information found in the PDU for TCP is: source, destination, sequence number, acknowledgement number, window size and so on [4].
• UDP - User Datagram Protocol. The extra information found in the PDU for UDP is: source, destination, length, checksum.
Netbeui: Netbeui [5] was developed by IBM in 1985 as a protocol focussed on small LANs, segmented into small groups of computers. It is commonly used in WfW, LAN Manager, and NT. In common with our other network and transport layer protocols, Netbeui can be found on any datalink type, and on many platforms.

SPX: SPX is the sequenced protocol used by Netware file servers. Originally, Netware used the network layer protocol IPX for file serving, but the demands of larger networks led to the introduction of a transport protocol.
6.2 Session standards and APIs
Most network programming involves using an API which provides a session layer service. The API provides a set of system calls, and software which implements some view or abstraction of the network. It is normal to use these programming abstractions when doing network programming: they allow you to model and understand the behaviour of the system. Before programming with one of these APIs, you need to understand the abstract model, not the underlying protocols.
[4] This is exactly what you would expect from a sliding window protocol.
[5] Network Basic Extended User Interface.
Netbios
Netbios [6] was developed by IBM. It is an interface to low-level networking software; through the API, you can send and receive messages to and from other Netbios machines. The Netbios calls on a PC are made through Interrupt 5c, and require your software to initialize and maintain an NCB [7] before calling the network software.

Sockets: The 'socket' is an abstraction for each endpoint of a communication link. The abstraction allows data communication paths to be treated in the same way as UNIX files. When a socket is initialized, the API system calls return a file descriptor number which may be used for reading and writing in the same way as those returned by file-opening calls. Protocol families supported include Appletalk, DECnet and IP. In the IP world, the socket API primitives support both TCP and UDP communications. Sockets are often used when implementing client/server applications.

Remote Procedure Calls: The RPC system was introduced by Sun, but unfortunately there are variations in its implementation. The intent is to provide a system in which networking calls look and act just like procedure calls. The program prepares parameters for the procedure call, and then calls a stub procedure. The stub procedure uses the RPC runtime software to transfer the parameters, returning only when the call is complete. RPC is responsible for ensuring that the parameter data is transferred and returned correctly.
6.3 Addressing
We may have a different addressing scheme at each layer. At the network layer, the address refers to an interface to a machine. This is sometimes called the NSAP [8] address. At the transport layer, we have TSAPs (Transport Service Access Points), typically an NSAP and a port number. This address identifies a particular data communication endpoint.
[6] A networking 'BIOS'. BIOS is the term used on IBM personal computers for the low-level software that hides the underlying hardware in the system. It is often found in a PROM. The letters stand for Basic Input Output Subsystem.
[7] Network Control Block.
[8] Network Service Access Point.
Netbeui and Netbios
The NCBs contain an address constructed from a logical session number and a Netbios name (up to 15 characters).

SPX: SPX endpoints are identified by an IPX address and a (16-bit) integer port number.

Sockets: When initializing a socket, we specify the 'address family', the 'transport mode' and the 'protocol'. With a normal IP socket, this may lead to a socket endpoint with:

• a protocol: TCP/IP,
• an address: 156.59..., and
• a port: 2145
6.4 Transport layer
In the transport layer, we see again systems introduced in earlier chapters: • Connectionless and connection oriented transfers • Sliding windows • Flow control • Error recovery. We will look at two of the IP protocols.
6.4.1 TCP
TCP is a connection-oriented and secure protocol, designed to be safe in a wide range of environments. It can be used on slow bit-rate point-to-point links via an undersea cable, or for fast LAN use. It is defined in RFC 793. TCP software packetizes a data stream in chunks of less than 64 kbytes and attaches extra sequencing and target process (port) information to each packet. Here is the TSAP header:
    Source port (16 bits)       | Destination port (16 bits)
    Sequence number (32 bits)
    Acknowledgement number (32 bits)
    Flags                       | Window size (16 bits)
    Checksum (16 bits)          | Urgent pointer (16 bits)
    Options
    Data
From the diagram, we can see that TCP supports piggybacked acknowledgements in a conventional go-back-N sliding window protocol.

Congestion: There is some effort in TCP to reduce congestion. TCP handles both congestion caused by a low-bandwidth network and that caused by a slow receiver. The TCP transmission software maintains a congestion window size variable. When transmission starts, the TCP software increases the transmission segment size until either it is the same as the requested segment size, or it gets no response [9]. Whenever a timeout occurs, the congestion window size variable is halved.
[9] Indicating that the link is congested, or perhaps doesn't support the packet size requested.
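The grow-then-halve behaviour described above can be sketched in a few lines. This is a toy model of the text's description only - real TCP slow start and congestion avoidance are more involved - and the function names and window sizes are invented for illustration:

```python
# Toy model of the congestion window behaviour described in the text:
# the sender grows its window until a timeout occurs, then halves it.

def next_window(cwnd, requested, timed_out):
    """Return the new congestion window size in segments."""
    if timed_out:
        return max(1, cwnd // 2)         # timeout: halve the window
    return min(cwnd * 2, requested)      # otherwise grow toward the requested size

def simulate(events, requested=64):
    """Run the window through a list of round trips (True = timeout)."""
    cwnd = 1
    history = []
    for timed_out in events:
        cwnd = next_window(cwnd, requested, timed_out)
        history.append(cwnd)
    return history

# Five good round trips, then two timeouts:
print(simulate([False] * 5 + [True, True]))
```

With that event sequence the window climbs 2, 4, 8, 16, 32 and then falls back to 16 and 8 - the halving on timeout is what keeps a congested link from being overdriven.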
6.4.2 UDP
The UDP header is much simpler:
    Source port (16 bits)  | Destination port (16 bits)
    UDP length (16 bits)   | UDP checksum (16 bits)
    Data
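All four UDP header fields are 16 bits, so the header can be packed with Python's standard struct module. A sketch - the port numbers are arbitrary, and the checksum is left as zero (which UDP over IPv4 permits):

```python
# Pack the four 16-bit UDP header fields in network byte order ("!").
import struct

def udp_header(src_port, dst_port, payload, checksum=0):
    """Build the 8-byte UDP header for the given payload."""
    length = 8 + len(payload)   # UDP length covers header + data, in bytes
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

hdr = udp_header(5000, 53, b"hello")
print(len(hdr), hdr.hex())   # 8 bytes: ports 0x1388/0x0035, length 0x000d, checksum 0
```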
6.5 Session layer
[Figure: the session-layer APIs (Netbios, Sockets, TLI, RPC, ...) sit at layers 4-5 above the transport protocols (TCP, Netbeui, SPX) and network protocols (IP, IPX).]
All of the session layer protocols give similar service: they provide peer to peer service between processes on (possibly differing) machines.
6.5.1 Sockets
This is an old [10] API, available on all platforms. WINSOCK is a standard API to sockets for the PC/Windows world. On UNIX and VMS, sockets are accessed using a library (libsocket.a) and C
[10] In the computing world, anything over 10 years old is considered old!
calls. A socket address is composed of the IP address of the target machine and a port number identifying which process. UNIX systems provide virtually all their services using sockets, using either TCP or UDP as a transport.

Question: How do two or more clients telnet to a system at the same time?
Answer: Each TCP connection is identified by the pair of endpoints (client address and port, server address and port), so many clients can be connected to port 23 at once; accept() returns a new socket for each incoming connection while the server keeps listening on port 23. The processes to handle services such as these are started as needed using the UNIX fork call. The code looks like this:

    repeat
        x := IncomingConnection;
        result := fork();
        if result = childprocess then
            processchild(x)
        else
            close(x)
    until TheWorldComesToAnEnd!

The general method for using sockets is as follows:

• Before referencing the socket - create it:
      int socket(int sock_family, int sock_type, int protocol);
• Bind the socket to a local address:
      int bind(int sock_descr, struct sockaddr *addr, int addrlen);
• Clients may issue a connect call:
      int connect(int sock_descr, struct sockaddr *peer_addr, int addrlen);
• Servers must listen:
      int listen(int sock_descr, int queue_length);
  and then accept incoming connections:
      int accept(int sock_descr, struct sockaddr *peer_addr, int *addrlen);

In Appendix C is an example of a small server using sockets.
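The create/bind/listen/accept/connect sequence above can be sketched as a minimal echo exchange; Python's socket module wraps the same BSD calls that the C prototypes describe. Binding to port 0 (let the OS pick a free port) and serving one connection in a thread are choices made for this sketch, not part of the text's example:

```python
# Minimal TCP echo exchange following the socket()/bind()/listen()/
# accept()/connect() sequence described in the text.
import socket
import threading

def serve_once(server):
    conn, _addr = server.accept()      # server accepts one incoming connection
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)             # echo it back

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)   # create
server.bind(("127.0.0.1", 0))                                # bind to a local address
server.listen(1)                                             # server must listen
port = server.getsockname()[1]

t = threading.Thread(target=serve_once, args=(server,))
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))                          # client issues connect
client.sendall(b"ping")
reply = client.recv(1024)
client.close()
t.join()
server.close()
print(reply)
```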
6.5.2 RPC
The other session layer APIs give one level of service, but require careful programming. In particular, the states of each end of a link are defined only by the programmer. The RPC model is that all interaction is like a procedure call. The stub software is automatically generated by the rpcgen tool, which ensures that call parameters are sent correctly and interprets the results.
[Figure: the RPC call path - client program, client stub procedures, network, server stub procedures, server program.]

On the client side:

• Client calls stub
• Stub formats arguments
• Stub uses the network to make the RPC call

On the server side:

• Stub receives the call
• Marshalls/converts arguments
• Calls the server

Main features:

• Parameter passing is call-by-value
• Must know the location of the server
• RPC has an exception mechanism to inform you of errors
• Idempotency: if calling a procedure twice has the same effect as calling it once, it is called idempotent.
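The idempotency definition above can be made concrete: "set the balance to 100" is idempotent (a duplicated, retried call changes nothing), while "add 100" is not - which is why at-least-once RPC semantics are only safe with idempotent calls. A sketch with invented names, not RPC library code:

```python
# Idempotent vs non-idempotent operations, as defined in the text.
state = {"balance": 0}

def set_balance(amount):
    """Idempotent: repeating the call has the same effect as one call."""
    state["balance"] = amount

def add_to_balance(amount):
    """NOT idempotent: each repeat changes the result."""
    state["balance"] += amount

set_balance(100)
set_balance(100)                 # a duplicated (retried) call is harmless
after_set = state["balance"]     # still 100

add_to_balance(100)
add_to_balance(100)              # a duplicated call here is a bug
after_add = state["balance"]     # 300, not 200

print(after_set, after_add)
```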
There are various call semantics available with RPC:

• Exactly once
• At most once
• At least once

Question: How does the server identify a procedure call over the network?
Answer: By number - a program id and a procedure number within that.
On server machines, there is often support software for registering RPCs. It is common (and easy) to use at-least-once semantics with idempotent calls. The usual way of writing RPC code is to:

• Design a series of sensible calls, with parameters and return data types.
• Specify those calls in an interface definition language.
• Use an IDL compiler (rpcgen) to generate client and server stub procedures.
• Link those stubs with your client and server code.

Example:

    program DATE_PROG {
        version DATE_VERS {
            long BM_DATE(void) = 1;
            string STR_DATE(void) = 2;
        } = 1;
    } = 0;

RPC is used by NFS (Network File System), AFS and WEBNFS. NFS uses idempotent calls and at-least-once semantics, and is stable even with severe network problems. Sun and others are promoting WEBNFS as a fileserver for the internet.
6.6 Configuration & implementation
6.6.1 UNIX
UNIX systems normally have networking built into the kernel (SunOS), or loaded as needed using loadable kernel modules (IRIX, Linux). They all come with IP as the standard networking protocol. Other protocols are considered an extra, but are often not difficult to add. IRIX and Linux systems both come with IP, SMB, Appletalk and Netware support off the shelf.

UNIX IP networking is built around the inetd daemon. When inetd is started at boot time, it reads its configuration information from /etc/inetd.conf and listens for connections on the specified internet sockets. When a connection is found on one of its sockets, it decides what service the socket corresponds to, and invokes a program to service the request. After the program is finished, it continues to listen on the socket. The relevant configuration files [11] are:

• /etc/rpc - maps RPC program names to program numbers.
• /etc/protocols - maps IP protocol names to their number in the IP header.
• /etc/services - maps internet services to TCP and UDP port numbers.
• /etc/inetd.conf - maps internet services to the software that handles them.
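The /etc/services mapping can be sketched with a small parser. The sample entries below are standard well-known port assignments, but the parser itself is illustrative - it is not how inetd actually reads the file:

```python
# Parse /etc/services-style lines: "name  port/protocol  [aliases]".
SAMPLE = """\
# service   port/protocol
ftp         21/tcp
telnet      23/tcp
domain      53/udp
"""

def parse_services(text):
    """Return a {(service, protocol): port} table."""
    table = {}
    for line in text.splitlines():
        line = line.split("#")[0].strip()   # drop comments and blank lines
        if not line:
            continue
        name, portproto = line.split()[:2]
        port, proto = portproto.split("/")
        table[(name, proto)] = int(port)
    return table

services = parse_services(SAMPLE)
print(services[("telnet", "tcp")])
```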
6.6.2 DOS and Windows redirector
DOS was developed without computer networks in mind, so network support is grafted onto the operating system. The core of this added facility is the network redirector. When an I/O call is made, the redirector examines it: if it is intended for the local machine, it is passed to the I/O subsystem; if it is intended for the network, it is passed to the network software. To install networking software on a DOS machine we must:

1. Install the NIC
2. Install the NIC driver software
3. Install the network layer software
4. Install the application layer (file server client)
[11] In typical UNIX style, configuration files are stored in the /etc directory.
5. Install the redirector

To uninstall, we remove the network software elements in the reverse order.
Detail: The mechanism used to perform OS calls in DOS is the software interrupt (not the subroutine call). We use these calls instead of subroutine calls for two reasons:

• the INT call preserves registers, and
• it is a very fast way of doing an indirect call (through a table).

The call:

• Make the INT call
• All registers are pushed on the stack
• A vector address is fetched
• Execution continues

The return:

• Execute an RTI (return from INT)
• Registers are pulled from the stack
• The program continues

When we add the redirector, it overwrites the INT vector table with its own addresses, and also keeps track of the old I/O addresses.

Question: Can we easily add other network protocols?
Answer: No - PC network cards are not constructed to be driven by multiple unrelated protocols. The way we run multiple protocols is by providing a software shim between the (single-tasking) ethernet card/card drivers and the multiple protocol stacks.
6.6.3 Win95/NT
In Win95, networking is still an add-on, but better integrated than in WfW or Win3.1. In NT the networking is built in, as in UNIX, and much more reliable. NT configuration is done through the Network control panel, which allows you to select:

• Application layer networking software
• Protocols
• Interfaces
6.7 Diagnostic tools
At these layers, there are so many protocols, standards, and abstractions that it is a bit hard to identify diagnostic tools.

• Inference - we can infer some properties of the transport or session layers from the behaviour of our application software. If the lower layers were tested and work, but the software doesn't, then perhaps these layers are faulty.
• Low-level monitoring - various software tools such as sniff and tcpdump can be used to monitor and check the sequence of transfers on a port-by-port basis.
• Satan - a tool which will check a remote system for ports in use.
Chapter 7 Higher layers
The presentation layer is concerned with the syntax of the data transferred. The application layer provides the user interface to the network wide services provided. Normally this layer provides the operating system interface used by user applications.
7.1 Sample standards
SNMP - The Simple Network Management Protocol is a protocol for remote network management. It is commonly used to monitor the behaviour of remote systems.
SMB - Server Message Blocks are a Microsoft/IBM protocol for file servers, commonly known as Windows networking.
NCP - The Netware Core Protocols are Netware's file system protocols.
DNS - The Domain Name Service supports the naming of entities on an IP network.
DES, RSA - The Data Encryption Standard, and Rivest, Shamir and Adleman's encryption method, are commonly used to ensure security on computer networks.

    Standard   Application area        Protocols
    SNMP       Network management      UDP/IP
    SMB        File server, printing   Many
    NCP        File server, printing   SPX/IPX, TCP/IP
    DNS        Name resolving          UDP/IP
    DES, RSA   Encryption              Any
CHAPTER 7. HIGHER LAYERS
7.2 Addressing
7.2.1 NCP
In the Netware world, servers have names, which may be up to 14 characters long. Objects within servers have names as well: • STAFF - the STAFF server • STAFF/Hugh - the Object ID Hugh on server STAFF. This naming scheme does not expand well. (How many STAFF servers are there in the world?)
7.2.2 IP and the DNS
Any IP-active interface can have a name attached to it. In addition, groupings (or regions) of interfaces can have names attached to them:

1. A host may have one or more names.
2. A name might not refer to a machine.
3. Hosts don't have to have names.
4. You cannot tell by looking at an IP name whether it refers to a host or a domain.

The IP name space is hierarchical:
[Figure: the IP name hierarchy - country domains (NZ, FJ, UK, ...) at the top, second-level domains (AC, ORG, GEN, ...) below, then USP, ISS, ..., down to hosts such as OPO and MANU.]
usp.ac.fj - is this the machine usp inside ac.fj, or is it the domain usp.ac.fj?

Note: Machines may be named in several different ways. For example:

• kai.ee.cit.ac.nz
• ee.cit.ac.nz
•
All the above refer to the same machine. This naming scheme is extendable, and much more useful than the Netware scheme.

Question: How is the namespace administered?
Answer: With a linked hierarchy of servers especially for name information:

1. A machine asks for namespace information about another machine. (Such as: is it a real name? What is its network address? Can I email to it?)
2. The machine asks the local DNS server.
3. If the local server doesn't know, it asks its parent server, and so on.
4. The response updates all the intermediate machines, and each server caches the information for some time.
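Steps 2-4 above can be modelled as a chain of servers, each consulting its parent on a miss and caching the answer on the way back down. The class is a toy, not a real resolver; the name and address are taken from the nslookup example in the text:

```python
# Toy model of hierarchical name resolution with caching.
class Server:
    def __init__(self, parent=None, records=None):
        self.parent = parent
        self.cache = dict(records or {})

    def resolve(self, name):
        if name in self.cache:
            return self.cache[name]          # answered locally
        if self.parent is None:
            raise KeyError(name)             # unknown even at the top
        answer = self.parent.resolve(name)   # step 3: ask the parent
        self.cache[name] = answer            # step 4: cache the response
        return answer

root = Server(records={"manu.usp.ac.fj": "144.120.8.10"})
local = Server(parent=root)

print(local.resolve("manu.usp.ac.fj"))   # first query walks up the hierarchy
print("manu.usp.ac.fj" in local.cache)   # the local server now has it cached
```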
7.2.3 Macintosh/Win95/NT
All support a two-level naming scheme, involving a machine and a domain or zone. Mac machines can dynamically acquire names and domains, or they can be preset. The domain names map to the networks and are preset by the router. NT machines can use either IP naming systems or Microsoft's own naming scheme based on single-server technology.
7.3 Encryption
Security and cryptographic systems act to reduce failure of systems due to the following threats:

Interruption - attacking the availability of a service (denial of service).
Interception - attacks confidentiality.
Modification - attacks integrity.
Fabrication - attacks authenticity.

Note that you may not need to decode a signal to fabricate it - you might just record and replay it.
7.3.1 Shared keys
[Figure: shared-key encryption - plaintext P is encrypted with the shared key Ki to give Ki[P], and decrypted back to P with the same key Ki at the far end.]
Shared key systems are generally considered inadequate, due to the difficulty in distributing keys.
7.3.2 Ciphertext
These systems encode the input stream using a substitution rule:

    Code  Encoding
    A     Q
    B     V
    C     X
    D     W
    ...   ...
The S-box (Substitution-Box) encodes n-bit numbers to other n-bit numbers, and can be represented by a permutation. This is an S-box:

[Figure: an S-box built from a 2:4 decoder, the permutation (3,4,2,1), and a 4:2 encoder.]
Ciphertext of this kind is easily breakable, particularly if you know the likely frequency of each of the codes. In the English language, the most common letters are:

• ETAONISHRDLU (from most to least common).
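A substitution rule like the table above is a one-line translation in Python, and a frequency count shows why it is breakable: the most common ciphertext letter most likely stands for one of E, T, A, and so on. The key below is an arbitrary example, not from the text:

```python
# Substitution cipher plus the letter-frequency count used to break it.
from collections import Counter
import string

KEY = "QWERTYUIOPASDFGHJKLZXCVBNM"     # an arbitrary substitution rule
ENC = str.maketrans(string.ascii_uppercase, KEY)
DEC = str.maketrans(KEY, string.ascii_uppercase)

def encode(text):
    return text.translate(ENC)

def decode(text):
    return text.translate(DEC)

cipher = encode("ATTACK AT DAWN")
print(cipher)                  # substituted text; spaces pass through
print(decode(cipher))          # decoding with the key recovers the plaintext
# The most frequent ciphertext letter is the cryptanalyst's first clue:
print(Counter(c for c in cipher if c.isalpha()).most_common(1))
```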
7.3.3 Product Ciphers
We have seen two types of cipher: substitution (the S-box) and permutation. If you use both types at once, you have a product cipher, which is generally harder to decode, especially if the P-box has differing numbers of input and output lines (1-to-many, 1-to-1 or many-to-1).
7.3.4 DES - Data Encryption Standard
DES was first proposed by IBM using 128-bit keys, but its security was reduced by the NSA (the National Security Agency) to a 56-bit key (presumably so they could decode it in a reasonable length of time). At 1 ms per guess, it would take of the order of 10^28 years to exhaust a 128-bit key space. The DES standard gives a business level of safety, and is a product cipher. The (shared) 56-bit key is used to generate 16 subkeys, each of which controls a sequenced P-box or S-box stage. DES works on 64-bit messages. Note: if you intercept the key, you can decode the message. However, there are about 10^17 keys.
7.3.5 Public key systems
[Figure: public-key encryption - plaintext P is encoded with key K1 to give K1[P], and decoded with key K2, so that K2[K1[P]] = P and also K1[K2[P]] = P.]
A machine can publish K1 as a public key, as long as it is not possible to create K2 from K1.

Authentication: We can use this to provide authentication as well. If one machine wants to authentically transmit information, it encodes using both its private key and the recipient's public key. The second machine uses the sender's public key and its own private key to decode.
[Figure: authenticated transmission - the sender encodes P with its private key J2 and the recipient's public key K1, giving K1[J2[P]]; the recipient decodes with its private key K2 and the sender's public key J1.]
RSA (Rivest, Shamir, Adleman)
This public key system relies on the properties of extremely large prime numbers to generate keys.
To create the public key Kp:

1. Select two large primes P and Q.
2. Assign x = (P - 1)(Q - 1).
3. Choose E relatively prime to x. (This must satisfy the condition for Ks given later.)
4. Assign N = P * Q.
5. Kp is N concatenated with E.

To create the private (secret) key Ks:

1. Choose D such that mod(D * E, x) = 1.
2. Ks is N concatenated with D.

We encode plain text P by:

1. Pretending P is a number.
2. Calculating C = mod(P^E, N).

To decode C back to P:

1. Calculate P = mod(C^D, N).

We can calculate this with:

    c := 1;                 { attempting to calculate mod(P^Q, N) }
    x := 0;
    while x <> Q do
    begin
        x := x + 1;
        c := mod(c * P, N)
    end;
    { Now c contains mod(P^Q, N) }
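The recipe above can be run with deliberately tiny primes so the numbers are checkable by hand. Python's three-argument pow() computes the modular exponentiation that the pseudocode's loop performs, and pow(E, -1, x) finds D directly; the specific primes here are a common small worked example, not from the text:

```python
# RSA key generation and round trip, following the steps in the text.
P, Q = 61, 53
N = P * Q                   # step 4: N = P*Q = 3233
x = (P - 1) * (Q - 1)       # step 2: x = 3120
E = 17                      # step 3: E relatively prime to x
D = pow(E, -1, x)           # private key: mod(D*E, x) = 1

plain = 65                  # "pretend P is a number"
cipher = pow(plain, E, N)   # C = mod(P^E, N)
decoded = pow(cipher, D, N) # P = mod(C^D, N)
print(cipher, decoded)
```

The security of the real thing rests on N being the product of primes far too large to factor; with these toy values, anyone could recover P and Q from N = 3233.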
7.4 SNMP & ASN.1
In RPC, we met XDR, the external data representation for transferring data and agreeing on its meaning. This process is normally considered to lie in the presentation layer (layer 6). Not many standards directly relate to layer 6, but there is ISO 8824 - ASN [1]. ASN.1 defines an agreed syntax between layer 6 entities. We define the abstract syntax using a language (ASN.1). We define the transfer syntax using a set of Basic Encoding Rules (BER). Here is an example:

    employeename ::= string
    employeeage  ::= integer

    person ::= sequence {
        num     Integer
        name    string
        married Boolean
    }

or (tagged):

    person ::= sequence {
        num     [Application] integer
        name    [1] string
        married [2] Boolean
    }

With these tagged items, which may contain other tagged items, we can specify an item by giving the list of tags (as either numbers or identifiers). There is a worldwide naming system for ASN entities. It starts:

• iso.org.dod.internet....

When using ASN.1 for management of remote systems we use:

• iso.org.dod.internet.mgmt....

For example:
[1] ASN is Abstract Syntax Notation.
• iso.org.dod.internet.mgmt.mib.system.sysDescr.156.59.209.1
[Figure: an SNMP MIB manager querying MIB agents on managed hosts, each with its own high-speed disk.]
The most well known application for ASN.1 is in remote management, using SNMP. The manager is a client; the managed hosts are servers. Each SNMP entity has a MIB [2], which describes the items being 'served'. MIBs define:

• How data will be transferred, using the implied encoding rules.
• What data may be retrieved.

Manufacturers of ASN.1/SNMP-capable equipment supply the MIBs describing their equipment for free.
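The dotted names above map to numeric object identifiers arc by arc. The arc numbers below are the standard ones for the path shown in the text - iso(1), org(3), dod(6), internet(1), mgmt(2) - but the helper function itself is just an illustration:

```python
# Map a dotted ASN.1 name to its numeric OID, arc by arc.
ARCS = {"iso": 1, "org": 3, "dod": 6, "internet": 1, "mgmt": 2}

def name_to_oid(dotted):
    """Translate e.g. 'iso.org.dod.internet.mgmt' to '1.3.6.1.2'."""
    return ".".join(str(ARCS[part]) for part in dotted.split("."))

print(name_to_oid("iso.org.dod.internet.mgmt"))
```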
7.5 Diagnostic tools
nslookup:
    opo 51% nslookup manu
    Name:    manu.usp.ac.fj
    Address: 144.120.8.10
    opo 52% nslookup
    > set querytype=MX
    > kai.ee.cit.ac.nz
    kai.ee.cit.ac.nz  preference = 20, mail exchanger = araiahi.cit.ac.nz
    kai.ee.cit.ac.nz  preference = 10, mail exchanger = kai.ee.cit.ac.nz
    opo 53%
[2] A MIB is the Management Information Base.
smbtools:
    opo 58% smbclient -L opo
    Server time is Sun Oct 11 23:56:22 1998
    Timezone is UTC+12.0
    Domain=[LANGROUP] OS=[Unix] Server=[Samba 1.9.17p2]
    Server=[OPO] User=[hugh] Workgroup=[LANGROUP] Domain=[LANGROUP]

        Sharename   Type     Comment
        ---------   ----     -------
        archive     Disk     Archive
        AST-PS      Printer

    This machine has a browse list:

        Server      Comment
        ------      -------
        OPO         Samba 1.9.17p2
        PC0757      Ganesh Chand, Eco, SSED Rm

    This machine has a workgroup list:

        Workgroup   Master
        ---------   ------
        LANGROUP    OPO

    opo 60% nmblookup manu
    Added interface ip=144.120.8.248 bcast=144.120.11.255 nmask=255.255.252.0
    Sending queries to 144.120.11.255
    144.120.8.10 manu
    opo 61%
Chapter 8 Application areas
8.1 File serving
We partition file serving systems into:

• Peer-to-peer, and
• Server-centric.

Netware is normally considered to be server-centric: Netware servers are not used as workstations, and Netware clients don't serve. Macintosh AFS, WfW (SMB) and NFS systems can all normally be either servers or clients; this is called peer-to-peer networking.
8.1.1 Netware
Netware has the largest share of the PC file server market. The product comes in three flavours:

v2.xx - Now considered obsolete, but still in use at 30% of Netware sites. V2.xx was written in assembler, and has not had updates for many years.
v3.xx - The most widely used version (50%), and the most widely used NOS for PCs. The server must run on 80386 processors (or better) because of addressing requirements and the processor mode in which it runs.
v4.xx - The latest version.
CHAPTER 8. APPLICATION AREAS
Netware 3.xx:

1. 80386 (up to 4 GB RAM possible)
2. Disk up to 32 TB (1 TB = 10^12 ≈ 2^40 bytes)
3. Files up to 4 GB (1 GB = 10^9 ≈ 2^30 bytes)
4. Up to 100,000 files open at one time
5. 2,000,000 directory entries/volume
6. 16 NICs
7. Multi-protocol - allowing Netware servers to serve files to UNIX (NFS), PCs and Macs (Appletalk/AFS), and as well route between connected networks
8. Server backup locally, or over the network
9. Remote management - Netware servers can be totally managed remotely, even over a modem
10. NLMs (Netware Loadable Modules) - new functions can be loaded and unloaded at any time
Note that the file systems expected on the different platforms are quite different:

• DOS systems expect 8.3-length names, and lack permissions.
• UNIX systems expect long file names, with a range of permissions.
• NTFS systems expect long file names, with a different range of permissions.
• Macs expect 64-character file names, with resource and data forks.

File extensions found in a Netware system:

• NLM - System management and server functions. TCPIP.NLM is TCP/IP software; MONITOR.NLM is system monitoring software.
• NCF - Like DOS BAT files - used for Netware configuration.
• DSK - Disk driver software.
• LAN - NIC driver software.
• NAM - Name space modules.
When you run a Novell file server, DOS is thrown away, and NOS becomes the OS. While running NOS on a file server, you cannot run DOS utilities. The only way to run DOS is to exit NOS.

1. Boot DOS.
2. Run a DOS program called server.exe, which loads and runs NOS.
3. Once it is running, you can only use NOS commands.

Netware will only serve using its own file system. A DOS-formatted disk cannot be served by Netware. You can either:

• Partition your disk(s) - 10 MB for DOS (bootable, with server.exe), and the rest for NOS, or
• Use a separate floppy to boot the server. (The advantage of this is that you can take the floppy away, marginally increasing the server's physical security.)

Netware allows forward and backward slashes for UNIX compatibility. The naming system is DOS-like:

• DOS: A:\A\B\C.TXT
• NOS: STAFF/A:\A\B\C.TXT

Memory usage in Netware: Netware systems cache disk accesses (read and write) for speed. In general, the more memory the better. You may calculate memory as follows:

• 2 MB to get started.
• For serving to DOS: M = 0.023 * DiskSize(MB) / BlockSize(kB) MB.
• For serving to Mac/UNIX: M = 0.032 * DiskSize(MB) / BlockSize(kB) MB.
• Round up to the next power of 2.
• Adding NLMs requires more memory, up to 2 MB per NLM. Once you have added an NLM, the monitor program can indicate what memory it is using.
Example: a 200 MB disk in 4 kB blocks, and a 4 GB disk in 8 kB blocks:

• M200 = 0.023 * 200/4 = 1.15 MB, and
• M4000 = 0.023 * 4000/8 = 11.5 MB.
• So, with the 2 MB base and rounding up to a power of 2, the total memory needed is 16 MB.

The more extra memory you have, the more is available for caching, with its consequent speed-up.

Note: Caching means that copies are kept in memory of either what is on the disk or what should be on the disk. If the power fails, some material that is cached may be lost.

• If this is of concern, either use a UPS [1], or turn off caching.
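The memory rule can be captured in a couple of helpers. The function names are mine, and the 0.023 factor is applied to both volumes here because it is the factor that reproduces the 1.15 MB and 11.5 MB figures in the worked example:

```python
# Netware server memory sizing, following the rule in the text:
# per-volume memory = factor * disk size (MB) / block size (kB),
# plus a 2 MB base, rounded up to the next power of 2.
def volume_memory_mb(disk_mb, block_kb, factor=0.023):
    return factor * disk_mb / block_kb

def next_power_of_2(x):
    p = 1
    while p < x:
        p *= 2
    return p

m200 = round(volume_memory_mb(200, 4), 2)      # 200 MB disk, 4 kB blocks
m4000 = round(volume_memory_mb(4000, 8), 2)    # 4 GB disk, 8 kB blocks
total = next_power_of_2(2 + m200 + m4000)      # 2 MB base, round up
print(m200, m4000, total)
```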
Novell file system basic structure: All Novell boxes must have a volume called SYS with the following four directories:

• Sys:\login - Anything for public access (boot config, login.exe, etc.)
• Sys:\public - All the Netware utilities available to logged-in users
• Sys:\system - Supervisory and maintenance programs
• Sys:\mail - Mail and individual start-up files

In addition to these four directories you will need user data directories and application directories.

Novell access rights: Novell's file access security is oriented toward the users, not the files. Users are granted sets of permissions to read, write, delete and search files and directories. Novell uses the term Trustee for a user, and the rights assigned are called Trustee Assignments. Novell also has groups, which may be assigned rights; users may then be made to belong to groups (inheriting all the group rights).
[1] Uninterruptible Power Supply.
Novell commands which support rights:

• Rights - View system rights.
• Grant - Enable access to specified files and directories.
• Revoke - Remove trustee assignments.
• Filer - Menu-driven file/user/group management facility.
• Syscon - General administration.

Booting a Novell file server: First boot DOS, then execute SERVER.EXE. It expects the following configuration files:
STARTUP.NCF - On the DOS partition. A little like the DOS CONFIG.SYS. If it does not exist, SYS will not be mounted. Its main use is to get SYS mounted.

AUTOEXEC.NCF - Should be in Sys:\system. A little like AUTOEXEC.BAT. It loads NICs, NLMs and so on.

The Novell NOS prompt is : (colon). The most common command is LOAD:

• In STARTUP.NCF:
      : LOAD ISADISK
      : MOUNT SYS
• In AUTOEXEC.NCF:
      : LOAD NE2000 NAME=ETHERNET INT=5 DMA=3 ....
      : BIND IPX TO ETHERNET
      : MOUNT APPS
      : MOUNT USERS

First time: The first time you run Novell, you are asked for an internal number. You can choose any number except one you have used on another Netware server on your network.
8.1.2 SMB and NT domains
NT is a multi-tasking operating system similar to VMS. It comes in two flavours:

NT Workstation - Limited to 10 inbound connections. Single RAS (modem) service. Up to 2 processors.
NT Server - Unlimited inbound connections. 256 RAS (modem) services. Up to 4 processors.

NT systems are normally set up to work in either a workgroup or a domain environment.

Workgroup: This is a scheme in which a collection of related computers (e.g. sales, marketing, admin) share resources such as disks, printers and modems. Each computer in a workgroup manages its own accounts and access policies. Often users manage their own workgroup.

• Easy to set up.
• Not manageable at large sites.

Domain: The machines share a common database of accounts and security (access policies). A single NT server is set up to act as the final say on accounts and policies. This machine is called the PDC [2]. There can be only one PDC for a domain. NT's domains do not correspond to networks or IP subnets: a single domain can span many networks, and multiple domains can coexist on a single subnet.

• Managed by a system administrator.
• Suitable for larger networks.
[2] Primary Domain Controller.
Administration
In the administrative tools folder is a utility for adding users and groups. You can specify such items as:
• User name, login name, comment
• Groups the user belongs to
• Home directory
• Password policy
• Allowable login times
• User environment
By default NT has the following groups:
• GUEST - Limited access to resources.
• USERS - Default rights for an end user (will be able to run applications and save files).
• POWER USER - User access and other limited system management functions.
• ADMINISTRATOR - Complete control.
• REPLICATOR - Backup - specific rights for system administration.
NT File Systems
DOS/FAT - Maximum file size 4GB; partition size 4GB; attributes Read only/Archive/System/Hidden; file name length 255 characters.
• Note: FAT files can be undeleted easily, have a minimal disk overhead (1MB/0.5%), and are most efficient with a disk size under 200 MB. FAT file systems cannot be protected by NT.
HPFS - Maximum file size 4GB; partition size 2TB (disk geometry limits to 8 GB); attributes RO, A, S, H, Extended (user specified).
• Note: Long HPFS file names are not visible to DOS applications. HPFS has poor performance over 400 MB, cannot be protected by NT, and has an overhead of about 2MB. Files cannot be undeleted.
NTFS - Maximum file size 16 EB (an Exabyte [3] is 2^60 bytes); partition size 16 EB. NTFS has extended attributes that can grow, including file creation time, time modified and so on.
• Note: NTFS is a journalled file system, and files cannot be undeleted. It has a high overhead (5 MB/5%), with a minimum disk size of 50 MB (i.e. no floppies). NTFS is good at keeping fragmentation low.
8.1.3 NFS
The Network File System allows a client workstation to perform transparent file access over the network. Using it, a client workstation can operate on files that reside on a variety of servers, server architectures and across a variety of operating systems. Client file access calls are converted to NFS protocol requests, and are sent to the server system over the network. The server receives the request, performs the actual file system operation, and sends a response back to the client.
NFS operates in a stateless fashion using remote procedure calls (RPC) built on top of the external data representation (XDR) protocol. The RPC protocol provides for version and authentication parameters to be exchanged for security over the network.
A server can grant access to a specific filesystem to certain clients by adding an entry for that filesystem to the server's /etc/exports file and running exportfs. A client gains access to that filesystem with the mount system call.
Internals: NFS normally 'sits on top of' Unix file systems, which typically have extremely large file sizes and spaces. They are mostly journalled, with an overhead proportional to the disk size (1 to 10%). UFS file systems are good at defragmentation, and it is common for the file systems to have behavioural extensions.
• XFS is a speed-guaranteed file system found on IRIX machines.
NFS-based file server systems use secure writes. That is, if a workstation writes something to the server, it waits for an acknowledgement that the write is complete before continuing. This is nearly always slower than insecure writes, due to propagation delays. In a test of identical PC-based systems we determined the following speeds:
File Server   Read speed   Write speed   Mode
Netware       500 KB/s     500 KB/s     Insecure
UNIX          500 KB/s     300 KB/s     Secure
NT            300 KB/s     180 KB/s     Secure
[3] The prefixes are Kilo, Mega, Giga, Tera, Peta, Exa, Zetta and Yotta, for 2^10, 2^20, 2^30, 2^40, 2^50, 2^60, 2^70 and 2^80 bytes.
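The prefix arithmetic in the footnote is easy to check. A quick sketch (not part of the original notes):

```python
# Each storage prefix is a further factor of 2**10 = 1024.
prefixes = ["Kilo", "Mega", "Giga", "Tera", "Peta", "Exa", "Zetta", "Yotta"]
for i, name in enumerate(prefixes, start=1):
    print("%-5s = 2**%d bytes" % (name[0] + "B", 10 * i))

print(2 ** 60)   # one Exabyte, the unit used for the NTFS limits above
```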
8.2 Printing - LPR and LPD
The LPR/LPD system is a protocol specifically for printing. Clients use LPR to submit print jobs to a queue. The LPD process is the print server. LPR and LPD clients and servers are available for all systems and provide a simple way to make network printers available. Network printers always have LPR/LPD as one of the standard printing protocols.
8.3 Web services
The spread of Internet web browsers to the desktop cannot be ignored. Modern browsers are large complex applications with the capability of playing a significant role in distributed applications. Originally the web consisted of hypertext only (linked text and pictures) - however the demand for interactive content led first to:
• Forms - the CGI (Common Gateway Interface) standard provides semi-interactive web pages.
and then to:
• Java (and Javascript) - interpreted byte code, which runs at the client.
Java distributed objects (called applets, or little applications) are just one step towards a CORBA-like distributed environment. (We still however need a supporting ORB infrastructure.) Various Java-builder suppliers provide CORBA ORBs for your Java applications.
8.3.1 Java
Java is:
• Object oriented
• Interpreted - bytecode, but JIT [4] compilation gives C/C++ speed.
• Multi-platform
• A lot more than fancy web pages.
Java supports the development of robust, secure and distributed applications, and can be used in one of two main ways:
[4] JIT stands for Just-In-Time compilation: the byte code is compiled on the fly, as it is downloaded, into an executable form on the local machine.
• If your code has a main(), you can run it directly:
    opo> javac Mycode.java
    opo> java Mycode
In Appendix B is an example of a small client/server application. With no main(), you may embed an applet in a web page. Use HTML like:
<applet code="Mycode.class" width=150 height=150> </applet>
Here are some local URLs with help info: • - The Java SDK API and commands. • - A quick tour around the Cosmo Java development environment. • - Stuff on Javascript. • (Sample code from the nutshell book) • (Sun’s Java tutorial) •˜hugh/archive/lang/java (My small archive of stuff!)
8.4 X
In the 1980s, various groups around the world were developing a distributed window system. One of the more successful developments at Stanford University was the W window system, and when MIT began to develop their window system, they used what came after W (X)! The X window system allows graphics to be distributed efficiently over a network.
[Figure: an X display (server) with several clients distributed about the network.]
In the X view of the world, the display is the center, displaying graphical material from one or more clients, which can be distributed about the network. The X architecture involves:
• X servers - which display the graphical information.
• X display managers - controlling logging in and out.
• X window managers - controlling the look and feel of the display.
• The X protocol - an application layer protocol.
• X client programs - distributed about the network.
X provides a mechanism to do things; it seldom sets policy. There are few limits to its behaviour - you can have whatever window decorations you like, and a range of display types. Your client programs will work regardless. X is commonly used to display UNIX desktops, but there are products that allow your X displays to access NT, Win95 and Macintosh machines [5]. These systems are less useful than you might think, because none of these systems are multiuser. X is:
[5] The machines run software that catches all display events and converts them to X protocol messages.
• Mature, • Well defined, • Efficient, • Stable, • Available on all platforms,
• Useable with text, graphics, video and sound.
In the UNIX/IP world, high numbered TCP ports (6000 plus the display number) are used for transport.
8.5 Thin clients
The X window system needs a fairly large computer at the display [6]. Thin client technology allows you to run small code on the display, communicating using a small protocol with a server on a larger machine. The server can worry about efficiently managing the display. Another use for these systems is to allow dial-up takeover control of a remote machine.
8.5.1 WinFrame & WinCenter
WinFrame and WinCenter are extensions to NT that:
• Make NT multiuser,
• Display on remote machines using X, and
• Display on remote machines using proprietary thin client protocols.
Example: A large PC (say a dual PII) with a large amount of memory (say 384MB) can manage 30 or more 286 computers, each with only 2MB of memory. Each of these computers acts like a very fast Pentium - unless everyone simultaneously starts compiling!
8.5.2 VNC
VNC [7] is a thin client system developed at the Olivetti and Oracle Research Lab, and recently made available to the Internet community under the GNU General Public License. It provides:
• Servers for UNIX, Win95, NT, Macintoshes... and
• Clients for UNIX, Win95, NT, Macintoshes and a range of other peculiar machines!
VNC does not give you multiuser NT. However it appears stable.
[6] A Pentium machine, or a Sparc, should be sufficient. At least 8MB memory would be required - the more memory the better!
[7] Virtual Network Computing.
8.6 Electronic mail
SMTP [8] is a mail transfer scheme found on the Internet. Its architecture consists of similar mail servers, which are set up to store and forward electronic mail. You can test mail forwarding and routing by forcing paths into the e-mail addresses:
• hugh%opo.usp.ac.fj@waikato.ac.nz
SMTP uses a single simple TCP connection on port 25 to transfer mail. You can test mail server operation by just connecting to the server port:
• telnet opo 25, or..
• telnet opo smtp
SMTP mail systems add fields to the mail messages:
Return Path: ...
Received: ...
Organisation: ...
Date: ...
etc, etc.
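The "telnet opo 25" test above amounts to typing SMTP commands by hand and reading the numeric replies. Here is a runnable sketch of that dialogue; the host name, replies and port are made up, and a tiny stand-in server is included so the example is self-contained:

```python
import socket
import threading

# Minimal stand-in SMTP server: just enough to show the dialogue.
def fake_smtp(listener):
    conn, _ = listener.accept()
    conn.sendall(b"220 opo SMTP ready\r\n")
    rf = conn.makefile("rb")
    while True:
        line = rf.readline()
        if not line:
            break
        verb = line.split()[0].upper() if line.split() else b""
        if verb == b"HELO":
            conn.sendall(b"250 opo\r\n")
        elif verb == b"QUIT":
            conn.sendall(b"221 bye\r\n")
            break
        else:
            conn.sendall(b"502 command not implemented in this sketch\r\n")
    conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # port 0 = any free port; real SMTP uses 25
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=fake_smtp, args=(listener,), daemon=True).start()

# The client side - the same exchange you would type over "telnet opo 25".
s = socket.create_connection(("127.0.0.1", port))
f = s.makefile("rb")
greeting = f.readline().decode().strip()
s.sendall(b"HELO client.example\r\n")
helo_reply = f.readline().decode().strip()
s.sendall(b"QUIT\r\n")
quit_reply = f.readline().decode().strip()
s.close()
print(greeting)     # 220 opo SMTP ready
print(helo_reply)   # 250 opo
print(quit_reply)   # 221 bye
```

A real server accepts the full command set (MAIL FROM, RCPT TO, DATA, ...); the point here is only the line-oriented request/reply shape of the protocol.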
SMTP mailers are commonly eight bit clean, but there is no guarantee, so when you send binary files, they should be encoded. Unfortunately there is not a single encoding standard, and most mail readers have to have various decoders to make sense of incoming data.
[8] Simple Mail Transfer Protocol.
Chapter 9 Other topics
9.1 ATM
ATM [1] technology is a cell based transport system. Transmitted data is broken into fixed 53 byte cells, which can be switched and routed efficiently. The ATM header has only 5 bytes, leaving 48 bytes for data. ATM networks revolve around an ATM switch, which can route full speed traffic between nodes.
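The fixed cell size makes the overhead arithmetic straightforward; a quick check of the figures above (the 1500-byte packet size is just an illustrative Ethernet-sized example):

```python
# Cell arithmetic: 53-byte cells with a 5-byte header.
CELL = 53
HEADER = 5
PAYLOAD = CELL - HEADER

print(PAYLOAD)                          # 48 bytes of data per cell
print(round(HEADER / CELL * 100, 1))    # header overhead: 9.4 (percent)

# Cells needed for a 1500-byte packet (ignoring any adaptation-layer framing):
packet = 1500
cells = -(-packet // PAYLOAD)           # ceiling division
print(cells)                            # 32
```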
9.2 CORBA
The main features that we expect from an object are:
• Encapsulation - data and functions to operate on them.
• Inheritance - the main construction method for new software modules (rather than aggregation).
• Polymorphism - an object's behaviour can change at run time.
A distributed object system involves:
• objects (components) which are distributed, and
• brokers - which name, identify and support the objects.
[1] In this context, ATM stands for Asynchronous Transfer Mode, not automatic teller machines.
It is common now to find distributed objects performing as:
1. TP monitors
2. CSCW support tools
3. servers of various sorts
4. databases
5. applications
In addition, distributed objects have made it to the desktop (most notably in the form of Java applets - processor independent, reasonably efficient and safe). If we expect our objects to interoperate successfully in a distributed system, we have to agree on a set of standards. The two (incompatible) standards are:
1. OMG's CORBA
2. Microsoft's DCOM/OLE
CORBA object interfaces are specified using yet another IDL (Interface Definition Language). The IDL is language and implementation independent, and specifies object interfaces unambiguously. CORBA has been adopted by every operating system supplier except for Microsoft - who are promoting a different standard. However, CORBA object extensions for Microsoft products are available from third parties.
CORBA has two principal components in addition to your application objects:
1. The ORB
2. Standard objects and facilities
ORB: An ORB can be viewed as the 'bus' interconnecting objects. It:
• defines the handling of all communication between objects.
• manages the identification/naming of each object.
A mandatory requirement for CORBA implementations is the IIOP - a TCP/IP ORB protocol. This ensures at least a minimum level of interoperability. You can of course use other protocols - such as DCE's RPC - which may give security and encryption features.
Standard Objects: CORBA has a set of pre-defined standard objects for use in building new applications:
• Naming
• Transactions
• Time
• SQL
• Security
• Licensing
• Documents (OpenDoc <=> OLE)
9.3 DCOM/OLE
• DCOM - Distributed Component Object Model.
• OLE - Object Linking and Embedding.
OLE is an object-oriented environment for components, which has gone through several transformations, but is supplied as a local service with Win95. Microsoft's consistently stated goal is to provide all OS and applications as OLE components.
Note: ActiveX is a minimal OLE object found on the Internet.
9.4 NOS
The generic term 'Network Operating System' (NOS) describes a uniform view of the services on a network. A NOS may involve all sorts of middleware running on different platforms, but ties them together with agreed protocols.
Example: We may use a global time service, accessible from all systems on the local network. It is common to use NTP or a similar product to ensure clock synchronicity within an enterprise.
The NOS functions are often provided on top of, or bundled with, conventional OSs such as NT or UNIX.
Note: The NOS view of the enterprise computing environment may become obsolete. Web services and distributed object technologies are doing similar things, and have already spread to the desktop.
9.4.1 DCE:
The Distributed Computing Environment from OSF has all the elements of an enterprise NOS. It uses the following key technologies:
• RPC
• Naming
• Time
• Security
• DFS
• Threads
And where do we find DCE?
• In Transarc's TP monitor
• In H.P.'s CORBA - ORB Plus
• In uSoft's Network OLE
• In turnkey UNIX systems
DCE RPC
The DCE RPC is borrowed from Hewlett Packard, and has the following properties:
• DCE RPCs have an IDL and compiler.
• The DCE RPC has more developed authentication than the (Sun) RPC.
• The DCE RPC can dynamically allocate servers.
• It is protocol independent.
In addition, the DCE RPC is integrated with the DCE security and naming services.
Naming
DCE has adopted X.500 for naming services. Any system resource can be identified by name using a DNS-like distributed hierarchical name service. For example:
• file servers
• print queues
And also the not-so-conventional:
• programs
• processes
Like the DNS, names can be grouped, defined and administered locally.
Time
All systems eventually synchronise to external time providers, through local time servers. The local time servers keep track of each other's time as well, and correct errors. There must be at least three local time servers.
Security
DCE uses Kerberos [2]. Kerberos is generally considered the most secure authentication system - it was designed by deeply suspicious people. DCE uses Kerberos for:
• authentication
• maintaining a user database
• authorization.
The user database is kept on a secure (normally dedicated) server. All communication is done using encrypted, authenticated RPC - systems wishing to communicate validate keys using the security server, which they trust. All keys invalidate themselves after a short time.
Note: DCE's security system is a model for other NOSs.
DFS file system
The DCE file system (DFS) is based on AFS - the Andrew File System from CMU (Carnegie Mellon University in the U.S.). DFS files are location independent, and are often cached when a workstation accesses the file. Modifications to a cached copy are invisibly propagated to other copies of the file. In addition, you can mirror files across a network (again invisibly) to either:
• Ensure fast response in a geographically dispersed environment, or
• Ensure high availability (if one of the servers fails).
[2] Kerberos is the three-headed dog that guards the gates of hell!
List of Figures
1.1 Recipe for disaster - the Clayton Tunnel protocol failure. 3
1.2 Computer development. 5
1.3 Network topologies. 6
1.4 Digital and analog signals. 7
1.5 Sum of sine waveforms. 8
1.6 Successive approximations to a square wave. 10
1.7 Model of computing. 14
1.8 Centronics handshake signals. 16
1.9 RS232-C serial interface. 17
1.10 I2C synchronization. 19
2.1 Layering in the ISO OSI reference model. 23
2.2 War is declared! 25
2.3 ISO and CCITT protocols. 26
3.1 Total internal reflection in a fibre. 29
3.2 Counter rotating rings in FDDI. 30
3.3 Electromagnetic spectrum. 32
3.4 'Real' signals on a cable. 33
3.5 Reflections. 35
3.6 Noise generation. 36
4.1 Sliding window protocols. 51
4.2 Utilization of links using ALOHA protocols. 53
5.1 IP packet structure. 62
List of Tables
1.1 Morse Code. 1
1.2 Ham calls. 2
3.1 Sample physical layer standards. 28
3.2 Comparison of ethernet cabling strategies. 31
5.1 Sample network layer standards. 58
6.1 Sample transport layer standards. 71
Appendix A ASCII table
Dec Hex Char 0 00 ˆ@ nul 1 01 ˆA soh 2 02 ˆB stx 3 03 ˆC etx 4 04 ˆD eot 5 05 ˆE enq 6 06 ˆF ack 7 07 ˆG bel 8 08 ˆH bs 9 09 ˆI ht 10 0A ˆJ lf 11 0B ˆK vt ff 12 0C ˆL 13 0D ˆM cr so 14 0E ˆN 15 0F ˆO si 16 10 ˆP dle 17 11 ˆQ dc1 18 12 ˆR dc2 19 13 ˆS dc3 20 14 ˆT dc4 21 15 ˆU nak 22 16 ˆV syn 23 17 ˆW etb 24 18 ˆX can 25 19 ˆY ew 26 1A ˆZ sub 27 1B ˆ[ esc fs 28 1C ˆ\ 29 1D ˆ] gs 30 1E ˆˆ rs 31 1F ˆ_ us Dec Hex Char 32 20 spc 33 21 ! 34 22 “ 35 23 # 36 24 $ 37 25 % 38 26 & 39 27 ’ 40 28 ( 41 29 ) 42 2A * 43 2B + 44 2C , 45 2D 46 2E . 47 2F / 48 30 0 49 31 1 50 32 2 51 33 3 52 34 4 53 35 5 54 36 6 55 37 7 56 38 8 57 39 9 58 3A : 59 3B ; 60 3C < 61 3D = 62 3E > 63 3F ? Dec Hex Char 64 40 @ 65 41 A 66 42 B 67 43 C 68 44 D 69 45 E 70 46 F 71 47 G 72 48 H 73 49 I 74 4A J 75 4B K 76 4C L 77 4D M 78 4E N 79 4F O 80 50 P 81 51 Q 82 52 R 83 53 S 84 54 T 85 55 U 86 56 V 87 57 W 88 58 X 89 59 Y 90 5A Z 91 5B [ 92 5C \ 93 5D ] 94 5E ˆ 95 5F _ Dec Hex Char 96 60 ‘ 97 61 a 98 62 b 99 63 c 100 64 d 101 65 e 102 66 f 103 67 g 104 68 h 105 69 i 106 6A j 107 6B k 108 6C l 109 6D m 110 6E n 111 6F o 112 70 p 113 71 q 114 72 r 115 73 s 116 74 t 117 75 u 118 76 v 119 77 w 120 78 x 121 79 y 122 7A z 123 7B { 124 7C | 125 7D } 126 7E ˜ 127 7F del
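The Dec/Hex/Char columns of the table above can be regenerated with a few lines of code (an aside, not part of the original notes; control characters are shown here as '.'):

```python
# Regenerate the Dec/Hex/Char columns of the ASCII table.
rows = []
for dec in range(128):
    char = chr(dec) if 32 <= dec < 127 else "."   # '.' for control chars and del
    rows.append("%3d  %02X  %s" % (dec, dec, char))

print("\n".join(rows[65:70]))   # rows for A, B, C, D, E
```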
Appendix B Java code
[The Java client/server example listing did not survive extraction; only fragments of its exception handling remain, e.g. "finally { try { client.close(); } catch (IOException e2) {} }".]
Appendix C Sockets code
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#define SERV_TCP_PORT 9000
#define MAXLINE 512

process(sockfd)
int sockfd;
{
    int n;
    char c;

    for ( ; ; ) {
        n = read(sockfd, &c, 1);
        if (n == 0)
            return;
        else if (n < 0)
            printf("process: readline error\n");
        switch (c) {
        case 'Q': return; break;
        case 'N': write(sockfd, "N result\n", 9); break;
        case 'P': write(sockfd, "P result\n", 9); break;
        }
    }
}

main(argc, argv)
int argc;
char *argv[];
{
    int sockfd, newsockfd, clilen;
    struct sockaddr_in cli_addr, serv_addr;

    if ((sockfd = socket(AF_INET, SOCK_STREAM, 0)) < 0)
        printf("server: can't open stream socket\n");
    bzero((char *) &serv_addr, sizeof(serv_addr));
    serv_addr.sin_family = AF_INET;
    serv_addr.sin_addr.s_addr = htonl(INADDR_ANY);
    serv_addr.sin_port = htons(SERV_TCP_PORT);
    if (bind(sockfd, (struct sockaddr *) &serv_addr, sizeof(serv_addr)) < 0)
        printf("server: can't bind local address\n");
    listen(sockfd, 5);
    for ( ; ; ) {
        clilen = sizeof(cli_addr);
        newsockfd = accept(sockfd, (struct sockaddr *) &cli_addr, &clilen);
        printf("New connection....\n");
        if (newsockfd < 0)
            printf("server: accept error\n");
        write(newsockfd, "Hugh's Server\n", 14);
        process(newsockfd);
        close(newsockfd);
    }
}
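A client for this server only needs to read the banner, send single-character commands ('N', 'P', 'Q') and read the one-line replies. Here is a hypothetical client sketch in Python; to keep it runnable without the C program, a few lines of Python stand in for the server and speak the same protocol:

```python
import socket
import threading

def serve_once(listener):
    # Stand-in for the C server above: same banner, same Q/N/P protocol.
    conn, _ = listener.accept()
    conn.sendall(b"Hugh's Server\n")
    while True:
        c = conn.recv(1)
        if not c or c == b"Q":
            break
        elif c == b"N":
            conn.sendall(b"N result\n")
        elif c == b"P":
            conn.sendall(b"P result\n")
    conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # any free port; the C listing uses port 9000
listener.listen(5)
port = listener.getsockname()[1]
threading.Thread(target=serve_once, args=(listener,), daemon=True).start()

# The client: read the banner, send one-character commands, read replies.
s = socket.create_connection(("127.0.0.1", port))
f = s.makefile("rb")
banner = f.readline()
s.sendall(b"N")
reply = f.readline()
s.sendall(b"Q")                   # 'Q' asks the server to end the session
s.close()
print(banner.decode().strip())    # Hugh's Server
print(reply.decode().strip())     # N result
```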
Here are a few things that might help you get adjusted to using Nose:
1. As mentioned in the Testing Efficiently with Nose tutorial, the convention for running tests has changed slightly. The format is now:
python manage.py test app.tests:YourTestCaseClass
python manage.py test app.tests:YourTestCaseClass.your_test_method
One way to force the test results to use the same format is to create a class that inherits from unittest.TestCase and overrides the __str__, id, and shortDescription methods. The __str__() method is used by the Django test runner to display which tests are running and which ones have failed, enabling you to copy/paste the test name to re-run it. The id() method is used by the XUnit plug-in to generate the test name, enabling you to swap in the Nose naming convention. Finally, shortDescription() will prevent docstrings from replacing the test name when running the tests.
class BaseTestCase(unittest.TestCase):

    def __str__(self):
        # Use Nose testing format (colon to differentiate between module/class name)
        if 'django_nose' in settings.INSTALLED_APPS or 'nose' in settings.TEST_RUNNER.lower():
            return "%s:%s.%s" % (self.__module__, self.__class__.__name__, self._testMethodName)
        else:
            return "%s.%s.%s" % (self.__module__, self.__class__.__name__, self._testMethodName)

    def id(self):  # for XUnit outputs
        return self.__str__()

    def shortDescription(self):  # do not return the docstring
        return None
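To see the naming convention in isolation, here is a minimal standalone sketch of the same __str__ override - no Django or Nose required, and the class name is made up:

```python
import unittest

class DemoTest(unittest.TestCase):
    # Nose-style name: "module:Class.method"
    def __str__(self):
        return "%s:%s.%s" % (self.__module__,
                             self.__class__.__name__,
                             self._testMethodName)

    def test_something(self):
        self.assertTrue(True)

name = str(DemoTest("test_something"))
print(name)   # e.g. __main__:DemoTest.test_something
```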
2. For SauceLabs jobs, you can also expose the URL of the job run. WebDriverExceptions inherit from the Exception class and add a 'msg' property that we can use to insert the SauceLabs URL. You want to avoid adding the URL in the id() and __str__() methods, since those routines are used to dump out the names of classes that Hudson/Jenkins may use to compare between builds.
    def get_sauce_job_url(self):
        # Expose SauceLabs job number for easy reference if an error occurs.
        if getattr(self, 'webdriver', None) and hasattr(self.webdriver, 'session_id') and self.webdriver.session_id:
            session_id = "(%s)" % self.webdriver.session_id
        else:
            session_id = ''
        return session_id

    def __str__(self):
        session_id = self.get_sauce_job_url()
        return " ".join([super(SeleniumTestSuite, self).__str__(), session_id])
    def _exc_info(self):
        exc_info = super(SeleniumTestSuite, self)._exc_info()
        # WebDriver exceptions have a 'msg' attribute, which gets dumped out.
        # We can take advantage of this fact and store the SauceLabs job URL in there too!
        if exc_info[1] and hasattr(exc_info[1], 'msg'):
            session_id = self.get_sauce_job_url()
            exc_info[1].msg = session_id + "\n" + exc_info[1].msg
        return exc_info
3. Nose has a bunch of nifty commands that you should try, including --with-id and --failed, which let you run your entire test suite and then re-run only the ones that failed. You can also use the attrib decorator, which lets you decorate certain test suites/test methods with various attributes, such as network-based tests or slow-running ones. The attrib plugin seems to support a limited boolean logic, so check the documentation carefully if you intend to use the -a flag (you can use --collect-only in conjunction to verify the right tests are being selected).
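The attrib plugin works by tagging test functions with plain attributes, which the -a flag then filters on. A rough sketch of that mechanism (the real decorator is nose.plugins.attrib.attr; this standalone version only illustrates what it does):

```python
# Sketch of the attrib mechanism: set plain attributes on the test function.
def attr(*args, **kwargs):
    def wrap(func):
        for name in args:
            setattr(func, name, True)        # @attr('slow') -> func.slow = True
        for name, value in kwargs.items():
            setattr(func, name, value)       # @attr(speed='slow')
        return func
    return wrap

@attr("slow", network=True)
def test_big_download():
    pass

print(test_big_download.slow)      # True
print(test_big_download.network)   # True
```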
Internet of Things With Raspberry Pi - 1
Introduction: Internet of Things With Raspberry Pi - 1
When I was new to IOT (Internet of Things), I saw that there were hardly any tutorials which were simple enough for a beginner to understand and try out. There was either too much technical jargon, or the hardware was too complex.
So now that I’ve played around with IOT a bit, I decided to make a 10 step tutorial on controlling an LED over a Local Area Network (LAN).
In this tutorial, we’ll be using an LED, a Raspberry Pi, a Wireless ADSL Router with internet connection and a device with a web browser. (Smartphone, Laptop, Computer, PSP, etc.)
On the software side, we’ll be using Apache2, MySQL and PHP.
If you’re new to the Raspberry Pi, you might want to have a look at Getting started with Raspberry Pi before trying out this project.
(Note: This project only uses an internet connection for software installation. After the installation and coding is done, the internet connection is not required. For more info on making the project available on the internet, check port forwarding)
Step 1: Gather the Components
- Raspberry Pi (I've used a Raspberry Pi 2 model B, but any model will suffice)
- ADSL Wireless Router
- Power adaptor for the router
- Computer monitor / TV screen which has an HDMI/VGA port (If you're using a VGA port then you will have to use a VGA-HDMI converter)
- Ethernet/LAN cable
- 2 Female-Female jumper wires
- Small LED
- USB Keyboard and Mouse
- A computer/laptop connected to the same modem as the Raspberry Pi (This will just be for the final test so even a smartphone is ok)
Step 2: Hardware Setup
Step 3: Creating a Website for the Raspberry Pi
Step 1:
Start your Raspberry Pi and open the Graphical User Interface (GUI) with the command:
startx
Step 2:
Once the interface is active, open the terminal and type the following commands:
sudo apt-get install apache2 -y
An IOT webpage will require a web server. This command will install a web server called Apache2.
Step 3:
To test the web server, you will need to know your Raspberry Pi’s IP address. Enter the command:
hostname -I
A number will be displayed. Start your Pi’s web browser and enter this number in the search engine. You should see something like this:
Congratulations! Your Apache server is up and running!
Step 4:
This is a default webpage which is stored in the ‘/var/www’ directory. To make changes to it and customise it, you need to enter this command:
sudo nano /var/www/index.html
Whenever you’re modifying a file, don’t forget to add ‘sudo’ at the beginning.
This indicates that you are modifying it as a superuser. Press Ctrl + X and hit enter to exit the file.
Step 5:
You will also need a preprocessor called PHP. Install it with the following command:
sudo apt-get install php5 libapache2-mod-php5 -y
Step 6:
Now enter the following commands:
cd /var/www
sudo rm index.html
Step 4: Make the Website an IOT Remote Control!
Step 1:
Enter the following commands:
cd /var/www
sudo rm index.php
sudo nano index.php
The last command will open a new index.php file. Enter the text from the above-mentioned PDF document into this file. (Since part of it is HTML code, there was a problem with pasting it directly into this post.)
Exit the file by pressing CTRL + X. You will be asked if you want to save changes. Press Y and hit enter.
Step 2:
You will now need the Python files for controlling the LED.
There are three Python files. One to turn on the LED, one to turn it off, and one to make it blink.
Please note that the following Python codes are for Raspberry Pi models with 40 pins.
i.e. Pi model A+, Pi model B+ and Pi 2 model B
If you’re using a 26 pin Raspberry Pi (Model A or B), then you will have to change the GPIO pin number in all three codes to 13 instead of 40 and accordingly connect the LED.
Use the jumper wires to connect the negative lead of the LED to Pin 6 on the Raspberry Pi’s GPIOs and connect the positive lead to Pin 40. (Pin 13 in the case of a 26 pin GPIO Raspberry Pi.)
First, let’s create a file to turn on the LED. Enter these commands:
cd /var/www
sudo nano ledON.py
Type the following text in the blank file:
import time, RPi.GPIO as GPIO
GPIO.setmode(GPIO.BOARD)
GPIO.setup(40,GPIO.OUT)
GPIO.setwarnings(False)
GPIO.output(40, True)
time.sleep(1)
Exit the file by pressing CTRL + X. You will be asked if you want to save changes. Press Y and hit enter.
Now create a file to turn it off:
sudo nano ledOFF.py
Type the following text in the blank file:
import time, RPi.GPIO as GPIO
GPIO.setmode(GPIO.BOARD)
GPIO.setup(40,GPIO.OUT)
GPIO.setwarnings(False)
GPIO.output(40, False)
time.sleep(1)
Exit the file by pressing CTRL + X. You will be asked if you want to save changes. Press Y and hit enter.
Lastly, create a file to make it blink:
sudo nano ledBLINK.py
Type the following text in the blank file:
import time, RPi.GPIO as GPIO
GPIO.setmode(GPIO.BOARD)
GPIO.setup(40,GPIO.OUT)
GPIO.setwarnings(False)
while True:
GPIO.output(40, False)
time.sleep(1)
GPIO.output(40, True)
time.sleep(1)
Exit the file by pressing CTRL + X. You will be asked if you want to save changes. Press Y and hit enter.
Step 3:
Now, you will need to change certain file permissions. Enter the command:
sudo nano /etc/sudoers
This will open a file which contains permissions for directories, files, etc.
Go to the last line of the file which says:
pi ALL=(ALL) NOPASSWD: ALL
Below it, type this:
www-data ALL=(ALL) NOPASSWD: ALL
Exit the file by pressing CTRL + X. You will be asked if you want to save changes. Press Y and hit enter.
Reboot the Raspberry Pi with the command:
sudo reboot
Step 5: Test the Setup!
Congratulations! Your first IOT project is now ready! You can try it out from any device which is connected to the same network as the Raspberry Pi.
If you’re having problems with this project, go over the whole tutorial once again. If it still doesn’t work, then feel free to contact me at: sataklela@gmail.com
Once you know for sure that everything is functioning properly, try modifying the codes to play songs, run motors, etc.
You can even add a small relay circuit and control the lights in your house!
IOT is an amazing thing and once you understand it, there are almost no limits to what you can do. In the video below, I tried to control the LED using my PSP (Play Station Portable):
For more, check out Internet of Things with Raspberry Pi - 2
Its great information....... I want to make home automation using raspberry-pi/ arduino,i also want to develop my own andriod app/website (No third party) which would dynamically control light (i.e.on or off) etc.....waiting for ur kind reply...Thanks
@online07 perhaps if you could be more specific about your exact problem? To begin with, try the setup instructions mentioned above in the instructable. If you're running into any specific issues, I'll be most happy to help.
All the best with your project.
Thanks! It worked for me :)
It really works for me. The only problem is with the PHP code: you didn't mention the mystuff folder in /var/www, so to run the program we have to remove it from the PHP code.
But thanks for your help!
I worked on your post for 4 hours!
You have a few little mistakes:
First: the .py files go in a mystuff folder, but you didn't say so.
Second: index.php isn't opening. There is also an html folder; when I put the file there it doesn't open either.
Third: when I try to change .php to .html, the page works, but it doesn't send the command to the Raspberry Pi to light the LED.
-_-
Keep the file as .php, check over your code, check you set up the permissions properly as well (bottom of page 4)
Wahey I fixed it! index.php does indeed work - I had a minor syntax error in the .php file. I think that was actually the only problem, though I was chipping away at it till late and early this morning. Classic right? Works over local WIFI as well with no Ethernet link
After some code modifications it worked beautifully. I could not have gotten started with IoT without this 'ible. Thank you :-)
Any tips on what you needed to do?
Is there any way to debug this? My .py files are tested and working; I can display the buttons on a separate device if I change index.php to index.html, but I can't get the .py files to execute. Is there any way I can see if the commands are received?
Thanks for the nice project.. very interesting to work on as a beginner!!!
Hey excellent tutorial. :)
I wanted to do something like home automation, for my room specifically, like you did here but also want side-by-side to stream video from my Pi cam (or webcam) of my room to seen in actual when I turn on a light does it actually lit up. Any idea how to integrate a web stream of my pi cam (or webcam) into your program?
Thanks
I've never tried to integrate a control panel and side-by-side stream on a single webpage, but I'm sure it's possible with a bit of tweaking.
You could try using 'motion'. It's quite good and you can tweak the settings for streaming different sizes and frame-rates.
I learnt about it from here:
Best of luck with your project.
registry
Experimental namespaced IoC container
Registry
Registry is designed to be a useful helper for managing definitions of JS modules, classes, functions, etc. It provides the ability to define a "thing" using a string identifier and then retrieve it at a later date using wildcard matching.
Why Use Registry?
Registry attempts to provide some of the benefits that you find using an IoC in an OO programming environment, but does not force OO patterns into JS like many other Class Helper libraries.
When using Registry you should be able to define things in much the same way you are used to. You simply give them a name that can be wildcard matched, in a similar way to eve, and retrieve them later. Using this technique you can either target one particular thing by using its specific name, or a broader set of things by using a more general namespace pattern.
Primarily, Registry has been designed for use in the browser but will also work quite happily in a CommonJS environment (such as Node).
Example Usage
At the moment, the tests contain some of the best examples of how Registry can be used, but full documentation will be completed one day.
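The idea is easy to see in a toy sketch. The real package is JavaScript and its actual API may differ (see its tests), so the sketch below uses Python, with glob-style patterns standing in for eve-style namespace matching:

```python
from fnmatch import fnmatch

class Registry:
    """Toy namespaced registry: define things by name, look them up by wildcard."""

    def __init__(self):
        self._things = {}

    def define(self, name, thing):
        self._things[name] = thing

    def get(self, pattern):
        # return every registered thing whose name matches the pattern
        return [thing for name, thing in sorted(self._things.items())
                if fnmatch(name, pattern)]

registry = Registry()
registry.define("models.user", "user-model")
registry.define("models.group", "group-model")
registry.define("views.user", "user-view")

print(registry.get("models.*"))  # prints ['group-model', 'user-model']
```

In a real registry you would store constructors or modules rather than strings, but the define/lookup-by-wildcard shape is the same.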
sd_bus_label_escape, sd_bus_label_unescape — Escape D-Bus object path special characters
#include <systemd/sd-bus.h>
sd_bus_label_escape() takes a NUL-terminated string as an argument. It will replace all characters which are invalid in a D-Bus object path by "_" and a hexadecimal number. As a special case, the empty string will be replaced by a lone "_". sd_bus_label_unescape() can be used to reverse this process.
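The escaping scheme itself is simple enough to model. The sketch below is in Python and only illustrates the algorithm; it is not the libsystemd C implementation:

```python
import string

VALID = set(string.ascii_letters + string.digits)

def bus_label_escape(s):
    """Replace every byte outside [A-Za-z0-9] with '_' plus its hex value."""
    if s == "":
        return "_"          # special case: empty string becomes a lone underscore
    return "".join(chr(b) if chr(b) in VALID else "_%02x" % b
                   for b in s.encode("utf-8"))

def bus_label_unescape(s):
    """Reverse the escaping above."""
    if s == "_":
        return ""
    out, i = bytearray(), 0
    while i < len(s):
        if s[i] == "_":
            out.append(int(s[i + 1:i + 3], 16))  # two hex digits follow '_'
            i += 3
        else:
            out.append(ord(s[i]))
            i += 1
    return out.decode("utf-8")

print(bus_label_escape("foo-bar"))  # prints foo_2dbar
```

Note that "_" itself is escaped (to "_5f"), which is what makes the scheme reversible.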
On success, a new NUL-terminated string will be returned. It must be free(3)d by the caller. If a memory allocation failure occurs, NULL will be returned.
sd_bus_label_escape() and sd_bus_label_unescape() are available as a shared library, which can be compiled and linked to with the libsystemd pkg-config(1) file.
systemd(1), sd-bus(3), free(3)
Troubleshooting Web Service Integration (.Net: Cyclic Reference)
By Jani Rautiainen-Oracle on Sep 14, 2014
Cyclic Reference
When using Web Reference integration, the client generation may fail for some Fusion Applications web services with errors such as: "Unable to import WebService/Schema. Unable to import binding 'LocationServiceSoapHttp' from namespace ''. Unable to import operation 'updateLocationTranslation'. The datatype '' is missing." These are commonly due to cyclic references in the service definition. The specification is vague on whether this is allowed or not, so the failure comes down to different interpretations of the standard by the different technologies. The issue does not exist with Service Reference integration, so the recommendation is to always use Service References for integration.
[Read More]
On Tue, 26

> Just to be clear, then: this idea is fundamentally different from the
> mkdir/cd analogy the thread starts with above.

NACK, it's very similar to the cd "$HOME" (or ulimit calls) done by the
login mechanism, except for the fact that no shell does implement a
setnamespace command and therefore can't leave that namespace. If the
shell were actually modified to implement setnamespace, that command would
be exactly like the cd command.

The wrapper I mentioned will usually not be needed for normal operation,
but if users want additional private namespaces, they'll need this
separate wrapper (or to modify the application or the shell) in order to
switch into them.

> And it misses one rather important requirement compared to mkdir/cd:
> You can't add a new mount to an existing shell.

The mount would be a part of the current namespace, which is shared across
all current user processes unless they are started without login (e.g.
procmail[0]) or running in a different namespace (the user called
setnamespace).

[0] If you want procmail in a user namespace, use a wrapper like

---/usr/bin/procmail---
#!/bin/sh
exec /usr/bin/setnamespace /users/"$UID" -- /usr/bin/procmail.bin "$@"
---

BTW: I think the namespaces will need the normal file permissions.

--
Fun things to slip into your budget:
Paradigm pro-activator (a whole pack) (you mean beer?)

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at
Please read the FAQ at
In trying to keep up to speed with .NET 2.0, I decided to do a .NET 2.0 version of my Code Project article DIME Buffered Upload, which uses the DIME standard to transfer binary data over web services. The DIME approach is reasonably efficient, but the code is quite complex and I was keen to explore what .NET 2.0 had to offer. In this article, I use version 3.0 of the WSE (Web Service Enhancements), which is available for .NET 2.0 as an add-in, to provide a simpler and faster method of sending binary data in small chunks over HTTP web services.
Just a re-cap: ASP.NET's MaxRequestLength setting limits how much data a single request may carry, and sending a whole file in one web service call also gives no file transfer feedback to the user interface, because you have no indication of how the transfer is going until it is either completed or failed.
The solution outlined here is to send chunks of the file one-by-one and append them to the file on the server. There is an MD5 file hash done on the client and the server to verify that the file received is identical to the file sent. Both upload and download file transfers are supported.
The advantage of MTOM (like DIME before it) is that the binary content of a message is sent outside the SoapEnvelope of the XML message, so it is not padded by XML serialization. With DIME, however, you had to code your application explicitly by using DimeAttachments. This is no longer necessary; you just send your byte[] as a parameter or a return value to a WebMethod and WSE makes sure that it is sent as binary, not padded by XML serialization as it would be in the absence of DIME or MTOM.
The client application is fairly straightforward, intended only as a demonstration of how to use the class library and not as a production application. There are options at the top of the form to indicate if you want the file hash check done at the end of the transfer. You can also manually set the chunk size or you can tick the box to AutoSet to let it regulate itself. Any connection error messages are displayed in red below the options. The files are transmitted concurrently, so you can reduce the number of threads available (and thus concurrent transfers) via the NumericUpDown control. This works because I use ThreadPool.QueueUserWorkItem() to manage the multithreading. Note that any changes to the number of threads will not take effect after you have begun the transfer because the threads have already been queued.
To upload one or more files, simply click the Upload button and you're away. A progress bar and status message will appear for each file transfer, disappearing as soon as each file is transferred successfully. If there are errors, such as a file hash difference, the file is left on-screen with the error message. The panel for downloading files is similarly easy to use. It has a list box showing all of the files in the Upload folder on the server; there probably won't be any there by default. Just select the files you want to download (drag a window or Ctrl+click) and click the Download button. You can refresh the files list at any time with the refresh button. To change the save folder, enter a new path in the text box provided.
You can also tick the box for "Login Required" if your website is configured with forms authentication. Normally, you can't use a web service if it is protected by forms authentication. This is because forms authentication is performed via a login ASPX page and an authentication cookie is given to the client browser. These conditions are not web service friendly.
There is a work-around to allow you to protect the web service via forms authentication. It sends an HttpWebRequest to the login.aspx page and captures the cookie, placing it in the web service objects. Look in the code-behind of login.aspx.cs to see the minor modification that was needed to accept a login via a query string, i.e. from an HttpWebRequest. To use forms authentication, just change the authentication section of web.config, which is set to Windows authentication by default. Then you will need to enter a username and password -- admin/admin by default -- in the client application to upload or download a file. The client application will auto-detect if forms authentication is required. If it is, it will tick the box for "Login Required" and focus the username field.
The web service has two main methods: AppendChunk is for uploading a file to the server and DownloadChunk is for downloading from the server. These methods receive parameters for the file name, the offset of the chunk and the size of the buffer being sent/received.
The Windows Forms client application can upload a file by sending all of the chunks one after the other using AppendChunk until the file has been completely sent. It does an MD5 hash on the local file and compares it with the hash of the file on the server to ensure that the contents of the files are identical. The download code is very similar, the main difference being that the client must know from the server how big the file is so that it can know when to stop requesting chunks.
A simplified version of the upload code from the Windows Forms client is shown below; the SentBytes variable tracks the offset of the next chunk to send. Have a look in the code for Form1.cs to see the inline comments.
using(FileStream fs = new FileStream(LocalFilePath,
FileMode.Open, FileAccess.Read))
{
int BytesRead = fs.Read(Buffer, 0, ChunkSize);
while(BytesRead > 0 && !worker.CancellationPending)
{
ws.AppendChunk(FileName, Buffer, SentBytes);
SentBytes += BytesRead;
BytesRead = fs.Read(Buffer, 0, ChunkSize);
}
}
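The same loop can be sketched language-neutrally. In the Python sketch below, ws.append_chunk is a hypothetical stand-in for the generated web service proxy call, not a real API:

```python
def upload_file(path, file_name, ws, chunk_size=32 * 1024):
    """Send the file chunk by chunk; returns the total number of bytes sent."""
    sent = 0
    with open(path, "rb") as f:
        while True:
            buf = f.read(chunk_size)
            if not buf:
                break
            # (file name, chunk data, offset) mirrors AppendChunk's parameters
            ws.append_chunk(file_name, buf, sent)
            sent += len(buf)
    return sent
```

The server only ever sees a sequence of small appends at known offsets, which is what makes the hash check and resume support possible.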
In many Windows Forms applications, regular feedback to the user is very important. Having a responsive and visually communicative application is usually worth a small sacrifice in performance. Feedback for file transfers is typically done via a progress bar and/or status bar message. Obviously, the web services aspect to a chunked file transfer is overhead. The client constructs and sends the SOAP message and then the server receives and parses it before sending the response. If the chunk size is very small, i.e. 2 KB, then there is a lot of messaging going on and not much data transfer.
It should be clear then that we should aim for the highest possible chunk size that is within our requirements for quick user interface feedback. I have aimed for each chunk to be completed in 800 milliseconds. You can adjust this setting programmatically before the file transfer. See the PreferredTransferDuration variable in the file transfer object. The client regulates the chunk size automatically, to ensure that each chunk is completed in the desired time. This is done by sampling every 15th chunk -- also adjustable, ref ChunkSizeSampleInterval -- and adjusting the chunk size based on the time it takes to transfer this chunk.
The overall result is a self-controlled file transfer that will adapt to changing network conditions during the transfer. One useful feature is that the web service provides the MaxRequestLength setting on the server, which the client retrieves before the transfer in order to stay within acceptable request sizes on the server.
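The self-tuning rule is simple proportional scaling: if the last sampled chunk took twice the preferred duration, halve the chunk size. A Python sketch of that behaviour (the function name and limits here are illustrative, not the MTOM_Library API):

```python
def next_chunk_size(current_size, elapsed_ms,
                    preferred_ms=800, max_request_bytes=4 * 1024 * 1024):
    """Scale the chunk size so the next sampled chunk takes about preferred_ms."""
    if elapsed_ms <= 0:
        return current_size
    scaled = int(current_size * preferred_ms / elapsed_ms)
    # stay under the server's MaxRequestLength, and keep a sane floor
    return max(1024, min(scaled, max_request_bytes))

print(next_chunk_size(32768, 1600))  # took 2x too long, so halve: 16384
print(next_chunk_size(32768, 400))   # finished 2x too fast, so double: 65536
```

Clamping against the server-reported maximum request size is what keeps the adaptive client from ever tripping the MaxRequestLength limit.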
The application also supports resuming a failed upload or download. I use this application to copy a system image (20+ gigs) from my web server to my home PC. Obviously, there is a good chance of a connection being dropped during such a lengthy transfer, even with a 3 Mbit bidirectional line. Resume support is a must when dealing with such large files and it is very simple to include resume support in this application.
Because data is only written to the file after it has been successfully received, we can be confident in resuming a file transfer based on the size of the partial file. In theory and in practice, this works perfectly. Click File > Resume Upload or File > Resume Download, etc. to locate the partial file you wish to resume. After the file transfer, an MD5 hash is requested from the server and the client to compare the files. The timeout value is increased for the hash check, but it is always possible that it will timeout when checking such a large file. So you might have transferred the correct number of bytes, but have no way of verifying that they are the right bytes.
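Resume support falls out of the design almost for free: because a chunk is only appended after it arrives safely, the next offset is simply the size of the partial file. Roughly (helper name hypothetical):

```python
import os

def resume_offset(partial_path):
    """Where to resume a transfer: the bytes already on disk, or 0 if none."""
    return os.path.getsize(partial_path) if os.path.exists(partial_path) else 0
```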
I have included a manual MD5 hash check, available in the File menu. If the server is not giving the file hash within the timeout limit, you could run the client application directly on the server and check it locally, thus overcoming the timeout problem.
To use this code in your own client application, simply add a reference to MTOM_Library.dll, which is included in the article download. Then you should use the example client application as a starting point. It shows how to use the classes provided in order to perform the file transfer. Refer to the UploadFile and DownloadFile methods of Form1.cs in particular.
UploadFile
DownloadFile
You use the FileTransferUpload class to do an upload and the FileTransferDownload class to do a download. Each class has settings such as ChunkSize, AutoSetChunkSize, LocalSaveFolder, IncludeHashVerification, etc. You can configure these as needed in your code. To change the location of the web service, make sure that the app.config file is deployed to your run directory. Change the MTOM_Library_MtomWebService_MTOM setting to the location of your web service.
Your ASP.NET application will also need to host the MTOM.asmx web service and the Login.aspx code-behind if you use forms authentication. To configure the web application for MTOM, just copy the settings from the web.config file included in this application. You can change the Upload folder on the server in web.config to either a relative path or absolute path. If you set a relative path, the server must be able to MapPath() to the folder. Files are downloaded from and uploaded to this folder.
.NET 2.0 has a great new class called BackgroundWorker to simplify running tasks asynchronously. Although this application sends the file in small chunks, even these small chunks would delay the Windows Forms application and make it look crashed or "hung" during the transfer. So, the web service calls still need to be done asynchronously. The BackgroundWorker class works using an event model, where you have code sections to run for DoWork (when you start), ProgressChanged (to update your progress/status bar) and Completed (or failed).
BackgroundWorker
DoWork
ProgressChanged
Completed
You can pass parameters to the DoWork method, which you could not do with the Thread class in .NET 1.1. I know you could do it with delegates, but delegates aren't great for thread control. You can also access the return value of DoWork in the Completed event handler. So for once, Microsoft has thought of everything and made a very clean threading model. Exceptions are handled internally and you can access them in the Completed method via the RunWorkerCompletedEventArgs.Error property.
Under a typical scenario when integrating this code into your app, you would drag a FileTransferUpload component onto your Windows Form and call RunWorkerAsync() on it to begin the transfer. Since I'm using ThreadPool to queue up all the transfers, the code to set up each file transfer is already running on a background thread. So, I want to run it in blocking/synchronous mode. To achieve this, I added a RunWorkerSync() method to the FileTransferUpload and FileTransferDownload classes. This is also useful to use for console applications where you generally want all code to be synchronous.
When the upload or download is complete, the client asks for an MD5 hash of the file on the server. It can thus compare it with the local file to make sure that they are identical. I originally did these in sequence, but it can take a few seconds to calculate the hash for a large file, so the local and server hashes are now computed in parallel. Note the use of the Join() method, which blocks execution until the thread is complete. The code below shows how this is accomplished:
// start calculating the local hash (stored in class variable)
this.hashThread = new Thread(new ThreadStart(this.CheckFileHash));
this.hashThread.Start();
// request the server hash
string ServerFileHash = ws.CheckFileHash(FileName);
// wait for the local hash to complete
this.hashThread.Join();
if(this.LocalFileHash == ServerFileHash)
    e.Result = "Hashes match exactly";
else
    e.Result = "Hashes do not match";
There is a good chance that the two operations will finish at approximately the same time, so very little waiting around will actually happen.
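The overlap trick is just start-the-thread, do-the-slow-call, then Join. The same pattern in Python, where fetch_server_hash is a hypothetical stand-in for the web service call:

```python
import hashlib
import threading

def local_md5(path, chunk_size=64 * 1024):
    """MD5 of a file, read in chunks so large files do not fill memory."""
    md5 = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            md5.update(block)
    return md5.hexdigest()

def hashes_match(path, fetch_server_hash):
    result = {}
    # compute the local hash on a background thread...
    t = threading.Thread(target=lambda: result.setdefault("local", local_md5(path)))
    t.start()
    # ...while the (slow) server round trip runs on this one
    server_hash = fetch_server_hash()
    t.join()  # block until the local hash is ready
    return result["local"] == server_hash
```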
There have been dozens of questions about people not being able to compile the solution in Visual Studio. You may get an error like this:
The type or namespace name 'MTOMWse' does not exist in the namespace 'UploadWinClient.MtomWebService'.
With an MTOM-enabled web service, Visual Studio is supposed to generate 2 proxy classes: a standard one derived from System.Web.Services.Protocols.SoapHttpClientProtocol and a WSE class derived from Microsoft.Web.Services3.WebServicesClientProtocol, with "Wse" tagged onto the end of the proxy class name.
Sometimes Visual Studio misbehaves and does not generate this class. I don't understand why, but the work-around is to "Show all files" in the Windows Forms project and expand the web service > Reference.map > Reference.cs. Edit this file and change public partial class MTOM : System.Web.Services.Protocols.SoapHttpClientProtocol to public partial class MTOMWse : Microsoft.Web.Services3.WebServicesClientProtocol. Also, make sure to update the constructor to match the new class name. Then it should compile fine.
I have also received a ton of questions about people asking for a web client instead of a Windows Forms client. This is fundamentally impossible because of the advanced file Input/Output required to achieve this solution. For good reason, browsers do not provide this level of access to the file system of the client. A guy called Brettle wrote a progress bar control for ASP.NET file uploading. This may be your best bet, although you must understand that web applications are very limited when it comes to sending large amounts of data to the web server.
I found that MTOM was about 10% faster than DIME in my limited testing. This is probably to do with the need to package up each chunk into a DIME attachment, which is no longer necessary with MTOM. Remember, if you want to send chunks larger than 4 MB, you must increase the .NET 2.0 max request size limit in your web.config file. Feel free to use this code and modify it as you please. Please post a comment for any bugs, suggestions or improvements. Enjoy!
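For reference, the request size limit lives in the httpRuntime element of web.config, and the value is expressed in kilobytes (8 MB shown below; pick a value to suit your maximum chunk size):

```xml
<configuration>
  <system.web>
    <!-- maxRequestLength is in KB; the ASP.NET default is 4096 (4 MB) -->
    <httpRuntime maxRequestLength="8192" />
  </system.web>
</configuration>
```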
Please note that the source solution is in Visual Studio 2008 file format. Rick Strahl has a good blog post on the difference in the file format. You could also try Google to find a tool to convert from the 2008 to the 2005 format.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
Is there any chance someone can help translate this piece of code to use WSE 3 please? I'm having some major problems trying to work it out. It was written for WSE 2 and is being moved to a new server which only has WSE 3. HUGE thanks in advance!
/// <summary>
/// This method appends a dime attachment to an upload in progress.
/// </summary>
/// <param name="InstanceId">The string returned by Initialize() method.</param>
/// <param name="offset">The offset at which to start writing.</param>
/// <param name="length">The size of the buffer.</param>
[WebMethod(EnableSession=true)]
public void AppendChunk(string InstanceId, long offset, int bufferSize)
{
Instance i = Instance.GetInstanceById(InstanceId);
if (i != null)
{
if(RequestSoapContext.Current == null || RequestSoapContext.Current.Attachments.Count == 0)
throw new SoapFormatException("No SOAP attachments included in message");
Stream dimeStream = RequestSoapContext.Current.Attachments[0].Stream;
byte[] buffer = new byte[dimeStream.Length];
dimeStream.Read(buffer, 0, buffer.Length);
dimeStream.Close();
i.AppendChunk(buffer, offset, bufferSize);
}
else
Instance.CustomSoapException("Instance Not Found", InstanceId);
}
public partial class Form1 : Form
{
public static MTOM_Library.MtomWebService.MTOM WebService;
private FileTransferUpload ftu = new FileTransferUpload();
}
private void btnExit_Click(object sender, EventArgs e)
{
if(btnExit.Text == "Cancel")ftu.CancelAsync();
else Close();
}
private void btnCancel_Click(object sender, EventArgs e)
{
// cancel all active threads and remove from thread queue
for (int i=threads.Count-1; i>= 0; i--)
{
if (threads[i].IsBusy)
threads[i].CancelAsync();
threads.RemoveAt(i);
}
}
16 April 2009 05:02 [Source: ICIS news]
SHANGHAI (ICIS news)--China Securities Regulatory Commission (CSRC) has approved polyvinyl chloride (PVC) futures trading on the Dalian Commodity Exchange (DCE) in a bid to help market players better manage price volatility, a DCE source said on Thursday.
“The government has approved that we can start PVC futures trading, and we are making preparations now,” the source told ICIS news in Mandarin. “We are striving to start trading by the middle of this year.”
PVC will become the second chemical product traded by the exchange after linear low density polyethylene (LLDPE). Other commodities traded on the exchange include corn, soybean, soybean oil, soybean meal and palm oil.
“I think futures will open another arbitrage window for us. The prices in futures will guide prices in the spot market so that we can hedge risk,” said a trader in east China.
DCE is currently seeking public opinions on PVC futures contracts and their regulations, according to an announcement posted on its website.
“Starting PVC futures trading could optimize the PVC pricing mechanism, guide the upstream and downstream companies of PVC to arrange production and business and ensure the balance of market supply and demand,” the statement said.
The proposed contract unit is five tonnes with a minimum price movement of CNY5/tonne ($0.73/tonne). The daily cap on price movement is proposed to be 4% of the previous day's close.
19 May 2011 15:44 [Source: ICIS news]
HELSINKI (ICIS)--The objectives of the EU's Reach chemicals regulation have not been achieved and legislation needs to be more demanding of the chemical industry for things to change, Danish politician Dan Jorgensen said on Thursday.
“There is no doubt that Reach is a giant leap forward… it is probably still the most comprehensive single piece of legislation ever made in the EU, and compared to what we had before it is a major victory… but this is not to say we should be satisfied,” he said.
Jorgensen, a Member of the European Parliament (MEP) who was speaking at the Helsinki Chemicals Forum 2011 in Finland, said Reach has three main objectives.
The objectives are to reverse the burden of proof so that is up to industry, not authorities, to find out whether or not a substance is of high concern; to make sure substances of high concern are substituted with others of less concern in order to protect human health and the environment; and to do this in a way so that there are fewer animal experiments.
Jorgensen said that the chemical industry has failed to provide enough data to comply with Reach rules. He blamed the failure on legislation rather than the industry itself and asked the European Chemicals Agency (ECHA) to test more data for compliance and to be more ambitious with its demands for information.
He also said that the ECHA was not using enough of its powers to ensure substances of high concern are replaced by other substances.
Jorgensen said that there are still too many tests on animals, which cost the industry a lot of money, and that there is no excuse not to use alternative methods.
He said that the European Centre for the Validation of Alternative Methods (ECVAM) has validated alternatives.
Jorgensen added that it was "crazy" that many alternative methods were not being used because they were still awaiting approval from the Organisation for Economic Co-operation and Development (OECD).
23 May 2012 10:45 [Source: ICIS news]
SINGAPORE (ICIS)--China is widely expected to roll out fresh measures to stimulate its slowing economy, analysts said on Wednesday.
The Chinese economy performed far below expectations in April when growth in industrial production, imports, exports, fixed-asset investment and bank lending all slowed. Worse news for May is widely expected, according to analysts.
“This has caused increasing concern to policy makers,” said Zhang Junfeng, senior analyst at Shenzhen-based broker China Merchants Securities (CMS).
The forthcoming measures would focus on boosting domestic consumption, the analysts said.
The most effective and direct method would be to lower lending costs to motivate consumers and businesses to spend.
“We expect that the country will lower banks’ reserve requirement ratio [RRR] further in mid-June after the release of the data for May,” Zhang said, adding that an interest rate cut may be announced in July should the RRR policy fail to deliver.
RRR had been reduced twice this year, on 24 February and 18 May.
The government may also broaden tax reductions.
“The government has realised that tax rates are quite high for many industries and a lower tax burden would help stimulate consumption,” said Wu Huaiguo at Guangzhou-based broker United Securities.
Measures on boosting investment would be limited, analysts said, saying it was unlikely that the government would relax restrictions on property markets.
On Wednesday, the World Bank cut its economic growth forecast for China.
mvcur - output cursor movement commands to the terminal
#include <curses.h> int mvcur(int oldrow, int oldcol, int newrow, int newcol);
The mvcur() function outputs one or more commands to the terminal that move the terminal's cursor to (newrow, newcol), an absolute position on the terminal screen. The (oldrow, oldcol) arguments specify the former cursor position. Specifying the former position is necessary on terminals that do not provide coordinate-based movement commands. On terminals that provide these commands, Curses may select a more efficient way to move the cursor based on the former position. If (newrow, newcol) is not a valid address for the terminal in use, mvcur() fails. If (oldrow, oldcol) is the same as (newrow, newcol), then mvcur() succeeds without taking any action. If mvcur() outputs a cursor movement command, it updates its information concerning the location of the cursor on the terminal.
Upon successful completion, mvcur() returns OK. Otherwise, it returns ERR.
No errors are defined.
After use of mvcur(), the model Curses maintains of the state of the terminal might not match the actual state of the terminal. The application should touch and refresh the window before resuming conventional use of Curses.
doupdate(), is_linetouched(), <curses.h>.
Loop Issues
David Kanton
Greenhorn
Joined: Apr 23, 2012
Posts: 2
posted
Apr 23, 2012 11:55:22
I am trying to develop a number guessing game using the Math and Scanner classes. I have two classes, the GuessingGame class and the GuessingGameTester class.
The programs are supposed to randomly generate a number and tell the user too low/high, getting colder/warmer, and if they won/lost. The user has five guesses to guess the number between 0 and 100. If they run out of guesses or guess the correct number, they are asked if they want to play again.
Both classes compile and the tester class returns the correct results, but it will not allow the user to guess again. Instead, it asks the user if they want to play again after one guess. If the user chooses to play again, no matter what number is the user is told that they won.
Can someone please take a look at my code and point me in the right direction? Thanks!
import java.lang.Math;
import java.util.Scanner;

public class GuessingGame
{
    private boolean gameOver;
    private int differential;
    public int newGuess;
    public int currentNumGuesses;
    public int answer;
    private int max;

    public GuessingGame(int max)
    {
        this.answer = (int) (Math.random() * 100);
        max = 5;
        this.answer = answer;
        this.differential = differential;
        this.newGuess = newGuess;
        currentNumGuesses = this.currentNumGuesses;
        gameOver = false;
    }

    public int newGame(int max)
    {
        gameOver = false;
        differential = 100;
        return answer;
    }

    public String guess(int newGuess)
    {
        differential = answer - newGuess;
        String guess = "";
        String closer = "";
        int currentDiff = answer - differential;

        for (currentNumGuesses = 0; currentNumGuesses <= 5; currentNumGuesses++)
        {
            if (newGuess == answer)
            {
                guess = "Congratulations! You won!!!";
                newGuess++;
            }
            else if (newGuess < answer && currentDiff > differential)
            {
                guess = "Too Low";
                closer = "Getting Colder";
                newGuess++;
            }
            else if (newGuess < answer && currentDiff < differential)
            {
                guess = "Too Low";
                closer = "Getting Warmer";
                newGuess++;
            }
            else if (newGuess < answer && currentDiff == differential)
            {
                guess = "Too Low";
                closer = "You guessed the same number. Guess Again.";
                newGuess++;
            }
            else if (newGuess > answer && currentDiff > differential)
            {
                guess = "Too High";
                closer = "Getting Colder";
                newGuess++;
            }
            else if (newGuess > answer && currentDiff < differential)
            {
                guess = "Too High";
                closer = "Getting Warmer";
                newGuess++;
            }
            else if (newGuess > answer && currentDiff == differential)
            {
                guess = "Too High";
                closer = "You guessed the same number. Guess Again.";
                newGuess++;
            }
            else if (newGuess < 0 || newGuess > 100)
            {
                guess = "Guess out of range. \n The guess must be between 0 and 100. \n Guess Again.";
                newGuess++;
            }
            else if (newGuess == answer || currentNumGuesses > max)
            {
                guess = "Game Over.\n";
            }
        }
        return guess + "\n" + closer;
    }

    public boolean isGameOver()
    {
        if (newGuess != answer || currentNumGuesses < max)
        {
            gameOver = false;
        }
        else
        {
            gameOver = true;
        }
        return gameOver;
    }

    public int getAnswer()
    {
        return answer;
    }

    public int getDifferential()
    {
        return differential;
    }
}
import java.lang.Double;
import java.util.Scanner;

public class GuessingGameTester {
    public static void main(String[] args) {
        GuessingGame gameOne = new GuessingGame(5);
        gameOne.answer = (int) (Math.random() * 100);
        System.out.println(gameOne.answer);
        System.out.println("Welcome to the Guessing Game");
        System.out.println("Guess a number between 0 and 100");
        System.out.println("Number of guesses allowed: 5");
        Scanner input = new Scanner(System.in);
        gameOne.newGuess = input.nextInt();
        String playerGame1 = gameOne.guess(gameOne.newGuess);
        System.out.println(playerGame1);
        while (gameOne.isGameOver() == true) {
            System.out.println("Would you like to play again? Enter Y for yes or N for no.");
            String playerResponse = input.next();
            if (playerResponse.equalsIgnoreCase("Y")) {
                GuessingGame gameTwo = new GuessingGame(5);
                gameTwo.answer = (int) (Math.random() * 100) + 1;
                System.out.println(gameTwo.answer);
                System.out.println("Welcome to the Guessing Game");
                System.out.println("Guess a number between 0 and 100");
                System.out.println("Number of guesses allowed: 5");
                gameTwo.newGame(gameTwo.newGuess);
                gameTwo.newGuess = input.nextInt();
                String playerGame2 = gameTwo.guess(gameTwo.newGame(gameTwo.newGuess));
                System.out.println(playerGame2);
            } else {
                System.out.println("Thanks for playing!");
            }
        }
    }
}
Campbell Ritchie
Sheriff
Joined: Oct 13, 2005
Posts: 41531
31
posted
Apr 23, 2012 13:03:55
0
Welcome to the Ranch
How far have you got with that game? Can it be played at all? I can see all sorts of things. I looked at the imports (you don’t need to import java.lang.Anything, and you never use Double) and started worrying.
You have got 5 fields which you are setting up very strangely in your constructor. You ask for max as a parameter and then set it to 5, regardless of what value is passed. You are setting four other fields to their default values, rather than giving them values in the constructor.
I think you ought to initialise all the fields in the constructor; if they haven’t got default values (eg max = 5), you can pass those values as parameters.
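To make that advice concrete, here is one way such a constructor could look. This is only a sketch, not the poster's actual class: the field names follow the original post, the class is renamed so it isn't confused with the real one, and the 0–100 answer range and the default differential of 100 are assumptions taken from the prompts elsewhere in the thread.

```java
// Sketch only: every field is initialised from a parameter or an explicit
// default, instead of the self-assignments in the original constructor.
public class GuessingGameSketch {
    private boolean gameOver;
    private int differential;
    private int currentNumGuesses;
    private final int answer;
    private final int max;

    public GuessingGameSketch(int max) {
        this.max = max;                            // keep the caller's value; don't overwrite with 5
        this.answer = (int) (Math.random() * 101); // assumed range: 0..100 inclusive
        this.differential = 100;                   // worst possible starting distance
        this.currentNumGuesses = 0;
        this.gameOver = false;
    }

    public int getMax() { return max; }

    public int getAnswer() { return answer; }
}
```

Passing max through like this also lets the tester decide how many guesses a game allows, instead of the class silently overriding it.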
I think you need to work out what that for loop is doing. I think you should write for loops in this sort of format whenever possible
for (int i = 0; i < j; i++) ...
In that case, the j would appear to be max, so you can get rid of the 5 and write max.
Now, back to that loop. How many guesses does it take, and where does it take a new guess? How does newGuess alter as you go through that loop? Why is newGuess a field when it is passed to that method? Do you need the newGuess field at all? You are never using it.
In the isGameOver method (good name for the method), you are back to the newGuess field, which you have never used, so it doesn't supply any useful information. I think you should set the gameOver field in the method which takes the guesses, and you can use the gameOver field to help control the loop.
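One way to wire that up is shown below. This is purely a sketch with assumed names, not the poster's code: the answer is fixed to a constant so the behaviour is demonstrable, and the guessing method both updates gameOver and returns its message.

```java
// Sketch only: the guessing method sets the gameOver flag itself, so a
// caller can loop on isGameOver() without inspecting any other state.
public class GuessLoopSketch {
    private boolean gameOver = false;
    private int guessesTaken = 0;
    private final int answer = 42; // fixed here so the sketch is predictable
    private final int max = 5;

    public String guess(int newGuess) {
        guessesTaken++;
        if (newGuess == answer) {
            gameOver = true;                       // won: end the game
            return "Congratulations! You won!!!";
        }
        if (guessesTaken >= max) {
            gameOver = true;                       // out of guesses: end the game
            return "Game Over.";
        }
        return newGuess < answer ? "Too Low" : "Too High";
    }

    public boolean isGameOver() { return gameOver; }
}
```

A caller can then drive the whole game with `while (!game.isGameOver()) { ... game.guess(next); }` and never needs to re-derive the end condition itself.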
I think, I am afraid, that you will be starting from scratch in quite a lot of that class.
Mahesh Murugaiyan
Greenhorn
Joined: Jun 25, 2009
Posts: 21
posted
Apr 23, 2012 13:36:06
0
I agree with all of Ritchie's points. You have to think about restructuring the logic.
- The reason why your tester class did not ask to continue is this: say I did not guess the number the first time; guess returns a message such as "Too Low". The while loop in #22 then checks whether gameOne.isGameOver() is true. Because I made a wrong guess, it isn't, so the flow never enters the loop and the code exits.
GuessingGame.guess
- Why do you need a loop in the guess method?
- Don't you think having just the one variable currentDiff would take the guesser closer to the number? Comparing currentDiff with the previous differential may mislead sometimes.
- To start with, I have restructured just the tester class to make it readable and modular. See the tester class below: I have moved the logic from main into a playGame method. To facilitate this you have to change GuessingGame.guess to return a boolean instead of a String; you can simply print the messages in the guess method itself, and then work on the guess logic.
public static void main(String[] args) {
    int attempts = 5;
    GuessingGameTester tester = new GuessingGameTester();
    tester.playGame(attempts);
}

private void playGame(int attempts) {
    Scanner input;
    GuessingGame gameOne = new GuessingGame(attempts);
    gameOne.answer = (int) (Math.random() * 100);
    System.out.println(gameOne.answer);
    System.out.println("Welcome to the Guessing Game");
    System.out.println("Guess a number between 0 and 100");
    System.out.println("Number of guesses allowed: 5");
    input = new Scanner(System.in);
    for (int ctr = 0; ctr < attempts; ctr++) {
        gameOne.newGuess = input.nextInt();
        gameOne.guess(gameOne.newGuess);
        if (gameOne.isGameOver()) {
            break;
        }
    }
    gameOne.setGameOver(true);
    if (gameOne.isGameOver()) {
        System.out.println("Would you like to play again? Enter Y for yes or N for no.");
        String playerResponse = input.next();
        if (playerResponse.equalsIgnoreCase("Y")) {
            playGame(attempts);
        } else {
            System.out.println("Thanks for playing!");
        }
    }
}
The code above is not a perfect solution yet, but at least it will get you into the second and further attempts, which wasn't working for you before.
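On the warmer/colder point above: the comparison becomes straightforward if you keep only the distance of the previous guess and compare each new guess's distance against it. The sketch below is illustrative only; the class and method names are made up for the example.

```java
// Sketch: rate a guess as warmer or colder by comparing its distance to
// the answer against the distance of the previous guess.
public class WarmerColder {
    private Integer previousDistance = null; // null until the first guess

    public String rate(int guess, int answer) {
        int distance = Math.abs(answer - guess);
        String hint;
        if (previousDistance == null) {
            hint = "First guess";
        } else if (distance < previousDistance) {
            hint = "Getting Warmer";
        } else if (distance > previousDistance) {
            hint = "Getting Colder";
        } else {
            hint = "Same distance as before";
        }
        previousDistance = distance; // remember for the next comparison
        return hint;
    }
}
```

Using the absolute distance also removes the need for separate high/low branches when deciding warmer versus colder; "Too High"/"Too Low" can be reported independently.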
David Kanton
Greenhorn
Joined: Apr 23, 2012
Posts: 2
posted
Apr 23, 2012 16:51:15
0
Thanks for the feedback! I took all of the advice. I'm still having problems with the loop in the GuessingGame class. I'm going to keep at it and see what I can come up with. If you have any other suggestions, I'm all ears. Thanks again!!!
Campbell Ritchie
Sheriff
Joined: Oct 13, 2005
Posts: 41531
31
posted
Apr 24, 2012 02:51:32
0
Don’t just keep at it. Write down your algorithm and refine it until it is in words of one syllable. Once you get it to that stage, you should find the code very easy to write.
Chatlog 2008-10-15
From OWL
16:53:59 <scribenick> PRESENT: bijan (muted), Peter_Patel-Schneider, IanH, calvanese, baojie, uli (muted), Ivan (muted), bmotik (muted), Zhe, Ratnesh, christine, clu, bernardo, Alan_Ruttenberg, Evan, msmith, Markus 16:53:59 <scribenick> REGRETS: Michael Schneider 16:53:59 <RRSAgent> RRSAgent has joined #owl 16:53:59 <RRSAgent> logging to 16:54:07 <bmotik> rrsagent. make log public 16:54:11 <baojie> baojie has joined #owl 16:54:14 <bmotik> rrsagent, make log public 16:54:55 <Zakim> SW_OWL()1:00PM has now started 16:54:59 <bijan> zakim, mute me 16:54:59 <Zakim> sorry, bijan, I do not know which phone connection belongs to you 16:55:02 <Zakim> +??P7 16:55:07 <bijan> zakim, ??p7 is me 16:55:07 <Zakim> +bijan; got it 16:55:12 <bijan> zakim, mute me 16:55:12 <Zakim> sorry, bijan, muting is not permitted when only one person is present 16:55:16 <bijan> Grr 16:56:16 <IanH> IanH has joined #owl 16:56:16 <Zakim> +Peter_Patel-Schneider 16:56:18 <pfps> zakim, mute me 16:56:18 <Zakim> sorry, pfps, I do not know which phone connection belongs to you 16:56:23 <bijan> zakim, mute me 16:56:23 <Zakim> bijan should now be muted 16:56:33 <Zakim> + +39.047.101.aaaa 16:57:00 <calvanese> zakim, aaaa is me 16:57:00 <Zakim> +calvanese; got it 16:57:01 <IanH> zakim, who is here? 
16:57:02 <Zakim> On the phone I see bijan (muted), Peter_Patel-Schneider, calvanese 16:57:04 <Zakim> On IRC I see IanH, baojie, RRSAgent, Zakim, bmotik, bijan, pfps, calvanese, clu, sandro, trackbot 16:57:13 <Zakim> -calvanese 16:57:56 <uli> uli has joined #owl 16:58:01 <Zakim> +IanH 16:58:07 <Zakim> +calvanese 16:58:59 <IanH> ScribeNick: calvanese 16:59:00 <ivan> ivan has joined #owl 16:59:35 <uli> hm, dialling in seems to be difficult today 16:59:47 <Zakim> +baojie 17:00:00 <Zakim> +??P16 17:00:04 <Zakim> +??P17 17:00:05 <bmotik> Zakim, ??P16 is me 17:00:06 <Zakim> +bmotik; got it 17:00:12 <ivan> zakim, dial ivan-voip 17:00:13 <Zakim> ok, ivan; the call is being made 17:00:16 <Zakim> +Ivan 17:00:18 <Zakim> -bmotik 17:00:27 <uli> zakim, ??P17 is me 17:00:28 <Zakim> +uli; got it 17:00:30 <ivan> zakim, mute me 17:00:30 <Zakim> Ivan should now be muted 17:00:38 <Zakim> +??P16 17:00:42 <bmotik> Zakim, ??P16 is me 17:00:42 <Zakim> +bmotik; got it 17:00:46 <bmotik> Zakim, mute me 17:00:46 <Zakim> bmotik should now be muted 17:00:48 <uli> zakim, mute me 17:00:48 <Zakim> uli should now be muted 17:01:11 <IanH> zakim, who is here? 17:01:11 <Zakim> On the phone I see bijan (muted), Peter_Patel-Schneider, IanH, calvanese, baojie, uli (muted), Ivan (muted), bmotik (muted) 17:01:13 <Zakim> On IRC I see ivan, uli, IanH, baojie, RRSAgent, Zakim, bmotik, bijan, pfps, calvanese, clu, sandro, trackbot 17:01:14 <Ratnesh> Ratnesh has joined #owl 17:01:32 <Zhe> Zhe has joined #owl 17:01:33 <calvanese> Topic: Admin 17:01:40 <IanH> zakim, who is here? 17:01:40 <Zakim> On the phone I see bijan (muted), Peter_Patel-Schneider, IanH, calvanese, baojie, uli (muted), Ivan (muted), bmotik (muted) 17:01:42 <Zakim> On IRC I see Zhe, Ratnesh, ivan, uli, IanH, baojie, RRSAgent, Zakim, bmotik, bijan, pfps, calvanese, clu, sandro, trackbot 17:01:43 <Zakim> +Zhe 17:01:43 <MarkusK_> MarkusK_ has joined #owl 17:01:49 <calvanese> IanH: roll call 17:01:55 <pfps> q+ 17:01:59 <IanH> q? 
17:02:00 <calvanese> IanH: agenda amendments? 17:02:09 <Zhe> zakim, mute me 17:02:09 <Zakim> Zhe should now be muted 17:02:15 <bijan> +2 17:02:27 <IanH> q? 17:02:36 <IanH> ack ??p17 17:02:38 <uli> q- 17:02:38 <pfps> q- 17:02:40 <Zakim> +??P20 17:02:43 <Zakim> +??P24 17:02:45 <uli> 56 17:02:51 <Zakim> -??P24 17:02:52 <Ratnesh> Zakim, ??P20 is me 17:02:53 <Zakim> +Ratnesh; got it 17:03:00 <pfps> pfps: propose to put issue-56 into "under consideration for resolution" section 17:03:02 <calvanese> pfps: pfps move issue 56 under consideration for resolution 17:03:09 <bmotik> bmotik has joined #owl 17:03:13 <alanr> alanr has joined #owl 17:03:18 <msmith> msmith has joined #owl 17:03:21 <calvanese> IanH: moved 17:03:24 <Zakim> + +49.421.218.6.aabb 17:03:28 <Zakim> +??P28 17:03:34 <calvanese> Ianh: no other amendments 17:03:37 <pfps> minutes are OK 17:03:37 <uli> yes 17:03:40 <clu> zakim, aabb is me 17:03:40 <Zakim> +clu; got it 17:03:40 <uli> look fine 17:03:40 <calvanese> IanH: previous minutes? 17:03:41 <Zakim> +Alan_Ruttenberg 17:03:44 <clu> zakim, mute me 17:03:45 <Zakim> clu should now be muted 17:03:53 <Zakim> +msmith 17:03:57 <bcuencagrau> bcuencagrau has joined #owl 17:03:58 <calvanese> RESOLVED: accept previous minutes 17:04:30 <calvanese> IanH: action item status 17:04:37 <bcuencagrau> bcuencagrau has joined #owl 17:04:50 <calvanese> ... no pending review actions 17:05:03 <calvanese> ... due and overdue actions 17:05:13 <calvanese> ... Action 189 17:05:34 <pfps> isn't action-189 for reviewing mapping for the *previous* publication? 17:05:43 <bernardo> bernardo has joined #owl 17:06:04 <calvanese> ... 
alan: proposes to close the action 17:06:21 <Zakim> +??P32 17:06:21 <calvanese> RESOLVED: close action 189 17:06:28 <bernardo> Zakim, ??P32 is me 17:06:28 <Zakim> +bernardo; got it 17:06:32 <bernardo> Zakim, mute me 17:06:32 <Zakim> bernardo should now be muted 17:06:58 <calvanese> Ianh: action 217 17:07:25 <baojie> Jie Bao 17:07:34 <calvanese> baojie: continue for another week 17:07:53 <pfps> q+ 17:08:00 <IanH> q? 17:08:04 <IanH> ack pfps 17:08:06 <calvanese> IanH: action amendments for agenda F2F next week? 17:08:26 <calvanese> pfps: sent email on amendment 17:08:37 <calvanese> ... there is a disturbing asymmetry 17:08:40 <alanr> q+ 17:08:47 <IanH> ack alanr 17:09:33 <IanH> q? 17:09:39 <calvanese> alanr: have a session where to discuss options 17:10:09 <ewallace> ewallace has joined #owl 17:10:18 <calvanese> IanH: discuss later on agenda amendment 17:10:24 <calvanese> pfps: sounds fine 17:10:40 <calvanese> Ianh: no other agenda amendments 17:11:06 <pfps> q- 17:11:13 <calvanese> ... no teleconf next week because of F2F 17:11:26 <calvanese> Topic: reviewing and publishing 17:11:43 <calvanese> Ianh: first public draft published oct. 8 17:11:51 <uli> thanks, Sandro and others! 17:11:52 <pfps> hear, hear! 17:11:54 <calvanese> ... thank to all editors and contributors 17:11:56 <ivan> clap clap clap 17:11:56 <IanH> q? 17:12:20 <Zakim> +Evan_Wallace 17:12:28 <calvanese> Ianh: reviewing remaining documents 17:12:31 <IanH> q? 17:12:50 <bijan> q+ 17:11:26 <calvanese> SubTopic: Manchester Syntax 17:12:51 <calvanese> alanr: my review is not fully completed. not satisfied 17:12:56 <calvanese> ... not sure how to proceed 17:12:56 <bijan> q- 17:13:00 <bijan> THat was my question 17:13:14 <calvanese> Ianh: does this affect finishing the review? 17:13:14 <cgolbrei> cgolbrei has joined #owl 17:13:26 <calvanese> alanr: no, I'll finish the review 17:14:01 <Zakim> +??P21 17:14:10 <calvanese> IanH: comment that alan is not satisfied with editor's response. 
not sure how to proceed 17:14:17 <calvanese> alanr: take it as issue 17:14:24 <IanH> zakim, who is here? 17:14:28 <Zakim> On the phone I see bijan (muted), Peter_Patel-Schneider, IanH, calvanese, baojie, uli (muted), Ivan (muted), bmotik (muted), Zhe (muted), Ratnesh, clu (muted), MarkusK_, 17:14:33 <Zakim> ... Alan_Ruttenberg, msmith, bernardo (muted), Evan_Wallace, ??P21 17:14:38 <Zakim> On IRC I see cgolbrei, ewallace, bernardo, msmith, alanr, bmotik, MarkusK_, Zhe, Ratnesh, ivan, uli, IanH, baojie, RRSAgent, Zakim, bijan, pfps, calvanese, clu, sandro, trackbot 17:14:40 <cgolbrei> ??P21 cgolbrei 17:14:46 <calvanese> Ianh: alan will submit an issue on this 17:14:55 <ivan> zakim, ??P21 is christine 17:14:55 <Zakim> +christine; got it 17:14:55 <uli> zakim, ??P21 is cgolbrei 17:14:56 <Zakim> I already had ??P21 as christine, uli 17:11:26 <calvanese> SubTopic: Quick Reference Guide 17:15:02 <calvanese> Ianh: anything to report on quick reference guide 17:15:05 <IanH> q? 17:15:41 <calvanese> Zhe: at meeting of task force decided to redesign the card(?) 17:15:58 <ivan> q+ 17:16:06 <ivan> zakim, unmute me 17:16:06 <Zakim> Ivan should no longer be muted 17:16:06 <IanH> q? 17:16:08 <baojie> 17:16:29 <IanH> q? 17:16:32 <IanH> ack ivan 17:16:36 <calvanese> IanH: when new version of card(?) available, then new reviewing round 17:16:47 <bijan> q+ 17:17:13 <IanH> q? 17:17:19 <bijan> zakim, unmute me 17:17:19 <Zakim> bijan should no longer be muted 17:17:22 <calvanese> Ianh: proposes to close reviewing actions on the card 17:17:24 <IanH> q? 17:17:35 <calvanese> bijan: why developed on external wiki? 17:17:53 <uli> baojie 17:18:04 <uli> ...answered to Bijan 17:18:07 <ivan> +1 to bijan 17:18:13 <bijan> zakim, mute me 17:18:13 <Zakim> bijan should now be muted 17:18:14 <calvanese> ... 
we should do all the working group work on the working group wiki 17:18:28 <calvanese> ACTION: move reference card work on owl wiki 17:18:28 <trackbot> Sorry, couldn't find user - move 17:19:00 <calvanese> ACTION: baojie move reference card work on owl wiki 17:19:00 <trackbot> Sorry, couldn't find user - baojie 17:19:02 <IanH> q? 17:19:13 <bijan> zakim, unmute me 17:19:13 <Zakim> bijan should no longer be muted 17:19:20 <baojie> try Jie Bao? 17:19:23 <IanH> q? 17:19:25 <pfps> ACTION: JieBao move reference card work on owl wiki 17:19:25 <trackbot> Sorry, couldn't find user - JieBao 17:20:30 <calvanese> IanH: can BiJan discuss this with task force 17:20:49 <calvanese> bijan: ok 17:21:03 <calvanese> SubTopic: Requirements 17:21:03 <calvanese> IanH: what is the latest state with requirement? 17:21:13 <bijan> Bijan: I was wondering if the TF has determined whether they'll use HTML for the quick reference card. 17:21:16 <uli> Diego, this is cgolbrei 17:21:33 <calvanese> cgolbrei: since review updated requirements to take into account comments 17:22:20 <calvanese> ... actions till Oct. 20 to be done. by then we will have a complete version of requirements 17:22:27 <IanH> q? 17:22:31 <bijan> q- 17:22:31 <ewallace> yes 17:22:35 <calvanese> ... in touch with Ivan to improve narrative of usecases 17:22:41 <IanH> q? 
17:22:44 <bijan> zakim, mute me 17:22:44 <Zakim> bijan should now be muted 17:22:51 <cgolbrei> evan 17:22:51 <ivan> :-) 17:22:53 <calvanese> IanH: so good progress has been done 17:22:55 <uli> s/Ivan/Evan 17:22:57 <bijan> I've not reviewed the requirements in quite some time 17:22:59 <ivan> s/ivan/evan/ 17:22:59 <calvanese> s/ivan/evan 17:23:05 <baojie> QRG is transferred to OWL Wiki 17:23:09 <calvanese> s/ivan/evan/ 17:23:15 <bijan> I've had reservations about it...if we hit a point where you'd like my (critical) feedback, let me know 17:23:25 <calvanese> Topic: Issues 17:24:17 <calvanese> IanH: there are a few issues remaining that have been talked to death, but still no clear right or wrong answer 17:24:44 <IanH> q? 17:24:49 <calvanese> ... we have just to vote with a majority on these. this is the reason for the slight change of title on this first part of the issues 17:25:01 <calvanese> IanH: no procedural questions 17:25:14 <calvanese> SubTopic: Issue 109 17:25:14 <calvanese> IanH: Issue 109 about namespaces 17:25:28 <IanH> 17:25:46 <calvanese> IanH: this email contains a reasonable summary of the issue 17:25:48 <bijan> that's the proposal 17:26:05 <pfps> Ian posted in one of my messages. Ivan is free to agree with it. 
17:26:10 <calvanese> ivan: this is Peter's email, not mine 17:26:20 <IanH> 17:26:33 <calvanese> IanH: this is the one from Ivan that summarizes the discussion 17:27:05 <bijan> Key bit: 17:27:06 <bijan> From that point on the disagreement between Bijan and me is, I believe, 17:27:06 <bijan> a kind of a judgement call: 17:27:06 <bijan> - Bijan believes that introducing a _different_ URI for the purpose of 17:27:07 <bijan> #2 is too expensive, so to say, in terms of the user community, and that 17:27:07 <bijan> issue of this extra 'price' should have a higher priority than other 17:27:07 <calvanese> IanH: Ivan gives a short summary of the issue 17:27:08 <bijan> considerations 17:27:10 <bijan> - I am concerned that mixing two very different features/roles on the 17:27:12 <bijan> same URI is not a clear design and may be misleading (see also my remark 17:27:14 <bijan> below), and I do not feel the 'price' referred to by Bijan to be high 17:27:16 <bijan> enough to overrule this concern. 17:28:01 <calvanese> Ivan: summarizes issue 109 17:28:07 <bijan> q+ 17:28:40 <IanH> q? 17:28:44 <bijan> zakim, unmute me 17:28:44 <Zakim> bijan should no longer be muted 17:28:46 <calvanese> IanH: there is no new info to bring on this issue 17:29:17 <calvanese> bijan: would like to see a pointer to another place that says that it is bad engineering practice 17:29:56 <calvanese> ... it is not just a beaty contest, since it changes how I have to write software 17:30:03 <bijan> zakim, mute me 17:30:03 <Zakim> bijan should now be muted 17:30:16 <bijan> I've shown material effects! 17:30:28 <bijan> Yep 17:30:31 <bijan> Yes 17:30:35 <IanH> q? 
17:30:38 <bijan> q- 17:30:39 <IanH> ack bijan 17:30:40 <calvanese> IanH: the long discussion we already had didn't seem to converge 17:30:44 <bijan> zakim, mute me 17:30:44 <Zakim> bijan should now be muted 17:31:22 <bijan> If there's a -1 then ask if we're going to lie down in the road 17:31:47 <calvanese> IanH: we could start with a straw poll first, and then try to find out which direction people are going 17:31:58 <alanr> q+ 17:32:02 <IanH> q? 17:32:29 <ivan> q+ 17:32:31 <bijan> "lie down in the road" is old w3c lingo for "make a formal objection" 17:32:33 <ivan> ack alanr 17:32:38 <IanH> ack alanr 17:32:58 <IanH> ack ivan 17:33:02 <IanH> q? 17:33:03 <pfps> Straw Poll: same = same namespace, different = different namespace (caps for road kill)?? 17:33:21 <calvanese> ivan: whatever the outcome is, w3c does not want to use this as a formal objection. 17:33:39 <calvanese> ... we will go with the majority, and would not make a formal objection out of that 17:33:47 <bijan> Manchester has not determined whether we'd formally object 17:34:02 <pfps> mixed case for Manchester :-) 17:34:06 <calvanese> IanH: straw poll 17:34:09 <ewallace> 0 17:34:12 <baojie> 0 17:34:14 <ivan> different 17:34:14 <pfps> same 17:34:18 <bijan> same 17:34:40 <IanH> STRAWPOLL: same = same namespace, different = different, 0 = don't care 17:34:50 <Zhe> 0 17:34:50 <pfps> ame :-) 17:34:51 <bijan> ame 17:34:52 <ewallace> 0 17:34:53 <msmith> same 17:34:53 <ivan> different 17:35:00 <uli> same 17:35:03 <bmotik> 0 17:35:04 <bernardo> same 17:35:04 <cgolbrei> same 17:35:05 <alanr> 0 17:35:05 <baojie> 0 17:35:05 <MarkusK_> same 17:35:10 <calvanese> 0 17:35:13 <clu> 0 17:35:42 <bijan> Yes 17:36:04 <calvanese> IanH; same is in vast majority. sandro sent email that he would vote "different" 17:36:14 <alanr> also manchester 17:36:33 <bijan> So are we settled? 
17:36:58 <calvanese> IanH: only W3C voting different 17:37:11 <calvanese> IanH: let's have now a formal vote 17:37:58 <IanH> PROPOSAL: close issue-109 by resolving to use a single namespace for everything and that that namespace is the old OWL namespace and that we use it for the XML syntax elements and attributes. 17:38:11 <bijan> Ok by me 17:38:21 <pfps> I believe that the above is adequate 17:38:29 <pfps> q+ 17:38:35 <IanH> q? 17:38:40 <bijan> Separate issue? 17:38:40 <IanH> ack pfps 17:38:44 <bijan> zakim, unmute me 17:38:46 <Zakim> bijan should no longer be muted 17:38:56 <calvanese> pfps: there is a proposal for attributes having no namespace 17:39:38 <bijan> zakim, mute me 17:39:38 <Zakim> bijan should now be muted 17:39:50 <uli> or add (pending decision about namespacing of attributes in general) 17:39:54 <IanH> PROPOSAL: close issue-109 by resolving to use a single namespace for everything and that namespace is the old OWL namespace and that we use it for the XML syntax elements and attributes. 17:39:58 <ewallace> 0 NIST 17:39:59 <pfps> +1 (Alcatel-Lucent) 17:40:01 <uli> +1 (Manchester) 17:40:03 <MarkusK_> +1 (FZI) 17:40:04 <msmith> +1 (C&P) 17:40:05 <ivan> -1 (W3C) 17:40:09 <bernardo> 0 (OX) 17:40:09 <baojie> +1 (RPI) 17:40:09 <alanr> 0 (Science Commons) 17:40:11 <calvanese> 0 (FUB) 17:40:15 <cgolbrei> +1 uvsq 17:40:17 <Zhe> 0 (ORACLE) 17:40:39 <IanH> RESOLVED: close issue-109 by resolving to use a single namespace for everything and that namespace is the old OWL namespace and that we use it for the XML syntax elements and attributes. 17:40:50 <IanH> q? 17:41:01 <IanH> q? 17:41:07 <calvanese> pfps: can we resolve the part on attributes now? 17:41:08 <bijan> But I've heard no objection 17:41:11 <bijan> Sandro is for it 17:41:18 <bijan> All discussion is positive 17:41:34 <pfps> let's go for it! 17:41:40 <calvanese> IanH: have we discussed this adequately, and are we in a position to decide on it now? 
17:41:44 <bijan> And it's standard XML practice 17:41:48 <calvanese> ivan: seems to be a nobrainer 17:41:55 <bijan> Yes 17:42:03 <IanH> PROPOSAL: attributes should have no namespace. 17:42:11 <bijan> attirbutes in owl/xml 17:42:35 <IanH> PROPOSAL: attributes in owl/xml should have no namespace. 17:42:36 <bijan> +1 17:42:41 <ivan> 1 17:42:45 <pfps> +1 (Alcatel-Lucent) 17:42:47 <MarkusK_> +1 (FZI) 17:42:48 <uli> +1 17:42:49 <Zhe> +1 (ORACLE) 17:42:49 <alanr> +1 (Science Commons) 17:42:50 <ewallace> +1 (NIST) 17:42:51 <ivan> 1 (W3C) 17:42:55 <msmith> +1 (C&P) 17:42:57 <IanH> +1 17:42:59 <Ratnesh> +1 (DERI) 17:43:02 <baojie> 1(RPI) 17:43:05 <bernardo> +1 17:43:06 <calvanese> +1 (FUB) 17:43:12 <cgolbrei> +1 (UVSQ) 17:43:16 <IanH> RESOLVED: attributes in owl/xml should have no namespace. 17:43:35 <IanH> q? 17:43:50 <calvanese> SubTopic: Issue 114 17:43:50 <calvanese> IanH: Issue 114 - which combinations of punning should be allowed? 17:43:58 <IanH> q? 17:44:04 <alanr> q+ 17:44:10 <IanH> q? 17:44:14 <IanH> ack alanr 17:44:15 <calvanese> ... nobody seems to object to the proposal for resolution 17:44:15 <bmotik> q+ 17:44:38 <uli> Alan, you phone line comes and goes 17:45:03 <IanH> q? 17:45:05 <pfps> q+ 17:45:09 <calvanese> alanr: summarizes issue 114 17:45:31 <bijan> zakim, mute me 17:45:31 <Zakim> bijan was already muted, bijan 17:45:43 <IanH> q? 17:45:52 <bmotik> Zakim, unmute me 17:45:52 <Zakim> bmotik should no longer be muted 17:46:06 <IanH> q? 17:46:10 <IanH> ack bmotik 17:46:33 <IanH> q? 17:46:36 <uli> yes 17:46:44 <uli> yes 17:46:46 <bernardo> yes 17:46:47 <calvanese> bmotik: i think I have a good set of answers to alan's questions 17:46:51 <msmith> yes, I can hear you both fine 17:47:09 <uli> re-dial? 17:47:12 <bijan> zakim, unmute me 17:47:12 <Zakim> bijan should no longer be muted 17:47:20 <bijan> Ian can can you hear me? 
17:47:38 <alanr> ok I've got 17:47:41 <alanr> it 17:47:42 <uli> we still hear you 17:47:48 <bijan> zakim, mute me 17:47:48 <Zakim> bijan should now be muted 17:48:28 <calvanese> bmotik: the rhs of annotiations would also be URIs 17:48:48 <calvanese> ... this is technical, and difficult to discuss via phone 17:49:27 <IanH> q? 17:49:30 <calvanese> IanH: it would be fare to postpone the discussion today, and have a discussion via email. then resolve it at the F2F 17:49:35 <pfps> q- 17:50:23 <calvanese> SubTopic: Issue 138 17:50:23 <calvanese> IanH: Issue 138 on name of dateTime 17:50:34 <calvanese> pfps: summarizes issue 17:50:36 <IanH> q? 17:50:40 <ivan> q+ 17:50:41 <msmith> q+ 17:50:49 <IanH> ack ivan 17:51:23 <pfps> q+ 17:51:32 <calvanese> ivan: clarify next week with the XML schema people all remaining questions 17:51:36 <IanH> ack msmith 17:51:57 <IanH> ack pfps 17:51:58 <calvanese> msmith: there is still an issue with identity being different 17:52:15 <bmotik> Zakim, mute me 17:52:15 <Zakim> bmotik should now be muted 17:52:28 <bijan> Isnt' this just words? 17:52:40 <msmith> q+ 17:52:43 <bijan> I.e., does it matter if we call our identity "xsd equality"? 17:52:50 <calvanese> pfps: xml schema 1.1 identity is data structure identity. we are not using that as our semantic notion of identity. we are using equality 17:53:17 <IanH> q? 17:53:25 <calvanese> msmith: I will try to find out when the xml-schema people meet next week 17:53:38 <pfps> s/msmith/ivan/ 17:53:59 <bmotik> q+ to answer this 17:54:05 <IanH> ack msmith 17:54:12 <bmotik> Zakim, unmute me 17:54:12 <Zakim> bmotik should no longer be muted 17:54:21 <IanH> ack bmotik 17:54:21 <Zakim> bmotik, you wanted to answer this 17:54:37 <IanH> q? 
17:55:11 <pfps> there is explicit wording in the xsd 1.1 document saying that smushing real and double is OK 17:55:28 <msmith> ok 17:55:31 <msmith> I'm happier, thanks 17:55:32 <calvanese> bmotik: we are doing here something similar to what done with numbers in general 17:55:46 <IanH> q? 17:56:00 <calvanese> IanH: we postpone issue 138 till we speak with the xml-schema people 17:57:16 <calvanese> SubTopic: Issue 56 17:57:16 <calvanese> IanH: as agreed at the beginning, we are moving issue 56 forward 17:57:20 <bijan> q+ 17:57:25 <IanH> q? 17:58:04 <calvanese> pfps: the issue is out of scope for our working group. there are better places to discuss it 17:58:13 <IanH> q? 17:58:16 <calvanese> ... e.g. OWLED 17:58:27 <alanr> misunderstanding 17:58:32 <alanr> no SHOULDs involved 17:58:35 <alanr> WG Note 17:58:53 <bijan> I have a meta point 17:59:01 <calvanese> pfps: summarizes the issue, and explain what "this" is 17:59:02 <bijan> zakim, unmute me 17:59:02 <Zakim> bijan should no longer be muted 17:59:35 <calvanese> bparsia: I don't see that a discussion would change people's positions 17:59:40 <bijan> zakim, mute me 17:59:40 <Zakim> bijan should now be muted 17:59:40 <ivan> an aside: Michael SperbergMcQueen will not be in Mandelieu:-( But Henry Thompson and Liam Quinn will be there 18:00:03 <bijan> q+ 18:00:07 <IanH> q? 18:00:11 <pfps> q+ 18:01:11 <IanH> q? 18:01:12 <bijan> zakim, unmute me 18:01:12 <Zakim> bijan should no longer be muted 18:01:17 <IanH> ack bijan 18:02:08 <calvanese> pfps: owled would allow us to do the work on this outside the working group, and save resources 18:02:17 <IanH> q? 
18:02:21 <pfps> pfps: my view of the issue is to prepare a document that specifies repairs that tools should do to move RDF documents to OWL 2 Dl 18:02:21 <IanH> ack pfps 18:02:26 <bijan> zakim, mute me 18:02:26 <Zakim> bijan should now be muted 18:02:32 <calvanese> s/pfps/bparsia/ 18:02:50 <calvanese> pfps: I agree with bijan 18:02:58 <msmith> yes, bijan is the only Pellet person at the f2f 18:03:10 <IanH> q? 18:03:35 <ivan> q+ 18:03:38 <bijan> Manchester qua OWL Lint (CO-ODE) don't want to do it either :) 18:03:39 <IanH> q? 18:03:50 <pfps> i'm not interested in doing it at the F2F 18:04:00 <bijan> Why? 18:04:04 <IanH> q? 18:04:05 <pfps> yes, why? 18:04:07 <IanH> ack ivan 18:04:18 <msmith> q+ 18:04:25 <calvanese> IanH: the question seems to be whether to discuss this in the working group or ouside, not whether to discuss this at all 18:04:27 <IanH> q? 18:04:35 <msmith> q- 18:04:38 <bijan> q+ 18:04:49 <IanH> q? 18:05:10 <msmith> +1 to Ivan. From Pellet implementer perspective, this is not high priority in WG time 18:05:20 <pfps> +1 to Ivan 18:05:25 <bijan> +1 to ivan 18:05:30 <IanH> q? 18:06:01 <bijan> zakim, unmute me 18:06:01 <Zakim> bijan should no longer be muted 18:06:04 <IanH> q? 18:06:12 <IanH> ack bijan 18:06:28 <alanr> q+ 18:06:32 <calvanese> IanH: we can decide at the beginning of the f2f whether we discuss this 18:07:06 <IanH> q? 18:07:17 <bijan> zakim, mute me 18:07:17 <Zakim> bijan should now be muted 18:07:18 <IanH> ack alanr 18:07:34 <calvanese> IanH: bijan, is it ok to leave deciding on that at the beginning of the f2f? 18:07:39 <pfps> pointer please! 18:07:40 <bijan> I have *always* objected to doing in the wg 18:07:46 <pfps> q+ 18:07:47 <bijan> q+ 18:07:49 <bijan> zakim, mute me 18:07:49 <Zakim> bijan was already muted, bijan 18:07:50 <IanH> q? 18:07:53 <bijan> zakim, unmute me 18:07:53 <Zakim> bijan should no longer be muted 18:08:16 <alanr> +1 18:08:37 <IanH> q? 
18:08:44 <ewallace> roadmap discussion, future tasks for life of OWL WG 18:09:32 <calvanese> alanr: I would not object to rename the session at the f2f from "discussion on issue 56" to "discussion on open issues" 18:09:34 <pfps> pfps: I don't remember a straw poll on repairs - I would like a pointer 18:09:37 <IanH> q? 18:09:43 <IanH> ack pfps 18:09:48 <IanH> ack bijan 18:10:10 <alanr> Nor did I suggest that Bijan said that 18:10:23 <bijan> zakim, mute me 18:10:23 <Zakim> bijan should now be muted 18:10:32 <IanH> q? 18:10:42 <bijan> zakim, unmute me 18:10:42 <Zakim> bijan should no longer be muted 18:11:00 <calvanese> animated discssion going on between alanr, bijan 18:11:00 <IanH> q? 18:11:13 <calvanese> PROPOSED: amend agenda of f2f 18:11:20 <bijan> zakim, mute me 18:11:20 <Zakim> bijan should now be muted 18:12:18 <calvanese> RESOLVED: amend agenda of f2f such that session on issue 56 is changed to "discussion on future working group activities" 18:12:39 <alanr> q+ 18:12:39 <IanH> q? 18:12:54 <IanH> ack alanr 18:13:03 <pfps> q+ 18:13:08 <ivan> q+ 18:13:16 <IanH> q? 18:13:20 <calvanese> SubTopic: Issue 145 18:13:20 <calvanese> IanH: we move to issue 145 18:13:50 <calvanese> alanr: summarizes issue 145 18:13:54 <IanH> q? 18:14:16 <bijan> application/xml+owl 18:14:23 <IanH> ack pfps 18:14:35 <bijan> what's the question? 18:14:45 <IanH> q? 18:14:55 <calvanese> pfps: I have nothing against having mime types for the manchester syntax etc., but I am confused why the xml syntax should have a mime type 18:14:58 <alanr> Sandro said 18:14:59 <alanr> The Last Call drafts for any syntax we expect to be transmitted over the 18:14:59 <alanr> web need to include mime type registrations. For example, see the one I 18:14:59 <alanr> did for RIF BLD: 18:15:00 <alanr> 18:15:00 <alanr> So someone needs to draft that for the OWL XML serialization. 
18:15:07 <bijan> It's pretty easy 18:15:09 <alanr> end of what sandro said 18:15:11 <IanH> Question is: do we*need* a mime type for the XML syntax 18:15:14 <bijan> but tedious 18:15:18 <calvanese> ivan: to have a mime type we have to officially submit a request to ??? 18:15:20 <IanH> q? 18:15:26 <ivan> s/???/IETF/ 18:15:27 <IanH> ack ivan 18:15:59 <bijan> Example registration: 18:16:19 <calvanese> ivan: the obvious serialization of owl will inherti the mime type from RDF, so this is not an issue 18:16:39 <pfps> register early, register often :-0 18:16:46 <IanH> q? 18:16:54 <calvanese> IanH: is there any downside to registering mime types for the various syntaxes? 18:17:20 <pfps> sandro :-) 18:17:57 <calvanese> IanH: we can take this offiline 18:18:14 <calvanese> ACTION: IanH to find a volunteer for this 18:18:14 <trackbot> Sorry, couldn't find user - IanH 18:18:16 <IanH> q? 18:18:49 <calvanese> SubTopic: Issue 142 18:18:49 <calvanese> IanH: move to Issue 142 18:18:54 <IanH> q? 18:19:02 <bmotik> q+ 18:19:04 <alanr> q+ 18:19:07 <bmotik> Zakim, unmute me 18:19:07 <Zakim> bmotik was not muted, bmotik 18:19:18 <IanH> q? 18:19:22 <IanH> ack bmotik 18:19:24 <calvanese> IanH: are we doing anything to prove that Theorem 1 in the profiles document is true? 18:19:49 <calvanese> bmotik: a full proof of the theorem would require pages and pages, and would probably be useless 18:19:51 <IanH> q? 18:20:00 <ivan> q+ 18:20:01 <calvanese> ... I can provide a proof sketch 18:20:03 <IanH> q? 18:20:19 <IanH> q? 18:20:23 <IanH> ack alanr 18:20:26 <ivan> ack alanr 18:20:37 <calvanese> ACTION: bmotik to provide proof sketch for Theorem 1 in profiles document 18:20:37 <trackbot> Sorry, couldn't find user - bmotik 18:20:39 <bmotik> q+ 18:20:43 <pfps> q+ 18:20:46 <IanH> q? 18:21:23 <pfps> Alan is not reading the theorem correctly 18:21:33 <IanH> q? 
18:21:38 <ivan> ack ivan 18:21:49 <calvanese> IanH: I believe annotations don't belong to the theorem 18:22:02 <alanr> let O1 and O2 be OWL 2 RL ontologies in both of which no URI is used for more than one type of entity (i.e., no URIs is used both as, say, a class and an individual), and where all axioms in O2 are assertions of the following form with a, a1, ..., an named individuals: 18:22:05 <IanH> ack bmotik 18:22:10 <alanr> so O1 can have annotations 18:22:11 <calvanese> ivan: does the theorem have to be proved? 18:22:20 <alanr> and O2 can have sameas 18:23:06 <calvanese> bmotik: agrees with Ian that we don't talk about entailments of annotations in OWL-DL 18:23:18 <pfps> Theorem 1 does not allow annotations in the consequent!!! 18:23:57 <IanH> q? 18:24:59 <calvanese> pfps: alan is wrong, boris is right, since annotations cannot be in the consequent 18:25:01 <bijan> I agree with boris and peter as well 18:25:04 <IanH> q? 18:25:07 <bijan> re: the theorem 18:25:09 <pfps> q- 18:25:11 <IanH> ack pfps 18:25:13 <IanH> q? 18:25:14 <calvanese> IanH: so differences in annotations do not impact on the theorem 18:25:19 <alanr> q+ 18:25:37 <calvanese> boris: answers Ivan's question 18:25:57 <calvanese> ... it is intuitively kind of clear that this holds. 18:26:03 <bijan> It's super ugly 18:26:19 <IanH> q? 18:26:21 <bijan> See the Jermey and Dave Turner "proof" about OWL Full consistency 18:26:28 <bijan> 60,000 lines of isabelle code 18:26:29 <calvanese> ... there is a transformation between derivations 18:26:46 <pfps> so change it into a conjecture? 18:27:32 <pfps> no 18:27:38 <calvanese> alanr: provides his understanding of the theorem 18:27:41 <pfps> no, no, no, no, no, no, no, no 18:27:47 <bmotik> q+ 18:27:53 <uli> but O2 is the 'question" ontology! 18:28:06 <pfps> we have already said how it happens 18:28:12 <IanH> q? 18:28:14 <calvanese> ... 
I'm not sure how annotations are ruled out 18:28:22 <IanH> ack alanr 18:28:43 <IanH> ack bmotik 18:28:59 <calvanese> boris: explains that putting sameas in O2 does not make a difference 18:29:27 <IanH> ack 18:29:30 <IanH> q? 18:29:43 <uli> makes sense 18:29:51 <bijan> +1 to conjecture 18:29:52 <calvanese> ... if you put sameas in O1, this would have additional consequences, but in O2 you are not allowed to answer questions that would detect such consequences 18:29:58 <ivan> q+ 18:29:58 <uli> makes sense if with proof sketch 18:30:01 <pfps> q+ 18:30:03 <bernardo> reasonable 18:30:04 <IanH> q? 18:30:07 <IanH> ack ivan 18:30:17 <bijan> q+ 18:30:22 <bijan> zakim, unmute me 18:30:22 <Zakim> bijan should no longer be muted 18:30:26 <calvanese> IanH: PROPOSED: change theorem to conjecture 18:30:28 <IanH> q? 18:30:35 <uli> if we are all happy with it? 18:30:45 <pfps> ideal solution is theorem+sketch 18:30:55 <calvanese> alanr: this calls for comments that request a proof 18:31:06 <IanH> ack pfps 18:31:09 <alanr> ok. I think I understand now. Thank's Boris 18:31:15 <calvanese> pfps: acceptable situation is theorem + proof sketch 18:31:27 <IanH> q? 18:31:32 <IanH> ack bijan 18:31:38 <calvanese> bijan: why do we care? 18:31:42 <IanH> q? 18:31:47 <pfps> q- 18:32:30 <IanH> q? 18:32:41 <IanH> q? 18:32:46 <calvanese> IanH: we need at least a sketch proof 18:32:47 <bijan> I think it is 18:32:54 <ivan> ewallace: we do not know 18:33:05 <IanH> q? 18:33:22 <bijan> zakim, unmute me 18:33:22 <Zakim> bijan was not muted, bijan 18:33:26 <IanH> q? 18:33:33 <Zhe> how long will it take Boris to produce the sketch? 18:33:58 <calvanese> bparsia: it is not hight priority to proof the theorem. we all believe that it holds. boris has better things to do 18:34:09 <Zhe> :) 18:34:11 <calvanese> s/proof/prove/ 18:34:24 <bijan> If that's suffices, then sure 18:34:30 <pfps> a short sketch would be useful 18:34:51 <IanH> q? 18:34:52 <calvanese> bmotik: I can produce the sketch in 5 sentences. 
If it takes more, I agree with Bijan that it is a waste of time. 18:35:08 <calvanese> ... I try to produce the 5 lines before the f2f 18:35:20 <calvanese> IanH: additional other business? 18:35:27 <uli> bye 18:35:28 <Zhe> bye 18:35:30 <Ratnesh> bye 18:35:31 <calvanese> ... closes the discussion | http://www.w3.org/2007/OWL/wiki/index.php?title=Chatlog_2008-10-15&oldid=14348 | CC-MAIN-2015-14 | en | refinedweb |
DSDP/PMC/PMC Minutes 11Mar10
Revision as of 07:26, 11 March 2010 by Dgaff.eclipse.gmail.com (Talk | contribs)
Time
Dial-in Info
Invited Attendees
- Mike Milinkovich
- Doug Gaff, PMC Lead
- Eric Cloninger, Sequoyah
- Christian Kurzke, MTJ/Pulsar
- Shigeki Moride, NAB
- Martin Oberhuber, TM/TCF
- Dave Russo, RTSC
- Wayne Parrot, Blinki (absent)
New Business
- History
- Wind River stepped down from the Strategic Developer level almost 2 years ago.
- Doug Gaff and his team were still allocated to Wind River’s open source work, so DSDP was effectively still under sponsorship. Then early last year, Doug's management responsibilities at Wind changed significantly, and the official time allocated to DSDP became "whatever he could squeeze in."
- Doug took a new job in Dec of last year, and Wind declined to continue their sponsorship of the project.
- Doug's new job is unrelated to Eclipse, and he cannot continue to lead the project. The Foundation was unable to find another company or group of companies to devote the necessary time to lead the project.
- Where we are now
- Doug would like to see DSDP continue if it had good sponsorship, but that seems unlikely. It is more important that the technologies continue to live wherever they can get the most mentoring and exposure.
- Some discussion about this on the phone.
- Summary is that the PMC really enjoys working together, but no one has enough sponsorship from their respective companies to lead DSDP, and they need to focus on their projects.
- Proposal for moving Projects
- Blinki
- Move to Technology? Needs a reboot, because the project is failing to be open and transparent.
- Mike and Wayne from the Foundation will need to meet with Blinki leadership.
- Device Debugging
- DSF moved to CDT last year and is thriving there.
- The repository was left open for folks to remain on a frozen branch until after the CDT transition. The IP-XACT editor is also there, but it's no longer under development.
- Decision: Archive
- MTJ and Sequoyah
- Sequoyah moves to Tools and becomes a container project for mobile
- MTJ moves under Sequoyah
- RTSC
- Move to Technology since they are still incubating. Could move somewhere else when they are ready to exit incubation. Dave really wants the visibility to be high, though. Perhaps a future in CDT?
- TM
- Move to Tools and eventually have TCF (currently a component) as a separate project.
- NAB
- Move to Technology. Still in incubation.
- Will continue to use the DSDP namespace (org.eclipse.dsdp.nab). Shigeki feels that the DSDP brand is important to maintain (especially in Japan).
- Communication and Next Steps
- Doug to communicate this meeting on the dsdp-pmc mailing list.
- Doug to blog next week.
- Rearrange projects AFTER Helios! Wayne, Anne, and Webmaster team will help move.
- This PMC will need to meet occasionally to keep things running until Helios.
- Participation on planning council
- Votes needed from the PMC
- Ad-hoc calls as necessary
- Words of Thanks from Mike and Doug | http://wiki.eclipse.org/index.php?title=DSDP/PMC/PMC_Minutes_11Mar10&oldid=192062 | CC-MAIN-2015-14 | en | refinedweb |
25 August 2011 17:41 [Source: ICIS news]
WASHINGTON (ICIS)--A major manufacturers group on Thursday lowered its forecasts for US economic growth in 2011 and 2012.
In its quarterly economic forecast, the Manufacturers Alliance said it expects
For 2012, the alliance forecasts GDP growth of 2.1%, a downgrade from its May outlook predicting a 2.7% expansion.
Daniel Meckstroth, chief economist for the alliance, cited the sharply lower economic performance in the
In late July the Department of Commerce (DOC) drastically cut its estimate of US GDP for the first quarter, saying that the economy barely expanded at all in the first three months at 0.4%.
The department’s initial estimate had been 1.9% GDP growth in the first quarter.
In addition, the department said that second quarter GDP growth was only 1.3%, well below the 1.8% pace that many economists had been expecting.
The
Normal annual GDP growth for the
With its outlook for
“Overall job growth will be disappointing,” Meckstroth said.
Other concerns also impact US economic prospects going forward, he said.
“We have already seen a stock market correction, and there could be further reverberations of the
“In addition, the political gridlock in solving the long term federal budget deficit lowers confidence in the US, and state and local governments are in an austerity mode,” he added.
“Unfortunately, there are relatively few economic drivers that are likely to accelerate over the rest of the year,” Meckstroth said.
Manufacturing is among the few bright spots in the
However, here too the alliance has lowered its expectations from just three months earlier, when it forecast manufacturing growth of 6.2% this year and 4.2% in | http://www.icis.com/Articles/2011/08/25/9488072/manufacturers-cut-us-growth-rate-forecasts-for-2011-2012.html | CC-MAIN-2015-14 | en | refinedweb |
UI.
The following set of controls is included in the current API:
TextBox: A control which displays and accepts input of text.
Button: A simple command button control.
ToggleButton: A control that possesses the ability to be selected.
RadioButton: A kind of a toggle button that has another appearance.
CheckBox: A tri-state selection control with a tick mark when checked.
Hyperlink: An HTML like text label that responds to rollovers and clicks.
Slider: A control that enables selecting a value by sliding a knob.
ProgressBar: A component used to show the progress of a task.
ProgressIndicator: A component that displays the progress in a form of a pie chart.
ListView: A simple list of items that can be editable.
ScrollBar: An element that enables the graphical content of a container to be scrolled.
You can learn more about these controls in the Powerful UI Capabilities With Node-Based Controls tutorial. This article discusses some performance aspects of the UI controls implementation and suggests how to employ them in your applications. First, run this example.
ScrollPane and performance
As you may see, the main window does not contain a scroll bar. However, you can see one in action in the ScrollPane class, which supports the mouse wheel. The values of the boundsInParent variable defined in the Node class should be used in expressions that lay out the content. Never bind these values directly! The rationale is that these values are updated many times per second, and regular recalculation of expressions substantially decreases the performance of your application. In order to resolve this problem, I created the Bounds class. Its coordinates and dimensions are updated only when the corresponding values actually change. If you compile and run the application by using the Bounds class, you will see that the values with the prefix content are constantly updated, even if the values are not changed. Another recommendation is to remove println, because debug printing decreases performance as well.
ScalePane
Games often need to show all of the graphical content at all times. That's why I developed the ScalePane class for this purpose. It is constructed similarly to the ScrollPane class and preserves the aspect ratio. Try this example to explore it in action. In some rare cases you might want to disable preserving the aspect ratio; however, run this example to see that it looks ugly.
Application
Having reviewed a lot of samples, I realized that each application deployment model requires specific actions. All the possible actions are implemented in the Application class. The application not only looks and works identically on a mobile device and in full-screen mode, but it also enables dragging the applet out of the browser without pressing the Alt key. The main class of the example is very simple now.
Application {
title: "JavaFX Controls"
header: ImageView { ... }
background: LinearGradient { ... }
content: ScrollPane {
background: Color.WHITE
border: 10
content: ...
}
}
Refer to the source code of the Controls class for more information.
Hello Sergey, First, I want
by ilyabo - 2010-03-10 06:04Hello Sergey, First, I want to thank you for sharing your work! I'd like to use your ScrollPane implementation in my project, but there is an issue with it. Maybe you can help me to resolve it. In your example code you bind the sizes of the ScrollPane to the sizes of the Scene. This works perfectly in situations when the ScrollPane must take the whole area of the scene or when the area given to the other components has a fixed size. But what if we need a flexible layout? I would like to be able to say that the ScrollPane should take the whole amount of the available space (like "growing" cells in GridBagLayout) except for the necessary space which is given to the other components, so that the size of the ScrollPane is calculated automatically without the need to manually subtract the sizes of the components in the bind expression (like in the commented code below). Is there a way to achieve this? Thanks in advance.
var button:Button;
def stage:Stage = Stage {
scene: Scene {
width: 800
height: 600
content: [
VBox {
content: [
button = Button {
text: "My Button"
}
ScrollPane {
width: bind stage.scene.width
height: bind stage.scene.height // - button.height
node: Group {
content: [
for (i in [0..20], j in [0..20]) {
Rectangle {
fill: Color.GRAY, stroke: Color.DARKGRAY
width: 50, height: 50
x: 50 * j, y: 50 * i
}
}
]
}
}
]
}
]
}
}
Hi Ilya
by malenkov - 2010-03-10 10:33
A custom node should not lay out itself, because it can be added into any container. You should set the layoutInfo variable, which will be used by a container to lay out your node. Please read the article written by Amy.
Hi Sergey
by dey210 - 2010-03-16 10:31
Thanks for the controls you shared. Your ScrollPane is very interesting.
Can I add it to a simple Rectangle node? I want to make many scrollable components.
If that is not possible, can I use several Application.fx (Stage) objects inside one principal stage?
Thanks in advance, and sorry for my bad English.
Hello
by malenkov - 2010-03-17 03:37
I do not understand what do you want to do with the Rectangle class. It is not a container, so you can't add any node to a rectangle. You can use my ScrollPane with any container such as Group, Tile, and other.
The Application class corresponds to single stage. It was created to simplify application development for different deployment models: desktop, applet, mobile.
by malenkov - 2009-06-03 00:13
It is very easy to create a custom table, but I think that a Tree is more complex. I hope that this functionality will be added. Stay tuned!
by vjsr71 - 2009-06-02 23:56Sergey, this is great news. Could you throw some light on the best practices for controls not covered by JavaFX (e.g. Tree, Table, etc)? Should we write wrapper classes for them in the short term? Also, can we expect JavaFX equivalents in the long term? Thanks.
by dnarmitage - 2009-06-09 02:52Just having a small problem with following the ScalePane and ScrollPane examples .... Compiler gives "javafx.geometry.Bounds is abstract; cannot be instantiated" on def bounds = Bounds { content: bind content } Am I missing something stupid here?
by malenkov - 2009-06-05 03:10@all: I do not develop JavaFX. I just create JavaFX samples for fun.
by malenkov - 2009-06-05 03:05@pupmonster: technically - possible, but legally - not possible
by malenkov - 2009-06-05 02:54@goron: JavaFX is based on Java. It is too difficult to create independent language from scratch.
by pupmonster - 2009-06-04 18:02Hiya all, Well, I must say I agree with goron. We have to be competitive on all major consumer operating systems. I would like to make a screenshot -- not possible here -- but I trust you will agree with this point -- two title bars are seriously dorky. I wonder if there really is technologically _nothing_ that can be done by Sun (or the open user community) to create a competitive user experience running JavaFX on OS X when Apple drags its feet? Some downloadable JDK/Jre Fixer or some such thing?
by goron - 2009-06-04 15:27> @pupmonster: The problem is that Java for OS X is supported by Apple. I don't know when it will be good enough. That's not a good attitude. If you want to be relevant in a world of Flex you have to do better, I think.
by jesperdj - 2009-06-04 14:38Created an issue in the JavaFX Jira: RT-4895
by jesperdj - 2009-06-04 14:29Sergey and sunburned, I'm running 64-bit Ubuntu and it JavaFX via webstart (including the Darkchat application) do not work on my system. The JNLP file for the JavaFX runtime, found here: only seems to mention "i386" and "x86" architectures for Linux. I don't know what the architecture ID for 64-bit Linux should be ("x86-64"? "x86_64"? "amd64"?), but it does not seem to be there. I hope that can be fixed soon... I also downloaded NetBeans with the JavaFX SDK 1.2, and that runs fine, but one of the example apps with a video player (MediaBox) does not work - I just get a window with the text "We're sorry, this video can't be played now". Do the video codecs work on 64-bit Linux?
by sunburned - 2009-06-04 09:45Hi Sergey, First, I'll get some more info about the javafx example problem and try to file a bug report. I think the problem may be related to the "Issue: RT-3780 and RT-3842: Cannot launch JNLP application" listed in the release notes. Since it's working on ubuntu (presumably 32-bit), maybe it's a 64-bit only problem with webstart. I downloaded the netbeans javafx sdk (linux) plugins, and the plugin sample fx code DOES run in netbeans, but the html files are broken in the browser. Second, that's great news to here that JWebPane is coming along nicely. Is there anywhere we can get a look at it? Thanks, --sunburned
by malenkov - 2009-06-04 01:13@pupmonster: The problem is that Java for OS X is supported by Apple. I don't know when it will be good enough.
by malenkov - 2009-06-04 00:55@sunburned: JavaFX works fine on Ubuntu at least. Could you please file a bug to javafx-jira.kenai.com?
by malenkov - 2009-06-04 00:50@etf: I meant that all necessary code can be easily created. GUI design will take time, but it can be similar to ListView.
by malenkov - 2009-06-04 00:36@tdanecito: JWebPane is ready. Almost. JMC supports external codecs installed on a client-side.
by pupmonster - 2009-06-03 11:42Hi Sergey, Thanks for the example app. Looking through it now and learning .... I have run the above on Linux and Windows and it looks good. I wonder if you or the Sun team are aware of problems running under OS X. a.) First OS X adds an extraneous/duplicate title frame header. Is this a programmatic issue I wonder? I sure hope that it is not in the JavaFX platform. b.) In the ListView in OS X when you scroll to the bottom with the down arrow key, i.e. not using the scrollbar and the mouse, the scroll bar disappears when you hit the bottom-most element. This problem does not occur, again, when running Linux or Windows. Regards, Steve
by walterln - 2009-06-03 08:41The animations used make the UI feel to respond very slow, for example when clicking the buttons rapidly. Also the textbox resizes depending on whether it has focus or not. If it is intended, the textbox label should resize with it (or the VBox layout needs to support baselines like it was added to the recent Swing layout managers).
by icewalker2g - 2009-06-03 08:28Is it just me, or do JavaFX apps tend to freeze for an appreciable amount of time when interacted with for the first time? i.e. when u click to perform something that should trigger an event or a repaint?
by etf - 2009-06-03 07:37What do you mean by "create custom table"? Just for fut it could be easy, but real, production quality table view widget is a big deal. Without such components JavaFX may be used only for playing, not for real work.
by sunburned - 2009-06-03 05:07The none of the examples here or on the javafx page, or in the javafx1.2 sdk work. In the past, even though I'm using debian linux amd64 with the jdk 1.6.0 u12, javafx examples would run through webstart in iceweasel (aka firefox). Now nothing works. I updated my jdk to u14 (including the link to the libnpjp2.so plugin), all examples are still broken even though applets run fine. I downloaded the linux javafx1.2 sdk and tried the examples there -- none would work. The error I'm getting for the "Controls" example (and all the other examples) is: "Unable to launch the application" Under the details, it says under the "Exception" tab: "Error: Unexpected exception: java.lang.reflect.InvocationTargetException.javaws.Launcher.executeApplication(Launcher.java:1302) at com.sun.javaws.Launcher.executeMainClass(Launcher.java:1248) at com.sun.javaws.Launcher.doLaunchApp(Launcher.java:1066) at com.sun.javaws.Launcher.run(Launcher.java:116) at java.lang.Thread.run(Thread.java:619)(URLClassLoader.java:200) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:188) at com.sun.jnlp.JNLPClassLoader.findClass(JNLPClassLoader.java:257) at java.lang.ClassLoader.loadClass(ClassLoader.java:307) at java.lang.ClassLoader.loadClass(ClassLoader.java:252) at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320) ... 10 more" What's going on with JavaFX? I'd like to try it, but it's nothing but frustration.
by dnarmitage - 2009-06-09 03:36Yes - I am not using the Bounds class in your post - doh! Missed that highlight and assumed using javafx Bounds - Note to self - read carefully
by malenkov - 2009-08-04 22:13@carldea: Thanks, Carl. The ListBox bug is known, but the TextBox bug looks strange. I'll check it with the latest version and file the bug.
by carldea - 2009-08-04 19:19Hello Sergey, I seem to notice 2 strange behaviors once I click on the maximize button to max window size to screen (middle upper right corner of application). The textbox doesn't seem to allow user to type keystrokes when app is maximized. Also, in the ListBox control when selecting an item (say the 1st item) and you use the arrow key (down) to move to each item to the very end the scroll bar disappears sometimes. -Carl
rich text editor
by venba - 2009-09-09 14:22
Do we have a rich text editor in JavaFX?
Not yet as I know
by malenkov - 2009-10-27 07:30
Not yet, as far as I know. | https://weblogs.java.net/blog/malenkov/archive/2009/06/02/ui-controls-javafx-12 | CC-MAIN-2015-14 | en | refinedweb |
...one of the most highly
regarded and expertly designed C++ library projects in the
world. — Herb Sutter and Andrei
Alexandrescu, C++
Coding Standards
The strength of
BOOST_TTI_MEMBER_TYPE
to represent a type which may or may not exist, and which then can be subsequently
used in other macro metafunctions whenever a type is needed as a template parameter
without producing a compiler error, should not be underestimated. It is one
of the reasons why we have two different ways of using our generated metafunction
when introspecting for member data, a member function, or a static member function
of an enclosing type.
In the cases where we specify a composite syntax when using
BOOST_TTI_HAS_MEMBER_DATA,
BOOST_TTI_HAS_MEMBER_FUNCTION,
or
BOOST_TTI_HAS_STATIC_MEMBER_FUNCTION,
the signature for the member data, member function, or static member function
is a single type. For
BOOST_TTI_HAS_MEMBER_DATA
the signature is a pointer to member data, for
BOOST_TTI_HAS_MEMBER_FUNCTION
the signature is a pointer to a member function, and for
BOOST_TTI_HAS_STATIC_MEMBER_FUNCTION
the signature is divided between an enclosing type and a function in composite
format. This makes for a syntactical notation which is natural to specify,
but because of the notation we can not use the nested type functionality in
BOOST_TTI_MEMBER_TYPE for potential
parts of these composite types. If any part of this signature, which specifies
a composite of various types, is invalid, a compile-time error will occur.
But in the more specific cases, when we use
BOOST_TTI_HAS_MEMBER_DATA,
BOOST_TTI_HAS_MEMBER_FUNCTION,
and
BOOST_TTI_HAS_STATIC_MEMBER_FUNCTION,
our composite type in our signatures is broken down into their individual types
so that using
BOOST_TTI_MEMBER_TYPE
for any one of the individual types will not lead to a compile time error if
the type specified does not actually exist.
A few examples will suffice.
Given known types T and U, and the supposed type Ntype as a nested type of
U, we want to find out if type T has a member function whose signature is
void aMemberFunction(U::Ntype).
First using
BOOST_TTI_HAS_MEMBER_FUNCTION
using our composite form we would code:
#include <boost/tti/has_member_function.hpp> BOOST_TTI_HAS_MEMBER_FUNCTION(aMemberFunction) has_member_function_aMemberFunction<void (T::*)(U::Ntype)>::value;
If the nested type U::Ntype does not exist, this leads to a compiler error. We really want to avoid this situation, so let's try our alternative.
Second using
BOOST_TTI_HAS_MEMBER_FUNCTION
using our specific form we would code:
#include <boost/tti/member_type.hpp> #include <boost/tti/has_member_function.hpp> BOOST_TTI_HAS_MEMBER_TYPE(Ntype) BOOST_TTI_HAS_MEMBER_FUNCTION(aMemberFunction) typedef typename has_member_type_Ntype<U>::type OurType; has_member_function_aMemberFunction<T,void,boost::mpl::vector<OurType> >::value; If the nested type U::Ntype does not exist, OurType is a harmless marker type, the metafunction simply returns false, and no compile-time error occurs.
As a second example we will once again use the suppositions of our first example;
given known types T and U, and the supposed type Ntype as a nested type of
U. But this time let us look for a static member function whose signature is
void aStaticMemberFunction(U::Ntype).
First using
BOOST_TTI_HAS_STATIC_MEMBER_FUNCTION
using our composite form we would code:
#include <boost/tti/has_static_member_function.hpp> BOOST_TTI_HAS_STATIC_MEMBER_FUNCTION(aStaticMemberFunction) has_static_member_function_aStaticMemberFunction<T,void (U::Ntype)>::value;
Once again if the nested type U::Ntype does not exist, this leads to a compiler error, so let's try our alternative.
Second using
BOOST_TTI_HAS_STATIC_MEMBER_FUNCTION
using our specific form we would code:
#include <boost/tti/member_type.hpp> #include <boost/tti/has_static_member_function.hpp> BOOST_TTI_HAS_MEMBER_TYPE(Ntype) BOOST_TTI_HAS_STATIC_MEMBER_FUNCTION(aStaticMemberFunction) typedef typename has_member_type_Ntype<U>::type OurType; has_static_member_function_aStaticMemberFunction<T,void,boost::mpl::vector<OurType> >::value; Once again, if the nested type U::Ntype does not exist, OurType is a harmless marker type, the metafunction simply returns false, and no compile-time error occurs. | http://www.boost.org/doc/libs/1_57_0/libs/tti/doc/html/the_type_traits_introspection_library/tti_func_sig.html | CC-MAIN-2015-14 | en | refinedweb |
Platform independent extensible log class
Introduction:
The ability to log is commonly needed in every software project on every platform. I wrote this class to save time.
There are two basic log classes provided for easy use. One is CFileLog, which implements a file logging system. The other is CRegFileLog, which implements a registry-controlled file logging system. The whole logging system is quite easy to extend for any purpose.
How to use:
Demo code:
#include "MjLog.h"

int main()
{
    MjTools::CFileLog m_Log("test.log");
    std::string a = "aaa";
    m_Log.Clear();
    m_Log.AddLog("Abc");
    m_Log.AddLog(a);
    MjTools::CFileLog m_Log1 = m_Log;
    m_Log1.AddLog("From Log1");
#ifdef WIN32
    // Registry log control is only valid on Windows.
    // Construct a registry-key-controlled log object. If the
    // specified registry key is found, the log is enabled.
    MjTools::CRegFileLog m_regLog("reglog.log",
        "HKEY_LOCAL_MACHINE\\Software\\YourControlKeyName");
    m_regLog.AddLog("reglog");
    m_regLog.Pause();
    m_regLog.AddLog("reglog1");
    m_regLog.Resume();
    m_regLog.AddLog("reglog2");
#endif
    return 0;
}
How to compile:
The source code itself can be compiled and executed. You can use a command line tool to compile it.
Under VC++:
CL /D"_TEST_" MjLog.cpp
This one may cause a link error; I don't know why. But if you use a Win32 console project, no error occurs.
Under BCC:
bcc32 /D_TEST_ mjlog.cpp
Under Linux:
g++ -D_TEST_ MjLog.cpp
Future Updates:
1. Make the class thread_safe.
2. Still thinking...
very crude solution (posted by Legacy on 07/24/2003 12:00am)
Originally posted by: Matthew Pasko
Keep in mind, a quick solution to this type of problem would be to simply find the log file (FindFirstFile on CE), look at the file size, and, if it has reached half of the total space the log is allowed to occupy, rename the file to an archive log file. As time progresses, the old data is overwritten and the log file(s) grow only so big.
An obviously crude solution, but effective for some applications, and a snap to write.
;)
growth of log, overwrite old info! (posted by Legacy on 04/16/2002 12:00am)
Originally posted by: Matthew Pasko
There is often limited space on a device for logging. How about a function that limits the log to an adjustable maximum size and overwrites old log information?
This would be great..
fix for link errors (VC++) (posted by Legacy on 02/06/2002 12:00am)
Originally posted by: Edmond Nolan
Why would you want to add "Logging" overhead to the Registry? (posted by Legacy on 12/09/2001 12:00am)
Originally posted by: Hector Santos
Maybe I miss something here, but why would you want to add "Logging" overhead to the Registry?
Just curious
Reply | http://www.codeguru.com/cpp/cpp/cpp_mfc/article.php/c4117/Platform-independent-extensible-log-class.htm | CC-MAIN-2015-14 | en | refinedweb |
Topic closed. 8 replies. Last post 10 years ago by Todd.
I have no idea what is wrong with the California Lottery. I too have noticed their failure to post the winning midday numbers in a timely manner. I tried to e-mail them to address the problem, to no avail. It's not possible. I'm truly disgusted with them and am at a loss as to what to do about it.
C.A.
Well, the interest in the lottery in California (at least the Daily 3) is very low. If you look at the total number of winning tickets each day and compare it with other states...there is no comparison. We are lucky to get 1,000 winning tickets in one day, but many states have thousands of winning tickets each and every day. I don't do well at predicting California, so I seldom play it any more, but I do predict for others and a timely posting of the winning numbers would be helpful!!
The draw was 925. After going to the "winning numbers" page, click on "Daily 3" and it shows the current number!! It just is not on the "winning numbers" page, yet!! Amazing!
CA is always like that. they wait to post the jackpots for their games as well.
I've redone my website. Go to. I kept a lot of the old stuff, and I've added some new stuff. Look for more new stuff in the coming weeks.
Unfortunately Cal Lottery is trying harder to keep the money rather than make it more interesting for other players to want to play.
Hopefully, a letter to Ca's State Treasurer's Office would clean that up. Imagine, We'll take your money...But, we don't do this and we don't do that and we will do what the h@3% we want to do! Every other state is on point with an immediate drawing return (That is unless I visit them) but, it has not hit Calottery who or what keeps them there. What's wrong with them? I guess everything is going to be random. Including, knowing whether or not, you may have or have not won. Tell Arnold S. He'll slap 'em around a bit and get things on the Ball.
lottoscorp ...
Quote: Originally posted by CalifDude on November 15, 2004!
Hey Poway Dude, Try calling this number 1-800-568-8379. You can get the latest numbers from there, or make a complaint.
Platinum John in Escondido
Retired Grumman Aerospace Corporation F-14 Tomcat AVIONICS Field Engineer Extraordinaire
I know the CA lottery recently updated their web site (on Nov. 1st), so there may be a glitch with their automation. I would recommend continuing to send e-mails and calling them, so that the issue gets attention by the webmaster. | http://www.lotterypost.com/thread/100393 | CC-MAIN-2015-14 | en | refinedweb |
#include <lapacke.h>
int main() {}
It works fine when I compile with:
$ g++ -c test_lapack.cpp
but if I add the c++11 flag:
$ g++ -std=c++11 -c test_lapack.cpp
I get a massive amount of errors (see attachment to this post). I'm guessing this is because things in c++11 got more strict and now things that used to be fine in c++03 or c++98 are now deemed unsafe or deprecated. I don't know if I should be contacting the GNU gcc/g++ people or if this is something in the domain of LAPACK developers, or maybe there is an additional flag I can add that will suppress these errors? | https://icl.cs.utk.edu/lapack-forum/viewtopic.php?f=2&t=4315 | CC-MAIN-2015-14 | en | refinedweb |
Transaction has the following method:
def setErrorOccurred(self, flag):
    ''' Invoked by the application if an exception is raised to the
    application level. '''
    self._errorOccurred = flag
    self._servlet = None
I've made a subtle change, which is to remove the:
self._servlet = None
which was added by Jay way back when under the CVS log msg "general
cleanup".
I need the transaction to know its servlet, because in my current
application, I am implementing a custom exception handler to do some
additional application-specific logging and the information comes from
the servlet.
An FYI in case this causes weirdness for anyone. But as far as I know,
it works fine.
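To make the distinction concrete, here is a toy model of the change — not actual Webware code, just a minimal sketch showing why clearing the servlet reference breaks a handler that wants servlet-specific details:

```python
# A toy model of the change discussed above -- not actual Webware code.
# If setErrorOccurred() clears the servlet reference, an application-level
# exception handler that runs afterwards can no longer ask the transaction
# which servlet failed.

class Servlet:
    def __init__(self, name):
        self.name = name


class Transaction:
    def __init__(self, servlet, clear_servlet_on_error):
        self._servlet = servlet
        self._errorOccurred = False
        self._clear = clear_servlet_on_error

    def setErrorOccurred(self, flag):
        self._errorOccurred = flag
        if self._clear:          # the line the change removes
            self._servlet = None

    def servlet(self):
        return self._servlet


def log_error(transaction):
    """A custom handler that wants servlet-specific details."""
    servlet = transaction.servlet()
    return servlet.name if servlet is not None else "<unknown servlet>"


# With the old behavior, the handler loses the information:
old = Transaction(Servlet("MyPage"), clear_servlet_on_error=True)
old.setErrorOccurred(True)
print(log_error(old))   # <unknown servlet>

# With the line removed, the handler can still see the servlet:
new = Transaction(Servlet("MyPage"), clear_servlet_on_error=False)
new.setErrorOccurred(True)
print(log_error(new))   # MyPage
```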
-Chuck | http://sourceforge.net/p/webware/mailman/webware-devel/?viewmonth=200202&viewday=5 | CC-MAIN-2015-14 | en | refinedweb |
Product Version = NetBeans IDE 7.1.1 (Build 201202271535)
Operating System = Windows 7 version 6.1 running on amd64
Java; VM; Vendor = 1.7.0_03
Runtime = Java HotSpot(TM) 64-Bit Server VM 22.1-b02
While editing an FXML file in a NetBeans JavaFX project, the Navigator window does not display any element hierarchy. Also, the FXML editor window does not provide any suggestions while typing some FXML element, even when using the FXML xml namespace.
Navigator display fixed in jetmain:
Thanks Svata for help with this.
Please note that FXML code completion is a complex issue and is still under development, see
Integrated into 'main-golden', will be available in build *201205120400* on (upload may still be in progress)
Changeset:
User: Petr Somol <psomol@netbeans.org>
Log: #209819 - Navigator window does not display FXML elements hierarchy | https://netbeans.org/bugzilla/show_bug.cgi?id=209819 | CC-MAIN-2015-14 | en | refinedweb |
<<../_v1_banner.md>>
A component manifest (.cmx) is a JSON file with the file extension `.cmx`. Component manifests are often located in a package's `meta/` directory. The manifest contains information that declares how to run the component and what resources it receives. In particular, the component manifest describes how the component is sandboxed.
Here's a simple example of a cmx for an ELF binary component:
{
    "include": [
        "src/lib/syslog/client.shard.cmx"
    ],
    "program": {
        "binary": "bin/example_app",
        "args": [ "--example", "args" ]
    },
    "sandbox": {
        "system": [ "data/sysmgr" ],
        "services": [
            "fuchsia.posix.socket.Provider",
            "fuchsia.sys.Launcher"
        ]
    }
}
And one for a flutter/dart component:
{
    "program": {
        "data": "data/simple_flutter"
    },
    "runner": "flutter_jit_runner"
}
The optional `include` property describes zero or more other component manifest files (or shards) to be merged into this component manifest.

In the example given above, the component manifest is including contents from a file provided by the `syslog` library, thus ensuring that the component functions correctly at runtime if it attempts to write to syslog. By convention, such files end with `.shard.cmx`.
If working in fuchsia.git, include paths are relative to the source root of the Fuchsia tree.
You can review the outcome of merging any and all includes into a component manifest file by invoking the following command:
fx cmc include <cmx_file> --includepath $FUCHSIA_DIR
Includes can be recursive, meaning that shards can have their own includes.
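To make the merge concrete, here is a hedged illustration — the exact contents of the real syslog shard may differ, and the service name is an assumption. A shard that only declares a service might look like this:

```json
{
    "sandbox": {
        "services": [
            "fuchsia.logger.LogSink"
        ]
    }
}
```

When a manifest includes this shard, its JSON is merged into the including manifest — here, presumably unioning `fuchsia.logger.LogSink` into the including manifest's `sandbox.services` array.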
The `program` property describes the resources to execute the component.

If `runner` is absent, the `program` property is a JSON object with the following schema:

{
    "type": "object",
    "properties": {
        "binary": { "type": "string" },
        "args": {
            "type": "array",
            "items": { "type": "string" }
        },
        "env_vars": {
            "type": "array",
            "items": { "type": "string" }
        }
    }
}

The `binary` property describes where in the package namespace to find the binary to run the component, and the optional `args` property contains the string arguments to be provided to the process. The optional `env_vars` property specifies environment variables to provide to the binary, where each element in the array uses the format `"VAR=VALUE"`, for example `"RUST_BACKTRACE=1"`.
`facets` is an optional property that contains free-form JSON about the component. Facets can be consumed by things on the system to acquire additional metadata about a component.

The schema for `facets` is:
{ "type": "object" }
As an example of a facet, the `fuchsia.test` field is used to convey what additional services should be injected into testing environments.

To allow access to a `misc` device, add the string `misc` to the `dev` array. Allowing access to individual `misc` devices is not possible.
If `versions` appears in the `pkgfs` array, then `/pkgfs/versions` will appear in the namespaces of components loaded from the package, providing access to all packages fully cached on the system.
The `services` array defines a list of services from `/svc` that the component may access. A typical component will require a number of services from `/svc` in order to play some useful role in the system. For example, if `"services" = [ "fuchsia.posix.socket.Provider", "fuchsia.sys.Launcher" ]`, the component will have the ability to launch other components and access network services. A component may declare any list of services in its `services` list.

The sandbox's `features` array requests additional capabilities; the supported features include:
- `config-data`, which will provide any configuration data available to the package this component is in that was provided in the `config-data` package on the system.
- `introspection`, which requests access to introspect the system. The introspection namespace will be located at `/info_experimental`.
- `isolated-persistent-storage`, which requests access to persistent storage for the device, located in `/data` in the package's namespace. This storage is isolated from the storage provided to other components.
- `isolated-cache-storage`, which requests access to persistent storage for the device, located in `/cache` in the package's namespace. This storage is isolated from the storage provided to other components. Unlike `isolated-persistent-storage`, items placed in the storage provided by this feature will be deleted by the system to reclaim space when disk usage is nearing capacity.
- `isolated-temp`, which requests that a temp directory be installed into the component's namespace at `/tmp`. This is isolated from the system temp and the temp directories of other component instances. This directory is backed by an in-memory filesystem, and is thus cleared on device reboots.
- `root-ssl-certificates`, which requests access to the root SSL certificates for the device. These certificates are provided in the `/config/ssl` directory in the package's namespace.
- `hub`, which requests access to the Hub directory scoped to the component instance's realm.
- `deprecated-shell`, which requests access to the resources appropriate for an interactive command line. Typically, shells are granted access to all the resources available in the current environment. The `deprecated-shell` feature also implies the `root-ssl-certificates` and `hub` features. As the name suggests, this feature is to be removed. Current uses of this feature are explicitly allowlisted, and new uses are discouraged.
- `shell-commands`, which requests access to the currently available shell binaries (note: not "installed", but "available"). Binaries are mapped into `/bin` in the requester's namespace. Running these commands may require the `fuchsia.process.Resolver` and `fuchsia.process.Launcher` services also be requested.
- `vulkan`, which requests access to the resources required to use the Vulkan graphics interface. This adds layer configuration data in the `/config/vulkan` directory in the package's namespace.
- `deprecated-ambient-replace-as-executable`, which provides legacy support for using the invalid handle with replace_as_executable.
- `factory-data`, which requests access to the read-only factory partition for the device and places it at `/factory` in the component's namespace.
- `durable-data`, which requests access to the read-write durable partition for the device and places it at `/durable` in the component's namespace. This partition is for storing persistent data that will survive a factory reset, and is only to be used for specific, approved use cases.
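Pulling a few of these together, a sandbox requesting features might look like the following sketch (the feature and service names are taken from this document; the binary path is a placeholder):

```json
{
    "program": {
        "binary": "bin/example_app"
    },
    "sandbox": {
        "features": [
            "isolated-persistent-storage",
            "root-ssl-certificates"
        ],
        "services": [
            "fuchsia.logger.LogSink"
        ]
    }
}
```

At runtime such a component would then see `/data` and `/config/ssl` in its namespace, per the descriptions above.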
See sandboxing.md for more information about sandboxing. | https://fuchsia.googlesource.com/fuchsia/+/refs/heads/releases/field-image-dogfood/docs/concepts/components/v1/component_manifests.md | CC-MAIN-2021-39 | en | refinedweb |
1560845420
Are you curious how to use drag and drop with React? If so, this article is exactly for you! Have a good read.
Modern web applications have multiple forms of interaction. Among those, drag and drop is, certainly, one of the most appealing to the user. Apps such as Trello, Google Drive, Office 365 and Jira make heavy use of DnD and users simply love it.
To illustrate this article, we’ll implement a jigsaw puzzle in React. The puzzle consists of two boards: the first one with the pieces shuffled, the second one with no pieces and the original image in the background.
This is the original image of our puzzle:
We need to split it in multiple pieces. I think 40 is an optimal number, given the proportion of this image:
Then, we save all assets in the images folder:
Now we need to save the original image (non-splitted) in this same folder:
Tip: You don’t need to split the sample image by yourself. Just take a look at this sandbox and you’ll find all needed images right there.
After that, let’s create our component basic structure:
import React, { Component } from 'react';
import originalImage from './images/ny_original.jpg';
import './App.css';

class Jigsaw extends Component {
  state = {
    pieces: [],
    shuffled: [],
    solved: []
  };

  // ...
}
In the above code, we've defined the component's state. It has three arrays:
- pieces: the data for all 40 pieces (image file, original order, and current board);
- shuffled: the pieces currently placed on the shuffled board;
- solved: the 40 slots of the solved board, initially empty.
Once we’ve defined the structure of our component’s state, we need to set the initial values of each array. We can do this in the componentDidMount lifecycle method:
componentDidMount() {
  const pieces = [...Array(40)]
    .map((_, i) => ({
      img: `ny_${('0' + (i + 1)).substr(-2)}.jpg`,
      order: i,
      board: 'shuffled'
    }));

  this.setState({
    pieces,
    shuffled: this.shufflePieces(pieces),
    solved: [...Array(40)]
  });
}
In the above snippet, we’ve initialized the three arrays of our component’s state. Did you notice the […Array(40)] part? We use it twice in this method. It’s leveraging the spread operator to create an iterable array with 40 items.
Each item of the pieces array is an object containing three properties:
- img: the file name of the piece's image;
- order: the piece's position in the original (solved) image;
- board: the name of the board the piece currently belongs to ('shuffled' or 'solved').
To initialize the shuffled array, we’ve created a shufflePieces method. It randomizes the items of a given array by using the Durstenfeld shuffle algorithm:
shufflePieces(pieces) {
  const shuffled = [...pieces];

  for (let i = shuffled.length - 1; i > 0; i--) {
    let j = Math.floor(Math.random() * (i + 1));
    [shuffled[i], shuffled[j]] = [shuffled[j], shuffled[i]];
  }

  return shuffled;
}
Note: You don’t really need to understand this algorithm for the purpose of this article, but if you’re curious to know how it works, here’s a detailed explanation.
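If you'd like to poke at the Durstenfeld shuffle outside the browser, here's a direct Python transcription of the method above; the key properties to check are that it returns a permutation and leaves the input untouched:

```python
import random

def shuffle_pieces(pieces):
    """Durstenfeld (Fisher-Yates) shuffle: copy the list, then walk it
    backwards, swapping each element with a random earlier (or same) slot."""
    shuffled = list(pieces)  # like [...pieces]: work on a copy
    for i in range(len(shuffled) - 1, 0, -1):
        j = random.randint(0, i)  # inclusive, mirrors floor(random() * (i + 1))
        shuffled[i], shuffled[j] = shuffled[j], shuffled[i]
    return shuffled

pieces = list(range(40))
result = shuffle_pieces(pieces)
print(sorted(result) == pieces)   # True: it's a permutation
print(pieces == list(range(40)))  # True: the input is not mutated
```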
Now let’s see how the render method for this component looks like:
render() {
  return (
    <div className="jigsaw">
      <ul className="jigsaw__shuffled-board">
        {
          this.state.shuffled.map((piece, i) =>
            this.renderPieceContainer(piece, i, 'shuffled'))
        }
      </ul>
      <ol className="jigsaw__solved-board" style={{ backgroundImage: `url(${originalImage})` }}>
        {
          this.state.solved.map((piece, i) =>
            this.renderPieceContainer(piece, i, 'solved'))
        }
      </ol>
    </div>
  );
}
As you can see, we create two lists, one for each board. The piece’s rendering logic was extracted to a separate method. Here it is:
renderPieceContainer(piece, index, boardName) {
  return (
    <li key={index}>
      {piece && <img src={require(`./images/${piece.img}`)}/>}
    </li>
  );
}
This method uses short circuit evaluation to render the piece image conditionally.
I’ve added some styling to make it look nice. In this article, I won’t cover how it was made, but you can find the complete CSS file in this sandbox.
And here’s how our component looks:
It looks good. Doesn’t it? However, it does nothing for now. The next step is to implement the drag and drop logic itself.
To make it possible to drag a piece, we need to mark it with a special attribute called draggable:
renderPieceContainer(piece, index, boardName) {
  return (
    <li key={index}>
      {
        piece &&
          <img draggable src={require(`./images/${piece.img}`)}/>
      }
    </li>
  );
}
After that, we need to create a handler for the onDragStart event of our pieces:
piece &&
  <img draggable
       onDragStart={(e) => this.handleDragStart(e, piece.order)}
       src={require(`./images/${piece.img}`)}/>
And here’s the handler itself:
handleDragStart(e, order) {
  e.dataTransfer.setData('text/plain', order);
}
This is what the above snippets do:
- The draggable attribute marks the piece's image as a draggable element;
- When a drag starts, handleDragStart stores the piece's order in the event's dataTransfer object as plain text, so the drop target can later identify which piece is being dragged.
The drag part is done. But dragging a piece is useless if you have no place to drop it. The first thing to do in order to make pieces droppable is to define a handler for the onDragOver event of our piece containers (the li tags of both lists). This handler will be very simple. So simple that, in fact, we can define it inline:
renderPieceContainer(piece, index, boardName) {
  return (
    <li key={index}
        onDragOver={(e) => e.preventDefault()}>
      // ...
}
By default, most zones of a page won’t be a valid dropping place. This is the reason why the normal behavior of the onDragOver event is to disallow dropping. To overcome this, we’re calling the event.preventDefault() method.
Finally, let’s implement the dropping logic. To do this, we must define a handler for the onDrop event of our piece containers:
renderPieceContainer(piece, index, boardName) {
  return (
    <li key={index}
        onDragOver={(e) => e.preventDefault()}
        onDrop={(e) => this.handleDrop(e, index, boardName)}>
      // ...
}
Now, the handler code:
handleDrop(e, index, targetName) {
  let target = this.state[targetName];
  if (target[index]) return;

  const pieceOrder = e.dataTransfer.getData('text');
  const pieceData = this.state.pieces.find(p => p.order === +pieceOrder);

  // Capture the origin board's name before pieceData.board is reassigned below
  const originName = pieceData.board;
  const origin = this.state[originName];

  if (targetName === originName) target = origin;

  origin[origin.indexOf(pieceData)] = undefined;
  target[index] = pieceData;
  pieceData.board = targetName;

  this.setState({
    [originName]: origin,
    [targetName]: target
  });
}
Let's understand what's happening here:
- We grab the target board's array and return early if the target slot is already occupied;
- We read the dragged piece's order from dataTransfer and use it to look up the piece's data;
- If the piece is dropped on the board it came from, origin and target are the same array;
- We clear the piece's slot on the origin board, place it in the target slot, and update its board property;
- Finally, we call setState with the updated origin and target arrays so the component re-renders.
And here’s the puzzle working:
Take a look at the final sandbox.
1578050760
Drag and drop is a feature of many interactive web apps. It provides an intuitive way for users to manipulate their data, and it's easy to add to Vue.js apps.
Here are 10 Vue drag-and-drop components that contribute to the flexibility of your Vue application.
Vue component (Vue.js 2.0) or directive (Vue.js 1.0) allowing drag-and-drop and synchronization with view model array.
Based on and offering all features of Sortable.js
Demo:
Download:
Real-time kanban board built with Vue.js and powered by Hamoni Sync.
Demo:
Download:
Drag & drop hierarchical list made as a vue component.
Goals
Demo:
Download:
VueJS directive for drag and drop.
Native HTML5 drag and drop implementation made for VueJS.
Demo:
Download:
vue-grid-layout is a grid layout system, like Gridster, for Vue.js. Heavily inspired by React-Grid-Layout.
Demo:
Download:
A tree component (Vue 2.x) that allows you to drag and drop nodes to exchange their data.
Feature
Demo:
Download:
A Simple Drag & Drop example created in Vue.js.
Demo:
Download:
Vue Component for resize and drag elements.
Demo:
Download:
A fast and lightweight drag&drop, sortable library for Vue.js with many configuration options covering many d&d scenarios.
This library consists of wrapper Vue.js components over the smooth-dnd library.
Show, don’t tell !
Demo:
Download:
Drag and drop so simple it hurts
Demo:
Download:
#vue #vue-drag #vue-drop #drag-and-drop #vue-drag-and-drop | https://morioh.com/p/7c41c1bdd0fe | CC-MAIN-2021-39 | en | refinedweb |
The .NET Stacks, #60: 📝Logging improvements in .NET 6
This week, we're talking about logging improvements that are coming with the .NET 6 release.
Happy Monday! It's hard to believe it's August already. What are we talking about this week?
- One big thing: Logging improvements in .NET 6
- The little things: Notes from the community
- Last week in the .NET world
One big thing: Logging improvements in .NET 6
Over the last few months, we've highlighted some big features coming with .NET 6. Inevitably, there are some features that fly under the radar, including various logging improvements that are coming. Let's talk about a few of them.
Announced with Preview 4 at the end of May, .NET will include a new Microsoft.Extensions.Logging compile-time source generator. The Microsoft.Extensions.Logging namespace will expose a LoggerMessageAttribute type that generates "performant logging APIs" at compile-time. (The source code relies on ILogger, thankfully.)
From the blog post:
The source generator is triggered when LoggerMessageAttribute is used on partial logging methods. When triggered, it is either able to autogenerate the implementation of the partial methods it's decorating, or produce compile-time diagnostics with hints about proper usage. The compile-time logging solution is typically considerably faster at run time than existing logging approaches. It achieves this by eliminating boxing, temporary allocations, and copies to the maximum extent possible.
Rather than using LoggerMessage APIs directly, the .NET team says this approach offers simpler syntax with attributes, the ability to offer warnings as a "guided experience," support for more logging parameters, and more.
As a quick example, let's say I can't talk to an endpoint. I'd configure a Log class like this:
public static partial class Log
{
    [LoggerMessage(EventId = 0, Level = LogLevel.Critical, Message = "Could not reach `{uri}`")]
    public static partial void CouldNotOpenSocket(ILogger logger, string uri);
}
There is much, much more at the Microsoft doc, Compile-time logging source generation.
On the topic of HTTP requests, ASP.NET Core has a couple of new logging improvements worth checking out, too. A big one is the introduction of HTTP logging middleware, which can log information about HTTP requests and responses, including the headers and body. According to the documentation, the middleware logs common properties like path, query, status codes, and headers with a call to LogLevel.Information. The HTTP logging middleware is quite configurable: you can filter logging fields, headers to log (you need to explicitly define the headers to include), media type options, and a log limit for the response body.

This addresses a common use case for web developers. As always, you'll need to keep an eye on possible performance impacts and mask sensitive information (like PII data) in your logs.
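One hedged, practical note (worth verifying against the official HTTP logging docs for your exact build): the middleware writes under the Microsoft.AspNetCore.HttpLogging.HttpLoggingMiddleware category at the Information level, so the default template's "Microsoft.AspNetCore": "Warning" filter will hide its output unless you opt that category back in. An appsettings.json along these lines does the trick (the surrounding values are just the template defaults):

```json
{
    "Logging": {
        "LogLevel": {
            "Default": "Information",
            "Microsoft.AspNetCore": "Warning",
            "Microsoft.AspNetCore.HttpLogging.HttpLoggingMiddleware": "Information"
        }
    }
}
```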
Lastly, with .NET 6 Preview 5 the .NET team rolled out subcategories for Kestrel log filtering. Previously, verbose Kestrel logging was quite expensive as all of Kestrel has shared the same category name. There are new categories on top of *.Server.Kestrel, including BadRequests, Connections, Http2, and Http3. This allows you to be more selective on which rules you want to enable. As shown in the blog post, if you only want to filter on bad requests, it's as easy as this:
{
    "Logging": {
        "LogLevel": {
            "Microsoft.AspNetCore.Kestrel.BadRequests": "Debug"
        }
    }
}
If you want to learn more about ASP.NET Core logging improvements in .NET 6, Maryam Ariyan and Sourabh Shirhatti will be talking about it in Tuesday's ASP.NET standup.
The little things: Notes from the community
AutoMapper is a powerful but self-proclaimed "simple little library" for mapping objects. From Shawn Wildermuth on Twitter, Dean Dashwood has written about Three Things I Wish I Knew About AutoMapper Before Starting. It centers around subtleties when working with dependency injection in models, saving changed and nested data, and changing the models. It's actually a GitHub repo with a writeup and some code samples—highly recommended.
Sometimes one sentence does the trick: Cezary Piątek has a nice MsBuild cheatsheet on GitHub.
This week, Mike Hadlow wrote about creating a standalone
ConsoleLoggerProvider in C#. If you want to create a standalone provider to avoid working with dependency injection and hosting, Mike found an ... interesting ... design. He develops a simple no-op implementation.
Well, it's been 20 years since the Agile Manifesto, and Al Tenhundfeld celebrates it with the article Agile at 20: The Failed Rebellion. I think we can all agree that (1) everybody wants to claim they are Agile, and (2) Agile is a very, um, flexible word—nobody is really doing Agile (at least according to the Manifesto). He explores why; an interesting read.
🌎 Last week in the .NET world
🔥 The Top 3
- Oren Eini writes that System.Threading.Tasks.Task isn’t an execution unit.
- Andrew Lock continues his deep dive on StringBuilder, and so does Steve Gordon.
- Al Tenhundfeld speaks the truth on Agile.
📢 Announcements
- Dapr v1.3 is now available.
- Michael Hawker introduces the Community Toolkit.
- Uno Platform shows off improvements with 3.9.
📅 Community and events
- Jeremy Miller writes about Marten, the generic host builder in .Net Core, and more.
- Mike Brind is writing a book about Razor Pages.
- Sha Ma writes about GitHub's journey from monoliths to microservices.
- Richard Lander posts a discussion about the .NET open source project.
- Matthew MacDonald asks: what if GitHub Copilot worked like a real programmer?
- The .NET Docs Show talks to Vahid Farahmandian about gRPC and .NET 5.
- For community standups: ASP.NET talks to Isaac Abraham about F#, and Entity Framework talks about OData.
- Check out all the Focus on F# sessions on YouTube.
🌎 Web development
- The Code Maze blog writes about Onion Architecture in ASP.NET Core.
- Damien Bowden secures ASP.NET Core Razor Pages and Web APIs with Azure B2C External and Azure AD internal identities.
🥅 The .NET platform
- Mike Hadlow creates a standalone ConsoleLoggerProvider.
- Santosh Hari uses app secrets in .NET Core console apps.
⛅ The cloud
- April Edwards compares Azure Static Web Apps, Azure WebApps, and Azure Blob Storage Static Sites.
- Mark Heath writes about the serverless sliding scale.
- John Kilmister gets started with serverless SignalR and Azure Functions.
- Paul Michaels configures Cloudflare to work with Azure domains.
📔 Languages
- Anthony Giretti writes about extended property patterns in C# 10.
- Gergely Sinka toggles functionality in C# with feature flags.
🔧 Tools
- David Ramel writes about Entity Framework Core 6.
- Rachel Appel writes about Blazor debugging improvements in Rider 2021.2.
- Khalid Abuhakmeh writes an extension method that parses a redis-cli connection string for the StackExchange.Redis library.
- Dmitry Lyalin writes about hot reload in Visual Studio 2022.
- Kat Cosgrove writes about Kubernetes fundamentals.
- Matthew Jones creates a Dapper helper C# class to generate parameterized SQL.
📱 Xamarin
- Leomaris Reyes reproduces a fashion UI in Xamarin.
- Charlin Agramonte creates an expandable paragraph control in Xamarin.
🏗 Design, testing, and best practices
- Oren Eini writes about flexible business systems.
- Derek Comartin uses a message-driven architecture to decouple a monolith.
- Charles Humble writes about the future of microservices.
- Suzanne Scacca uses heatmaps to audit web design.
- Davide Bellone explores the tradeoffs of performance and clean code.
- Ben Nadel writes about feature flags.
- Jimmy Bogard continues his series on domain-driven refactoring.
- Ian Cartwright, Rob Horn, & James Lewis write about feature parity.
- InfoQ talks to Michael Perry about immutable architecture, the CAP theorem, and CRDTs.
🎤 Podcasts
- The 6-Figure Developer Podcast talks to Bryan Hogan about Polly.
- Adventures in .NET talks about the global reach of .NET.
- The Merge Conflict podcast talks about FOSS and .NET MAUI Web with Ooui.
- The .NET Core Podcast discusses Gremlinq With Daniel Weber.
- The Azure DevOps Podcast talks about what's coming for developers.
- Scott Hanselman talks to Don Syme about F#.
- The Azure Podcast talks about Azure Arc.
🎥 Videos
- The ASP.NET Monsters works on accessibility testing with Playwright and Axe.
- Luis Quintanilla rolls out a nice "intro to F#" video series.
- Shawn Wildermuth talks about changes coming to Startup in .NET 6.
- On .NET talks about ValueTask in C#. | https://www.daveabrock.com/2021/08/08/dotnet-stacks-60/ | CC-MAIN-2021-39 | en | refinedweb |
Applying customizations in the Xperience environment
With the Xperience MVC and Core development models, the live site application and administration application each have their own separate code base and project files. You can modify the system's functionality for both applications using the same customization API and endpoints described throughout the documentation.
However, customization of the administration application does not automatically apply to the live site application and vice versa. For every customization, you need to consider which applications should be affected and deploy your custom code accordingly:
Both applications – in many cases, customizations need to cover both applications to work consistently. Typical examples are customizations of e-commerce functionality or user-related actions, which can occur both on the live site and through the administration interface. See Deploying custom code to both applications.
Administration-only – for customizations that modify or extend parts of the administration interface or only affect functionality triggered in the administration. For example, extenders of UI elements or custom scheduled tasks.
Live site-only – for customizations of actions that only occur on the live site. For example, code that adjusts the contact recognition logic.
Deploying custom code to both applications
We recommend using the following approach to deploy shared custom code to both the live site and administration applications:
- Add a custom assembly (Class Library project) in one of the applications (under either the live site solution or the Xperience administration WebApp.sln solution).
- Create classes with the required custom code in the project.
Add the same project to the other application's solution.
Important: When working with the Xperience administration solution, make sure that you do NOT install the Kentico.Xperience.Libraries package into the CMSApp project. The project already references Xperience DLLs in the solution's Lib folder.
Having a project in both solutions ensures that changes are shared between the projects during development.
Note: After making changes in the shared project, you need to rebuild and potentially redeploy both the administration and live site applications (rebuilding just the solution where you made the changes is not sufficient).
If you also have customizations intended for only one of the applications, we recommend creating a separate Class Library project in each solution. In this scenario, you can share individual code files between the projects by adding them as links in Visual Studio.
Example
The following example demonstrates how to prepare a customization that automatically assigns new users to a default role. Users can be created both in the administration interface and on the live site (registration of visitors), so the customization is deployed to both applications to ensure that it works consistently.
Start by adding the customization to the live site project:
- Open your live site solution in Visual Studio.
- Add a custom assembly (Class Library project) with class discovery enabled to the solution, or re-use an existing assembly. For example, name the project Custom.
- Reference the Custom project from your live site web project.
Create a new class under the custom project, for example named CustomUserModule (the example uses an event handler to extend the user functionality):
using CMS;
using CMS.DataEngine;
using CMS.Membership;
using CMS.SiteProvider;

// Registers the custom module into the system
[assembly: RegisterModule(typeof(Custom.CustomUserModule))]

namespace Custom
{
    public class CustomUserModule : Module
    {
        // Module class constructor, the system registers the module under the name "CustomUsers"
        public CustomUserModule()
            : base("CustomUsers")
        {
        }

        // Contains initialization code that is executed when the application starts
        protected override void OnInit()
        {
            base.OnInit();

            // Assigns a handler to the Insert.After event of user objects
            UserInfo.TYPEINFO.Events.Insert.After += User_InsertAfterEventHandler;
        }

        // Handler method that runs when a new user object is created in the system
        private void User_InsertAfterEventHandler(object sender, ObjectEventArgs e)
        {
            if (e.Object != null)
            {
                // Gets an info object representing the new user
                UserInfo user = (UserInfo)e.Object;

                // Gets the "DefaultRole" role
                RoleInfo role = RoleInfo.Provider.Get("DefaultRole", SiteContext.CurrentSiteID);

                if (role != null)
                {
                    // Assigns the role to the user
                    UserInfoProvider.AddUserToRole(user.UserName, role.RoleName, SiteContext.CurrentSiteName);
                }
            }
        }
    }
}
- Save all changes and Rebuild the solution.
Now add the customization to your Xperience administration project:
- Open your Xperience administration solution in Visual Studio (using the WebApp.sln file).
Right-click the solution in the Solution Explorer and select Add -> Existing Project.
- Navigate to the Custom folder in the web project and select the Custom.csproj file.
- Click Open.
Reference the Custom project from the administration web project (CMSApp).
Save all changes and Rebuild the solution.
The customization is now applied to both applications. The shared custom project allows you to keep changes in the custom code synchronized between the applications during development. When a new user registers on the live site or is created manually in the administration interface, they are automatically assigned to the DefaultRole role (you need to create a role with this code name).
NAME
sd_bus_reply_method_return, sd_bus_reply_method_returnv - Reply to a D-Bus method call
SYNOPSIS
#include <systemd/sd-bus.h>
int sd_bus_reply_method_return(sd_bus_message *call, const char *types, ...);
int sd_bus_reply_method_returnv(sd_bus_message *call, const char *types, va_list ap);
DESCRIPTION
sd_bus_reply_method_return() sends a reply to the call message. The type string types and the arguments that follow it must adhere to the format described in sd_bus_message_append(3). If no reply is expected to call, this function succeeds without sending a reply.
RETURN VALUE
On success, this function returns a non-negative integer. On failure, it returns a negative errno-style error code.
ERRORS
Returned.
Java SE

Quiz yourself: Happens-before thread synchronization in Java with CyclicBarrier

The CyclicBarrier class provides timing synchronization among threads, while also ensuring that data written by those threads prior to the synchronization is visible among those threads.

by Simon Roberts and Mikalai Zaikin
September 13, 2021

Given the following CBTest class

import static java.lang.System.out;

public class CBTest {
    private List<Integer> results = Collections.synchronizedList(new ArrayList<>());

    class Calculator extends Thread {
        CyclicBarrier cb;
        int param;

        Calculator(CyclicBarrier cb, int param) {
            this.cb = cb;
            this.param = param;
        }

        public void run() {
            try {
                results.add(param * param);
                cb.await();
            } catch (Exception e) { }
        }
    }

    void doCalculation() {
        // add your code here
    }

    public static void main(String[] args) {
        new CBTest().doCalculation();
    }
}

Which code fragment, when added to the doCalculation method independently, will make the code reliably print 13 to the console? Choose one.

A.
CyclicBarrier cb = new CyclicBarrier(2, () -> {
    out.print(results.stream().mapToInt(v -> v.intValue()).sum());
});
new Calculator(cb, 2).start();
new Calculator(cb, 3).start();

B.
CyclicBarrier cb = new CyclicBarrier(2);
out.print(results.stream().mapToInt(v -> v.intValue()).sum());
new Calculator(cb, 2).start();
new Calculator(cb, 3).start();

C.
CyclicBarrier cb = new CyclicBarrier(3);
new Calculator(cb, 2).start();
new Calculator(cb, 3).start();
cb.await();
out.print(results.stream().mapToInt(v -> v.intValue()).sum());

D.
CyclicBarrier cb = new CyclicBarrier(2);
new Calculator(cb, 2).start();
new Calculator(cb, 3).start();
out.print(results.stream().mapToInt(v -> v.intValue()).sum());

Answer.
The CyclicBarrier class is a feature of the java.util.concurrent package, and it provides timing synchronization among threads while also ensuring that data written by those threads prior to the synchronization is visible among those threads (this is the so-called "happens-before" relationship). These problems might otherwise be addressed using the synchronized, wait, and notify mechanisms, but those are generally considered low-level mechanisms that are harder to use correctly.

If multiple threads are cooperating on a task, at least two problems must commonly be addressed. The data written by one thread must be read correctly by another thread when the data is needed. The other thread must have an efficient means of knowing when the necessary data has been prepared and is ready to be read. The CyclicBarrier addresses these problems by providing timing synchronization using a barrier point and a barrier action.

The operation of the CyclicBarrier might be likened to a group of colleagues at a conference preparing to go to a presentation together. They get up in the morning and go about their routines individually, getting ready for the presentation and their day. When they're ready, they go to the lobby of the hotel they're staying in and wait for the others. When all the colleagues are in the lobby, they all leave at once to walk over to the conference room.

Similarly, a CyclicBarrier is constructed with a count of "parties" as an argument. In the analogy, this represents the number of colleagues who plan to go to the presentation. In the real system, this is the number of threads that need to synchronize their activities. When a thread is ready, it calls the await() method on the CyclicBarrier (in the analogy, this is arriving in the lobby). At this point, one of two behaviors occurs. Suppose the CyclicBarrier was constructed with a "parties" count of 3.
The first and second threads that call await() will be blocked, meaning their execution is suspended, using no CPU time, until some other occurrence causes the blocking to end. When the third thread calls await(), the blocking of the two threads that called await() before is ended, and all three threads are permitted to continue execution. (This is the second behavior mentioned earlier.)

After the CyclicBarrier thread-execution block is ended, data written by any of the threads prior to calling await() will be visible (unless it is perhaps subsequently altered, which can confuse the issue) by all the threads that called await() on this blocking cycle of this CyclicBarrier. This is the happens-before relationship, and it addresses the visibility problem. After the threads are released, the CyclicBarrier can be reused for another synchronizing operation—that's why the class has cyclic in the name. Note that some of the other synchronization tools in the java.util.concurrent API cannot be reused in this way.

The CyclicBarrier provides two constructors. Both require the number of parties (threads) they are to control, but the second also introduces the barrier action.

public CyclicBarrier(int parties, Runnable barrierAction)

The barrier action defines an action that is executed when the barrier is tripped, that is, when the last thread enters the barrier. This barrier action will be able to see the data written by the awaiting threads, and any data written by the barrier action will be visible to the threads after they resume. The API documentation states, "Memory consistency effects: Actions in a thread prior to calling await() happen-before actions that are part of the barrier action, which in turn happen-before actions following a successful return from the corresponding await() in other threads."

Each of the quiz options creates a CyclicBarrier and passes it to a thread (created from the Calculator class).
Each thread—or each calculator, if you prefer—performs a calculation and then adds the result of that calculation to a thread-safe List that's shared between the two threads. (Note that the ArrayList itself isn't thread-safe, but the Collections.synchronizedList method creates a thread-safe wrapper around it.) After adding the result to the List, the calculator thread calls the await() method on the CyclicBarrier. Subsequently, the intention is to pick up the data items that have been added to the List and print the sum. For this to work correctly, the summing operation must see all the data written by the calculators—and that must not occur until after the calculated values have been written. The code should achieve this using the CyclicBarrier.

In option B, the attempt to calculate the sum and print the result precedes the construction and start of the two calculator threads. As a result, option B is incorrect. Option B might occasionally print the right answer; the situation is what's called a race condition. Although unlikely, it's not impossible that the JVM might happen to schedule its threads in a way that the calculations are completed before the summing and printing starts. It's also possible that the data written in this situation might become visible to the thread that performs the summing and printing. However, such circumstances are unlikely at best and certainly not reliable. With option B, the output is most likely to be 0 (because the list is empty), but the values 4, 9, and 13 are all possible. There's no way to predict what the results will be.

Option D is a variation of option B with swapped lines. Although this looks like the calculations might be executed before the summing and printing operations, the same race-condition uncertainty exists. So, although this option is more likely than option B to print 13 on any given run, for the same reasons, all the values are possible. Therefore, option D is incorrect.
Option C is almost correct but not quite. The CyclicBarrier is created with a parties count of 3. Three calls to await() are made, so the main thread would not proceed until the two calculations are complete. The timing would be correct, and the visibility issue would be correctly addressed such that the last line of the option—the line that computes and prints the sum—would work reliably if it were not for one remaining problem with the implementation shown.

Blocking behaviors in the Java APIs are generally interruptible, and if they are interrupted, they break out of their blocked state and throw an InterruptedException, which is a checked exception. Because neither the code of option C nor the body of the doCalculation method into which the code is inserted includes code to address this exception, the code for option C fails to compile. In fact, the await() method throws another checked exception, BrokenBarrierException, and this is also unhandled. However, while you might not know about the BrokenBarrierException, you should know about the InterruptedException because it's a fundamental and pervasive feature of Java's thread-management model. Because these checked exceptions are unhandled and the code does not compile, option C is incorrect.

In option A, the CyclicBarrier is created with a parties count of 2 (which will be the two calculator threads) and a barrier action. The barrier action is the lambda expression that aggregates results from working parties. The two Calculator threads invoke await() after they have written the result of their calculations to the list. This behavior ensures that both writes have occurred before—and the data is visible to—the barrier action. Then, the barrier action is invoked to perform the summing and printing. As a result, it's guaranteed that both 4 and 9 are in the list before the summing and printing and that they are visible to that operation. Consequently, the output must be 13, and you know that option A is correct.
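For readers who want to experiment with the barrier-action semantics of option A, Python's standard-library threading.Barrier is a close analogue of CyclicBarrier: it takes a parties count and an action that runs once after the last party arrives, with the same happens-before guarantee. The sketch below mirrors the quiz scenario in Python rather than Java, purely as an illustration:

```python
# Python analogue of option A: a Barrier with parties=2 and a barrier action.
import threading

results = []
total = []

def barrier_action():
    # Runs exactly once, in one of the waiting threads, after both parties
    # have called wait(); it is guaranteed to see their earlier writes.
    total.append(sum(results))

barrier = threading.Barrier(2, action=barrier_action)

def calculator(param):
    results.append(param * param)  # write the result before the barrier...
    barrier.wait()                 # ...so the action is guaranteed to see it

threads = [threading.Thread(target=calculator, args=(p,)) for p in (2, 3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(total[0])  # 13
```

As in the Java version, moving the summing before the threads start (option B/D style) would reintroduce the race condition.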
Conclusion. The correct answer is option A.
You can create any number of custom windows in your app. These behave just like the Inspector, Scene or any other built-in ones. This is a great way to add a user interface to a sub-system for your game.
Making a custom Editor Window involves the following simple steps:
In order to make your Editor Window, your script must be stored inside a folder called “Editor”. Make a class in this script that derives from EditorWindow. Then write your GUI controls in the inner OnGUI function.
using UnityEngine;
using UnityEditor;
using System.Collections;

public class Example : EditorWindow
{
    void OnGUI ()
    {
        // The actual window code goes here
    }
}
MyWindow.cs - placed in a folder called 'Editor' within your project.
In order to show the window on screen, make a menu item that displays it. This is done by creating a function which is activated by the MenuItem property.
The default behavior in Unity is to recycle windows, so selecting the menu item again shows the existing window. This is done by using the function EditorWindow.GetWindow, like this:
using UnityEngine;
using UnityEditor;
using System.Collections;

class MyWindow : EditorWindow
{
    [MenuItem ("Window/My Window")]
    public static void ShowWindow ()
    {
        EditorWindow.GetWindow(typeof(MyWindow));
    }

    void OnGUI ()
    {
        // The actual window code goes here
    }
}
Showing the MyWindow
This will create a standard, dockable editor window that saves its position between invocations, can be used in custom layouts, etc. To have more control over what gets created, you can use GetWindowWithRect. The actual contents of the window are rendered by implementing the OnGUI function, where you can use the same UnityGUI classes you use for in-game GUIs, plus the editor-only EditorGUI and EditorGUILayout classes. These add to the controls already available in the normal classes, so you can mix and match at will.
The following C# code shows how you can add GUI elements to your custom EditorWindow:
using UnityEditor;
using UnityEngine;

public class MyWindow : EditorWindow
{
    string myString = "Hello World";
    bool groupEnabled;
    bool myBool = true;
    float myFloat = 1.23f;

    // Add menu item named "My Window" to the Window menu
    [MenuItem("Window/My Window")]
    public static void ShowWindow()
    {
        //Show existing window instance. If one doesn't exist, make one.
        EditorWindow.GetWindow(typeof(MyWindow));
    }

    void OnGUI()
    {
        GUILayout.Label ("Base Settings", EditorStyles.boldLabel);
        myString = EditorGUILayout.TextField ("Text Field", myString);

        groupEnabled = EditorGUILayout.BeginToggleGroup ("Optional Settings", groupEnabled);
        myBool = EditorGUILayout.Toggle ("Toggle", myBool);
        myFloat = EditorGUILayout.Slider ("Slider", myFloat, -3, 3);
        EditorGUILayout.EndToggleGroup ();
    }
}
This example results in a window which looks like this:
For more info, take a look at the example and documentation on the EditorWindow page.
Computational Category Theory in Python I: Dictionaries for FinSet
Category theory is a mathematical theory with a reputation for being very abstract.
Category theory is an algebraic theory of functions. It has the flavor of connecting up little pipes and ports that is reminiscent of dataflow languages or circuits, but with some hearty mathematical underpinnings.
So is this really applicable to programming at all? Yes, I think so.
Here’s one argument. Libraries present an interface to their users. One of the measures of the goodness or badness of an interface is how often you are inclined to peek under the hood to get it to do the thing that you need. Designing these interfaces is hard. Category theory has taken off as a field because it has been found to be a useful and uniform interface to a surprising variety of very different mathematics. I submit that it is at least plausible that software interfaces designed with tasteful mimicry of category theory may achieve similar uniformity across disparate software domains. This is epitomized for me in Conal Elliott’s Compiling to Categories.
I think it is easy to have the miscomprehension that a fancy language like Haskell or Agda is necessary to even begin writing software that encapsulates category theory based ideas, but this is simply not the case. I’ve been under this misapprehension before.
It just so happens that category theory is especially useful in those languages for explaining some programming patterns especially those concerning polymorphism. See Bartosz Milewski’s Category theory for Programmers.
But this is not the only way to use category theory.
There’s a really delightful book by Rydeheard and Burstall called Computational Category Theory. The first time I looked at it, I couldn’t make heads or tails of it, going on the double uphill battle of category theory and Standard ML. But looking at it now, it seems extremely straightforward and well presented. It’s a cookbook of how to build category theoretic interfaces for software.
So I think it is interesting to perform some translation of its concepts and style into python, the lingua franca of computing today.
In particular, there is a dual opportunity to both build a unified interface between some of the most commonly used powerful libraries in the python ecosystem and also use these implementations to help explain categorical concepts in concrete detail. I hope to have the attention span to do the following:
A very simple category is that of finite sets. The objects in the category can be represented by python sets. The morphisms can be represented by python dictionaries. Nothing abstract here. We can rip and tear these things apart any which way we please.
The manipulations are made even more pleasant by the python features of set and dictionary comprehension which will mimic the definitions you’ll find on the wikipedia page for these constructions quite nicely.
Composition is defined as making a new dictionary by feeding the output of the first dictionary into the second. The identity dictionary over a set is one that has the same values as keys. The definition of products and coproducts (disjoint union) are probably not too surprising.
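These definitions can be tried out with bare dictionaries before any class machinery (the toy maps f and g here are made up for illustration):

```python
# Two finite maps as dictionaries
g = {"a": 1, "b": 2}                   # g : {a, b} -> {1, 2}
f = {1: "x", 2: "y"}                   # f : {1, 2} -> {x, y}

# Composition: feed the output of the first dictionary into the second,
# via a dictionary comprehension
f_after_g = {k: f[g[k]] for k in g}
print(f_after_g)                       # {'a': 'x', 'b': 'y'}

# The identity dictionary over a set has the same values as keys,
# and composing with it changes nothing
ident = {x: x for x in g}
print({k: g[ident[k]] for k in ident} == g)   # True
```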
One really interesting thing about the Rydeheard and Burstall presentation is noticing what the inputs to these constructions are and what the outputs are. Do you need to hand it objects? Morphisms? How many? How can we represent the universal property? We do so by outputting functions that construct the required universal morphisms. They describe this as a kind of skolemization. The constructive programmatic presentation of the things is incredibly helpful to my understanding, and I hope it is to yours as well.
Here is a python class for FinSet. I’ve implemented a couple of interesting constructions, such as pullbacks and detecting monomorphisms and epimorphisms.
I’m launching you into the a deep end here if you have never seen category theory before (although goddamn does it get deeper). Do not be surprised if this doesn’t make that much sense. Try reading Rydeheard and Burstall chapter 3 and 4 first or other resources.
from collections import Counter

class FinSet():
    def __init__(self, dom, cod, f):
        ''' In order to specify a morphism, we need to give a python set that is the domain,
        a python set that is the codomain, and a dictionary f that encodes a function between
        these two sets. We could by assumption just use f.keys() implicitly as the domain,
        however the codomain is not inferable from just f. In other categories the domain
        might not be either, so we chose to require both symmetrically. '''
        assert dom == set(f.keys())               # f has a value for everything in the domain
        assert all(y in cod for y in f.values())  # f has only values in the codomain
        self.cod = cod
        self.dom = dom
        self.f = f

    def __getitem__(self, i):  # a convenient overloading
        return self.f[i]

    def compose(f, g):
        ''' Composition is function composition. Dictionary comprehension syntax for the win! '''
        return FinSet(g.dom, f.cod, {x: f[g[x]] for x in g.dom})

    __matmul__ = compose  # so composition can be written f @ g

    def idd(dom):
        ''' The identity morphism on an object dom. A function mapping every x to itself '''
        return FinSet(dom, dom, {x: x for x in dom})

    def __eq__(f, g):
        assert f.dom == g.dom  # I choose to say the question of equality only makes sense if the arrows are parallel,
        assert f.cod == g.cod  # i.e. they have the same object at head and tail
        return f.f == g.f

    def terminal(dom):
        ''' The terminal object is an object such that for any other object, there is a unique
        morphism to the terminal object. This function returns the object itself {()} and the
        universal morphism from dom to that object '''
        return {()}, FinSet(dom, {()}, {x: () for x in dom})

    def initial(cod):
        ''' The initial object is an object such that for any other object, there is a unique
        morphism from the initial object to that object. It is the dual of the terminal object.
        In FinSet, the initial object is the empty set set({}). The mapping is then an empty
        dictionary dict({}) '''
        return set({}), FinSet(set({}), cod, dict({}))

    def monic(self):
        ''' Returns bool of whether the mapping is injective, i.e. maps every incoming element
        to a unique outgoing element. In other words, does `self @ g == self @ f` imply
        `g == f` for all g, f? The Counter class counts occurrences '''
        codomain_vals = self.f.values()
        counts = Counter(codomain_vals).values()
        return all(count == 1 for count in counts)  # no two elements map to the same element

    def epic(self):
        ''' Is the mapping surjective? In other words, does the image of the map f cover
        the entire codomain? '''
        codomain_vals = self.f.values()
        return set(codomain_vals) == self.cod  # image covers the codomain

    def product(a, b):  # takes a specific product
        ab = {(x, y) for x in a for y in b}
        p1 = FinSet(ab, a, {(x, y): x for (x, y) in ab})
        p2 = FinSet(ab, b, {(x, y): y for (x, y) in ab})
        # the universal morphism assumes f.dom == g.dom, f.cod == a, g.cod == b
        return ab, p1, p2, lambda f, g: FinSet(f.dom, ab, {x: (f[x], g[x]) for x in f.dom})

    def coproduct(a, b):
        ab = {(0, x) for x in a}.union({(1, y) for y in b})
        i1 = FinSet(a, ab, {x: (0, x) for x in a})
        i2 = FinSet(b, ab, {y: (1, y) for y in b})
        def fanin(f, g):  # assumes f.dom == a, g.dom == b, f.cod == g.cod
            return FinSet(ab, f.cod, {(tag, x): (f[x] if tag == 0 else g[x]) for (tag, x) in ab})
        return ab, i1, i2, fanin

    def equalizer(f, g):
        ''' The equalizer is a construction that allows one to talk about the solution to an
        equation in a categorical manner. An equation is f(x) = g(x). It has two mappings f
        and g that we want to somehow be the same. The solution to this equation should be a
        subset of the shared domain of f and g. Subsets are described from within FinSet by
        maps that map into the subset. '''
        assert f.dom == g.dom
        assert f.cod == g.cod
        e = {x for x in f.dom if f[x] == g[x]}
        return e, FinSet(e, f.dom, {x: x for x in e})

    def pullback(f, g):  # solutions to f(x) = g(y)
        assert f.cod == g.cod
        e = {(x, y) for x in f.dom for y in g.dom if f[x] == g[y]}  # subset of (f.dom, g.dom) that solves the equation
        p1 = FinSet(e, f.dom, {(x, y): x for (x, y) in e})  # projection 1
        p2 = FinSet(e, g.dom, {(x, y): y for (x, y) in e})  # projection 2
        def univ(q1, q2):
            ''' Universal property: given any other commuting square f @ q1 == g @ q2, there is
            a unique morphism that injects into e such that certain triangles commute. It's
            best to look at the diagram '''
            assert q1.cod == p1.cod  # q1 points to the head of p1
            assert q2.cod == p2.cod  # q2 points to the head of p2
            assert q1.dom == q2.dom  # tails of q1 and q2 are the same
            assert f @ q1 == g @ q2  # commuting square condition
            return FinSet(q1.dom, e, {z: (q1[z], q2[z]) for z in q1.dom})
        return e, p1, p2, univ
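To see the pullback in action, here is a self-contained sketch. It restates a pared-down version of the class above (just `__init__`, `__getitem__`, `__matmul__`, `__eq__`, and pullback, wired up as standard dunder methods), and the parity example is made up for illustration:

```python
class FinSet:
    def __init__(self, dom, cod, f):
        assert dom == set(f.keys()) and all(y in cod for y in f.values())
        self.dom, self.cod, self.f = dom, cod, f
    def __getitem__(self, i):
        return self.f[i]
    def __matmul__(f, g):  # composition: f after g
        return FinSet(g.dom, f.cod, {x: f[g[x]] for x in g.dom})
    def __eq__(f, g):
        return (f.dom, f.cod, f.f) == (g.dom, g.cod, g.f)

def pullback(f, g):
    assert f.cod == g.cod
    e = {(x, y) for x in f.dom for y in g.dom if f[x] == g[y]}
    p1 = FinSet(e, f.dom, {(x, y): x for (x, y) in e})
    p2 = FinSet(e, g.dom, {(x, y): y for (x, y) in e})
    return e, p1, p2

# Two maps into a common codomain: parity of a number, and a labelling
a, b, c = {1, 2, 3}, {"even", "odd"}, {0, 1}
f = FinSet(a, c, {1: 1, 2: 0, 3: 1})       # number -> parity bit
g = FinSet(b, c, {"even": 0, "odd": 1})    # label  -> parity bit
e, p1, p2 = pullback(f, g)
print(sorted(e))         # [(1, 'odd'), (2, 'even'), (3, 'odd')]
print(f @ p1 == g @ p2)  # the square commutes: True
```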
Here’s some fun exercises (Ok. Truth time. It’s because I got lazy). Try to implement exponential and pushout for this category. | https://www.philipzucker.com/computational-category-theory-in-python-i-dictionaries-for-finset/ | CC-MAIN-2021-39 | en | refinedweb |
WILDS is a collection of benchmark datasets that represent real-world distribution shifts. These datasets exhibit distribution shifts between training and testing data across different cameras, time periods, countries, demographics, molecular scaffolds, etc., which cause significant performance drops in baseline models. It is maintained by many researchers at Stanford, along with others from the Berkeley, Cornell, and Caltech universities and the Microsoft Research team.
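To make the performance-drop claim concrete, here is a toy sketch (hypothetical predictions, not the WILDS API) of the kind of per-domain evaluation these benchmarks rely on: average accuracy can look fine while the worst group exposes the shift.

```python
from collections import defaultdict

def per_group_accuracy(records):
    """records: list of (domain, y_true, y_pred) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for d, y, yhat in records:
        totals[d] += 1
        hits[d] += int(y == yhat)
    return {d: hits[d] / totals[d] for d in totals}

# Hypothetical predictions: good on a training-distribution domain ("cam1"),
# degraded on an unseen one ("cam2")
records = [("cam1", 1, 1), ("cam1", 0, 0), ("cam1", 1, 1),
           ("cam2", 1, 0), ("cam2", 0, 0)]
accs = per_group_accuracy(records)
avg = sum(int(y == yhat) for _, y, yhat in records) / len(records)
print(accs)                 # {'cam1': 1.0, 'cam2': 0.5}
print(avg)                  # 0.8 -- the average masks...
print(min(accs.values()))   # 0.5 -- ...the worst-group accuracy
```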
FMoW – Building and land classification across different regions and years
Machine learning techniques can enable global-scale monitoring of sustainability, specifically in data-poor regions, using satellite imagery and other remotely sensed data to meet economic challenges. In exactly those areas, gathering data on the ground is expensive. WILDS tries to bridge this data gap, which can thereby improve research and decision-making for policies and humanitarian efforts such as estimating population density, tracking deforestation, predicting crop yield, mapping poverty, and addressing other such issues. As human activity and environmental processes often change the natural environment, ML models must be trained to be robust to distribution shifts over time.
Dataset design: The input x in this dataset is a satellite image, and the target label y is one of 62 categories of land use. The domain d identifies the time period and geographical region. WILDS frames this both as a domain generalization problem in terms of time and as a subpopulation performance problem in terms of region.
PovertyMap – Poverty mapping across different countries
Accurate predictions of poverty measures are necessary for directing policy decisions in developing and poverty-stricken countries. However, poverty ground-truth measurements are lacking for most developing countries, since gathering such information is difficult: in some countries, a survey may never have been conducted, or there may be gaps of over a decade between surveys. The lack of labels in certain countries creates a natural scenario requiring models to generalize to unseen countries. Beyond the shift across countries, model performance is also compared between rural and urban subpopulations. Improving performance within the rural subpopulation will particularly help data-poor regions, such as rural areas of African countries.
Dataset design: The input x is a satellite image, and the output label y is a real-valued asset wealth index. The domain d is measured on countries. Wilds aims to solve both a domain generalization problem in terms of country borders and improve subpopulation performance in terms of urban and rural areas.
iWildCam – Species classification across different camera traps
The 2020 Living Planet Report claimed that animal populations have declined by 68% on average since 1970. In the present biodiversity crisis, properly mapping the link between climate change and wildlife biodiversity loss has become a serious concern. For monitoring wildlife, one of the primary methods that has been adopted is placing heat- or motion-activated cameras in the wild. These cameras capture data much faster than anyone can process it; as a result, ecologists have taken up computer vision solutions. Static cameras like this capture signals that are correlated in space and time. The correlation causes overfitting, and thereby poor generalization to new sensor deployments, degrading the scalability of computer vision solutions.
Dataset Design: This is a multi-class species classification task. The input x is a photo captured by the camera, the output label y is one of 186 different classes of animal species, and the domain d identifies which camera trap took the photo.
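The generalization setup described above can be sketched in a few lines: an out-of-distribution split holds out entire domains (here, camera IDs), so test cameras never appear in training. The sample list below is made up for illustration:

```python
# Toy (domain, example) pairs standing in for camera-trap images
samples = [("cam1", "img1"), ("cam1", "img2"), ("cam2", "img3"),
           ("cam3", "img4"), ("cam3", "img5")]

# Hold out whole domains rather than random examples
test_domains = {"cam3"}
train = [s for s in samples if s[0] not in test_domains]
test  = [s for s in samples if s[0] in test_domains]

print(len(train), len(test))  # 3 2
# No camera appears in both splits -- that is what makes the test OOD
assert not ({d for d, _ in train} & {d for d, _ in test})
```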
Camelyon17 – Tumor identification across different hospitals
Models for medical applications are generally trained on a small set of data acquired from a few hospitals due to patient privacy issues, but they get deployed to other hospitals as well. Model accuracy can degrade due to variations in data collection and processing across hospitals not included in the training set. This variation can be caused by many sources: for example, when studying tissue slides under a microscope, differences can arise in slide staining, in the patient population, or in image acquisition. WILDS studies this distribution shift by building a patch-based variant of the Camelyon17 dataset.
Dataset design: This is a binary classification task. The input x is a histopathological image, the output label y is a binary indicator of whether the image contains any tumour tissue, and the domain d identifies the hospital.
OGB-MolPCBA – Molecular property prediction across different scaffolds
Drug discovery is, as we must have realised by now, a time-consuming procedure. The entire process takes many years, during which many experiments are conducted to find a potent molecule. For computer-aided search, an accurate and generalizable molecular property predictor is useful for screening a large collection of small molecules in order to detect the structures most likely to bind to a drug target. Accurate computer-aided solutions can largely reduce redundant experiments and hence help accelerate the drug discovery process. The biggest remaining challenge is predicting molecular properties over a variety of molecules screened from a large chemical database. It is thus crucial for models to generalize to out-of-dataset molecules that are structurally different from the training ones.
Dataset Design: This is a multi-task classification problem. The input x is a graph representation of a molecule, the target label y is a binary vector over 128 types of biological activity, and the domain d is the scaffold group that the molecule is a part of.
Amazon – Sentiment classification across different users
As discussed above for medical data, models for text data are likewise trained on collected data and deployed as all-purpose models across a wide range of users. Hence these models can show performance disparities. Such performance gaps in deployed applications have raised the need for good performance across a wide range of users. Additionally, unfairness can indicate a model's failure to learn the actual task, thereby leading to bias. WILDS uses inter-individual performance disparities for the sentiment classification task on the Amazon-wilds dataset, where the goal is to train models with consistently high performance across reviewers.
Dataset design: This is a multi-class sentiment classification task. The input x is the review text, the target label y is the corresponding star rating from 1 to 5, and the domain d identifies the user who wrote the review.
CivilComments – Toxicity classification across demographic identities
Automatic moderation of user-generated text, such as detecting whether a comment is toxic, is an important task for handling the huge volume of text written daily on the Internet. Earlier works have documented biases in automatic moderation tools; for example, comment classifiers have flagged the mere mention of certain demographic groups. WILDS provides a modified version of the CivilComments dataset, a large collection of comments on online posts/articles taken from the Civil Comments platform and annotated for toxicity and demographic mentions by multiple crowd workers.
Dataset design: This is a binary classification task of predicting whether or not a comment is toxic. The input x is a comment comprising one or more sentences, and the target label y is whether it is marked toxic or not. The domain annotation d is a multi-dimensional binary vector denoting whether the comment mentions each of the eight demographic identities: LGBTQ, male, female, Christian, Muslim, other religions, Black, and White.
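The 8-dimensional domain annotation can be sketched as follows (identity names from the list above; the helper and its ordering are illustrative, not the dataset's exact encoding):

```python
# The eight demographic identities, in a fixed order
IDENTITIES = ["LGBTQ", "male", "female", "Christian", "Muslim",
              "other_religions", "Black", "White"]

def domain_vector(mentioned):
    """Binary vector: 1 at each position whose identity the comment mentions."""
    return [int(name in mentioned) for name in IDENTITIES]

print(domain_vector({"female", "Muslim"}))  # [0, 0, 1, 0, 1, 0, 0, 0]
print(domain_vector(set()))                 # all zeros: no identity mentioned
```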
Usage
WILDS has an open-source Python package which provides a standardized interface for all datasets.
Installation
pip install wilds
Or
git clone
Additional dependencies
pip install torch-scatter -f{TORCH}+${CUDA}.html
pip install torch-sparse -f{TORCH}+${CUDA}.html
pip install torch-cluster -f{TORCH}+${CUDA}.html
pip install torch-spline-conv -f{TORCH}+${CUDA}.html
pip install torch-geometric
pip install transformers
Default models
python run_expt.py --dataset civilcomments --algorithm groupDRO --root_dir data --download
Data loading
from wilds.datasets.iwildcam_dataset import IWildCamDataset
from wilds.common.data_loaders import get_train_loader
import torchvision.transforms as transforms

dataset = IWildCamDataset(download=True)

# Getting the training set
train_data = dataset.get_subset('train',
                                transform=transforms.Compose([transforms.Resize((224, 224)),
                                                              transforms.ToTensor()]))

# Preparing the data loader
train_loader = get_train_loader('standard', train_data, batch_size=16)
Training code snippet
def train(algorithm, datasets, general_logger, config, epoch_offset, best_val_metric):
    for epoch in range(epoch_offset, config.n_epochs):
        general_logger.write('\nEpoch [%d]:\n' % epoch)

        # First run training
        run_epoch(algorithm, datasets['train'], general_logger, epoch, config, train=True)

        # Then run val
        val_results = run_epoch(algorithm, datasets['val'], general_logger, epoch, config, train=False)
        curr_val_metric = val_results[config.val_metric]
        general_logger.write(f'Validation {config.val_metric}: {curr_val_metric:.3f}\n')

        # Then run everything else
        if config.evaluate_all_splits:
            additional_splits = [split for split in datasets.keys() if split not in ['train', 'val']]
        else:
            additional_splits = config.eval_splits
        for split in additional_splits:
            run_epoch(algorithm, datasets[split], general_logger, epoch, config, train=False)

        if best_val_metric is None:
            is_best = True
        else:
            if config.val_metric_decreasing:
                is_best = curr_val_metric < best_val_metric
            else:
                is_best = curr_val_metric > best_val_metric
        if is_best:
            best_val_metric = curr_val_metric

        if config.save_step is not None and (epoch + 1) % config.save_step == 0:
            save(algorithm, epoch, best_val_metric, os.path.join(config.log_dir, '%d_model.pth' % epoch))
        if config.save_last:
            save(algorithm, epoch, best_val_metric, os.path.join(config.log_dir, 'last_model.pth'))
        if config.save_best and is_best:
            save(algorithm, epoch, best_val_metric, os.path.join(config.log_dir, 'best_model.pth'))
            general_logger.write(f'Best model saved at epoch {epoch}\n')

        general_logger.write('\n')


def run_epoch(algorithm, dataset, general_logger, epoch, config, train):
    if dataset['verbose']:
        general_logger.write(f"\n{dataset['name']}:\n")

    if train:
        algorithm.train()
    else:
        algorithm.eval()

    # Not preallocating memory is slower
    # but makes it easier to handle different types of data loaders
    # (which might not return exactly the same number of examples per epoch)
    epoch_y_true = []
    epoch_y_pred = []
    epoch_metadata = []

    # Using enumerate(iterator) can sometimes leak memory in some environments,
    # so instead manually incrementing batch_idx
    batch_idx = 0
    iterator = tqdm(dataset['loader']) if config.progress_bar else dataset['loader']
    for batch in iterator:
        if train:
            batch_results = algorithm.update(batch)
        else:
            batch_results = algorithm.evaluate(batch)

        # These tensors are already detached, but we need to clone them again
        # Otherwise they don’t get garbage collected properly in some versions
        # The subsequent detach is just for safety
        # (they should already be detached in batch_results)
        epoch_y_true.append(batch_results['y_true'].clone().detach())
        epoch_y_pred.append(batch_results['y_pred'].clone().detach())
        epoch_metadata.append(batch_results['metadata'].clone().detach())

        if train and (batch_idx + 1) % config.log_every == 0:
            log_results(algorithm, dataset, general_logger, epoch, batch_idx)

        batch_idx += 1

    results, results_str = dataset['dataset'].eval(
        torch.cat(epoch_y_pred),
        torch.cat(epoch_y_true),
        torch.cat(epoch_metadata))

    if config.scheduler_metric_split == dataset['split']:
        algorithm.step_schedulers(
            is_epoch=True,
            metrics=results,
            log_access=(not train))

    # log after updating the scheduler in case it needs to access the internal logs
    log_results(algorithm, dataset, general_logger, epoch, batch_idx)

    results['epoch'] = epoch
    dataset['eval_logger'].log(results)
    if dataset['verbose']:
        general_logger.write('Epoch eval:\n')
        general_logger.write(results_str)

    return results
Evaluators
from wilds.common.data_loaders import get_eval_loader
# Getting the test set
test_data = dataset.get_subset('test', transform=transforms.Compose([transforms.Resize((224,224)), transforms.ToTensor()]))
# Preparing the data loader
test_loader = get_eval_loader('standard', test_data, batch_size=16)
# Getting predictions for the full test set
for x, y_true, metadata in test_loader:
    y_pred = model(x)
# Evaluation
dataset.eval(all_y_pred, all_y_true, all_metadata)
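The eval call above references `all_y_pred`, `all_y_true`, and `all_metadata` without defining them. Here is a minimal sketch of the accumulation pattern implied (the `accumulate` helper name is my own; plain Python lists stand in for the tensors you would collect and `torch.cat` before calling `dataset.eval` in practice):

```python
# Sketch: accumulate per-batch predictions before a single eval() call.
# Plain lists stand in for tensors; with PyTorch you would append tensors
# and torch.cat them at the end.
def accumulate(batches):
    all_y_pred, all_y_true, all_metadata = [], [], []
    for y_pred, y_true, metadata in batches:
        all_y_pred.extend(y_pred)
        all_y_true.extend(y_true)
        all_metadata.extend(metadata)
    return all_y_pred, all_y_true, all_metadata

preds, trues, metas = accumulate([([1, 0], [1, 1], ['a', 'b']),
                                  ([0], [0], ['c'])])
print(preds, trues, metas)  # [1, 0, 0] [1, 1, 0] ['a', 'b', 'c']
```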
End Notes
WILDS aims to provide a generalized benchmark covering diverse data across vision and text. It is under constant development; in the future, we can expect to see more benchmark datasets that help produce high-quality trained models able to address complex problems.

Source: https://analyticsindiamag.com/what-is-wilds-dataset-by-stanford-a-complete-guide/
C# 9 Deep Dive: Top-Level Programs
In a C# 9 deep dive, we talk about how top-level programs work with status codes, async, arguments, and local functions.
This is the fourth post in a six-post series on C# 9 features in-depth:
- Post 1 - Init-only features
- Post 2 - Records
- Post 3 - Pattern matching
- Post 4 (this post) - Top-level programs
- Post 5 - Target typing and covariant returns
- Post 6 - Putting it all together with a scavenger hunt
Typically, when you learn to write a new C# console application, you are required by law to start with something like this:
class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine("Hello, world!");
    }
}
Imagine you’re trying to teach someone how a program works. Before you even execute a line of code, you need to talk about:
- What are classes?
- What is a function?
- What is this args string array?
Sure, to you and me it likely won’t be long before those topics need to come up, but the barrier to entry becomes higher—especially when you look at how simple it is to get started with something like Python or JavaScript.
This post covers the following topics.
- The simple, obligatory “Hello, world” example
- Return a status code
- Await things
- Access command-line arguments
- Local functions
- Wrapping up
The simple, obligatory “Hello, world” example
With C# 9 top-level programs, you can take away the Main method and condense it to something like this:
using System;

Console.WriteLine("Hello, world!");
And, don’t worry: I know what you’re thinking. Let’s make this a one-liner.
System.Console.WriteLine("Hello, world!");
If you look at what Roslyn generates, from Sharplab, nothing should shock you:
[CompilerGenerated]
internal static class $Program
{
    private static void $Main(string[] args)
    {
        Console.WriteLine("Hello, world!");
    }
}
No surprise here. It’ll generate a Program class and the traditional main method for you.
Can we be honest? I thought this was where it ended: a nice, clean way to simplify a console app. But! As I read the Welcome to C# 9 announcement a little closer, Mads Torgersen notes that top-level programs aren’t limited to a single statement: they can return status codes, await things, and access command-line arguments.
That is super interesting. Let’s try it out and see what happens in Sharplab.
Return a status code
We can return a status code from our top-level program. To return 0 like we did in the good old days, let’s do this:
System.Console.WriteLine("Hello, world!");
return 0;
Roslyn gives us this:
[CompilerGenerated]
internal static class $Program
{
    private static int $Main(string[] args)
    {
        Console.WriteLine("Hello, world!");
        return 0;
    }
}
Await things
Mads says we can await things. Let’s await something—let’s call the icanhazdadjoke API, shall we?
Let’s try this code:
using System.Net.Http;
using System;
using System.Net.Http.Headers;

using (var httpClient = new HttpClient())
{
    httpClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("text/plain"));
    Console.WriteLine(httpClient.GetStringAsync(new Uri("")).Result);
}
As you can see, nothing it can’t handle:
[CompilerGenerated]
internal static class $Program
{
    private static void $Main(string[] args)
    {
        HttpClient httpClient = new HttpClient();
        try
        {
            httpClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("text/plain"));
            Console.WriteLine(httpClient.GetStringAsync(new Uri("")).Result);
        }
        finally
        {
            if (httpClient != null)
            {
                ((IDisposable)httpClient).Dispose();
            }
        }
    }
}
OK, so I called GetStringAsync but I kinda lied—I haven’t done an await or returned a Task.
If we do this thing:
using System.Threading.Tasks;

await Task.CompletedTask;
return 0;
Watch what happens! We’ve got a TaskAwaiter and an AsyncStateMachine. What, you thought async was easy? Good thing: it’s relatively easy with top-level functions.
[CompilerGenerated]
internal static class $Program
{
    [StructLayout(LayoutKind.Auto)]
    private struct <$Main>d__0 : IAsyncStateMachine
    {
        public int <>1__state;

        public AsyncTaskMethodBuilder<int> <>t__builder;

        private TaskAwaiter <>u__1;

        private void MoveNext()
        {
            int num = <>1__state;
            int result;
            try
            {
                TaskAwaiter awaiter;
                if (num != 0)
                {
                    awaiter = Task.CompletedTask.GetAwaiter();
                    if (!awaiter.IsCompleted)
                    {
                        num = (<>1__state = 0);
                        <>u__1 = awaiter;
                        <>t__builder.AwaitUnsafeOnCompleted(ref awaiter, ref this);
                        return;
                    }
                }
                else
                {
                    awaiter = <>u__1;
                    <>u__1 = default(TaskAwaiter);
                    num = (<>1__state = -1);
                }
                awaiter.GetResult();
                result = 0;
            }
            catch (Exception exception)
            {
                <>1__state = -2;
                <>t__builder.SetException(exception);
                return;
            }
            <>1__state = -2;
            <>t__builder.SetResult(result);
        }

        void IAsyncStateMachine.MoveNext()
        {
            //ILSpy generated this explicit interface implementation from .override directive in MoveNext
            this.MoveNext();
        }

        [DebuggerHidden]
        private void SetStateMachine(IAsyncStateMachine stateMachine)
        {
            <>t__builder.SetStateMachine(stateMachine);
        }

        void IAsyncStateMachine.SetStateMachine(IAsyncStateMachine stateMachine)
        {
            //ILSpy generated this explicit interface implementation from .override directive in SetStateMachine
            this.SetStateMachine(stateMachine);
        }
    }

    [AsyncStateMachine(typeof(<$Main>d__0))]
    private static Task<int> $Main(string[] args)
    {
        <$Main>d__0 stateMachine = default(<$Main>d__0);
        stateMachine.<>t__builder = AsyncTaskMethodBuilder<int>.Create();
        stateMachine.<>1__state = -1;
        stateMachine.<>t__builder.Start(ref stateMachine);
        return stateMachine.<>t__builder.Task;
    }

    private static int <Main>(string[] args)
    {
        return $Main(args).GetAwaiter().GetResult();
    }
}
Access command-line arguments
A nice benefit here is that, like with a command line program, you can specify command-line arguments. This is typically done by parsing the args[] that you pass into your Main method, but how is this possible with no Main method to speak of?
The args are available as a “magic” parameter, meaning you should be able to access them without passing them in. MAGIC.
Let’s say I wanted something like this:
using System;

var param1 = args[0];
var param2 = args[1];
Console.WriteLine($"Your params are {param1} and {param2}.");
Here’s what Roslyn does:
[CompilerGenerated]
internal static class $Program
{
    private static void $Main(string[] args)
    {
        string text = args[0];
        string text2 = args[1];
        string[] array = new string[5];
        array[0] = "Your params are ";
        array[1] = text;
        array[2] = " and ";
        array[3] = text2;
        array[4] = ".";
        Console.WriteLine(string.Concat(array));
    }
}
Local functions
Now, for my last trick, local functions.
Let’s whip up this code to test out our top-level program.
using System;

DaveIsTesting();

void DaveIsTesting()
{
    void DaveIsTestingAgain()
    {
        Console.WriteLine("Dave is testing again.");
    }

    Console.WriteLine("Dave is testing.");
    DaveIsTestingAgain();
}
I have to admit, I’m pretty excited to see what Roslyn decides to do on this one:
[CompilerGenerated]
internal static class $Program
{
    private static void $Main(string[] args)
    {
        <$Main>g__DaveIsTesting|0_0();
    }

    internal static void <$Main>g__DaveIsTesting|0_0()
    {
        Console.WriteLine("Dave is testing.");
        <$Main>g__DaveIsTestingAgain|0_1();
    }

    internal static void <$Main>g__DaveIsTestingAgain|0_1()
    {
        Console.WriteLine("Dave is testing again.");
    }
}
Or not. They are just split out into different functions in my class. Carry on.
Wrapping up
In this post, we’ve put top-level programs through their paces by seeing how they work with status codes, async calls, command-line arguments, and local functions. I’ve found that this can be a lot more powerful than simply slimming down lines of code.
What have you used it for so far? Anything I missed? Let me know in the comments.

Source: https://www.daveabrock.com/2020/07/09/c-sharp-9-top-level-programs/
AND EVERYONE ELSE :D
UPD: The problem is fixed now
I'm trying to register for the upcoming Codeforces Round #602 (Div. 2, based on Technocup 2020 Elimination Round 3), but I can't for some reason. When I click the Register button, nothing happens. Is it a bug or something? What should I do?
I have a question. I have enabled "are test points allowed?" in the general description. Then I got a new column to specify test points individually. Please look at the picture below:
And the picture of Groups points policy:
Now, what do I do? I have 20 test cases for group 1, 30 for group 2, and 50 for group 3. I want to give partial points to each of the individual groups: 20 points to group 1, 30 to group 2, and 50 to group 3. But how do I do this? I can't find any option for it. I also set 1 point for each individual test case. Will they be added up? I mean, I have 20 cases for group 1 and each of them is worth 1 point. Will they add up and make 20 points for that particular group?
Edit: This is my Checker code: Checker
Today.
You have to solve these problems to develop DP skills
Different types of Dynamic programming problems in one blog
Thank You So Much.
Recently I tried to solve Loj-1339 and found some ideas.
The main problem is :).
It can be done with Mo's algorithm. If I were asked to find the number of distinct integers from i to j, I could easily find the answer using Mo's, but problems occur when max-min is wanted: I'm not able to keep track of the maximum. When I add values I can do this, but while removing, I can't find a way to track the maximum. One of my friends told me to build a sqrt decomposition over the array. After adding/removing, we somehow have to maintain the aux[] array; for queries, suppose we have sqrt(n) blocks: then from i to j, if a block is fully inside [i, j] we just take the max stored for that block id, and where it isn't, we loop through the array elements. But I am not sure about this idea, and I can't implement it properly. Can you please help me with this problem? Thanks in advance.
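For reference, here is a rough sketch of the block-decomposition structure described above (my own illustration, not tested against any judge): point-assign plus range-max in O(sqrt(n)) per operation. After a Mo-style add or remove you would update the affected position and rebuild only that block's maximum.

```python
import math

class BlockMax:
    # Point assign + range max via sqrt decomposition (0-based, inclusive).
    def __init__(self, a):
        self.a = list(a)
        self.b = max(1, math.isqrt(len(a)))            # block size
        nblocks = (len(a) + self.b - 1) // self.b
        self.mx = [-float('inf')] * nblocks            # per-block maxima
        for i, v in enumerate(a):
            j = i // self.b
            self.mx[j] = max(self.mx[j], v)

    def assign(self, i, v):
        self.a[i] = v
        j = i // self.b
        lo, hi = j * self.b, min((j + 1) * self.b, len(self.a))
        self.mx[j] = max(self.a[lo:hi])                # rebuild one block only

    def query(self, l, r):                             # max over a[l..r]
        best = -float('inf')
        i = l
        while i <= r:
            if i % self.b == 0 and i + self.b - 1 <= r:
                best = max(best, self.mx[i // self.b]) # whole block inside
                i += self.b
            else:
                best = max(best, self.a[i])            # partial block, element-wise
                i += 1
        return best

bm = BlockMax([3, 1, 4, 1, 5, 9, 2, 6])
print(bm.query(2, 6))   # 9
bm.assign(5, 0)         # simulates "removing" the 9 from the window
print(bm.query(2, 6))   # 5
```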
Hello. I think the Codeforces contribution calculation system is very complicated; I don't know how it is computed. Can anyone tell me? Are there any formulas or anything, like the rating change system has? Sometimes I see that one of my comments has +45, but my contribution increases by only 4-5, and sometimes a +3/+4 changes the main contribution more than that. And suppose I get a downvote (say x) on a comment: my main contribution then changes seemingly at random (I think so).
I didn't find any pattern. What's your opinion about it? I wonder how it is calculated. If you know anything about it, please inform me; I'll be grateful to you. Thanks in advance.
Recently I had to solve a problem like this: given an array, update it n times in ranges [Li..Ri] and then output the array. I applied the updates using a segment tree and built the final array by querying indexes one by one, which took O(n log n). Can I do it in O(n)? And additionally, can I find the value at any index in O(1)? Thanks in advance.
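Assuming the updates are additive range increments (as the [Li..Ri] notation suggests), a common O(n + q) approach is a difference array: record each update only at its endpoints, then reconstruct the final array with one prefix-sum pass. After that pass, reading any index is an O(1) lookup. A hedged sketch:

```python
# Difference-array trick: "add v to every element in [l, r]" becomes two
# endpoint writes; one running sum at the end rebuilds the whole array.
def apply_range_updates(n, updates):
    """updates: list of (l, r, v) with 0-based inclusive indices."""
    diff = [0] * (n + 1)
    for l, r, v in updates:
        diff[l] += v          # start adding v at position l
        diff[r + 1] -= v      # stop adding v after position r
    out, running = [], 0
    for i in range(n):
        running += diff[i]    # prefix sum reconstructs the final value
        out.append(running)
    return out

print(apply_range_updates(5, [(0, 2, 1), (1, 4, 2)]))  # [1, 3, 3, 2, 2]
```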
Recently, I learned bitmask DP and used only a single variable to mark whether some block was visited or not. But now I want to do it using std::bitset. Will it be more or less efficient than the first approach? Why or why not? I'm just confused. I think bitset should be fine, and I want to use it because it is easy to use. What's your opinion? Thanks in advance.
Hello, can someone set IOI-style problems using Codeforces Polygon? In other words, can we make a contest using IOI-style problems, where we will be able to use subtasks and other such features?
Hello, I have been trying a lot to solve Trail Maintenance (LOJ), but I am getting RTE every time. I have tried many different approaches, but in the end I still get RTE and don't know why. Can anyone help me debug it, please?
Thanks in advance
/**
 * First of all, I will be taking input until the whole graph is connected, and
 * for every query until then, I have to output -1.
 * After getting the whole graph connected, I have to perform my first MST,
 * then I will output the MST as well.
 * Then I have to input every remaining query; at every step, the last added edge
 * will make a cycle, and we have to remove the largest edge from the graph, which
 * is unnecessary... then output the answer :D
 **/
#include <bits/stdc++.h>
using namespace std;

const int N = 503;

struct pii {
    int a;
    int b;
    int c;
    pii() { a = 0, b = 0, c = 0; }
    pii(int m, int n, int o) {
        a = m;
        b = n;
        c = o;
    }
};
bool operator<(pii a, pii b) { return a.c < b.c; }

int n, pos, size, ans, parent[N], q, u, v, w;
vector<pii> mst;
pii ara[N + 12];

void makeset() { for (int i = 0; i < N; i++) parent[i] = i; }
int find(int n) { return n == parent[n] ? n : parent[n] = find(parent[n]); }
void Union(int a, int b) { parent[find(a)] = find(b); }

int first_mst()
{
    sort(mst.begin(), mst.end());
    makeset();
    size = mst.size();
    int sum = 0;
    for (int i = 0; i < size; i++) {
        if (find(mst[i].a) != find(mst[i].b)) {
            Union(mst[i].a, mst[i].b);
            ara[pos++] = pii(mst[i].a, mst[i].b, mst[i].c);
            sum += mst[i].c;
        }
    }
    return sum;
}

void mst2()
{
    size = pos;
    sort(ara, ara + size);
    makeset();
    int indx = -1;
    int sum = 0;
    for (int i = 0; i < size; i++) {
        if (find(ara[i].a) != find(ara[i].b)) {
            Union(ara[i].a, ara[i].b);
            sum += ara[i].c;
        }
        else {
            indx = i;
        }
    }
    if (indx == pos - 1) {
        pos--;
    }
    else if (indx != -1) {
        pii mm = ara[pos - 1];
        ara[indx] = mm;
        pos--;
    }
    printf("%d\n", sum);
}

int main()
{
    //freopen("in.txt","r",stdin);
    int t, caseno = 0;
    scanf("%d", &t);
    while (t--) {
        mst.clear();
        pos = 0;
        makeset();
        scanf("%d%d", &n, &q);
        printf("Case %d:\n", ++caseno);
        int k = n;
        while (q--) {
            scanf("%d%d%d", &u, &v, &w);
            mst.push_back(pii(u, v, w));
            if (find(u) != find(v)) {
                k--;
                Union(u, v);
            }
            if (k == 1) break;
            printf("-1\n");
        }
        int ans = first_mst();
        printf("%d\n", ans);
        while (q--) {
            scanf("%d%d%d", &u, &v, &w);
            ara[pos++] = pii(u, v, w);
            mst2();
        }
    }
    return 0;
}
Hello, I have tried a lot to solve MKTHNUM-spoj. For solving this , I have slightly changed the problem statement, it goes like this :
you are asked to find a number such that there are k-1 numbers less than that in range [l...r]
Then I built a merge sort tree, sorted the initial array, and binary searched over it. My time complexity is O((log2(n))^2) per query. I am getting a Runtime Error on case 11 (I think), but I couldn't find the bug :'(
Update: Now I am getting a wrong answer; the first 14 test cases ran smoothly.
here goes my code :
#include <bits/stdc++.h>
#define all(v) v.begin(), v.end()
using namespace std;

const int N = 100099;
vector<int> tree[N * 3];
int ara[N + 12];

void build(int at, int l, int r)
{
    if (l == r) {
        tree[at].push_back(ara[l]);
        return;
    }
    int mid = (l + r) / 2;
    build(at * 2, l, mid);
    build(at * 2 + 1, mid + 1, r);
    merge(all(tree[at * 2]), all(tree[at * 2 + 1]), back_inserter(tree[at]));
}

int query(int at, int L, int R, int l, int r, int indx)
{
    if (l > R or r < L) return 0;
    if (L >= l and r >= R) {
        int pp = upper_bound(all(tree[at]), ara[indx]) - tree[at].begin();
        return pp;
    }
    int mid = (L + R) / 2;
    return query(at * 2, L, mid, l, r, indx) + query(at * 2 + 1, mid + 1, R, l, r, indx);
}

int main()
{
    int n, q, l, r, k;
    scanf("%d%d", &n, &q);
    for (int i = 1; i <= n; i++) {
        scanf("%d", &ara[i]);
    }
    build(1, 1, n);
    sort(ara + 1, ara + 1 + n);
    while (q--) {
        scanf("%d%d%d", &l, &r, &k);
        int high = n, low = 1, mid, ans = -1;
        int cnt = 0;
        while (low <= high) {
            mid = (low + high) / 2;
            int pp = query(1, 1, n, l, r, mid);
            if (k <= pp) {
                ans = mid;
                high = mid - 1;
            }
            else low = mid + 1;
        }
        printf("%d\n", ans);
    }
    return 0;
}
Now I want to solve some problems and learn advanced algorithms related to it. Please help me by pointing me to more sources.
Hello Codeforces, I tried to solve LightOJ 1110 — An Easy LCS but am getting TLE. Here is my code; please help me reduce my runtime :) Thanks in advance :)
I saw someone use a function instead of cin/scanf to input a number, like this:

int n;
n = in();
where the in() function is given below:
template <typename T>
T in()
{
    char ch;
    T n = 0;
    bool ng = false;
    while (1) {
        ch = getchar();
        if (ch == '-') {
            ng = true;
            ch = getchar();
            break;
        }
        if (ch >= '0' && ch <= '9') break;
    }
    while (1) {
        n = n * 10 + (ch - '0');
        ch = getchar();
        if (ch < '0' || ch > '9') break;
    }
    return (ng ? -n : n);
}
I don't know which is faster. I also don't know how scanf or cin works internally. Can anyone please tell me?
Hello Codeforces, hello everybody, how are you? You all know the ACM ICPC is about to start, and many of the participants are also CF users, so let's start a game for them: according to Codeforces ratings, can you tell me which team is first?
All you have to do is calculate each team's average rating: add the three members' ratings and divide by 3, because there are 3 members in a team. That gives you an average rating per team, and from those you have to find the maximum over all teams and comment the name of their country and team/university. If you get bored, you don't have to do it all; just suggest the average rating of a team in the comment box and I will fix it.
The winner will be announced soon, I hope :D :D
Happy coding, everybody :D :D Good luck!
Hello Codeforces, I have recently started learning BFS/DFS, and I tried solving the following problem: UVa 10653. Here is my code.

Can someone please help me debug it?
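For reference, UVa 10653 is a grid shortest-path problem, and plain BFS is the standard tool for it. Below is a minimal, self-contained BFS sketch (my own encoding: '#' marks blocked cells; the function name and grid format are illustrative, not the judge's input format):

```python
from collections import deque

def grid_bfs(grid, start, goal):
    # grid: list of strings, '#' = blocked; returns fewest steps or -1.
    R, C = len(grid), len(grid[0])
    dist = {start: 0}
    dq = deque([start])
    while dq:
        r, c = dq.popleft()
        if (r, c) == goal:
            return dist[(r, c)]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < R and 0 <= nc < C and grid[nr][nc] != '#' and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1   # first visit = shortest distance
                dq.append((nr, nc))
    return -1                                       # goal unreachable

print(grid_bfs(["..#", "...", "#.."], (0, 0), (2, 2)))  # 4
```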
Hello codeforces , I am a beginner , and I started learning Dynamic programming a few days ago . I haven't learned any specific algorithm yet , but i wanna practice some DP problems , can anyone suggest some easy DP problems please ??
Hello everybody, I have tried a lot to solve the LightOJ 1087 — Diablo problem.
problem link :
My sample test output is not matching. I have tried a lot; I am a noob. Please help me debug my code. Why does this happen?
my code :
Please help me. Downvotes? That's okay, but please help me.
Hello, I can't understand why the judge output is so strange!!! Problem link:
my code :
judge status :
In my compiler the answer is okay, but where is the problem? I have also tested in different online compilers.
I have tried a lot to solve the "Milking Cows" problem from USACO, but I get WA on test 7. I don't know what the problem is; I have tried multiple approaches, but it is still wrong on test 7. Can you please help me find my bug?
my code :
problem source :
Please, please, please help me, guys; I have tried my best.
Just one question: am I committing any kind of segmentation violation, such as wrong array indexing or accessing memory out of bounds, etc.?
memory limit : 256 mb
code :
Hello everybody, please help me. I was solving the LightOJ problem "Curious Robin Hood" and getting a wrong answer. Problem link:
code link :
I am a beginner and recently learned a bit about segment trees. I used a segment tree in this solution, but I can't find my fault. Can anyone please help me?
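For reference, "Curious Robin Hood" needs three operations (as I understand the statement: give money to a sack, empty a sack and report its contents, and sum over a range of sacks). Here is a sketch of those operations with a Fenwick (binary indexed) tree, an alternative to a segment tree; the class and method names are my own:

```python
# Point add, point "take everything", and range sum with a Fenwick tree.
class Fenwick:
    def __init__(self, n):
        self.n = n
        self.t = [0] * (n + 1)          # 1-based internal tree
        self.val = [0] * n              # current amount in each sack

    def _add(self, i, d):               # internal: 0-based point add
        i += 1
        while i <= self.n:
            self.t[i] += d
            i += i & (-i)

    def give(self, i, v):
        self.val[i] += v
        self._add(i, v)

    def take(self, i):                  # empty sack i, return what was there
        v = self.val[i]
        self.val[i] = 0
        self._add(i, -v)
        return v

    def prefix(self, i):                # sum of val[0..i]
        i += 1
        s = 0
        while i > 0:
            s += self.t[i]
            i -= i & (-i)
        return s

    def range_sum(self, i, j):          # sum of val[i..j], inclusive
        return self.prefix(j) - (self.prefix(i - 1) if i else 0)

f = Fenwick(5)
f.give(0, 10); f.give(2, 7)
print(f.range_sum(0, 2))  # 17
print(f.take(2))          # 7
print(f.range_sum(0, 4))  # 10
```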
Everything seems okay; where is the problem? Can you explain? Problem link:
my code :
I was solving problems on LightOJ and got "Output Limit Exceeded"! I still don't know about this verdict. Can you please tell me what it means and when it happens? Problem link: my code:
Hello everybody, how are you? Today I am going to describe some topics (programming) which can be helpful for contestants.
At first I assume that you all know the C/C++ language, but it doesn't matter much. I assume you also know about conditional logic, loops, data types (input/output), strings, arrays, functions, basic bitwise operations, etc. After that, I think you should learn lots of mathematical topics such as:

Prime numbers, GCD, LCM, Euler's Totient Function, bigmod, modular inverse, extended GCD, combinatorics, permutations, combinations, the inclusion-exclusion principle, probability, expected value, base conversion, big integers, cycles, Gaussian elimination, etc.
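As one concrete example from this list, here is a sketch of "bigmod" (fast modular exponentiation) and a modular inverse built on it via Fermat's little theorem (valid only when the modulus is prime):

```python
def bigmod(b, p, m):
    # computes b^p mod m in O(log p) by squaring
    result = 1
    b %= m
    while p > 0:
        if p & 1:
            result = result * b % m
        b = b * b % m
        p >>= 1
    return result

def modinv(a, m):
    # Fermat's little theorem: a^(m-2) ≡ a^(-1) (mod m) when m is prime
    return bigmod(a, m - 2, m)

print(bigmod(2, 10, 1000))  # 24, since 2^10 = 1024
print(modinv(3, 7))         # 5, since 3*5 = 15 ≡ 1 (mod 7)
```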
Then.....
you have to learn about sorting and searching, like: insertion sort, bubble sort, selection sort, merge sort, counting sort, etc. NOTE: there is a function named "qsort" in the C language... don't use it. There is an algorithm named "anti-qsort". If you use qsort on Codeforces, anyone can easily hack your solution, because the anti-qsort algorithm generates a worst case for qsort, and in that worst case its time complexity is O(n^2). But you can use the Standard Template Library.
Then........
You have to learn binary search, backtracking, ternary search, etc.
Then.......
you have a task to learn lots of data structures, such as: linked lists, stacks, queues, heaps, vectors, graphs, trees, segment trees, binary search trees, square root decomposition, disjoint set union, segment trees with and without lazy propagation, binary indexed trees, maps, sets, pairs, etc.
Then........
You have to learn greedy techniques, dynamic programming, graph algorithms, flows, ad hoc problems, geometry, etc. I have summarized the contents below:
1. Dynamic Programming
2. Greedy
3. Complete Search
4. Flood Fill
5. Shortest Path
6. Recursive Search Techniques
7. Minimum Spanning Tree
8. Knapsack
9. Computational Geometry
10. Network Flow
11. Eulerian Path
12. Two-Dimensional Convex Hull
13. BigNums
14. Heuristic Search
15. Approximate Search
16. Ad Hoc Problems
If you clear up all of the above, you will find a great programmer inside you. But mind it: it is not an easy task... You have to practice a lot. If you don't practice, you will never become a good programmer!
SO, NO MORE TODAY... STAY WELL, EVERYBODY, ALWAYS TRY TO HELP OTHERS... HAPPY CODING!

Source: https://codeforces.com/blog/Ahnaf.Shahriar.Asif
Bouncing a Ball with Mixed Integer Programming
Edit: A new version.
Here I made a bouncing ball using mixed integer programming in cvxpy. Currently we are just simulating the bouncing ball internal to a mixed integer program. We could turn this into a control program by making the constraint that you have to shoot a ball through a hoop and have it figure out the appropriate initial shooting velocity.
import numpy as np
import cvxpy as cvx
import matplotlib.pyplot as plt

N = 100
dt = 0.05
x = cvx.Variable(N)
v = cvx.Variable(N)
collision = cvx.Variable(N-1, boolean=True)
constraints = []
M = 20  # Big M trick

# initial conditions
constraints += [x[0] == 1, v[0] == 0]

for t in range(N-1):
    predictedpos = x[t] + v[t] * dt
    col = collision[t]
    notcol = 1 - collision[t]
    constraints += [-M * col <= predictedpos, predictedpos <= M * notcol]
    # enforce regular dynamics if col == 0
    constraints += [-M * col <= x[t+1] - predictedpos, x[t+1] - predictedpos <= M * col]
    constraints += [-M * col <= v[t+1] - v[t] + 9.8*dt, v[t+1] - v[t] + 9.8*dt <= M * col]
    # reverse velocity, keep position the same if it would collide with x = 0
    constraints += [-M * notcol <= x[t+1] - x[t], x[t+1] - x[t] <= M * notcol]
    constraints += [-M * notcol <= v[t+1] + 0.8*v[t], v[t+1] + 0.8*v[t] <= M * notcol]  # 0.8 restitution coefficient

objective = cvx.Maximize(1)
prob = cvx.Problem(objective, constraints)
res = prob.solve(solver=cvx.GLPK_MI, verbose=True)
print(x.value)
print(v.value)

plt.plot(x.value, label='x')
plt.plot(v.value, label='v')
plt.plot(collision.value, label='collision bool')
plt.legend()
plt.xlabel('time')
plt.show()
Pretty cool.
The trick I used this time is to make boolean indicator variables for whether a collision will happen or not. The big M trick is then used to actually make the variable reflect whether the predicted position will be outside the wall at x=0. If it isn't, it uses regular gravity dynamics. If it will, it uses velocity-reversing bounce dynamics.
Just gonna dump this draft out there since I’ve moved on (I’ll edit this if I come back to it). You can embed collisions in mixed integer programming. I did it below using a strong acceleration force that turns on when you enter the floor. What this corresponds to is a piecewise linear potential barrier.
Such a formulation might be interesting for the trajectory optimization of shooting a hoop, playing Pachinko, Beer Pong, or Pinball.
using JuMP
using Cbc
using Plots

N = 50
T = 5
dt = T/N
m = Model(solver=CbcSolver())

@variable(m, x[1:N]) # , Bin
@variable(m, v[1:N]) # , Bin
@variable(m, f[1:N-1])
@variable(m, a[1:N-1], Bin) # , Bin

@constraint(m, x[1] == 1)
@constraint(m, v[1] == 0)

M = 10
for t in 1:N-1
    @constraint(m, x[t+1] == x[t] + dt*v[t])
    @constraint(m, v[t+1] == v[t] + dt*(10*(1-a[t])-1))
    #@constraint(m, v[t+1] == v[t] + dt*(10*f[t]-1))
    @constraint(m, M * a[t] >= x[t+1]) # if on the next step projects into the earth
    @constraint(m, M * (1-a[t]) >= -x[t+1])
    #@constraint(m, f[t] <= M*(1-a[t])) # we allow a bouncing force
end

k = 10
# @constraint(m, f .>= 0)
# @constraint(m, f .>= - k * x[2:N])
# @constraint(m, x[:] .>= 0)

E = 1 #sum(f) # 1 #sum(x) #sum(f) # + 10*sum(x) # sum(a)
@objective(m, Min, E)

solve(m)
println(x)
println(getvalue(x))

plotly()
plot(getvalue(x))
#plot(getvalue(a))
gui()
More things to consider:
Is this method trash? Yes. You can actually embed the mirror law of collisions directly without needing to use a funky barrier potential.
You can extend this to a ball trapped in a polygon, or a ball that is restricted from entering obstacle polygons. Check out the IRIS project, which breaks a region up into convex pieces. There is good support for embedding conditional variables and, on a related note, a good way of defining piecewise linear functions using mixed integer programming.
Pajarito is another interesting Julia project: a mixed integer convex programming solver.
Russ Tedrake papers -
Break up obstacle objects into Delaunay-triangulated pieces.

Source: https://www.philipzucker.com/bouncing-a-ball-with-mixed-integer-programming/
Hello nice people
I am pulling data from a REST API in a Jupyter notebook in DSS and doing a lot of things on the pandas dataframe I am creating.
I would like to save the dataframe as a dataset I can later explore within the project I am working in.
I am trying something like :
if not results.empty:
    output_data = dataiku.Dataset(instrument_name + "_" + event_name + "_" + timestr)
    output_data.write_dataframe(results)
But I always run into a problem:
Unable to fetch schema for PROJ1.participant_screening_20210623-233350: dataset does not exist: PROJ1.participant_screening_20210623-233350
I tried some other alternatives to write the dataframe into a dataset, but DSS seems to look for a schema with the project name (PROJ1)?
Any easy way to get the dataframes into dss datasets ?
PS: this is a test instance; I am not using a database for intermediate datasets but writing to disk for testing purposes.
Thanks
Rad
Following is an example of how to create a new dataset in Python and then write a dataframe to it.
import dataiku

dataset_name = 'TEST'

# Get a handle to the current project
client = dataiku.api_client()
project = client.get_project(dataiku.default_project_key())

# Create a SQL dataset (you can create other types by specifying different
# parameters for the with_store_into method)
# Documentation here:
# Note that documentation shows project.new_managed_dataset which is incorrect
builder = project.new_managed_dataset_creation_helper(dataset_name)
builder.with_store_into("NZ_DSWRK")
builder.create()

# Write dataframe to dataset
dataiku.Dataset(dataset_name).write_with_schema(df)
The dataset will show in the UI as not built. You can right click on the dataset and choose "mark as built" to fix this.
Note that this example uses both the "external" api (dataikuapi) to create the dataset and the internal api (dataiku) to write the dataframe to the dataset. More the differences here.
Hope this helps.
Marlan
You can write to any type of SQL database you have a connection set up for. The example I gave included a connection for a Netezza database. Not sure what you mean about storing intermediate tables. We write both intermediate and final output data to Netezza.
To create a file dataset, use the filesystem folders connection in the "with_store_into" method, e.g., builder.with_store_into('filesystem_folders')
Note also that you can pass "overwrite=True" to the create method.
Marlan
Fantastic,
thank you for clarifying, this helped very much

Source: https://community.dataiku.com/t5/Using-Dataiku-DSS/Creating-dataset-from-Pandas/m-p/17647/highlight/true
On Monday, 12 February 2018 00:41:54 CET Christian Schudt wrote:
> - Generally I am unsure if using the "xml:lang" and "name" from the
> identities is a good idea at all, because these two attributes should not
> change the capabilities of an entity. Name and language is just for humans.
> I.e. if a server sends German identities for one user and English
> identities for the next user (depending on their client settings/stream
> header), the server still has the same identities, which should result in
> the same verification string, shouldn't it?
First of all, I think previously, an entity answering a disco#info request always sent all translated identities, so that would not have been an issue.

You’re touching on a more general thing though which I’d like to discuss. We could separate the hash into three hashes: one for identities, one for features, and one for forms (or maybe two: identities and forms+features). This has the upside that human-readable identifiers don’t interfere with protocol data (features/forms) in many cases (I think the identities are more rarely used in protocols, but I might be wrong). The obvious downside is that we need to transfer more data in the presence (twice or thrice the amount for ecaps2).

I’d like to know what you people think of it. Since this is still Experimental, I’d be fine with bumping the namespace and getting this done. But I’m afraid that the bandwidth costs will outweigh the advantages. We have ~100 bytes for a 256-bit hashsum (including wrapper XML). We would end up with more than half a kilobyte (~0.6 kB) for ecaps2 if we split the hashes and assume that each entity uses two hash functions with 256 bits each (which I think is a reasonable assumption). If we have caps optimization, the impact would probably be negligible, but I’m not sure if we can assume that. I’d like to get input from you folks on that.

kind regards,
Jonas
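For what it’s worth, the quoted figures can be checked with a quick back-of-the-envelope sketch. The 56-byte XML wrapper overhead below is an assumption, chosen to match the ~100 bytes quoted above:

```python
import base64, os

# A 256-bit hash is 32 raw bytes; base64-encoded it becomes 44 characters.
hash_b64 = len(base64.b64encode(os.urandom(32)))  # 44

# Assumed XML element overhead per hash (attribute names, tags, etc.)
wrapper = 56

per_hash = hash_b64 + wrapper   # ~100 bytes, as stated in the mail
total = 3 * 2 * per_hash        # three split hashes x two hash functions
print(per_hash, total)          # 100 600 -> ~0.6 kB
```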
Using ActiveResources from Flex? Using FlexUnit? Here is a nice way to write your tests.
package tests
{
import flexunit.framework.*;
import mx.rpc.AsyncToken;
import mx.rpc.events.ResultEvent;
import resources.Raffles;
public class TestRaffles extends BaseTestCase
{
private var raffles:Raffles;
public function TestRaffles(name : String = null)
{
super(name);
fixtures(["raffles"]);
raffles = new Raffles();
}
public function testRemoteFindRaffle():void
{
assertRemote(raffles.show(1));
}
public function assertRemote_testRemoteFindRaffle(data:Object):void
{
Assert.assertTrue("Raffle show successfully called", data is ResultEvent);
assertEquals("MyString", data.result.name);
}
}
}
Note this code is not yet a plugin and is using code you can find here:. I was starting to use it on multiple projects, so I thought it was time to find a home for it. Also, it is using the org.onrails.rails.ActiveResourceClient Flex class. I would recommend that you use Alex MacCaw’s ActiveResource for ActionScript. I still need to talk with Alex and integrate this fixture loading code with his code. Under the hood, the tests build on the addAsync method; we just add the convenience assertRemote function to set up all the callbacks.
To make this work for your Flex with Rails project, you need to add fixtures_controller.rb to your controllers and set up the following routes:
if RAILS_ENV == "test"
  map.resources :fixtures, :new => { :test_results => :post }
  map.crossdomain '/crossdomain.xml', :controller => 'fixtures', :action => 'crossdomain'
end
You need to extend your Flex TestCase from tests.BaseTestCase.
Enjoy,
Daniel. | http://onrails.org/2007/11/05/sweet-way-to-write-flex-unit-tests-for-rails.html | CC-MAIN-2018-47 | en | refinedweb |
Linked List Assignment
Budget: $10-30 USD

[login to view URL] and [login to view URL] needed for the assignment.
Your methods must have the most efficient runtime possible. You cannot use the iterator object of the doubly linked list.

boolean contains(AnyType value)
This method returns true if this list contains the specified value, otherwise it returns false.
int indexOf(AnyType value)
This method returns the index of the first occurrence of the specified value in this list, or -1 if this list does not contain the value.
int lastIndexOf(AnyType value)
This method returns the index of the last occurrence of the specified value in this list, or -1 if this list does not contain the value.
boolean removeFirstOccurrence(AnyType value)
This method removes the first occurrence of the specified value in this list (when traversing the list from head to tail). If the list does not contain the value, it is unchanged. The method returns true if the list contained the specified value, otherwise it returns false.
boolean removeLastOccurence(AnyType value)
This method removes the last occurrence of the specified value in this list (when traversing the list from head to tail). If the list does not contain the value, it is unchanged. The method returns true if the list contained the specified value, otherwise it returns false.
AnyType[] toArray()
This method returns an array containing all of the values in this list in proper sequence (from first to last value).
You do not need to write any other code for the assignment, but you will need to thoroughly test your implementation of the additional methods. If your code compiles and executes without significant runtime errors, then the grade computes as follows:
Followed proper submission instructions, 3 points:
1. Was the file submitted a zip file.
2. The zip file has the correct filename.
3. The contents of the zip file are in the correct format.
List Iterator execution:
4. The clone method works and executes properly, 3 points.
5. The contains method works and executes properly, 3 points.
6. The indexOf method works and executes properly, 2 points.
7. The lastIndexOf method works and executes properly, 2 points.
8. The removeFirstOccurrence method works and executes properly, 2 points.
9. The removeLastOccurrence method works and executes properly, 2 points.
10. The toArray method works and executes properly, 3 points.
Late submission penalty: assignments submitted after the due date are subjected to a 4 point deduction for each day late.
Submission Instructions
You’ll place the [login to view URL] file containing your implementation of the doubly linked list in a Zip file. The file should NOT be a 7z or rar file! You can follow the directions below for creating a zip file depending on the operating system running on the computer containing your assignment’s [login to view URL] file.
Creating a Zip file in Microsoft Windows (any version):
1. Right-click the [login to view URL] file to display a pop-up menu.
2. Click on Send to.
3. Click on Compressed (zipped) Folder.
4. Rename your Zip file as described below.
5. Follow the directions below to submit your assignment.
Creating a Zip file in Mac OS X:
1. Click File on the menu bar.
2. Click on Compress “[login to view URL]”.
3. Mac OS X creates the file DoublyLinkedList.java.zip.
4. Rename [login to view URL] as described below.
5. Follow the directions below to submit your assignment.

Open the list of courses you're taking this semester. Click on Content in the green area on the left side of the webpage.
You will see the Assignment 1 – Linked List Assignment.
Click on the assignment.
Upload your Zip file and then click the submit button to submit your assignment.

public interface List<AnyType>
{
    int size();
    boolean isEmpty();
    AnyType get(int index);
    AnyType set(int index, AnyType newValue);
    boolean add(AnyType newValue);
    void add(int index, AnyType newValue);
    AnyType remove(int index);
    Iterator<AnyType> iterator();
}
—————————
[login to view URL]
import java.util.ConcurrentModificationException;
import java.util.Iterator;
import java.util.NoSuchElementException;
public class DoublyLinkedList<AnyType> implements List<AnyType>
{
private static class Node<AnyType>
{
private AnyType data;
private Node<AnyType> prev;
private Node<AnyType> next;

public Node(AnyType d, Node<AnyType> p, Node<AnyType> n)
{ data = d; prev = p; next = n; }

public AnyType getData() { return data; }
public void setData(AnyType d) { data = d; }
public Node<AnyType> getPrev() { return prev; }
public void setPrev(Node<AnyType> p) { prev = p; }
public Node<AnyType> getNext() { return next; }
public void setNext(Node<AnyType> n) { next = n; }
}
private int theSize;
private Node<AnyType> header;
private Node<AnyType> trailer;
private int modCount = 0;

public DoublyLinkedList()
{
header = new Node<>(null, null, null);
trailer = new Node<>(null, header, null);
header.setNext(trailer);
theSize = 0;
}
public int size()
{
return theSize;
}
public boolean isEmpty()
{
return (size() == 0);
}
private Node<AnyType> getNode(int index)
{
return (getNode(index, 0, size()-1));
}
private Node<AnyType> getNode(int index, int lower, int upper)
{
Node<AnyType> currNode;
if (index < lower || index > upper)
throw new IndexOutOfBoundsException();
int n = size();
if (index < n/2)
{
currNode = header.getNext();
for (int i = 0; i < index; i++) currNode = currNode.getNext();
}
else
{
currNode = trailer;
for (int i = n; i > index; i--) currNode = currNode.getPrev();
}
return currNode;
}
public AnyType get(int index)
{
Node<AnyType> indexNode = getNode(index);
return indexNode.getData();
}
public AnyType set(int index, AnyType newValue)
{
Node<AnyType> indexNode = getNode(index);
AnyType oldValue = indexNode.getData();
indexNode.setData(newValue);
return oldValue;
}
public boolean add(AnyType newValue)
{
add(size(), newValue);
return true;
}
public void add(int index, AnyType newValue)
{
addBefore(getNode(index, 0, size()), newValue);
}
private void addBefore(Node<AnyType> nextNode, AnyType newValue)
{
Node<AnyType> prevNode = nextNode.getPrev();
Node<AnyType> newNode = new Node<>(newValue, prevNode, nextNode);
prevNode.setNext(newNode);
nextNode.setPrev(newNode);
theSize++;
modCount++;
}
public AnyType remove(int index)
{
return remove(getNode(index));
}
private AnyType remove(Node<AnyType> currNode)
{
Node<AnyType> prevNode = currNode.getPrev();
Node<AnyType> nextNode = currNode.getNext();
prevNode.setNext(nextNode);
nextNode.setPrev(prevNode);
theSize--;
modCount++;
return currNode.getData();
}
public DoublyLinkedList<AnyType> clone()
{
}
public boolean contains(AnyType value)
{
}
public int indexOf(AnyType value)
{
}
public int lastIndexOf(AnyType value)
{
}
public boolean removeFirstOccurrence(AnyType value)
{
}
public boolean removeLastOccurence(AnyType value)
{
}
public AnyType[] toArray()
{
}
public Iterator<AnyType> iterator()
{
return new LinkedListIterator();
}
private class LinkedListIterator implements Iterator<AnyType>
{
private Node<AnyType> cursor;
private int expectedModCount;
private boolean okToRemove;
LinkedListIterator()
{
cursor = header.getNext();
expectedModCount = modCount;
okToRemove = false;
}
public boolean hasNext()
{
return (cursor != trailer);
}
public AnyType next()
{
if (modCount != expectedModCount)
throw new ConcurrentModificationException();
if (!hasNext())
throw new NoSuchElementException();
AnyType nextValue = cursor.getData();
cursor = cursor.getNext();
okToRemove = true;
return nextValue;
}
public void remove()
{
if (modCount != expectedModCount)
throw new ConcurrentModificationException();
if (!okToRemove)
throw new IllegalStateException();
DoublyLinkedList.this.remove(cursor.getPrev());
expectedModCount++;
okToRemove = false;
}
}
} | https://www.my.freelancer.com/projects/software-architecture/linked-list-assignment/ | CC-MAIN-2018-47 | en | refinedweb |
Introduction
Machine Learning is a fast evolving field – but a few things would remain as they were years ago. One such thing is ability to interpret and explain your machine learning models. If you build a model and can not explain it to your business users – it is very unlikely that it will see the light of the day.
Can you imagine integrating a model into your product without understanding how it works? Or which features are impacting your final result?
In addition to backing from stakeholders, we as data scientists benefit from interpreting our work and improving upon it. It’s a win-win situation all around!
The first article of this fast.ai machine learning course saw an incredible response from our community. I’m delighted to share part 2 of this series, which primarily deals with how you can interpret a random forest model. We will understand the theory and also implement it in Python to solidify our grasp on this critical concept.
As always, I encourage you to replicate the code on your own machine while you go through the article. Experiment with the code and see how different your results are from what I have covered in this article. This will help you understand the different facets of both the random forest algorithm and the importance of interpretability.
Table of contents
- Overview of Part 1 (Lessons 1 and 2)
- Introduction to Machine Learning : Lesson 3
2.1 Building a Random Forest
2.2 Confidence Based on Tree Variance
2.3 Feature Importance
- Introduction to Machine learning : Lesson 4
3.1 One Hot Encoding
3.2 Removing Redundant features
3.3 Partial Dependence
3.4 Tree Interpreter
- Introduction to Machine Learning : Lesson 5
4.1 Extrapolation
4.2 Random Forest from scratch
- Additional Topics
Overview of Part 1 (Lessons 1 and 2)
Before we dive into the next lessons of this course, let’s quickly recap what we covered in the first two lessons. This will give you some context as to what to expect moving forward.
- Data exploration and preprocessing : Explored the bulldozer dataset (link), imputed missing values and converted the categorical variables into numeric columns that are accepted by the ml models. We also created multiple features from the date column using date_part function from fastai library.
- Building a Random Forest model and creating a validation set: We implemented a random forest and calculated the score on the train set. In order to make sure that the model is not overfitting, a validation set was created. Further we tuned the parameters to improve the performance of the model.
- Introduction to Bagging: The concept of bagging was introduced in the second video. We also visualized a single tree that provided a better understanding about how random forests work.
We will continue working on the same dataset in this article. We will have a look at what are the different variables in the dataset and how can we build a random forest model to make valuable interpretations.
Alright, it’s time to fire up our Jupyter notebooks and dive right in to lesson#3!
Introduction to Machine Learning: Lesson 3
You can access the notebook for this lesson here. This notebook will be used for all the three lessons covered in this video. You can watch the entire lesson in the below video (or just scroll down and start implementing things right away):
NOTE: Jeremy Howard regularly provides various tips that can be used for solving a certain problem more efficiently, as we saw in the previous article as well. A part of this video is about how to deal with very large datasets. I have included this in the last section of the article so we can focus on the topic at hand first.
Let’s continue from where we left off at the end of lesson 2. We had created new features using the date column and dealt with the categorical columns as well. We will load the processed dataset which includes our newly engineered features and the log of the saleprice variable (since the evaluation metric is RMSLE):
#importing necessary libraries
%load_ext autoreload
%autoreload 2
%matplotlib inline

from fastai.imports import *
from fastai.structured import *
from pandas_summary import DataFrameSummary
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from IPython.display import display
from sklearn import metrics

#loading preprocessed file
PATH = "data/bulldozers/"
df_raw = pd.read_feather('tmp/bulldozers-raw')
df_trn, y_trn, nas = proc_df(df_raw, 'SalePrice')
We will define the necessary functions which we’ll be frequently using throughout our implementation.
#creating a validation set
def split_vals(a, n): return a[:n].copy(), a[n:].copy()

n_valid = 12000
n_trn = len(df_trn) - n_valid
X_train, X_valid = split_vals(df_trn, n_trn)
y_train, y_valid = split_vals(y_trn, n_trn)
raw_train, raw_valid = split_vals(df_raw, n_trn)

#define function to calculate rmse and print score
def rmse(x, y): return math.sqrt(((x-y)**2).mean())

def print_score(m):
    res = [rmse(m.predict(X_train), y_train), rmse(m.predict(X_valid), y_valid),
           m.score(X_train, y_train), m.score(X_valid, y_valid)]
    if hasattr(m, 'oob_score_'): res.append(m.oob_score_)
    print(res)
The next step will be to implement a random forest model and interpret the results to understand our dataset better. We have so far learned that random forest is a group of many trees, each trained on a different subset of data points and features. Each individual tree is as different as possible, capturing unique relations from the dataset. We make predictions by running each row through each tree and taking the average of the values at the leaf node. This average is taken as the final prediction for the row.
While interpreting the results, the process should be interactive and quick to run. To make this happen, we will make two changes in the code (as compared to what we implemented in the previous article):
- Take a subset of the data:
set_rf_samples(50000)
We’re only using a sample as working with the entire data will take a long time to run. An important thing to note here is that the sample should not be very small. This might end up giving a different result and that’ll be detrimental to our entire project. A sample size of 50,000 works well.
#building a random forest model
m = RandomForestRegressor(n_estimators=40, min_samples_leaf=3, max_features=0.5, n_jobs=-1, oob_score=True)
m.fit(X_train, y_train)
print_score(m)
- Make predictions in parallel
Previously, we made predictions for each row using every single tree and then we calculated the mean of the results and the standard deviation.
%time preds = np.stack([t.predict(X_valid) for t in m.estimators_])
np.mean(preds[:,0]), np.std(preds[:,0])

CPU times: user 1.38 s, sys: 20 ms, total: 1.4 s
Wall time: 1.4 s
You might have noticed that this works in a sequential manner. Instead, we can call the predict function on multiple trees in parallel! This can be achieved using the parallel_trees function in the fastai library.
def get_preds(t): return t.predict(X_valid)

%time preds = np.stack(parallel_trees(m, get_preds))
np.mean(preds[:,0]), np.std(preds[:,0])
The time taken here is less and the results are exactly the same! We will now create a copy of the data so that any changes we make do not affect the original dataset.
x = raw_valid.copy()
Once we have the predictions, we can calculate the RMSLE to determine how well the model is performing. But the overall value does not help us identify how close the predicted values are for a particular row or how confident we are that the predictions are correct. We will look at the standard deviation for the rows in this case.
If a row is different from those present in the train set, each tree will give different values as predictions, which means that the standard deviation will be high. On the other hand, the trees will make almost identical predictions for a row that is quite similar to the ones present in the train set, i.e., the standard deviation will be low. So, based on the value of the standard deviation, we can decide how confident we are about the predictions.
Let’s save these predictions and standard deviations:
x['pred_std'] = np.std(preds, axis=0)
x['pred'] = np.mean(preds, axis=0)
Confidence based on Tree Variance
Now, let’s take up a variable from the dataset, visualize its distribution, and understand what it actually represents. We’ll begin with the Enclosure variable.
- Figuring out the value count of each category present in the variable Enclosure:
x.Enclosure.value_counts().plot.barh()
- For each category, below are the mean values of saleprice, prediction and standard deviation.
flds = ['Enclosure', 'SalePrice', 'pred', 'pred_std']
enc_summ = x[flds].groupby('Enclosure', as_index=False).mean()
enc_summ
The actual sale price and the prediction values are almost similar in three categories – ‘EROPS’, ‘EROPS w AC’, ‘OROPS’ (the remaining have null values). Since these null value columns do not add any extra information, we will drop them and visualize the plots for salesprice and prediction:
enc_summ = enc_summ[~pd.isnull(enc_summ.SalePrice)]
enc_summ.plot('Enclosure', 'pred', 'barh', xerr='pred_std', alpha=0.6, xlim=(0,11));
Note that the small black bars represent standard deviation. In the same way, let’s look at another variable – ProductSize.
#the value count for each category raw_valid.ProductSize.value_counts().plot.barh();
#category wise mean for sale price, prediction and standard deviation
flds = ['ProductSize', 'SalePrice', 'pred', 'pred_std']
summ = x[flds].groupby(flds[0]).mean()
summ
We will take a ratio of the standard deviation values and the sum of predictions in order to compare which category has a higher deviation.
(summ.pred_std/summ.pred).sort_values(ascending=False)
ProductSize Large 0.034871 Compact 0.034297 Small 0.030545 Large / Medium 0.027799 Medium 0.026928 Mini 0.026247 dtype: float64
The standard deviation is higher for the ‘Large’ and ‘Compact’ categories. Why do you think that is? Take a moment to ponder the answer before reading on.
Have a look at the bar plot of value counts for each category in ProductSize. Found the reason? We have fewer rows for these two categories, so the model gives relatively poor predictions for them.
Using this information, we can say that we are more confident about the predictions for the mini, medium and medium/large product size, and less confident about the small, compact and large ones.
Feature Importance
Feature importance is one of the key aspects of a machine learning model. Understanding which variable is contributing the most to a model is critical to interpreting the results. This is what data scientists strive for when building models that need to be explained to non-technical stakeholders.
Our dataset has multiple features and it is often difficult to understand which feature is dominant. This is where the feature importance function of random forest is so helpful. Let’s look at the top 10 most important features for our current model (including visualizing them by their importance):
fi = rf_feat_importance(m, df_trn)
fi[:10]
fi.plot('cols', 'imp', figsize=(10,6), legend=False);
That’s a pretty intuitive plot. Here’s a bar plot visualization of the top 30 features:
def plot_fi(fi): return fi.plot('cols','imp','barh', figsize=(12,7), legend=False)

plot_fi(fi[:30]);
Clearly, YearMade is the most important feature, followed by Coupler_System. Most of the other features seem to have little importance in the final model. Let’s verify this statement by removing them and checking whether this affects the model’s performance.
So, we will build a random forest model using only the features that have a feature importance greater than 0.005:
to_keep = fi[fi.imp>0.005].cols
len(to_keep)
24
df_keep = df_trn[to_keep].copy()
X_train, X_valid = split_vals(df_keep, n_trn)

m = RandomForestRegressor(n_estimators=40, min_samples_leaf=3, max_features=0.5, n_jobs=-1, oob_score=True)
m.fit(X_train, y_train)
print_score(m)
[0.20685390156773095, 0.24454842802383558, 0.91015213846294174, 0.89319840835270514, 0.8942078920004991]
When you think about it, removing redundant columns should not decrease the model score, right? And in this case, the model performance has slightly improved. Some of the features we dropped earlier might have been highly collinear with others, so removing them did not affect the model adversely. Let’s check feature importance again to verify our hypothesis:
fi = rf_feat_importance(m, df_keep)
plot_fi(fi)
The difference between the feature importance of the YearMade and Coupler_System variables is more significant. From the list of features removed, some features were highly collinear to YearMade, resulting in distribution of feature importance between them.
On removing these features, we can see that the difference between the importance of YearMade and Coupler_System has increased from the previous plot. Here is a detailed explanation of how feature importance is actually calculated:
- Calculate the r-square considering all the columns: Suppose in this case it comes out to be 0.89
- Now randomly shuffle the values for any one column, say YearMade. This column has no relation to the target variable
- Calculate the r-square again: The r-square has dropped to 0.8. This shows that the YearMade variable is an important feature
- Take another variable, say Enclosure, and shuffle it randomly
- Calculate the r-square: Now let’s say the r-square is coming to be 0.84. This indicates that the variable is important but comparatively less so than the YearMade variable
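The shuffle-and-rescore procedure above can be sketched in a few lines of scikit-learn. The toy data below is illustrative (it is not the bulldozers set), and the exact drop values depend on the random seed:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Toy data: only feature 0 really drives the target
rng = np.random.RandomState(0)
X = rng.rand(500, 3)
y = 5 * X[:, 0] + rng.rand(500)

m = RandomForestRegressor(n_estimators=40, random_state=0).fit(X, y)
base_score = m.score(X, y)  # r-squared with all columns intact

drops = []
for col in range(X.shape[1]):
    X_shuffled = X.copy()
    rng.shuffle(X_shuffled[:, col])          # break this column's relation to y
    drops.append(base_score - m.score(X_shuffled, y))
    print(f"feature {col}: r2 drop = {drops[-1]:.3f}")
```

The column whose shuffling causes the largest r-squared drop (feature 0 here) is the most important one, which is exactly the comparison the YearMade/Enclosure example walks through.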
And that wraps up the implementation of lesson #3! I encourage you to try out these codes and experiment with them on your own machine to truly understand how each aspect of a random forest model works.
Introduction to Machine Learning : Lesson 4
In this lesson, Jeremy Howard gives a quick overview of lesson 3 initially before introducing a few important concepts like One Hot Encoding, Dendrogram, and Partial Dependence. Below is the YouTube video of the lecture (or you can jump straight to the implementation below):
One-Hot Encoding
In the first article of the series, we learned that a lot of machine learning models cannot deal with categorical variables. Using proc_df, we converted the categorical variables into numeric columns. For example, we have a variable UsageBand, which has three levels -‘High’, ‘Low’, and ‘Medium’. We replaced these categories with numbers (0, 1, 2) to make things easier for ourselves.
Surely there must be another way of handling this that takes a significantly less effort on our end? There is!
Instead of converting these categories into numbers, we can create separate columns for each category. The column UsageBand can be replaced with three columns:
- UsageBand_low
- UsageBand_medium
- UsageBand_high
Each of these has 1s and 0s as the values. This is called one-hot encoding.
What happens when there are far more than 3 categories? What if we have more than 10? Let’s take an example to understand this.
Assume we have a column ‘zip_code’ in the dataset which has a unique value for every row. Using one-hot encoding here will not be beneficial for the model, and will end up increasing the run time (a lose-lose scenario).
Using proc_df in fastai, we can perform one-hot encoding by passing the parameter max_n_cat. Here, we have set max_n_cat=7, which means that variables having more than 7 levels (such as zip code) will not be encoded, while all the other categorical variables will be one-hot encoded.

df_trn2, y_trn, nas = proc_df(df_raw, 'SalePrice', max_n_cat=7)
X_train, X_valid = split_vals(df_trn2, n_trn)

m = RandomForestRegressor(n_estimators=40, min_samples_leaf=3, max_features=0.6, n_jobs=-1, oob_score=True)
m.fit(X_train, y_train)
print_score(m)
[0.2132925755978791, 0.25212838463780185, 0.90966193351324276, 0.88647501408921581, 0.89194147155121262]
This can be helpful in determining if a particular level in a particular column is important or not. Since we have separated each level for the categorical variables, plotting feature importance will show us comparisons between them as well:
fi = rf_feat_importance(m, df_trn2) fi[:25]
Earlier, YearMade was the most important feature in the dataset, but EROPS w AC has a higher feature importance in the above chart. Curious what this variable is? Don’t worry, we will discuss what EROPS w AC actually represents in the following section.
Removing redundant features
So far, we’ve understood that having a high number of features can affect the performance of the model and also make it difficult to interpret the results. In this section, we will see how we can identify redundant features and remove them from the data.
We will use cluster analysis, more specifically hierarchical clustering, to identify similar variables. In this technique, we look at every object and identify which of them are the closest in terms of features. These variables are then replaced by their midpoint. To understand this better, let us have a look at the cluster plot for our dataset:
from scipy.cluster import hierarchy as hc

corr = np.round(scipy.stats.spearmanr(df_keep).correlation, 4)
corr_condensed = hc.distance.squareform(1-corr)
z = hc.linkage(corr_condensed, method='average')

fig = plt.figure(figsize=(16,10))
dendrogram = hc.dendrogram(z, labels=df_keep.columns, orientation='left', leaf_font_size=16)
plt.show()
From the above dendrogram plot, we can see that the variables SaleYear and SaleElapsed are very similar to each other and tend to represent the same thing. Similarly, Grouser_Tracks, Hydraulics_Flow, and Coupler_System are highly correlated. The same happens with ProductGroup & ProductGroupDesc and fiBaseModel & fiModelDesc. We will remove each of these features one by one and see how it affects the model performance.
First, we define a function to calculate the Out of Bag (OOB) score (to avoid repeating the same lines of code):
#define function to calculate oob score def get_oob(df): m = RandomForestRegressor(n_estimators=30, min_samples_leaf=5, max_features=0.6, n_jobs=-1, oob_score=True) x, _ = split_vals(df, n_trn) m.fit(x, y_train) return m.oob_score_
For the sake of comparison, below is the original OOB score before dropping any feature:
get_oob(df_keep)

0.89019425494301454
We will now drop one variable at a time and calculate the score:
for c in ('saleYear', 'saleElapsed', 'fiModelDesc', 'fiBaseModel', 'Grouser_Tracks', 'Coupler_System'):
    print(c, get_oob(df_keep.drop(c, axis=1)))
saleYear 0.889037446375 saleElapsed 0.886210803445 fiModelDesc 0.888540591321 fiBaseModel 0.88893958239 Grouser_Tracks 0.890385236272 Coupler_System 0.889601052658
This hasn’t heavily affected the OOB score. Let us now remove one variable from each pair and check the overall score:
to_drop = ['saleYear', 'fiBaseModel', 'Grouser_Tracks']
get_oob(df_keep.drop(to_drop, axis=1))

0.88858458047200739
The score has changed from 0.8901 to 0.8885. We will use these selected features on the complete dataset and see how our model performs:
df_keep.drop(to_drop, axis=1, inplace=True)
X_train, X_valid = split_vals(df_keep, n_trn)

reset_rf_samples()
m = RandomForestRegressor(n_estimators=40, min_samples_leaf=3, max_features=0.5, n_jobs=-1, oob_score=True)
m.fit(X_train, y_train)
print_score(m)
[0.12615142089579687, 0.22781819082173235, 0.96677727309424211, 0.90731173105384466, 0.9084359846323049]
Once these variables are removed from the original dataframe, the model’s score turns out to be 0.907 on the validation set.
Partial Dependence
I’ll introduce another technique here that has the potential to help us understand the data better. This technique is called Partial Dependence and it’s used to find out how features are related to the target variable.
from pdpbox import pdp
from plotnine import *

set_rf_samples(50000)
df_trn2, y_trn, nas = proc_df(df_raw, 'SalePrice', max_n_cat=7)
X_train, X_valid = split_vals(df_trn2, n_trn)
m = RandomForestRegressor(n_estimators=40, min_samples_leaf=3, max_features=0.6, n_jobs=-1)
m.fit(X_train, y_train);
plot_fi(rf_feat_importance(m, df_trn2)[:10]);
Let us compare YearMade and SalePrice. If you create a scatter plot for YearMade and SaleElapsed, you’d notice that some vehicles were created in the year 1000, which is not practically possible.
df_raw.plot('YearMade', 'saleElapsed', 'scatter', alpha=0.01, figsize=(10,8));
These could be the values which were initially missing and have been replaced with 1,000. To keep things practical, we will focus on values that are greater than 1930 for the YearMade variable and create a plot using the popular ggplot package.
x_all = get_sample(df_raw[df_raw.YearMade>1930], 500)
ggplot(x_all, aes('YearMade', 'SalePrice')) + stat_smooth(se=True, method='loess')
This plot shows that the sale price is higher for more recently made vehicles, except for one drop between 1991 and 1997. There could be various reasons for this drop – recession, customers preferred vehicles of lower price, or some other external factor. To understand this, we will create a plot that shows the relationship between YearMade and SalePrice, given that all other feature values are the same.
x = get_sample(X_train[X_train.YearMade>1930], 500)

def plot_pdp(feat, clusters=None, feat_name=None):
    feat_name = feat_name or feat
    p = pdp.pdp_isolate(m, x, feat)
    return pdp.pdp_plot(p, feat_name, plot_lines=True,
                        cluster=clusters is not None, n_cluster_centers=clusters)

plot_pdp('YearMade')
This plot is obtained by fixing the YearMade for each row to 1960, then 1961, and so on. In simple words, we take a set of rows and calculate SalePrice for each row when YearMade is 1960. Then we take the whole set again and calculate SalePrice by setting YearMade to 1962. We repeat this multiple times, which results in the multiple blue lines we see in the above plot. The dark black line represents the average. This confirms our hypothesis that the sale price increases for more recently manufactured vehicles.
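pdpbox hides the mechanics, but the loop it performs can be written out by hand: fix one feature to a grid value in every row, predict, and average. The sketch below uses a hypothetical toy model rather than the bulldozer forest:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def partial_dependence(m, X, col, grid_values):
    # For each grid value: overwrite that feature in every row,
    # predict, and average -> one point on the PDP curve.
    curve = []
    for v in grid_values:
        X_mod = X.copy()
        X_mod[:, col] = v
        curve.append(m.predict(X_mod).mean())
    return np.array(curve)

# Toy demonstration: the target grows with feature 0
rng = np.random.RandomState(0)
X = rng.rand(300, 2)
y = 3 * X[:, 0] + rng.rand(300) * 0.1
m = RandomForestRegressor(n_estimators=20, random_state=0).fit(X, y)

curve = partial_dependence(m, X, 0, np.linspace(0.1, 0.9, 5))
print(curve)  # an increasing curve, like the YearMade plot
```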
Similarly, you can check for other features like SaleElapsed, or YearMade and SaleElpased together. Performing the same step for the categories under Enclosure (since Enclosure_EROPS w AC proved to be one of the most important features), the resulting plot looks like this:
plot_pdp(['Enclosure_EROPS w AC', 'Enclosure_EROPS', 'Enclosure_OROPS'], 5, 'Enclosure')
Enclosure_EROPS w AC seems to have a higher sale price as compared to the other two variables (which have almost equal values). So what in the world is EROPS? It’s an enclosed rollover protective structure which can be with or without an AC. And obviously, EROPS with an AC will have a higher sale price.
Tree Interpreter
Tree interpreter is another interesting technique that analyzes each individual row in the dataset. We have seen so far how to interpret a model, and how each feature (and the levels in each categorical feature) affects the model predictions. We will now use the tree interpreter to visualize the predictions for a particular row.
Let’s import the tree interpreter library and evaluate the results for the first row in the validation set.
from treeinterpreter import treeinterpreter as ti

df_train, df_valid = split_vals(df_raw[df_keep.columns], n_trn)
row = X_valid.values[None,0]
row
array([[4364751, 2300944, 665, 172, 1.0, 1999, 3726.0, 3, 3232, 1111, 0, 63, 0, 5, 17, 35, 4, 4, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 12, 0, 0, 0, 0, 0, 3, 0, 0, 0, 2, 19, 29, 3, 2, 1, 0, 0, 0, 0, 0, 2010, 9, 37, 16, 3, 259, False, False, False, False, False, False, 7912, False, False]], dtype=object)
These are the original values of every column for the first row of the validation set. Using the tree interpreter, we will make a prediction for this row with the random forest model. The tree interpreter gives three results: prediction, bias and contributions.
- Predictions are the values predicted by the random forest model
- Bias is the average value of the target variable for the complete dataset
- Contributions are the amount by which the predicted value was changed by each column
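The three outputs obey a simple invariant: for each row, prediction = bias + sum of contributions. A quick sanity check with made-up numbers (these are illustrative, not the model's actual output):

```python
# Invariant behind the tree interpreter output: for every row,
#   prediction = bias + sum(contributions)
bias = 10.106                      # mean target value over the training set
contributions = {                  # per-feature adjustments (illustrative values)
    'ProductSize': -0.547,
    'age':         -0.125,
    'YearMade':    +0.072,
}
prediction = bias + sum(contributions.values())
print(round(prediction, 3))
```

Each contribution is the amount a feature's splits moved the prediction away from the dataset average, so the pieces always add back up to the final prediction.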
For example, the split Coupler_System < 0.5 increased the value from 10.189 to 10.345, and Enclosure less than 0.2 reduced it from 10.345 to 9.955, and so on. The contributions represent these changes in the predicted values. To understand this in a better way, take a look at the table below:
In this table, we have stored the value against each feature and the split point (verify from the image above). The change is the difference between the value before and after the split. These are plotted using a waterfall chart in Excel. The change seen here is for an individual tree. An average of change across all the trees in the random forest is given by contribution in the tree interpreter.
Printing the prediction and bias for the first row in our validation set:
prediction, bias, contributions = ti.predict(m, row)
prediction[0], bias[0]
(9.1909688098736275, 10.10606580677884)
The contribution of each feature for this first row:
idxs = np.argsort(contributions[0])
[o for o in zip(df_keep.columns[idxs], df_valid.iloc[0][idxs], contributions[0][idxs])]
[('ProductSize', 'Mini', -0.54680742853695008), ('age', 11, -0.12507089451852943), ('fiProductClassDesc', 'Hydraulic Excavator, Track - 3.0 to 4.0 Metric Tons', -0.11143111128570773), ('fiModelDesc', 'KX1212', -0.065155113754146801), ('fiSecondaryDesc', nan, -0.055237427792181749), ('Enclosure', 'EROPS', -0.050467175593900217), ('fiModelDescriptor', nan, -0.042354676935508852), ('saleElapsed', 7912, -0.019642242073500914), ('saleDay', 16, -0.012812993479652724), ('Tire_Size', nan, -0.0029687660942271598), ('SalesID', 4364751, -0.0010443985823001434), ('saleDayofyear', 259, -0.00086540581130196688), ('Drive_System', nan, 0.0015385818526195915), ('Hydraulics', 'Standard', 0.0022411701338458821), ('state', 'Ohio', 0.0037587658190299409), ('ProductGroupDesc', 'Track Excavators', 0.0067688906745931197), ('ProductGroup', 'TEX', 0.014654732626326661), ('MachineID', 2300944, 0.015578052196894499), ('Hydraulics_Flow', nan, 0.028973749866174004), ('ModelID', 665, 0.038307429579276284), ('Coupler_System', nan, 0.052509808150765114), ('YearMade', 1999, 0.071829996446492878)]
Note: If you are watching the video simultaneously with this article, the values may differ. This is because initially the values were sorted based on index which presented incorrect information. This was corrected in the later video and also in the notebook we have been following throughout the lesson.
Introduction to Machine Learning : Lesson 5
You should have a pretty good understanding of the random forest algorithm at this stage. In lesson #5, we will focus on how to identify whether a model is generalizing well or not. Jeremy Howard also talks about tree interpreters, contributions, and understanding them using a waterfall chart (which we have already covered in the previous lesson, so we will not elaborate further). The primary focus of the video is on extrapolation and understanding how we can build a random forest algorithm from scratch.
Extrapolation
A model might not perform well if it’s built on data spanning four years and then used to predict the values for the next one year. In other words, the model does not extrapolate. We have previously seen that there is a significant difference between the training score and validation score, which might be because our validation set consists of a set of recent data points (and the model is using time dependent variables for making predictions).
Also, the validation score is worse than the OOB score which should not be the case, right? A detailed explanation of the OOB score has been given in part 1 of the series. One way of fixing this problem is by attacking it directly – deal with the time dependent variables.
To figure out which variables are time dependent, we will create a random forest model that tries to predict if a particular row is in the validation set or not. Then we will check which variable has the highest contribution in making a successful prediction.
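This trick is sometimes called adversarial validation. Its intuition can be shown without a random forest: if a time-like feature separates the two sets, even a single threshold classifies rows almost perfectly. The numbers below are invented for illustration:

```python
# Rows: (saleElapsed, is_valid). Validation rows are the most recent ones,
# so a time-like feature separates the two sets on its own.
rows = [(100, 0), (250, 0), (400, 0), (620, 0), (810, 1), (905, 1), (990, 1)]

def threshold_accuracy(rows, threshold):
    """Classify a row as 'is_valid = 1' when the feature exceeds the threshold."""
    hits = sum((feat > threshold) == bool(label) for feat, label in rows)
    return hits / len(rows)

# Try every midpoint between consecutive feature values.
feats = sorted(f for f, _ in rows)
best = max(threshold_accuracy(rows, (a + b) / 2)
           for a, b in zip(feats, feats[1:]))
print(best)
```

A classifier reaching near-perfect accuracy on "is this row in the validation set?" is the warning sign: whatever features it relies on are time dependent.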
Defining the target variable:
df_ext = df_keep.copy()
df_ext['is_valid'] = 1
df_ext.is_valid[:n_trn] = 0
x, y, nas = proc_df(df_ext, 'is_valid')
m = RandomForestClassifier(n_estimators=40, min_samples_leaf=3,
                           max_features=0.5, n_jobs=-1, oob_score=True)
m.fit(x, y);
m.oob_score_
0.99998753505765037
The model is able to separate the train and validation sets almost perfectly, with an OOB score of 0.99998, and the most important features are SalesID, saleElapsed and MachineID.
fi = rf_feat_importance(m, x)
fi[:10]
- SalesID is certainly not a random identifier; it should ideally be in increasing order
- Looks like MachineID has the same trend and is able to separate the train and validation sets
- saleElapsed is the number of days from the first date in the dataset. Since our validation set has the most recent values from the complete data, saleElapsed would be higher in this set. To confirm the hypothesis, here is the distribution of the three variables in the train and validation sets:
feats=['SalesID', 'saleElapsed', 'MachineID']
(X_train[feats]/1000).describe()
(X_valid[feats]/1000).describe()
It is evident from the tables above that the mean value of these three variables is significantly different. We will drop these variables, fit the random forest again and check the feature importance:
x.drop(feats, axis=1, inplace=True)
m = RandomForestClassifier(n_estimators=40, min_samples_leaf=3,
                           max_features=0.5, n_jobs=-1, oob_score=True)
m.fit(x, y);
m.oob_score_
0.9789018385789966
fi = rf_feat_importance(m, x)
fi[:10]
Although these variables are obviously time dependent, they can also be important for making the predictions. Before we drop these variables, we need to check how they affect the OOB score. The initial OOB score in a sample is calculated for comparison:
set_rf_samples(50000)
feats=['SalesID', 'saleElapsed', 'MachineID', 'age', 'YearMade', 'saleDayofyear']
X_train, X_valid = split_vals(df_keep, n_trn)
m = RandomForestRegressor(n_estimators=40, min_samples_leaf=3,
                          max_features=0.5, n_jobs=-1, oob_score=True)
m.fit(X_train, y_train)
print_score(m)
[0.21136509778791376, 0.2493668921196425, 0.90909393040946562, 0.88894821098056087, 0.89255408392415925]
Dropping each feature one by one:
for f in feats:
    df_subs = df_keep.drop(f, axis=1)
    X_train, X_valid = split_vals(df_subs, n_trn)
    m = RandomForestRegressor(n_estimators=40, min_samples_leaf=3,
                              max_features=0.5, n_jobs=-1, oob_score=True)
    m.fit(X_train, y_train)
    print(f)
    print_score(m)
SalesID [0.20918653475938534, 0.2459966629213187, 0.9053273181678706, 0.89192968797265737, 0.89245205174299469]
saleElapsed [0.2194124612957369, 0.2546442621643524, 0.90358104739129086, 0.8841980790762114, 0.88681881032219145]
MachineID [0.206612984511148, 0.24446409479358033, 0.90312476862123559, 0.89327205732490311, 0.89501553584754967]
age [0.21317740718919814, 0.2471719147150774, 0.90260198977488226, 0.89089460707372525, 0.89185129799503315]
YearMade [0.21305398932040326, 0.2534570148977216, 0.90555219348567462, 0.88527538596974953, 0.89158854973045432]
saleDayofyear [0.21320711524847227, 0.24629839782893828, 0.90881970943169987, 0.89166441133215968, 0.89272793857941679]
Looking at the results, dropping SalesID, MachineID or saleDayofyear (and, marginally, age) improved the validation score, while dropping saleElapsed or YearMade made it worse. So we will remove SalesID, MachineID and saleDayofyear and fit the random forest on the complete dataset.
reset_rf_samples()
df_subs = df_keep.drop(['SalesID', 'MachineID', 'saleDayofyear'], axis=1)
X_train, X_valid = split_vals(df_subs, n_trn)
m = RandomForestRegressor(n_estimators=40, min_samples_leaf=3,
                          max_features=0.5, n_jobs=-1, oob_score=True)
m.fit(X_train, y_train)
print_score(m)
[0.1418970082803121, 0.21779153679471935, 0.96040441863389681, 0.91529091848161925, 0.90918594039522138]
After removing the time dependent variables, the validation score (0.915) is now better than the OOB score (0.909). We can now play around with other parameters like n_estimators or max_features. To create the final model, Jeremy increased the number of trees to 160 and here are the results:
m = RandomForestRegressor(n_estimators=160, max_features=0.5, n_jobs=-1, oob_score=True)
%time m.fit(X_train, y_train)
print_score(m)
CPU times: user 6min 3s, sys: 2.75 s, total: 6min 6s
Wall time: 16.7 s
[0.08104912951128229, 0.2109679613161783, 0.9865755186304942, 0.92051576728916762, 0.9143700001430598]
The validation score is 0.92 while the RMSE drops to 0.21. A great improvement indeed!
Random Forest from Scratch
We have learned how a random forest model actually works, how the features are selected and how predictions are eventually made. In this section, we will create our own random forest model from absolute scratch. Here is the notebook for this section: Random Forest from scratch.
We’ll start with importing the basic libraries:
%load_ext autoreload
%autoreload 2
%matplotlib inline

from fastai.imports import *
from fastai.structured import *
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from IPython.display import display
from sklearn import metrics
We’ll just use two variables to start with. Once we are confident that the model works well with these selected variables, we can use the complete set of features.
PATH = "data/bulldozers/" df_raw = pd.read_feather('tmp/bulldozers-raw') df_trn, y_trn, nas = proc_df(df_raw, 'SalePrice')) x_sub = X_train[['YearMade', 'MachineHoursCurrentMeter']]
We have loaded the dataset, split it into train and validation sets, and selected two features – YearMade and MachineHoursCurrentMeter. The first thing to think about while building any model from scratch is – what information do we need? So, for a random forest, we need:
- A set of features – x
- A target variable – y
- Number of trees in the random forest – n_trees
- A variable to define the sample size – sample_sz
- A variable for minimum leaf size – min_leaf
- A random seed for testing
Let’s define a class with the inputs as mentioned above and set the random seed to 42.
class TreeEnsemble():
    def __init__(self, x, y, n_trees, sample_sz, min_leaf=5):
        np.random.seed(42)
        self.x,self.y,self.sample_sz,self.min_leaf = x,y,sample_sz,min_leaf
        self.trees = [self.create_tree() for i in range(n_trees)]

    def create_tree(self):
        rnd_idxs = np.random.permutation(len(self.y))[:self.sample_sz]
        return DecisionTree(self.x.iloc[rnd_idxs], self.y[rnd_idxs], min_leaf=self.min_leaf)

    def predict(self, x):
        return np.mean([t.predict(x) for t in self.trees], axis=0)
We have created a function create_tree that will be called as many times as the number assigned to n_trees. The function generates a randomly shuffled set of rows (of size sample_sz) and returns a DecisionTree built on them. We'll look at DecisionTree in a while, but first let's figure out how predictions are created and saved.
We learned earlier that in a random forest model, each single tree makes a prediction for each row and the final prediction is calculated by taking the average of all the predictions. So we will create a predict function, where .predict is used on every tree to create a list of predictions and the mean of this list is calculated as our final value.
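The averaging step can be exercised in isolation with stub trees that always return a constant. The classes below are simplified stand-ins for the notebook's TreeEnsemble and DecisionTree, written without numpy to keep the sketch self-contained:

```python
class StubTree:
    """Stand-in for a fitted tree: predicts a constant for every row."""
    def __init__(self, value):
        self.value = value
    def predict(self, rows):
        return [self.value for _ in rows]

class Ensemble:
    def __init__(self, trees):
        self.trees = trees
    def predict(self, rows):
        # Same idea as TreeEnsemble.predict: average the per-tree
        # predictions column-wise, one mean per input row.
        per_tree = [t.predict(rows) for t in self.trees]
        return [sum(col) / len(col) for col in zip(*per_tree)]

model = Ensemble([StubTree(9.0), StubTree(10.0), StubTree(11.0)])
print(model.predict([[0], [1]]))
```

Because each stub predicts a constant, the ensemble output is simply the mean of those constants for every row, which makes the bagging arithmetic easy to verify by hand.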
The final step is to create the DecisionTree. We first select a feature and split point that gives the least error. At present, this code is only for a single decision. We can make this recursive if the code runs successfully.
class DecisionTree():
    def __init__(self, x, y, idxs=None, min_leaf=5):
        if idxs is None: idxs=np.arange(len(y))
        self.x,self.y,self.idxs,self.min_leaf = x,y,idxs,min_leaf
        self.n,self.c = len(idxs), x.shape[1]
        self.val = np.mean(y[idxs])
        self.score = float('inf')
        self.find_varsplit()

    # This just does one decision; we'll make it recursive later
    def find_varsplit(self):
        for i in range(self.c): self.find_better_split(i)

    # We'll write this later!
    def find_better_split(self, var_idx): pass

    @property
    def split_name(self): return self.x.columns[self.var_idx]

    @property
    def split_col(self): return self.x.values[self.idxs,self.var_idx]

    @property
    def is_leaf(self): return self.score == float('inf')

    def __repr__(self):
        s = f'n: {self.n}; val:{self.val}'
        if not self.is_leaf:
            s += f'; score:{self.score}; split:{self.split}; var:{self.split_name}'
        return s
self.n is the number of rows used in the tree and self.c is the number of columns. self.val is the mean of the target values at the given indices, which serves as the node's prediction. This code is still incomplete and will be continued in the next lesson. Yes, part 3 is coming soon!
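find_better_split is left as a stub above. The idea it will implement can be sketched for a single feature as a naive exhaustive search; this is an illustration of the scoring principle, not the notebook's eventual implementation:

```python
def best_split(xs, ys, min_leaf=1):
    """Try every threshold on one feature; score a split by the summed
    squared error of the two sides around their means (lower is better)."""
    def sse(vals):
        mean = sum(vals) / len(vals)
        return sum((v - mean) ** 2 for v in vals)

    best_score, best_threshold = float('inf'), None
    for threshold in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= threshold]
        right = [y for x, y in zip(xs, ys) if x > threshold]
        if len(left) < min_leaf or len(right) < min_leaf:
            continue  # split would create an undersized leaf
        score = sse(left) + sse(right)
        if score < best_score:
            best_score, best_threshold = score, threshold
    return best_score, best_threshold

score, threshold = best_split([1, 2, 10, 11], [1.0, 1.2, 9.0, 9.5])
print(threshold)
```

On this toy data the search picks the threshold between 2 and 10, where the two sides are most homogeneous, which is exactly the behaviour a single decision node needs.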
Additional Topics
- Reading a large dataset in seconds: The time taken to load a dataset is reduced if we provide the data types of the variables when reading the file itself. Use this dataset, which has over 100 million rows, to see this in action.
types = {'id': 'int64', 'item_nbr': 'int32', 'store_nbr': 'int8',
         'unit_sales': 'float32', 'onpromotion': 'object'}

%%time
df_test = pd.read_csv(f'{PATH}test.csv', parse_dates=['date'],
                      dtype=types, infer_datetime_format=True)
CPU times: user 1min 41s, sys: 5.08 s, total: 1min 46s
Wall time: 1min 48s
- Cardinality: This is the number of levels in a categorical variable. For the UsageBand variable, we had three levels – High, Low and Medium. Thus the cardinality is 3.
- Train-validation-test: It is important to have a validation set to check the performance of the model before we use it on the test set. It often happens that we end up overfitting our model on the validation set. And if the validation set is not a true representative of the test set, then the model will fail as well. So the complete data should be split into train, validation and test set, where the test set should only be used at the end (and not during parameter tuning).
- Cross validation: Cross validation creates more than one validation set and tests the model on each. The complete data is shuffled and split into groups, say five. Four of these groups are used to train the model and one is used as a validation set. In the next iteration, another four are used for training and the remaining one is kept aside for validation. This step is repeated five times, so that each group is used as the validation set exactly once.
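The fold rotation described above boils down to simple index bookkeeping. Here is a minimal sketch (not scikit-learn's KFold, and without the shuffling step):

```python
def kfold_indices(n_rows, k):
    """Yield (train_idxs, valid_idxs) pairs; each row is used
    for validation exactly once across the k iterations."""
    fold_size, remainder = divmod(n_rows, k)
    start = 0
    for fold in range(k):
        # Spread any leftover rows across the first few folds.
        stop = start + fold_size + (1 if fold < remainder else 0)
        valid = list(range(start, stop))
        train = [i for i in range(n_rows) if i < start or i >= stop]
        yield train, valid
        start = stop

folds = list(kfold_indices(10, 5))
for train, valid in folds:
    print(valid)
```

Note that for time-ordered data like the bulldozer sales, this shuffled-fold scheme conflicts with the chronological validation split used earlier, so it should be applied with care.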
End Notes
I consider this one of the most important articles in this ongoing series. I cannot stress enough how important model interpretability is. In real-life industry scenarios, you will quite often face the situation of having to explain the model's results to stakeholders (who are usually non-technical).
Your chances of getting the model approved depend on how well you are able to explain how and why the model is behaving the way it is. It is always a good practice to explain any model's performance to yourself in a way that a layperson would understand.
Use the comments section below to let me know your thoughts or ask any questions you might have on this article. And as I mentioned, part 3 is coming soon, so stay tuned!
2 Comments
The link to the dataset is not working.
Hi Charles,
The links are working at my end. I'll anyway share them below, please visit the link and register in the competition:
Before proceeding, take a moment to review the Inherit from the Business Class Library Class (EF) lesson.
Open the Updater.cs (Updater.vb) file, located in the MySolution.Module project's Database Update folder. Add the following code to the ModuleUpdater.UpdateDatabaseAfterUpdateSchema method.
using MySolution.Module.BusinessObjects;
//...
public class Updater : DevExpress.ExpressApp.Updating.ModuleUpdater {
    //...
    public override void UpdateDatabaseAfterUpdateSchema() {
        base.UpdateDatabaseAfterUpdateSchema();
        Contact contactMary = ObjectSpace.FindObject<Contact>(
            CriteriaOperator.Parse("FirstName == 'Mary' && LastName == 'Tellitson'"));
        if (contactMary == null) {
            contactMary = ObjectSpace.CreateObject<Contact>();
            contactMary.FirstName = "Mary";
            contactMary.LastName = "Tellitson";
            contactMary.Email = "tellitson@example.com";
            contactMary.Birthday = new DateTime(1980, 11, 27);
        }
        //...
        ObjectSpace.CommitChanges();
    }
}
After adding the code above, the Contact object will be created in the application database, if it does not already exist.
Each time you run the application, it compares the application version with the database version and finds changes in the application or database. If the database version is lower than the application version, the application raises the XafApplication.DatabaseVersionMismatch event. This event is handled by the WinForms and ASP.NET applications in a solution template. When the application runs in debug mode, this event handler uses the built-in Database Updater to update the application's database. After the database schema is updated, the ModuleUpdater.UpdateDatabaseAfterUpdateSchema method is called. In this method, you can save the required business objects to the database.
As you can see in the code above, eXpressApp Framework (XAF) uses an Object Space object to manipulate persistent objects (see Create, Read, Update and Delete Data).
To specify the criteria passed as a parameter in the BaseObjectSpace.FindObject method call, the CriteriaOperator is used. Its CriteriaOperator.Parse method converts a string, specifying a criteria expression to its CriteriaOperator equivalent. To learn more on how to specify criteria, refer to the Ways to Build Criteria topic.
Run the WinForms or ASP.NET application. Select the Contact item in the navigation control. Notice that the new contact, "Mary Tellitson", appears in the list to the right.
Note that in the Inherit from the Business Class Library Class (EF) lesson, the database initializer was set to clear the database in case the business model changes. This means that all objects created at runtime will be deleted after the next change in the business model. It is recommended that you use the approach described in this lesson to create all objects required for testing purposes. They will persist in your application regardless of the database.
You can see the code for this tutorial in the EFDemo.Module | Database Update | Updater.cs (Updater.vb) file of the EF Demo (Code First) demo installed with XAF. By default, the EF Demo (Code First) application is installed in %PUBLIC%\Documents\DevExpress Demos 18.2\Components\eXpressApp Framework\EFDemoCodeFirst.
Next Lesson: Implement Custom Business Classes and Reference Properties (EF)
Commit 4b684657: Laying the Foundation— Rovani in C♯
Two key library decisions needed to be made this week: choosing the Object-Relational Mapping (ORM) framework, and choosing a membership provider. To cut right to the end, Entity Framework and ASP.NET Identity were the solutions that I have chosen to use throughout the software. This week's commit has been about getting the initial implementation created, customized, and tested.
- Create NotImplementedAttribute
- Extend Identity membership classes
- Create Interface for data context
- Create Data Context
- Create minimal tests
- Minor Clean ups
NotImplementedAttribute
Sort of like a TODO in the code, I pepper this attribute onto classes that still need work or are just placeholder shells. This makes it so I can easily add a Code Analysis rule to mark them as incomplete, or I can add an ObsoleteAttribute to the NotImplementedAttribute. The compiler will then issue a warning and I can easily track down any places I left marked as Not Implemented.
Vigil Identity Classes
By default, the Identity classes use a string as the primary key. Internally, it is a serialized Guid, but I could not find a good reason why this was masked. Also, I wanted to play with inheriting the classes, so I did exactly that. For now, the "System" classes are just the Identity classes, with a Guid key, and table names explicitly set to match the class name (thus overriding the IdentityDbContext's default of naming them "AspNet_____").
Data Context
Now is when the fun gets to start: time to start making some actual progress towards real data.
- Create the IVigilContext interface, to expose the bare minimums.
- Create the Database Context: VigilContext
- Inherit it from the IdentityDbContext generic class (so I can use those beautiful Vigil____ classes I created).
- Inherit it from the IVigilContext interface and implement the “AffectedBy” and “Now” properties.
- Create three tests:
- Initialize and Create the database: this runs Entity Framework model validation. I wish I could find a way to do this without actually having to create the database, but I was unable to find a way to expose the EF method that runs the validation.
- Verify that the explicit constructor is setting the AffectedBy and Now properties. It is a menial test, but it makes sure no one breaks it in the future.
- Assert that the Set method works for at least one model. This serves two purposes: it makes sure that the Context is calling its base class's Set method, and it makes sure that code gets covered by a test, thus passing Code Coverage analysis.
Minor Clean Up Duty
I misspelled the name of the core project, spelling it "Vigi.Data.Core". This meant renaming the project, updating the assembly name, default namespace, the folder name (which also caused a rename of all files in the project), the namespace in existing classes, and references to the project. Thankfully, I caught it before I had created objects that referenced those elsewhere.
In order to get complete Code Coverage for unit tests, I created a test just to call the constructor for the NotImplemented Attribute (it throws a NotImplemented Exception). However, the last line in that test is an Assert.Fail, which should never be reached. This causes the Code Coverage to say that line was never executed, and thus not covered. Obviously, I do not care about Code Coverage in the Testing project, so I added a codeCoverage.runsettings file, which explicitly excludes the Vigil.Testing.Data assembly from analysis.
Added Vigil Users and Roles implementation of the Identity framework, and added two quick tests for VigilContext.
—Commit 9bca8542c5a0f099032631c133215a5bd9c28aae
Added Entity Framework and Identity Framework. Customized the Identity objects with Vigil objects, including using a Guid as the primary key, and adding a RoleType enum. Added a .runsettings file to exclude the Vigil.Testing.Data project from Code Coverage reports. Corrected the “Vigi.Data.Core” misspelling to “Vigil.Data.Core”.
—Commit 8be35473aa84138f6ee362d083dd424726345089
Added tests to validate that the Vigil* tables have the correct names, since there was a problem with them being overwritten by the IdentityDbContext.
—Commit 4b68465701f111a806a920e7ec4d12de697aeee7
Configure Hardware Load Balancing for Hub Transport Servers
Applies to: Exchange Server 2010 SP3, Exchange Server 2010 SP2
Topic Last Modified: 2012-07-23
You can configure a hardware load balancing solution for Hub Transport servers. Before you begin, make sure that you have deployed and verified a hardware load balancing device.
You have reviewed Understanding SMTP Failover and Load Balancing in Transport.
You first need to create an SMTP namespace that will be used for Windows Network Load Balancing.
Kodikologie und Paläographie im digitalen Zeitalter 2. Codicology and Palaeography in the Digital Age 2
Edited by Franz Fischer, Christiane Fritze, Georg Vogeler, in collaboration with Bernhard Assmann, Malte Rehbein, Patrick Sahle. 2010 BoD, Norderstedt
Online version available at:
Production and publisher of the print edition: Books on Demand GmbH, Norderstedt 2010
ISBN:
Cover design: Johanna Puhl, based on the design by Katharina Weber
Typesetting: Stefanie Mayer and LaTeX
Semantic Technologies for Manuscript Descriptions: Concepts and Visions

Robert Kummer

Abstract
The contribution at hand relates recent developments in the area of the World Wide Web to codicological research. In the last number of years, an informational extension of the internet has been discussed and extensively researched: the Semantic Web. It has already been applied in many areas, including digital information processing of cultural heritage data. The Semantic Web facilitates the organisation and linking of data across websites, according to a given semantic structure. Software can then process this structural and semantic information to extract further knowledge. In the area of codicological research, many institutions are making efforts to improve the online availability of handwritten codices. If these resources could also employ Semantic Web techniques, considerable research potential could be unleashed. However, data acquisition from less structured data sources will be problematic. In particular, data stemming from unstructured sources needs to be made accessible to Semantic Web tools through information extraction techniques. In the area of museum research, the CIDOC Conceptual Reference Model (CRM) has been widely examined and is being adopted successfully. The CRM translates well to Semantic Web research, and its concentration on contextualization of objects could support approaches in codicological research. Further concepts for the creation and management of bibliographic coherences and structured vocabularies related to the CRM will be considered in this chapter. Finally, a user scenario showing all processing steps in their context will be elaborated on.

Zusammenfassung
This contribution relates new developments in the World Wide Web to codicological research.
For some time now, an informational extension of the internet has been discussed and extensively researched and tested in many areas, including the digital information processing of cultural heritage: the Semantic Web. The concept entails linking data at the level of its meaning, so that computers can process it and derive further information from it. In the field of codicology, there have been efforts for several years to make handwritten codices available online. If these resources, too, were to take the step into the Semantic Web, considerable research potential could be derived from them. Acquiring data from weakly structured data sources is, however, not unproblematic. In particular, data from unstructured sources must first be made accessible to further Semantic Web processing by means of information extraction techniques. In the field of museum research, the CIDOC Conceptual Reference Model (CRM) is being discussed in detail and is already being employed profitably. The CRM relates well to Semantic Web research, and its concentration on the contextualization of object relationships could suit codicological research. Further concepts and standards related to the CRM for creating and managing bibliographic relationships and structured vocabularies are included in these considerations. The discussion is rounded off by a usage scenario in which the various processing steps are put into context.

Kodikologie und Paläographie im Digitalen Zeitalter 2. Codicology and Palaeography in the Digital Age 2. Hrsg. Franz Fischer, Christiane Fritze, Georg Vogeler, unter Mitarbeit von Bernhard Assmann, Malte Rehbein, Patrick Sahle. Schriften des Instituts für Dokumentologie und Editorik 3. Norderstedt: Books on Demand, 2010.

1. Semantic Codicology

How can the methods and tools of the Semantic Web be applied to the domain of codicology? Many handwritten codices have already been published online, mainly for viewing. Catalogs and common information retrieval techniques (e.g. full-text searching) enable discovery of information. But could additional research potential be unlocked by also making this information available according to the concepts of the Semantic Web? Could we ask and approach other questions by processing this information with the tools that have been developed in this area? For the study and description of a specific codex, knowledge from several disciplines needs to be considered such as, for example, philology. In addition, statistical techniques have been employed to elaborate stemmata for single texts. Geographic and chronological dissemination of scripts and decorations have been considered as significant features.
With regard to the individual codex, manual and technical aspects of production require study, for instance, queries regarding material (papyrus, parchment or paper), binding of folios and quires, ink and writing utensils, book decorations and provenance. Codicology simultaneously treats its research objects as material artifacts and as abstract documents. Thus, an analogy between codicology and archaeology can be drawn to a certain extent. For example, during his studies of the Rothschild collection, Delaissé showed a strong commitment to what he called the archaeology of the book (Delaissé, Marrow, and de Wit; Maniaci). In order to assess the research potential of the Semantic Web for the domain of codicology, this standard should be evaluated with a focus on how methods of codicology translate into methods of Semantic Web research; explicit contextual modeling of information could be the key method that is common to both. In particular,
focusing on contextual coherences of objects, as the Semantic Web does, could support the methods of codicology. The following passages provide an overview of Semantic Web concepts and tools. Semantic Web research itself needs to be considered as part of research in information integration and artificial intelligence. Findings in these research areas will not be exhaustively presented but rather mentioned when appropriate. Concepts and tools that have evolved as part of Semantic Web research will be introduced by an example that relates to codicology. However, no suggestions for concrete applications will be made. User scenarios have been considered helpful both for envisioning future software applications and implementing existing ideas (Alexander). An exhaustive user scenario would certainly help to understand where the ideas of the Semantic Web could support codicology in the future. Hopefully, this contribution will help to create such a user scenario for Codicology and the Semantic Web.

2. Semantic Web Research

Usually, information on a specific research topic is scattered among several cultural heritage information systems. In many cases, information can only be processed according to user needs if it has been integrated. Integrated information systems can process data in a more complete fashion and usually provide better results. Additionally, they offer one consistent way of dealing with the data instead of users having to learn many user metaphors. Thus, if data stemming from different information systems is to be processed in a uniform way, it needs to be harmonized in terms of syntax and semantics. For many operations, the integrated information must reside in the main memory of a single computer to be processed efficiently and according to the needs of users. Berners-Lee, Hendler, and Lassila conceptualized a so-called vision piece that describes an infrastructure to provide greater capabilities.
The authors argue that the available data on the World Wide Web has been designed for humans to read and process. They point out the importance of particular pieces of software, so-called intelligent software agents. These software artifacts are reminiscent of the rational agents described by Russell and Norvig. An agent, in this sense, is designed to aid humans in information processing by acting rationally to collect, process and share data. To enable this, data currently published as part of the World Wide Web needs to be represented in certain ways. This holds true for web pages but also for databases that are considered to be part of the deep web. 1 Consequently, research in the area of the Semantic Web has developed from several branches of information technology: building 1 Some web sites are generated dynamically each time a user requests a page. This part of the web is difficult to index by search engines like Google. Usually, the data is managed in some kind of proprietary structure
6 136 Robert Kummer
models, computing with knowledge and exchanging information (Hitzler, Krötzsch, and Rudolph). In order to make information accessible for automatic processing, it has to be formalised. In the scope of the Semantic Web several concepts have been proposed. XML seems to be well established in the digitisation community, and is often used for encoding information about a codex and its contents. The Text Encoding Initiative (TEI) provides a set of XML tags for this task in chapter ten of the TEI guidelines. The Semantic Web community has proposed a recommendation that can be expressed in XML. The Resource Description Framework (RDF) can be used to express so-called triples, simple statements that take the ordered form: subject, predicate, object. RDF is a data model that allows us to make statements about subjects. The World Wide Web Consortium provides an excellent introductory text on RDF (Manola and Miller). A statement like "De natura rerum is written by Beda Venerabilis" can easily be expressed as the RDF triple: De natura rerum (subject): is written by (predicate): Beda Venerabilis (object). Subject, predicate and object may each be identified by a Uniform Resource Identifier (URI). A URI is a simple string of characters that is used to identify a thing. The most commonly used form of a URI is a URL (Uniform Resource Locator), which is used daily to direct a web browser to a web page like <>. It is common practice in the Semantic Web community to use URIs that have the form of URLs to refer to a thing. Another interesting aspect of the Semantic Web is that its community actively researches techniques from artificial intelligence. Many Semantic Web tools make use of so-called inference engines to deduce new knowledge from databases. Thereby, structured and sophisticated queries can rely on a larger amount of information than originally available. 
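The triple just described can be sketched in plain Python (a hypothetical illustration, not a real triplestore; the URIs below are invented for the example):

```python
# A triple is an ordered (subject, predicate, object) statement.
# All URIs here are invented for illustration.
triple = (
    "http://example.org/work/de-natura-rerum",    # subject
    "http://example.org/prop/is-written-by",      # predicate
    "http://example.org/person/beda-venerabilis", # object
)

subject, predicate, obj = triple
print(subject)  # the thing the statement is about
```

Representing statements as such fixed-order tuples is what makes them uniformly processable, regardless of which information system they came from.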
Furthermore, so-called ontologies comprise taxonomies and rules that can be deployed to process data according to the intended meaning. 2 XML-based data, RDF, URIs and ontologies provide the tools for manipulating data as information, and even as knowledge, but can also support information sharing. With the help of ontologies different communities can agree on the meaning of certain concepts. By coordinating definitions that different communities have developed, a shared understanding of concepts can be achieved. This allows data stemming from different information systems to be processed according to the intended and agreed meaning. 1 (continued) that makes it difficult to share and process outside the boundaries of the system; clearly an issue that the Semantic Web tries to deal with. 2 Although it has its origin in philosophy, in computer science the term ontology refers to a formal representation of knowledge about a certain domain. The notion of an ontology will be elaborated on in chapter 4.
3. Extracting and Modeling Information The previous section has introduced a suite of concepts that should help to put the main ideas of the Semantic Web into practice. The central concept to encode, share and process information is the triple. So-called triplestores are computer programs that control the creation, use and maintenance of data that has the form of triples. Unlike traditional relational databases, triplestores are purpose-built for dealing with data encoded according to RDF. However, relational databases may be used to make the data persistent. The RDF data model builds on the notion of a graph. 3 The process of filling these stores with data will be described in this section. It becomes apparent that the subject and object of a triple need to be atomic units of discourse that can be identified by a URI (e.g. <> : <> : <>). We want to be able to refer to exactly one concept or thing in order to make a statement about it. But information sources need to deliver information of high quality and granularity to establish these triples. Therefore, in some cases, information needs to be extracted rather than mapped for each data source that does not provide enough structure. Many information systems that deal with cultural heritage material use information retrieval methods to provide searching capabilities. In fact, traditional information retrieval tries to deal with material that is not very well structured, such as full text. In contrast, information extraction aims at extracting useful structured information from, for example, a text document (Konchady). In the field of Semantic Web research it would be desirable to extract triples from semi-structured sources as well as from highly structured sources (like formal descriptions of manuscripts in a catalogue). In Section 6, a user scenario will be developed that relies on highly structured data and would not be possible with traditional information retrieval. 
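The graph character of the RDF data model just mentioned can be sketched in a few lines of Python (an illustrative toy with invented example data): each subject and object becomes a node, and each predicate a labelled arc between them.

```python
from collections import defaultdict

# Invented example triples: (subject, predicate, object)
triples = [
    ("codex28", "carries", "biblia-sacra"),
    ("codex28", "consists_of", "parchment"),
    ("biblia-sacra", "written_in", "latin"),
]

# Nodes are subjects and objects; predicates label the arcs between them.
graph = defaultdict(list)
for s, p, o in triples:
    graph[s].append((p, o))

# Because "biblia-sacra" is both an object and a subject, the separate
# statements link up into a small network of knowledge.
print(graph["codex28"])      # arcs leaving the codex node
print(graph["biblia-sacra"])  # arcs leaving the document node
```

The point of the sketch is only that independent triples connect into one network as soon as they share a node; a real triplestore indexes this structure for querying.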
Often, information retrieval starts with identifying named entities such as people, places and institutions in documents (Cardie), not unlike a traditional printed index. Where do we find data in the field of codicology? Many institutions have decided to publish digital information about codices according to certain standards of description. Listing 1 shows how information about a manuscript can be encoded using TEI. It is neither completely structured nor completely unstructured. Some information is highly structured, for example, the reference to material in line 12. Other information is less structured, like the information about the history of the manuscript in line 18. Inside this element some information is structured, like the reference to the archbishop 3 A graph is a concept from mathematics that can be described as a set of ordered pairs. Each pair consists of two nodes that are connected by an arc. This is reminiscent of a triple, where subject and object are the nodes that are connected by the predicate (arc). If one node connects to many other nodes, a whole network of knowledge can emerge from simple triple statements.
of Cologne, but other information lacks precision, like the reference to a Cistercian convent.

 1 <teiHeader>
 2   <fileDesc>
 3     <titleStmt>
 4       <title>Biblia Sacra</title>
 5     </titleStmt>
 6     <sourceDesc>
 7       <msDesc xml:id="kn28">
 8         <physDesc>
 9           <objectDesc form="codex">
10             <supportDesc material="perg">
11               <support>
12                 <material>parchment</material>
13               </support>
14             </supportDesc>
15           </objectDesc>
16         </physDesc>
17         <history>
18           <origin>liber sancti petri a pio patre herimanno datus (<date>9/10c</date>, <locus>f. 1r</locus> <persName>herimann Abp. of Koeln <date></date></persName>);</origin> <provenance><persName>Rutgheri</persName>. (<date>9 or 10c?</date>, <locus>f. 1r</locus>); <quote>hic liber est sancti petri in colonia concessus conventui de prato sancte marie per manum domini alberti subdecani, quem idem conventus reddet sine contradictione, cum <sic>repitittus</sic> fuerit a capitulo sancti petri, sicut continetur in litteris, quibus se predictus sanctimonialium conventus obligavit. Et in eo sunt multa folia truncata. <date>anno MCCXLI</date>.</quote> (<locus>f. 1r</locus>, notice dated <date>1241</date>, that this book was lent by the cathedral to the convent of the Prata S. Mariae, also called Benden; this Cistercian convent for women was founded <date>1207</date> in the area of Bruehl [about 10 km south of Cologne]; [...])</provenance>
19         </history>
20       </msDesc>
21     </sourceDesc>
22   </fileDesc>
23 </teiHeader>

Listing 1. Manuscript description as part of a TEI encoded document.

What can we do about semi-structured text? Highly structured data usually can be extracted very well. The tag <material> indicates that the contained value will denote a certain material. A triple like (kn28, consists of, Pergament) can easily be constructed if the meaning of the attribute xml:id is known in this context. However, the content of <history> ([...] this book was lent by the cathedral to the convent of the Prata S. Mariae, [...]) will be harder to extract unless a well-maintained list of cathedrals and convents supports the information extraction tool.
The result of the extraction process should be a triple like (kn28, was lent to, Prata S. Mariae). 4 Structured queries that rely on the semantics of information are only possible if the data model is also highly structured. Therefore, the migration of information to the Semantic Web cannot be limited to adopting its concepts but needs to aim at making explicit information that was implicit before. Listing 2 shows how some of the TEI information has been encoded according to Semantic Web concepts. In this case the information on the manuscript has been saved as a text file encoded in Turtle. 5

 1 @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
 2 @prefix crm: <http://erlangen-crm.org/100302/> .
 3 @base < koeln.de/> .
 4
 5 :kn
 6   rdf:type crm:E22_Man-Made_Object ;
 7   crm:P1_is_identified_by [
 8     rdf:value "Koln, Dombibliothek, Codex ..."
 9   ] ;
10   crm:P128_carries :kn-doc ;
11   crm:P45_consists_of :parchment .
12
13 :kn-doc
14   rdf:type crm:E31_Document ;
15   crm:P1_is_identified_by [
16     rdf:value "Biblia Sacra"
17   ] ;
18   crm:P1_is_identified_by [
19     rdf:value "Vulgata"
20   ] .
21
22 :parchment
23   rdf:type crm:E57_Material ;
24   crm:P1_is_identified_by [
25     rdf:value "parchment"
26   ] ;
27   crm:P1_is_identified_by [
28     rdf:value "Pergament"
29   ] .

Listing 2. Manuscript information modelled in Turtle.

4 This triple is a shortcut for a more complex set of triples that include the actor who surrendered the custody and the actor that the custody was surrendered to. A special interest group has been formed to research the relation of markup like TEI to ontologies (Eide and Ore). 5 Turtle (Terse RDF Triple Language) is a serialization format for RDF. In this context, serialization means to dump triple data to a file for persistence or transmission. Turtle is a popular syntax for RDF because it is more human-readable than XML. However, according to the recommendation, RDF should be serialized as RDF/XML (the XML syntax for expressing RDF).
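Footnote 5 describes serialization as dumping triples to a file for persistence or transmission. A minimal Python sketch of the idea, using a simplified N-Triples-style line format rather than full Turtle (the URIs are invented; real serializers additionally handle escaping, literals and prefixes):

```python
# Invented example triple to round-trip through a text serialization.
triples = [
    ("http://example.org/kn", "http://example.org/consists_of",
     "http://example.org/parchment"),
]

# Serialize: one "<s> <p> <o> ." statement per line, N-Triples style.
lines = ["<%s> <%s> <%s> ." % t for t in triples]
serialized = "\n".join(lines)

# Parse it back by stripping the trailing dot and the angle brackets.
parsed = []
for line in serialized.splitlines():
    s, p, o = line.rstrip(" .").split()
    parsed.append((s.strip("<>"), p.strip("<>"), o.strip("<>")))

print(parsed == triples)  # round-trip succeeds
```

The design point is that any unambiguous text encoding of the (subject, predicate, object) structure suffices for exchange; Turtle and RDF/XML are standardized, richer versions of this idea.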
Lines 1 to 3 define different namespaces that can be reused throughout the document to guarantee that keywords are unique although they have been collected from different information sources. 6 The rest of the document can be read as an aggregation of simple subject, predicate and object statements. The information in the figure already adheres to the CIDOC CRM, which will be described in the following section. Lines 5 and 6 express that there is an entity :kn which is an instance of the class E22_Man-Made_Object. Line 11 adds the information that :kn consists of parchment. The expression :kn, of course, denotes the physical codex that is part of the collection of the Diözesan- und Dombibliothek Köln. 7 This information is highly structured and additional semantic information has been made explicit, thus satisfying the precondition for complex query processing. 4. The Role of Ontologies In the field of Semantic Web research the Resource Description Framework (RDF) has been proposed as an approach to conceptually model data of a certain domain. An example of how data can be encoded according to RDF has been presented in listing 2. However, RDF does not make any recommendations as to how a certain domain could be structured or which terminology should be used. Like a traditional relational database it does not make any statements about the meaning of data, and many of the semantics have to be modeled as part of the application logic of a computer program. Ontologies have been proposed as a much richer approach to model the semantics of information. The CIDOC CRM mentioned before has been explicitly modeled as an ontology and its inventors introduced it as an ontological approach (Dörr). Information technology took the word ontology from philosophy in an analogy but redefined the term to fit its needs. In fact, ontologies have even been considered a silver bullet for information integration (Fensel). 
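The namespace mechanism used in lines 1 to 3 of listing 2 amounts to simple string substitution. A minimal Python sketch (the base URI is invented for the example; the rdf URI is the standard one):

```python
# Maps namespace prefixes to full base URIs.  The "" key plays the role
# of the base namespace; its URI is invented for this illustration.
namespaces = {
    "":    "http://example.org/base/",
    "rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
}

def expand(qname):
    """Expand a qualified name like 'rdf:value' or ':kn' to a full URI."""
    prefix, _, suffix = qname.partition(":")
    return namespaces[prefix] + suffix

print(expand("rdf:value"))  # http://www.w3.org/1999/02/22-rdf-syntax-ns#value
print(expand(":kn"))        # http://example.org/base/kn
```

This is all a namespace declaration does: it lets a document use short, readable names while the underlying identifiers remain globally unique URIs.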
Basically, ontologies have been introduced to support communication processes in larger groups. They have been developed to help organisations find a common language and understanding of important domain concepts. In comparison to flat glossaries or terminology lists, ontologies comprise a complex thesaurus-like structure, additional rules and 6 Because it would be cumbersome to write the full URI of each part of a triple, so-called namespaces can be defined. A namespace definition binds a part of the full URI to a qualified name that can be used throughout the document. The base namespace is applied to all names that omit the qualified name in front of the colon connecting the qualified name with its suffix. For example, :kn translates to < > and rdf:value to <http://www.w3.org/1999/02/22-rdf-syntax-ns#value>. 7 The digital facsimiles of the Diözesan- und Dombibliothek Köln have been published as Codices Electronici Ecclesiae Coloniensis (Thaller and Finger). kn28 denotes the identification code of the Diözesan- und Dombibliothek.
Figure 1. The Class hierarchy of the CIDOC CRM.
restrictions. 8 Thus, they not only define certain notions but can also encode complex interrelations. Although there is no canonical definition of what an ontology is in information technology, Gruber was the first to formalise the topic. Ontologies can be constructed with the Web Ontology Language (OWL). 9 In 2006, the CIDOC Conceptual Reference Model was accepted as official standard ISO 21127:2006 (Crofts et al.). It provides a taxonomy for expressing information about material objects in the cultural heritage area. Like any ontology, it can be used both to support communication processes in larger communities that strive for sharing of information and to implement software systems that integrate information from different information systems. A hierarchy of classes defines concepts that are commonly referred to in museum documentation practice, and so-called properties form relations between these conceptual classes. Up to now, the CRM has been used in several integration projects. 10 Figure 1 shows a part of the class hierarchy provided by the CIDOC CRM. The visualization has been generated from an OWL implementation of the Erlangen CRM 8 For example, a rule that states the uncle relationship in a fictional family ontology can have the form [rule1: (?f pre:father ?a) (?u pre:brother ?f) -> (?u pre:uncle ?a)] (the rule is written in the syntax of the Jena Semantic Web framework; the example is taken from <. sourceforge.net/inference/#rules>). Rules are evaluated and processed by inference engines to create new facts (triples). An example of a restriction is that a human always has, at most, two arms. 9 More information about OWL can be found in Smith, Welty, and McGuinness. 10 Two examples would be SCULPTEUR (Giorgini) and BRICKS.
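The uncle rule of footnote 8 can be sketched as a single forward-chaining step in Python (a toy illustration of what an inference engine does, with invented facts; real engines such as Jena's are far more general):

```python
# Known facts as (subject, predicate, object) triples -- invented examples.
facts = {
    ("frank", "father", "anna"),
    ("udo", "brother", "frank"),
}

# rule1: (?f father ?a) and (?u brother ?f)  ->  (?u uncle ?a)
def apply_uncle_rule(facts):
    """Return the new triples derivable by one application of the rule."""
    new = set()
    for f, p1, a in facts:
        if p1 != "father":
            continue
        for u, p2, f2 in facts:
            if p2 == "brother" and f2 == f:
                new.add((u, "uncle", a))
    return new

facts |= apply_uncle_rule(facts)
print(("udo", "uncle", "anna") in facts)  # the rule derived a new fact
```

An inference engine repeats such rule applications until no new facts appear, which is how structured queries can rely on more information than was originally stated.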
Figure 2. An information carrier in the CRM hierarchy.
(Schiemann et al.). For modeling data in the field of codicology, the CRM provides classes for modeling a codex (for example, as a physical man-made object or information carrier) and the contained work (for example, as a document), but also classes for describing additional contextual information such as the condition of a codex, the people involved in its creation and the history of ownership. The following section will introduce two relevant classes: E84_Information_Carrier and E31_Document. In order to describe a codex as a material thing, for example, one might use the CRM class E84.Information_Carrier. From the official CIDOC CRM documentation: This class comprises all instances of E22 Man-Made Object that are explicitly designed to act as persistent physical carriers for instances of E73 Information Object. This allows a relationship to be asserted between an E19 Physical Object and its immaterial information contents (Crofts et al. 67). Figure 2 shows the class as part of the inheritance hierarchy of the CRM. It is important to keep in mind that each class inherits all the features of its super class. The contained textual material considered as a conceptual object can be modeled as E31.Document. The official documentation defines that this class comprises identifiable immaterial items, which make propositions about reality. These propositions may be expressed in text, graphics, images, audiograms, videograms or by other similar means
Figure 3. A document in the CRM hierarchy.
(Crofts et al. 48). Figure 3 shows how a document according to the CRM is also a man-made thing but without material character. However, it would be beyond the scope of this contribution to discuss whether a document in this sense can describe the features of a certain handwriting or whether another class like E36.Visual_Item would be a better fit for this. The structure of the CIDOC CRM relies heavily on the notion of events. Dörr and Kritsotaki argue that modeling events in metadata is helpful for dealing with cultural heritage information. For example, the notion of an event can be helpful for expressing uncertain information. The class E13.Attribute_Assignment, which is a sub-class of E7.Activity, has been provided to emphasize how a statement about something came about. Opinions of different authors can be distinguished by using E13.Attribute_Assignment for each researcher's assertion about a codex. Additionally, the history of ownership of a codex can be modeled by using events. Although developed in a museum context, classes like E10.Transfer_of_Custody and E8.Acquisition (both likewise sub-classes of E7.Activity) suggest that the CRM provides structures that can be adapted to the needs of research in the field of codicology. But how do codices relate to the contained works in the world of the CIDOC CRM? The class hierarchies shown in figures 1, 2 and 3 do not display the properties mentioned above, which are needed in order to relate instances of these classes to each other. Figure 4 highlights another perspective. Instead of the class hierarchy, the relations between
Figure 4. Manuscript information graph visualization.
the individuals are presented. Please note that this visualization has been automatically generated from the Turtle code in listing 2. 11 Two further developments worth mentioning that deal with structured vocabularies and bibliographies are SKOS and FRBRoo. Up to now, the discussion has focused on integrating different data models and schemas. However, different groups tend to refer to the same thing by different names. Just think of an international research environment where codex materials are referred to using national languages (e.g. Papier, paper and páipear). Lines 22 to 29 in listing 2 demonstrate how two different names have been assigned to the URI :parchment. Here the URI denotes the material itself and the different names have been associated by using the CRM property crm:P1_is_identified_by. One way to approach the terminology problem is to provide structured controlled vocabularies by using SKOS (the Simple Knowledge Organization System). It is a family of formal languages designed for any type of structured controlled vocabulary (Miles and Bechhofer). While the CIDOC CRM is a formalisation of how cultural heritage content can be encoded, SKOS is a formalisation of how structured terminologies can be encoded. Figure 5 illustrates how appellations of different materials can be expressed according to SKOS. It shows that the material which 11 RDF Gravity has been used to generate the visualization (Goyal and Westenthaler). For better readability the figure has been reworked by using a charting tool.
Figure 5. A graphical visualization of a vocabulary as SKOS.
the URI :parchment refers to has been associated with the SKOS class Concept. 12 The hierarchical links :broader and :narrower indicate that :material is more general than :parchment and :paper. Of course, SKOS data is not valuable in itself but needs to be made available to information systems so that they can make use of the structured vocabularies. Another standard with a strong connection to Semantic Web research has been proposed in the field of library science. The Functional Requirements for Bibliographic Records (FRBR) form a conceptual model developed by the International Federation of Library Associations and Institutions (Tillett). FRBR distinguishes the notions Work, Expression, Manifestation and Item (IFLA Study Group on FRBR). According to the definition document, a Work is an intellectual creation (for example Moby Dick) and an Expression is the realization of this creation in its distinct form (e.g. the German translation of Moby Dick). A Manifestation is the physical embodiment of an expression of a work. As an entity, manifestation represents all the physical objects that bear the same characteristics, in respect to both intellectual content and physical form (e.g. a certain edition of the German translation). Finally, an Item is a single exemplar of a manifestation. The entity defined as item is a concrete entity (a certain copy of a certain edition). A medieval codex would be defined as an Item in the terminology of FRBR. And since for a medieval codex each manifestation has just one item, the distinction between Manifestation and Item does not seem to be pertinent to practical research in the field of codicology. Although FRBR is powerful at modeling relations between these four layers, it does not come with the means to express the history of development of an old manuscript. 
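The broader/narrower structure shown in figure 5 can be sketched as a small Python dictionary (an illustrative toy with invented concept names, not the SKOS data model itself):

```python
# skos:broader links, invented for illustration: concept -> more general concept
broader = {
    "parchment": "material",
    "paper": "material",
}

def narrower(concept):
    """Invert the broader links to find a concept's narrower concepts."""
    return sorted(c for c, b in broader.items() if b == concept)

print(narrower("material"))  # ['paper', 'parchment']
print(broader["parchment"])  # material
```

An information system holding such links can, for instance, widen a search for "material" to all its narrower concepts, which is exactly how a structured vocabulary becomes useful beyond a flat word list.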
12 The SKOS reference document defines the class Concept: A SKOS concept can be viewed as an idea or notion; a unit of thought. However, what constitutes a unit of thought is subjective, and this definition is meant to be suggestive, rather than restrictive (Bechhofer and Miles).
FRBR has been harmonised with the CIDOC CRM. To this end, it has been expressed as a formal ontology that links to the classes of the CRM. The harmonisation project strives for better related representations of bibliographic and museum information, and to facilitate the integration, mediation and interchange of information (Dörr and Le Boeuf). In June 2009 the latest version of the standard was released (Aalberg et al.). 5. Putting it Together We have looked at how different cultural heritage information systems can use information extraction (for unstructured and semi-structured texts) and mapping (for databases or the deep web) to get a grip on relevant entities. We have discussed how these entities and their relations can be modeled as RDF triples in a way that conforms to a standard like the CIDOC CRM. Adhering to this standard enables software to process data according to the intended meaning. For further data fusion tasks and complex querying, it is helpful to integrate the information acquired from different sources in one physical place. This enables comprehensive mining, indexing and querying. Different architectures have been developed to tie together complex distributed systems, ranging from distributed databases over middleware software to Service-Oriented Architectures with Web Services. 13 In the cultural heritage domain it has become very common to publish information using the OAI Protocol for Metadata Harvesting (OAI-PMH), which relies on the HTTP protocol. The OAI-PMH has been suggested by the Open Archives Initiative for publishing and collecting metadata. Data providers, such as archives, publish their metadata as XML and service providers harvest that data in order to offer further services (Lagoze et al.). Using this protocol it would be possible to publish both the TEI documents and additional semantic data in RDF. 
Recent suggestions in Semantic Web research tend to avoid cumbersome approaches in favor of light-weight infrastructures. In 2006, Berners-Lee articulated his thoughts on a concept that he called Linked Data. The core of this concept is not only to put data on the web in a certain manner but also to link related data. Since then, the notion of Linked Data has come to refer to a set of best practices for publishing and connecting structured data on the World Wide Web. While the concept of Linked Data requires service providers to systematically crawl linked data to acquire information, OAI-PMH offers guided and streamed pulling of data. 13 A Service-Oriented Architecture provides loosely coupled software components that provide services. These services can be combined to solve certain tasks. A Web Service provides an application programming interface that can be called via the HTTP protocol. HTTP (Hypertext Transfer Protocol) is a standard that is used to transmit data over a network (in particular the Internet) and is therefore ubiquitously available.
Once the data has been integrated, tools are needed for further processing. Semantic Web frameworks like Jena (Carroll et al.) support software developers in creating Semantic Web applications. They provide an integrated set of tools that facilitate the design and operation of a knowledge base that can deal with triples. Most Semantic Web frameworks provide a so-called SPARQL endpoint to query the knowledge base. SPARQL is an acronym that stands for SPARQL Protocol and RDF Query Language (Prud'hommeaux and Seaborne). An endpoint is an entry point for a service that can be called over a network. SPARQL allows the formulation of queries that are highly structured and has some similarities to SQL. 14 But while SQL is commonly used to query or manipulate data in a relational model, SPARQL can be used to formulate complex graph-like query structures to query RDF data. 15 SPARQL queries can be transmitted over the HTTP protocol. Larger projects are picking up the idea of the Semantic Web and developing advanced applications. Information extraction projects like DBpedia mine the World Wide Web for structured data and make it accessible for the Semantic Web (Auer et al.). Another example is the SIMILE project (Mazzocchi, Garland, and Lee). It is more end-user-oriented than DBpedia and develops tools that examine the possibility of semantically processing digital assets. 6. User scenario The Semantic Web has been introduced and codicological material referenced; only a coherent user scenario remains to be described. Therefore, a simple use case for further elaboration will be presented here. It will develop around a simple question and highlight the implications for how Semantic Web technology would be affected. In the above-mentioned vision piece a user scenario has been developed that should motivate future research and funding in the area, but scenarios can also be used in the microcosm of system development. 
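What a SPARQL engine does with a basic graph pattern can be sketched as pattern matching over triples. A deliberately naive Python illustration with invented data (real engines such as Jena's plan and optimize such matches):

```python
# Invented triples and a triple pattern with one variable, ?codex.
triples = [
    ("codex28", "rdf:type", "crm:E22_Man-Made_Object"),
    ("codex28", "crm:P45_consists_of", "parchment"),
    ("codex99", "rdf:type", "crm:E31_Document"),
]

def match(pattern, triples):
    """Yield variable bindings for one (s, p, o) pattern; '?x' is a variable."""
    for triple in triples:
        binding = {}
        for pat, val in zip(pattern, triple):
            if pat.startswith("?"):
                binding[pat] = val   # a variable binds to whatever it meets
            elif pat != val:
                break                # a constant must match exactly
        else:
            yield binding

results = list(match(("?codex", "rdf:type", "crm:E22_Man-Made_Object"), triples))
print(results)  # [{'?codex': 'codex28'}]
```

A full query engine joins the bindings of several such patterns, which is what gives SPARQL its graph-like expressiveness compared to SQL's table joins.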
They help to reflect on the functional requirements that a single piece of software needs to fulfill in order to meet a user's needs. User scenarios facilitate communication between software designers, programmers and end-users by providing a shared example. Imagine a researcher working on medieval codices. To push forward his current research project, he is interested in how texts spread through certain institutions in a specific region. 16 He has a good friend in the IT department of the university who enthusiastically reported on a new strain of research, the Semantic Web. According to this concept, digital information will be managed in a way that supports machine- 14 SQL is ISO standard ISO/IEC 9075:2008 and stands for Structured Query Language. 15 Up to now there is no W3C recommendation for the manipulation capabilities of SPARQL. However, most Semantic Web toolkits provide capabilities for data manipulation outside SPARQL, and extensions to SPARQL are being developed. 16 I want to thank Almut Breitenbach and Patrick Sahle for their support in creating this user scenario.
processing. The researcher wonders if this new technology could meet a requirement he has formulated as follows: For the geographic area of northern Germany, show all codices that contain texts of Classical Latin authors and that have been written in the 13th century. Draw the results as circles on a map and use different colors for monasteries and nunneries. Many requirements need to be fulfilled to enable a system to process such a question. Certainly, the data that is needed to compile the results resides on scattered information systems, preferably encoded as structured manuscript descriptions. As a first step, this data needs to flow from one information system to another. Catalogue data from different information systems has been published and is exposed via OAI-PMH as TEI. Imagine an information system that strives to support the researcher. It will request information from several data providers and gather it for further processing. This approach requires little effort from data providers. Other architectures could demand that one or more of the following pre-processing steps be performed by data providers before the data is published. First, information extraction needs to be performed on the acquired data by the central information system. The system aims to extract named entities and to assign the right unique identifier (i.e. URI) to each entity. This step is rather important because without canonical names across the participating information systems all following steps will fail. Once entities and the relations that exist between them are represented as URIs, they can be stored as triples in some serialisation of RDF. To be available for processing, triples are held in main memory according to a suitable data structure. For exchanging information between different information systems, this data needs to be serialised in a file. An example of a serialisation has been given in listing 2. 
After ingesting the triples into a triplestore, the data will be available for further processing. The extracted entities alone are of very limited use unless they are aligned with additional background knowledge. This knowledge will be provided by specialised knowledge bases as triples. It comprises, for example, the geographic region, the monastery and the religious order mentioned in the manuscript description. Without this background, none of which is contained in the metadata of the codex alone, the query of the researcher cannot be answered. But after adding the supplementary knowledge to the triplestore, additional facts are available that can be considered for query processing. For example, the three triples (codex123, carries, document123) and (document123, has author, Cicero) (both extracted from codex information) and (Cicero, has genre, Classical Latin work) (added from a background knowledge base) can be combined to reason that the codex contains a text by an author whose work has been classified as Classical Latin. Additional facts can be derived by applying rules. And plausibility checks can be conducted to disclose contradictory information that may emerge when considering additional knowledge.
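This chain of reasoning can be sketched in Python (a toy join over the three invented triples above, standing in for what a triplestore enriched with background knowledge would do):

```python
# Extracted from codex metadata (invented identifiers):
codex_triples = [
    ("codex123", "carries", "document123"),
    ("document123", "has author", "Cicero"),
]
# Added from a background knowledge base:
background = [("Cicero", "has genre", "Classical Latin work")]

triples = codex_triples + background

def objects(subject, predicate):
    """All objects o such that (subject, predicate, o) is a known triple."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# Join the three statements: codex -> document -> author -> genre.
for doc in objects("codex123", "carries"):
    for author in objects(doc, "has author"):
        for genre in objects(author, "has genre"):
            print("codex123 contains a text of genre:", genre)
```

The derived connection between codex123 and the genre exists in none of the three sources individually; it only appears once the triples sit in one store and can be joined.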
We assume that the information about the author of a text and its place of creation can be extracted from the codex metadata. Additionally, a group of theoretical researchers recorded their findings by putting results into a specialized information system (for example, that texts of a certain author usually can be ascribed to a specific genre). Another system contributes the geographic coordinates for a certain geographic region. If the information system that the researcher is using has access to all the above systems, they can now formulate queries that could not have been formulated before. Listing 3 shows a selected aspect of the aforementioned query in a formalised way. Its formalisation is little more than preliminary but seems to be sufficient to discuss the process of formalisation.

 1 PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
 2 PREFIX crm: <http://erlangen-crm.org/100302/>
 3 PREFIX cod: <>
 4 PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
 5 BASE < koeln.de/>
 6
 7 SELECT ?codex ?gender ?geo
 8 WHERE {
 9   ?codex rdf:type crm:E22_Man-Made_Object .
10   ?codex crm:P108i_was_produced_by ?codexProduction .
11   ?codexProduction crm:P7_took_place_at ?monastery .
12   ?monastery crm:P2_has_type ?gender .
13   ?gender skos:broader cod:gender .
14   [...]
15 }

Listing 3. A SPARQL query that formalizes a research problem.

As in listing 2, the query starts by defining a couple of namespaces, including the base namespace, to ensure that the names used are unique. The rest of the query is based on the syntax of SPARQL, which serves to query triplestores. Then, the three variables ?codex, ?gender and ?geo are defined to carry the results. The result will hold identifiers of codices together with the information about the geographic unit the monastery belongs to and whether it is a nunnery or a monastery. The following part of the query re-uses these variables and defines additional ones that are only used temporarily. 
The temporary variables like ?monastery are needed to build the factual bridges from one piece of information to another. The property crm:p2.has_type connects the ?monastery with its ?gender. Multiple types can be connected with a crm:e22.man-made_object, and therefore the property skos:broader has been used to restrict the value of the assigned type to be a specialisation of cod:gender, which could be either male or female. For real-world use, a graphical user interface needs to be implemented. The query demonstrated in listing 3 could not be formulated by a researcher in the field of codicology or palaeography without undertaking significant additional training. On
20 150 Robert Kummer

the foundation of the mentioned Semantic Web framework, additional software layers need to mediate between the end-user and the bits and bytes. Many questions arise when thinking about such a system, like questions of helpful interface design for easy interaction and the formulation of very complex queries. Research in artificial intelligence strives for processing queries in natural language. The Companions Project, for example, explores software that is reminiscent of the intelligent agents mentioned before (Benyon and Mival).17 The information that has been acquired by evaluating the example query has the form of a table with the columns holding an identifier for the codex, the information whether it is a monastery or nunnery, and its geographic region. Although this information is helpful for the researcher, it would be useful to display the results as a map. With the advent of Web 2.0, mashups have become quite popular.18 The fictional researcher could use a similar service to display a map of the region of his interest. The web application would draw a circle for each monastery on the map, with different colors for monasteries and nunneries. By querying a geographic database, the geographic identifier can be resolved to coordinates that are needed for the drawing task. It is obvious that only a few of the infrastructural elements which would facilitate the user scenario are available today. Scientists in the field of codicology would need to make their factual knowledge available as a domain-specific knowledge base. Additionally, dealing with consistent and canonical URIs is anything but easy. Information extraction usually can only extract entities that it can look up in some kind of authority file or that it has been trained to find by machine learning techniques. Other entities can be identified but not resolved to a canonical name, especially in the case of unstructured text. Another problem is the scalability of current Semantic Web triplestores.
Unlike relational databases, they are still not well understood and cannot deal with massive amounts of data. However, the Semantic Web community has recognised this problem and is working on scalable solutions. One example is the OWLIM Semantic Repository (Ontotext AD) that scales to several billion triples. The W3C maintains a wiki that lists triplestores, sorted by their scalability.

17 The aforementioned SIMILE project is experimenting with different user interfaces that provide facetted browsing, timelines and maps. Another example would be PhiloSpace, a piece of software that has been developed within the COST framework. It can be used to establish semantic relations among entities of the philosophical domain.

18 Among other things, the concept of Web 2.0 means that users can interact with the web site and actively contribute to it. Mashups are one example of a web page that combines data and functionality from other resources to create a custom service.
Kodikologie und Paläographie im digitalen Zeitalter 2. Codicology and Palaeography in the Digital Age.
- Some Thoughts on XSL
- Not Only for Publishing...
- ... but Also for Data
- Where to Now?
- About Pineapplesoft Link
XML Expert Benoît Marchal gives a crash course in XSL, including its uses for web publishing and data management, as well as where XSL is headed in the future.
XSL is the XML Stylesheet Language, one of the numerous standards published by the W3C to support XML. I consider XSL one of the major XML standards, along with namespaces and SAX. I rate XSL as major because almost every XML application will need it.
I did a fair amount of XSL work last month: the new XML book, which I am currently writing, explores several advanced XSL techniques. I also gave a customized XSL training course for a local company. Finally, I still receive many comments following the "XML Programming for Teams" article I published last September.
Last but not least, there are almost daily XSL-related announcements: new XSL processors, new formatters and, at long last, XSLFO is close to being final. One of the funniest XSL announcements came from Don Box who wrote a SOAP endpoint in XSL!
Crash Course on XSL
The name "stylesheet" is very unfortunate. In most cases, a style sheet is a tool to format and publish documents, e.g. Cascading Style Sheets or Word style sheets (now called templates). Not quite so with XSL or, to be more correct, XSL is 10% formatting and 90% non-formatting!
There are two main aspects to XSL. Firstly, XSLT and XPath define a transformation language (the T in XSLT stands for transformation). In other words, it is a tool to take an XML document and transform it into another XML document (although HTML and text output are also supported).
Secondly, XSLFO (XSL Formatting Objects) is a language to describe mainly printed documents. This part of XSL is what you would expect in a style sheet: it is all about choosing fonts, boldness and page breaks. However, unlike XSLT and XPath, it is not a standard yet (but soon will be).
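To make the transformation side concrete, here is a minimal, hypothetical XSLT stylesheet that turns a small XML document into an HTML list. Both documents are held in Python strings so the standard-library parser can at least confirm they are well-formed; actually running the transform needs an XSLT processor such as lxml or xsltproc, since the Python standard library has none.

```python
import xml.etree.ElementTree as ET

source = "<names><name>Alice</name><name>Bob</name></names>"

# XSLT: one template matching the root, one matching each <name>.
stylesheet = """\
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/names">
    <ul><xsl:apply-templates select="name"/></ul>
  </xsl:template>
  <xsl:template match="name">
    <li><xsl:value-of select="."/></li>
  </xsl:template>
</xsl:stylesheet>"""

# The stdlib cannot run XSLT, but it can confirm both documents parse.
src_root = ET.fromstring(source)
xsl_root = ET.fromstring(stylesheet)
print(src_root.tag)  # names
print(xsl_root.tag)  # {http://www.w3.org/1999/XSL/Transform}stylesheet
```

Applied by a real processor, this stylesheet would produce `<ul><li>Alice</li><li>Bob</li></ul>`.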
In this post I am going to show you how to encrypt and decrypt data in ASP.NET using a symmetric algorithm. Before going into the implementation details, let's discuss some security-related terms.

What is Hashing?
Hashing is a one-way algorithm: once the data is hashed, you cannot recover it later.
Hashing is a cryptographic function that is used to provide a secure fingerprint of data. A common usage you may have encountered is checking a file you have downloaded against a published hash.
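Because hashing works the same way in every language, a short language-neutral sketch (Python here) can show both uses: fingerprinting in-memory data and verifying a downloaded file. The file name in the closing comment is hypothetical.

```python
import hashlib

# A one-way fingerprint: the same input always yields the same digest,
# and there is no way to get "hello" back out of the hex string.
digest = hashlib.sha256(b"hello").hexdigest()
print(digest)  # 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824

def file_sha256(path, chunk_size=8192):
    # Checksum a downloaded file in chunks so large files fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare file_sha256("setup.exe") against the hash the site publishes.
```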
What is Encryption?
If you want to secure your data and later retrieve the original data, you use encryption, because encryption is a two-way process. For encryption we require the following things:
Key: a key is a piece of information that is used as an input parameter in encryption. The output of encryption is determined by the key.
Types of Encryption
There are two main types of encryption and decryption algorithms: symmetric and asymmetric.
Symmetric:
In symmetric encryption we use the same key for encryption and decryption (as the diagram shows, the same key is used for encrypting and decrypting the message). DES, Triple DES, RC2 and AES (Rijndael) are examples of symmetric algorithms.
Asymmetric Encryption: asymmetric encryption uses a pair of keys, a public key for encryption and a private key for decryption.
Follow these steps to encrypt data symmetrically:
- Choose an algorithm.
- Create or retrieve a key.
- Generate the IV.
- Convert the clear text data to an array of bytes.
- Encrypt the clear text byte array.
- Store the encrypted data and the IV.
What are the steps required for Decryption?
Following are the steps required for decrypting the data.
- Choose the same algorithm that was used to encrypt the data.
- Retrieve the key that was used.
- Retrieve the IV that was used.
- Retrieve the encrypted data.
- Decrypt the data.
- Convert the decrypted data back to its original format.
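In steps 2 and 3, "create a key" and "generate the IV" simply mean producing fixed-size random byte strings; for Triple DES that is 24 bytes of key material and an 8-byte IV, matching the arrays used later in this post. A language-neutral sketch (Python's secrets module here):

```python
import secrets

KEY_SIZE = 24  # Triple DES uses three 8-byte DES keys
IV_SIZE = 8    # the DES block size

key = secrets.token_bytes(KEY_SIZE)  # cryptographically strong random bytes
iv = secrets.token_bytes(IV_SIZE)

# The key must be stored safely and reused for decryption; the IV is not
# secret, but the same IV must be supplied again when decrypting.
print(len(key), len(iv))  # 24 8
```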
I am going to choose the Triple DES algorithm for this demo (Step #1).
Let's first generate the IV (initialization vector) and Key (Step #2). For this, create a separate console application and paste the following code inside the Main method:
static void Main(string[] args)
{
    // Use TripleDES so the generated key is 24 bytes long, matching the
    // Triple DES key consumed by EncryptionManager below.
    TripleDESCryptoServiceProvider tripleDes = new TripleDESCryptoServiceProvider();
    tripleDes.GenerateIV();
    tripleDes.GenerateKey();
    Console.WriteLine("Key={0}", String.Join(",", tripleDes.Key));
    Console.WriteLine("IV={0}", String.Join(",", tripleDes.IV));
}
Run the application and note down the value of IV and Key.
Create a new class named EncryptionManager and add following code
public class EncryptionManager
{
    private byte[] Key = { 89, 83, 45, 236, 140, 228, 180, 79, 209, 164, 231, 131, 28, 7, 110, 73, 140, 235, 118, 52, 225, 46, 202, 118 };
    private byte[] IV = { 161, 200, 187, 207, 22, 92, 119, 227 };

    public string Encrypt(string inputString)
    {
        // Step #4: convert the clear text to an array of bytes
        byte[] buffer = Encoding.ASCII.GetBytes(inputString);
        TripleDESCryptoServiceProvider tripleDes = new TripleDESCryptoServiceProvider() { Key = Key, IV = IV };
        // Steps #5, #6: encrypt the byte array and return it as a Base64 string
        ICryptoTransform ITransform = tripleDes.CreateEncryptor();
        return Convert.ToBase64String(ITransform.TransformFinalBlock(buffer, 0, buffer.Length));
    }

    public string Decrypt(string inputString)
    {
        byte[] buffer = Convert.FromBase64String(inputString);
        TripleDESCryptoServiceProvider tripleDes = new TripleDESCryptoServiceProvider() { Key = Key, IV = IV };
        ICryptoTransform ITransform = tripleDes.CreateDecryptor();
        return Encoding.ASCII.GetString(ITransform.TransformFinalBlock(buffer, 0, buffer.Length));
    }
}
How to use
Create a new webpage and add following code inside it
<%@ Page Language="C#" AutoEventWireup="true" CodeFile="EncryptDecrypt.aspx.cs" Inherits="EncryptDecrypt" %>

<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title></title>
</head>
<body>
    <form id="form1" runat="server">
        <table border="0" cellpadding="0" cellspacing="0">
            <tr>
                <td><asp:TextBox ID="txtData" runat="server"></asp:TextBox></td>
                <td><asp:TextBox ID="txtEncrypt" runat="server"></asp:TextBox></td>
                <td><asp:TextBox ID="txtDecrypt" runat="server"></asp:TextBox></td>
            </tr>
            <tr>
                <td>
                    <asp:Button ID="btnEncrypt" runat="server" Text="Encrypt" OnClick="btnEncrypt_Click" />
                    <asp:Button ID="btnDecrypt" runat="server" Text="Decrypt" OnClick="btnDecrypt_Click" />
                </td>
            </tr>
        </table>
    </form>
</body>
</html>
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Security.Cryptography;
using System.Text;
using System.IO;

public partial class EncryptDecrypt : System.Web.UI.Page
{
    EncryptionManager manager = new EncryptionManager();

    protected void Page_Load(object sender, EventArgs e)
    {
    }

    protected void btnEncrypt_Click(object sender, EventArgs e)
    {
        txtEncrypt.Text = manager.Encrypt(txtData.Text);
    }

    protected void btnDecrypt_Click(object sender, EventArgs e)
    {
        txtDecrypt.Text = manager.Decrypt(txtEncrypt.Text);
    }
}
error will come
nice
for 1024 bit encryption
Nice code, but an error occurs when using it with a query string: in the query string the "+" sign is converted to a blank space, and that causes the error.
What your saying is completely true. I know that everybody must say the same thing, but I just think that you put it in a way that everyone can understand. I also love the images you put in here. They fit so well with what your trying to say. Im sure you'll reach so many people with what you've got to say.
can i recover deleted files
very nice.easy coding.thank you
It's So Help Gull...Thnak You Very Much. | http://aspdotnetcodebook.blogspot.com/2011/07/how-to-encryptdecrypt-data-in-aspnet.html | CC-MAIN-2016-50 | en | refinedweb |
java.lang.Object
    org.modeshape.common.util.LogContext
@Immutable public class LogContext
Provides a "mapped diagnostic context" (MDC) for use in capturing extra context information to be included in logs of multithreaded applications. Not all logging implementations support MDC, although a few do (including Log4J and Logback). Note that if the logging implementation does not support MDC, this information is ignored.
It can be difficult to understand what is going on within a multithreaded application. When multiple threads are working simultaneously, their log messages are mixed together. Thus, it's difficult to follow the log messages of a single thread. Log contexts provide a way to associate additional information with "the current context", and log messages can include that additional information in the messages.
Log contexts are managed for you, and so using them is very straightforward. Typically, log contexts are used within well-defined activities, and additional information is recorded in the context at the beginning of the activity and cleared at the end of the activity.
The following example shows how to set and clear this additional information:
LogContext.set("username", "jsmith");
LogContext.set("operation", "process");
... // do work here ...
LogContext.clear();

Note that the actual values would not be hardcoded but would be retrieved from other objects available at the time.
If the logging system doesn't support MDC, then the additional information provided via LogContext is ignored. However, if the logging system is able to use MDC and it is set up with patterns that reference the keys, then those log messages will contain the values for those keys.
public LogContext()
public static void set(String key, String value)
Put a context value (the val parameter) as identified with the key parameter into the current thread's context map. The key parameter cannot be null. The val parameter can be null only if the underlying implementation supports it.
This method delegates all work to the MDC of the underlying logging system.
key - the key
value - the value
Throws: IllegalArgumentException - in case the key parameter is null
public static String get(String key)
Get the context identified by the key parameter. The key parameter cannot be null.
This method delegates all work to the MDC of the underlying logging system.
key - the key
Returns: the context identified by the key parameter.
Throws: IllegalArgumentException - in case the key parameter is null
public static void remove(String key)
Remove the context identified by the key parameter using the underlying system's MDC implementation. The key parameter cannot be null. This method does nothing if there is no previous value associated with key.
key - the key
Throws: IllegalArgumentException - in case the key parameter is null
public static void clear() | https://docs.jboss.org/modeshape/1.0.0.Beta1/api/org/modeshape/common/util/LogContext.html | CC-MAIN-2021-31 | en | refinedweb |
Introduction:
Here I will explain what is polymorphism in c#.net with example and different types of polymorphism (compile time & runtime polymorphism) in c#.net with example.
Description:
In previous posts I explained OOPS examples in c#, difference b/w array and arraylist in c#, difference b/w constant and readonly in c#, difference b/w view and stored procedure in sql, difference between wcf and web application, interview questions in asp.net, sql server, c# and many articles relating to interview questions in c#, asp.net, sql server, javascript, jquery. Now I will explain what is polymorphism in c#.net with example and different types of polymorphism in c#.net with example.
Example
In the above class we have two methods with the same name but different input parameters; this is called method overloading, compile-time polymorphism, or early binding.
Run Time Polymorphism.
If we declare methods in the base class with the virtual keyword, then we can override those methods in the derived class using the override keyword.
Example
If we run above code we will get output like as shown below
Output
In this way we can implement polymorphism concept in our applications.
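As a cross-language aside, the same late binding can be sketched in Python, where every method behaves as if it were virtual; the Bclass and DClass names simply mirror the C# example, and the methods return strings so the dispatch is easy to see.

```python
class Bclass:
    def sample1(self):
        return "Base Class"

    def sample2(self):
        return "Base Class"


class DClass(Bclass):
    def sample1(self):  # overrides the base class method
        return "Derived Class"


obj_dc = DClass()
print(obj_dc.sample1())  # Derived Class

# A base-class reference to a derived object still dispatches to the
# override at run time, which is exactly the behaviour described above.
obj_bc = DClass()  # think: Bclass objBc = new DClass();
print(obj_bc.sample1())  # Derived Class
print(obj_bc.sample2())  # Base Class (sample2 is not overridden)
```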
40 comments :
But sir, shouldn't the output for the first call be the derived class and then the base class? If not, then why?
Your examples are very good, I liked them. Thank you for sharing your knowledge with us.
pls give us an simple example program for operator overloading..
Hi sir,
Please explain why compiler unable to detect in case of run time polymorphism.
Thanks,
Ajay
Nice Explination Suresh.
But the output the above program is wrong.
Could u tell me what is correct output?
No this suresh answer is perfectly right...both the time derived method will call
Please format your code correctly, indentations!
Your code is incorrect full stop (within the example your trying to give)
This is wrong, the true essence of run time method polymorphism is to override i'ts base class method.
So when instantiating the base class and calling the Sample1() method, the Console will write "Base Class" but what if you want to override that method in a Derived Class? We use polymorphism, which then the Console will write "Derived Class" if you have instantiated the derived class and called Sample1()
Output should be:
"Base Class"
"Derived Class"
Output is correct.
Thanks Suresh for such a nice article.......
Above output is correct. If you check Bclass instance is created by Dclass only. So it will call the same method again.
Guys Output is 100 % correct Override means override base class method in child class that's the reason output returning only child class method
Here out Put Is correct as per Above Code,
The Reason Is:
Focus When Here We create Object Of Base Class,
Bclass objBc = new DClass();
If we Create Base Class Object as We Create Normally Like,
Bclass objBc = new Bclass ();
Then the Output Will be,
Derived Class
BASE Class
Yes Correct
The information was very much helpful .. thanks ...
But why would you use it? If you are not using the first Sample1() in the base class, why have it.
Thanks buddy
The type you have explained are types of Ad-Hoc polymorphism....
polymorphism is actually is of 4 type
Yes this is correct and excellent example...
Simple explanation ..
output should be
Derived Class
Derived Class
output should be
Derived Class
Base Class
Thanks for sharing
thanks .....
public class clsOverRiding
{
//virtual
public void GetData()
{
Console.WriteLine("Base Class OverLoading...............!");
}
}
public class ChildOverRiding : clsOverRiding
{
//override
public void GetData()
{
Console.WriteLine("Child Class OverLoading...............!");
}
}
class Program
{
static void Main(string[] args)
{
ChildOverRiding cd= new ChildOverRiding();
cd.GetData();
clsOverRiding BS = new ChildOverRiding();
BS.GetData();
Console.ReadKey();
}
}
public class clsOverRiding
{
//virtual
public virtual void GetData()
{
Console.WriteLine("Base Class OverLoading...............!");
}
}
public class ChildOverRiding : clsOverRiding
{
//override
public override void GetData()
{
Console.WriteLine("Child Class OverLoading...............!");
}
}
class Program
{
static void Main(string[] args)
{
ChildOverRiding cd= new ChildOverRiding();
cd.GetData();
clsOverRiding BS = new ChildOverRiding();
BS.GetData();
Console.ReadKey();
}
}
its really simple to understand.
Nice Post.
Thank u sir...
I have run your program . This output is absolutely correct.
And it is advise to others not to comment as given output is wrong.
very superficial
Good example. Thanks.
The above output is 100 % correct . If U want to know the actual result u can run using visual studio. try that .
but u did not explained what is early and late binding..
thank you its really understandable...........
yes,its correct solution and easy to understand.thank you
derived
base
yes, output is correct
1st question.
-------------------------------------
DClass objDc = new DClass();
objDc.Sample1();
//Derived Class is called, we can call derived class by using this method.
What is the use of calling derived class's method using base class object.??
// calling the base class method
Bclass objBc = new DClass();
objBc.Sample1();
---------------------------------------
2nd Question
Without inheriting the we can not implement the polymorphism,, ryt?
can any one explain why polymorphism required.what is needs?
Note: Only a member of this blog may post a comment. | https://www.aspdotnet-suresh.com/2013/09/polymorphism-in-c-with-example-types-of-polymorphism.html | CC-MAIN-2021-31 | en | refinedweb |
Overview
Working with APIs is both fun and educational.
Many companies like Google, Reddit and Twitter release their APIs to the public
so that developers can develop products that are powered by their services.
Working with APIs teaches you the nuts and bolts beneath the hood.
In this post, we will work with the Weather Underground API.
Weather Underground (Wunderground)
We will build an app that will connect to Wunderground and retrieve weather forecasts.
Wunderground provides local & long range Weather Forecast, weather reports,
maps & tropical weather conditions for locations worldwide.
API
An API is a protocol intended to be used as an interface by software components
to communicate with each other. An API is a set of programming instructions and
standards for accessing web based software applications (such as above).
With APIs, applications talk to each other without any user knowledge or
intervention.
Getting Started
The first thing that we need to do when we want to use an API, is to see if the
company provides any API documentation. Since we want to write an application for
Wunderground, we will go to Wunderground's website.
At the bottom of the page, you should see the “Weather API for Developers”.
The API Documentation
Most of the API features require an API key, so let’s go ahead and sign up for
a key before we start to use the Weather API.
In the documentation we can also read that the API requests are made over HTTP
and that Data features return JSON or XML.
To read the full API documentation, see this link.
Before we get the key, we need to first create a free account.
The API Key
Next step is to sign up for the API key. Just fill in your name, email address,
project name and website and you should be ready to go.
Many services on the Internet (such as Twitter and Facebook) require that you
have an “API Key”.
An application programming interface key (API key) is a code passed in by
computer programs calling an API to identify the calling program, its developer,
or its user to the Web site.
API keys are used to track and control how the API is being used, for example
to prevent malicious use or abuse of the API.
The API key often acts as both a unique identifier and a secret token for
authentication, and will generally have a set of access rights on the API
associated with it.
Current Conditions in US City
Wunderground provides an example for us in their API documentation.
Current Conditions in US City
If you click on the “Show response” button or copy and paste that URL into your
browser, you should see something similar to this:
{ "response": { "version": "0.1" ,"termsofService": "" ,"features": { "conditions": 1 } } , "current_observation": { "image": { "url":"", "title":"Weather Underground", "link":"" }, "display_location": { "full":"San Francisco, CA", "city":"San Francisco", "state":"CA", "state_name":"California", "country":"US", "country_iso3166":"US", "zip":"94101", "magic":"1", "wmo":"99999", "latitude":"37.77500916", "longitude":"-122.41825867", "elevation":"47.00000000" }, .....
Current Conditions in Cedar Rapids
On the “Code Samples” page we can see the whole Python code to retrieve the
current temperature in Cedar Rapids.
Copy and paste this into your favorite editor and save it as anything you like.
Note that you have to replace “0def10027afaebb7” with your own API key.
import urllib2
import json

f = urllib2.urlopen('')
json_string = f.read()
parsed_json = json.loads(json_string)
location = parsed_json['location']['city']
temp_f = parsed_json['current_observation']['temp_f']
print "Current temperature in %s is: %s" % (location, temp_f)
f.close()
To run the program in your terminal:
python get_current_temp.py
Your program will return the current temperature in Cedar Rapids:
Current temperature in Cedar Rapids is: 68.9
What is next?
Now that we have looked at and tested the examples provided by Wunderground,
let’s create a program by ourselves.
The Weather Underground provides us with a whole bunch of “Data Features” that
we can use.
It is important that you read through the information there, to understand how
the different features can be accessed.
Standard Request URL Format
“Most API features can be accessed using the following format.
Note that several features can be combined into a single request.”

http://api.wunderground.com/api/0def10027afaebb7/features/settings/q/query.format

where:
0def10027afaebb7: Your API key
features: One or more of the following data features
settings (optional): Example: lang:FR/pws:0
query: The location for which you want weather information
format: json, or xml
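Assembled in code, the components above yield the request URL like this (a sketch of mine: the template is reconstructed from the component list, and the docs' sample key stands in for a real one):

```python
# Components of the standard request URL, per the list above.
api_key = "0def10027afaebb7"  # the docs' sample key; substitute your own
features = "forecast"         # one or more data features, separated by "/"
settings = "lang:EN"          # optional
query = "France/Paris"        # the location you want weather for
fmt = "json"                  # or "xml"

url = "http://api.wunderground.com/api/{0}/{1}/{2}/q/{3}.{4}".format(
    api_key, features, settings, query, fmt)
print(url)
```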
What I want to do is to retrieve the forecast for Paris.
The forecast feature returns a summary of the weather for the next 3 days.
This includes high and low temperatures, a string text forecast and the conditions.
Forecast for Paris
To retrieve the forecast for Paris, I will first have to find out the country
code for France, which I can find here:
Next step is to look for the “Feature: forecast” in the API documentation.
The string that we need can be found here:
By reading the documentation, we should be able to construct a URL.
Making the API call
We now have the URL that we need and we can start with our program.
Now it's time to make the API call to Weather Underground.
Note: instead of using the urllib2 module as we did in the examples above,
in this program we will use the “requests” module.
Making the API call is very easy with the “requests” module.
r = requests.get("http://api.wunderground.com/api/0def10027afaebb7/forecast/q/France/Paris.json")
Now, we have a Response object called “r”. We can get all the information we need
from this object.
Creating our Application
Open your editor of choice and, on the first line, import the requests module.
Note: the requests module comes with a built-in JSON decoder, which we can use
for the JSON data. That also means that we don't have to import the JSON
module (like we did in the previous example when we used the urllib2 module).
import requests
To begin extracting the information that we need, we first have to see
what keys the “r” object returns to us.
The code below should print the keys: [u’response’, u’forecast’]
import requests

r = requests.get("http://api.wunderground.com/api/0def10027afaebb7/forecast/q/France/Paris.json")
data = r.json()
print data.keys()
Getting the data that we want
Copy and paste the URL (from above) into a JSON editor.
I use an online one, but any JSON editor should do the job.
This will show an easier overview of all the data.
Note, the same information can be gained via the terminal, by typing:
r = requests.get("http://api.wunderground.com/api/0def10027afaebb7/forecast/q/France/Paris.json")
print r.text
After inspecting the output given to us, we can see that the data that we are
interested in, is in the “forecast” key. Back to our program, and print out the
data from that key.
import requests

r = requests.get("http://api.wunderground.com/api/0def10027afaebb7/forecast/q/France/Paris.json")
data = r.json()
print data['forecast']
The result is stored in the variable “data”.
To access our JSON data, we simply use bracket notation, like this:
data['key'].
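One caveat worth knowing: bracket notation raises a KeyError if the key is missing. When poking around an unfamiliar JSON response, dict.get() is a forgiving alternative (a general Python idiom, shown here on a small stub of the response):

```python
# A stub shaped like the response we are exploring.
data = {"forecast": {"simpleforecast": {"forecastday": []}}}

# .get() returns None (or a default you choose) instead of raising.
print(data.get("radar"))  # None -- this feature wasn't requested

# Chained lookups can fall back to empty dicts so a missing level is harmless.
days = data.get("forecast", {}).get("simpleforecast", {}).get("forecastday", [])
print(days)  # []
```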
Let’s navigate a bit more through the data, by adding ‘simpleforecast’
import requests

r = requests.get("http://api.wunderground.com/api/0def10027afaebb7/forecast/q/France/Paris.json")
data = r.json()
print data['forecast']['simpleforecast']
We are still getting a bit too much output, but hold on, we are almost there.
The last step in our program is to add ['forecastday'], and instead of printing
out each and every entry, we will use a for loop to iterate through the dictionary.
We can access anything we want like this, just look up what data you are
interested in.
In this program I wanted to get the forecast for Paris.
Let's see what the code looks like.
import requests

r = requests.get("http://api.wunderground.com/api/0def10027afaebb7/forecast/q/France/Paris.json")
data = r.json()
for day in data['forecast']['simpleforecast']['forecastday']:
    print day['date']['weekday'] + ":"
    print "Conditions: ", day['conditions']
    print "High: ", day['high']['celsius'] + "C", "Low: ", day['low']['celsius'] + "C", ' '
Run the program.
$ python get_temp_paris.py
Monday:
Conditions: Partly Cloudy
High: 23C Low: 10C

Tuesday:
Conditions: Partly Cloudy
High: 23C Low: 10C

Wednesday:
Conditions: Partly Cloudy
High: 24C Low: 14C

Thursday:
Conditions: Mostly Cloudy
High: 26C Low: 15C
The forecast feature is just one of many. I will leave it up to you to explore
the rest.
Once you understand an API and its output in JSON, you understand how most of
them work.
More Reading
A comprehensive list of Python APIs
Weather Underground
Recommended Python Training
For Python training, our top recommendation is DataCamp.
It has been quite some time that I announced that I'd be working
as a freelancer. Lots of stuff had to be done in that time, but finally
things are ready. I've founded my own little company and set up
a small website: Welcome to Betabug Sirius!
For once this isn't a "ZWiki-As-A-CMS" site (which is what I usually
do when I want a site to go up fast), but a plain, static html site,
done in vi. I know it won't scale when I'll want to expand it, but
I had fun coding it up in the old style.
So far I already have a few customers, working with Zope (mostly in
bugfixing and maintenance of existing / legacy sites) and with Pyramid (building
a brand new web application). There is also a project to build something
unique and "our own" on a longer horizon, involving technology and art.
There has been and still is a lot of bureaucracy, but so far the ride has
been smooth and sometimes even fun. Part of the strategy is to work
together with other companies to form flexible teams for each project.
That's something that has worked really well so far, giving me fun and
inspiration to work with others.
My MiniPlanet Zope product has been working steadily and stably for some years, when suddenly a user request came along. Would it be possible to get a feed of all the items in a miniplanet? With this update it became possible.
MiniPlanet is an old-style Zope product. Probably the next thing I will have to do with it is to "eggify" it. That would make it easier to use with current Zope versions (2.13).
Have an old Zope site around? Want to migrate it? Have to decide what
to do with it? Given that Zope appears to be dropping in hotness a
bit lately, these points come up more often now. There have been a few
questions about this on the #zope IRC channel lately, so I'm writing
down a few thoughts and FAQs on it.
While installing a Django application that uses Postgres on my MacBook
with Mac OS X 10.6 (Snow Leopard), I ran into some problems that were all
over the net. The symptom was that when I tried to pip install the
app's requirements, I got a traceback with something like this for
psycopg2:
ImportError:
dlopen([...]/psycopg2/_psycopg.so, 2): Symbol not found: _PQbackendPID
...
Expected in: flat namespace
Searching the web for this found lots of things, but they didn't seem to
work for me... because of some dead bodies in my Mac's basement.
Basically this errors happens due to a mismatch of 32 or 64 bitness of
Python, psycopg2, and Postgres. I had installed Postgres from the
official installer (9.1.1), Python 2.7 was standard from the system (which
is 64bit). First attempt to fix the mess, following a tipp from
Anna Vester, I tried to recompile psycopg2:
pip uninstall psycopg2
...
env ARCHFLAGS="-arch x86_64" pip install psycopg2
I'm not using sudo, because I'm in a virtualenv and also I didn't dual
install for i386 as suggested in the blog post. There were a lot of
posts out there telling me simply to reinstall psycopg2, which didn't
get me anywhere.
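In case it helps anyone debugging the same mess: a quick way to ask the Python interpreter itself which bitness it runs at (my own snippet — it says nothing about Postgres, whose binaries you would inspect with file instead):

```python
import struct
import platform

# Size of a C pointer: 8 bytes on a 64-bit interpreter, 4 bytes on 32-bit.
print(struct.calcsize("P") * 8, "bit")

# platform reports the build's architecture as well.
print(platform.architecture()[0], platform.machine())
```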
But... even with this it didn't work for me. I found some other hints
out there that got me looking in the right direction. The second
problem was that I had an old version of Postgres around, from when my
MacBook was running 10.4. The pg_config from that old version didn't
like the psycopg2, even in 64bit. Pointing my path to the new postgres
bin directory instead of the old one solved that too.

…stretches where the path was covered
with loose stones. All that was easily forgotten, because of the great
weather we had. Sunshine and a clear blue sky, cool air without any
wind, just enough haze in the air to make photography interesting.
We basically made a round trip, passing alongside the glen on top of the
edge at first, then on the upper end, descending down and returning next
to the stream (which didn't have any water) to our starting point. At
the point where we came down, there was kind of a wide valley housing
what were said to be wild horses. I don't know if they are really so
wild, but you can see them as little specks in the picture above. We were
having our lunch break in viewing distance of them.
Along with me I had my trusty old Firstflex, with some Fuji 400H (color
negative) and Kodak Tri-X (black+white). Those pictures obviously aren't
developed yet, the picture above was taken with a Lumix FT-3, which has
replaced my broken Pentax W60. The Lumix is good for image quality and
ruggedness, but the software is nothing to write home about. Basically
all you get is "push that button and hope for the best".
We were out and about for roughly seven hours.
After the hike everybody stormed a local taverna. Lots of food was eaten, lots of old and new
stories were told and the sore feet could relax a bit. Then everybody
drove home to Athens. Judging from myself, this was followed by some
good and deep sleep.
NAME
libudev - API for enumerating and introspecting local devices
SYNOPSIS
#include <libudev.h>
pkg-config --cflags --libs libudev
DESCRIPTION
libudev.h provides APIs to introspect and enumerate devices on the local system.
All functions require a libudev context to operate. This context can be created via udev_new(3). It is used to track library state and link objects together. No global state is used by libudev; everything is always linked to a udev context.
To introspect a local device on a system, a udev device object can be created via udev_device_new_from_syspath(3) and friends. The device object allows one to query current state, read and write attributes and lookup properties of the device in question.
To enumerate local devices on the system, an enumeration object can be created via udev_enumerate_new(3).
To monitor the local system for hotplugged or unplugged devices, a monitor can be created via udev_monitor_new_from_netlink(3).
Whenever libudev returns a list of objects, the udev_list_entry(3) API should be used to iterate, access and modify those lists.
Furthermore, libudev also exports legacy APIs that should not be used by new software (and as such are not documented as part of this manual). This includes the hardware database known as udev_hwdb (please use the new sd-hwdb(3) API instead) and the udev_queue object to query the udev daemon (which should not be used by new software at all).
Chain of Responsibility
Chain of Responsibility is a design pattern that decouples the sender of a request from its one or more recipients. The request is passed from recipient to recipient until it reaches the one competent to deal with it. The implementation should also consider the situation where no competent recipient is found.
Motivation
If we tightly coupled the class that sends a request to the class that processes it, we'd violate Low Coupling. We'd also lose the possibility of assigning multiple handlers to process the request. This is often used in practice when applying various filters that sit in a chain, one after another, each processing the request in turn.
The use isn't limited only to linear data structures, but passing the responsibility can also be done at the element-to-parent level in tree structures. Sometimes, the pattern is combined with the Composite pattern, that defines the most efficient way to create such tree structures. This tree can sometimes be referred to as the Tree of Responsibility. Some frameworks use Chain of Responsibility to implement their event models.
Pattern
It's probably not surprising that Handler, the recipient of the request, is defined here as an interface. The sender of the request communicates with this interface, and it's implemented by the individual request handlers of the chain or tree. These are connected to each other by references.
The handler typically provides a method for setting the next handler, a polymorphic method to handle the request, and finally a method to pass the request on to the next handler. Passing it on can happen even if the request was handled only partially or not at all, e.g. because the handler is busy or not competent to handle it.
Example
There are certainly many examples of Chain of Responsibility, as this is how many services work. For example, when ordering goods from abroad, your request is handled by a chain of post companies. Real practical use, however, is usually the implementation of some filters. These are chained one after another and block the request if it's not valid. Only the last link of such a chain would actually handle the request.
Let's create an example of a chain of such filters. We're gonna filter email requests which will first go through a spam filter, then by a user filter, and then potentially get to the handler that puts them in the inbox folder. Another handler could display a new email notification. Let's make an example of receiving an email that goes through several filters. We'll be able to add and change the filters freely thanks to the pattern.
Let's define the abstraction for the chain handlers:
public abstract class RequestHandler {
    private RequestHandler next;

    public RequestHandler setNext(RequestHandler next) {
        this.next = next;
        return next;
    }

    protected void passToNextHandler(Request request) {
        if (next != null)
            next.handleRequest(request);
    }

    // Takes the request so that concrete handlers can inspect it.
    public abstract void handleRequest(Request request);
}
Note that the
setNext() method returns the next handler in the
chain. This is because we can use Method Chaining to put the whole chain
together. E.g. this way:
spamFilter.setNext(userFilter).setNext(incommingMailHandler);
We can simply construct the whole chain on a single line. The concrete handlers can look as follows:
public class SpamFilter extends RequestHandler {
    public void handleRequest(Request request) {
        if (!request.email.text.contains("free pills")) { // Simple spam filter
            passToNextHandler(request);
        }
    }
}

public class UserFilter extends RequestHandler {
    public List<String> restrictedAddresses = new ArrayList<String>();

    public void handleRequest(Request request) {
        if (!restrictedAddresses.contains(request.email.address)) { // Simple user filter
            passToNextHandler(request);
        }
    }
}

public class IncommingMailHandler extends RequestHandler {
    private IncommingMail mail;

    public IncommingMailHandler(IncommingMail mail) {
        this.mail = mail;
    }

    public void handleRequest(Request request) {
        mail.add(request.email);
    }
}
The features of the individual filters are, of course, extremely simplified and illustrative only. We'd set the whole chain in motion by passing a request to its first link, SpamFilter.
Related Patterns
- Composite - Chain of responsibility isn't restricted to linear structures and can be also applied to trees, their implementation is the concern of the Composite pattern
This is a new series: Modules in Python.
Python Modules
There are different ways to import a module. The wealth of available modules is also why Python is said to come with “batteries included”.
Importing Modules
Let's see the different ways to import a module:
import sys              # access the module; after this you can use sys.name to refer to things defined in module sys
from sys import stdout  # access a name without qualifying it: this imports "stdout" from the module "sys", so we can refer to "stdout" in our program
from sys import *       # access all functions/classes in the sys module
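To see what each form actually binds into your namespace, here is a small sketch using the math module (my choice for illustration, since its members are easy to call):

```python
import math            # binds only the name "math"; members are reached as math.pi
from math import sqrt  # binds just "sqrt" directly into our namespace
from math import *     # binds every public name from math (handy, but it can shadow your own names)

print(math.pi)         # qualified access through the module name
print(sqrt(16.0))      # 4.0 -- unqualified access to the one imported name
print(floor(2.7))      # 2 -- "floor" arrived via the star import
```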
Catching Errors
I like to use the import statement to ensure that all modules can be loaded on the system. An ImportError basically means that you cannot use this module, and that you should look at the traceback to find out why.
import sys

try:
    import BeautifulSoup
except ImportError:
    print 'Import not installed'
    sys.exit()
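On Python 3.4 and later you can also check whether a module is available without importing it, via the standard library's importlib (a sketch in Python 3 syntax, so it departs from the Python 2 print statements above):

```python
import importlib.util
import sys

def module_available(name):
    """Return True if `name` could be imported, without actually importing it."""
    return importlib.util.find_spec(name) is not None

if not module_available("json"):
    sys.exit("json is not installed")
print("json is available")
```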
pthread_spin_init (3) - Linux Man Pages
pthread_spin_init: initialize or destroy a spin lock
NAME
pthread_spin_init, pthread_spin_destroy - initialize or destroy a spin lock
SYNOPSIS
#include <pthread.h>

int pthread_spin_init(pthread_spinlock_t *lock, int pshared);
int pthread_spin_destroy(pthread_spinlock_t *lock);
Compile and link with -pthread.
Feature Test Macro Requirements for glibc (see feature_test_macros(7)):
pthread_spin_init(),
pthread_spin_destroy():
- _POSIX_C_SOURCE >= 200112L
DESCRIPTION
General note: Most programs should use mutexes instead of spin locks. Spin locks are primarily useful in conjunction with real-time scheduling policies. See NOTES.
The pthread_spin_init() function allocates any resources required for the use of the spin lock referred to by lock and initializes the lock to be in the unlocked state. The pshared argument must have one of the following values:
- PTHREAD_PROCESS_PRIVATE
- The spin lock is to be operated on only by threads in the same process as the thread that calls pthread_spin_init(). (Attempting to share the spin lock between processes results in undefined behavior.)
- PTHREAD_PROCESS_SHARED
- The spin lock may be operated on by any thread in any process that has access to the memory containing the lock (i.e., the lock may be in a shared memory object that is shared among multiple processes).
Calling pthread_spin_init() on a spin lock that has already been initialized results in undefined behavior.
The pthread_spin_destroy() function destroys a previously initialized spin lock, freeing any resources that were allocated for that lock. Destroying a spin lock that has not previously been initialized, or destroying a spin lock while another thread holds the lock, results in undefined behavior.
Once a spin lock has been destroyed, performing any operation on the lock other than once more initializing it with pthread_spin_init() results in undefined behavior.
The result of performing operations such as pthread_spin_lock(3), pthread_spin_unlock(3), and pthread_spin_destroy(3) on copies of the object referred to by lock is undefined.
RETURN VALUE
On success, these functions return zero. On failure, they return an error number. In the event that pthread_spin_init() fails, the lock is not initialized.
ERRORS
pthread_spin_init() may fail with the following errors:
- EAGAIN
- The system has insufficient resources to initialize a new spin lock.
- ENOMEM
- Insufficient memory to initialize the spin lock.
VERSIONS
These functions first appeared in glibc in version 2.2.
CONFORMING TO
POSIX.1-2001.
Support for process-shared spin locks is a POSIX option. The option is supported in the glibc implementation.
NOTES
Spin locks should be employed in conjunction with real-time scheduling policies (SCHED_FIFO, or possibly SCHED_RR). Use of spin locks with nondeterministic scheduling policies such as SCHED_OTHER probably indicates a design mistake. The problem is that if a thread operating under such a policy is scheduled off the CPU while it holds a spin lock, then other threads will waste time spinning on the lock until the lock holder is once more rescheduled and releases the lock.
If threads create a deadlock situation while employing spin locks, those threads will spin forever consuming CPU time.
User-space spin locks are not applicable as a general locking solution. They are, by definition, prone to priority inversion and unbounded spin times. A programmer using spin locks must be exceptionally careful not only in the code, but also in terms of system configuration, thread placement, and priority assignment.
COLOPHON
This page is part of release 5.05 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page can be found at https://www.kernel.org/doc/man-pages/.
Using Camera within the scene?
- None_None233
@ omz, thanx for the very nice update but as i checked the photo module, i realized that accessing camera may not be available when a scene is running (for me it equals to when my app is running). is there any trick to use camera in a running app? like just to take a photo when a scene is running, and then back to scene and use photos.get_image to load it . Or pausing the scene and then resuming it again? for me, i guess, accessing camera within the apps is an important issue.
I'm having the same issue. If there is any way to dismiss the scene with a function like stop(), that would be very helpful. Thanks!
Before defining the main scene class (the class containing the draw(), setup(), and other functions), call the capture_image() method. Store the result as a variable. Within the setup() function, make the image variable a global. Convert the image to RGBA using the .convert() method, save the result of load_pil_image(image) as self.img, and use self.img to access the image during the scene. Here is an example:
from scene import *
import Image
import photos

global imgs
imgs = photos.capture_image()

class MyScene (Scene):
    def setup(self):
        global imgs
        imgs.convert('RGBA')
        self.img = load_pil_image(imgs)

    def draw(self):
        image(self.img, 0, 0, self.size.w, self.size.h)

run(MyScene(), frame_interval=1)
@Coder 123 you can format your code using the tag {pre}{pre}, replace curly braces with angled brackets <>
- Irrelevant44
Thanks @Coder123, but is there a way to get multiple images i.e. take a photo with capture_image(), load it into a scene and do some processing on it, then take another photo and repeat, kind of what would be required for a camera application or similar.
- achorrath233
I tried breaking out of a scene by raising an exception, but I was unable to catch it. Instead of being handled by the try/except clause I put around run(), it took me back to the interpreter and highlighted the line where the exception was raised.
- Irrelevant44
@Achorrath I tried this as well as other evil things (deleting references, unloading vital modules, etc) It seems that the scene is tied to the execution of the program, and when it gets created it gets loaded permanently into the main execution. Given that, can I put in a feature request for access to the camera while scenes are running. Even without the automatic camera interface would be reasonable; I don't know how iOS allows access to the camera but this approach seems unnecessarily clunky...
Resurrecting this old thread... I was just trying to open a capture_image interface from another simple UI view. The capture interface was called just fine with a button in a sheet view on the iPad, but when I tried to call the capture interface from a button in a fullscreen view, the whole program (script and Pythonista itself) froze. Has anybody else seen this?
Perhaps if you have a simple example that reproduces the code, we can find out the problem.
Often, this type of problem is caused when the ui thread is trying to do some animation, etc while the main thread is trying to show the dialog, camera, etc. The standard fix for such problems is to wrap the code in another function, and call ui.delay on it using a short delay, long enough to be sure the ui is finished doing whatever it was doing, maybe 0.5 second or something.
Here is a simple ui button which captures an image, then shows it in an ImageView. This does not crash on my ipad2. When I added v.close() as the first line inside of the action, it does crash (in this case, it drops back to the home screen). ui.delay-ing the subsequent code by 0.5 sec allowed it to work again. I was also able to get the camera "stuck" if I kicked off a ui.animate before showing the camera.
import ui, photos, io, console

v = ui.View()
v.bg_color = 'white'
I = ui.ImageView(frame=(0, 150, 640, 480))
I.content_mode = ui.CONTENT_SCALE_ASPECT_FIT
v.add_subview(I)
b = ui.Button(frame=(50, 50, 100, 100))
b.bg_color = 'red'
b.title = 'select image'
v.add_subview(b)

@ui.in_background
def myaction(sender):
    img = photos.capture_image()
    console.show_activity()
    if not img:
        return
    with io.BytesIO() as bIO:
        img.save(bIO, 'PNG')
        imgOut = ui.Image.from_data(bIO.getvalue())
    I.image = imgOut
    console.hide_activity()

b.action = myaction
v.present('fullscreen')
- wradcliffe
This example works fine on my ipad as well. It does bring up a question though about whether it is possible to write an app that can draw over the top of the capture_image display. Suppose I want to write an app that captures a selfie and needs the user to align their face with a template. I the example you provide, I would have to get the user to go back and forth between the capture interface and the app interface which would be klunky.
You cannot. But you could always crop and scale the image, or for instance have the user drag a box over the face.
@JonB, thanks for the example. It does work fine on my iPad as well. The syntax is essentially what I have in my code, with a ui.Button activating a photo capture in the background. I haven't had a ton of time to look at the possible differences.
I'll say this, though. The one thing that made things work in my fullscreen view was having hide_title_bar=True. This was independent of having the console show/hide activity. If this points to anything in particular, let me know. If/when I have time to dig a bit more into it, I'll add a new post.
Currently there is no natural way to execute a java process in the background. Here is a chat I had with merscwog in the IRC channel about this topic:
<edovale> any of you guys know how can I get the javaexec task not to wait for process termination before returning?
<merscwog> Not sure that it can be done right now. Adam would know for sure, but he's not on at the moment. Basically you would need something like the setDaemon() option that the JettyRun tasks have.
<merscwog> Presumably you want to join with the forked process sometime in a later task, or you just want to leave the forked process running until explicitly stopped.
<edovale> I will need to stop the process after the tests are done.
<edovale> Do you think a better approach could be to use the ant exec task?
<merscwog> I've used the built in groovy string execute() methods, or the standard Java ProcessBuilder and calling start() on that and handling the returned Process object later.
<edovale> Thanks, I'll look into that.
<merscwog> You also might consider filing a JIRA about enhancing the Exec task and JavaExec tasks to allow for running something in the background, and setting a Process object as part of the task that can be manipulated by a further task (to allow waitFor() and destroy())
<edovale> Are you then certain that it can not be done now?
<merscwog> No. I am not certain, but it delegates to a org.gradle.process.internal.DefaultJavaExecAction which only has one execute() method, and it has a waitForFinish() call directly before it checks to see if isIgnoreExitValue is set.
<merscwog> Hmm, I guess you could in theory simply override the javaExecHandleBuilder JavaExecAction object and do what you'd want.
<edovale> ok.. that sounds like certainty to me.. I could definetely overwrite the javaExecHandlerBuilder but the IMHO the task should expose this functionality in a more natural way. It doesn't seem to me this is an odd requirement.
<edovale> I'll file the jira issue..
Should be the equivalent of spawn=true in Ant Java task.
Related forum issue:
Are there any updates to this ticket? Can we include in 1.1 release?
@Pablo, it probably won't make it into the 1.1 release. We'll try and see if we can squeeze it in.
There are some pretty simple work-arounds:
def process = ['java', '-cp', 'some-path', 'SomeMainClass', 'arg'].execute()
process.in.close()
process.out.close()
process.err.close()
Thank you Adam we'll try those out. Please keep us posted.
Any chance to get this fixed?
It would be great, e.g., to launch GWT superdev mode within the IDE and still have its Gradle integration available for other useful things (like dependency resolution). Also, switching the Java execution code to Ant calls is not so painless because of issues related to different classpath definitions.
Actually, what I also need is the ability to fire off a process which spawns threads - and when the main thread returns, I want the task to be finished. However, the threads run on in the background.
This is obviously going to be used for integration tests - whereby I start up e.g. a "self-contained Jetty" in a main method, and then when the application is up, the main method exits, and I can go for the tests.
Using ant.java, I can get this "when the main method exits, the task is done" feature (by having both fork and spawn = false), but I then ran into some other issues (e.g. it runs within a restrictive security manager that doesn't allow MBeans to be registered).
Also using ant.java, I can spawn it totally, but then I lose the standard output, and I would need to implement some shaky polling strategy to check if the thing has come up. (Full fork/spawn/disown is also very good in itself - to e.g. do a full build on the command line and then spawn the result - and exit Gradle. But when I need the spawned process to exist just for the duration of the rest of the build, to interact with it from the rest of the build script for testing, the full fork is not suitable.)

This issue is now tracked on GitHub: